Running the first Ansible Playbooks on the Windows-based VMs directly from Ansible
In this part of the Citrix Automation series, we run the first Ansible Playbooks on the Windows-based VMs directly using Ansible:
- Joining the Virtual Machines to an Active Directory Domain
- Deploying and Configuring the Citrix Cloud Connector software on the Domain-joined VMs
Why should we use an Ansible Playbook for adding a VM to a Domain?
Not all Terraform providers offer a way to automatically join a newly created VM to a Domain, so we decided to create a generic approach using an Ansible Playbook.
Here is an example of the Ansible Playbook we use:
/etc/ansible/Join-VMToDomain.microsoft.ad.ansible.yml
---
- name: join host to domain with automatic reboot
  hosts: cloudconnectors-ip
  tasks:
    - name: join host to domain with automatic reboot
      microsoft.ad.membership:
        dns_domain_name: az.the-austrian-citrix-guy.at
        hostname: TMM-GK-W2K22-N1
        domain_admin_user: tmm-azadmin@az.the-austrian-citrix-guy.at
        domain_admin_password: "xXxXxXxXxXxXxXxXxXxXx"
        domain_ou_path: "OU=_INFRA,OU=_COMPUTERS,OU=TMM-AZ,DC=az,DC=the-austrian-citrix-guy,DC=at"
        state: domain
        reboot: true
IMPORTANT: You must pay close attention to the indentation that YAML syntax requires!
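The hosts: cloudconnectors-ip line refers to a group in the Ansible inventory. As a minimal sketch, the /etc/ansible/hosts file for the two Cloud Connector VMs could look like this (the IP addresses, user, and port 5986 match the log output below; the WinRM transport and certificate-validation settings are assumptions for a lab setup):
[cloudconnectors-ip]
172.31.4.17
172.31.4.18

[cloudconnectors-ip:vars]
ansible_user=tmm-azadmin
ansible_password=xXxXxXxXxXxXxXxXxXxXx
ansible_connection=winrm
ansible_port=5986
# the transport and certificate handling below are assumptions, suitable for a lab only
ansible_winrm_transport=ntlm
ansible_winrm_server_cert_validation=ignore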
Running this playbook shows successful completion (the -vvvv parameter enables very verbose output, perfect for debugging!):
tmm-azadmin@TMM-GK-UBUNTU-AUTOMATION:/etc/ansible$ ansible-playbook Join-VMToDomain.microsoft.ad.ansible.yml -vvvv
ansible-playbook [core 2.17.6]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tmm-azadmin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/tmm-azadmin/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-playbook
python version = 3.12.3 (main, Sep 11 2024, 14:17:37) [GCC 13.2.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
Using /etc/ansible/ansible.cfg as config file
PLAYBOOK: Join-VMToDomain.microsoft.ad.ansible.yml ***************************************************
Positional arguments: Join-VMToDomain.microsoft.ad.ansible.yml
verbosity: 4
connection: ssh
become_method: sudo
tags: ('all',)
inventory: ('/etc/ansible/hosts',)
forks: 5
1 plays in Join-VMToDomain.microsoft.ad.ansible.yml
PLAY [join host to domain with automatic reboot] *****************************************************
TASK [Gathering Facts] *******************************************************************************
task path: /etc/ansible/Join-VMToDomain.microsoft.ad.ansible.yml:2
redirecting (type: modules) ansible.builtin.setup to ansible.windows.setup
Loading collection ansible.windows from /usr/lib/python3/dist-packages/ansible_collections/ansible/windows
redirecting (type: modules) ansible.builtin.setup to ansible.windows.setup
Loading collection ansible.windows from /usr/lib/python3/dist-packages/ansible_collections/ansible/windows
Using module file /usr/lib/python3/dist-packages/ansible_collections/ansible/windows/plugins/modules/setup.ps1
Pipelining is enabled.
<172.31.4.17> ESTABLISH WINRM CONNECTION FOR USER: tmm-azadmin on PORT 5986 TO 172.31.4.17
Using module file /usr/lib/python3/dist-packages/ansible_collections/ansible/windows/plugins/modules/setup.ps1
Pipelining is enabled.
<172.31.4.18> ESTABLISH WINRM CONNECTION FOR USER: tmm-azadmin on PORT 5986 TO 172.31.4.18
EXEC (via pipeline wrapper)
fatal: [172.31.4.18]: UNREACHABLE! => {
    "changed": false,
    "msg": "ssl: HTTPSConnectionPool(host='172.31.4.18', port=5986): Max retries exceeded with url: /wsman (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x722a7a5d2de0>: Failed to establish a new connection: [Errno 113] No route to host'))",
    "unreachable": true
}
ok: [172.31.4.17]
...
microsoft.ad.membership: last boot time: 133758162116740500
microsoft.ad.membership running post reboot test command
microsoft.ad.membership: attempting post-reboot test command
EXEC (via pipeline wrapper)
microsoft.ad.membership: system successfully rebooted
changed: [172.31.4.17] => {
    "changed": true,
    "invocation": {
        "module_args": {
            "dns_domain_name": "az.the-austrian-citrix-guy.at",
            "domain_admin_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "domain_admin_user": "tmm-azadmin@az.the-austrian-citrix-guy.at",
            "domain_ou_path": "OU=_INFRA,OU=_COMPUTERS,OU=TMM-AZ,DC=az,DC=the-austrian-citrix-guy,DC=at",
            "domain_server": null,
            "hostname": "TMM-GK-W2K22-N1",
            "offline_join_blob": null,
            "reboot": true,
            "reboot_timeout": 600,
            "state": "domain",
            "workgroup_name": null
        }
    },
    "reboot_required": false
}
PLAY RECAP *******************************************************************************************
172.31.4.17 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
172.31.4.18 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
tmm-azadmin@TMM-GK-UBUNTU-AUTOMATION:/etc/ansible$
The errors for 172.31.4.18 occur because this VM was not powered on while Ansible ran the playbook.
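To rule out such connectivity problems before a run, a quick ad-hoc check of the whole inventory group helps; this sketch uses the ansible.windows.win_ping module:
ansible cloudconnectors-ip -m ansible.windows.win_ping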
Ansible has successfully joined the VM to the Active Directory Domain.
Now we can install the Cloud Connector software.
Installing the Citrix Cloud Connector software on the VMs
The Cloud Connector is a required component for Citrix DaaS.
It handles all communication between the Resource Location and the Control Plane in Citrix Cloud.
The installer is an EXE file that needs a JSON-based configuration file.
Terraform downloads the EXE from an Azure Storage Location after creating the VM.
The JSON file depends on various configuration steps done during the deployment.
Terraform creates it dynamically and uploads it after VM creation into the folder the Playbook expects.
Example of the JSON-based configuration file:
{
    "acceptTermsOfService": true,
    "clientId": "f4e2xxxx-xxxx-xxxx-xxxx-xxxxxxxx5c15",
    "clientSecret": "VJCxXxXxXxXxXxXxXxXxXxgA==",
    "customerName": "uzyxXxXxXxXj",
    "resourceLocationId": "22edxXxX-xXxX-xXxX-xXxX-xXxXxXxXddaf"
}
This file is saved in the same directory as cwcconnector.exe.
Running the Ansible Playbook installs the Cloud Connector and registers it in the required Resource Location.
Here is an example of the Ansible Playbook we use:
/etc/ansible/Install-CWC-System.ansible.yml
---
- name: install citrix cloud connector from cloud repository
  hosts: cloudconnectors-ip
  tasks:
    - name: install citrix cloud connector from azure cloud repository
      ansible.windows.win_package:
        path: C:\_SW\cwcconnector.exe
        product_id: CWCConnector.exe
        arguments:
          - /q
          - /ParametersFilePath:C:\_SW\cwc.json
        state: present
      become: true
      become_method: runas
      become_user: SYSTEM
IMPORTANT: You must pay close attention to the indentation that YAML syntax requires!
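A quick way to catch indentation or syntax mistakes before the actual run is Ansible's built-in syntax check, which parses the playbook without executing any task:
ansible-playbook Install-CWC-System.ansible.yml --syntax-check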
Running this playbook shows successful completion:
tmm-azadmin@TMM-GK-UBUNTU-AUTOMATION:/etc/ansible$ ansible-playbook Install-CWC-System.ansible.yml -v
Using /etc/ansible/ansible.cfg as config file
PLAY [install citrix cloud connector from cloud repository] ******************************************
TASK [Gathering Facts] *******************************************************************************
fatal: [172.31.4.18]: UNREACHABLE! => {"changed": false, "msg": "ssl: HTTPSConnectionPool(host='172.31.4.18', port=5986): Max retries exceeded with url: /wsman (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x72a3052db830>: Failed to establish a new connection: [Errno 113] No route to host'))", "unreachable": true}
ok: [172.31.4.17]
TASK [install citrix cloud connector from azure cloud repository] ************************************
changed: [172.31.4.17] => {"changed": true, "rc": 0, "reboot_required": false}
PLAY RECAP *******************************************************************************************
172.31.4.17 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
172.31.4.18 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
tmm-azadmin@TMM-GK-UBUNTU-AUTOMATION:/etc/ansible$
The errors for 172.31.4.18 occur because this VM was not powered on while Ansible ran the playbook.
Ansible has successfully installed and configured the Cloud Connector and registered it in the Resource Location.
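As an optional final check, a short playbook can confirm that the Connector services came up after the installation. This is only a sketch: the service name CitrixWorkspaceCloudAgentSystem is an assumption and may differ in your environment, so verify it on an installed Connector first.
---
- name: verify citrix cloud connector services
  hosts: cloudconnectors-ip
  tasks:
    - name: query the cloud connector agent service   # service name is an assumption
      ansible.windows.win_service_info:
        name: CitrixWorkspaceCloudAgentSystem
      register: cwc_service

    - name: show the state of the queried service
      ansible.builtin.debug:
        msg: "{{ cwc_service.services | map(attribute='state') | list }}"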
In the next part, we will run the Ansible Playbooks directly from Terraform.