
How To Install Ansible AWX on Ubuntu 22.04|20.04|18.04

Welcome to today’s guide on how to install Ansible AWX on Ubuntu 22.04|20.04|18.04 with Nginx Reverse Proxy and optional Let’s Encrypt SSL Certificate. Ansible AWX is an open source tool which provides a web-based user interface, REST API, and task engine for easy and collaborative management of Ansible Playbooks and Inventories.

AWX allows you to centrally manage Ansible playbooks, inventories, secrets, and scheduled jobs from a web interface. Installing AWX on an Ubuntu Linux system is straightforward; use the steps shared below to install and configure Ansible AWX on an Ubuntu Linux server.

Starting with AWX version 18.0, the recommended installation method is via the AWX Operator. Since the operator requires a Kubernetes cluster, we will perform a single-node Kubernetes installation on Ubuntu Linux using K3s.

Setup minimum requirements

  • Ubuntu 22.04|20.04|18.04 LTS Server
  • At least 16 GB of RAM (more is better)
  • 4 vCPUs (add more if available)
  • 20 GB of free disk storage
  • root or a user with sudo privileges for SSH access
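
Before proceeding, you can quickly confirm the server meets these requirements with a few standard commands (shown here as a quick sanity check):

free -h        # available RAM
nproc          # number of vCPUs
df -h /        # free disk space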

Step 1: Update Ubuntu system

Update and upgrade your system

sudo apt update && sudo apt -y upgrade
[ -f /var/run/reboot-required ] && sudo reboot -f

Step 2: Install Single Node k3s Kubernetes

We will deploy a single-node Kubernetes cluster using the lightweight K3s tool. K3s is a certified Kubernetes distribution designed for production workloads in unattended, resource-constrained environments. A nice thing about K3s is that you can add more worker nodes at a later stage if the need arises.

K3s provides an installation script that is a convenient way to install it as a service on systemd or openrc based systems.

Let’s run the following command to install K3s on our Ubuntu system:

curl -sfL https://get.k3s.io | sudo bash -

Installation process output:

[INFO]  Finding release for channel stable
[INFO]  Using v1.27.4+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.27.4+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.27.4+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Skipping installation of SELinux RPM
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s

The kubeconfig file stored at /etc/rancher/k3s/k3s.yaml is used to configure access to the Kubernetes cluster. Relax its permissions and export KUBECONFIG so kubectl can use it:

sudo chmod 644 /etc/rancher/k3s/k3s.yaml
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get pods --all-namespaces

You can also copy /etc/rancher/k3s/k3s.yaml to a machine located outside the cluster and use it as ~/.kube/config.

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
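
Note that the generated kubeconfig points kubectl at https://127.0.0.1:6443, so when copying it to an external workstation you also need to replace the loopback address with the K3s node's IP. A minimal sketch, assuming a hypothetical node IP of 192.168.1.10 and an ubuntu user on the node:

# Run on the external workstation
scp ubuntu@192.168.1.10:/etc/rancher/k3s/k3s.yaml ~/.kube/config
# Point kubectl at the node instead of the loopback address
sed -i 's/127.0.0.1/192.168.1.10/' ~/.kube/config
kubectl get nodes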

Validate K3s installation:

The next step is to validate our K3s installation using the kubectl command, which was installed and configured by the installer script.

$ kubectl get nodes
NAME    STATUS   ROLES                  AGE   VERSION
jammy   Ready    control-plane,master   81s   v1.27.4+k3s1

You can also confirm Kubernetes version deployed using the following command:

$ kubectl version --short
Client Version: v1.27.4+k3s1
Kustomize Version: v5.0.1
Server Version: v1.27.4+k3s1

The K3s service will be configured to automatically restart after node reboots or if the process crashes or is killed.
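
You can confirm the service status and follow its logs with standard systemd commands:

sudo systemctl status k3s
sudo journalctl -u k3s -f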

Step 3: Deploy AWX Operator on Kubernetes

This Kubernetes Operator has to be deployed in your Kubernetes cluster, which in our case is powered by K3s. The operator we’ll deploy can manage one or more AWX instances in any namespace.

Install git and make tools:

sudo apt update
sudo apt install git build-essential -y

Clone operator deployment code:

$ git clone https://github.com/ansible/awx-operator.git
Cloning into 'awx-operator'...
remote: Enumerating objects: 8773, done.
remote: Counting objects: 100% (1388/1388), done.
remote: Compressing objects: 100% (201/201), done.
remote: Total 8773 (delta 1223), reused 1275 (delta 1184), pack-reused 7385
Receiving objects: 100% (8773/8773), 2.35 MiB | 17.69 MiB/s, done.
Resolving deltas: 100% (5033/5033), done.

Next we create a namespace where the operator will be deployed. We'll name it awx:

export NAMESPACE=awx
kubectl create ns ${NAMESPACE}

Set the current context's namespace to the value of the NAMESPACE variable:

# kubectl config set-context --current --namespace=$NAMESPACE 
Context "default" modified.

Switch to awx-operator directory:

cd awx-operator/

Save the latest version from the AWX Operator releases as the RELEASE_TAG variable, then check out that tag using git.

sudo apt install curl jq -y
RELEASE_TAG=`curl -s https://api.github.com/repos/ansible/awx-operator/releases/latest | grep tag_name | cut -d '"' -f 4`
echo $RELEASE_TAG
git checkout $RELEASE_TAG

Deploy AWX Operator into your cluster:

export NAMESPACE=awx
make deploy

Command output:

namespace/awx configured
customresourcedefinition.apiextensions.k8s.io/awxbackups.awx.ansible.com created
customresourcedefinition.apiextensions.k8s.io/awxrestores.awx.ansible.com created
customresourcedefinition.apiextensions.k8s.io/awxs.awx.ansible.com created
serviceaccount/awx-operator-controller-manager created
role.rbac.authorization.k8s.io/awx-operator-awx-manager-role created
role.rbac.authorization.k8s.io/awx-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/awx-operator-metrics-reader created
clusterrole.rbac.authorization.k8s.io/awx-operator-proxy-role created
rolebinding.rbac.authorization.k8s.io/awx-operator-awx-manager-rolebinding created
rolebinding.rbac.authorization.k8s.io/awx-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/awx-operator-proxy-rolebinding created
configmap/awx-operator-awx-manager-config created
service/awx-operator-controller-manager-metrics-service created
deployment.apps/awx-operator-controller-manager created

Wait a few minutes and awx-operator should be running:

# kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
awx-operator-controller-manager-68d787cfbd-z75n4   2/2     Running   0          40s
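
Instead of polling manually, you can also wait for the operator deployment to report ready (a standard kubectl wait invocation against the deployment name shown above):

kubectl -n awx wait deployment/awx-operator-controller-manager --for=condition=Available --timeout=300s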

How To Uninstall AWX Operator (optional: only run this if you want to remove the operator and all associated resources)

You can always remove the operator and all associated CRDs by running the command below:

# export NAMESPACE=awx
# make undeploy
/root/awx-operator/bin/kustomize build config/default | kubectl delete -f -
namespace "awx" deleted
customresourcedefinition.apiextensions.k8s.io "awxbackups.awx.ansible.com" deleted
customresourcedefinition.apiextensions.k8s.io "awxrestores.awx.ansible.com" deleted
customresourcedefinition.apiextensions.k8s.io "awxs.awx.ansible.com" deleted
serviceaccount "awx-operator-controller-manager" deleted
role.rbac.authorization.k8s.io "awx-operator-leader-election-role" deleted
role.rbac.authorization.k8s.io "awx-operator-manager-role" deleted
clusterrole.rbac.authorization.k8s.io "awx-operator-metrics-reader" deleted
clusterrole.rbac.authorization.k8s.io "awx-operator-proxy-role" deleted
rolebinding.rbac.authorization.k8s.io "awx-operator-leader-election-rolebinding" deleted
rolebinding.rbac.authorization.k8s.io "awx-operator-manager-rolebinding" deleted
clusterrolebinding.rbac.authorization.k8s.io "awx-operator-proxy-rolebinding" deleted
configmap "awx-operator-manager-config" deleted
service "awx-operator-controller-manager-metrics-service" deleted
deployment.apps "awx-operator-controller-manager" deleted

Step 4: Install Ansible AWX on Ubuntu using Operator

Create a static data PVC (see the AWX data persistence documentation for reference):

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-data-pvc
  namespace: awx
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 5Gi
EOF

The PVC won't be bound until the pod that uses it is created.

# kubectl get pvc -n awx
NAME                     STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
static-data-pvc          Pending                                      local-path     43s
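
The claim's events can be inspected at any time if you want more detail on why it is still Pending; a quick check:

kubectl describe pvc static-data-pvc -n awx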

Let's now create an AWX deployment file with basic information about what will be installed:

vim awx-deploy.yml

Paste below contents into the file:

---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  service_type: nodeport
  projects_persistence: true
  projects_storage_access_mode: ReadWriteOnce
  web_extra_volume_mounts: |
    - name: static-data
      mountPath: /var/lib/projects
  extra_volumes: |
    - name: static-data
      persistentVolumeClaim:
        claimName: static-data-pvc

We have defined the resource name as awx and the service type as NodePort, which lets us access AWX from the node IP address and the assigned port. We also added an extra PV mount on the web server pod.

Apply configuration manifest file:

$ kubectl apply -f awx-deploy.yml
awx.awx.ansible.com/awx created

Wait a few minutes, then check that the AWX instance has been deployed:

watch kubectl get pods -l "app.kubernetes.io/managed-by=awx-operator"

Once running, you can confirm with the following command:

$ kubectl get pods -l "app.kubernetes.io/managed-by=awx-operator"
NAME                        READY   STATUS    RESTARTS   AGE
awx-postgres-13-0           1/1     Running   0          7m58s
awx-task-6874d8656b-v75s4   4/4     Running   0          7m10s
awx-web-6c797b8657-sllzt    3/3     Running   0          5m52s
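
You can also inspect the AWX custom resource itself; its status and events show how far the operator has got with reconciliation (assuming the resource name awx from the manifest above):

kubectl get awx -n awx
kubectl describe awx awx -n awx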

You can track the installation process in the operator pod logs:

kubectl logs -f deployments/awx-operator-controller-manager -c awx-manager

Fixing Postgres Pod in CrashLoopBackOff state (skip this section if you don't see this error)

Check status of PostgreSQL Pod:

$ kubectl get pods -l "app.kubernetes.io/managed-by=awx-operator"
NAME                   READY   STATUS             RESTARTS   AGE
awx-postgres-0         0/1     CrashLoopBackOff   4          2m38s
awx-66c64f8d67-qvfk9   3/4     ImagePullBackOff   0          2m27s

Check logs of the Pod:

$ kubectl logs awx-postgres-xxx
mkdir: cannot create directory ‘/var/lib/postgresql/data’: Permission denied

This means the Postgres pod cannot write to the persistent volume directory inside /var/lib/rancher/k3s/storage/:

# ls -lh /var/lib/rancher/k3s/storage/ | grep awx-postgres-0
total 0
drwx------ 3 root root 4.0K Aug 24 09:18 pvc-b2acb2d0-6d2f-457b-a40c-390cff5ec6d2_default_postgres-awx-postgres-0

Try setting the directory mode to 777, then delete the pods so they are recreated:

# chmod -R 777  /var/lib/rancher/k3s/storage/*
# kubectl delete pods -l "app.kubernetes.io/managed-by=awx-operator"
pod "awx-66c64f8d67-qvfk9" deleted
pod "awx-postgres-0" deleted

The Postgres pod should come up in a few seconds:

# kubectl get pods -l "app.kubernetes.io/managed-by=awx-operator"
NAME                   READY   STATUS             RESTARTS   AGE
awx-postgres-0         1/1     Running            0          42s
awx-66c64f8d67-4qmh5   4/4     Running            0          42s

If you experience any issues with the pods starting, check the operator deployment logs:

kubectl logs -f deployments/awx-operator-controller-manager -c awx-manager

Data Persistence

The database data is persistent because it is stored in a persistent volume:

$ kubectl get pvc
NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
postgres-13-awx-postgres-13-0   Bound    pvc-1ff27783-a9d7-46c0-8160-803615784267   8Gi        RWO            local-path     20m
static-data-pvc                 Bound    pvc-cac663a8-0f9d-4d8e-96a2-4821576fab97   5Gi        RWO            local-path     21m
awx-projects-claim              Bound    pvc-e49c3bd8-79d4-4640-acd1-990578ca27c1   8Gi        RWO            local-path     19m

Volumes are created using the local-path provisioner and a host path:

$ ls -1 /var/lib/rancher/k3s/storage/
pvc-1ff27783-a9d7-46c0-8160-803615784267_awx_postgres-13-awx-postgres-13-0
pvc-cac663a8-0f9d-4d8e-96a2-4821576fab97_awx_static-data-pvc
pvc-e49c3bd8-79d4-4640-acd1-990578ca27c1_awx_awx-projects-claim

Checking AWX Container’s logs

Recent AWX versions split the application into an awx-web-xxx-yyy pod and an awx-task-xxx-yyy pod. Between them, they run the following containers:

  • redis
  • awx-web
  • awx-task
  • awx-ee
  • awx-rsyslog

Get deployments:

$ kubectl get deploy
NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
awx-operator-controller-manager   1/1     1            1           22m
awx-task                          1/1     1            1           20m
awx-web                           1/1     1            1           19m

List Pods:

$ kubectl get pod
NAME                                               READY   STATUS    RESTARTS   AGE
awx-operator-controller-manager-74889d49c8-kch8t   2/2     Running   0          36m
awx-postgres-13-0                                  1/1     Running   0          34m
awx-task-6874d8656b-v75s4                          4/4     Running   0          33m
awx-web-6c797b8657-sllzt                           3/3     Running   0          32m

List the containers in a pod:

$ kubectl get pods  awx-web-6c797b8657-sllzt -o jsonpath='{range .spec.containers[*]}{.name}{"\n"}{end}'
redis
awx-web
awx-rsyslog

$ kubectl get pods  awx-postgres-13-0 -o jsonpath='{range .spec.containers[*]}{.name}{"\n"}{end}'
postgres

$ kubectl get pods  awx-task-6874d8656b-v75s4  -o jsonpath='{range .spec.containers[*]}{.name}{"\n"}{end}'
redis
awx-task
awx-ee
awx-rsyslog

$ kubectl get pods  awx-operator-controller-manager-74889d49c8-kch8t  -o jsonpath='{range .spec.containers[*]}{.name}{"\n"}{end}'
kube-rbac-proxy
awx-manager

To check the logs of a deployment:

# kubectl -n awx  logs deploy/awx-web
Defaulted container "redis" out of: redis, awx-web, awx-rsyslog, init (init), init-projects (init)
1:C 16 Jun 2023 10:26:05.653 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 16 Jun 2023 10:26:05.653 # Redis version=7.0.11, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 16 Jun 2023 10:26:05.653 # Configuration loaded
1:M 16 Jun 2023 10:26:05.654 * monotonic clock: POSIX clock_gettime
1:M 16 Jun 2023 10:26:05.655 * Running mode=standalone, port=0.
1:M 16 Jun 2023 10:26:05.655 # Server initialized
1:M 16 Jun 2023 10:26:05.655 * The server is now ready to accept connections at /var/run/redis/redis.sock
1:M 16 Jun 2023 10:31:06.022 * 100 changes in 300 seconds. Saving...
1:M 16 Jun 2023 10:31:06.023 * Background saving started by pid 21
21:C 16 Jun 2023 10:31:06.030 * DB saved on disk
21:C 16 Jun 2023 10:31:06.031 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
1:M 16 Jun 2023 10:31:06.123 * Background saving terminated with success
1:M 16 Jun 2023 10:36:07.077 * 100 changes in 300 seconds. Saving...
1:M 16 Jun 2023 10:36:07.077 * Background saving started by pid 22
22:C 16 Jun 2023 10:36:07.088 * DB saved on disk
22:C 16 Jun 2023 10:36:07.089 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
1:M 16 Jun 2023 10:36:07.178 * Background saving terminated with success
1:M 16 Jun 2023 10:41:08.027 * 100 changes in 300 seconds. Saving...
1:M 16 Jun 2023 10:41:08.028 * Background saving started by pid 23
23:C 16 Jun 2023 10:41:08.038 * DB saved on disk
23:C 16 Jun 2023 10:41:08.039 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
1:M 16 Jun 2023 10:41:08.129 * Background saving terminated with success

You'll need to provide the container name with the -c flag:

kubectl -n awx  logs deploy/awx-web -c redis
kubectl -n awx  logs deploy/awx-web -c awx-web
kubectl -n awx  logs deploy/awx-web -c awx-rsyslog

Access AWX Container’s Shell

Here is how to access each container’s shell:

kubectl exec -ti deploy/awx-web -c redis -- /bin/bash
kubectl exec -ti deploy/awx-web  -c  awx-web -- /bin/bash
kubectl exec -ti awx-postgres-13-0  -c  postgres -- /bin/bash

Upgrading AWX Operator and instance

We have created a dedicated guide for upgrading the Operator and AWX instance.

Step 5: Access Ansible AWX Dashboard

List all available services and check the awx-service NodePort:

$ kubectl get svc -l "app.kubernetes.io/managed-by=awx-operator"
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
awx-postgres-13   ClusterIP   None            <none>        5432/TCP       153m
awx-service       NodePort    10.43.179.217   <none>        80:30059/TCP   152m

We can see the service type is set to NodePort. To access the service from outside the Kubernetes cluster, use the K3s node IP address or hostname together with the node port shown in the output above (30059 in this example):

http://hostip_or_hostname:30059

You can edit the NodePort and set it to a value of your preference, but it must be within the range 30000-32768.
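
If you prefer a fixed, predictable port, the AWX Operator provides a nodeport_port field in the AWX spec (confirm the field name against the operator documentation for your release). A sketch of the relevant lines to add under spec: in the awx-deploy.yml created earlier:

spec:
  service_type: nodeport
  # hypothetical fixed port; must fall within the NodePort range 30000-32768
  nodeport_port: 30080

Then re-apply the manifest with kubectl apply -f awx-deploy.yml.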

On accessing the AWX web portal, you are presented with the AWX login page.


The login username is admin.

Obtain the admin user password by decoding the secret that contains it:

kubectl get secret awx-admin-password -o jsonpath="{.data.password}" | base64 --decode

Better output format:

kubectl get secret awx-admin-password -o go-template='{{range $k,$v := .data}}{{printf "%s: " $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{"\n"}}{{end}}'
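
If you would rather set the admin password to something memorable, one common approach is to run the changepassword management command through awx-manage inside the awx-task container (a sketch, assuming the deployment names shown earlier):

kubectl -n awx exec -ti deploy/awx-task -c awx-task -- awx-manage changepassword admin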

Log in with the admin username and the decoded password.


The new AWX interface is a major redesign with a clean, polished look.


In the articles to follow, we will explore Ansible AWX administration and its use in automating repetitive infrastructure tasks.

Review Kubernetes Node resources to ensure they are enough to run AWX:

$ kubectl top nodes --use-protocol-buffers
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
ubuntu   102m         1%     2534Mi          15%

Step 6: Configure Ingress for AWX

If you would like to access AWX using a domain name and SSL, check out our Ingress articles.
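
As a rough idea of what that involves, the AWX Operator can create the Ingress resource for you through fields in the AWX spec. A minimal sketch, assuming the hostname awx.example.com and an ingress controller already running in the cluster (field names may differ slightly between operator releases, so confirm against the operator documentation):

---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  service_type: ClusterIP
  ingress_type: ingress
  hostname: awx.example.com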

