In most cases you will not require SSH access to your OpenShift cluster nodes for regular administration tasks, because OpenShift 4 provides the oc debug command for shell access. If you still prefer SSH access to the cluster nodes for day 2 operations, you have to use the SSH keys provided during deployment. If the private and public SSH key pair has been lost, it is possible to update the OpenShift 4.x cluster SSH keys after installation, provided you have administrative (cluster-admin) access with oc.
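For example, you can get a shell on a node without SSH by using oc debug (the node name below is just an example):
oc debug node/master-01.example.com
# Inside the debug pod, switch into the host's root filesystem:
chroot /host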
The same procedure applies to configuring SSH keys post-installation if the OpenShift cluster was installed without SSH keys. We will see how to update SSH keys for master, infra and worker machines in your OpenShift 4.x cluster. The SSH keys are updated by modifying (or creating) the proper MachineConfig objects on the cluster.
OpenShift 4 is an operator-focused platform, and the Machine Config Operator extends that to the operating system itself, managing updates and configuration changes to essentially everything between the kernel and the kubelet. By default, RHCOS contains a single user named core (derived in spirit from CoreOS Container Linux) with optional SSH keys specified at install time.
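For context, the keys specified at install time come from the sshKey field in install-config.yaml. A minimal excerpt, with all values shown as placeholders, looks something like this:
apiVersion: v1
baseDomain: example.com
metadata:
  name: mycluster
sshKey: 'ssh-rsa AAAAB3Nza..... user@bastion'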
Update OpenShift 4.x SSH Keys after Cluster Setup
By default, there are two MachineConfig objects that handle management of SSH keys:
- 99-worker-ssh – Manages SSH keys for worker nodes
- 99-master-ssh – Manages SSH keys for master nodes in the cluster
If SSH keys are specified at the time of cluster installation, they are propagated to the above MachineConfig objects.
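You can confirm that both objects exist on your cluster with:
oc get machineconfig 99-master-ssh 99-worker-ssh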
Update OpenShift Master nodes SSH Keys
If the cluster installation was originally done with SSH keys, download the current SSH MachineConfig object for the master nodes:
oc get mc 99-master-ssh -o yaml > 99-master-ssh.yml
This should be done from a bastion server with admin-level cluster access.
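Before making changes, it is worth confirming you are logged in with sufficient privileges, for example:
oc whoami
oc auth can-i update machineconfigs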
Once the file is downloaded, edit it to contain the desired keys. For a cluster created without SSH keys, create a new file named 99-master-ssh.yml:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-master-ssh
spec:
  config:
    ignition:
      config: {}
      security:
        tls: {}
      timeouts: {}
      version: 2.2.0
    networkd: {}
    passwd:
      users:
      - name: core
        sshAuthorizedKeys:
        - ssh-rsa XXXXXXX.....
        - ssh-rsa YYYYYYY.....
    storage: {}
    systemd: {}
  fips: false
  kernelArguments: null
  osImageURL: ""
Key notes:
- The sshAuthorizedKeys array contains all valid SSH public keys, one key per array element. Be careful with the YAML syntax so the configuration file remains valid (a key generation example follows these notes).
- The name field under users should not be changed, as core is the only user currently supported by this configuration.
- Updating the MachineConfig object may drain and reboot all the nodes one by one (according to the maxUnavailable setting on the MachineConfigPool). Decide whether this is acceptable in your infrastructure.
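If the original key pair was lost, you can generate a new one and paste the contents of the resulting .pub file into the sshAuthorizedKeys array. The file path and comment below are only examples:
ssh-keygen -t rsa -b 4096 -f ~/.ssh/ocp4_rsa -C "ocp4-cluster"
cat ~/.ssh/ocp4_rsa.pub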
To update the MachineConfig object, run the command:
oc apply -f 99-master-ssh.yml
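After applying, the Machine Config Operator should generate a new rendered configuration for the master pool, which you can list with:
oc get machineconfig | grep rendered-master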
Update OpenShift Worker / Infra nodes SSH Keys
The same process applies to the worker MachineConfigPool. Download or create the worker SSH configuration:
oc get mc 99-worker-ssh -o yaml > 99-worker-ssh.yml
The configuration is updated in the same way as before:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-ssh
spec:
  config:
    ignition:
      config: {}
      security:
        tls: {}
      timeouts: {}
      version: 2.2.0
    networkd: {}
    passwd:
      users:
      - name: core
        sshAuthorizedKeys:
        - ssh-rsa XXXXXXX.....
        - ssh-rsa YYYYYYY.....
    storage: {}
    systemd: {}
  fips: false
  kernelArguments: null
  osImageURL: ""
Once changes are done, apply the file:
oc apply -f 99-worker-ssh.yml
The OpenShift Machine Config Operator should start applying the changes shortly. You can run the following command to see which MachineConfigPools are being updated:
oc get mcp
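While a pool is updating, you can also watch the affected nodes being cordoned, drained and updated:
oc get nodes -w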
For more insight into how the changes are being applied on a single node, use the command below:
export NODE="master-01.example.com"
oc -n openshift-machine-config-operator logs -c machine-config-daemon $(oc -n openshift-machine-config-operator get pod -l k8s-app=machine-config-daemon --field-selector spec.nodeName=${NODE} -o name) -f
Replace master-01.example.com with the name of the node you want to check.
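Once the MachineConfigPools report as updated, you can verify SSH access with the new key (the host name and key path are examples):
ssh -i ~/.ssh/ocp4_rsa core@master-01.example.com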
We hope this article helped you update SSH keys in your OpenShift 4.x cluster after installation, whether the cluster was originally installed with or without SSH keys.
More guides on OpenShift:
Backup Etcd data on OpenShift 4.x to AWS S3 Bucket
How to change pids_limit value in OpenShift 4.x