Artifactory is a binary repository manager. It is a software tool used to manage and organize binary artifacts, such as software libraries, application binaries, Docker images, and other build artifacts. It acts as a central hub for storing, versioning, and distributing these artifacts in a controlled and efficient manner.
JFrog Artifactory is one of the most popular and widely used binary repository managers. It is developed by JFrog, a company known for its DevOps and software development tools. It serves as a universal repository manager, supporting various package formats and build technologies, including Maven, Gradle, NuGet, Docker, npm, and more. It is a reliable and scalable solution for managing the entire artifact lifecycle, from development to deployment.
The features and benefits offered by JFrog Artifactory are:
- Build Promotion and Release Management: It supports build promotion, allowing controlled promotion of artifacts across different environments. It facilitates release management by providing release bundles, release notes, and integration with CI/CD pipelines.
- High Availability and Scalability: Artifactory can be deployed in a clustered architecture, providing high availability and scalability. It supports load balancing and replication, allowing the distribution of artifacts across multiple instances for improved performance and resilience.
- Dependency Management: Artifactory efficiently handles artifact dependencies, ensuring consistent and reliable builds. It helps resolve and cache dependencies, reducing build times and optimizing software development workflows.
- Integration with CI/CD Tools: Artifactory integrates with popular CI/CD tools like Jenkins, Bamboo, and TeamCity, enabling seamless artifact management and automation within the software development pipeline.
- Security and Access Control: JFrog Artifactory includes robust security features, such as fine-grained access control, authentication mechanisms, and encryption of artifacts. It helps ensure the integrity and confidentiality of artifacts stored in the repository.
- Artifact Management: It provides a secure and centralized location to store and manage binary artifacts. It offers version control, metadata management, and search capabilities, making it easy to track, organize, and retrieve artifacts.
This guide provides a detailed demonstration of how to deploy JFrog Artifactory on Kubernetes With Ingress.
1: Set up Prerequisites
For this guide, you need to have:
- x86-64-v2 Support. This can be verified as shown:
$ cat /proc/cpuinfo | grep "model name"
model name : Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz
model name : Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz
$ lscpu | grep avx2
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat umip md_clear arch_capabilities
- 2 CPU cores and above
- Kubernetes 1.12+ cluster set up. Luckily, we have dedicated guides on our page to help you achieve this:
- Deploy HA Kubernetes Cluster on Rocky Linux 8 using RKE2
- Run Kubernetes on Debian with Minikube
- Deploy Kubernetes Cluster on Linux With k0s
- Install Kubernetes Cluster on Ubuntu using K3s
- Install Kubernetes Cluster on Rocky Linux 8 with Kubeadm & CRI-O
- Deploy k0s Kubernetes on Rocky Linux 9 using k0sctl
- Install Minikube Rocky Linux 9 and Create Kubernetes Cluster
Once the cluster is up, you can install kubectl:
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin
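You can optionally confirm the kubectl client is installed and on your PATH:
kubectl version --client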
To access the cluster, export the admin configuration
##For RKE2
export PATH=$PATH:/var/lib/rancher/rke2/bin
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
##For K0s
export KUBECONFIG=/var/lib/k0s/pki/admin.conf
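If you built your cluster with K3s or kubeadm instead, the admin kubeconfig typically lives at:
##For K3s
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
##For kubeadm-based clusters
export KUBECONFIG=/etc/kubernetes/admin.conf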
Verify access with the command:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 3m42s v1.27.1+k0s
node1 Ready <none> 50s v1.27.1+k0s
node2 Ready <none> 71s v1.27.1+k0s
2: Run JFrog Artifactory on Kubernetes
There are two ways to run JFrog Artifactory on Kubernetes:
- Using Helm Charts (Pro Edition) – License Required
- Manually with Manifest files (Open-Source Edition)
Option 1: Run JFrog Artifactory using Helm Charts (Pro Edition)
For this method, begin by installing Helm on your system. You can use the below guide to achieve this:
Verify the installation with the command:
$ helm version
version.BuildInfo{Version:"v3.12.0", GitCommit:"c9f554d75773799f72ceef38c51210f1842a1dea", GitTreeState:"clean", GoVersion:"go1.20.3"}
Add the JFrog Artifactory Helm chart repository:
helm repo add jfrog-charts https://charts.jfrog.io
helm repo update
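You can confirm the chart is now available from the repository before installing:
helm search repo jfrog-charts/artifactory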
When deploying JFrog Artifactory on Kubernetes, you need a values.yml file that provides custom configurations and overrides the default settings. The file allows you to specify various parameters such as resource allocation, database configuration, and security settings.
Create the file:
vim values.yml
In the file, add the below lines:
postgresql:
  enabled: true
  postgresqlUsername: artifactory
  postgresqlPassword: Passw0rd
  postgresqlDatabase: artifactory
#databaseUpgradeReady: true
#unifiedUpgradeAllowed: true
artifactory:
  database:
    type: postgresql
    host: postgresql
    port: 5432
    name: artifactory
    username: artifactory
    password: Passw0rd
  # masterKeySecretName: my-masterkey-secret
  # joinKeySecretName: my-joinkey-secret
  # license:
  #   secret: artifactory-cluster-license
  #   dataKey: art.lic
nginx:
  enabled: false
In the values file, we have disabled Nginx and enabled the bundled PostgreSQL database for JFrog. There are quite a number of other configurations you can make here.
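For example, the commented masterKeySecretName and joinKeySecretName entries let you supply pre-created secrets holding the Artifactory master and join keys instead of having the chart generate them. A minimal sketch of creating such secrets, assuming the chart's default secret keys master-key and join-key, would be:
kubectl create secret generic my-masterkey-secret --from-literal=master-key=$(openssl rand -hex 32)
kubectl create secret generic my-joinkey-secret --from-literal=join-key=$(openssl rand -hex 32)
You would then uncomment the two corresponding lines in values.yml so the chart picks the secrets up.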
JFrog Artifactory storage
With the enterprise license, JFrog Artifactory supports a wide range of storage backends. The supported backends can be found at Artifactory Filestore options.
To configure storage, set artifactory.persistence.type and pass the required configuration settings for that backend. The default storage type is file-system with replication, where the data is replicated to all nodes.
To enable a persistent storage backend, you pass a number of variables, depending on the backend:
- NFS
To use NFS, you can deploy the NFS storage on Kubernetes with the guide below:
Once created, you can run the helm install or helm upgrade command with the below variables:
...
--set artifactory.persistence.type=nfs \
--set artifactory.persistence.nfs.ip=${NFS_IP} \
...
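For example, a complete install command that combines the values file with the NFS backend might look like the sketch below (the NFS server IP here is only a placeholder; substitute your own):
helm upgrade --install artifactory -f values.yml \
  --set artifactory.persistence.type=nfs \
  --set artifactory.persistence.nfs.ip=192.168.200.5 \
  jfrog-charts/artifactory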
- Google Storage
You can also use a Google Storage bucket as the cluster’s filestore. You need to pass the below parameters to the helm install or helm upgrade command:
...
--set artifactory.persistence.type=google-storage \
--set artifactory.persistence.googleStorage.identity=${GCP_ID} \
--set artifactory.persistence.googleStorage.credential=${GCP_KEY} \
...
- AWS S3
To use an AWS S3 bucket as the cluster’s filestore, pass the below parameters to the helm install or helm upgrade command:
...
# With explicit credentials:
--set artifactory.persistence.type=aws-s3 \
--set artifactory.persistence.awsS3.endpoint=${AWS_S3_ENDPOINT} \
--set artifactory.persistence.awsS3.region=${AWS_REGION} \
--set artifactory.persistence.awsS3.identity=${AWS_ACCESS_KEY_ID} \
--set artifactory.persistence.awsS3.credential=${AWS_SECRET_ACCESS_KEY} \
...
...
# Using an existing IAM role
--set artifactory.persistence.type=aws-s3 \
--set artifactory.persistence.awsS3.endpoint=${AWS_S3_ENDPOINT} \
--set artifactory.persistence.awsS3.region=${AWS_REGION} \
--set artifactory.persistence.awsS3.roleName=${AWS_ROLE_NAME} \
...
- Microsoft Azure Blob Storage
You can use Microsoft Azure Blob Storage to persist data. You need to pass the below parameters to the helm install or helm upgrade command:
...
--set artifactory.persistence.type=azure-blob \
--set artifactory.persistence.azureBlob.accountName=${AZURE_ACCOUNT_NAME} \
--set artifactory.persistence.azureBlob.accountKey=${AZURE_ACCOUNT_KEY} \
--set artifactory.persistence.azureBlob.endpoint=${AZURE_ENDPOINT} \
--set artifactory.persistence.azureBlob.containerName=${AZURE_CONTAINER_NAME} \
...
- Dynamic hostPath PV
You can also use a Dynamic hostPath PV to persist JFrog data. To create a Dynamic hostPath PV, follow the below guide:
After creating a storage class with the above guide, make it default:
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Verify changes:
$ kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-path (default) rancher.io/local-path Delete WaitForFirstConsumer false 48m
To use a Dynamic hostPath PV, you need to add the below lines to your values.yml file:
postgresql:
  enabled: true
  .....
  persistence:
    enabled: true
    existingClaim: ""
  ......
artifactory:
  ....
  persistence:
    enabled: true
    existingClaim: ""
  ......
nginx:
  enabled: false
Once all the configurations have been made, deploy JFrog Artifactory with the appropriate command and variables. For example, with the values.yml file above, the command will be:
helm upgrade --install artifactory -f values.yml jfrog-charts/artifactory
Sample Output:
Release "artifactory" does not exist. Installing it now.
NAME: artifactory
LAST DEPLOYED: Thu May 18 12:56:41 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Congratulations. You have just deployed JFrog Artifactory!
1. Get the Artifactory URL by running these commands:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of the service by running 'kubectl get svc --namespace default -w artifactory-artifactory-nginx'
export SERVICE_IP=$(kubectl get svc --namespace default artifactory-artifactory-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP/
2. Open Artifactory in your browser
Default credential for Artifactory:
user: admin
password: password
Once started, view the running pods:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
artifactory-0 1/1 Running 0 4m19s
artifactory-postgresql-0 1/1 Running 0 4m19s
View the PVCs:
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
artifactory-volume-artifactory-0 Bound pvc-8014040d-c483-4327-a0fb-a8d832ba6733 20Gi RWO local-path 18m
data-artifactory-postgresql-0 Bound pvc-ac6174ff-d7af-453d-96fc-c6766f82df74 200Gi RWO local-path 18m
View the service:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
artifactory ClusterIP 10.110.206.174 <none> 8082/TCP,8081/TCP 2m13s
artifactory-postgresql ClusterIP 10.110.9.107 <none> 5432/TCP 2m13s
artifactory-postgresql-headless ClusterIP None <none> 5432/TCP 2m13s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 84m
We only have ClusterIP services for JFrog, which cannot be accessed from outside the cluster. To expose the UI externally, proceed and create an Ingress.
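Before the Ingress is in place, you can still reach the UI from your workstation for a quick check by port-forwarding the artifactory service and then browsing to http://localhost:8082:
kubectl port-forward svc/artifactory 8082:8082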
If you want to remove the deployment, use:
helm uninstall artifactory
Option 2: Run JFrog Artifactory using Manifests (Open-Source Edition)
It is also possible to run JFrog Artifactory with manifest files. First, create a namespace:
kubectl create namespace artifactory
Create persistent storage, for example a Dynamic hostPath PV, by following one of the guides below:
- Dynamic hostPath PV Creation in Kubernetes using Local Path Provisioner
- How To Deploy Rook Ceph Storage on Kubernetes Cluster
- Configure NFS as Kubernetes Persistent Volume Storage
Set the created storage class as the default SC:
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Now we can use the storage in the deployment. Create the database:
vim postgresql.yaml
Add the lines below to it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql
  namespace: artifactory
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgresql
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
        - name: postgresql
          image: postgres:14
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: artifactory
            - name: POSTGRES_USER
              value: artifactory
            - name: POSTGRES_PASSWORD
              value: passw0rd
          volumeMounts:
            - name: postgresql-data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgresql-data
          persistentVolumeClaim:
            claimName: postgresql
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgresql
  namespace: artifactory
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Create the database and PVC:
kubectl apply -f postgresql.yaml
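You can confirm the database pod is running and its PVC is bound before moving on:
kubectl get pods,pvc -n artifactory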
Create a service:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: postgresql-service
  namespace: artifactory
spec:
  selector:
    app: postgresql
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
EOF
Create a deployment file:
vim jfrog.yaml
In the file, add the below lines:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: artifactory
  namespace: artifactory
spec:
  replicas: 1
  selector:
    matchLabels:
      app: artifactory
  template:
    metadata:
      labels:
        app: artifactory
    spec:
      containers:
        - name: artifactory
          # Artifactory OSS image (the old docker.bintray.io registry has been retired)
          image: releases-docker.jfrog.io/jfrog/artifactory-oss:latest
          ports:
            - containerPort: 8081
            - containerPort: 8082
          env:
            - name: DB_HOST
              value: postgresql-service.artifactory.svc.cluster.local
            - name: DB_PORT
              value: "5432"
            - name: DB_NAME
              value: artifactory
            - name: DB_USERNAME
              value: artifactory
            - name: DB_PASSWORD
              value: passw0rd
          volumeMounts:
            - name: artifactory-data
              mountPath: /var/opt/jfrog/artifactory
      volumes:
        - name: artifactory-data
          persistentVolumeClaim:
            claimName: jfrog
The Deployment references a PVC named jfrog under claimName; create that PVC before you apply the deployment.
Create the PVC:
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jfrog
  namespace: artifactory
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
Apply the deployment manifest:
kubectl apply -f jfrog.yaml
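Optionally, wait for the rollout to complete; Artifactory can take a few minutes to start:
kubectl rollout status deployment/artifactory -n artifactory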
Create a JFrog service:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: artifactory
  namespace: artifactory
spec:
  selector:
    app: artifactory
  ports:
    - name: http
      protocol: TCP
      port: 8082
      targetPort: 8082
EOF
Check if the pods are running:
$ kubectl get pods -n artifactory
NAME READY STATUS RESTARTS AGE
artifactory-589d94c8bd-n4j7z 1/1 Running 0 68s
postgresql-5455df5b68-x58wb 1/1 Running 0 2m58s
View the service:
$ kubectl get svc -n artifactory
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
artifactory ClusterIP 10.100.195.71 <none> 8082/TCP 34s
postgresql-service ClusterIP 10.98.243.6 <none> 5432/TCP 2m15s
You can also view the PVCs:
$ kubectl get pvc -n artifactory
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
jfrog Bound pvc-b01c466e-5f45-4d86-b839-e64a32cd9a2f 10Gi RWO local-path 2m12s
postgresql Bound pvc-ed5baa01-8681-4796-960a-acf2690c4e5e 10Gi RWO local-path 3m58s
3: Deploy Ingress For JFrog Artifactory
We now want to access the JFrog Artifactory service from outside the cluster. For this guide, we will learn how to spin up either of two Ingress controllers for JFrog. These are:
- Nginx Ingress
- Traefik Ingress
Both controllers work in much the same way, so choose whichever you prefer. Before we proceed, note that if your cluster runs in a private cloud environment or any kind of on-prem infrastructure, you need to deploy MetalLB so that LoadBalancer services can obtain an external IP. This can be done using the guide below:
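Once MetalLB is installed, it needs an address pool it can assign to LoadBalancer services. Below is a minimal sketch using MetalLB's IPAddressPool and L2Advertisement resources; the pool name and the 192.168.200.70-192.168.200.80 range are assumptions for this environment, so adjust them to a free range on your network:
kubectl apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: jfrog-pool            # assumed pool name, any name works
  namespace: metallb-system
spec:
  addresses:
    - 192.168.200.70-192.168.200.80   # assumed free range on the node network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: jfrog-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - jfrog-pool
EOF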
Option 1: Deploy Nginx Ingress For JFrog Artifactory
Use the guide below to deploy an Nginx Ingress controller on your Kubernetes cluster:
Once deployed, get the external service IP address of the Nginx ingress:
$ kubectl get service ingress-nginx-controller --namespace=ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.96.12.197 192.168.200.70 80:32392/TCP,443:32744/TCP 2m13s
Ensure that you have a LoadBalancer service type with an external IP address for the Nginx Ingress service as shown above.
Now create an Ingress rule to forward traffic to the JFrog app:
vim jfrog-ingress.yaml
Add the below lines:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jfrog-ingress
  namespace: <my-ingress-namespace>
spec:
  ingressClassName: nginx
  rules:
    - host: jfrog.geeksforgeeks.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: <jfrog-service-name>
                port:
                  number: 8082
In the above file, replace <jfrog-service-name> and <my-ingress-namespace> with your actual service name and namespace. The Ingress must be created in the same namespace as the JFrog service.
Now apply the manifest:
kubectl apply -f jfrog-ingress.yaml
View the ingress:
$ kubectl get ingress -A
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
default jfrog-ingress nginx jfrog.geeksforgeeks.org 192.168.200.40 80 15m
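You can already test the rule from any host that can reach the Ingress address shown above by passing the Host header explicitly:
curl -I -H "Host: jfrog.geeksforgeeks.org" http://192.168.200.40/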
Option 2: Deploy Traefik Ingress For JFrog Artifactory
You can also use Traefik Ingress for JFrog Artifactory. Follow the below guide to install the Traefik Ingress controller:
Once installed, get the IP address of the Traefik service:
kubectl get svc -l app.kubernetes.io/name=traefik -n traefik
Create an Ingress rule for JFrog:
vim jfrog-traefik.yaml
Add the below lines to the file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jfrog-traefik-ingress
  namespace: <my-ingress-namespace>
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
    - host: jfrog.geeksforgeeks.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: <jfrog-service-name>
                port:
                  number: 8082
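Apply the manifest and confirm the Ingress has been created:
kubectl apply -f jfrog-traefik.yaml
kubectl get ingress -n <my-ingress-namespace>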
4: Access JFrog Artifactory Web UI
Before we proceed and access the Web UI, we need to map the obtained IP to the domain name:
$ sudo vim /etc/hosts
192.168.200.40 jfrog.geeksforgeeks.org
Save the file and proceed to access the page using the URL http://jfrog.geeksforgeeks.org.
Log in with the default credentials shown above (user admin, password password). Then click on Get Started to make the initial configurations.
Set a new password for the admin user.
For the Pro version, provide the license obtained from the JFrog Licenses page.
Proceed and set the Base URL that will be used to access the JFrog Platform. For example:
You can configure the proxy:
Create a repository:
Now complete the settings:
You will have JFrog ready for use.
Closing Thoughts
That marks the end of this detailed illustration of how to deploy JFrog Artifactory on Kubernetes With Ingress. We have walked through all the necessary configurations when setting up JFrog Artifactory on Kubernetes. I hope this was informative.