How To Upgrade To Proxmox VE 8 from Proxmox VE 7


Proxmox VE is an open-source, complete server management platform for enterprise virtualization. It is designed with tight KVM integration as the hypervisor and supports Linux Containers (LXC). Proxmox VE also comes with an integrated web-based user interface that lets you manage virtual machines, containers, high availability for clusters, and the integrated disaster recovery tools, all from an easy-to-use web dashboard.

Proxmox Server Solutions GmbH released the stable version of Proxmox VE 8.0, based on the latest Debian 12 (Bookworm). An extensively tested and documented upgrade path lets users currently running Proxmox VE 7.4 or older versions upgrade smoothly. Proxmox VE 8.0 ships a newer Linux kernel 6.2 and updates to QEMU, LXC, ZFS, and Ceph.

Proxmox Virtual Environment 8.0 Highlights

  • Network resources defined for Software-defined Networking (SDN) are now also available as objects in the access control subsystem (ACL) of Proxmox VE.
  • Authentication realm sync jobs: The synchronization of users and groups for LDAP-based realms (LDAP & Microsoft Active Directory) can now be configured to run automatically at regular intervals. This simplifies management and removes a source of configuration errors and omissions compared to synchronizing the realm manually.
  • Resource mappings: Mappings between resources, such as PCI(e) or USB devices, and nodes in a Proxmox VE cluster, can now be created and managed in the API and the web interface.
  • New Ceph Enterprise repository: Proxmox Virtual Environment fully integrates Ceph Quincy, allowing you to run and manage Ceph storage directly from any of the cluster nodes and to easily set up and manage a hyper-converged infrastructure.
  • Secure lockout for Two-factor authentication/TOTP: To further improve security, user accounts with too many login attempts – failing the second factor authentication – are locked out.
  • The x86-64-v2-AES model is the new default CPU type for VMs created via the web interface. It provides important extra features over qemu64/kvm64 and improves the performance of many computing operations (see the example after this list).
  • Text-based user interface (TUI) for the installer ISO: A text-based user interface has been added and can optionally be used to gather all required information. This eliminates issues launching the GTK-based graphical installer that sometimes occur on very new as well as rather old hardware.
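
New VMs created in the web interface will pick up this CPU type automatically. For existing guests you can switch the model per VM after the upgrade, for example (a hedged sketch: VMID 100 is just a placeholder, and the change only takes effect once the VM is powered off and started again):

# Hypothetical VMID, adjust to your own guest
qm set 100 --cpu x86-64-v2-AES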

In this article we cover the process of upgrading from PVE 7 to PVE 8 from the command line. This article does not cover hyper-converged Ceph setups; if you have Ceph, refer to the official documentation on how to perform its upgrade. Follow the guides Ceph Octopus to Pacific and Ceph Pacific to Quincy, respectively.


Recommended Prerequisites

  • Upgraded to the latest version of Proxmox VE 7.4 on all nodes.
  • Any co-installed Proxmox Backup Server upgraded according to its own upgrade guide
  • Valid and tested backup of all VMs and CTs (in case something goes wrong)
  • A healthy cluster
  • At least 5 GB of free disk space on the root mount point (see the quick checks after this list)
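
Two of these points are easy to verify up front (a minimal sketch; VMID 100 and the local storage are placeholders, adjust them to your environment):

# Free disk space on the root mount point
df -h /

# Test backup of a single guest to confirm backups are working
vzdump 100 --storage local --mode snapshot --compress zstd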

Step 1: Get current Proxmox VE release

Login to your Proxmox VE 7 server and confirm its release.

root@wks:~# pveversion
pve-manager/7.4-3/9002ab8a (running kernel: 5.15.107-2-pve)

We can see I have version 7.4 of Proxmox VE. Start by ensuring your system and packages are on the latest releases.

apt update && apt upgrade -y
shutdown -r now

If you have a cluster, confirm that it is healthy as well.

pvecm status
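
On a standalone host, pvecm will simply report that the node is not part of a cluster, which is fine. In a cluster you can additionally list the member nodes (a standard pvecm subcommand):

pvecm nodes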

Step 2: Shutdown all running VMs & Containers

List all running instances and shut them down.

#  qm list
      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID
       100 mail.example.com     running    16384            100.00 1071

# pct list

To stop them, use the commands:

### VMS ###
# qm stop <VMID>

### LXC Containers ###
# pct stop <ContainerID>
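
If you prefer to shut everything down in one pass, a short shell loop works as well (a sketch that assumes every guest on this node may be stopped; qm/pct shutdown perform a clean shutdown instead of a hard stop):

# Gracefully shut down all running VMs and containers
for vmid in $(qm list | awk '/running/ {print $1}'); do qm shutdown $vmid; done
for ctid in $(pct list | awk '/running/ {print $1}'); do pct shutdown $ctid; done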

It is recommended to take a backup and validate that it is functional before you begin the upgrade process. See the guide Backup and Restore for how you can back up your VMs and Containers on Proxmox.

Step 3: Configure Proxmox VE 8 repositories

For a standalone server it is much easier to perform an upgrade. Proxmox VE 7.4 ships a small checklist program named pve7to8 which provides hints and warnings about potential issues before, during, and after the upgrade process.

#  which pve7to8
/usr/bin/pve7to8

If you don't get any output, first do an update and upgrade.

apt update && apt upgrade -y

You can run this program on the terminal using:

# pve7to8

To run it with all checks enabled, execute:

# pve7to8 --full

Sample execution output with a warning – I have a running Virtual Machine instance.

= CHECKING VERSION INFORMATION FOR PVE PACKAGES =

Checking for package updates..
PASS: all packages up-to-date

Checking proxmox-ve package version..
PASS: proxmox-ve package has version >= 7.4-1

Checking running kernel version..
PASS: running kernel '5.15.108-1-pve' is considered suitable for upgrade.

= CHECKING CLUSTER HEALTH/SETTINGS =

SKIP: standalone node.

= CHECKING HYPER-CONVERGED CEPH STATUS =

SKIP: no hyper-converged ceph setup detected!

= CHECKING CONFIGURED STORAGES =

PASS: storage 'local' enabled and active.
PASS: storage 'local-lvm' enabled and active.
INFO: Checking storage content type configuration..
PASS: no storage content problems found
PASS: no storage re-uses a directory for multiple content types.

= MISCELLANEOUS CHECKS =

INFO: Checking common daemon services..
PASS: systemd unit 'pveproxy.service' is in state 'active'
PASS: systemd unit 'pvedaemon.service' is in state 'active'
PASS: systemd unit 'pvescheduler.service' is in state 'active'
PASS: systemd unit 'pvestatd.service' is in state 'active'
INFO: Checking for supported & active NTP service..
PASS: Detected active time synchronisation unit 'chrony.service'
INFO: Checking for running guests..
WARN: 1 running guest(s) detected - consider migrating or stopping them.
INFO: Checking if the local node's hostname 'wks' is resolvable..
INFO: Checking if resolved IP is configured on local node..
PASS: Resolved node IP '192.168.1.3' configured and active on single interface.
INFO: Check node certificate's RSA key size
PASS: Certificate 'pve-root-ca.pem' passed Debian Busters (and newer) security level for TLS connections (4096 >= 2048)
PASS: Certificate 'pve-ssl.pem' passed Debian Busters (and newer) security level for TLS connections (2048 >= 2048)
PASS: Certificate 'pveproxy-ssl.pem' passed Debian Busters (and newer) security level for TLS connections (4096 >= 2048)
INFO: Checking backup retention settings..
PASS: no backup retention problems found.
INFO: checking CIFS credential location..
PASS: no CIFS credentials at outdated location found.
INFO: Checking permission system changes..
INFO: Checking custom role IDs for clashes with new 'PVE' namespace..
PASS: no custom roles defined, so no clash with 'PVE' role ID namespace enforced in Proxmox VE 8
INFO: Checking if LXCFS is running with FUSE3 library, if already upgraded..
SKIP: not yet upgraded, no need to check the FUSE library version LXCFS uses
INFO: Checking node and guest description/note length..
PASS: All node config descriptions fit in the new limit of 64 KiB
PASS: All guest config descriptions fit in the new limit of 8 KiB
INFO: Checking container configs for deprecated lxc.cgroup entries
PASS: No legacy 'lxc.cgroup' keys found.
INFO: Checking if the suite for the Debian security repository is correct..
INFO: Checking for existence of NVIDIA vGPU Manager..
PASS: No NVIDIA vGPU Service found.
INFO: Checking bootloader configuration...
SKIP: not yet upgraded, no need to check the presence of systemd-boot
SKIP: NOTE: Expensive checks, like CT cgroupv2 compat, not performed without '--full' parameter

= SUMMARY =

TOTAL:    29
PASSED:   23
SKIPPED:  5
WARNINGS: 1
FAILURES: 0

ATTENTION: Please check the output for detailed information!

After stopping the instance I get zero warnings and failures.

= SUMMARY =

TOTAL:    29
PASSED:   24
SKIPPED:  5
WARNINGS: 0
FAILURES: 0

Migrate important Virtual Machines and Containers

If you have any VMs and CTs that should be running for the duration of the upgrade, migrate them away from the node that is being upgraded. Below are some migration compatibility rules to keep in mind when planning your cluster upgrade:

  • A migration of a VM or CT from an older version of Proxmox VE to a newer version will always work.
  • A migration from a newer Proxmox VE version to an older version may work, but is generally not supported.
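
For example, to move a guest to another node before upgrading this one (a sketch; VMID/CTID 100 and the target node name pve2 are placeholders for your own cluster):

# Live-migrate a running VM
qm migrate 100 pve2 --online

# Migrate a running container (it is stopped, moved and started again)
pct migrate 100 pve2 --restart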

Let’s make sure that the system is using the latest Proxmox VE 7.4 packages:

apt update && apt dist-upgrade

Confirm the version of your PVE:

# pveversion
pve-manager/7.4-15/a5d2a31e (running kernel: 5.15.108-1-pve)

It should report at least 7.4-15 or newer.

Update Debian Base Repositories to Bookworm

Run the commands below to update all Debian and Proxmox VE repository entries to Bookworm.

sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list

Next, update the Proxmox VE package repository entries (whichever of these files exists on your system):

sed -i 's/bullseye/bookworm/g'  /etc/apt/sources.list.d/pve-enterprise.list
sed -i -e 's/bullseye/bookworm/g' /etc/apt/sources.list.d/pve-install-repo.list 

If using the “No Subscription” repository, it should look like this:

deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription

For PVE Enterprise it will be:

deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise
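
If you do not have a subscription, disable the enterprise entry instead, otherwise apt update will fail with an authentication error. A hedged one-liner, assuming the default file name:

# Comment out the enterprise repository
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list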

If using Ceph, configure the Proxmox VE 8 repository for Ceph Quincy:

# With enterprise subscription
echo "deb https://enterprise.proxmox.com/debian/ceph-quincy bookworm enterprise" > /etc/apt/sources.list.d/ceph.list

# Without subscription
echo "deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription" > /etc/apt/sources.list.d/ceph.list

Once done, update the repositories’ package index:

apt update

Step 4: Upgrade to Debian Bookworm and Proxmox VE 8.0

Install tmux

apt install tmux -y

I would recommend running the upgrade inside a tmux session. The time required for this step depends heavily on the system’s performance, especially the root filesystem’s IOPS and bandwidth. A slow spinning disk can take 60 minutes or more, while on a high-performance server with SSD storage the dist-upgrade can finish in under 5 minutes.

tmux

Once inside the tmux session, run the command below to upgrade the OS packages.

apt dist-upgrade -y
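
If your SSH connection drops while this is running, the upgrade keeps going inside tmux; reconnect to the server and reattach to the session (standard tmux usage, nothing Proxmox-specific):

tmux attach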

If there was a network failure and the upgrade was only partially completed, try to repair the situation with

apt -f install
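
If dpkg itself was interrupted, completing any pending package configuration can also help (a general Debian recovery step, not specific to Proxmox VE):

dpkg --configure -a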

Upon a successful upgrade reboot the system to use the new Proxmox VE kernel.

reboot

Confirm PVE release after upgrade.

# pveversion
pve-manager/8.0.3/bbf3993334bfa916 (running kernel: 6.2.16-3-pve)

Step 5: Use Proxmox VE 8

Wait for the server to come online and confirm the access URL.

$ cat /etc/issue

------------------------------------------------------------------------------

Welcome to the Proxmox Virtual Environment. Please use your web browser to
configure this server - connect to:

  https://192.168.1.3:8006/

------------------------------------------------------------------------------

Open the browser and access Proxmox VE web dashboard using IP address or hostname on port 8006.
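
If the dashboard does not load, confirm that the web services came back up after the reboot (the same systemd units that pve7to8 checked earlier):

systemctl --no-pager status pveproxy.service pvedaemon.service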

(Screenshot: Proxmox VE web login page)

To secure access with Let’s Encrypt SSL, check out our dedicated guide.

From here you can perform normal PVE administration procedures. You can visit the official Proxmox documentation for the complete how-to process, and check out other articles on Proxmox available on our website.

