The usual workflow for deploying image mode updates to a host machine depends on a network connection to reach a registry and pull updates. However, for reasons involving security, location, or even hardware limitations, a system might need an update when remote access isn’t possible. Fortunately, image mode for Red Hat Enterprise Linux is flexible enough to be maintained and updated whether it’s deployed online, offline, or in an air-gapped environment.
Prerequisites
For this article, I prepared my updates on a system running Fedora Workstation 42, and deployed the update to hardware running Red Hat Enterprise Linux 10 (RHEL). However, as long as you’re able to create containers and have a system that supports image mode updates, the exact setup isn’t important.
Here are the requirements for this workflow:
- Podman
- Skopeo
- Access to a registry or a locally stored container
- An external storage device to carry the container that contains the update
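If the online system doesn’t already have the client tools installed, both Podman and Skopeo are available from the standard repositories (the package names below assume a Fedora or RHEL system, like the ones used in this article):
$ sudo dnf install podman skopeo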
Benefits and disadvantages
The method I demonstrate here deploys updates on a device-by-device basis. While this is a perfectly functional method, it’s not ideal for all situations.
Benefits
- The machine you’re updating can be fully offline and air-gapped
Disadvantages
- Can be a time-consuming process when used across many devices
- Requires someone on-site who can deploy the update
If your system can be online, then this approach is not the best option for you. Remote repositories are a great way to deploy updates to a system, and they don’t require as much setup as this method does. If your situation doesn’t match the benefits of this method, and you do have access to remote registries, read How to build, deploy, and manage image mode for RHEL to learn more about managing those systems.
An overview of the process from start to finish:
- Prepare an external storage device on an online system
- Copy the image containing the updates you want to distribute to external storage
- Use the external storage device to apply updates to your offline system
Prepare an external storage device on an online system
One of the challenges of an offline update is obtaining the container you wish to use as the source of the update. In most cases, a container would be made on another system where testing can be done, and then distributed remotely for deployment. However, because this workflow assumes no internet or wireless connection, everything must be done with external storage devices rather than remote registries.
Before you plug an external storage device into your online system, get a report about what storage devices are already connected to your system. Currently, you only care about the NAME column:
$ lsblk
NAME MAJ:MIN SIZE RO TYPE MOUNTPOINTS
zram0 251:0 8G 0 disk [SWAP]
nvme0n1 259:0 476.9G 0 disk
├─nvme0n1p1 259:1 600M 0 part /boot/efi
├─nvme0n1p2 259:2 1G 0 part /boot
└─nvme0n1p3 259:3 475.4G 0 part
Now plug in your external storage and run the same command. You can compare these two outputs to ensure that you know what your system calls your external storage device. In my case, the USB drive being used is named sda, and it has a partition called sda1.
$ lsblk
NAME MAJ:MIN SIZE RO TYPE MOUNTPOINTS
sda 8:0 28.9G 0 disk
└─sda1 8:1 28.9G 0 part
zram0 251:0 8G 0 disk [SWAP]
nvme0n1 259:0 476.9G 0 disk
├─nvme0n1p1 259:1 600M 0 part /boot/efi
├─nvme0n1p2 259:2 1G 0 part /boot
└─nvme0n1p3 259:3 475.4G 0 part
The MOUNTPOINTS column lists the mount points of the partitions on your external storage. If your system mounts external storage automatically, then valid mount points already exist. However, if there are no mount points (as in my example), then you must mount it yourself before you can store anything on the device.
Start with an empty directory. You can either create one or use an empty directory that already exists for this purpose:
$ sudo mkdir /mnt/usb/
Once you’ve got an empty directory, you can mount your device partition. The mount command doesn’t normally provide confirmation (only an error generates output). You can verify success by checking the mount point again (I’ve truncated the output for brevity):
$ sudo mount /dev/sda1 /mnt/usb
$ lsblk
NAME MAJ:MIN SIZE RO TYPE MOUNTPOINTS
sda 8:0 28.9G 0 disk
└─sda1 8:1 28.9G 0 part /mnt/usb
[...]
Your external storage device is now ready for copying files onto it.
Transfer an image to external storage
You can now copy the container to your mounted device. For a container you’ve got stored locally, use the skopeo command (adapt the paths and names of the container for your own environment):
$ sudo skopeo copy --preserve-digests --all \
containers-storage:localhost/rhel-container:latest \
oci://mnt/usb/
For a container stored on a remote registry:
$ sudo skopeo copy --preserve-digests --all \
docker://quay.io/example:latest \
oci://mnt/usb/
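Whichever source you copy from, a quick sanity check is to list the destination directory. Skopeo writes a standard OCI image layout, so (assuming the drive was otherwise empty) you should see a blobs directory alongside the index.json and oci-layout files:
$ ls /mnt/usb
blobs  index.json  oci-layout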
Depending on the size of the container, the copy itself might take a few minutes to complete. Once the container has been copied, unmount and eject the external storage:
$ sudo umount /dev/sda1
$ sudo eject /dev/sda1
Update the container on an offline system
To apply the update, first plug the external storage device into your offline system. This might not be mounted automatically, so use the mkdir and mount commands as needed to locate the external storage and then mount it.
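For example, assuming the offline system also detects the drive as sda1, and using the /var/mnt/usb mount point that appears in the commands that follow:
$ sudo mkdir -p /var/mnt/usb
$ sudo mount /dev/sda1 /var/mnt/usb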
For the sake of stability and reusability, it’s best to copy the container from the external device into the offline system’s local container storage:
$ skopeo copy --preserve-digests --all \
oci://var/mnt/usb \
containers-storage:rhel-update:latest
In this case, the mount point of the external storage is the path given to the oci transport, while the containers-storage destination varies depending on the name and tag you want the container to have. Use Podman to verify that your container is now local:
$ podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
example.io/library/rhel-update latest cdb6d... 1 min 1.48 GB
Deploy the update using bootc:
$ bootc switch --transport containers-storage \
example.io/library/rhel-update:latest
If you weren’t able to copy your container to local storage, then you must use the oci transport and the path to your storage device instead:
$ bootc switch --transport oci /var/mnt/usb
While it might seem to make more sense to use bootc upgrade, it’s the --transport flag in bootc switch that enables you to specify an alternative source for the container. By default, bootc attempts to pull from a registry, because the bootc image builder used a registry to build the original image, and there is no way to specify where an update is located when using bootc upgrade. By using bootc switch and pointing at local container storage, you not only remove the requirement of a remote registry, but also make it possible to deploy future updates from this local container.
After you’ve done this once, you can successfully use bootc upgrade, as long as your local container and the update share the same location. If you want to switch to updates from a remote repository in the future, you’d have to use bootc switch again. To ensure that the update was properly deployed, use the bootc status command:
$ bootc status
Staged image: containers-storage:example.io/library/rhel-update:latest
Digest: sha256:05b1dfa791...
Version: 10.0 (2025-07-07 18:33:19.380715153 UTC)
Booted image: localhost/rhel-intel:base
Digest: sha256:7d6f312e09...
Version: 10.0 (2025-06-23 15:58:12.228704562 UTC)
The output shows your currently booted image, along with any staged changes. The container you used earlier is listed, but note that staged changes do not take effect until the next reboot. The output also confirms that updates will be pulled from your container storage.
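When you’re ready to apply the staged changes, reboot the system:
$ sudo systemctl reboot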
After a reboot, you can verify that you’ve booted into the correct image:
$ bootc status
Booted image: containers-storage:example.io/library/rhel-update:latest
Digest: sha256:05b1dfa791...
Version: 10.0 (2025-07-07 18:33:19.380715153 UTC)
Rollback image: localhost/rhel-intel:base
Digest: sha256:7d6f312e09...
Version: 10.0 (2025-06-23 15:58:12.228704562 UTC)
The Booted image is your updated image, and the Rollback image is your previous image. You’ve successfully performed an offline image mode update.
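If the new image misbehaves, that rollback image remains available. A minimal recovery (staging the previous deployment, which takes effect on the next boot) looks like this:
$ sudo bootc rollback
$ sudo systemctl reboot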
If you’re not using containers yet, and need help setting one up, read Image mode for Red Hat Enterprise Linux: A quick start guide.