Migrating GitLab from Bare-Metal to Docker (with reverse proxy and utilizing acme.sh)

I am using GitLab Enterprise Edition (Free tier) for my coding projects. Recently some of my rented servers got more expensive, so I wanted to check what I could consolidate to save a few bucks. While doing some research I noticed that GitLab can easily be run as a Docker container.

Previously my GitLab was hosted on a vServer used solely for collaboration (GitLab, Nextcloud and a few Docker containers alongside). The reason for this: on one hand GitLab is a quite massive compilation of software, and on the other hand it also has a quite big resource footprint.

Since then GitLab has gotten much better regarding resource usage, and the move to Docker makes it much easier to run GitLab on a shared system (i.e. with other services running as well), because there are no compatibility issues between different software configurations on the same host.

Downsides

Docker itself adds a small layer of complexity. You cannot simply run gitlab-ctl directly on the system, you have to maintain a few configuration options in two different places, and so on. In the end these are minor luxury problems. Docker of course also adds a little overhead.

That being said, there are no real downsides if you have ever used Docker before. There is one more thing to remember though: you cannot use the same port twice on the host.

The positive thing is: you are adding a layer of isolation. If someone breaks into your GitLab instance, the rest of the server is still quite secure.

The plan: GitLab behind a reverse proxy

There are multiple ways to run GitLab in a Docker container on any Docker host. You can give GitLab its own IP address and it will behave like a dedicated server. You can also give GitLab different ports (not 22, 80, 443), or simply not use those ports for anything else on the host.

In our case I am configuring the container on a server that also hosts other things – including web services. Ports 22, 80 and 443 are already in use. The plan is to give GitLab a different SSH port – which is not an issue, as only I will be using it for push/pull. The web frontend, however, must run on the default ports. To achieve this, we will configure the already running Apache webserver to reverse-proxy the GitLab hostnames to the Docker container.

This guide will also work without a reverse proxy. You will only have to adapt a few configurations.

Preparation: Updating GitLab

This guide assumes you are already running a server with a recent Docker Engine (including Docker Compose) installed and Apache2 as a webserver with mod_proxy. It will also work with nginx, using adapted configurations. acme.sh will be used to obtain a certificate. As always, the commands are based on a Debian server; other distros might differ slightly.

Before starting the migration, we should make sure that we are running the most recent version of GitLab. If not, update it! Source and destination GitLab must run the exact same version, and the newer the better, because of already fixed issues.
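To compare versions, you can check the package on the old omnibus installation – a quick sketch for a Debian-based server running the Enterprise Edition (use gitlab-ce instead of gitlab-ee for Community Edition):

# Show the currently installed version
sudo dpkg -l gitlab-ee

# Update to the latest version available in the repository if needed
sudo apt update && sudo apt install gitlab-ee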

Step 1: Configuring the docker container

On your new server, create a folder that holds your docker-compose.yml

mkdir -p /opt/dockerapps/gitlab
cd /opt/dockerapps/gitlab
nano docker-compose.yml # this will open the nano editor to configure the compose file

Next, copy this content into the nano editor and set up the templated fields accordingly. Especially check the version (18.1.1-ee in this example – might be -ce if you’re migrating a Community Edition GitLab and likely a different version).

Also check the ports: they must not be used by anything else. In my case, I am using 9980, 9943 and 9922. GitLab will run under the FQDN git.example.com.
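If you are unsure whether these ports are still free, a quick check on the host (using ss from iproute2) might look like this:

# No output means nothing is listening on these ports yet
ss -tlnp | grep -E ':(9980|9943|9922)\s'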

Also remember to update the volume bind mount paths. I am a fan of absolute paths, don’t ask me why.

services:
  gitlab:
    image: gitlab/gitlab-ee:18.1.1-ee.0
    container_name: gitlab
    restart: always
    hostname: 'git.example.com'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        # Add any other gitlab.rb settings that are required during the initialization phase (hostname, ports, ...) here, each on its own line.
        # The gitlab.rb file will take over after initialization, so configure it there as well!
        external_url 'https://git.example.com'
        gitlab_rails['gitlab_shell_ssh_port'] = 9922
        nginx['ssl_certificate'] = "/etc/gitlab/ssl/gitlab-ssl.pem"
        nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/gitlab-ssl.key"
        letsencrypt['enable'] = false
    ports:
      - '127.0.0.1:9980:80'
      - '127.0.0.1:9943:443'
      - '9922:22'
    volumes:
      - '/opt/dockerapps/gitlab/config:/etc/gitlab'
      - '/opt/dockerapps/gitlab/logs:/var/log/gitlab'
      - '/opt/dockerapps/gitlab/data:/var/opt/gitlab'
      - '/opt/dockerapps/gitlab/ssl.pem:/etc/gitlab/ssl/gitlab-ssl.pem:ro'
      - '/opt/dockerapps/gitlab/ssl.key:/etc/gitlab/ssl/gitlab-ssl.key:ro'
    shm_size: '256m'
    healthcheck:
      disable: true

Note: We are not setting any port in “hostname” or “external_url” because we are using a reverse proxy. If you are choosing the non-proxy custom port approach, add the port to “external_url”, e.g. external_url 'https://git.example.com:9943'.

Save the file using Ctrl+X (confirming the save) and spin GitLab up for the first time:

docker compose up

The startup will take a few minutes, depending on your server's resources.

When the startup is done, let it settle for a few minutes and then stop it again by pressing Ctrl+C.

You should now see the three folders “config”, “logs” and “data” in your folder:

root@hostname:/opt/dockerapps/gitlab# ls
config data docker-compose.yml logs

Finally, we only have to remove the freshly generated Redis cache and secrets files (they will be replaced by the ones from the old server later):

rm /opt/dockerapps/gitlab/data/redis/dump.rdb
rm /opt/dockerapps/gitlab/config/gitlab-secrets.json

Step 2: Configuring the reverse proxy

This step can be partially skipped if you are not using a reverse proxy. The tutorial assumes Apache2.

For this step, your DNS entries should already point to the new IP. You can skip the additional hostnames if you are not using the corresponding features – or add more.
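To double-check the DNS records, something like this does the trick (dig is part of Debian's dnsutils/bind9-dnsutils package):

# Both should return the new server's public IP
dig +short A git.example.com
dig +short A registry.git.example.com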

First, we create a dummy folder

mkdir -p /var/www/proxied

This folder should not contain any files yet, but it must exist for the configuration to work. Now we will configure Apache. Create a config file in sites-available:

nano /etc/apache2/sites-available/gitlab-proxy.conf

Copy this content into the file and save it – again adapting hostnames:

<VirtualHost *:80>
    #Adapt the hostname
    ServerName git.example.com
    
    DocumentRoot "/var/www/proxied"
    
    Redirect permanent / https://git.example.com/
    
    <Directory "/var/www/proxied">
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:443>
    #Adapt the hostname
    ServerName git.example.com

    #Adapt the wildcard hostname or add explicit hostnames like registry, ... Remove if unneeded.
    ServerAlias *.git.example.com
    
    #Create a quite secure SSL configuration. Adapt to your needs or leave as it is.
    SSLEngine on
    SSLProtocol all -SSLv2 -SSLv3 -TLSv1.0 -TLSv1.1
    SSLCipherSuite TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:EECDH+AESGCM:EDH+AESGCM

    SSLCertificateFile /opt/dockerapps/gitlab/ssl.pem
    SSLCertificateKeyFile /opt/dockerapps/gitlab/ssl.key

    HostnameLookups Off
    UseCanonicalName Off
    ServerSignature Off

    ProxyRequests Off
    ProxyPreserveHost On
    
    ProxyTimeout 900
    
    #We enable SSLProxy but we don't check the certificate on backend
    #because we are using localhost and the public certificate of the real domain.
    SSLProxyEngine On
    SSLProxyCheckPeerCN Off
    SSLProxyCheckPeerName Off
    
    #Do NOT proxy ACME - We handle this locally
    ProxyPass /.well-known !
    #Proxy anything else - Adapt the port 9943 here to your setup
    ProxyPass / https://127.0.0.1:9943/ timeout=900 keepalive=On

    DocumentRoot "/var/www/proxied"

    <Directory "/var/www/proxied">
        Require all granted
    </Directory>
</VirtualHost>
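If the proxy modules and the new site are not enabled yet, do that now using Debian's a2enmod/a2ensite helpers:

a2enmod ssl proxy proxy_http
a2ensite gitlab-proxy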

We need to create a temporary dummy certificate so that Apache can start serving HTTPS. There are other ways, but this is the easiest one to describe here:

openssl req -new -newkey rsa:2048 -days 7 -nodes -x509 \
    -subj "/C=XX/ST=Dummy/L=Dummy/O=Dummy/CN=git.example.com" \
    -keyout /opt/dockerapps/gitlab/ssl.key  -out /opt/dockerapps/gitlab/ssl.pem

Afterwards, reload the webserver:

service apache2 reload

You should now be able to browse to https://git.example.com and receive a 502 error page after confirming a certificate warning. Verify this before proceeding!

The next step is requesting a certificate from Let's Encrypt (or any other ACME CA) using acme.sh. Assuming this tool is already installed and configured on your system, adapt and run this command. Ensure you are listing all relevant subdomains, and make sure that all of these domains already point to the new server and are configured in Apache:

acme.sh --issue -d git.example.com -d registry.git.example.com --webroot /var/www/proxied/

When the certificate has been issued successfully, you can instruct acme.sh to install it into the desired location and automatically reload the services:

acme.sh --install-cert -d git.example.com --key-file /opt/dockerapps/gitlab/ssl.key \
    --fullchain-file /opt/dockerapps/gitlab/ssl.pem \
    --reloadcmd "service apache2 reload && cd /opt/dockerapps/gitlab/ && docker compose exec gitlab gitlab-ctl restart nginx"

You will now get an error at the reload stage because the docker container is not running. This is expected.

Step 3: Backing up and shutting down the old server

Beginning with this step, your old GitLab instance will be down (if it is not already, due to the DNS change), and the migration will take some time – depending on the size of your instance.

Create a snapshot or full backup before proceeding

On the source server, log in to GitLab Admin panel and disable periodic background jobs:

On the left sidebar, at the bottom, select Admin. Then,
on the left sidebar, select Monitoring > Background jobs.
Under the Sidekiq dashboard, select the Cron tab and then Disable All.

Next, wait for Sidekiq jobs to finish:
Under the already open Sidekiq dashboard, select Queues and then Live Poll. Wait for Busy and Enqueued to drop to 0. These queues contain work that has been submitted by your users; shutting down before these jobs complete may cause the work to be lost. Make note of the numbers shown in the Sidekiq dashboard for post-migration verification.

You can probably skip these steps, if you are confident that nothing is pending.

Next, save the Redis cache to disk and shut down most GitLab services by running the following command:

sudo /opt/gitlab/embedded/bin/redis-cli -s /var/opt/gitlab/redis/redis.socket save && \
    sudo gitlab-ctl stop && sudo gitlab-ctl start postgresql && sudo gitlab-ctl start gitaly

GitLab is now running in a very minimal configuration, just enough to create a backup. To do this, execute

sudo gitlab-backup create

This will take some time. When ready, the backup can be found in GitLab’s backup folder – by default this is /var/opt/gitlab/backups
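A quick look into that folder should show a fresh backup archive:

sudo ls -lh /var/opt/gitlab/backups/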

Before proceeding, we will make sure that GitLab cannot start anymore. To do this, edit /etc/gitlab/gitlab.rb and add this to the very bottom:

### Migration overrides:
alertmanager['enable'] = false
gitlab_exporter['enable'] = false
gitlab_pages['enable'] = false
gitlab_workhorse['enable'] = false
grafana['enable'] = false
logrotate['enable'] = false
gitlab_rails['incoming_email_enabled'] = false
nginx['enable'] = false
node_exporter['enable'] = false
postgres_exporter['enable'] = false
postgresql['enable'] = false
prometheus['enable'] = false
puma['enable'] = false
redis['enable'] = false
redis_exporter['enable'] = false
registry['enable'] = false
sidekiq['enable'] = false

And then run gitlab-ctl reconfigure

Now you must transfer the Redis dump, the secrets file and the backup to your new server, for example using scp (run these commands on the old server one by one and adapt the hostname).
The example uses root to authenticate – which hopefully does not work on your servers. Use your own username instead if applicable.

sudo scp /var/opt/gitlab/redis/dump.rdb \
    root@git.example.com:/opt/dockerapps/gitlab/data/redis/dump.rdb

sudo scp /etc/gitlab/gitlab-secrets.json \
    root@git.example.com:/opt/dockerapps/gitlab/config/gitlab-secrets.json

#You might want to explicitly select the specific file to save time, space and bandwidth
sudo scp /var/opt/gitlab/backups/*.tar \
    root@git.example.com:/opt/dockerapps/gitlab/data/backups/
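If you want to make sure nothing got corrupted in transit, you can optionally compare checksums on both machines – the hashes must match:

# On the old server
sudo sha256sum /var/opt/gitlab/backups/*.tar
# On the new server
sha256sum /opt/dockerapps/gitlab/data/backups/*.tar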

You can also refer to the official docs: https://docs.gitlab.com/administration/backup_restore/migrate_to_new_server/

Step 4: Configuring the all-new Docker-GitLab

Now that we have backed up and shut down the old instance and transferred everything to the new server, we can continue by configuring the new GitLab.

This is probably the most complicated part, and you are largely on your own with it. While copying the old gitlab.rb file (without the modifications above) to the new location could technically work, it's best to review the old file and reconfigure the new one cleanly. Remember: how old is your old configuration? Did you always check the upgrade guides regarding gitlab.rb? 🙂

I will only walk through the important and required configuration. Most importantly, make sure that any setting that also appears in the GITLAB_OMNIBUS_CONFIG block of docker-compose.yml is set to the same value in gitlab.rb.

Open the config file by running nano /opt/dockerapps/gitlab/config/gitlab.rb (or open the file with Notepad++ or any other preferred editor).

Then find the relevant lines by searching for the variable names, remove the hash sign at the beginning of each line and adjust the values like in the following examples (and according to your further needs):

external_url 'https://git.example.com'
[...]
gitlab_rails['gitlab_shell_ssh_port'] = 9922
[...]
nginx['ssl_certificate'] = "/etc/gitlab/ssl/gitlab-ssl.pem"
nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/gitlab-ssl.key"
[...]
letsencrypt['enable'] = false

If using the container registry also change these:

registry_external_url 'https://registry.git.example.com'

#The following lines are crucial for the registry but have to be ADDED to gitlab.rb.
#They are not included by default. Add them below the commented-out line
# registry_nginx['listen_port'] = 5050

registry_nginx['ssl_certificate'] = "/etc/gitlab/ssl/gitlab-ssl.pem"
registry_nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/gitlab-ssl.key"

Step 5: Restoring the backup to the new GitLab instance

Now we can finally spin up the GitLab Docker container and import the data back into it. Make sure that the copied files are at their desired locations (gitlab-secrets.json, the Redis dump.rdb and the backup archive), then spin up the container on the target server:

cd /opt/dockerapps/gitlab
docker compose up -d

GitLab will now start up in the background. It will take a few minutes even though the command returns immediately. You can watch the startup by running docker compose logs -f

When GitLab is started up, we can restore the backup by running the following commands one by one:

# Stop the processes that are connected to the database
docker compose exec gitlab gitlab-ctl stop puma
docker compose exec gitlab gitlab-ctl stop sidekiq

# Verify that the processes are all down before continuing - check puma and sidekiq
docker compose exec gitlab gitlab-ctl status

# Start the restore process. Use the correct backup filename, omitting the _gitlab_backup.tar ending!
docker compose exec gitlab gitlab-backup restore BACKUP=1751836179_2025_07_06_18.1.1-ee

GitLab will now show a warning before the restore process is started. Confirm it with “yes”. There will be a few more prompts you have to confirm during the process, so keep an eye on it.

When the restore is done, stop GitLab and start it again in foreground mode. This makes it easier to spot critical errors. It is expected that there will be warnings and even a few errors during this startup; many of these are actually normal.

docker compose down
docker compose up

After GitLab has started up successfully, check reachability on the web and give it a few minutes to settle.
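You can also run GitLab's built-in consistency checks inside the container while it is up; both rake tasks ship with the omnibus package:

docker compose exec gitlab gitlab-rake gitlab:check SANITIZE=true
docker compose exec gitlab gitlab-rake gitlab:doctor:secrets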

Afterwards hit Ctrl+C to shut GitLab down again. We have to remove the following two lines from docker-compose.yml:

healthcheck:
  disable: true

Now we can finally start GitLab up and start using it. Remember to change the remotes of your projects if the SSH port has changed (see the example below).

docker compose up -d

After a few last minutes, GitLab should be up and running.
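As mentioned above, the remotes of existing working copies have to point to the new SSH port. Updating one could look like this – the group/project path is just a placeholder:

git remote set-url origin ssh://git@git.example.com:9922/mygroup/myproject.git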

As a last step, go back to the Admin Panel and re-enable all background jobs.

Time for coffee

…or an energy drink. That was quite some work, right? You should now have successfully migrated GitLab from a bare-metal installation to a modern Docker container.

Did everything work for you? Are you using any features that required additional configuration?

The featured image of this blog post has been generated by AI.

Migrate Proxmox Backup Server

I was recently confronted with the issue that I needed to move a running Proxmox Backup Server to a new host. Along with the migration I also needed to update the hostname to match the new naming concept. However, it does not really matter whether you keep the hostname or not, as fortunately PBS does not really care about its name.

Step 1: Research

As a senior admin I know you can usually find out anything you don't know yet using Google (or any other search engine out there). This time, however, I had a hard time. It looks like virtually nobody has tried to migrate a PBS yet – or maybe the fact that it is so easy leads to nobody writing the process down.

Okay, let’s do it on our own!

The Facts

Currently PBS is running on an “old” Proxmox VE host – and I mean on the physical host itself. We are migrating this PBS installation to a dedicated host, a VM in Hetzner's cloud this time. As the PBS only takes load occasionally and the backup performance does not really matter for our project, I chose a CX21 VM with 2 vCores and 2 GB of RAM. The 40 GB of disk space are enough for the OS and PBS's own files. Additionally, a Hetzner Storage Box is available to be used as the actual datastore.

In this case the chunks (the actual backed-up data) are already stored on the Storage Box. This means that in this blog post I will not migrate the actual data, but if you need to, it is as simple as rsyncing the data (rsync -azh source target) from one place to the other. Just take care that the mount path is the same on the new server, and stop the services on both servers before migrating!
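Such a sync could look roughly like this (placeholder paths and IP; run it on the old server after stopping the services on both sides as described in Step 2):

# Trailing slashes keep the directory layout identical on the target
rsync -azh --progress /mnt/hetzner-sb1/ root@10.10.20.20:/mnt/hetzner-sb1/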

Step 2: Preparation

The Cloud VM

First of all, I booked a CX21 cloud VM from Hetzner located in the same datacenter the Storage Box resides in. This is very important! Using a VM in FSN1 with a Storage Box in HEL1 gives very bad performance. I mean performance like driving right behind a farm tractor on a highway.

I chose Debian 11 (and you should do so, too). Right after the setup completed, I logged in via SSH and edited /etc/hostname to contain our FQDN – don't forget to add a DNS entry!

As mentioned, our chunk data is stored on a Hetzner Storage Box. I copied the respective line from the old server's fstab to the new one and created the mount path, mkdir /mnt/hetzner-sb1 in my case. For reference, this is what the fstab line looks like now (username/password redacted):

//u1234567-sub1.your-storagebox.de/u1234567-sub1 /mnt/hetzner-sb1 cifs username=u1234567-sub1,password=GeneratedPaSsWoRdFromRobot,uid=34,noforceuid,gid=34,noforcegid 0 0

Then I mounted the box using mount /mnt/hetzner-sb1 and verified it by running ls /mnt/hetzner-sb1.

Install Proxmox Backup Server software

We need to do a clean install as documented here. The commands are simple and reliable:

Trust the repository key: wget https://enterprise.proxmox.com/debian/proxmox-release-bullseye.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg

Add the repo to apt sources: echo "deb http://download.proxmox.com/debian/pbs bullseye pbs-no-subscription">/etc/apt/sources.list.d/pbs.list

Download package lists: apt update

Install the software: apt-get install proxmox-backup

You could also install proxmox-backup-server instead of proxmox-backup. The proxmox-backup package contains all recommended extra packages, while the other one is a minimal setup.

Disable 2FA temporarily

If you can, you should remove the 2FA configuration from the root user before migrating, or create a new administrative user without 2FA if WebAuthn is used. Otherwise you might not be able to log in anymore.

If you are changing the hostname along with the migration like I do, you are required to deconfigure WebAuthn and reconfigure it after the migration, as the hostname is the "key" for WebAuthn credentials on the token.

Stopping services on old and new server

It is important that no backups or other jobs are running while you are migrating. Run these commands on both servers:

service proxmox-backup stop

service proxmox-backup-proxy stop

Make sure that no job or task is running before stopping the services.

Step 3: Migrate config and metadata

Proxmox Backup Server stores its configuration in /etc/proxmox-backup and metadata (RRD stats etc.) in /var/lib/proxmox-backup. We need to sync both folders to the new host (assuming the new host has the IP 10.10.20.20) by running rsync on the old host:

rsync -avzh /etc/proxmox-backup/* root@10.10.20.20:/etc/proxmox-backup/

rsync -avzh /var/lib/proxmox-backup/* root@10.10.20.20:/var/lib/proxmox-backup/
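As a quick sanity check you can verify that the datastore definition made it over – datastore.cfg is one of the files living in that config folder:

cat /etc/proxmox-backup/datastore.cfg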

Step 4: Reboot the new server

To make sure all configs are applied and all services are running smoothly, we will now reboot the new server and verify our installation and migration.

After the reboot, visit https://10.10.20.20:8007 (use the IP-Address for now)

You should see your datastore on the dashboard now – including the historical usage graph. If so, select the datastore in the left-hand menu and open the Content tab. You should see all backups there. It might look a bit like this:

Great! The worst part is over. If you preserved the hostname, you’re done and can proceed to step 7.

Step 5: Configure SSL

This step does not apply if you keep the hostname while migrating.

Now you will need to set up Let's Encrypt again (or use your preferred method). You can do this at Configuration -> Certificates. Simply click the Add button and enter your FQDN there:

You may also want to remove the migrated old certificate. Then click the “Order certificates now” button. If everything runs smoothly, you should now have a valid SSL-certificate:

Verify this by pointing your browser to https://your-hostname.your-domain.tld:8007/

If the certificate is not used immediately, restart the proxmox-backup-proxy service.

Step 6: Configuring the clients

This step does not apply if you keep the hostname while migrating.

Now you have to point your clients to the new server. For manual backups using proxmox-backup-client, you need to edit the cronjobs or backup scripts. For Proxmox VE you can simply edit /etc/pve/storage.cfg (for example using nano) and change the “server” directive of the storage block:
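For reference, a PBS storage entry in /etc/pve/storage.cfg looks roughly like this (IDs and hostname are placeholders). Note that if the certificate changed, the fingerprint line has to be updated as well:

pbs: pbs-backup
        datastore backup-store1
        server pbs.your-domain.tld
        content backup
        username root@pam
        fingerprint <sha256 fingerprint of the PBS certificate>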

Proxmox VE will reload the changed file automatically after a few seconds.

Step 7: Testing

You should try to back up a client to see that everything works. In my case I ran a manual backup from my Proxmox VE server.
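If you also back up hosts directly with proxmox-backup-client (as mentioned in Step 6), a quick test run against the new server could look like this – the repository string is a placeholder:

proxmox-backup-client backup root.pxar:/ \
    --repository root@pam@pbs.your-domain.tld:backup-store1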

While the backup is running, you will also see it in the "Tasks" badge in the top right corner of your PBS web UI.

Done!

You should monitor your upcoming backups and verify them using PBS's built-in verify jobs to make sure all chunks are accessible, especially if you also migrated the chunk storage.

Also don't forget to re-enable 2FA if you disabled it before…

…and if you changed the hostname, you have to reconfigure WebAuthn under Configuration -> Other (“Other” is located at the top when you click on Configuration – they did a good job of hiding that!).

I hope you were able to follow my steps to successfully migrate your PBS installation in a rush.
Tell me, did you encounter any issues?