Upgrading GitLab (FREE SELF)

Upgrading GitLab is a relatively straightforward process, but the complexity
can increase based on the installation method you have used, how old your
GitLab version is, whether you're upgrading to a major version, and so on.

Make sure to read the whole page as it contains information related to every upgrade method.

The maintenance policy documentation
has additional information about upgrading, including:

  • How to interpret GitLab product versioning.
  • Recommendations on what release to run.
  • How we use patch and security patch releases.
  • When we backport code changes.

Upgrade based on installation method

Depending on the installation method and your GitLab version, there are multiple
official ways to update GitLab:

  • Linux packages (Omnibus GitLab)
  • Source installations
  • Docker installations
  • Kubernetes (Helm) installations

Linux packages (Omnibus GitLab)

The package upgrade guide
contains the steps needed to update a package installed by official GitLab
repositories.

There are also instructions for when you want to
update to a specific version.

Installation from source

  • Upgrading Community Edition and Enterprise Edition from source —
    The guidelines for upgrading Community Edition and Enterprise Edition from source.
  • Patch versions guide includes the steps needed for a
    patch version, such as 13.2.0 to 13.2.1, and applies to both Community and Enterprise
    Editions.

In the past we used separate documents for the upgrading instructions, but we
have switched to using a single document. The old upgrading guidelines
can still be found in the Git repository:

  • Old upgrading guidelines for Community Edition
  • Old upgrading guidelines for Enterprise Edition

Installation using Docker

GitLab provides official Docker images for both Community and Enterprise
editions, and they are based on the Omnibus package. See how to
install GitLab using Docker.

Installation using Helm

GitLab can be deployed into a Kubernetes cluster using Helm.
Instructions on how to update a cloud-native deployment are in
a separate document.

Use the version mapping
from the chart version to GitLab version to determine the upgrade path.

Plan your upgrade

See the guide to plan your GitLab upgrade.

Checking for background migrations before upgrading

Certain releases may require different migrations to be
finished before you update to the newer version.

Batched migrations are a migration type available in GitLab 14.0 and later.
Background migrations and batched migrations are not the same, so you should check that both are
complete before updating.

Decrease the time required to complete these migrations by increasing the number of
Sidekiq workers
that can process jobs in the background_migration queue.
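
For Omnibus installations, one way to do this is to dedicate an extra Sidekiq process to that queue. The following is a sketch of /etc/gitlab/gitlab.rb using the sidekiq['queue_groups'] setting described in the multiple Sidekiq processes documentation; adjust the number of processes to your hardware:

sidekiq['queue_groups'] = [
  'background_migration', # extra process dedicated to the background_migration queue
  '*'                     # process that keeps handling all other queues
]

Apply the change with sudo gitlab-ctl reconfigure.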

Background migrations

Pending migrations

For Omnibus installations:

sudo gitlab-rails runner -e production 'puts Gitlab::BackgroundMigration.remaining'
sudo gitlab-rails runner -e production 'puts Gitlab::Database::BackgroundMigration::BatchedMigration.queued.count'

For installations from source:

cd /home/git/gitlab
sudo -u git -H bundle exec rails runner -e production 'puts Gitlab::BackgroundMigration.remaining'
sudo -u git -H bundle exec rails runner -e production 'puts Gitlab::Database::BackgroundMigration::BatchedMigration.queued.count'

Failed migrations

For Omnibus installations:

For GitLab 14.0-14.9:

sudo gitlab-rails runner -e production 'puts Gitlab::Database::BackgroundMigration::BatchedMigration.failed.count'

For GitLab 14.10 and later:

sudo gitlab-rails runner -e production 'puts Gitlab::Database::BackgroundMigration::BatchedMigration.with_status(:failed).count'

For installations from source:

For GitLab 14.0-14.9:

cd /home/git/gitlab
sudo -u git -H bundle exec rails runner -e production 'puts Gitlab::Database::BackgroundMigration::BatchedMigration.failed.count'

For GitLab 14.10 and later:

cd /home/git/gitlab
sudo -u git -H bundle exec rails runner -e production 'puts Gitlab::Database::BackgroundMigration::BatchedMigration.with_status(:failed).count'

Batched background migrations

GitLab 14.0 introduced batched background migrations.

Some installations may need to run GitLab 14.0 for at least a day to complete the database changes introduced by that upgrade.

Check the status of batched background migrations

To check the status of batched background migrations:

  1. On the top bar, select Menu > Admin.

  2. On the left sidebar, select Monitoring > Background Migrations.

    queued batched background migrations table

All migrations must have a Finished status before you upgrade GitLab.

The status of batched background migrations can also be queried directly in the database.

  1. Log into a psql prompt according to the directions for your instance’s installation method
    (for example, sudo gitlab-psql for Omnibus installations).

  2. Run the following query in the psql session to see details on incomplete batched background migrations:

    select job_class_name, table_name, column_name, job_arguments from batched_background_migrations where status <> 3;

If the migrations are not finished and you try to update to a later version,
GitLab prompts you with an error:

Expected batched background migration for the given configuration to be marked as 'finished', but it is 'active':

If you get this error, check the batched background migration options to complete the upgrade.
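
If you need to force a specific batched migration to completion, GitLab also ships a Rake task for this. The following is a sketch for Omnibus installations (available in GitLab 14.x; the four arguments correspond to the job_class_name, table_name, column_name, and job_arguments values returned by the query above, so substitute your own values and check your version's documentation first):

sudo gitlab-rake "gitlab:background_migrations:finalize[<job_class_name>,<table_name>,<column_name>,<job_arguments>]"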

What do you do if your background migrations are stuck?

WARNING:
The following operations can disrupt your GitLab performance. They run a number of Sidekiq jobs that perform various database or file updates.

Background migrations remain in the Sidekiq queue

Run the following check. If it returns non-zero and the count does not decrease over time, follow the rest of the steps in this section.

# For Omnibus installations:
sudo gitlab-rails runner -e production 'puts Gitlab::BackgroundMigration.remaining'

# For installations from source:
cd /home/git/gitlab
sudo -u git -H bundle exec rails runner -e production 'puts Gitlab::BackgroundMigration.remaining'

It is safe to re-execute the following commands, especially if you have 1000+ pending jobs which would likely overflow your runtime memory.

For Omnibus installations

# Start the rails console
sudo gitlab-rails c

# Execute the following in the rails console
scheduled_queue = Sidekiq::ScheduledSet.new
pending_job_classes = scheduled_queue.select { |job| job["class"] == "BackgroundMigrationWorker" }.map { |job| job["args"].first }.uniq
pending_job_classes.each { |job_class| Gitlab::BackgroundMigration.steal(job_class) }

For installations from source

# Start the rails console
sudo -u git -H bundle exec rails console -e production

# Execute the following in the rails console
scheduled_queue = Sidekiq::ScheduledSet.new
pending_job_classes = scheduled_queue.select { |job| job["class"] == "BackgroundMigrationWorker" }.map { |job| job["args"].first }.uniq
pending_job_classes.each { |job_class| Gitlab::BackgroundMigration.steal(job_class) }

Background migrations stuck in ‘pending’ state

GitLab 13.6 introduced an issue where a background migration named BackfillJiraTrackerDeploymentType2 can be permanently stuck in a pending state across upgrades. To clean up this stuck migration, see the 13.6.0 version-specific instructions.

GitLab 14.2 introduced an issue where a background migration named BackfillDraftStatusOnMergeRequests can be permanently stuck in a pending state across upgrades when the instance lacks records that match the migration’s target. To clean up this stuck migration, see the 14.2.0 version-specific instructions.

GitLab 14.4 introduced an issue where a background migration named PopulateTopicsTotalProjectsCountCache can be permanently stuck in a pending state across upgrades when the instance lacks records that match the migration’s target. To clean up this stuck migration, see the 14.4.0 version-specific instructions.

GitLab 14.5 introduced an issue where a background migration named UpdateVulnerabilityOccurrencesLocation can be permanently stuck in a pending state across upgrades when the instance lacks records that match the migration’s target. To clean up this stuck migration, see the 14.5.0 version-specific instructions.

GitLab 14.8 introduced an issue where a background migration named PopulateTopicsNonPrivateProjectsCount can be permanently stuck in a pending state across upgrades. To clean up this stuck migration, see the 14.8.0 version-specific instructions.

GitLab 14.9 introduced an issue where a background migration named ResetDuplicateCiRunnersTokenValuesOnProjects can be permanently stuck in a pending state across upgrades when the instance lacks records that match the migration’s target. To clean up this stuck migration, see the 14.9.0 version-specific instructions.

For other background migrations stuck in pending, run the following check. If it returns non-zero and the count does not decrease over time, follow the rest of the steps in this section.

# For Omnibus installations:
sudo gitlab-rails runner -e production 'puts Gitlab::Database::BackgroundMigrationJob.pending.count'

# For installations from source:
cd /home/git/gitlab
sudo -u git -H bundle exec rails runner -e production 'puts Gitlab::Database::BackgroundMigrationJob.pending.count'

It is safe to re-attempt these migrations to clear them out from a pending status:

For Omnibus installations

# Start the rails console
sudo gitlab-rails c

# Execute the following in the rails console
Gitlab::Database::BackgroundMigrationJob.pending.find_each do |job|
  puts "Running pending job '#{job.class_name}' with arguments #{job.arguments}"
  result = Gitlab::BackgroundMigration.perform(job.class_name, job.arguments)
  puts "Result: #{result}"
end

For installations from source

# Start the rails console
sudo -u git -H bundle exec rails console -e production

# Execute the following in the rails console
Gitlab::Database::BackgroundMigrationJob.pending.find_each do |job|
  puts "Running pending job '#{job.class_name}' with arguments #{job.arguments}"
  result = Gitlab::BackgroundMigration.perform(job.class_name, job.arguments)
  puts "Result: #{result}"
end

Batched migrations (GitLab 14.0 and later)

See troubleshooting batched background migrations.

Dealing with running CI/CD pipelines and jobs

If you upgrade your GitLab instance while the GitLab Runner is processing jobs, the trace updates fail. When GitLab is back online, the trace updates should self-heal. However, depending on the error, the GitLab Runner either retries, or eventually terminates, job handling.

As for the artifacts, the GitLab Runner attempts to upload them three times, after which the job eventually fails.

To address the above two scenarios, it is advised to do the following prior to upgrading:

  1. Plan your maintenance.

  2. Pause your runners or block new jobs from starting by adding the following to your /etc/gitlab/gitlab.rb:

    nginx['custom_gitlab_server_config'] = "location /api/v4/jobs/request {\n deny all;\n return 503;\n}\n"

    And reconfigure GitLab with:

    sudo gitlab-ctl reconfigure
  3. Wait until all jobs are finished.

  4. Upgrade GitLab.

  5. Update GitLab Runner to the same version
    as your GitLab version. Both versions should be the same.

  6. Unpause your runners and unblock new jobs from starting by reverting the previous /etc/gitlab/gitlab.rb change.

Checking for pending Advanced Search migrations (PREMIUM SELF)

This section is only applicable if you have enabled the Elasticsearch integration (PREMIUM SELF).

Major releases require all Advanced Search migrations
to be finished from the most recent minor release in your current version
before the major version upgrade. You can find pending migrations by
running the following command:

For Omnibus installations

sudo gitlab-rake gitlab:elastic:list_pending_migrations

For installations from source

cd /home/git/gitlab
sudo -u git -H bundle exec rake gitlab:elastic:list_pending_migrations

What do you do if your Advanced Search migrations are stuck?

In GitLab 15.0, an Advanced Search migration named DeleteOrphanedCommit can be permanently stuck
in a pending state across upgrades. This issue
is corrected in GitLab 15.1.

If you are a self-managed customer who uses GitLab 15.0 with Advanced Search, you will experience performance degradation.
To clean up the migration, upgrade to 15.1 or later.

For other Advanced Search migrations stuck in pending, see how to retry a halted migration.

What do you do for the error Elasticsearch version not compatible?

Confirm that your version of Elasticsearch or OpenSearch is compatible with your version of GitLab.

Upgrading without downtime

Read how to upgrade without downtime.

Upgrading to a new major version

Upgrading the major version requires more attention.
Backward-incompatible changes and migrations are reserved for major versions.
Follow the directions carefully as we
cannot guarantee that upgrading between major versions is seamless.

A major upgrade requires the following steps:

  1. Start by identifying a supported upgrade path. This is essential for a successful major version upgrade.
  2. Upgrade to the latest minor version of the preceding major version.
  3. Upgrade to the "dot zero" release of the next major version (X.0.Z).
  4. Optional. Follow the upgrade path, and proceed with upgrading to newer releases of that major version.

It’s also important to ensure that any background migrations have been fully completed
before upgrading to a new major version.

If you have enabled the Elasticsearch integration (PREMIUM SELF), then
ensure all Advanced Search migrations are completed in the last minor version in
your current version
before proceeding with the major version upgrade.

If your GitLab instance has any runners associated with it, it is very
important to upgrade GitLab Runner to match the GitLab minor version that was
upgraded to. This is to ensure compatibility with GitLab versions.

Upgrade paths

Upgrading across multiple GitLab versions in one go is only possible by accepting downtime.
The following examples assume downtime is acceptable while upgrading.
If you don’t want any downtime, read how to upgrade with zero downtime.

Find where your version sits in the upgrade path below, and upgrade GitLab
accordingly, while also consulting the
version-specific upgrade instructions:

8.11.Z -> 8.12.0 -> 8.17.7 -> 9.5.10 -> 10.8.7 -> 11.11.8 -> 12.0.12 -> 12.1.17 -> 12.10.14 -> 13.0.14 -> 13.1.11 -> 13.8.8 -> 13.12.15 -> 14.0.12 -> 14.3.6 -> 14.9.5 -> 14.10.Z -> 15.0.Z -> latest 15.Y.Z

NOTE:
When not explicitly specified, upgrade GitLab to the latest available patch
release rather than the first patch release, for example 13.8.8 instead of 13.8.0.
This includes versions you must stop at on the upgrade path as there may
be fixes for issues relating to the upgrade process.

The following table, while not exhaustive, shows some examples of the supported
upgrade paths.
Additional steps between the mentioned versions are possible. We list the minimally necessary steps only.

Target version | Your version | Supported upgrade path | Note
15.1.0 | 14.6.2 | 14.6.2 -> 14.9.5 -> 14.10.5 -> 15.0.2 -> 15.1.0 | Three intermediate versions are required: 14.9, 14.10, and 15.0, then 15.1.0.
15.0.0 | 14.6.2 | 14.6.2 -> 14.9.5 -> 14.10.5 -> 15.0.2 | Two intermediate versions are required: 14.9 and 14.10, then 15.0.0.
14.6.2 | 13.10.2 | 13.10.2 -> 13.12.15 -> 14.0.12 -> 14.3.6 -> 14.6.2 | Three intermediate versions are required: 13.12, 14.0, and 14.3, then 14.6.2.
14.1.8 | 13.9.2 | 13.9.2 -> 13.12.15 -> 14.0.12 -> 14.1.8 | Two intermediate versions are required: 13.12 and 14.0, then 14.1.8.
13.12.15 | 12.9.2 | 12.9.2 -> 12.10.14 -> 13.0.14 -> 13.1.11 -> 13.8.8 -> 13.12.15 | Four intermediate versions are required: 12.10, 13.0, 13.1, and 13.8.8, then 13.12.15.
13.2.10 | 11.5.0 | 11.5.0 -> 11.11.8 -> 12.0.12 -> 12.1.17 -> 12.10.14 -> 13.0.14 -> 13.1.11 -> 13.2.10 | Six intermediate versions are required: 11.11, 12.0, 12.1, 12.10, 13.0, and 13.1, then 13.2.10.
12.10.14 | 11.3.4 | 11.3.4 -> 11.11.8 -> 12.0.12 -> 12.1.17 -> 12.10.14 | Three intermediate versions are required: 11.11, 12.0, and 12.1, then 12.10.14.
12.9.5 | 10.4.5 | 10.4.5 -> 10.8.7 -> 11.11.8 -> 12.0.12 -> 12.1.17 -> 12.9.5 | Four intermediate versions are required: 10.8, 11.11, 12.0, and 12.1, then 12.9.5.
12.2.5 | 9.2.6 | 9.2.6 -> 9.5.10 -> 10.8.7 -> 11.11.8 -> 12.0.12 -> 12.1.17 -> 12.2.5 | Five intermediate versions are required: 9.5, 10.8, 11.11, 12.0, and 12.1, then 12.2.5.
11.3.4 | 8.13.4 | 8.13.4 -> 8.17.7 -> 9.5.10 -> 10.8.7 -> 11.3.4 | 8.17.7 is the last version in version 8, 9.5.10 is the last version in version 9, and 10.8.7 is the last version in version 10.

Upgrading between editions

GitLab comes in two flavors: Community Edition which is MIT licensed,
and Enterprise Edition which builds on top of the Community Edition and
includes extra features mainly aimed at organizations with more than 100 users.

Below you can find some guides to help you change GitLab editions.

Community to Enterprise Edition

NOTE:
The following guides are for subscribers of the Enterprise Edition only.

If you wish to upgrade your GitLab installation from Community to Enterprise
Edition, follow the guides below based on the installation method:

  • Source CE to EE update guides — The steps are very similar
    to a version upgrade: stop the server, get the code, update configuration files for
    the new functionality, install libraries and do migrations, update the init
    script, start the application and check its status.
  • Omnibus CE to EE — Follow this guide to update your Omnibus
    GitLab Community Edition to the Enterprise Edition.
  • Docker CE to EE —
    Follow this guide to update your GitLab Community Edition container to an Enterprise Edition container.

Enterprise to Community Edition

To downgrade your Enterprise Edition installation back to Community
Edition, you can follow this guide to make the process as smooth as
possible.

Version-specific upgrading instructions

Each month, major, minor, or patch releases of GitLab are published along with a
release post.
You should read the release posts for all versions you’re passing over.
At the end of major and minor release posts, there are three sections to look for specifically:

  • Deprecations
  • Removals
  • Important notes on upgrading

These include:

  • Steps you must perform as part of an upgrade.
    For example 8.12
    required the Elasticsearch index to be recreated. Any older version of GitLab upgrading to 8.12 or later would require this.
  • Changes to the versions of software we support such as
    ceasing support for IE11 in GitLab 13.

Apart from the instructions in this section, you should also check the
installation-specific upgrade instructions, based on how you installed GitLab:

  • Linux packages (Omnibus GitLab)
  • Helm charts

NOTE:
The specific information that follows related to Ruby and Git versions does not apply to Omnibus installations
and Helm chart deployments. They come with appropriate Ruby and Git versions and do not use system binaries for Ruby and Git. There is no need to install Ruby or Git when using these two approaches.

15.2.0

  • GitLab installations that have multiple web nodes should be
    upgraded to 15.1 before upgrading to 15.2 (and later) due to a
    configuration change in Rails that can result in inconsistent ETag key
    generation.
  • Some Sidekiq workers were renamed in this release. To avoid any disruption, run the Rake tasks to migrate any pending jobs before starting the upgrade to GitLab 15.2.0.

15.1.0

  • If you run external PostgreSQL, particularly AWS RDS,
    check you have a PostgreSQL bug fix
    to avoid the database crashing.

  • In GitLab 15.1.0, we are switching Rails ActiveSupport::Digest to use SHA256 instead of MD5.
    This affects ETag key generation for resources such as raw Snippet file
    downloads. To ensure consistent ETag key generation across multiple
    web nodes when upgrading, all servers must first be upgraded to 15.1.Z before
    upgrading to 15.2.0 or later:

    1. Ensure all GitLab web nodes are running GitLab 15.1.Z.
    2. Enable the active_support_hash_digest_sha256 feature flag to switch ActiveSupport::Digest to use SHA256:
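      A minimal example, run in the Rails console (mirroring the global_csrf_token step shown in the 13.1.0 notes later on this page):

      Feature.enable(:active_support_hash_digest_sha256)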
    3. Only then, continue to upgrade to later versions of GitLab.
  • Unauthenticated requests to the ciConfig GraphQL field are no longer supported.
    Before you upgrade to GitLab 15.1, add an access token to your requests.
    The user creating the token must have permission to create pipelines in the project.

15.0.0

  • Elasticsearch 6.8 is no longer supported. Before you upgrade to GitLab 15.0, update Elasticsearch to any 7.x version.
  • If you run external PostgreSQL, particularly AWS RDS,
    check you have a PostgreSQL bug fix
    to avoid the database crashing.
  • The use of encrypted S3 buckets with storage-specific configuration is no longer supported, following the removal of support for background_upload.
  • The certificate-based Kubernetes integration (DEPRECATED) is disabled by default, but you can re-enable it through the certificate_based_clusters feature flag until GitLab 16.0.
  • When you use the GitLab Helm Chart project with a custom serviceAccount, ensure it has get and list permissions for the serviceAccount and secret resources.

14.10.0

  • Before upgrading to GitLab 14.10, you must already have the latest 14.9.Z installed on your instance.
    The upgrade to GitLab 14.10 executes a concurrent index drop of unneeded
    entries from the ci_job_artifacts database table. This could potentially run for multiple minutes, especially if the table has a lot of
    traffic and the migration is unable to acquire a lock. It is advised to let this process finish as restarting may result in data loss.

  • If you run external PostgreSQL, particularly AWS RDS,
    check you have a PostgreSQL bug fix
    to avoid the database crashing.

  • Upgrading to patch level 14.10.3 or later might encounter a one-hour timeout due to a long running database data change,
    if it was not completed while running GitLab 14.9.

    FATAL: Mixlib::ShellOut::CommandTimeout: rails_migration[gitlab-rails]
    (gitlab::database_migrations line 51) had an error:
    [..]
    Mixlib::ShellOut::CommandTimeout: Command timed out after 3600s:

    A workaround exists to complete the data change and the upgrade manually.

14.9.0

  • Database changes made by the upgrade to GitLab 14.9 can take hours or days to complete on larger GitLab instances.
    These batched background migrations update whole database tables to ensure that each
    record in the projects table has a corresponding record in the namespaces table.

    After you update to 14.9.0 or a later 14.9 patch version,
    batched background migrations must finish
    before you update to a later version.

    If the migrations are not finished and you try to update to a later version,
    you see errors like:

    Expected batched background migration for the given configuration to be marked as 'finished', but it is 'active':

    Or

    Error executing action `run` on resource 'bash[migrate gitlab-rails database]'
    ================================================================================
    
    Mixlib::ShellOut::ShellCommandFailed
    ------------------------------------
    Command execution failed. STDOUT/STDERR suppressed for sensitive resource
  • GitLab 14.9.0 includes a
    background migration ResetDuplicateCiRunnersTokenValuesOnProjects
    that may remain stuck permanently in a pending state.

    To clean up this stuck job, run the following in the GitLab Rails Console:

    Gitlab::Database::BackgroundMigrationJob.pending.where(class_name: "ResetDuplicateCiRunnersTokenValuesOnProjects").find_each do |job|
      puts Gitlab::Database::BackgroundMigrationJob.mark_all_as_succeeded("ResetDuplicateCiRunnersTokenValuesOnProjects", job.arguments)
    end
  • If you run external PostgreSQL, particularly AWS RDS,
    check you have a PostgreSQL bug fix
    to avoid the database crashing.

14.8.0

  • If upgrading from a version earlier than 14.6.5, 14.7.4, or 14.8.2, review the Critical Security Release: 14.8.2, 14.7.4, and 14.6.5 blog post.
    Updating to 14.8.2 or later resets runner registration tokens for your groups and projects.

  • The agent server for Kubernetes is enabled by default
    on Omnibus installations. If you run GitLab at scale,
    such as the reference architectures,
    you must disable the agent on the following server types, if the agent is not required.

    • Praefect
    • Gitaly
    • Sidekiq
    • Redis (if configured using redis['enable'] = true and not via roles)
    • Container registry
    • Any other server types based on roles(['application_role']), such as the GitLab Rails nodes

    The reference architectures have been updated
    with this configuration change and a specific role for standalone Redis servers.

    Steps to disable the agent:

    1. Add gitlab_kas['enable'] = false to gitlab.rb.
    2. If the server is already upgraded to 14.8, run gitlab-ctl reconfigure.
  • GitLab 14.8.0 includes a
    background migration PopulateTopicsNonPrivateProjectsCount
    that may remain stuck permanently in a pending state.

    To clean up this stuck job, run the following in the GitLab Rails Console:

        Gitlab::Database::BackgroundMigrationJob.pending.where(class_name: "PopulateTopicsNonPrivateProjectsCount").find_each do |job|
          puts Gitlab::Database::BackgroundMigrationJob.mark_all_as_succeeded("PopulateTopicsNonPrivateProjectsCount", job.arguments)
        end
  • If upgrading from a version earlier than 14.3.0, to avoid
    an issue with job retries, first upgrade
    to GitLab 14.7.x and make sure all batched migrations have finished.

  • If upgrading from version 14.3.0 or later, you might notice a failed
    batched migration named
    BackfillNamespaceIdForNamespaceRoute. You can ignore
    this. Retry it after you upgrade to version 14.9.x.

  • If you run external PostgreSQL, particularly AWS RDS,
    check you have a PostgreSQL bug fix
    to avoid the database crashing.

14.7.0

  • See LFS objects import and mirror issue in GitLab 14.6.0 to 14.7.2.

  • If upgrading from a version earlier than 14.6.5, 14.7.4, or 14.8.2, review the Critical Security Release: 14.8.2, 14.7.4, and 14.6.5 blog post.
    Updating to 14.7.4 or later resets runner registration tokens for your groups and projects.

  • GitLab 14.7 introduced a change where Gitaly expects persistent files in the /tmp directory.
    When using the noatime mount option on /tmp in a node running Gitaly, most Linux distributions
    run into an issue with Git server hooks getting deleted.
    These conditions are present in the default Amazon Linux configuration.

    If your Linux distribution manages files in /tmp with the tmpfiles.d service, you
    can override the behavior of tmpfiles.d for the Gitaly files and avoid this issue:

    sudo printf "x /tmp/gitaly-%s-*\n" hooks git-exec-path >/etc/tmpfiles.d/gitaly-workaround.conf

    This issue is fixed in GitLab 14.10 and later when using the Gitaly runtime directory
    to specify a location to store persistent files.

14.6.0

  • See LFS objects import and mirror issue in GitLab 14.6.0 to 14.7.2.
  • If upgrading from a version earlier than 14.6.5, 14.7.4, or 14.8.2, review the Critical Security Release: 14.8.2, 14.7.4, and 14.6.5 blog post.
    Updating to 14.6.5 or later resets runner registration tokens for your groups and projects.

14.5.0

  • When make is run, Gitaly builds are now created in _build/bin and no longer in the root directory of the source directory. If you
    are using a source install, update paths to these binaries in your systemd unit files
    or init scripts by following the documentation.

  • Connections between Workhorse and Gitaly use the Gitaly backchannel protocol by default. If you deployed a gRPC proxy between Workhorse and Gitaly,
    Workhorse can no longer connect. As a workaround, disable the temporary workhorse_use_sidechannel
    feature flag. If you need a proxy between Workhorse and Gitaly, use a TCP proxy. If you have feedback about this change, go to this issue.

  • In 14.1 we introduced a background migration that changes how we store merge request diff commits,
    to significantly reduce the amount of storage needed.
    In 14.5 we introduce a set of migrations that wrap up this process by making sure
    that all remaining jobs over the merge_request_diff_commits table are completed.
    These jobs have already been processed in most cases so that no extra time is necessary during an upgrade to 14.5.
    However, if there are remaining jobs or you haven’t already upgraded to 14.1,
    the deployment may take multiple hours to complete.

    All merge request diff commits automatically incorporate these changes, and there are no
    additional requirements to perform the upgrade.
    Existing data in the merge_request_diff_commits table remains unpacked until you run VACUUM FULL merge_request_diff_commits.
    However, the VACUUM FULL operation locks and rewrites the entire merge_request_diff_commits table,
    so the operation takes some time to complete and it blocks access to this table until the end of the process.
    We advise you to only run this command while GitLab is not actively used or it is taken offline for the duration of the process.
    The time it takes to complete depends on the size of the table, which can be obtained by using select pg_size_pretty(pg_total_relation_size('merge_request_diff_commits'));.

    For more information, refer to this issue.

  • GitLab 14.5.0 includes a
    background migration UpdateVulnerabilityOccurrencesLocation
    that may remain stuck permanently in a pending state when the instance lacks records that match the migration’s target.

    To clean up this stuck job, run the following in the GitLab Rails Console:

        Gitlab::Database::BackgroundMigrationJob.pending.where(class_name: "UpdateVulnerabilityOccurrencesLocation").find_each do |job|
          puts Gitlab::Database::BackgroundMigrationJob.mark_all_as_succeeded("UpdateVulnerabilityOccurrencesLocation", job.arguments)
        end
  • Upgrading to 14.5 (or later) might encounter a one hour timeout
    owing to a long running database data change.

    FATAL: Mixlib::ShellOut::CommandTimeout: rails_migration[gitlab-rails]
    (gitlab::database_migrations line 51) had an error:
    [..]
    Mixlib::ShellOut::CommandTimeout: Command timed out after 3600s:

    There is a workaround to complete the data change and the upgrade manually.

14.4.4

  • For zero-downtime upgrades on a GitLab cluster with separate Web and API nodes, you must enable the paginated_tree_graphql_query feature flag before upgrading GitLab Web nodes to 14.4.
    This is because we enabled paginated_tree_graphql_query by default in 14.4, so if GitLab UI is on 14.4 and its API is on 14.3, the frontend has this feature enabled but the backend has it disabled. This results in the following error:

    bundle.esm.js:63 Uncaught (in promise) Error: GraphQL error: Field 'paginatedTree' doesn't exist on type 'Repository'

14.4.0

  • Git 2.33.x and later is required. We recommend you use the
    Git version provided by Gitaly.

  • See Maintenance mode issue in GitLab 13.9 to 14.4.

  • After enabling database load balancing by default in 14.4.0, we found an issue where
    cron jobs would not work if the connection to PostgreSQL was severed,
    as Sidekiq would continue using a bad connection. Geo and other features that rely on
    cron jobs running regularly do not work until Sidekiq is restarted. We recommend
    upgrading to GitLab 14.4.3 and later if this issue affects you.

  • After enabling database load balancing by default in 14.4.0, we found an issue where
    Database load balancing does not work with an AWS Aurora cluster.
    We recommend moving your databases from Aurora to RDS for PostgreSQL before
    upgrading. Refer to Moving GitLab databases to a different PostgreSQL instance.

  • GitLab 14.4.0 includes a
    background migration PopulateTopicsTotalProjectsCountCache
    that may remain stuck permanently in a pending state when the instance lacks records that match the migration’s target.

    To clean up this stuck job, run the following in the GitLab Rails Console:

        Gitlab::Database::BackgroundMigrationJob.pending.where(class_name: "PopulateTopicsTotalProjectsCountCache").find_each do |job|
          puts Gitlab::Database::BackgroundMigrationJob.mark_all_as_succeeded("PopulateTopicsTotalProjectsCountCache", job.arguments)
        end

14.3.0

  • Instances running 14.0.0 — 14.0.4 should not upgrade directly to GitLab 14.2 or later.

  • Ensure batched background migrations finish before upgrading
    to 14.3.Z from earlier GitLab 14 releases.

  • Ruby 2.7.4 is required. Refer to the Ruby installation instructions
    for how to proceed.

  • GitLab 14.3.0 contains post-deployment migrations to address Primary Key overflow risk for tables with an integer PK for the tables listed below:

    • ci_builds.id
    • ci_builds.stage_id
    • ci_builds_metadata
    • taggings
    • events

    If the migrations are executed as part of a no-downtime deployment, there’s a risk of failure due to lock conflicts with the application logic, resulting in lock timeout or deadlocks. In each case, these migrations are safe to re-run until successful:

    # For Omnibus GitLab
    sudo gitlab-rake db:migrate
    
    # For source installations
    sudo -u git -H bundle exec rake db:migrate RAILS_ENV=production
  • After upgrading to 14.3, ensure that all the MigrateMergeRequestDiffCommitUsers background
    migration jobs have completed before continuing with upgrading to GitLab 14.5 or later.
    This is especially important if your GitLab instance has a large
    merge_request_diff_commits table. Any pending
    MigrateMergeRequestDiffCommitUsers background migration jobs are
    foregrounded in GitLab 14.5, and may take a long time to complete.
    You can check the count of pending jobs for
    MigrateMergeRequestDiffCommitUsers by using the PostgreSQL console (or sudo gitlab-psql):

    select count(*) from background_migration_jobs where class_name = 'MigrateMergeRequestDiffCommitUsers' and status = 0;
  • See Maintenance mode issue in GitLab 13.9 to 14.4.

14.2.0

  • Instances running 14.0.0 — 14.0.4 should not upgrade directly to GitLab 14.2 or later.

  • Ensure batched background migrations finish before upgrading
    to 14.2.Z from earlier GitLab 14 releases.

  • GitLab 14.2.0 contains background migrations to address Primary Key overflow risk for tables with an integer PK for the tables listed below:

    • ci_build_needs
    • ci_build_trace_chunks
    • ci_builds_runner_session
    • deployments
    • geo_job_artifact_deleted_events
    • push_event_payloads
    • ci_job_artifacts:

      • Finalize job_id conversion to bigint for ci_job_artifacts
      • Finalize ci_job_artifacts conversion to bigint

    If the migrations are executed as part of a no-downtime deployment, there’s a risk of failure due to lock conflicts with the application logic, resulting in lock timeout or deadlocks. In each case, these migrations are safe to re-run until successful:

    # For Omnibus GitLab
    sudo gitlab-rake db:migrate
    
    # For source installations
    sudo -u git -H bundle exec rake db:migrate RAILS_ENV=production
  • See Maintenance mode issue in GitLab 13.9 to 14.4.

  • GitLab 14.2.0 includes a
    background migration BackfillDraftStatusOnMergeRequests
    that may remain stuck permanently in a pending state when the instance lacks records that match the migration’s target.

    To clean up this stuck job, run the following in the GitLab Rails Console:

    Gitlab::Database::BackgroundMigrationJob.pending.where(class_name: "BackfillDraftStatusOnMergeRequests").find_each do |job|
      puts Gitlab::Database::BackgroundMigrationJob.mark_all_as_succeeded("BackfillDraftStatusOnMergeRequests", job.arguments)
    end

14.1.0

  • Instances running 14.0.0 — 14.0.4 should not upgrade directly to GitLab 14.2 or later
    but can upgrade to 14.1.Z.

    It is not required for instances already running 14.0.5 (or later) to stop at 14.1.Z.
    14.1 is included on the upgrade path for the broadest compatibility
    with self-managed installations, and to ensure that 14.0.0-14.0.4 installations do not
    encounter issues with batched background migrations.

  • Upgrading to GitLab 14.5 (or later) may take a lot longer if you do not upgrade to at least 14.1
    first. The 14.1 merge request diff commits database migration can take hours to run, but runs in the
    background while GitLab is in use. GitLab instances upgraded directly from 14.0 to 14.5 or later must
    run the migration in the foreground and therefore take a lot longer to complete.

  • See Maintenance mode issue in GitLab 13.9 to 14.4.

14.0.0

Prerequisites:

  • The GitLab 14.0 release post contains several important notes
    about pre-requisites including using Patroni instead of repmgr,
    migrating to hashed storage,
    and to Puma.
  • Support for PostgreSQL 11 has been dropped. Make sure to update your database to version 12 before updating to GitLab 14.0.
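
For Omnibus installations running the bundled database, the PostgreSQL upgrade is typically performed with the packaged command below (a sketch; externally managed databases such as RDS must be upgraded through your provider, and the exact target version depends on your package):

sudo gitlab-ctl pg-upgrade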

Long running batched background database migrations:

  • Database changes made by the upgrade to GitLab 14.0 can take hours or days to complete on larger GitLab instances.
    These batched background migrations update whole database tables to mitigate primary key overflow and must be finished before upgrading to GitLab 14.2 or later.

  • Due to an issue where BatchedBackgroundMigrationWorkers were
    not working
    for self-managed instances, a fix was created
    that requires an update to at least 14.0.5. The fix was also released in 14.1.0.

    After you update to 14.0.5 or a later 14.0 patch version,
    batched background migrations must finish
    before you update to a later version.

    If the migrations are not finished and you try to update to a later version,
    you see an error like:

    Expected batched background migration for the given configuration to be marked as 'finished', but it is 'active':

    See how to resolve this error.

Other issues:

  • In GitLab 13.3 some pipeline processing methods were deprecated
    and this code was completely removed in GitLab 14.0. If you plan to upgrade from
    GitLab 13.2 or older directly to 14.0, this is unsupported.
    You should instead follow a supported upgrade path.
  • See Maintenance mode issue in GitLab 13.9 to 14.4.
  • See Custom Rack Attack initializers if you persist your own custom Rack Attack
    initializers during upgrades.

Upgrading to later 14.Y releases

  • Instances running 14.0.0 — 14.0.4 should not upgrade directly to GitLab 14.2 or later,
    because of batched background migrations.

    1. Upgrade first to either:
      • 14.0.5 or a later 14.0.Z patch release.
      • 14.1.0 or a later 14.1.Z patch release.
    2. Batched background migrations must finish
      before you update to a later version and may take longer than usual.

13.12.0

  • See Maintenance mode issue in GitLab 13.9 to 14.4.

  • Check the GitLab database has no references to legacy storage.
    The GitLab 14.0 pre-install check causes the package update to fail if unmigrated data exists:

    Checking for unmigrated data on legacy storage
    
    Legacy storage is no longer supported. Please migrate your data to hashed storage.
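
    One way to check and migrate beforehand is with the hashed-storage Rake tasks (a sketch based on the storage tasks documented for GitLab 13.x; verify against your version's documentation):

    # List or count projects that are still on legacy storage
    sudo gitlab-rake gitlab:storage:legacy_projects
    sudo gitlab-rake gitlab:storage:list_legacy_projects

    # Migrate remaining projects to hashed storage
    sudo gitlab-rake gitlab:storage:migrate_to_hashed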

13.11.0

  • Git 2.31.x and later is required. We recommend you use the
    Git version provided by Gitaly.

  • See Maintenance mode issue in GitLab 13.9 to 14.4.

  • GitLab 13.11 includes a faulty background migration (RescheduleArtifactExpiryBackfillAgain)
    that incorrectly sets the expire_at column in the ci_job_artifacts database table.
    Incorrect expire_at values can potentially cause data loss.

    To prevent this risk of data loss, you must remove the content of the RescheduleArtifactExpiryBackfillAgain
    migration, which makes it a no-op migration. You can repeat the changes from the
    commit that makes the migration no-op in 14.9 and later.
    For more information, see how to disable a data migration.

13.10.0

See Maintenance mode issue in GitLab 13.9 to 14.4.

13.9.0

  • We’ve detected an issue with a column rename
    that prevents upgrades to GitLab 13.9.0, 13.9.1, 13.9.2, and 13.9.3 when following the zero-downtime steps. It is necessary
    to perform the following additional steps for the zero-downtime upgrade:

    1. Before running the final sudo gitlab-rake db:migrate command on the deploy node,
      execute the following queries using the PostgreSQL console (or sudo gitlab-psql)
      to drop the problematic triggers:

      drop trigger trigger_e40a6f1858e6 on application_settings;
      drop trigger trigger_0d588df444c8 on application_settings;
      drop trigger trigger_1572cbc9a15f on application_settings;
      drop trigger trigger_22a39c5c25f3 on application_settings;
    2. Run the final migrations:

      sudo gitlab-rake db:migrate

    If you have already run the final sudo gitlab-rake db:migrate command on the deploy node and have
    encountered the column rename issue, you
    see the following error:

    -- remove_column(:application_settings, :asset_proxy_whitelist)
    rake aborted!
    StandardError: An error has occurred, all later migrations canceled:
    PG::DependentObjectsStillExist: ERROR: cannot drop column asset_proxy_whitelist of table application_settings because other objects depend on it
    DETAIL: trigger trigger_0d588df444c8 on table application_settings depends on column asset_proxy_whitelist of table application_settings

    To work around this bug, follow the previous steps to complete the update.
    More details are available in this issue.

  • See Maintenance mode issue in GitLab 13.9 to 14.4.

  • For GitLab Enterprise Edition customers, we noticed an issue when subscription expiration is upcoming, and you create new subgroups and projects. If you fall under that category and get 500 errors, you can work around this issue:

    1. SSH into your GitLab server, and open a Rails console:

      sudo gitlab-rails console
    2. Disable the following features:

      Feature.disable(:subscribable_subscription_banner)
      Feature.disable(:subscribable_license_banner)
    3. Restart Puma or Unicorn:

      # For installations using Puma
      sudo gitlab-ctl restart puma
      
      # For installations using Unicorn
      sudo gitlab-ctl restart unicorn

13.8.8

GitLab 13.8 includes a background migration to address an issue with duplicate service records. If duplicate services are present, this background migration must complete before a unique index is applied to the services table, which was introduced in GitLab 13.9. Upgrades from GitLab 13.8 and earlier to later versions must include an intermediate upgrade to GitLab 13.8.8 and must wait until the background migrations complete before proceeding.

If duplicate services are still present, an upgrade to 13.9.x or later results in a failed upgrade with the following error:

PG::UniqueViolation: ERROR:  could not create unique index "index_services_on_project_id_and_type_unique"
DETAIL:  Key (project_id, type)=(NNN, ServiceName) is duplicated.
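
To see whether duplicate service records exist before you upgrade, a query along the following lines can be run in a psql session (for example, sudo gitlab-psql). This is a sketch derived from the (project_id, type) key named in the error above; the table is named services in GitLab 13.x:

select project_id, type, count(*)
from services
group by project_id, type
having count(*) > 1;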

13.6.0

Ruby 2.7.2 is required. GitLab does not start with Ruby 2.6.6 or older versions.

The required Git version is Git v2.29 or later.

GitLab 13.6 includes a
background migration BackfillJiraTrackerDeploymentType2
that may remain stuck permanently in a pending state despite completion of work
due to a bug.

To clean up this stuck job, run the following in the GitLab Rails Console:

Gitlab::Database::BackgroundMigrationJob.pending.where(class_name: "BackfillJiraTrackerDeploymentType2").find_each do |job|
  puts Gitlab::Database::BackgroundMigrationJob.mark_all_as_succeeded("BackfillJiraTrackerDeploymentType2", job.arguments)
end

13.4.0

GitLab 13.4.0 includes a background migration to move all remaining repositories in legacy storage to hashed storage. There are known issues with this migration which are fixed in GitLab 13.5.4 and later. If possible, skip 13.4.0 and upgrade to 13.5.4 or later instead. The migration can take quite a while to run, depending on how many repositories must be moved. Be sure to check that all background migrations have completed before upgrading further.

13.3.0

The recommended Git version is Git v2.28. The minimum required version of Git
v2.24 remains the same.

13.2.0

GitLab installations that have multiple web nodes must be
upgraded to 13.1 before upgrading to 13.2 (and later) due to a
breaking change in Rails that can result in authorization issues.

GitLab 13.2.0 remediates an email verification bypass.
After upgrading, if some of your users are unexpectedly encountering 404 or 422 errors when signing in,
or "blocked" messages when using the command line,
their accounts may have been un-confirmed.
In that case, ask them to check their email for a re-confirmation link.
For more information, see our discussion of Email confirmation issues.

GitLab 13.2.0 relies on the btree_gist extension for PostgreSQL. For installations with an externally managed PostgreSQL setup, make sure to
install the extension manually before upgrading GitLab if the database user for GitLab
is not a superuser. This is not necessary for installations using a GitLab managed PostgreSQL database.
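
For example, a PostgreSQL superuser can create the extension ahead of the upgrade by running the following against the GitLab database:

CREATE EXTENSION IF NOT EXISTS btree_gist;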

13.1.0

In 13.1.0, you must upgrade to either:

  • At least Git v2.24 (previously, the minimum required version was Git v2.22).
  • The recommended Git v2.26.

Failure to do so results in internal errors in the Gitaly service in some RPCs due
to the use of the new --end-of-options Git flag.

Additionally, in GitLab 13.1.0, the version of
Rails was upgraded from 6.0.3 to 6.0.3.1.
The Rails upgrade included a change to CSRF token generation which is
not backwards-compatible — GitLab servers with the new Rails version
generate CSRF tokens that are not recognizable by GitLab servers
with the older Rails version — which could cause non-GET requests to
fail for multi-node GitLab installations.

So, if you are using multiple Rails servers and specifically upgrading from 13.0,
all servers must first be upgraded to 13.1.Z before upgrading to 13.2.0 or later:

  1. Ensure all GitLab web nodes are running GitLab 13.1.Z.

  2. Enable the global_csrf_token feature flag to enable new
    method of CSRF token generation:

    Feature.enable(:global_csrf_token)
  3. Only then, continue to upgrade to later versions of GitLab.

Custom Rack Attack initializers

From GitLab 13.0.1, custom Rack Attack initializers (config/initializers/rack_attack.rb) are replaced with initializers
supplied with GitLab during upgrades. We recommend you use these GitLab-supplied initializers.

If you persist your own Rack Attack initializers between upgrades, you might
get 500 errors when upgrading to GitLab 14.0 and later.

12.10.0

  • The final patch release (12.10.14)
    has a regression affecting maven package uploads.
    If you use this feature and must stay on 12.10 while preparing to upgrade to 13.0:

    • Upgrade to 12.10.13 instead.
    • Upgrade to 13.0.14 as soon as possible.
  • GitLab 13.0 requires PostgreSQL 11.

    • 12.10 is the final release that shipped with PostgreSQL 9.6, 10, and 11.
    • You should make sure that your database is PostgreSQL 11 on GitLab 12.10 before upgrading to 13.0. This upgrade requires downtime.

12.2.0

In 12.2.0, we enabled Rails’ authenticated cookie encryption. Old sessions are
automatically upgraded.

However, session cookie downgrades are not supported. So after upgrading to 12.2.0,
any downgrade results in all sessions being invalidated and users being logged out.

12.1.0

If you are planning to upgrade from 12.0.Z to 12.10.Z, it is necessary to
perform an intermediary upgrade to 12.1.Z before upgrading to 12.10.Z to
avoid issues like #215141.

12.0.0

In 12.0.0 we made various database-related changes. These changes require that
users first upgrade to the latest 11.11 patch release. After upgrading to 11.11.Z,
users can upgrade to 12.0.Z. Failure to do so may result in database migrations
not being applied, which could lead to application errors.

It is also required that you upgrade to 12.0.Z before moving to a later version
of 12.Y.

Example 1: you are currently using GitLab 11.11.8, which is the latest patch
release for 11.11.Z. You can upgrade as usual to 12.0.Z.

Example 2: you are currently using a version of GitLab 10.Y. To upgrade, first
upgrade to the last 10.Y release (10.8.7) then the last 11.Y release (11.11.8).
After upgrading to 11.11.8 you can safely upgrade to 12.0.Z.

See our documentation on upgrade paths
for more information.

Maintenance mode issue in GitLab 13.9 to 14.4

When Maintenance mode is enabled, users cannot sign in with SSO, SAML, or LDAP.

Users who were signed in before Maintenance mode was enabled, continue to be signed in. If the administrator who enabled Maintenance mode loses their session, then they can’t disable Maintenance mode via the UI. In that case, you can disable Maintenance mode via the API or Rails console.
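
A sketch of both options (gitlab.example.com and <admin_token> are placeholders; the maintenance_mode setting is part of the application settings API):

# From the Rails console (sudo gitlab-rails console)
::Gitlab::CurrentSettings.update!(maintenance_mode: false)

# Or through the application settings API, using an administrator's token
curl --request PUT --header "PRIVATE-TOKEN: <admin_token>" \
  "https://gitlab.example.com/api/v4/application/settings?maintenance_mode=false"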

This bug was fixed in GitLab 14.5.0 and backported into 14.4.3 and 14.3.5.

LFS objects import and mirror issue in GitLab 14.6.0 to 14.7.2

When Geo is enabled, LFS objects fail to be saved for imported or mirrored projects.

This bug was fixed in GitLab 14.8.0 and backported into 14.7.3.

PostgreSQL segmentation fault issue

If you run GitLab with external PostgreSQL, particularly AWS RDS, ensure you upgrade PostgreSQL
to at least patch level 12.7 or 13.3 before upgrading to GitLab 14.8 or later.
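
To confirm the patch level you are currently running, for example:

-- In a psql session (sudo gitlab-psql for a bundled database, or your usual client for RDS)
SELECT version();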

In 14.8 for GitLab Enterprise Edition, and in 15.1 for GitLab Community Edition,
a GitLab feature called Loose Foreign Keys was enabled.

After it was enabled, we received reports of unplanned PostgreSQL restarts caused
by a database engine bug that causes a segmentation fault.

Read more in the issue.

Miscellaneous

  • MySQL to PostgreSQL guides you through migrating
    your database from MySQL to PostgreSQL.
  • Restoring from backup after a failed upgrade
  • Upgrading PostgreSQL Using Slony, for
    upgrading a PostgreSQL database with minimal downtime.
  • Managing PostgreSQL extensions

Upgrading GitLab (FREE SELF)

Upgrading GitLab is a relatively straightforward process, but the complexity
can increase based on the installation method you have used, how old your
GitLab version is, if you’re upgrading to a major version, and so on.

Make sure to read the whole page as it contains information related to every upgrade method.

The maintenance policy documentation
has additional information about upgrading, including:

  • How to interpret GitLab product versioning.
  • Recommendations on the what release to run.
  • How we use patch and security patch releases.
  • When we backport code changes.

Upgrade based on installation method

Depending on the installation method and your GitLab version, there are multiple
official ways to update GitLab:

  • Linux packages (Omnibus GitLab)
  • Source installations
  • Docker installations
  • Kubernetes (Helm) installations

Linux packages (Omnibus GitLab)

The package upgrade guide
contains the steps needed to update a package installed by official GitLab
repositories.

There are also instructions when you want to
update to a specific version.

Installation from source

  • Upgrading Community Edition and Enterprise Edition from source —
    The guidelines for upgrading Community Edition and Enterprise Edition from source.
  • Patch versions guide includes the steps needed for a
    patch version, such as 13.2.0 to 13.2.1, and apply to both Community and Enterprise
    Editions.

In the past we used separate documents for the upgrading instructions, but we
have switched to using a single document. The old upgrading guidelines
can still be found in the Git repository:

  • Old upgrading guidelines for Community Edition
  • Old upgrading guidelines for Enterprise Edition

Installation using Docker

GitLab provides official Docker images for both Community and Enterprise
editions, and they are based on the Omnibus package. See how to
install GitLab using Docker.

Installation using Helm

GitLab can be deployed into a Kubernetes cluster using Helm.
Instructions on how to update a cloud-native deployment are in
a separate document.

Use the version mapping
from the chart version to GitLab version to determine the upgrade path.

Plan your upgrade

See the guide to plan your GitLab upgrade.

Check for background migrations before upgrading

Certain releases may require different migrations to be
finished before you update to the newer version.

For more information, see background migrations.

Dealing with running CI/CD pipelines and jobs

If you upgrade your GitLab instance while the GitLab Runner is processing jobs, the trace updates fail. When GitLab is back online, the trace updates should self-heal. However, depending on the error, the GitLab Runner either retries, or eventually terminates, job handling.

As for the artifacts, the GitLab Runner attempts to upload them three times, after which the job eventually fails.

To address the above two scenarios, it is advised to do the following prior to upgrading:

  1. Plan your maintenance.

  2. Pause your runners or block new jobs from starting by adding following to your /etc/gitlab/gitlab.rb:

    nginx['custom_gitlab_server_config'] = "location /api/v4/jobs/request {n deny all;n return 503;n}n"

    And reconfigure GitLab with:

    sudo gitlab-ctl reconfigure
  3. Wait until all jobs are finished.

  4. Upgrade GitLab.

  5. Update GitLab Runner to the same version
    as your GitLab version. Both versions should be the same.

  6. Unpause your runners and unblock new jobs from starting by reverting the previous /etc/gitlab/gitlab.rb change.

Checking for pending Advanced Search migrations (PREMIUM SELF)

This section is only applicable if you have enabled the Elasticsearch integration (PREMIUM SELF).

Major releases require all Advanced Search migrations
to be finished from the most recent minor release in your current version
before the major version upgrade. You can find pending migrations by
running the following command:

For Omnibus installations

sudo gitlab-rake gitlab:elastic:list_pending_migrations

For installations from source

cd /home/git/gitlab
sudo -u git -H bundle exec rake gitlab:elastic:list_pending_migrations

What do you do if your Advanced Search migrations are stuck?

In GitLab 15.0, an Advanced Search migration named DeleteOrphanedCommit can be permanently stuck
in a pending state across upgrades. This issue
is corrected in GitLab 15.1.

If you are a self-managed customer who uses GitLab 15.0 with Advanced Search, you will experience performance degradation.
To clean up the migration, upgrade to 15.1 or later.

For other Advanced Search migrations stuck in pending, see how to retry a halted migration.

What do you do for the error Elasticsearch version not compatible

Confirm that your version of Elasticsearch or OpenSearch is compatible with your version of GitLab.

Upgrading without downtime

Read how to upgrade without downtime.

Upgrading to a new major version

Upgrading the major version requires more attention.
Backward-incompatible changes and migrations are reserved for major versions.
Follow the directions carefully as we
cannot guarantee that upgrading between major versions is seamless.

A major upgrade requires the following steps:

  1. Start by identifying a supported upgrade path. This is essential for a successful major version upgrade.
  2. Upgrade to the latest minor version of the preceding major version.
  3. Upgrade to the «dot zero» release of the next major version (X.0.Z).
  4. Optional. Follow the upgrade path, and proceed with upgrading to newer releases of that major version.

It’s also important to ensure that any background migrations have been fully completed
before upgrading to a new major version.

If you have enabled the Elasticsearch integration (PREMIUM SELF), then
ensure all Advanced Search migrations are completed in the last minor version in
your current version
before proceeding with the major version upgrade.

If your GitLab instance has any runners associated with it, it is very
important to upgrade GitLab Runner to match the GitLab minor version that was
upgraded to. This is to ensure compatibility with GitLab versions.

Upgrade paths

Upgrading across multiple GitLab versions in one go is only possible by accepting downtime.
The following examples assume downtime is acceptable while upgrading.
If you don’t want any downtime, read how to upgrade with zero downtime.

For a dynamic view of examples of supported upgrade paths, try the Upgrade Path tool maintained by the GitLab Support team. To share feedback and help improve the tool, create an issue or MR in the upgrade-path project.

Find where your version sits in the upgrade path below, and upgrade GitLab
accordingly, while also consulting the
version-specific upgrade instructions:

8.11.Z -> 8.12.0 -> 8.17.7 -> 9.0.13 -> 9.5.10 -> 10.0.7 -> 10.8.7 -> 11.0.6 -> 11.11.8 -> 12.0.12 -> 12.1.17 -> 12.10.14 -> 13.0.14 -> 13.1.11 -> 13.8.8 -> 13.12.15 -> 14.0.12 -> 14.3.6 -> 14.9.5 -> 14.10.Z -> 15.0.Z -> 15.1.Z (for GitLab instances with multiple web nodes) -> 15.4.0 -> latest 15.Y.Z

NOTE:
When not explicitly specified, upgrade GitLab to the latest available patch
release rather than the first patch release, for example 13.8.8 instead of 13.8.0.
This includes versions you must stop at on the upgrade path as there may
be fixes for issues relating to the upgrade process.
Specifically around a major version,
crucial database schema and migration patches are included in the latest patch releases.

The following table, while not exhaustive, shows some examples of the supported
upgrade paths.
Additional steps between the mentioned versions are possible. We list the minimally necessary steps only.

| Target version | Your version | Supported upgrade path | Note |
| -------------- | ------------ | ---------------------- | ---- |
| 15.1.0 | 14.6.2 | 14.6.2 -> 14.9.5 -> 14.10.5 -> 15.0.2 -> 15.1.0 | Three intermediate versions are required: 14.9, 14.10, and 15.0. |
| 15.0.0 | 14.6.2 | 14.6.2 -> 14.9.5 -> 14.10.5 -> 15.0.2 | Two intermediate versions are required: 14.9 and 14.10. |
| 14.6.2 | 13.10.2 | 13.10.2 -> 13.12.15 -> 14.0.12 -> 14.3.6 -> 14.6.2 | Three intermediate versions are required: 13.12, 14.0, and 14.3. |
| 14.1.8 | 13.9.2 | 13.9.2 -> 13.12.15 -> 14.0.12 -> 14.1.8 | Two intermediate versions are required: 13.12 and 14.0. |
| 13.12.15 | 12.9.2 | 12.9.2 -> 12.10.14 -> 13.0.14 -> 13.1.11 -> 13.8.8 -> 13.12.15 | Four intermediate versions are required: 12.10, 13.0, 13.1, and 13.8. |
| 13.2.10 | 11.5.0 | 11.5.0 -> 11.11.8 -> 12.0.12 -> 12.1.17 -> 12.10.14 -> 13.0.14 -> 13.1.11 -> 13.2.10 | Six intermediate versions are required: 11.11, 12.0, 12.1, 12.10, 13.0, and 13.1. |
| 12.10.14 | 11.3.4 | 11.3.4 -> 11.11.8 -> 12.0.12 -> 12.1.17 -> 12.10.14 | Three intermediate versions are required: 11.11, 12.0, and 12.1. |
| 12.9.5 | 10.4.5 | 10.4.5 -> 10.8.7 -> 11.0.6 -> 11.11.8 -> 12.0.12 -> 12.1.17 -> 12.9.5 | Five intermediate versions are required: 10.8, 11.0, 11.11, 12.0, and 12.1. |
| 12.2.5 | 9.2.6 | 9.2.6 -> 9.5.10 -> 10.0.7 -> 10.8.7 -> 11.0.6 -> 11.11.8 -> 12.0.12 -> 12.1.17 -> 12.2.5 | Seven intermediate versions are required: 9.5, 10.0, 10.8, 11.0, 11.11, 12.0, and 12.1. |
| 11.3.4 | 8.13.4 | 8.13.4 -> 8.17.7 -> 9.0.13 -> 9.5.10 -> 10.0.7 -> 10.8.7 -> 11.0.6 -> 11.3.4 | Six intermediate versions are required: 8.17, 9.0, 9.5, 10.0, 10.8, and 11.0. |

Upgrading between editions

GitLab comes in two flavors: Community Edition which is MIT licensed,
and Enterprise Edition which builds on top of the Community Edition and
includes extra features mainly aimed at organizations with more than 100 users.

Below you can find some guides to help you change GitLab editions.

Community to Enterprise Edition

NOTE:
The following guides are for subscribers of the Enterprise Edition only.

If you wish to upgrade your GitLab installation from Community to Enterprise
Edition, follow the guides below based on the installation method:

  • Source CE to EE update guides — The steps are very similar
    to a version upgrade: stop the server, get the code, update configuration files for
    the new functionality, install libraries and do migrations, update the init
    script, start the application and check its status.
  • Omnibus CE to EE — Follow this guide to update your Omnibus
    GitLab Community Edition to the Enterprise Edition.
  • Docker CE to EE —
    Follow this guide to update your GitLab Community Edition container to an Enterprise Edition container.

Enterprise to Community Edition

To downgrade your Enterprise Edition installation back to Community
Edition, you can follow this guide to make the process as smooth as
possible.

Version-specific upgrading instructions

Each month, major, minor, or patch releases of GitLab are published along with a
release post.
You should read the release posts for all versions you’re passing over.
At the end of major and minor release posts, there are three sections to look for specifically:

  • Deprecations
  • Removals
  • Important notes on upgrading

These include:

  • Steps you must perform as part of an upgrade.
    For example 8.12
    required the Elasticsearch index to be recreated. Any older version of GitLab upgrading to 8.12 or later would require this.
  • Changes to the versions of software we support such as
    ceasing support for IE11 in GitLab 13.

Apart from the instructions in this section, you should also check the
installation-specific upgrade instructions, based on how you installed GitLab:

  • Linux packages (Omnibus GitLab)
  • Helm charts

NOTE:
The specific information that follows about Ruby and Git versions does not apply to Omnibus installations
and Helm chart deployments. They ship with the appropriate Ruby and Git versions and do not use the system binaries for Ruby and Git. There is no need to install Ruby or Git when using these two approaches.

15.7.0

  • This version validates a NOT NULL DB constraint on the issues.work_item_type_id column.
    To upgrade to this version, no records with a NULL work_item_type_id should exist on the issues table.
    There are multiple BackfillWorkItemTypeIdForIssues background migrations that will be finalized with
    the EnsureWorkItemTypeBackfillMigrationFinished post-deploy migration.

  • GitLab 15.4.0 introduced a batched background migration to
    backfill namespace_id values on the issues table. This
    migration might take multiple hours or days to complete on larger GitLab instances. Make sure the migration
    has completed successfully before upgrading to 15.7.0.

  • A database constraint is added, specifying that the namespace_id column on the issues
    table has no NULL values.

    • If the namespace_id batched background migration from 15.4 failed (see above) then the 15.7 upgrade
      fails with a database migration error.

    • On GitLab instances with large issues tables, validating this constraint causes the upgrade to take
      longer than usual. All database changes need to complete within a one-hour limit:

      FATAL: Mixlib::ShellOut::CommandTimeout: rails_migration[gitlab-rails]
      [..]
      Mixlib::ShellOut::CommandTimeout: Command timed out after 3600s:

      A workaround exists to complete the data change and the upgrade manually.

  • The default Sidekiq max_concurrency has been changed to 20. This is now
    consistent in our documentation and product defaults.

    For example, previously:

    • Omnibus GitLab default (sidekiq['max_concurrency']): 50
    • From source installation default: 50
    • Helm chart default (gitlab.sidekiq.concurrency): 25

    Reference architectures still use a default of 10 as this is set specifically
    for those configurations.

    Sites that have configured max_concurrency will not be affected by this change.
    Read more about the Sidekiq concurrency setting.
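
For Omnibus installations that prefer to pin the value explicitly rather than rely on the new default, a minimal sketch of the relevant /etc/gitlab/gitlab.rb setting (the value 20 simply mirrors the new default and is only illustrative):

# /etc/gitlab/gitlab.rb
sidekiq['max_concurrency'] = 20

Run sudo gitlab-ctl reconfigure afterwards for the change to take effect.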

15.6.0

  • You should use one of the officially supported PostgreSQL versions. Some database migrations can cause stability and performance issues with older PostgreSQL versions.

  • Git 2.37.0 and later is required by Gitaly. For installations from source, we recommend you use the Git version provided by Gitaly.

  • A database change to modify the behavior of four indexes fails on instances
    where these indexes do not exist:

    Caused by:
    PG::UndefinedTable: ERROR:  relation "index_issues_on_title_trigram" does not exist

    The other three indexes are: index_merge_requests_on_title_trigram, index_merge_requests_on_description_trigram,
    and index_issues_on_description_trigram.

    This issue was fixed in GitLab 15.7 and backported
    to GitLab 15.6.2. The issue can also be worked around:
    read about how to create these indexes.
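
Before upgrading, you can check whether the four indexes exist; a minimal sketch for Omnibus installations, run on the database node:

sudo gitlab-psql -c "SELECT indexname FROM pg_indexes WHERE indexname IN ('index_issues_on_title_trigram', 'index_issues_on_description_trigram', 'index_merge_requests_on_title_trigram', 'index_merge_requests_on_description_trigram');"

If any of the four names are missing from the output, apply the workaround linked above before upgrading.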

15.5.0

  • GitLab 15.4.0 introduced a default Sidekiq routing rule that routes all jobs to the default queue. For instances using queue selectors, this will cause performance problems as some Sidekiq processes will be idle.
    • The default routing rule has been reverted in 15.5.4, so upgrading to that version or later will return to the previous behavior.

    • If a GitLab instance now listens only to the default queue (which is not currently recommended), you must add this routing rule back in /etc/gitlab/gitlab.rb:

      sidekiq['routing_rules'] = [['*', 'default']]

15.4.1

A license caching issue prevents some premium features of GitLab from working correctly if you add a new license. Workarounds for this issue:

  • Restart all Rails, Sidekiq and Gitaly nodes after applying a new license. This clears the relevant license caches and allows all premium features to operate correctly.
  • Upgrade to a version that is not impacted by this issue. The following upgrade paths are available for impacted versions:
    • 15.2.5 -> 15.3.5
    • 15.3.0 — 15.3.4 -> 15.3.5
    • 15.4.1 -> 15.4.3

15.4.0

  • GitLab 15.4.0 includes a batched background migration to remove incorrect values from expire_at in ci_job_artifacts table.
    This migration might take hours or days to complete on larger GitLab instances.

  • By default, Gitaly and Praefect nodes use the time server at pool.ntp.org. If your instance cannot connect to pool.ntp.org, configure the NTP_HOST variable.

  • GitLab 15.4.0 introduced a default Sidekiq routing rule that routes all jobs to the default queue. For instances using queue selectors, this will cause performance problems as some Sidekiq processes will be idle.

    • The default routing rule has been reverted in 15.4.5, so upgrading to that version or later will return to the previous behavior.

    • If a GitLab instance now listens only to the default queue (which is not currently recommended), you must add this routing rule back in /etc/gitlab/gitlab.rb:

      sidekiq['routing_rules'] = [['*', 'default']]
  • New Git repositories created in Gitaly cluster no longer use the @hashed storage path. Server
    hooks for new repositories must be copied into a different location.

  • The structure of /etc/gitlab/gitlab-secrets.json was modified in GitLab 15.4,
    and new configuration was added to gitlab_pages, grafana, and mattermost sections.
    In a highly available or GitLab Geo environment, secrets need to be the same on all nodes.
    If you’re manually syncing the secrets file across nodes, or manually specifying secrets in
    /etc/gitlab/gitlab.rb, make sure /etc/gitlab/gitlab-secrets.json is the same on all nodes.

  • GitLab 15.4.0 introduced a batched background migration to
    backfill namespace_id values on the issues table. This
    migration might take multiple hours or days to complete on larger GitLab instances. Make sure the migration
    has completed successfully before upgrading to 15.7.0 or later.
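
One way to check the state of batched background migrations such as the namespace_id backfill is to query the batched_background_migrations table; a minimal sketch for Omnibus installations (the Admin Area also lists these under Monitoring > Background Migrations):

sudo gitlab-psql -c "SELECT job_class_name, table_name, status FROM batched_background_migrations;"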

15.3.4

A license caching issue prevents some premium features of GitLab from working correctly if you add a new license. Workarounds for this issue:

  • Restart all Rails, Sidekiq and Gitaly nodes after applying a new license. This clears the relevant license caches and allows all premium features to operate correctly.
  • Upgrade to a version that is not impacted by this issue. The following upgrade paths are available for impacted versions:
    • 15.2.5 -> 15.3.5
    • 15.3.0 — 15.3.4 -> 15.3.5
    • 15.4.1 -> 15.4.3

15.3.3

  • In GitLab 15.3.3, SAML Group Links API access_level attribute type changed to integer. See
    the API documentation.

  • A license caching issue prevents some premium features of GitLab from working correctly if you add a new license. Workarounds for this issue:

    • Restart all Rails, Sidekiq and Gitaly nodes after applying a new license. This clears the relevant license caches and allows all premium features to operate correctly.
    • Upgrade to a version that is not impacted by this issue. The following upgrade paths are available for impacted versions:
      • 15.2.5 -> 15.3.5
      • 15.3.0 — 15.3.4 -> 15.3.5
      • 15.4.1 -> 15.4.3

15.3.2

A license caching issue prevents some premium features of GitLab from working correctly if you add a new license. Workarounds for this issue:

  • Restart all Rails, Sidekiq and Gitaly nodes after applying a new license. This clears the relevant license caches and allows all premium features to operate correctly.
  • Upgrade to a version that is not impacted by this issue. The following upgrade paths are available for impacted versions:
    • 15.2.5 -> 15.3.5
    • 15.3.0 — 15.3.4 -> 15.3.5
    • 15.4.1 -> 15.4.3

15.3.1

A license caching issue prevents some premium features of GitLab from working correctly if you add a new license. Workarounds for this issue:

  • Restart all Rails, Sidekiq and Gitaly nodes after applying a new license. This clears the relevant license caches and allows all premium features to operate correctly.
  • Upgrade to a version that is not impacted by this issue. The following upgrade paths are available for impacted versions:
    • 15.2.5 -> 15.3.5
    • 15.3.0 — 15.3.4 -> 15.3.5
    • 15.4.1 -> 15.4.3

15.3.0

  • Incorrect deletion of object storage files on Geo secondary sites can occur in certain situations. See Geo: Incorrect object storage LFS file deletion on secondary site issue in GitLab 15.0.0 to 15.3.2.

  • LFS transfers can redirect to the primary from secondary site mid-session causing failed pull and clone requests when Geo proxying is enabled. Geo proxying is enabled by default in GitLab 15.1 and later. See Geo: LFS transfer redirect to primary from secondary site mid-session issue in GitLab 15.1.0 to 15.3.2 for more details.

  • New Git repositories created in Gitaly cluster no longer use the @hashed storage path. Server
    hooks for new repositories must be copied into a different location.

  • A license caching issue prevents some premium features of GitLab from working correctly if you add a new license. Workarounds for this issue:

    • Restart all Rails, Sidekiq and Gitaly nodes after applying a new license. This clears the relevant license caches and allows all premium features to operate correctly.
    • Upgrade to a version that is not impacted by this issue. The following upgrade paths are available for impacted versions:
      • 15.2.5 -> 15.3.5
      • 15.3.0 — 15.3.4 -> 15.3.5
      • 15.4.1 -> 15.4.3

15.2.5

A license caching issue prevents some premium features of GitLab from working correctly if you add a new license. Workarounds for this issue:

  • Restart all Rails, Sidekiq and Gitaly nodes after applying a new license. This clears the relevant license caches and allows all premium features to operate correctly.
  • Upgrade to a version that is not impacted by this issue. The following upgrade paths are available for impacted versions:
    • 15.2.5 -> 15.3.5
    • 15.3.0 — 15.3.4 -> 15.3.5
    • 15.4.1 -> 15.4.3

15.2.0

  • GitLab installations that have multiple web nodes should be
    upgraded to 15.1 before upgrading to 15.2 (and later) due to a
    configuration change in Rails that can result in inconsistent ETag key
    generation.

  • Some Sidekiq workers were renamed in this release. To avoid any disruption, run the Rake tasks to migrate any pending jobs before starting the upgrade to GitLab 15.2.0.

  • Gitaly now executes its binaries in a runtime location. By default on Omnibus GitLab,
    this path is /var/opt/gitlab/gitaly/run/. If this location is mounted with noexec, merge requests generate the following error:

    fork/exec /var/opt/gitlab/gitaly/run/gitaly-<nnnn>/gitaly-git2go-v15: permission denied

    To resolve this, remove the noexec option from the filesystem mount (a quick way to check the current mount options is sketched after this list). An alternative is to change the Gitaly runtime directory:

    1. Add gitaly['runtime_dir'] = '<PATH_WITH_EXEC_PERM>' to /etc/gitlab/gitlab.rb and specify a location without noexec set.
    2. Run sudo gitlab-ctl reconfigure.
  • Incorrect deletion of object storage files on Geo secondary sites can occur in certain situations. See Geo: Incorrect object storage LFS file deletion on secondary site issue in GitLab 15.0.0 to 15.3.2.

  • LFS transfers can redirect to the primary from secondary site mid-session causing failed pull and clone requests when Geo proxying is enabled. Geo proxying is enabled by default in GitLab 15.1 and later. See Geo: LFS transfer redirect to primary from secondary site mid-session issue in GitLab 15.1.0 to 15.3.2 for more details.
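
For the Gitaly noexec issue noted above, you can inspect the mount options of the runtime directory before upgrading; a minimal sketch using the Omnibus default path:

findmnt --target /var/opt/gitlab/gitaly/run --output TARGET,OPTIONS

If the OPTIONS column includes noexec, either remount without that option or set gitaly['runtime_dir'] as described above.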

15.1.0

  • If you run external PostgreSQL, particularly AWS RDS,
    check you have a PostgreSQL bug fix
    to avoid the database crashing.

  • In GitLab 15.1.0, we are switching Rails ActiveSupport::Digest to use SHA256 instead of MD5.
    This affects ETag key generation for resources such as raw Snippet file
    downloads. To ensure consistent ETag key generation across multiple
    web nodes when upgrading, all servers must first be upgraded to 15.1.Z before
    upgrading to 15.2.0 or later:

    1. Ensure all GitLab web nodes are running GitLab 15.1.Z.
    2. Enable the active_support_hash_digest_sha256 feature flag to switch ActiveSupport::Digest to use SHA256:

      Feature.enable(:active_support_hash_digest_sha256)
    3. Only then, continue to upgrade to later versions of GitLab.
  • Unauthenticated requests to the ciConfig GraphQL field are no longer supported.
    Before you upgrade to GitLab 15.1, add an access token to your requests.
    The user creating the token must have permission to create pipelines in the project.

  • Incorrect deletion of object storage files on Geo secondary sites can occur in certain situations. See Geo: Incorrect object storage LFS file deletion on secondary site issue in GitLab 15.0.0 to 15.3.2.

  • LFS transfers can redirect to the primary from secondary site mid-session causing failed pull and clone requests when Geo proxying is enabled. Geo proxying is enabled by default in GitLab 15.1 and later. See Geo: LFS transfer redirect to primary from secondary site mid-session issue in GitLab 15.1.0 to 15.3.2 for more details.

15.0.0

  • Elasticsearch 6.8 is no longer supported. Before you upgrade to GitLab 15.0, update Elasticsearch to any 7.x version.
  • If you run external PostgreSQL, particularly AWS RDS,
    check you have a PostgreSQL bug fix
    to avoid the database crashing.
  • The use of encrypted S3 buckets with storage-specific configuration is no longer supported after removing support for using background_upload.
  • The certificate-based Kubernetes integration (DEPRECATED) is disabled by default, but you can re-enable it through the certificate_based_clusters feature flag until GitLab 16.0.
  • When you use the GitLab Helm Chart project with a custom serviceAccount, ensure it has get and list permissions for the serviceAccount and secret resources.
  • The custom_hooks_dir setting for configuring global server hooks is now configured in
    Gitaly. The previous implementation in GitLab Shell was removed in GitLab 15.0. With this change, global server hooks are stored only inside a subdirectory named after the
    hook type. Global server hooks can no longer be a single hook file in the root of the custom hooks directory. For example, you must use <custom_hooks_dir>/<hook_name>.d/* rather
    than <custom_hooks_dir>/<hook_name>.
  • Incorrect deletion of object storage files on Geo secondary sites can occur in certain situations. See Geo: Incorrect object storage LFS file deletion on secondary site issue in GitLab 15.0.0 to 15.3.2.
  • The FF_GITLAB_REGISTRY_HELPER_IMAGE feature flag is removed and helper images are always pulled from GitLab Registry.
  • The AES256-GCM-SHA384 SSL cipher is no longer allowed by NGINX.
    See how you can add the cipher back to the allow list.

14.10.0

  • Before upgrading to GitLab 14.10, you must already have the latest 14.9.Z installed on your instance.
    The upgrade to GitLab 14.10 executes a concurrent index drop of unneeded
    entries from the ci_job_artifacts database table. This could potentially run for multiple minutes, especially if the table has a lot of
    traffic and the migration is unable to acquire a lock. It is advised to let this process finish as restarting may result in data loss.

  • If you run external PostgreSQL, particularly AWS RDS,
    check you have a PostgreSQL bug fix
    to avoid the database crashing.

  • Upgrading to patch level 14.10.3 or later might encounter a one-hour timeout due to a long running database data change,
    if it was not completed while running GitLab 14.9.

    FATAL: Mixlib::ShellOut::CommandTimeout: rails_migration[gitlab-rails]
    (gitlab::database_migrations line 51) had an error:
    [..]
    Mixlib::ShellOut::CommandTimeout: Command timed out after 3600s:

    A workaround exists to complete the data change and the upgrade manually.

14.9.0

  • Database changes made by the upgrade to GitLab 14.9 can take hours or days to complete on larger GitLab instances.
    These batched background migrations update whole database tables to ensure that a corresponding
    record exists in the namespaces table for each record in the projects table.

    After you update to 14.9.0 or a later 14.9 patch version,
    batched background migrations must finish
    before you update to a later version.

    If the migrations are not finished and you try to update to a later version,
    you see errors like:

    Expected batched background migration for the given configuration to be marked as 'finished', but it is 'active':

    Or

    Error executing action `run` on resource 'bash[migrate gitlab-rails database]'
    ================================================================================
    
    Mixlib::ShellOut::ShellCommandFailed
    ------------------------------------
    Command execution failed. STDOUT/STDERR suppressed for sensitive resource
  • GitLab 14.9.0 includes a
    background migration ResetDuplicateCiRunnersTokenValuesOnProjects
    that may remain stuck permanently in a pending state.

    To clean up this stuck job, run the following in the GitLab Rails Console:

    Gitlab::Database::BackgroundMigrationJob.pending.where(class_name: "ResetDuplicateCiRunnersTokenValuesOnProjects").find_each do |job|
      puts Gitlab::Database::BackgroundMigrationJob.mark_all_as_succeeded("ResetDuplicateCiRunnersTokenValuesOnProjects", job.arguments)
    end
  • If you run external PostgreSQL, particularly AWS RDS,
    check you have a PostgreSQL bug fix
    to avoid the database crashing.

14.8.0

  • If upgrading from a version earlier than 14.6.5, 14.7.4, or 14.8.2, review the Critical Security Release: 14.8.2, 14.7.4, and 14.6.5 blog post.
    Updating to 14.8.2 or later resets runner registration tokens for your groups and projects.

  • The agent server for Kubernetes is enabled by default
    on Omnibus installations. If you run GitLab at scale,
    such as the reference architectures,
    you must disable the agent on the following server types, if the agent is not required.

    • Praefect
    • Gitaly
    • Sidekiq
    • Redis (if configured using redis['enable'] = true and not via roles)
    • Container registry
    • Any other server types based on roles(['application_role']), such as the GitLab Rails nodes

    The reference architectures have been updated
    with this configuration change and a specific role for standalone Redis servers.

    Steps to disable the agent:

    1. Add gitlab_kas['enable'] = false to gitlab.rb.
    2. If the server is already upgraded to 14.8, run gitlab-ctl reconfigure.
  • GitLab 14.8.0 includes a
    background migration PopulateTopicsNonPrivateProjectsCount
    that may remain stuck permanently in a pending state.

    To clean up this stuck job, run the following in the GitLab Rails Console:

        Gitlab::Database::BackgroundMigrationJob.pending.where(class_name: "PopulateTopicsNonPrivateProjectsCount").find_each do |job|
          puts Gitlab::Database::BackgroundMigrationJob.mark_all_as_succeeded("PopulateTopicsNonPrivateProjectsCount", job.arguments)
        end
  • If upgrading from a version earlier than 14.3.0, to avoid
    an issue with job retries, first upgrade
    to GitLab 14.7.x and make sure all batched migrations have finished.

  • If upgrading from version 14.3.0 or later, you might notice a failed
    batched migration named
    BackfillNamespaceIdForNamespaceRoute. You can ignore
    this. Retry it after you upgrade to version 14.9.x.

  • If you run external PostgreSQL, particularly AWS RDS,
    check you have a PostgreSQL bug fix
    to avoid the database crashing.

14.7.0

  • See LFS objects import and mirror issue in GitLab 14.6.0 to 14.7.2.

  • If upgrading from a version earlier than 14.6.5, 14.7.4, or 14.8.2, review the Critical Security Release: 14.8.2, 14.7.4, and 14.6.5 blog post.
    Updating to 14.7.4 or later resets runner registration tokens for your groups and projects.

  • GitLab 14.7 introduced a change where Gitaly expects persistent files in the /tmp directory.
    When using the noatime mount option on /tmp in a node running Gitaly, most Linux distributions
    run into an issue with Git server hooks getting deleted.
    These conditions are present in the default Amazon Linux configuration.

    If your Linux distribution manages files in /tmp with the tmpfiles.d service, you
    can override the behavior of tmpfiles.d for the Gitaly files and avoid this issue:

    sudo printf "x /tmp/gitaly-%s-*n" hooks git-exec-path >/etc/tmpfiles.d/gitaly-workaround.conf

    This issue is fixed in GitLab 14.10 and later when using the Gitaly runtime directory
    to specify a location to store persistent files.

14.6.0

  • See LFS objects import and mirror issue in GitLab 14.6.0 to 14.7.2.
  • If upgrading from a version earlier than 14.6.5, 14.7.4, or 14.8.2, review the Critical Security Release: 14.8.2, 14.7.4, and 14.6.5 blog post.
    Updating to 14.6.5 or later resets runner registration tokens for your groups and projects.

14.5.0

  • When make is run, Gitaly builds are now created in _build/bin and no longer in the root directory of the source directory. If you
    are using a source install, update paths to these binaries in your systemd unit files
    or init scripts by following the documentation.

  • Connections between Workhorse and Gitaly use the Gitaly backchannel protocol by default. If you deployed a gRPC proxy between Workhorse and Gitaly,
    Workhorse can no longer connect. As a workaround, disable the temporary workhorse_use_sidechannel
    feature flag. If you need a proxy between Workhorse and Gitaly, use a TCP proxy. If you have feedback about this change, go to this issue.

  • In 14.1 we introduced a background migration that changes how we store merge request diff commits,
    to significantly reduce the amount of storage needed.
    In 14.5 we introduce a set of migrations that wrap up this process by making sure
    that all remaining jobs over the merge_request_diff_commits table are completed.
    These jobs have already been processed in most cases so that no extra time is necessary during an upgrade to 14.5.
    However, if there are remaining jobs or you haven’t already upgraded to 14.1,
    the deployment may take multiple hours to complete.

    All merge request diff commits automatically incorporate these changes, and there are no
    additional requirements to perform the upgrade.
    Existing data in the merge_request_diff_commits table remains unpacked until you run VACUUM FULL merge_request_diff_commits.
    However, the VACUUM FULL operation locks and rewrites the entire merge_request_diff_commits table,
    so the operation takes some time to complete and it blocks access to this table until the end of the process.
    We advise you to only run this command while GitLab is not actively used or it is taken offline for the duration of the process.
    The time it takes to complete depends on the size of the table, which can be obtained by using select pg_size_pretty(pg_total_relation_size('merge_request_diff_commits'));.

    For more information, refer to this issue.

  • GitLab 14.5.0 includes a
    background migration UpdateVulnerabilityOccurrencesLocation
    that may remain stuck permanently in a pending state when the instance lacks records that match the migration’s target.

    To clean up this stuck job, run the following in the GitLab Rails Console:

        Gitlab::Database::BackgroundMigrationJob.pending.where(class_name: "UpdateVulnerabilityOccurrencesLocation").find_each do |job|
          puts Gitlab::Database::BackgroundMigrationJob.mark_all_as_succeeded("UpdateVulnerabilityOccurrencesLocation", job.arguments)
        end
  • Upgrading to 14.5 (or later) might encounter a one hour timeout
    owing to a long running database data change.

    FATAL: Mixlib::ShellOut::CommandTimeout: rails_migration[gitlab-rails]
    (gitlab::database_migrations line 51) had an error:
    [..]
    Mixlib::ShellOut::CommandTimeout: Command timed out after 3600s:

    There is a workaround to complete the data change and the upgrade manually.

14.4.4

  • For zero-downtime upgrades on a GitLab cluster with separate Web and API nodes, you must enable the paginated_tree_graphql_query feature flag before upgrading GitLab Web nodes to 14.4.
    This is because we enabled paginated_tree_graphql_query by default in 14.4, so if GitLab UI is on 14.4 and its API is on 14.3, the frontend has this feature enabled but the backend has it disabled. This results in the following error:

    bundle.esm.js:63 Uncaught (in promise) Error: GraphQL error: Field 'paginatedTree' doesn't exist on type 'Repository'
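
A minimal sketch of enabling the flag from a Rails console (sudo gitlab-rails console) before the Web nodes move to 14.4; feature flags are stored in the database, so enabling it once applies to the whole instance:

Feature.enable(:paginated_tree_graphql_query)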

14.4.0

  • Git 2.33.x and later is required. We recommend you use the
    Git version provided by Gitaly.

  • See Maintenance mode issue in GitLab 13.9 to 14.4.

  • After enabling database load balancing by default in 14.4.0, we found an issue where
    cron jobs would not work if the connection to PostgreSQL was severed,
    as Sidekiq would continue using a bad connection. Geo and other features that rely on
    cron jobs running regularly do not work until Sidekiq is restarted. We recommend
    upgrading to GitLab 14.4.3 and later if this issue affects you.

  • After enabling database load balancing by default in 14.4.0, we found an issue where
    Database load balancing does not work with an AWS Aurora cluster.
    We recommend moving your databases from Aurora to RDS for PostgreSQL before
    upgrading. Refer to Moving GitLab databases to a different PostgreSQL instance.

  • GitLab 14.4.0 includes a
    background migration PopulateTopicsTotalProjectsCountCache
    that may remain stuck permanently in a pending state when the instance lacks records that match the migration’s target.

    To clean up this stuck job, run the following in the GitLab Rails Console:

        Gitlab::Database::BackgroundMigrationJob.pending.where(class_name: "PopulateTopicsTotalProjectsCountCache").find_each do |job|
          puts Gitlab::Database::BackgroundMigrationJob.mark_all_as_succeeded("PopulateTopicsTotalProjectsCountCache", job.arguments)
        end

14.3.0

  • Instances running 14.0.0 — 14.0.4 should not upgrade directly to GitLab 14.2 or later.

  • Ensure batched background migrations finish before upgrading
    to 14.3.Z from earlier GitLab 14 releases.

  • Ruby 2.7.4 is required. Refer to the Ruby installation instructions
    for how to proceed.

  • GitLab 14.3.0 contains post-deployment migrations to address the Primary Key overflow risk for the tables with an integer PK listed below:

    • ci_builds.id
    • ci_builds.stage_id
    • ci_builds_metadata
    • taggings
    • events

    If the migrations are executed as part of a no-downtime deployment, there’s a risk of failure due to lock conflicts with the application logic, resulting in lock timeout or deadlocks. In each case, these migrations are safe to re-run until successful:

    # For Omnibus GitLab
    sudo gitlab-rake db:migrate
    
    # For source installations
    sudo -u git -H bundle exec rake db:migrate RAILS_ENV=production
  • After upgrading to 14.3, ensure that all the MigrateMergeRequestDiffCommitUsers background
    migration jobs have completed before continuing with upgrading to GitLab 14.5 or later.
    This is especially important if your GitLab instance has a large
    merge_request_diff_commits table. Any pending
    MigrateMergeRequestDiffCommitUsers background migration jobs are
    foregrounded in GitLab 14.5, and may take a long time to complete.
    You can check the count of pending jobs for
    MigrateMergeRequestDiffCommitUsers by using the PostgreSQL console (or sudo gitlab-psql):

    select status, count(*) from background_migration_jobs 
    where class_name = 'MigrateMergeRequestDiffCommitUsers' group by status;

    As jobs are completed, the database records change from 0 (pending) to 1. If the number of
    pending jobs doesn’t decrease after a while, it’s possible that the
    MigrateMergeRequestDiffCommitUsers background migration jobs have failed. You
    can check for errors in the Sidekiq logs:

    sudo grep MigrateMergeRequestDiffCommitUsers /var/log/gitlab/sidekiq/current | grep -i error

    If needed, you can attempt to run the MigrateMergeRequestDiffCommitUsers background
    migration jobs manually in the GitLab Rails Console.
    This can be done using Sidekiq asynchronously, or by using a Rails process directly:

    • Using Sidekiq to schedule jobs asynchronously:

      # For the first run, only attempt to execute 1 migration. If successful, increase
      # the limit for subsequent runs
      limit = 1
      
      jobs = Gitlab::Database::BackgroundMigrationJob.for_migration_class('MigrateMergeRequestDiffCommitUsers').pending.to_a
      
      pp "#{jobs.length} jobs remaining"
      
      jobs.first(limit).each do |job|
        BackgroundMigrationWorker.perform_in(5.minutes, 'MigrateMergeRequestDiffCommitUsers', job.arguments)
      end

      NOTE:
      The queued jobs can be monitored using Sidekiq’s admin panel, which can be accessed at the /admin/sidekiq endpoint URI.

    • Using a Rails process to run jobs synchronously:

      def process(concurrency: 1)
        queue = Queue.new
      
        Gitlab::Database::BackgroundMigrationJob
          .where(class_name: 'MigrateMergeRequestDiffCommitUsers', status: 0)
          .each { |job| queue << job }
      
        concurrency
          .times
          .map do
            Thread.new do
              Thread.abort_on_exception = true
      
              loop do
                job = queue.pop(true)
                time = Benchmark.measure do
                  Gitlab::BackgroundMigration::MigrateMergeRequestDiffCommitUsers
                    .new
                    .perform(*job.arguments)
                end
      
                puts "#{job.id} finished in #{time.real.round(2)} seconds"
              rescue ThreadError
                break
              end
            end
          end
          .each(&:join)
      end
      
      ActiveRecord::Base.logger.level = Logger::ERROR
      process

      NOTE:
      When using Rails to execute these background migrations synchronously, make sure that the machine running the process has sufficient resources to handle the task. If the process gets terminated, it’s likely due to insufficient memory available. If your SSH session times out after a while, it might be necessary to run the previous code by using a terminal multiplexer like screen or tmux.

  • See Maintenance mode issue in GitLab 13.9 to 14.4.

  • You may see the following error when setting up two factor authentication (2FA) for accounts
    that authenticate using an LDAP password:

    You must provide a valid current password
    • The error occurs because verification is incorrectly performed against accounts’
      randomly generated internal GitLab passwords, not the LDAP passwords.
    • This is fixed in GitLab 14.5.0 and backported to 14.4.3.
    • Workarounds:
      • Instead of upgrading to GitLab 14.3.x to comply with the supported upgrade path:

        1. Upgrade to 14.4.5.
        2. Make sure the MigrateMergeRequestDiffCommitUsers background migration has finished.
        3. Upgrade to GitLab 14.5 or later.
      • Reset the random password for affected accounts, using the Rake task:

        sudo gitlab-rake "gitlab:password:reset[user_handle]"

14.2.0

  • Instances running 14.0.0 — 14.0.4 should not upgrade directly to GitLab 14.2 or later.

  • Ensure batched background migrations finish before upgrading
    to 14.2.Z from earlier GitLab 14 releases.

  • GitLab 14.2.0 contains background migrations to address the Primary Key overflow risk for the tables with an integer PK listed below:

    • ci_build_needs
    • ci_build_trace_chunks
    • ci_builds_runner_session
    • deployments
    • geo_job_artifact_deleted_events
    • push_event_payloads
    • ci_job_artifacts:

      • Finalize job_id conversion to bigint for ci_job_artifacts
      • Finalize ci_job_artifacts conversion to bigint

    If the migrations are executed as part of a no-downtime deployment, there’s a risk of failure due to lock conflicts with the application logic, resulting in lock timeout or deadlocks. In each case, these migrations are safe to re-run until successful:

    # For Omnibus GitLab
    sudo gitlab-rake db:migrate
    
    # For source installations
    sudo -u git -H bundle exec rake db:migrate RAILS_ENV=production
  • See Maintenance mode issue in GitLab 13.9 to 14.4.

  • GitLab 14.2.0 includes a
    background migration BackfillDraftStatusOnMergeRequests
    that may remain stuck permanently in a pending state when the instance lacks records that match the migration’s target.

    To clean up this stuck job, run the following in the GitLab Rails Console:

    Gitlab::Database::BackgroundMigrationJob.pending.where(class_name: "BackfillDraftStatusOnMergeRequests").find_each do |job|
      puts Gitlab::Database::BackgroundMigrationJob.mark_all_as_succeeded("BackfillDraftStatusOnMergeRequests", job.arguments)
    end

14.1.0

  • Instances running 14.0.0 — 14.0.4 should not upgrade directly to GitLab 14.2 or later
    but can upgrade to 14.1.Z.

    It is not required for instances already running 14.0.5 (or later) to stop at 14.1.Z.
    14.1 is included on the upgrade path for the broadest compatibility
    with self-managed installations, and to ensure 14.0.0-14.0.4 installations do not
    encounter issues with batched background migrations.

  • Upgrading to GitLab 14.5 (or later) may take a lot longer if you do not upgrade to at least 14.1
    first. The 14.1 merge request diff commits database migration can take hours to run, but runs in the
    background while GitLab is in use. GitLab instances upgraded directly from 14.0 to 14.5 or later must
    run the migration in the foreground and therefore take a lot longer to complete.

  • See Maintenance mode issue in GitLab 13.9 to 14.4.

14.0.0

Prerequisites:

  • The GitLab 14.0 release post contains several important notes
    about prerequisites, including using Patroni instead of repmgr,
    migrating to hashed storage,
    and migrating to Puma.
  • The support of PostgreSQL 11 has been dropped. Make sure to update your database to version 12 before updating to GitLab 14.0.

Long running batched background database migrations:

  • Database changes made by the upgrade to GitLab 14.0 can take hours or days to complete on larger GitLab instances.
    These batched background migrations update whole database tables to mitigate primary key overflow and must be finished before upgrading to GitLab 14.2 or later.

  • Due to an issue where BatchedBackgroundMigrationWorkers were
    not working
    for self-managed instances, a fix was created
    that requires an update to at least 14.0.5. The fix was also released in 14.1.0.

    After you update to 14.0.5 or a later 14.0 patch version,
    batched background migrations must finish
    before you update to a later version.

    If the migrations are not finished and you try to update to a later version,
    you see an error like:

    Expected batched background migration for the given configuration to be marked as 'finished', but it is 'active':

    See how to resolve this error.

Other issues:

  • In GitLab 13.3 some pipeline processing methods were deprecated
    and this code was completely removed in GitLab 14.0. If you plan to upgrade from
    GitLab 13.2 or older directly to 14.0, this is unsupported.
    You should instead follow a supported upgrade path.
  • See Maintenance mode issue in GitLab 13.9 to 14.4.
  • See Custom Rack Attack initializers if you persist your own custom Rack Attack
    initializers during upgrades.

Upgrading to later 14.Y releases

  • Instances running 14.0.0 — 14.0.4 should not upgrade directly to GitLab 14.2 or later,
    because of batched background migrations.

    1. Upgrade first to either:
      • 14.0.5 or a later 14.0.Z patch release.
      • 14.1.0 or a later 14.1.Z patch release.
    2. Batched background migrations must finish
      before you update to a later version and may take longer than usual.

13.12.0

  • See Maintenance mode issue in GitLab 13.9 to 14.4.

  • Check the GitLab database has no references to legacy storage.
    The GitLab 14.0 pre-install check causes the package update to fail if unmigrated data exists:

    Checking for unmigrated data on legacy storage
    
    Legacy storage is no longer supported. Please migrate your data to hashed storage.
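
To find data still on legacy storage before attempting the jump to 14.0, the hashed-storage Rake tasks can be used; a sketch for Omnibus installations (task names as documented for the hashed-storage migration; availability can vary by version):

sudo gitlab-rake gitlab:storage:legacy_projects
sudo gitlab-rake gitlab:storage:legacy_attachments

If either task reports remaining items, migrate them with sudo gitlab-rake gitlab:storage:migrate_to_hashed before upgrading.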

13.11.0

  • Git 2.31.x and later is required. We recommend you use the
    Git version provided by Gitaly.

  • See Maintenance mode issue in GitLab 13.9 to 14.4.

  • GitLab 13.11 includes a faulty background migration (RescheduleArtifactExpiryBackfillAgain)
    that incorrectly sets the expire_at column in the ci_job_artifacts database table.
    Incorrect expire_at values can potentially cause data loss.

    To prevent this risk of data loss, you must remove the content of the RescheduleArtifactExpiryBackfillAgain
    migration, which makes it a no-op migration. You can repeat the changes from the
    commit that makes the migration no-op in 14.9 and later.
    For more information, see how to disable a data migration.

13.10.0

See Maintenance mode issue in GitLab 13.9 to 14.4.

13.9.0

  • We’ve detected an issue with a column rename
    that prevents upgrades to GitLab 13.9.0, 13.9.1, 13.9.2, and 13.9.3 when following the zero-downtime steps. It is necessary
    to perform the following additional steps for the zero-downtime upgrade:

    1. Before running the final sudo gitlab-rake db:migrate command on the deploy node,
      execute the following queries using the PostgreSQL console (or sudo gitlab-psql)
      to drop the problematic triggers:

      drop trigger trigger_e40a6f1858e6 on application_settings;
      drop trigger trigger_0d588df444c8 on application_settings;
      drop trigger trigger_1572cbc9a15f on application_settings;
      drop trigger trigger_22a39c5c25f3 on application_settings;
    2. Run the final migrations:

      sudo gitlab-rake db:migrate

    If you have already run the final sudo gitlab-rake db:migrate command on the deploy node and have
    encountered the column rename issue, you
    see the following error:

    -- remove_column(:application_settings, :asset_proxy_whitelist)
    rake aborted!
    StandardError: An error has occurred, all later migrations canceled:
    PG::DependentObjectsStillExist: ERROR: cannot drop column asset_proxy_whitelist of table application_settings because other objects depend on it
    DETAIL: trigger trigger_0d588df444c8 on table application_settings depends on column asset_proxy_whitelist of table application_settings

    To work around this bug, follow the previous steps to complete the update.
    More details are available in this issue.

  • See Maintenance mode issue in GitLab 13.9 to 14.4.

  • For GitLab Enterprise Edition customers, we noticed an issue when a subscription expiration is approaching and you create new subgroups and projects. If you fall into that category and get 500 errors, you can work around this issue:

    1. SSH into your GitLab server and open a Rails console:

      sudo gitlab-rails console
    2. Disable the following features:

      Feature.disable(:subscribable_subscription_banner)
      Feature.disable(:subscribable_license_banner)
    3. Restart Puma or Unicorn:

      #For installations using Puma
      sudo gitlab-ctl restart puma
      
      #For installations using Unicorn
      sudo gitlab-ctl restart unicorn

13.8.8

GitLab 13.8 includes a background migration to address an issue with duplicate service records. If duplicate services are present, this background migration must complete before a unique index is applied to the services table, which was introduced in GitLab 13.9. Upgrades from GitLab 13.8 and earlier to later versions must include an intermediate upgrade to GitLab 13.8.8 and must wait until the background migrations complete before proceeding.

If duplicate services are still present, an upgrade to 13.9.x or later results in a failed upgrade with the following error:

PG::UniqueViolation: ERROR:  could not create unique index "index_services_on_project_id_and_type_unique"
DETAIL:  Key (project_id, type)=(NNN, ServiceName) is duplicated.
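
To see whether duplicate service records exist before upgrading past 13.8.8, a minimal sketch for Omnibus installations (the table is named services in 13.x; it was renamed to integrations in later releases):

sudo gitlab-psql -c "SELECT project_id, type, COUNT(*) FROM services GROUP BY project_id, type HAVING COUNT(*) > 1;"

An empty result means the unique index introduced in 13.9 can be created without conflicts.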

13.6.0

Ruby 2.7.2 is required. GitLab does not start with Ruby 2.6.6 or older versions.

The required Git version is Git v2.29 or later.

GitLab 13.6 includes a
background migration BackfillJiraTrackerDeploymentType2
that may remain stuck permanently in a pending state despite completion of work
due to a bug.

To clean up this stuck job, run the following in the GitLab Rails Console:

Gitlab::Database::BackgroundMigrationJob.pending.where(class_name: "BackfillJiraTrackerDeploymentType2").find_each do |job|
  puts Gitlab::Database::BackgroundMigrationJob.mark_all_as_succeeded("BackfillJiraTrackerDeploymentType2", job.arguments)
end

13.4.0

GitLab 13.4.0 includes a background migration to move all remaining repositories in legacy storage to hashed storage. There are known issues with this migration which are fixed in GitLab 13.5.4 and later. If possible, skip 13.4.0 and upgrade to 13.5.4 or later instead. The migration can take quite a while to run, depending on how many repositories must be moved. Be sure to check that all background migrations have completed before upgrading further.

13.3.0

The recommended Git version is Git v2.28. The minimum required version of Git
v2.24 remains the same.

13.2.0

GitLab installations that have multiple web nodes must be
upgraded to 13.1 before upgrading to 13.2 (and later) due to a
breaking change in Rails that can result in authorization issues.

GitLab 13.2.0 remediates an email verification bypass.
After upgrading, if some of your users are unexpectedly encountering 404 or 422 errors when signing in,
or «blocked» messages when using the command line,
their accounts may have been un-confirmed.
In that case, ask them to check their email for a re-confirmation link.
For more information, see our discussion of Email confirmation issues.

GitLab 13.2.0 relies on the btree_gist extension for PostgreSQL. For installations with an externally managed PostgreSQL setup, make sure to
install the extension manually before upgrading GitLab if the database user for GitLab
is not a superuser. This is not necessary for installations using a GitLab managed PostgreSQL database.
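
For an externally managed database where the GitLab user is not a superuser, a database superuser can create the extension ahead of the upgrade. A minimal sketch; gitlabhq_production is the default database name and is an assumption here:

sudo -u postgres psql -d gitlabhq_production -c "CREATE EXTENSION IF NOT EXISTS btree_gist;"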

13.1.0

In 13.1.0, you must upgrade to either:

  • At least Git v2.24 (previously, the minimum required version was Git v2.22).
  • The recommended Git v2.26.

Failure to do so results in internal errors in the Gitaly service in some RPCs due
to the use of the new --end-of-options Git flag.

Additionally, in GitLab 13.1.0, the version of
Rails was upgraded from 6.0.3 to 6.0.3.1.
The Rails upgrade included a change to CSRF token generation which is
not backwards-compatible — GitLab servers with the new Rails version
generate CSRF tokens that are not recognizable by GitLab servers
with the older Rails version — which could cause non-GET requests to
fail for multi-node GitLab installations.

So, if you are using multiple Rails servers and specifically upgrading from 13.0,
all servers must first be upgraded to 13.1.Z before upgrading to 13.2.0 or later:

  1. Ensure all GitLab web nodes are running GitLab 13.1.Z.

  2. Enable the global_csrf_token feature flag to enable new
    method of CSRF token generation:

    Feature.enable(:global_csrf_token)
  3. Only then, continue to upgrade to later versions of GitLab.

Custom Rack Attack initializers

From GitLab 13.0.1, custom Rack Attack initializers (config/initializers/rack_attack.rb) are replaced with initializers
supplied with GitLab during upgrades. We recommend you use these GitLab-supplied initializers.

If you persist your own Rack Attack initializers between upgrades, you might
get 500 errors when upgrading to GitLab 14.0 and later.

12.10.0

  • The final patch release (12.10.14)
    has a regression affecting maven package uploads.
    If you use this feature and must stay on 12.10 while preparing to upgrade to 13.0:

    • Upgrade to 12.10.13 instead.
    • Upgrade to 13.0.14 as soon as possible.
  • GitLab 13.0 requires PostgreSQL 11.

    • 12.10 is the final release that shipped with PostgreSQL 9.6, 10, and 11.
    • You should make sure that your database is PostgreSQL 11 on GitLab 12.10 before upgrading to 13.0. This upgrade requires downtime.
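
To confirm which PostgreSQL version the instance is actually running, and to upgrade the bundled database on Omnibus installations, a minimal sketch (gitlab-ctl pg-upgrade applies only to the Omnibus-managed database):

sudo gitlab-psql -c "SELECT version();"
sudo gitlab-ctl pg-upgrade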

12.2.0

In 12.2.0, we enabled Rails’ authenticated cookie encryption. Old sessions are
automatically upgraded.

However, session cookie downgrades are not supported. So after upgrading to 12.2.0,
any downgrade results in all sessions being invalidated and all users being logged out.

12.1.0

  • If you are planning to upgrade from 12.0.Z to 12.10.Z, it is necessary to
    perform an intermediary upgrade to 12.1.Z before upgrading to 12.10.Z to
    avoid issues like #215141.

  • Support for MySQL was removed in GitLab 12.1. Existing users using GitLab with
    MySQL/MariaDB should
    migrate to PostgreSQL
    before upgrading.

12.0.0

In 12.0.0 we made various database related changes. These changes require that
users first upgrade to the latest 11.11 patch release. After upgrading to 11.11.Z,
users can upgrade to 12.0.Z. Failure to do so may result in database migrations
not being applied, which could lead to application errors.

It is also required that you upgrade to 12.0.Z before moving to a later version
of 12.Y.

Example 1: you are currently using GitLab 11.11.8, which is the latest patch
release for 11.11.Z. You can upgrade as usual to 12.0.Z.

Example 2: you are currently using a version of GitLab 10.Y. To upgrade, first
upgrade to the last 10.Y release (10.8.7) then the last 11.Y release (11.11.8).
After upgrading to 11.11.8 you can safely upgrade to 12.0.Z.

See our documentation on upgrade paths
for more information.

Change to Praefect-generated replica paths in GitLab 15.3

New Git repositories created in Gitaly cluster no longer use the @hashed storage path.

Praefect now generates replica paths for use by Gitaly cluster.
This change is a pre-requisite for Gitaly cluster atomically creating, deleting, and
renaming Git repositories.

To identify the replica path, query the Praefect repository metadata
and pass the @hashed storage path to -relative-path.

With this information, you can correctly install server hooks.

Maintenance mode issue in GitLab 13.9 to 14.4

When Maintenance mode is enabled, users cannot sign in with SSO, SAML, or LDAP.

Users who were signed in before Maintenance mode was enabled, continue to be signed in. If the administrator who enabled Maintenance mode loses their session, then they can’t disable Maintenance mode via the UI. In that case, you can disable Maintenance mode via the API or Rails console.

This bug was fixed in GitLab 14.5.0 and backported into 14.4.3 and 14.3.5.

LFS objects import and mirror issue in GitLab 14.6.0 to 14.7.2

When Geo is enabled, LFS objects fail to be saved for imported or mirrored projects.

This bug was fixed in GitLab 14.8.0 and backported into 14.7.3.

PostgreSQL segmentation fault issue

If you run GitLab with external PostgreSQL, particularly AWS RDS, ensure you upgrade PostgreSQL
to patch levels to a minimum of 12.7 or 13.3 before upgrading to GitLab 14.8 or later.

In 14.8
for GitLab Enterprise Edition and in 15.1
for GitLab Community Edition a GitLab feature called Loose Foreign Keys was enabled.

After it was enabled, we have had reports of unplanned PostgreSQL restarts caused
by a database engine bug that causes a segmentation fault.

Read more in the issue.

Geo: Incorrect object storage LFS file deletion on secondary sites in GitLab 15.0.0 to 15.3.2

Incorrect deletion of object storage files on Geo secondary sites
can occur in GitLab 15.0.0 to 15.3.2 in the following situations:

  • GitLab-managed object storage replication is disabled, and LFS objects are created while importing a project with object storage enabled.
  • GitLab-managed replication to sync object storage is enabled and subsequently disabled.

This issue is resolved in 15.3.3. Customers who have both LFS enabled and LFS objects being replicated across Geo sites
should upgrade directly to 15.3.3 to reduce the risk of data loss on secondary sites.

Geo: LFS transfers redirect to primary from secondary site mid-session in GitLab 15.1.0 to 15.3.2

LFS transfers can redirect to the primary from secondary site mid-session causing failed pull and clone requests in GitLab 15.1.0 to 15.3.2 when Geo proxying is enabled. Geo proxying is enabled by default in GitLab 15.1 and later.

This issue is resolved in GitLab 15.3.3, so customers with the following configuration should upgrade to 15.3.3 or later:

  • LFS is enabled.
  • LFS objects are being replicated across Geo sites.
  • Repositories are being pulled by using a Geo secondary site.

Miscellaneous

  • Managing PostgreSQL extensions

WildTuna

Posted on May 20, 2022 • Updated on May 21, 2022

I recently ran into a problem upgrading GitLab to version 12.6 because of a reconfiguration error triggered by an attempt to renew the Let's Encrypt certificate. As it happened, the upgrade fell on a day when the certificate had already expired. No big deal! Disable the certificate in /etc/gitlab/gitlab.rb and run the reconfiguration manually:

nano /etc/gitlab/gitlab.rb
# Find this line and set it to false
letsencrypt['enable'] = false
# Save the file and run the reconfiguration
gitlab-ctl reconfigure


Now GitLab successfully applies the new settings and can finish the upgrade. Once the upgrade is complete, re-enable Let's Encrypt in /etc/gitlab/gitlab.rb, reconfigure GitLab again and... get the same error again!

Running handlers:
There was an error running gitlab-ctl reconfigure:
letsencrypt_certificate[git.hostname.com] (letsencrypt::http_authorization line 5) had an error: RuntimeError: acme_certificate[staging] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/letsencrypt/resources/certificate.rb line 25) had an error: RuntimeError: ruby_block[create certificate for git.lapaygroup.ru] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/acme/resources/certificate.rb line 108) had an error: RuntimeError: [git.hostname.com] Validation failed, unable to request certificate


Disappointed, I went off to google. It turned out I was not the only one with this problem, but there was no clear solution. After a couple of hours of digging through the GitLab bug tracker and fruitless attempts to fix it, I found my recipe, and it is very simple:

  1. Check that ports 80 and 443 are open;
  2. Check the domain on the Let's Encrypt site;
  3. Set the following parameters in /etc/gitlab/gitlab.rb:

    nginx['redirect_http_to_https'] = true
    nginx['redirect_http_to_https_port'] = 80
  4. Delete the problematic certificate files from the /etc/gitlab/ssl directory;
  5. Request a new certificate: gitlab-ctl renew-le-certs.
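
A consolidated sketch of steps 4 and 5; the certificate file names follow the Omnibus convention /etc/gitlab/ssl/<hostname>.crt and .key and are assumptions here:

sudo rm /etc/gitlab/ssl/git.hostname.com.crt /etc/gitlab/ssl/git.hostname.com.key
sudo gitlab-ctl reconfigure
sudo gitlab-ctl renew-le-certs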

In my case, port 80 turned out to be closed, and something had gone wrong with the permissions on the certificate files in /etc/gitlab/ssl: during the upgrade GitLab could not replace them.

I hope my recipe helps you solve this problem too!

This post is about an attempt to upgrade a GitLab server installation from version 8.x to 10.x. I explain the issues I ran into and how I solved them.
TL;DR? Skip to the solution.

If you have a GitLab server set up, visit https://URL_to_your_gitlab/help on your own installation. It shows information about your GitLab server, such as its version. The server I got my hands on had not been upgraded for a while, so many packages were out of date. Running apt upgrade started upgrading the packages as usual and downloaded the latest gitlab-ce version, 10.x.x; the installed version was 8.x.x. When the package manager reached the point of setting up gitlab-ce, it started the configuration and everything went well until it came to the PostgreSQL database server, which also needed an upgrade. Normally the data and tables are migrated to the new structure introduced in the newer version, but this time the configuration produced errors:

Checking PostgreSQL executables: OK
Shutting down all GitLab services except those needed for migrations
ok: down: gitlab-workhorse: 4120s, normally up
ok: down: logrotate: 4120s, normally up
ok: down: nginx: 4119s, normally up
ok: down: postgresql: 0s, normally up, want up
ok: down: redis: 0s, normally up
ok: down: sidekiq: 4119s, normally up
ok: down: unicorn: 4119s, normally up
timeout: down: postgresql: 0s, normally up, want up
ok: run: redis: (pid 48594) 0s

down: postgresql: 1s, normally up, want up; run: log: (pid 1075) 38594s
down: postgresql: 1s, normally up, want up; run: log: (pid 1075) 38595s
down: postgresql: 1s, normally up, want up; run: log: (pid 1075) 38596s
down: postgresql: 1s, normally up, want up; run: log: (pid 1075) 38597s
down: postgresql: 1s, normally up, want up; run: log: (pid 1075) 38598s

. . .

and at the end:

Errors were encountered while processing:
gitlab-ce

Even after manually starting postgresql, it did not come up:

$ sudo gitlab-ctl start postgresql
timeout: down: postgresql: 0s, normally up, want up
$ sudo gitlab-ctl status
run: gitlab-workhorse: (pid 61389) 1255s; run: log: (pid 1073) 50380s
run: logrotate: (pid 61358) 1267s; run: log: (pid 1069) 50380s
run: nginx: (pid 61045) 1481s; run: log: (pid 1076) 50380s
down: postgresql: 0s, normally up, want up; run: log: (pid 64776) 187s
run: redis: (pid 60439) 1730s; run: log: (pid 1071) 50380s
run: sidekiq: (pid 65362) 2s; run: log: (pid 1070) 50380s
run: unicorn: (pid 65335) 10s; run: log: (pid 1068) 50380s

Running sudo gitlab-ctl reconfigure showed this error:

$ sudo gitlab-ctl reconfigure
PG::ConnectionBad: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/opt/gitlab/postgresql/.s.PGSQL.xxxx"?

So something was definitely wrong with the PostgreSQL shipped with GitLab.
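
A few quick checks that can help narrow down where the bundled PostgreSQL is failing (a sketch using standard gitlab-ctl sub-commands):

# Status of the PostgreSQL service only
sudo gitlab-ctl status postgresql
# Follow the PostgreSQL log while trying to start the service
sudo gitlab-ctl tail postgresql
# The Unix socket the error message refers to should appear here once the server is up
ls -l /var/opt/gitlab/postgresql/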

I tried to create a backup with the following command, but it failed.

$ sudo gitlab-rake gitlab:backup:create
rake aborted!
storage "default" is missing a gitaly_address

Checking the version info showed another error:

$ sudo gitlab-rake gitlab:env:info
rake aborted!
Bundler::GemRequireError: There was an error while trying to load the gem 'uglifier'.
Gem Load Error is: Could not find a JavaScript runtime. See https://github.com/rails/execjs for a list of available runtimes.

I searched for this error, found this and this, and learned that a JavaScript runtime (Node.js) now needs to be installed.

sudo apt-get install nodejs

After installing nodejs, sudo gitlab-rake gitlab:env:info showed some information, but nothing about GitLab itself yet:

$ sudo gitlab-rake gitlab:env:info

System information
System: Ubuntu 14.04
Current User: git
Using RVM: no
Ruby Version: 2.3.5p376
Gem Version: 2.6.13
Bundler Version:1.13.7
Rake Version: 12.3.0
Redis Version: 3.2.11
Git Version: 2.14.3
Sidekiq Version:5.0.4
Go Version: unknown
rake aborted!
PG::ConnectionBad: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/opt/gitlab/postgresql/.s.PGSQL.XXXX"?

You can check the installed package version with:

sudo apt-cache policy gitlab-ce | grep Installed
Installed: 10.3.3-ce.0

The main remaining issue was PostgreSQL, so I searched a lot about that too.

In the GitLab PostgreSQL logs at /var/log/gitlab/postgresql/current, I found entries like this:

LOG: unrecognized configuration parameter "unix_socket_directory" in file "/var/opt/gitlab/postgresql/data/postgresql.conf"
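
This parameter turns out to be the key clue: unix_socket_directory was renamed to unix_socket_directories in PostgreSQL 9.3, and the older bundled PostgreSQL was removed in GitLab 10.0 (as the IRC quote further down mentions), so a configuration file written for the old version is most likely why the server refuses to start under the newer binaries. A quick way to confirm the offending setting (a sketch):

grep unix_socket /var/opt/gitlab/postgresql/data/postgresql.conf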

I thought about restoring a GitLab backup. GitLab backups are stored at /var/opt/gitlab/backups.

I tried the following command for the restore. Here 1113118501 is the timestamp of the backup file present in /var/opt/gitlab/backups, e.g. 1113118501_gitlab_backup.tar:

$ sudo gitlab-rake gitlab:backup:restore BACKUP=1113118501
Unpacking backup ... done
GitLab version mismatch:
Your current GitLab version (10.3.3) differs from the GitLab version in the backup!
Please switch to the following version and try again:
version: 8.6.5

Hint: git checkout v8.6.5
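
The git checkout hint applies to installations from source. For an Omnibus package installation, the equivalent would be installing the matching package version before restoring, along the lines of the following sketch (assuming a gitlab-ce package with that exact version string is still available in the repository):

sudo apt-get install gitlab-ce=8.6.5-ce.0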

Solution

Then I ultimately found this link to an IRC chat log on the #gitlab channel and this issue. They said that 8.x is quite an old version and that upgrading directly to 10.x would run you into issues.

It said:

Two versions of PG were included in GitLab through the 9.x releases, but we removed the older one in 10.0 since it’s been deprecated for a while.

gableroux try stepping up to 9.x first then 10.x

So, to install 9.x, I went to this link and opened 9.5.9 for my distro:

https://packages.gitlab.com/gitlab/gitlab-ce/packages/ubuntu/trusty/gitlab-ce_9.5.9-ce.0_amd64.deb
That page lists the steps to install the deb.

The two commands are:

$ curl -s https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash

Detected operating system as Ubuntu/trusty.
Checking for curl...
Detected curl...
Running apt-get update... done.
Installing apt-transport-https... done.
Installing /etc/apt/sources.list.d/gitlab_gitlab-ce.list...done.
Importing packagecloud gpg key... done.
Running apt-get update... done.

The repository is setup! You can now install packages.

And:

$ sudo apt-get install gitlab-ce=9.5.9-ce.0

dpkg: warning: downgrading gitlab-ce from 10.3.3-ce.0 to 9.5.9-ce.0
(Reading database ... 110272 files and directories currently installed.)
Preparing to unpack .../gitlab-ce_9.5.9-ce.0_amd64.deb ...
gitlab preinstall: Automatically backing up only the GitLab SQL database (excluding everything else!)
Dumping database ...
Dumping PostgreSQL database gitlabhq_production ... pg_dump: [archiver (db)] connection to database "gitlabhq_production" failed: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/opt/gitlab/postgresql/.s.PGSQL.5432"?
[FAILED]
Backup failed
gitlab preinstall:
gitlab preinstall: Backup failed! If you want to skip this backup, run the following command and
gitlab preinstall: try again:
gitlab preinstall:
gitlab preinstall: sudo touch /etc/gitlab/skip-auto-migrations
gitlab preinstall:
dpkg: error processing archive /var/cache/apt/archives/gitlab-ce_9.5.9-ce.0_amd64.deb (--unpack):
subprocess new pre-installation script returned error exit status 1
Errors were encountered while processing:
/var/cache/apt/archives/gitlab-ce_9.5.9-ce.0_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

If you look at the error above, it suggests a workaround. So, as instructed, I created the file:

$ sudo touch /etc/gitlab/skip-auto-migrations

and tried installing again:

gitlab
Checking PostgreSQL executables: OK
Found /etc/gitlab/skip-auto-migrations, exiting…

This time it skipped the database migrations.

Then I executed:

$ sudo gitlab-ctl reconfigure

Running handlers complete
Chef Client finished, xxx/xxx resources updated in 01 minutes 00 seconds
gitlab Reconfigured!
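
Since the automatic migrations were skipped during the package installation, it can be worth confirming afterwards that the database schema is up to date. A quick check (a sketch using the standard Rails task exposed through gitlab-rake):

sudo gitlab-rake db:migrate:status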

I restarted GitLab, and this time everything was fine:

$ sudo gitlab-ctl restart

ok: run: gitaly: (pid 107106) 1s
ok: run: gitlab-monitor: (pid 107122) 0s
ok: run: gitlab-workhorse: (pid 107125) 1s
ok: run: logrotate: (pid 107132) 0s
ok: run: nginx: (pid 107138) 1s
ok: run: node-exporter: (pid 107147) 0s
ok: run: postgres-exporter: (pid 107187) 0s
ok: run: postgresql: (pid 107195) 0s
ok: run: prometheus: (pid 107203) 1s
ok: run: redis: (pid 107211) 0s
ok: run: redis-exporter: (pid 107215) 0s
ok: run: sidekiq: (pid 107220) 1s
ok: run: unicorn: (pid 107227) 0s

There are some new services now too, such as gitaly. That component was missing earlier, which is why the gitaly-related error mentioned above appeared. Now everything was fine and GitLab was up and running with a new look and feel.

Now it was time to upgrade to 10.x.

I decided to upgrade PostgreSQL first:

$ sudo gitlab-ctl pg-upgrade

and the upgrade completed successfully. Then I ran:

$ sudo apt upgrade

Unpacking gitlab-ce (10.3.4-ce.0) over (9.5.9-ce.0) ...

Upgrade complete! If your GitLab server is misbehaving try running
sudo gitlab-ctl restart

I restarted as suggested above, but some services did not start, so I reconfigured again.

$ sudo gitlab-ctl reconfigure

which ran some database migrations again. I restarted once more, and everything worked perfectly.
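
To double-check that pg-upgrade really moved the bundled database to the newer PostgreSQL version, one option is to query the running server through the psql wrapper shipped with the Omnibus package (a sketch):

sudo gitlab-psql -c "SHOW server_version;"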

Thanks for reading.
