Source: gitlabhq/merge_request_pipelines.md at master (GitLab CE mirror)

Merge request pipelines (FREE)

Renamed from pipelines for merge requests to merge request pipelines in GitLab 14.8.

You can configure your pipeline to run every time you commit changes to a branch.
This type of pipeline is called a branch pipeline.

Alternatively, you can configure your pipeline to run every time you make changes to the
source branch for a merge request. This type of pipeline is called a merge request pipeline.

Branch pipelines:

  • Run when you push a new commit to a branch.
  • Are the default type of pipeline.
  • Have access to some predefined variables.
  • Have access to protected variables and protected runners.

Merge request pipelines:

  • Do not run by default. The jobs in the CI/CD configuration file must be configured
    to run in merge request pipelines.
  • If configured, merge request pipelines run when you:
    • Create a new merge request from a source branch with one or more commits.
    • Push a new commit to the source branch for a merge request.
    • Select Run pipeline from the Pipelines tab in a merge request. This option
      is only available when merge request pipelines are configured for the pipeline
      and the source branch has at least one commit.
  • Have access to more predefined variables.
  • Do not have access to protected variables or protected runners.

Both of these types of pipelines can appear on the Pipelines tab of a merge request.

Types of merge request pipelines

The three types of merge request pipelines are:

  • Merge request pipelines, which run on the changes in the merge request’s
    source branch. Introduced
    in GitLab 14.9, these pipelines display a merge request label to indicate that the
    pipeline ran only on the contents of the source branch, ignoring the target branch.
    In GitLab 14.8 and earlier, the label is detached.
  • Merged results pipelines, which run on
    the result of combining the source branch’s changes with the target branch.
  • Merge trains, which run when merging multiple merge requests
    at the same time. The changes from each merge request are combined into the
    target branch with the changes in the earlier enqueued merge requests, to ensure
    they all work together.

Prerequisites

To use merge request pipelines:

  • Your project’s CI/CD configuration file must be configured with
    jobs that run in merge request pipelines. To do this, you can use:

    • rules.
    • only/except.
  • You must have at least the Developer role in the
    source project to run a merge request pipeline.
  • Your repository must be a GitLab repository, not an external repository.

Use rules to add jobs

You can use the rules keyword to configure jobs to run in
merge request pipelines. For example:

job1:
  script:
    - echo "This job runs in merge request pipelines"
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'

You can also use the workflow: rules keyword
to configure the entire pipeline to run in merge request pipelines. For example:

workflow:
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'

job1:
  script:
    - echo "This job runs in merge request pipelines"

job2:
  script:
    - echo "This job also runs in merge request pipelines"

Use only to add jobs

You can use the only keyword with merge_requests
to configure jobs to run in merge request pipelines.

job1:
  script:
    - echo "This job runs in merge request pipelines"
  only:
    - merge_requests

Use with forked projects

  • Introduced in GitLab 13.3.
  • Moved to GitLab Premium in 13.9.

External contributors who work in forks can’t create pipelines in the parent project.

A merge request from a fork that is submitted to the parent project triggers a
pipeline that:

  • Is created and runs in the fork (source) project, not the parent (target) project.
  • Uses the fork project’s CI/CD configuration, resources, and project CI/CD variables.

Pipelines for forks display with the fork badge in the parent project:

Pipeline ran in fork

Run pipelines in the parent project (PREMIUM)

Project members in the parent project can trigger a merge request pipeline
for a merge request submitted from a fork project. This pipeline:

  • Is created and runs in the parent (target) project, not the fork (source) project.
  • Uses the CI/CD configuration present in the fork project’s branch.
  • Uses the parent project’s CI/CD settings, resources, and project CI/CD variables.
  • Uses the permissions of the parent project member that triggers the pipeline.

Run pipelines in fork project MRs to ensure that the post-merge pipeline passes in
the parent project. Additionally, if you do not trust the fork project’s runner,
running the pipeline in the parent project uses the parent project’s trusted runners.

WARNING:
Fork merge requests can contain malicious code that tries to steal secrets in the
parent project when the pipeline runs, even before merge. As a reviewer, carefully
check the changes in the merge request before triggering the pipeline. If you trigger
the pipeline by selecting Run pipeline or applying a suggestion, GitLab shows
a warning that you must accept before the pipeline runs. If you trigger the pipeline
by using any other method, including the API, /rebase quick action,
or Rebase option,
no warning displays.

Prerequisites:

  • The parent project’s CI/CD configuration file must be configured to
    run jobs in merge request pipelines.
  • You must be a member of the parent project with permissions to run CI/CD pipelines.
    You might need additional permissions if the branch is protected.
  • The fork project must be visible to the
    user running the pipeline. Otherwise, the Pipelines tab does not display
    in the merge request.

To use the UI to run a pipeline in the parent project for a merge request from a fork project:

  1. In the merge request, go to the Pipelines tab.
  2. Select Run pipeline. You must read and accept the warning, or the pipeline does not run.

You can disable this feature by using the projects API
to disable the ci_allow_fork_pipelines_to_run_in_parent_project setting.
The setting is enabled by default.

Available predefined variables

When you use merge request pipelines, you can use:

  • All the same predefined variables that are
    available in branch pipelines.
  • Additional predefined variables
    available only to jobs in merge request pipelines. These variables contain
    information from the associated merge request, which can be useful when calling the
    GitLab Merge Request API endpoint from a job.
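
For example, a job could use the predefined CI_MERGE_REQUEST_IID and CI_API_V4_URL variables to call the Merge Request API. This is a sketch: the job name is illustrative, and API_TOKEN is a hypothetical project CI/CD variable you would define yourself, not a predefined one.

```yaml
mr-details:
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
  script:
    # Query details of the merge request that triggered this pipeline
    - 'curl --header "PRIVATE-TOKEN: $API_TOKEN" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID"'
```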

Troubleshooting

Two pipelines when pushing to a branch

If you get duplicate pipelines in merge requests, your pipeline might be configured
to run for both branches and merge requests at the same time. Adjust your pipeline
configuration to avoid duplicate pipelines.

In GitLab 13.7 and later,
you can add workflow:rules to switch from branch pipelines to merge request pipelines.
After a merge request is open on the branch, the pipeline switches to a merge request pipeline.
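
A workflow: rules block that implements this switch might look like the following sketch; adjust the conditions to your project's needs:

```yaml
workflow:
  rules:
    # Run merge request pipelines
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
    # Suppress branch pipelines while an open MR exists for the branch
    - if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS
      when: never
    # Otherwise, run branch pipelines
    - if: $CI_COMMIT_BRANCH
```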

Two pipelines when pushing an invalid CI/CD configuration file

If you push an invalid CI/CD configuration to a merge request’s branch, two failed
pipelines appear in the Pipelines tab. One pipeline is a failed branch pipeline,
the other is a failed merge request pipeline.

When the configuration syntax is fixed, no further failed pipelines should appear.
To find and fix the configuration problem, you can use:

  • The pipeline editor.
  • The CI lint tool.

The merge request’s pipeline is marked as failed but the latest pipeline succeeded

It’s possible to have both branch pipelines and merge request pipelines in the
Pipelines tab of a single merge request. This might be by configuration,
or by accident.

If both types of pipelines are in one merge request, the merge request's pipeline
is not considered successful when the branch pipeline succeeds but the merge
request pipeline fails.

When using the merge when pipeline succeeds
feature and both pipeline types are present, the merge request pipelines are checked,
not the branch pipelines.

An error occurred while trying to run a new pipeline for this merge request.

This error can happen when you select Run pipeline in a merge request, but the
project does not have merge request pipelines enabled anymore.

Some possible reasons for this error message:

  • The project does not have merge request pipelines enabled, has no pipelines listed
    in the Pipelines tab, and you select Run pipeline.

  • The project used to have merge request pipelines enabled, but the configuration
    was removed. For example:

    1. The project has merge request pipelines enabled in the .gitlab-ci.yml configuration
      file when the merge request is created.
    2. The Run pipeline option is available in the merge request’s Pipelines tab,
      and selecting Run pipeline at this point likely does not cause any errors.
    3. The project’s .gitlab-ci.yml file is changed to remove the merge request pipelines configuration.
    4. The branch is rebased to bring the updated configuration into the merge request.
    5. Now the pipeline configuration no longer supports merge request pipelines,
      but you select Run pipeline to run a merge request pipeline.

If Run pipeline is available, but the project does not have merge request pipelines
enabled, do not use this option. You can push a commit or rebase the branch to trigger
new branch pipelines.

Here is my .gitlab-ci.yml file:

script1:
    only:
        refs:
            - merge_requests
            - master          
        changes:
            - script1/**/*
    script: echo 'script1 done'

script2:
    only:
        refs:
            - merge_requests
            - master
        changes:
            - script2/**/*
    script: echo 'script2 done'

I want script1 to run whenever there is a change in the script1 directory, and likewise for script2.
I tested these with a change in script1, a change in script2, changes in both directories, and no change in either directory.

The first three cases pass as expected, but the fourth case, with no change in either directory, fails.

In the overview, GitLab gives the message

Could not retrieve the pipeline status. For troubleshooting steps, read the documentation.

[Screenshot: merge request Overview]

In the Pipelines tab, I have an option to Run pipeline. Clicking on that gives the error

An error occurred while trying to run a new pipeline for this Merge Request.

[Screenshot: Pipelines tab]

If there is no job, I want the pipeline to succeed.
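
One possible workaround (a sketch, not an official fix): add an unconditional job, so the merge request pipeline always contains at least one job and can succeed even when neither directory changed. The job name here is illustrative.

```yaml
# Hypothetical no-op job: runs on every merge request and master pipeline,
# so the pipeline is never empty
noop:
  only:
    refs:
      - merge_requests
      - master
  script: echo 'no relevant changes'
```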

Troubleshooting CI/CD (FREE)

GitLab provides several tools to help make troubleshooting your pipelines easier.

This guide also lists common issues and possible solutions.

Verify syntax

An early source of problems can be incorrect syntax. The pipeline shows a yaml invalid
badge and does not start running if any syntax or formatting problems are found.

Edit gitlab-ci.yml with the Web IDE

The GitLab Web IDE offers advanced authoring tools,
including syntax highlighting for the .gitlab-ci.yml, and is the recommended editing
experience (rather than the single file editor). It offers code completion suggestions
that ensure you are only using accepted keywords.

If you prefer to use another editor, you can use a schema like the Schemastore gitlab-ci schema
with your editor of choice.

Verify syntax with CI Lint tool

The CI Lint tool is a simple way to ensure the syntax of a CI/CD configuration
file is correct. Paste in full gitlab-ci.yml files or individual jobs configuration,
to verify the basic syntax.

When a .gitlab-ci.yml file is present in a project, you can also use the CI Lint
tool to simulate the creation of a full pipeline.
It does deeper verification of the configuration syntax.

Verify variables

A key part of troubleshooting CI/CD is to verify which variables are present in a
pipeline, and what their values are. A lot of pipeline configuration is dependent
on variables, and verifying them is one of the fastest ways to find the source of
a problem.

Export the full list of variables
available in each problematic job. Check if the variables you expect are present,
and check if their values are what you expect.
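
For example, a throwaway debug job (the name is illustrative) can dump every variable available to it:

```yaml
debug-variables:
  script:
    # Prints every variable available to the job; the log may contain
    # sensitive values, so treat it accordingly
    - export
  when: manual
```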

GitLab CI/CD documentation

The complete gitlab-ci.yml reference contains a full list of
every keyword you may need to use to configure your pipelines.

You can also look at a large number of pipeline configuration examples
and templates.

Documentation for pipeline types

Some pipeline types have their own detailed usage guides that you should read
if you are using that type:

  • Multi-project pipelines: Have your pipeline trigger
    a pipeline in a different project.
  • Parent/child pipelines: Have your main pipeline trigger
    and run separate pipelines in the same project. You can also
    dynamically generate the child pipeline’s configuration
    at runtime.
  • Pipelines for Merge Requests: Run a pipeline
    in the context of a merge request.

    • Pipelines for Merge Results:
      Pipelines for merge requests that run on the combined source and target branch
    • Merge Trains:
      Multiple pipelines for merged results that queue and run automatically before
      changes are merged.

Troubleshooting Guides for CI/CD features

There are troubleshooting guides available for some CI/CD features and related topics:

  • Container Registry
  • GitLab Runner
  • Merge Trains
  • Docker Build
  • Environments

Common CI/CD issues

A lot of common pipeline issues can be fixed by analyzing the behavior of the rules
or only/except configuration. You shouldn’t use these two configurations in the same
pipeline, as they behave differently. It’s hard to predict how a pipeline runs with
this mixed behavior.

If your rules or only/except configuration makes use of predefined variables
like CI_PIPELINE_SOURCE, CI_MERGE_REQUEST_ID, you should verify them
as the first troubleshooting step.

Jobs or pipelines don’t run when expected

The rules or only/except keywords are what determine whether or not a job is
added to a pipeline. If a pipeline runs, but a job is not added to the pipeline,
it’s usually due to rules or only/except configuration issues.

If a pipeline does not seem to run at all, with no error message, it may also be
due to rules or only/except configuration, or the workflow: rules keyword.

If you are converting from only/except to the rules keyword, you should check
the rules configuration details carefully. The behavior
of only/except and rules is different and can cause unexpected behavior when migrating
between the two.

The common if clauses for rules
can be very helpful for examples of how to write rules that behave the way you expect.
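
A few of these common clauses, sketched together in one job (all variables used are GitLab predefined variables):

```yaml
job:
  script: echo "runs for merge requests, tags, and the default branch"
  rules:
    # Run in merge request pipelines
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
    # Run for tags
    - if: $CI_COMMIT_TAG
    # Run for the default branch
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```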

Two pipelines run at the same time

Two pipelines can run when pushing a commit to a branch that has an open merge request
associated with it. Usually one pipeline is a merge request pipeline, and the other
is a branch pipeline.

This is usually caused by the rules configuration, and there are several ways to
prevent duplicate pipelines.

A job is not in the pipeline

GitLab determines if a job is added to a pipeline based on the only/except
or rules defined for the job. If it didn’t run, it’s probably
not evaluating as you expect.

No pipeline or the wrong type of pipeline runs

Before a pipeline can run, GitLab evaluates all the jobs in the configuration and tries
to add them to all available pipeline types. A pipeline does not run if no jobs are added
to it at the end of the evaluation.

If a pipeline did not run, it’s likely that all the jobs had rules or only/except that
blocked them from being added to the pipeline.

If the wrong pipeline type ran, then the rules or only/except configuration should
be checked to make sure the jobs are added to the correct pipeline type. For
example, if a merge request pipeline did not run, the jobs may have been added to
a branch pipeline instead.

It’s also possible that your workflow: rules configuration
blocked the pipeline, or allowed the wrong pipeline type.
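
For example, a workflow: rules section like this sketch allows only branch pipelines, so merge request pipelines are silently never created:

```yaml
workflow:
  rules:
    # Matches only when CI_COMMIT_BRANCH is set, which is never the case
    # in merge request pipelines, so those pipelines are blocked
    - if: $CI_COMMIT_BRANCH
```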

A job runs unexpectedly

A common reason a job is added to a pipeline unexpectedly is because the changes
keyword always evaluates to true in certain cases. For example, changes is always
true in certain pipeline types, including scheduled pipelines and pipelines for tags.

The changes keyword is used in combination with only/except
or rules. It’s recommended to use changes with
rules or only/except configuration that ensures the job is only added to branch
pipelines or merge request pipelines.
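
For example, a sketch that pairs changes with a pipeline-type condition (job name and paths are illustrative):

```yaml
build-image:
  script: echo "building"
  rules:
    # The `if:` clause restricts the job to merge request pipelines,
    # where `changes` is evaluated against the merge request's diff
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
      changes:
        - Dockerfile
        - docker/**/*
```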

"fatal: reference is not a tree" error

Introduced in GitLab 12.4.

Previously, you might have encountered unexpected pipeline failures when you force-pushed
a branch to its remote repository. To illustrate the problem, suppose you have the following workflow:

  1. A user creates a feature branch named example and pushes it to a remote repository.
  2. A new pipeline starts running on the example branch.
  3. A user rebases the example branch on the latest default branch and force-pushes it to its remote repository.
  4. A new pipeline starts running on the example branch again, however,
    the previous pipeline (2) fails with a fatal: reference is not a tree error.

This is because the previous pipeline cannot find the checkout SHA (which is associated with the pipeline record)
in the example branch, because its commit history has been overwritten by the force-push.
Similarly, pipelines for merged results
might have failed intermittently for the same reason.

As of GitLab 12.4, we’ve improved this behavior by persisting pipeline refs exclusively.
To illustrate its life cycle:

  1. A pipeline is created on a feature branch named example.
  2. A persistent pipeline ref is created at refs/pipelines/<pipeline-id>,
    which retains the checkout-SHA of the associated pipeline record.
    This persistent ref stays intact during the pipeline execution,
    even if the commit history of the example branch has been overwritten by force-push.
  3. The runner fetches the persistent pipeline ref and gets source code from the checkout-SHA.
  4. When the pipeline finishes, its persistent ref is cleaned up in a background process.
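
The persistent ref behaves like any other Git ref. As a sketch (the job name is illustrative), a job could fetch it explicitly while the pipeline is still running, mirroring what the runner does; CI_PIPELINE_ID is a predefined variable:

```yaml
show-pipeline-ref:
  script:
    # Fetch the persistent pipeline ref, as the runner does
    - git fetch origin "refs/pipelines/$CI_PIPELINE_ID"
    # Show the checkout SHA the ref points to
    - git rev-parse FETCH_HEAD
```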

Merge request pipeline messages

The merge request pipeline widget shows information about the pipeline status in
a merge request. It’s displayed above the merge status widget.

"Checking pipeline status" message

This message is shown when the merge request has no pipeline associated with the
latest commit yet. This might be because:

  • GitLab hasn’t finished creating the pipeline yet.
  • You are using an external CI service and GitLab hasn’t heard back from the service yet.
  • You are not using CI/CD pipelines in your project.
  • You are using CI/CD pipelines in your project, but your configuration prevented a pipeline from running on the source branch for your merge request.
  • The latest pipeline was deleted (this is a known issue).

After the pipeline is created, the message updates with the pipeline status.

Merge request status messages

The merge request status widget shows the Merge button and whether or not a merge
request is ready to merge. If the merge request can’t be merged, the reason for this
is displayed.

If the pipeline is still running, the Merge button is replaced with the
Merge when pipeline succeeds button.

If Merge Trains
are enabled, the button is either Add to merge train or Add to merge train when pipeline succeeds. (PREMIUM)

"A CI/CD pipeline must run and be successful before merge" message

This message is shown if the Pipelines must succeed
setting is enabled in the project and a pipeline has not yet run successfully.
This also applies if the pipeline has not been created yet, or if you are waiting
for an external CI service. If you don’t use pipelines for your project, then you
should disable Pipelines must succeed so you can accept merge requests.

"The pipeline for this merge request did not complete. Push a new commit to fix the failure or check the troubleshooting documentation to see other possible actions." message

This message is shown if the merge request pipeline,
merged results pipeline,
or merge train pipeline
has failed or been canceled.

If a merge request pipeline or merged result pipeline was canceled or failed, you can:

  • Re-run the entire pipeline by clicking Run pipeline in the pipeline tab in the merge request.
  • Retry only the jobs that failed. If you re-run the entire pipeline, this is not necessary.
  • Push a new commit to fix the failure.

If the merge train pipeline has failed, you can:

  • Check the failure and determine if you can use the /merge quick action to immediately add the merge request to the train again.
  • Re-run the entire pipeline by clicking Run pipeline in the pipeline tab in the merge request, then add the merge request to the train again.
  • Push a commit to fix the failure, then add the merge request to the train again.

If the merge train pipeline was canceled before the merge request was merged, without a failure, you can:

  • Add it to the train again.

Pipeline warnings

Pipeline configuration warnings are shown when you:

  • Validate configuration with the CI Lint tool.
  • Manually run a pipeline.

"Job may allow multiple pipelines to run for a single action" warning

When you use rules with a when: clause without an if:
clause, multiple pipelines may run. Usually this occurs when you push a commit to
a branch that has an open merge request associated with it.

To prevent duplicate pipelines, use
workflow: rules or rewrite your rules to control
which pipelines can run.
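
For example, adding an if: clause to a manual rule scopes the job to a single pipeline type (a sketch; adapt the condition to your setup):

```yaml
deploy-review:
  script: echo "deploy"
  rules:
    # Without the `if:` clause, `when: manual` would match in both branch
    # and merge request pipelines, producing duplicate pipelines
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
      when: manual
      allow_failure: true
```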

Console workaround if job using resource_group gets stuck

If a job using resource_group gets stuck, you can release the occupied resource from the Rails console:

# Find the resource group by name
resource_group = Project.find_by_full_path('...').resource_groups.find_by(key: 'the-group-name')

# Identify which builds are occupying the resource (usually a single one)
busy_resources = resource_group.resources.where('build_id IS NOT NULL')
busy_resources.pluck(:build_id)

# Before freeing the resource, check why these builds are holding it:
# are they stuck? Were they forcefully dropped by the system?

# Free up the busy resources
busy_resources.update_all(build_id: nil)

How to get help

If you are unable to resolve pipeline issues, you can get help from:

  • The GitLab community forum
  • GitLab Support

Contents

  1. Why do I get "Pipeline failed due to the user not being verified" and "Detached merge request pipeline" messages in a GitLab merge request?
  2. "Pipeline failed" error in forks of projects with specific runners
  3. Summary
  4. Steps to reproduce
  5. What is the current bug behavior?
  6. What is the expected correct behavior?
  7. Relevant logs and/or screenshots
  8. Further information
  9. Pipelines for the GitLab project
  10. Minimal test jobs before a merge request is approved
  11. Overview of the GitLab project test dependency
  12. RSpec minimal jobs
  13. Determining related RSpec test files in a merge request
  14. Exceptional cases
  15. Jest minimal jobs
  16. Determining related Jest test files in a merge request
  17. Exceptional cases
  18. Fork pipelines
  19. Faster feedback when reverting merge requests
  20. Fail-fast job in merge request pipelines
  21. Test jobs
  22. Test suite parallelization
  23. Flaky tests
  24. Automatic skipping of flaky tests
  25. Automatic retry of failing tests in a separate process
  26. Single database testing
  27. Monitoring
  28. Logging
  29. Review app jobs
  30. As-if-FOSS jobs
  31. As-if-JH jobs
  32. When to consider applying pipeline:run-as-if-jh label
  33. Corresponding JH branch
  34. Ruby 3.0 jobs
  35. undercover RSpec test
  36. Troubleshooting rspec:undercoverage failures
  37. Ruby versions testing
  38. PostgreSQL versions testing
  39. Current versions testing
  40. Long-term plan
  41. Redis versions testing
  42. Current versions testing
  43. Pipelines types for merge requests
  44. Documentation pipeline
  45. Backend pipeline
  46. Frontend pipeline
  47. End-to-end pipeline
  48. CI configuration internals
  49. Workflow rules
  50. Default image
  51. Default variables
  52. Stages
  53. Dependency Proxy
  54. Common job definitions
  55. rules, if: conditions and changes: patterns
  56. if: conditions
  57. changes: patterns
  58. Performance
  59. Interruptible pipelines
  60. Git fetch caching
  61. Caching strategy
  62. Artifacts strategy
  63. Components caching
  64. Pre-clone step

Why do I get "Pipeline failed due to the user not being verified" and "Detached merge request pipeline" messages in a GitLab merge request?

When a developer who is not the owner pushes a branch to our GitLab repository, it returns a "pipeline failed" message with the detail "pipeline failed due to the user not being verified". In the developer's account, they are prompted to add a credit card to confirm that they qualify for free pipeline minutes.

But I haven't set up any pipelines: there is no gitlab-ci.yml file in my repo, nor in the new branch. There are no jobs or schedules in the project's CI/CD tab in GitLab. So why is there a badge saying that the branch failed a pipeline?

They say they won't charge anything to the account or store the card details, but they do in fact charge 1 dollar (which is instantly reversed).

Consequently, you need a card that supports international transactions (if you are not in the US).

I wonder why this statement isn't posted on the site. It definitely doesn't look good for a company as large as GitLab!

As for the answer: providing a credit/debit card with international transactions enabled and a spare 1 dollar does the trick.

For anyone still wondering, I recently contacted GitLab, and apparently this is an unresolved issue. They said it should be possible to merge branches anyway, but in the end we added credit card details after all (there was a temporary charge). Not ideal, but hopefully it will be sorted out soon.

GitLab reports the free pipeline minutes available on GitLab.com:

  1. Provide a credit or debit card and use 400 free minutes with shared runners.
  2. Use your own runner and disable shared runners for the project.

Best regards.

All the answers above are good, but perhaps there is a small misunderstanding about credit card pre-authorization.

When we use a credit card, the store asks the bank to freeze some credit (usually the total cost) for the transaction. At some point (depending on the store), they ask the bank for payment and receive the money. After that, the bank sends the user a bill.

Pre-authorization is the act of freezing that credit.

If the store does not ask the bank for payment, the bank does not pay out, and the customer does not receive a bill.

Pre-authorization is a way of verifying that a credit card is valid. The usual amount is one US dollar. This is very common in Google Play and the App Store when you add a new card.

GitLab uses this method to confirm whether the credit card is valid.

Although it depends on their internal operations, I think GitLab does not need to explicitly reverse the transaction; the only thing they need to do is make sure they never ask the bank for payment on this pre-authorization.

Source

«Pipeline failed» error in forks of projects with specific runners

Summary

We have a project with a specific runner that is used to measure timing regressions. This takes a lot of resources, as we need to reserve all cores of one socket for one job to avoid interference between timing measurements of concurrently running jobs. Hence we don’t want these runners to be assigned to forks of the project that external contributors work on. As a consequence, external contributors get error messages in the web UI after pushing (about no runners being available), and one day later they get an entirely uninformative email saying "pipeline failed" with a link to a build page that shows an empty log.

Steps to reproduce

Have a project with some specific runners and a CI job that has a tag only provided by these runners. Set things up so that forked projects do not inherit the runners. Fork the project. Push something to the fork.

What is the current bug behavior?

There are errors shown in the web UI about missing runners. Furthermore, after one day (probably the CI job timeout), there is an email saying "pipeline failed" and nothing else. This is very confusing for external contributors who fork the project to create merge requests for things to be merged back into the main project.

What is the expected correct behavior?

Ideally, there should be no errors whatsoever even though the project has a .gitlab-ci.yml and no runners. This could be conditioned on it being a fork. At the very least, there should be no errors, but just warnings (and no emails) indicating that the jobs are all stuck.

Relevant logs and/or screenshots

Here’s what the build page of such a failed build looks like:

Further information

This is with GitLab Community Edition 8.16.4 (gitlab-ce@f32ee822d66afcf8d6288d5e2e5660e19b18d5a7).

Source

Pipelines for the GitLab project

Pipelines for gitlab-org/gitlab (as well as the dev instance’s) are configured in the usual .gitlab-ci.yml, which itself includes files under .gitlab/ci/ for easier maintenance.

We’re striving to dogfood GitLab CI/CD features and best-practices as much as possible.

Minimal test jobs before a merge request is approved

To reduce the pipeline cost and shorten the job duration, before a merge request is approved, the pipeline will run a minimal set of RSpec & Jest tests that are related to the merge request changes.

After a merge request has been approved, the pipeline would contain the full RSpec & Jest tests. This will ensure that all tests have been run before a merge request is merged.

Overview of the GitLab project test dependency

To understand how the minimal test jobs are executed, we need to understand the dependency between GitLab code (frontend and backend) and the respective tests (Jest and RSpec). This dependency can be summarized as follows:

  • RSpec tests are dependent on the backend code.
  • Jest tests are dependent on both frontend and backend code, the latter through the frontend fixtures.

RSpec minimal jobs

To identify the minimal set of tests needed, we use the test_file_finder gem, with two strategies:

  • dynamic mapping from test coverage tracing (generated via the Crystalball gem) (see where it’s used)
  • static mapping maintained in the tests.yml file for special cases that cannot be mapped via coverage tracing (see where it’s used)

The test mappings contain a map from each source file to the list of test files that depend on it.

In the detect-tests job, we use this mapping to identify the minimal tests needed for the current merge request.
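
As an illustration only (the real schema lives in the tests.yml file in the repository and may differ), a static mapping entry might look like:

```yaml
# Hypothetical static mapping: when the source file changes,
# the listed test file is included in the minimal test run
mapping:
  - source: app/models/project\.rb
    test: spec/models/project_spec.rb
```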

Exceptional cases

In addition, there are a few circumstances where we would always run the full RSpec tests:

  • when the pipeline:run-all-rspec label is set on the merge request
  • when the merge request is created by an automation (for example, Gitaly update or MR targeting a stable branch)
  • when the merge request is created in a security mirror
  • when any CI configuration file is changed (for example, .gitlab-ci.yml or .gitlab/ci/**/*)

Jest minimal jobs

To identify the minimal set of tests needed, we pass a list of all the changed files to Jest with the --findRelatedTests option. In this mode, Jest resolves all the dependencies of the changed files, which include the test files that have these files in their dependency chain.
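Conceptually, the job invokes Jest along these lines (the job name and the changed-files artifact are hypothetical; only the --findRelatedTests mechanism comes from the text above):

```yaml
# Hypothetical sketch: run only the Jest tests related to the changed files.
# tmp/changed_files.txt is an assumed artifact listing the MR's changed files.
jest-minimal:
  script:
    - yarn jest --findRelatedTests $(cat tmp/changed_files.txt) --passWithNoTests
```

--passWithNoTests lets the job succeed when none of the changed files have related tests.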

Exceptional cases

In addition, there are a few circumstances where we always run the full Jest tests:

  • when the pipeline:run-all-jest label is set on the merge request
  • when the merge request is created by an automation (for example, Gitaly update or MR targeting a stable branch)
  • when the merge request is created in a security mirror
  • when any CI configuration file is changed (for example, .gitlab-ci.yml or .gitlab/ci/**/* )
  • when any frontend "core" file is changed (for example, package.json , yarn.lock , babel.config.js , jest.config.*.js , config/helpers/**/*.js )
  • when any vendored JavaScript file is changed (for example, vendor/assets/javascripts/**/* )
  • when any backend file is changed (see the patterns list for details)

Fork pipelines

We only run the minimal RSpec & Jest jobs for fork pipelines unless the pipeline:run-all-rspec label is set on the MR. The goal is to reduce the CI/CD minutes consumed by fork pipelines.

Faster feedback when reverting merge requests

When you need to revert a merge request, you can add the pipeline:revert label to your merge request to get accelerated feedback.

When this label is assigned, the following steps of the CI/CD pipeline are skipped:

  • The e2e:package-and-test job.
  • The rspec:undercoverage job.
  • The entire Review Apps process.

Apply the label to the merge request, and run a new pipeline for the MR.

Fail-fast job in merge request pipelines

To provide faster feedback when a merge request breaks existing tests, we are experimenting with a fail-fast mechanism.

An rspec fail-fast job is added in parallel to all other rspec jobs in a merge request pipeline. This job runs the tests that are directly related to the changes in the merge request.

If any of these tests fail, the rspec fail-fast job fails, triggering a fail-pipeline-early job to run. The fail-pipeline-early job:

  • Cancels the currently running pipeline and all in-progress jobs.
  • Sets the pipeline status to failed .

The rspec fail-fast job is a no-op if there are more than 10 test files related to the merge request. This prevents the rspec fail-fast duration from exceeding the average rspec job duration and defeating its purpose.

This number can be overridden by setting a CI/CD variable named RSPEC_FAIL_FAST_TEST_FILE_COUNT_THRESHOLD .
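For example, to double the threshold, the variable can be set at the project level or directly in the CI configuration:

```yaml
# Raise the fail-fast threshold from the default of 10 to 20 test files.
variables:
  RSPEC_FAIL_FAST_TEST_FILE_COUNT_THRESHOLD: "20"
```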

Test jobs

We have dedicated jobs for each testing level and each job runs depending on the changes made in your merge request. If you want to force all the RSpec jobs to run regardless of your changes, you can add the pipeline:run-all-rspec label to the merge request.

WARNING: Forcing all jobs to run on docs-only MRs leads to errors, because the prerequisite jobs are not created for those MRs.

Test suite parallelization

Our current RSpec tests parallelization setup is as follows:

  1. The retrieve-tests-metadata job in the prepare stage ensures we have a knapsack/report-master.json file:
    • The knapsack/report-master.json file is fetched from the latest main pipeline which runs update-tests-metadata (for now, the 2-hourly maintenance scheduled master pipeline); if it's not there, we initialize the file with {} .
  2. Each [rspec|rspec-ee] [migration|unit|integration|system|geo] n m job is run with knapsack rspec and should have an evenly distributed share of tests:
    • This works because the jobs have access to knapsack/report-master.json , since "artifacts from all previous stages are passed by default".
    • The jobs set their own report path to "knapsack/$_$_$_$_$_report.json" .
    • If Knapsack is doing its job, test files that are run should be listed under Report specs , not under Leftover specs .
  3. The update-tests-metadata job (which only runs on scheduled pipelines for the canonical project) takes all the knapsack/rspec*.json files and merges them into a single knapsack/report-master.json file that is saved as an artifact.

After that, the next pipeline uses the up-to-date knapsack/report-master.json file.
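The steps above can be sketched as a parallelized job balanced by Knapsack (the job name, parallel count, and report path are illustrative, not the real definitions):

```yaml
# Illustrative only: an RSpec job split across parallel runners by Knapsack.
rspec unit:
  parallel: 28                     # 28 jobs share the test files between them
  script:
    # Each parallel job writes its own timing report...
    - export KNAPSACK_REPORT_PATH=knapsack/rspec_unit_${CI_NODE_INDEX}_${CI_NODE_TOTAL}_report.json
    # ...and reads knapsack/report-master.json (an artifact from the prepare
    # stage) to pick an evenly-timed share of test files.
    - bundle exec knapsack rspec
```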

Flaky tests

Automatic skipping of flaky tests

Tests that are known to be flaky are skipped unless the $SKIP_FLAKY_TESTS_AUTOMATICALLY variable is set to false or the "pipeline:run-flaky-tests" label is set on the MR.

Automatic retry of failing tests in a separate process

Unless the $RETRY_FAILED_TESTS_IN_NEW_PROCESS variable is set to false ( true by default), RSpec tests that failed are automatically retried once in a separate RSpec process. The goal is to get rid of most side effects from previous tests that may lead to a subsequent test failure.

We keep track of retried tests in the $RETRIED_TESTS_REPORT_FILE file, saved as an artifact by the rspec:flaky-tests-report job.

Single database testing

By default, all tests run with multiple databases.

We also run tests with a single database in nightly scheduled pipelines, and in merge requests that touch database-related files.

If you want to force tests to run with a single database, you can add the pipeline:run-single-db label to the merge request.

Monitoring

The GitLab test suite is monitored for the main branch, and for any branch that includes rspec-profile in its name.

Logging

  • Rails logging to log/test.log is disabled by default in CI for performance reasons. To override this setting, provide the RAILS_ENABLE_TEST_LOG environment variable.

Review app jobs

Consult the Review Apps dedicated page for more information.

If you want to force a Review App to be deployed regardless of your changes, you can add the pipeline:run-review-app label to the merge request.

As-if-FOSS jobs

The * as-if-foss jobs run the GitLab test suite "as if FOSS", meaning as if the jobs were running in the context of gitlab-org/gitlab-foss . These jobs are only created in the following cases:

  • when the pipeline:run-as-if-foss label is set on the merge request
  • when the merge request is created in the gitlab-org/security/gitlab project
  • when any CI configuration file is changed (for example, .gitlab-ci.yml or .gitlab/ci/**/* )

The * as-if-foss jobs are run in addition to the regular EE-context jobs. They have the FOSS_ONLY='1' variable set and get the ee/ folder removed before the tests start running.

The intent is to ensure that a change doesn’t introduce a failure after gitlab-org/gitlab is synced to gitlab-org/gitlab-foss .
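The mechanism boils down to the following simplified sketch (the concrete job name and script are illustrative; only the FOSS_ONLY variable and the ee/ removal come from the text above):

```yaml
# Simplified sketch of an as-if-foss job.
.as-if-foss:
  variables:
    FOSS_ONLY: '1'        # boot the application without EE code

rspec unit as-if-foss:
  extends: .as-if-foss
  script:
    - rm -rf ee/          # EE-only code is removed before tests run
    - bundle exec rspec
```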

As-if-JH jobs

NOTE: This is disabled for now.

The * as-if-jh jobs run the GitLab test suite "as if JiHu", meaning as if the jobs were running in the context of GitLab JH. These jobs are only created in the following cases:

  • when the pipeline:run-as-if-jh label is set on the merge request
  • when the pipeline:run-all-rspec label is set on the merge request
  • when any code or backstage file is changed
  • when any startup CSS file is changed

The * as-if-jh jobs are run in addition to the regular EE-context jobs. The jh/ folder is added before the tests start running.

The intent is to ensure that a change doesn’t introduce a failure after gitlab-org/gitlab is synced to GitLab JH.

When to consider applying pipeline:run-as-if-jh label

NOTE: This is disabled for now.

If a Ruby file is renamed and there’s a corresponding prepend_mod line, it’s likely that GitLab JH is relying on it and requires a corresponding change to rename the module or class it’s prepending.

Corresponding JH branch

NOTE: This is disabled for now.

You can create a corresponding JH branch on GitLab JH by appending -jh to the branch name. If a corresponding JH branch is found, * as-if-jh jobs grab the jh folder from the respective branch, rather than from the default branch main-jh .

NOTE: For now, CI will try to fetch the branch on the GitLab JH mirror, so it might take some time for the new JH branch to propagate to the mirror.

Ruby 3.0 jobs

You can add the pipeline:run-in-ruby3 label to the merge request to switch the Ruby version used for running the whole test suite to 3.0. When you do this, the test suite no longer runs in Ruby 2.7 (the default), and an additional verify-ruby-2.7 job runs and always fails, to remind us to remove the label and run in Ruby 2.7 before merging the merge request.

This should let us:

  • Test changes for Ruby 3.0
  • Make sure it will not break anything when it’s merged into the default branch

undercover RSpec test

The rspec:undercoverage job runs undercover to detect, and fail the job if, any changes introduced in the merge request have zero test coverage.

The rspec:undercoverage job obtains coverage data from the rspec:coverage job.

In the event of an emergency, or false positive from this job, add the pipeline:skip-undercoverage label to the merge request to allow this job to fail.

Troubleshooting rspec:undercoverage failures

The rspec:undercoverage job has known bugs that can cause false positive failures. You can test coverage locally to determine whether it's safe to apply the pipeline:skip-undercoverage label. For example, with <spec file> standing in for the spec causing the failure:

  1. Run SIMPLECOV=1 bundle exec rspec <spec file> .
  2. Run scripts/undercoverage .

If these commands return undercover: ✅ No coverage is missing in latest changes , then you can apply pipeline:skip-undercoverage to bypass pipeline failures.

Ruby versions testing

Our test suite runs against Ruby 2 in merge requests and default branch pipelines.

We also run our test suite against Ruby 3 in a separate 2-hourly scheduled pipeline, as GitLab.com will soon run on Ruby 3.

PostgreSQL versions testing

Our test suite runs against PG12 as GitLab.com runs on PG12 and Omnibus defaults to PG12 for new installs and upgrades.

We also run our test suite against PG11 and PG13 in nightly scheduled pipelines.

In addition, we run our test suite against PG11 upon specific database library changes in MRs and main pipelines (with the rspec db-library-code pg11 job).

Current versions testing

Where? PostgreSQL version Ruby version
Merge requests 12 (default version), 11 for DB library changes 2.7 (default version)
master branch commits 12 (default version), 11 for DB library changes 2.7 (default version)
maintenance scheduled pipelines for the master branch (every even-numbered hour) 12 (default version), 11 for DB library changes 2.7 (default version)
maintenance scheduled pipelines for the ruby3 branch (every odd-numbered hour), see below. 12 (default version), 11 for DB library changes 3.0 (coded in the branch)
nightly scheduled pipelines for the master branch 12 (default version), 11, 13 2.7 (default version)

The pipeline configuration for the scheduled pipeline testing Ruby 3 is stored in the ruby3-sync branch. The pipeline updates the ruby3 branch with the latest master , and then triggers a regular branch pipeline for ruby3 . Any changes in ruby3 are only for running the pipeline and should never be merged back to master . Any other Ruby 3 changes should go into master directly, and should be compatible with Ruby 2.7.

Long-term plan

PostgreSQL version 14.1 (July 2021) 14.2 (August 2021) 14.3 (September 2021) 14.4 (October 2021) 14.5 (November 2021) 14.6 (December 2021)
PG12 MRs/ 2-hour / nightly MRs/ 2-hour / nightly MRs/ 2-hour / nightly MRs/ 2-hour / nightly MRs/ 2-hour / nightly MRs/ 2-hour / nightly
PG11 nightly nightly nightly nightly nightly nightly
PG13 nightly nightly nightly nightly nightly nightly

Redis versions testing

Our test suite runs against Redis 6 as GitLab.com runs on Redis 6 and Omnibus defaults to Redis 6 for new installs and upgrades.

We do run our test suite against Redis 5 on nightly scheduled pipelines, specifically when running backward-compatible and forward-compatible PostgreSQL jobs.

Current versions testing

Where? Redis version
MRs 6
default branch (non-scheduled pipelines) 6
nightly scheduled pipelines 5

Pipelines types for merge requests

In general, pipelines for an MR fall into one of the following types (from shorter to longer), depending on the changes made in the MR:

  • Documentation pipeline: For MRs that touch documentation.
  • Backend pipeline: For MRs that touch backend code.
  • Frontend pipeline: For MRs that touch frontend code.
  • End-to-end pipeline: For MRs that touch code in the qa/ folder.

A "pipeline type" is an abstract term that mostly describes the "critical path" (that is, the chain of jobs for which the sum of individual durations equals the pipeline's duration). We use these "pipeline types" in metrics dashboards to detect which types and jobs need to be optimized first.

An MR that touches multiple areas would be associated with the longest type applicable. For instance, an MR that touches backend and frontend would fall into the «Frontend» pipeline type since this type takes longer to finish than the «Backend» pipeline type.

We use the rules: and needs: keywords extensively to determine the jobs that need to be run in a pipeline. Note that an MR that includes multiple types of changes has a pipeline that includes jobs from multiple types (for example, a combination of docs-only and code-only pipelines).

Following are graphs of the critical paths for each pipeline type. Jobs that aren’t part of the critical path are omitted.

Documentation pipeline

Backend pipeline

Frontend pipeline

End-to-end pipeline

CI configuration internals

Workflow rules

Pipelines for the GitLab project are created using the workflow:rules keyword feature of GitLab CI/CD.

Pipelines are always created for the following scenarios:

  • main branch, including on schedules, pushes, merges, and so on.
  • Merge requests.
  • Tags.
  • Stable, auto-deploy , and security branches.

Pipeline creation is also affected by the following CI/CD variables:

  • If $FORCE_GITLAB_CI is set, pipelines are created.
  • If $GITLAB_INTERNAL is not set, pipelines are not created.

No pipeline is created in any other case (for example, when pushing a branch with no MR for it).

The source of truth for these workflow rules is defined in .gitlab-ci.yml .
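A condensed sketch of what such workflow rules look like (the actual rules in .gitlab-ci.yml are more involved and also cover stable, auto-deploy, and security branches):

```yaml
# Condensed sketch of the workflow rules described above.
workflow:
  rules:
    - if: '$FORCE_GITLAB_CI'              # forced: always create a pipeline
    - if: '$GITLAB_INTERNAL == null'
      when: never                         # no pipelines outside GitLab projects
    - if: '$CI_MERGE_REQUEST_IID'         # merge requests
    - if: '$CI_COMMIT_TAG'                # tags
    - if: '$CI_COMMIT_BRANCH == "main"'   # the default branch
```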

Default image

The default image is defined in .gitlab-ci.yml .

It includes Ruby, Go, Git, Git LFS, Chrome, Node, Yarn, PostgreSQL, and GraphicsMagick.

The images used in our pipelines are configured in the gitlab-org/gitlab-build-images project, which is push-mirrored to gitlab/gitlab-build-images for redundancy.

The current version of the build images can be found in the "Used by GitLab" section.

Default variables

In addition to the predefined CI/CD variables, each pipeline includes default variables defined in .gitlab-ci.yml .

Stages

The current stages are:

  • sync : This stage is used to synchronize changes from gitlab-org/gitlab to gitlab-org/gitlab-foss .
  • prepare : This stage includes jobs that prepare artifacts that are needed by jobs in subsequent stages.
  • build-images : This stage includes jobs that prepare Docker images that are needed by jobs in subsequent stages or downstream pipelines.
  • fixtures : This stage includes jobs that prepare fixtures needed by frontend tests.
  • lint : This stage includes linting and static analysis jobs.
  • test : This stage includes most of the tests, and DB/migration jobs.
  • post-test : This stage includes jobs that build reports or gather data from the test stage’s jobs (for example, coverage, Knapsack metadata, and so on).
  • review : This stage includes jobs that build the CNG images, deploy them, and run end-to-end tests against Review Apps (see Review Apps for details). It also includes Docs Review App jobs.
  • qa : This stage includes jobs that perform QA tasks against the Review App that is deployed in stage review .
  • post-qa : This stage includes jobs that build reports or gather data from the qa stage’s jobs (for example, Review App performance report).
  • pages : This stage includes a job that deploys the various reports as GitLab Pages (for example, coverage-ruby and webpack-report , found at https://gitlab-org.gitlab.io/gitlab/webpack-report/ , although there is an issue with the deployment).
  • notify : This stage includes jobs that notify various failures to Slack.

Dependency Proxy

Some of the jobs use images from Docker Hub, where we also use a group CI/CD variable as a prefix to the image path, so that we pull those images through our Dependency Proxy.

The variable is defined in the gitlab-org group and its value ends with a / . Projects in the gitlab-org group therefore pull from the Dependency Proxy, while forks that reside in any other personal namespace or group fall back to Docker Hub unless the variable is also defined there.
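For example, assuming the variable is named GITLAB_DEPENDENCY_PROXY (the name is hypothetical here, since it was lost in rendering):

```yaml
# GITLAB_DEPENDENCY_PROXY is assumed to expand to
# "${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/" in the gitlab-org group,
# and to be undefined (falling back to Docker Hub) elsewhere.
some-job:
  image: ${GITLAB_DEPENDENCY_PROXY}alpine:edge
```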

Common job definitions

Job definitions Description
.default-retry Allows a job to retry upon unknown_failure , api_failure , runner_system_failure , job_execution_timeout , or stuck_or_timeout_failure .
.default-before_script Allows a job to use a default before_script definition suitable for Ruby/Rails tasks that may need a database running (for example, tests).
.setup-test-env-cache Allows a job to use a default cache definition suitable for setting up test environment for subsequent Ruby/Rails tasks.
.rails-cache Allows a job to use a default cache definition suitable for Ruby/Rails tasks.
.static-analysis-cache Allows a job to use a default cache definition suitable for static analysis tasks.
.coverage-cache Allows a job to use a default cache definition suitable for coverage tasks.
.qa-cache Allows a job to use a default cache definition suitable for QA tasks.
.yarn-cache Allows a job to use a default cache definition suitable for frontend jobs that do a yarn install .
.assets-compile-cache Allows a job to use a default cache definition suitable for frontend jobs that compile assets.
.use-pg11 Allows a job to run the postgres 11 and redis services (see .gitlab/ci/global.gitlab-ci.yml for the specific versions of the services).
.use-pg11-ee Same as .use-pg11 but also use an elasticsearch service (see .gitlab/ci/global.gitlab-ci.yml for the specific version of the service).
.use-pg12 Allows a job to use the postgres 12 and redis services (see .gitlab/ci/global.gitlab-ci.yml for the specific versions of the services).
.use-pg12-ee Same as .use-pg12 but also use an elasticsearch service (see .gitlab/ci/global.gitlab-ci.yml for the specific version of the service).
.use-pg13 Allows a job to use the postgres 13 and redis services (see .gitlab/ci/global.gitlab-ci.yml for the specific versions of the services).
.use-pg13-ee Same as .use-pg13 but also use an elasticsearch service (see .gitlab/ci/global.gitlab-ci.yml for the specific version of the service).
.use-kaniko Allows a job to use the kaniko tool to build Docker images.
.as-if-foss Simulate the FOSS project by setting the FOSS_ONLY='1' CI/CD variable.
.use-docker-in-docker Allows a job to use Docker in Docker.

rules , if: conditions and changes: patterns

We’re using the rules keyword extensively.

All rules definitions are defined in rules.gitlab-ci.yml , then included in individual jobs via extends .

The rules definitions are composed of if: conditions and changes: patterns, which are also defined in rules.gitlab-ci.yml and included in rules definitions via YAML anchors.

if: conditions

if: conditions Description Notes
if-not-canonical-namespace Matches if the project isn't in the canonical ( gitlab-org/ ) or security ( gitlab-org/security ) namespace. Use to create a job for forks (by using when: on_success or when: manual ), or to not create a job for forks (by using when: never ).
if-not-ee Matches if the project isn't EE (that is, the project name isn't gitlab or gitlab-ee ). Use to create a job only in the FOSS project (by using when: on_success or when: manual ), or to not create a job if the project is EE (by using when: never ).
if-not-foss Matches if the project isn't FOSS (that is, the project name isn't gitlab-foss , gitlab-ce , or gitlabhq ). Use to create a job only in the EE project (by using when: on_success or when: manual ), or to not create a job if the project is FOSS (by using when: never ).
if-default-refs Matches if the pipeline is for master , main , /^[\d-]+-stable(-ee)?$/ (stable branches), /^\d+-\d+-auto-deploy-\d+$/ (auto-deploy branches), /^security\// (security branches), merge requests, and tags. Note that jobs aren't created for branches with this default configuration.
if-master-refs Matches if the current branch is master or main .
if-master-push Matches if the current branch is master or main and pipeline source is push .
if-master-schedule-maintenance Matches if the current branch is master or main and pipeline runs on a 2-hourly schedule.
if-master-schedule-nightly Matches if the current branch is master or main and pipeline runs on a nightly schedule.
if-auto-deploy-branches Matches if the current branch is an auto-deploy one.
if-master-or-tag Matches if the pipeline is for the master or main branch or for a tag.
if-merge-request Matches if the pipeline is for a merge request.
if-merge-request-title-as-if-foss Matches if the pipeline is for a merge request and the MR has the "pipeline:run-as-if-foss" label.
if-merge-request-title-update-caches Matches if the pipeline is for a merge request and the MR has the "pipeline:update-cache" label.
if-merge-request-title-run-all-rspec Matches if the pipeline is for a merge request and the MR has the "pipeline:run-all-rspec" label.
if-security-merge-request Matches if the pipeline is for a security merge request.
if-security-schedule Matches if the pipeline is for a security scheduled pipeline.
if-nightly-master-schedule Matches if the pipeline is for a master scheduled pipeline with $NIGHTLY set.
if-dot-com-gitlab-org-schedule Limits job creation to scheduled pipelines for the gitlab-org group on GitLab.com.
if-dot-com-gitlab-org-master Limits job creation to the master or main branch for the gitlab-org group on GitLab.com.
if-dot-com-gitlab-org-merge-request Limits job creation to merge requests for the gitlab-org group on GitLab.com.
if-dot-com-gitlab-org-and-security-tag Limits job creation to tags for the gitlab-org and gitlab-org/security groups on GitLab.com.
if-dot-com-gitlab-org-and-security-merge-request Limits job creation to merge requests for the gitlab-org and gitlab-org/security groups on GitLab.com.
if-dot-com-ee-schedule Limits jobs to scheduled pipelines for the gitlab-org/gitlab project on GitLab.com.
if-security-pipeline-merge-result Matches if the pipeline is for a security merge request triggered by @gitlab-release-tools-bot .

changes: patterns

changes: patterns Description
ci-patterns Only create job for CI configuration-related changes.
ci-build-images-patterns Only create job for CI configuration-related changes related to the build-images stage.
ci-review-patterns Only create job for CI configuration-related changes related to the review stage.
ci-qa-patterns Only create job for CI configuration-related changes related to the qa stage.
yaml-lint-patterns Only create job for YAML-related changes.
docs-patterns Only create job for docs-related changes.
frontend-dependency-patterns Only create job when frontend dependencies are updated (that is, package.json and yarn.lock changes).
frontend-patterns Only create job for frontend-related changes.
backend-patterns Only create job for backend-related changes.
db-patterns Only create job for DB-related changes.
backstage-patterns Only create job for backstage-related changes (that is, Danger, fixtures, RuboCop, specs).
code-patterns Only create job for code-related changes.
qa-patterns Only create job for QA-related changes.
code-backstage-patterns Combination of code-patterns and backstage-patterns .
code-qa-patterns Combination of code-patterns and qa-patterns .
code-backstage-qa-patterns Combination of code-patterns , backstage-patterns , and qa-patterns .
static-analysis-patterns Only create jobs for static analysis configuration-related changes.

Performance

Interruptible pipelines

By default, all jobs are interruptible, except the dont-interrupt-me job which runs automatically on main , and is manual otherwise.

If you want a running pipeline to finish even if you push new commits to a merge request, be sure to start the dont-interrupt-me job before pushing.
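In CI configuration terms, this looks roughly like the following sketch (the script line is illustrative; only the interruptible behavior and the main/manual split come from the text above):

```yaml
# Rough sketch of a job that pins a pipeline against auto-cancellation.
dont-interrupt-me:
  interruptible: false              # opting out makes the pipeline uninterruptible
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'   # runs automatically on main
    - when: manual                        # manual everywhere else
      allow_failure: true                 # so a skipped run doesn't block the pipeline
  script: echo "This pipeline can no longer be auto-canceled."
```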

Git fetch caching

Because GitLab.com uses the pack-objects cache, concurrent Git fetches of the same pipeline ref are deduplicated on the Gitaly server (always) and served from cache (when available).

This works well for the following reasons:

  • The pack-objects cache is enabled on all Gitaly servers on GitLab.com.
  • The CI/CD Git strategy setting for gitlab-org/gitlab is Git clone, causing all jobs to fetch the same data, which maximizes the cache hit ratio.
  • We use shallow clone to avoid downloading the full Git history for every job.

Caching strategy

  1. All jobs must only pull caches by default.
  2. All jobs must be able to pass with an empty cache. In other words, caches are only there to speed up jobs.
  3. We currently have several different cache definitions defined in .gitlab/ci/global.gitlab-ci.yml , with fixed keys:
    • .setup-test-env-cache
    • .ruby-cache
    • .rails-cache
    • .static-analysis-cache
    • .rubocop-cache
    • .coverage-cache
    • .danger-review-cache
    • .qa-cache
    • .yarn-cache
    • .assets-compile-cache (the key includes a variable, so it's actually two different caches).
  4. These cache definitions are composed of multiple atomic caches.
  5. Only the following jobs, running in 2-hourly maintenance scheduled pipelines, are pushing (that is, updating) to the caches:
    • update-setup-test-env-cache , defined in .gitlab/ci/rails.gitlab-ci.yml .
    • update-gitaly-binaries-cache , defined in .gitlab/ci/rails.gitlab-ci.yml .
    • update-rubocop-cache , defined in .gitlab/ci/rails.gitlab-ci.yml .
    • update-qa-cache , defined in .gitlab/ci/qa.gitlab-ci.yml .
    • update-assets-compile-production-cache , defined in .gitlab/ci/frontend.gitlab-ci.yml .
    • update-assets-compile-test-cache , defined in .gitlab/ci/frontend.gitlab-ci.yml .
    • update-yarn-cache , defined in .gitlab/ci/frontend.gitlab-ci.yml .
    • update-storybook-yarn-cache , defined in .gitlab/ci/frontend.gitlab-ci.yml .
  6. These jobs can also be forced to run in merge requests with the pipeline:update-cache label (this can be useful to warm the caches in an MR that updates the cache keys).
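Points 1 and 2 translate into cache definitions like the following sketch (the key and paths are illustrative, not the real definitions):

```yaml
# Illustrative pull-only cache with a fixed key. Only the dedicated
# update-* jobs in scheduled pipelines push (update) this cache.
.rails-cache:
  cache:
    key: rails-v1
    paths:
      - vendor/ruby/
    policy: pull   # consumer jobs download the cache but never upload it
```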

Artifacts strategy

We limit the artifacts that are saved and retrieved by jobs to the minimum in order to reduce the upload/download time and costs, as well as the artifacts storage.

Components caching

Some external components (currently only GitLab Workhorse) of GitLab need to be built from source as a preliminary step for running tests.

In this MR, we introduced a new build-components job that:

  • runs automatically for all GitLab.com gitlab-org/gitlab scheduled pipelines
  • runs automatically for any master commit that touches the workhorse/ folder
  • is manual for MRs in GitLab.com's gitlab-org group

This job tries to download a generic package that contains GitLab Workhorse binaries needed in the GitLab test suite (under tmp/tests/gitlab-workhorse ).

  • If the package URL returns a 404:
    1. It runs scripts/setup-test-env , so that the GitLab Workhorse binaries are built.
    2. It then creates an archive containing the binaries and uploads it as a generic package.
  • Otherwise, if the package already exists, the job exits successfully.

We also changed the setup-test-env job to:

  1. First download the GitLab Workhorse generic package built and uploaded by build-components .
  2. If the package is retrieved successfully, its content is placed in the right folder (for example, tmp/tests/gitlab-workhorse ), preventing the building of the binaries when scripts/setup-test-env is run later on.
  3. If the package URL returns a 404, the behavior doesn’t change compared to the current one: the GitLab Workhorse binaries are built as part of scripts/setup-test-env .

NOTE: The version of the package is the workhorse tree SHA (for example, git rev-parse HEAD:workhorse ).
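The download-or-build logic can be sketched as follows (the package URL layout and upload command are assumptions based on the GitLab generic packages API, not the project's actual script):

```yaml
# Hypothetical sketch of the build-components logic described above.
build-components:
  script:
    - version=$(git rev-parse HEAD:workhorse)   # package version = workhorse tree SHA
    - |
      url="${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/generic/workhorse/${version}/workhorse.tar.gz"
      if curl --fail --silent --output workhorse.tar.gz "$url"; then
        echo "Package already exists; nothing to do."
      else
        scripts/setup-test-env                      # builds the Workhorse binaries
        tar czf workhorse.tar.gz tmp/tests/gitlab-workhorse
        curl --fail --header "JOB-TOKEN: ${CI_JOB_TOKEN}" \
          --upload-file workhorse.tar.gz "$url"     # publish as a generic package
      fi
```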

Pre-clone step

NOTE: We no longer use this optimization for gitlab-org/gitlab because the pack-objects cache allows Gitaly to serve the full CI/CD fetch traffic now. See Git fetch caching.

Here is my .gitlab-ci.yml file:

script1:
    only:
        refs:
            - merge_requests
            - master          
        changes:
            - script1/**/*
    script: echo 'script1 done'

script2:
    only:
        refs:
            - merge_requests
            - master
        changes:
            - script2/**/*
    script: echo 'script2 done'

I want script1 to run whenever there is a change in the script1 directory, and likewise script2 for the script2 directory. I tested with a change in script1, a change in script2, changes in both directories, and no changes in either directory.

On the merge request Overview tab, GitLab shows the message:

Could not retrieve the pipeline status. For troubleshooting steps, read the documentation.

[Screenshot: Overview tab]

On the Pipelines tab I have a Run pipeline option. Clicking it produces the error:

An error occurred while trying to run a new pipeline for this Merge Request.

[Screenshot: Pipelines tab]

If there are no jobs to run, I want the pipeline to succeed.

1 answer

GitLab pipelines have no independent validity apart from jobs. A pipeline, by definition, consists of one or more jobs. In your case 4 above, no jobs are created. The simplest workaround you can add to your pipeline is a job that always runs:

dummyjob:
    script: exit 0 


slowko, 10 Jun 2020 at 20:57

After a release, I needed to merge the new default branch into the remaining merge requests (MRs). I wanted to automate this, so I merged with a script.

source_branch is the new default branch, v1.2.3
destiny_branch is the issue branch, issue-1234

local source_branch=$1
local destiny_branch=$2


git fetch origin
git remote set-head origin --auto
git checkout $source_branch
git reset --hard origin/$source_branch
git clean -f -d
git checkout $destiny_branch
git reset --hard origin/$destiny_branch
git merge --no-edit $source_branch
git push origin $destiny_branch

As a result, I still had to change the target branch to the new default branch v1.2.3 manually in each MR, and this error appeared:

Could not retrieve the pipeline status

The Merge button was also inactive.

The end of the error message linked to a troubleshooting page, which describes the following bug:

Bug

Merge Request pipeline statuses can’t be retrieved when the following occurs:

  1. A Merge Request is created
  2. The Merge Request is closed
  3. Changes are made in the project
  4. The Merge Request is reopened

To enable the pipeline status to be properly retrieved, close and reopen the
Merge Request again.

They suggest closing and reopening the MR, and that is what fixed it.

The Reopen merge request button appears immediately in place of Close merge request

(that is, you don't need to go into closed MRs and search for this MR)
