ERROR: Job failed: custom executor is missing RunExec

We have a running GitLab instance and are now trying to integrate GitLab CI. To do it we followed the official guide and installed gitlab-runner successfully. But jobs fail with:

Running with gitlab-runner 13.1.0 (6214287e) on testcicd xxxxxx
Preparing the "custom" executor
WARNING: custom executor is missing RunExec
ERROR: Job failed: custom executor is missing RunExec

My .gitlab-ci.yml

build1:
  tags: 
    - ci
  stage: build
  script:
    - echo "Do your build here"

  • continuous-integration
  • gitlab
  • continuous-deployment

asked Jun 30, 2020 at 8:34 by Sachith Muhandiram

  • It looks like there is an issue with the gitlab-runner using the custom executor. I’m not sure if you want to be using the custom executor unless you have a very specific scenario, but it seems like you’re missing some configuration for it?

    Jun 30, 2020 at 10:41


[SOLVED] Gitlabrunner: custom executor is …

As Tomasz pointed out, the gitlab-runner exec command does not use the configuration file, so all of the configuration needs to be passed as command arguments:

gitlab-runner exec custom --builds-dir "/builds" --cache-dir "/cache" --custom-run-exec "/opt/lxd-executor/run.sh" --custom-prepare-exec "/opt/lxd-executor/prepare.sh" --custom-cleanup-exec "/opt/lxd-executor/cleanup.sh" test

Custom executor is missing RunExec (#4570) · Issues

$ gitlab-runner exec custom test
Runtime platform arch=amd64 os=linux pid=25524 revision=de7731dd version=12.1.0
Running with gitlab-runner 12.1.0 (de7731dd)
WARNING: …


A practical guide to GitLab Runner Custom Executor drivers

The Custom Executor works as a bridge between GitLab Runner and a set of binaries or scripts (aka executables) you must develop to set up and use your CI/CD environment.

Gitlabrunner shell executor: docker: command not found

UPDATE: I've worked my way around the problem for now by simply installing a gitlab-runner on the host by following https://docs.gitlab.com/runner/install/linux …

Using LXD with the Custom executor GitLab

Using LXD with the Custom executor. In this example, we use LXD to create a container per build and clean it up afterwards. This example uses a bash script for each stage. You can …

Unable to upload artifacts with custom executor GitLab

As I understand the documentation of GitLab, we have to install gitlab-runner in our guests to support artifact & cache handling. But I still get the error "Missing gitlab-runner. Uploading …"

A Brief Guide to GitLab CI Runners and Executors Medium

This executor runs the jobs directly on the machine where the runner has been installed. It is as simple as it sounds. This is more or less similar to how Jenkins would …

A practical guide to GitLab Runner Custom Executor drivers

A practical guide to GitLab Runner Custom Executor drivers. In my brand new blog post, I dig into GitLab Runner Custom Executor drivers. I used a developer-to-developer approach to …

The Custom executor GitLab (Creationline, Inc.)

The Custom executor. Introduced in GitLab Runner 12.1. GitLab Runner provides the Custom executor for environments that it doesn't support natively, for example, Podman or Libvirt. …

GitLab Runner getting started and common problems (Jianshu)

Install GitLab Runner. This tutorial's installation environment is Ubuntu 18.04. Run the following command to add the official GitLab repository: curl -L https://packages.gitlab.com/install/repositories/runner/gitlab …

Support for custom executors · Issue #689 · actions/runner

GitLab has a nice feature where you can easily create a custom executor; it's documented at https://docs.gitlab.com/runner/executors/custom.html. This allows one to …


Frequently Asked Questions

What is GitLab Runner and custom executor drivers?

Generated scripts are another key concept of GitLab Runner and hence Custom Executor drivers. Users describe their CI jobs as lists of commands, and GitLab Runner adds some others when executing the Run stage.

What is prepare_exec in GitLab Runner?

GitLab Runner will execute the executable that is specified in prepare_exec . This is responsible for setting up the environment (for example, creating the virtual machine or container, or anything else). After this is done, we expect that the environment is ready to run the job. This stage is executed only once, in a job execution.

What is run_args in GitLab Runner?

The path to the script that GitLab Runner creates for the Custom executor to run. Name of the stage. If you have run_args defined, they are the first set of arguments passed to the run_exec executable, then GitLab Runner adds others. For example, suppose we have the following config.toml:

What is a set of executables called in GitLab?

This set of executables is called a Driver. GitLab Runner and the Custom Executor are fairly well documented, as you can see in the links provided so far, which is pretty useful for those who want to develop their own drivers.


GitLab CI Shell Executor failing builds with ERROR: Job failed: exit status 1

Tonight I’ve been playing around with GitLab CI’s shell executor, and my builds have been failing with this error:

After some searching online it appeared that, similar to when receiving No Such Directory, a comment noted it's an issue with SKEL: the solution was to delete .bash_logout from the gitlab-runner user's home directory, but I also removed .bashrc and .profile.

How to work around `ERROR: Job failed: exit status 1` errors with GitLab CI’s shell executor.
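A minimal sketch of that workaround (the RUNNER_HOME path is an assumption; adjust it to your runner user's actual home directory):

```shell
# Hypothetical sketch: remove the skel-provided dotfiles from the
# gitlab-runner user's home so the shell executor's login shell starts
# cleanly. rm -f silently ignores files that are already absent.
RUNNER_HOME="${RUNNER_HOME:-/home/gitlab-runner}"
rm -f "$RUNNER_HOME/.bash_logout" "$RUNNER_HOME/.bashrc" "$RUNNER_HOME/.profile"
```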

Written by Jamie Tanna on Wed, 03 Jun 2020 21:13:41 BST , and last updated on Wed, 02 Mar 2022 13:34:19 UTC .

Content for this article is shared under the terms of the Creative Commons Attribution Non Commercial Share Alike 4.0 International, and code is shared under the Apache License 2.0.



Gitlab Runner errors and solutions

Error 1) open /root/.ssh/known_hosts: no such file or directory

Solution:

Follow these steps to resolve it:

1) Log in to the GitLab instance via SSH.
2) Become root via sudo.
3) Connect from the GitLab instance to the host the runner is trying to reach. The SSH user should match the one the GitLab runner uses; it will ask for the password and then to accept the key fingerprint.

Now try to run the job with the runner again. It should work.

Error 2) Job failed: prepare environment: Process exited with status 1.

If you are getting the following error when running a GitLab CI/CD job via gitlab-runner:

Job failed: prepare environment: Process exited with status 1.

Solution:

Switch to the gitlab-runner user's home directory and delete the following files if they exist: .bash_logout, .bashrc, .profile. (This is the same SKEL issue described in the section above.)

Try to re-run the jobs; it should work now.

Error 3) handshake failed: knownhosts: key is unknown


Solution:

Solution A

Verify your login credentials

Solution B

Verify that SSH port is open

Solution C

Edit your runner and add disable_strict_host_key_checking = true
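A sketch of where that key goes in config.toml (assuming an SSH-executor runner; host and user are placeholders, and the exact section layout should be checked against your Runner version's docs):

```toml
[[runners]]
  executor = "ssh"
  [runners.ssh]
    host = "example.com"      # placeholder
    user = "deploy"           # placeholder
    disable_strict_host_key_checking = true
```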

Then restart the gitlab-runner service (for example, sudo gitlab-runner restart).

Solution D

If you’re using WHM as your hosting control panel, enable following settings:



The Custom executor

Introduced in GitLab Runner 12.1

GitLab Runner provides the Custom executor for environments that it doesn’t support natively, for example, Podman or Libvirt.

This gives you the control to create your own executor by configuring GitLab Runner to use some executable to provision, run, and clean up your environment.

The scripts you configure for the custom executor are called Drivers . For example, you could create a Podman driver, an LXD driver or a Libvirt driver.

Limitations

Below are some current limitations when using the Custom executor:

  • No support for services . See #4358 for more details.
  • No Interactive Web Terminal support.

Configuration

There are a few configuration keys that you can choose from. Some of them are optional.

Below is an example of configuration for the Custom executor using all available configuration keys:
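A representative sketch of such a config.toml (the URL, token, and all paths are placeholders; key names follow the ones this page mentions, and exact spellings can vary between Runner versions):

```toml
[[runners]]
  name = "custom-runner"
  url = "https://gitlab.example.com/"   # placeholder
  token = "TOKEN"                       # placeholder
  executor = "custom"
  builds_dir = "/builds"                # required for the custom executor
  cache_dir = "/cache"                  # required for the custom executor
  [runners.custom]
    config_exec = "/path/to/config"
    config_exec_args = [ "Arg1", "Arg2" ]
    config_exec_timeout = 200
    prepare_exec = "/path/to/prepare"
    prepare_exec_args = [ "Arg1", "Arg2" ]
    prepare_exec_timeout = 200
    run_exec = "/path/to/run"
    run_args = [ "Arg1", "Arg2" ]
    cleanup_exec = "/path/to/cleanup"
    cleanup_exec_args = [ "Arg1", "Arg2" ]
    cleanup_exec_timeout = 200
```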

For field definitions and which ones are required, see [runners.custom] section configuration.

In addition, both builds_dir and cache_dir inside [[runners]] are required fields.

Prerequisite software for running a Job

The user must set up the environment, including the following that must be present in the PATH :

  • Git: Used to clone the repositories.
  • Git LFS: Pulls any LFS objects that might be in the repository.
  • GitLab Runner: Used to download/update artifacts and cache.

Stages

The Custom executor provides stages for you to configure some details of the job, prepare and clean up the environment, and run the job script within it. Each stage is responsible for specific things and has its own caveats.

Each stage executed by the Custom executor runs at the point where a built-in GitLab Runner executor would run it.

For each step that will be executed, specific environment variables are exposed to the executable, which can be used to get information about the specific Job that is running. All stages will have the following environment variables available to them:

  • Standard CI/CD environment variables, including predefined variables.
  • All environment variables provided by the Custom Runner host system.

Both CI/CD environment variables and predefined variables are prefixed with CUSTOM_ENV_ to prevent conflicts with system environment variables. For example, CI_BUILDS_DIR will be available as CUSTOM_ENV_CI_BUILDS_DIR .
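As an illustration (a sketch, not from the original), a small bash helper that resolves a CI variable through its CUSTOM_ENV_ prefix:

```shell
# Hypothetical helper: drivers see CI/CD variables with a CUSTOM_ENV_
# prefix, e.g. CI_BUILDS_DIR arrives as CUSTOM_ENV_CI_BUILDS_DIR.
job_var() {
  local name="CUSTOM_ENV_$1"
  printf '%s\n' "${!name}"   # bash indirect expansion
}

# Simulate what the Runner would export for the driver:
export CUSTOM_ENV_CI_BUILDS_DIR=/builds
job_var CI_BUILDS_DIR
```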

The stages run in the following sequence:

  1. config_exec
  2. prepare_exec
  3. run_exec
  4. cleanup_exec

Config

The Config stage is executed by config_exec .

Sometimes you might want to set some settings at execution time, for example setting a build directory depending on the project ID. GitLab Runner reads the STDOUT of config_exec and expects a valid JSON string with specific keys.

Any additional keys inside of the JSON string will be ignored. If it’s not a valid JSON string the stage will fail and be retried two more times.
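A sketch of a config_exec driver (all values are placeholders): it simply prints the settings JSON on STDOUT for the Runner to parse:

```shell
# Hypothetical config_exec driver: GitLab Runner parses this JSON from
# STDOUT. Extra keys are ignored; invalid JSON fails the stage.
config_exec_json() {
  cat <<'JSON'
{
  "builds_dir": "/builds",
  "cache_dir": "/cache",
  "builds_dir_is_shared": true,
  "driver": { "name": "example-driver", "version": "v0.1.0" }
}
JSON
}
config_exec_json
```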

Parameter | Type | Description
builds_dir | string | The base directory where the working directory of the job will be created.
cache_dir | string | The base directory where the local cache will be stored.
builds_dir_is_shared | bool | Defines whether the environment is shared between concurrent jobs or not.
hostname | string | The hostname to associate with the job's "metadata" stored by the Runner. If undefined, the hostname is not set.
driver.name | string | The user-defined name for the driver. Printed with the Using custom executor... line. If undefined, no driver information is printed.
driver.version | string | The user-defined version for the driver. Printed with the Using custom executor... line. If undefined, only the name is printed.

The STDERR of the executable will print to the job log.

The user can set config_exec_timeout if they want to set a deadline for how long GitLab Runner should wait to return the JSON string before terminating the process.

If any of the config_exec_args are defined, these will be added in order to the executable defined in config_exec . For example we have the config.toml content below:
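The referenced config.toml fragment can be sketched as (the path is a placeholder):

```toml
[[runners]]
  [runners.custom]
    config_exec = "/path/to/config"
    config_exec_args = [ "Arg1", "Arg2" ]
```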

GitLab Runner would execute it as /path/to/config Arg1 Arg2 .

Prepare

The Prepare stage is executed by prepare_exec .

At this point, GitLab Runner knows everything about the job (where and how it’s going to run). The only thing left is for the environment to be set up so the job can run. GitLab Runner will execute the executable that is specified in prepare_exec .

This is responsible for setting up the environment (for example, creating the virtual machine or container, or anything else). After this is done, we expect that the environment is ready to run the job.

This stage is executed only once, in a job execution.

The user can set prepare_exec_timeout if they want to set a deadline for how long GitLab Runner should wait to prepare the environment before terminating the process.

The STDOUT and STDERR returned from this executable will print to the job log.

If any of the prepare_exec_args are defined, these will be added in order to the executable defined in prepare_exec . For example we have the config.toml content below:
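The referenced config.toml fragment can be sketched as (the path is a placeholder):

```toml
[[runners]]
  [runners.custom]
    prepare_exec = "/path/to/bin"
    prepare_exec_args = [ "Arg1", "Arg2" ]
```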

GitLab Runner would execute it as /path/to/bin Arg1 Arg2 .

The Run stage is executed by run_exec .

The STDOUT and STDERR returned from this executable will print to the job log.

Unlike the other stages, the run_exec stage is executed multiple times, since it’s split into sub stages listed below in sequential order:

  1. prepare_script
  2. get_sources
  3. restore_cache
  4. download_artifacts
  5. step_*
  6. build_script
  7. step_*
  8. after_script
  9. archive_cache
  10. upload_artifacts_on_success OR upload_artifacts_on_failure

For each stage mentioned above, the run_exec executable will be executed with:

  • The usual environment variables.
  • Two arguments:
    • The path to the script that GitLab Runner creates for the Custom executor to run.
    • Name of the stage.

If you have run_args defined, they are the first set of arguments passed to the run_exec executable, then GitLab Runner adds others. For example, suppose we have the following config.toml :

GitLab Runner will execute the executable with the following arguments:
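The referenced config.toml fragment can be sketched as (the path is a placeholder):

```toml
[[runners]]
  [runners.custom]
    run_exec = "/path/to/run"
    run_args = [ "Arg1", "Arg2" ]
```

With this configuration, for the prepare_script sub-stage the invocation would look roughly like /path/to/run Arg1 Arg2 <generated-script-path> prepare_script, where the script path is generated by GitLab Runner.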

This executable should be responsible for executing the scripts specified in the first argument. They contain all the scripts any GitLab Runner executor would normally run to clone, download artifacts, run user scripts, and perform the other steps described below. The scripts can be written for any of the following shells:

  • Bash
  • PowerShell Desktop
  • PowerShell Core
  • Batch (deprecated)

We generate the script using the shell configured by shell inside of [[runners]]. If none is provided, the defaults for the OS platform are used.

The table below is a detailed explanation of what each script does and what the main goal of that script is.

Script Name Script Contents
prepare_script Simple debug information which machine the Job is running on.
get_sources Prepares the Git configuration, and clone/fetch the repository. We suggest you keep this as is since you get all of the benefits of Git strategies that GitLab provides.
restore_cache Extract the cache if any are defined. This expects the gitlab-runner binary is available in $PATH .
download_artifacts Download artifacts, if any are defined. This expects gitlab-runner binary is available in $PATH .
step_* Generated by GitLab. A set of scripts to execute. It may never be sent to the custom executor. It may have multiple steps, like step_release and step_accessibility . This can be a feature from the .gitlab-ci.yml file.
build_script A combination of before_script and script . In GitLab Runner 14.0 and later, build_script will be replaced with step_script . For more information, see this issue.
after_script This is the after_script defined from the job. This is always called even if any of the previous steps failed.
archive_cache Will create an archive of all the cache, if any are defined.
upload_artifacts_on_success Upload any artifacts that are defined. Only executed when build_script was successful.
upload_artifacts_on_failure Upload any artifacts that are defined. Only executed when build_script fails.

Cleanup

The Cleanup stage is executed by cleanup_exec .

This final stage is executed even if one of the previous stages failed. The main goal for this stage is to clean up any of the environments that might have been set up. For example, turning off VMs or deleting containers.

The result of cleanup_exec does not affect job statuses. For example, a job will be marked as successful even if the following occurs:

  • Both prepare_exec and run_exec are successful.
  • cleanup_exec fails.

The user can set cleanup_exec_timeout if they want to set some kind of deadline of how long GitLab Runner should wait to clean up the environment before terminating the process.

The STDOUT of this executable will be printed to GitLab Runner logs at a DEBUG level. The STDERR will be printed to the logs at a WARN level.

If any of the cleanup_exec_args are defined, these will be added in order to the executable defined in cleanup_exec . For example we have the config.toml content below:
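The referenced config.toml fragment can be sketched as (the path is a placeholder):

```toml
[[runners]]
  [runners.custom]
    cleanup_exec = "/path/to/bin"
    cleanup_exec_args = [ "Arg1", "Arg2" ]
```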

GitLab Runner would execute it as /path/to/bin Arg1 Arg2 .

Terminating and killing executables

GitLab Runner will try to gracefully terminate an executable under any of the following conditions:

  • config_exec_timeout , prepare_exec_timeout or cleanup_exec_timeout are met.
  • The job times out.
  • The job is cancelled.

When a timeout is reached, a SIGTERM is sent to the executable, and the countdown for exec_terminate_timeout starts. The executable should listen to this signal to make sure it cleans up any resources. If exec_terminate_timeout passes and the process is still running, a SIGKILL is sent to kill the process and exec_force_kill_timeout will start. If the process is still running after exec_force_kill_timeout has finished, GitLab Runner will abandon the process and will not try to stop/kill anymore. If both these timeouts are reached during config_exec , prepare_exec or run_exec the build is marked as failed.
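As a sketch (not from the original), a driver can trap SIGTERM to release its resources before the Runner escalates to SIGKILL; here the signal is self-sent so the example is runnable:

```shell
# Hypothetical driver body: trap SIGTERM, clean up, and exit. GitLab
# Runner sends SIGTERM first and SIGKILL after exec_terminate_timeout;
# 143 is the conventional exit code for a SIGTERM-driven exit.
driver_body='
trap "echo cleaning up; exit 143" TERM
kill -TERM $$   # stand-in for GitLab Runner terminating the driver
sleep 10        # never reached; the trap fires during the wait
'
out=$(bash -c "$driver_body") || true
echo "$out"
```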

As of GitLab 13.1, any child process spawned by the driver also receives the graceful termination process explained above on UNIX-based systems. This is achieved by making the main process a process group leader to which all the child processes belong.

Error handling

There are two types of errors that GitLab Runner can handle differently. These errors are only handled when the executable inside of config_exec, prepare_exec, run_exec, or cleanup_exec exits with these codes. If the user script exits with a non-zero code, it should be propagated to the executable's exit code as one of the error codes below.

Build Failure

GitLab Runner provides the BUILD_FAILURE_EXIT_CODE environment variable, which should be used by the executable as an exit code to inform GitLab Runner that there is a failure in the user's job. If the executable exits with the code from BUILD_FAILURE_EXIT_CODE, the build is marked as failed appropriately in GitLab CI.

If the script that the user defines inside of .gitlab-ci.yml file exits with a non-zero code, run_exec should exit with BUILD_FAILURE_EXIT_CODE value.
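A sketch of that propagation (run_stage is a hypothetical helper name; the two arguments mirror what run_exec receives):

```shell
# Hypothetical run_exec sketch: run the generated script; if the user's
# script fails, surface BUILD_FAILURE_EXIT_CODE so GitLab Runner marks
# the job (not the system) as failed.
run_stage() {
  local script="$1" stage="$2"   # the two arguments run_exec receives
  if ! bash "$script"; then
    return "${BUILD_FAILURE_EXIT_CODE:-1}"
  fi
}
```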

System Failure

You can send a system failure to GitLab Runner by exiting the process with the error code specified in SYSTEM_FAILURE_EXIT_CODE. If this error code is returned, GitLab Runner retries certain stages; if none of the retries are successful, the job is marked as failed.

Below is a table of which stages are retried and how many times.

Stage Name Number of attempts Duration to wait between each retry
prepare_exec 3 3 seconds
get_sources Value of GET_SOURCES_ATTEMPTS variable. (Default 1) 0 seconds
restore_cache Value of RESTORE_CACHE_ATTEMPTS variable. (Default 1) 0 seconds
download_artifacts Value of ARTIFACT_DOWNLOAD_ATTEMPTS variable. (Default 1) 0 seconds
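A sketch of signaling a system failure from a prepare stage (prepare_env and provision are hypothetical names):

```shell
# Hypothetical prepare_exec sketch: if provisioning the environment
# fails, surface SYSTEM_FAILURE_EXIT_CODE so the Runner retries the
# stage (prepare_exec: 3 attempts, 3 seconds apart, per the table above).
prepare_env() {
  provision() { return 1; }   # stand-in for creating the VM/container
  if ! provision; then
    return "${SYSTEM_FAILURE_EXIT_CODE:-2}"
  fi
}
```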

Driver examples

A set of example drivers using the Custom executor can be found in the examples page.



PrepareExec: «echo», PrepareArgs: []string{«test»}, }), assertCommandFactory: func( t *testing.T, tt executorTestCase, ctx context.Context, executable string, args []string, options process.CommandOptions, ) { assert.Equal(t, tt.config.Custom.PrepareExec, executable) assert.Equal(t, tt.config.Custom.PrepareArgs, args) }, assertOutput: func(t *testing.T, output string) { assert.Contains(t, output, «Using Custom executor…») }, }, «custom executor set with PrepareExec with error»: { config: getRunnerConfig(&common.CustomConfig{ RunExec: «bash», PrepareExec: «echo», PrepareArgs: []string{«test»}, }), commandErr: errors.New(«test-error»), assertCommandFactory: func( t *testing.T, tt executorTestCase, ctx context.Context, executable string, args []string, options process.CommandOptions, ) { assert.Equal(t, tt.config.Custom.PrepareExec, executable) assert.Equal(t, tt.config.Custom.PrepareArgs, args) }, assertOutput: func(t *testing.T, output string) { assert.Contains(t, output, «Using Custom executor…») }, expectedError: «test-error», }, «custom executor set with valid job_env»: { config: getRunnerConfig(&common.CustomConfig{ RunExec: «bash», ConfigExec: «echo», }), commandStdoutContent: `{ «builds_dir»: «/some/build/directory», «job_env»: { «FOO»: «Hello», «BAR»: «World» } }`, commandErr: nil, assertCommandFactory: func( t *testing.T, tt executorTestCase, ctx context.Context, executable string, args []string, options process.CommandOptions, ) { assert.Equal(t, tt.config.Custom.ConfigExec, executable) }, assertBuild: func(t *testing.T, b *common.Build) { assert.Equal(t, «/some/build/directory/project-0», b.BuildDir) }, assertExecutor: func(t *testing.T, e *executor) { assert.Len(t, e.jobEnv, 2) require.Contains(t, e.jobEnv, «FOO») assert.Equal(t, «Hello», e.jobEnv[«FOO»]) require.Contains(t, e.jobEnv, «BAR») assert.Equal(t, «World», e.jobEnv[«BAR»]) }, }, «custom executor set with valid job_env, verify variable order and prefix»: { config: 
getRunnerConfig(&common.CustomConfig{ RunExec: «run-executable», ConfigExec: «config-executable», PrepareExec: «prepare-executable», PrepareArgs: []string{«test»}, }), commandStdoutContent: `{ «builds_dir»: «/some/build/directory», «job_env»: { «FOO»: «Hello» } }`, commandErr: nil, assertCommandFactory: func( t *testing.T, tt executorTestCase, ctx context.Context, executable string, args []string, options process.CommandOptions, ) { if executable != «prepare-executable» { return } require.True(t, len(options.Env) >= 2, «options.Env must contain 2 elements or more») assert.Equal(t, «FOO=Hello», options.Env[0], «first env var must be FOO») assert.True( t, strings.HasPrefix(options.Env[1], «CUSTOM_ENV_»), «must be followed by CUSTOM_ENV_* variables», ) }, }, } for testName, tt := range tests { t.Run(testName, func(t *testing.T) { defer mockCommandFactory(t, tt)() e, options, out := prepareExecutor(t, tt) err := e.Prepare(options) assertOutput(t, tt, out) if tt.assertBuild != nil { tt.assertBuild(t, e.Build) } if tt.assertExecutor != nil { tt.assertExecutor(t, e) } if tt.expectedError == «» { assert.NoError(t, err) return } assert.EqualError(t, err, tt.expectedError) }) } } func TestExecutor_Cleanup(t *testing.T) { tests := map[string]executorTestCase{ «custom executor not set»: { config: getRunnerConfig(nil), assertOutput: func(t *testing.T, output string) { assert.Contains(t, output, «custom executor not configured») }, doNotMockCommandFactory: true, }, «custom executor set without RunExec»: { config: getRunnerConfig(&common.CustomConfig{}), assertOutput: func(t *testing.T, output string) { assert.Contains(t, output, «custom executor is missing RunExec») }, doNotMockCommandFactory: true, }, «custom executor set»: { config: getRunnerConfig(&common.CustomConfig{ RunExec: «bash», }), doNotMockCommandFactory: true, }, «custom executor set with CleanupExec»: { config: getRunnerConfig(&common.CustomConfig{ RunExec: «bash», CleanupExec: «echo», CleanupArgs: 
[]string{«test»}, }), assertCommandFactory: func( t *testing.T, tt executorTestCase, ctx context.Context, executable string, args []string, options process.CommandOptions, ) { assert.Equal(t, tt.config.Custom.CleanupExec, executable) assert.Equal(t, tt.config.Custom.CleanupArgs, args) }, assertOutput: func(t *testing.T, output string) { assert.NotContains(t, output, «WARNING: Cleanup script failed:») }, }, «custom executor set with CleanupExec with error»: { config: getRunnerConfig(&common.CustomConfig{ RunExec: «bash», CleanupExec: «unknown», }), commandStdoutContent: «some output message in commands output», commandStderrContent: «some error message in commands output», commandErr: errors.New(«test-error»), assertCommandFactory: func( t *testing.T, tt executorTestCase, ctx context.Context, executable string, args []string, options process.CommandOptions, ) { assert.Equal(t, tt.config.Custom.CleanupExec, executable) }, assertOutput: func(t *testing.T, output string) { assert.Contains(t, output, «WARNING: Cleanup script failed: test-error») }, }, «custom executor set with valid job_env, verify variable order and prefix»: { config: getRunnerConfig(&common.CustomConfig{ RunExec: «bash», CleanupExec: «echo», CleanupArgs: []string{«test»}, }), adjustExecutor: func(t *testing.T, e *executor) { e.jobEnv = map[string]string{«FOO»: «Hello»} }, assertCommandFactory: func( t *testing.T, tt executorTestCase, ctx context.Context, executable string, args []string, options process.CommandOptions, ) { require.True(t, len(options.Env) >= 2, «options.Env must contain 2 elements or more») assert.Equal(t, «FOO=Hello», options.Env[0], «first env var must be FOO») assert.True( t, strings.HasPrefix(options.Env[1], «CUSTOM_ENV_»), «must be followed by CUSTOM_ENV_* variables», ) }, }, } for testName, tt := range tests { t.Run(testName, func(t *testing.T) { defer mockCommandFactory(t, tt)() e, out := prepareExecutorForCleanup(t, tt) if tt.adjustExecutor != nil { tt.adjustExecutor(t, e) } 
e.Cleanup() assertOutput(t, tt, out) }) } } func TestExecutor_Run(t *testing.T) { tests := map[string]executorTestCase{ «Run fails on tempdir operations»: { config: getRunnerConfig(&common.CustomConfig{ RunExec: «bash», }), doNotMockCommandFactory: true, adjustExecutor: func(t *testing.T, e *executor) { curDir, err := os.Getwd() require.NoError(t, err) e.tempDir = filepath.Join(curDir, «unknown») }, expectedError: func() string { if runtime.GOOS == «windows» { return «The system cannot find the file specified» } return «no such file or directory» }(), }, «Run executes job»: { config: getRunnerConfig(&common.CustomConfig{ RunExec: «bash», }), assertCommandFactory: func( t *testing.T, tt executorTestCase, ctx context.Context, executable string, args []string, options process.CommandOptions, ) { assert.Equal(t, tt.config.Custom.RunExec, executable) assert.Len(t, args, 2) assert.Equal(t, «build_script», args[1]) }, }, «Run executes job with error»: { config: getRunnerConfig(&common.CustomConfig{ RunExec: «bash», CleanupExec: «unknown», }), commandErr: errors.New(«test-error»), assertCommandFactory: func( t *testing.T, tt executorTestCase, ctx context.Context, executable string, args []string, options process.CommandOptions, ) { assert.Equal(t, tt.config.Custom.RunExec, executable) }, expectedError: «test-error», }, «custom executor set with valid job_env, verify variable order and prefix»: { config: getRunnerConfig(&common.CustomConfig{ RunExec: «bash», }), adjustExecutor: func(t *testing.T, e *executor) { e.jobEnv = map[string]string{«FOO»: «Hello»} }, assertCommandFactory: func( t *testing.T, tt executorTestCase, ctx context.Context, executable string, args []string, options process.CommandOptions, ) { require.True(t, len(options.Env) >= 2, «options.Env must contain 2 elements or more») assert.Equal(t, «FOO=Hello», options.Env[0], «first env var must be FOO») assert.True( t, strings.HasPrefix(options.Env[1], «CUSTOM_ENV_»), «must be followed by CUSTOM_ENV_* 
variables», ) }, }, } for testName, tt := range tests { t.Run(testName, func(t *testing.T) { defer mockCommandFactory(t, tt)() e, options, out := prepareExecutor(t, tt) err := e.Prepare(options) require.NoError(t, err) if tt.adjustExecutor != nil { tt.adjustExecutor(t, e) } err = e.Run(common.ExecutorCommand{ Context: context.Background(), Stage: «step_script», }) assertOutput(t, tt, out) if tt.expectedError == «» { assert.NoError(t, err) return } require.Error(t, err) assert.Contains(t, err.Error(), tt.expectedError) }) } } func TestExecutor_Env(t *testing.T) { ciJobImageEnv := «CUSTOM_ENV_CI_JOB_IMAGE» runnerConfig := getRunnerConfig(&common.CustomConfig{ RunExec: «bash», PrepareExec: «echo», CleanupExec: «bash», }) //nolint:lll assertCommandFactory := func(expectedImageName string) func( t *testing.T, tt executorTestCase, ctx context.Context, executable string, args []string, options process.CommandOptions, ) { return func( t *testing.T, tt executorTestCase, ctx context.Context, executable string, args []string, options process.CommandOptions, ) { for _, env := range options.Env { pair := strings.Split(env, «=») if pair[0] == ciJobImageEnv { assert.Equal(t, expectedImageName, pair[1]) break } } } } adjustExecutorFactory := func(imageName string) func(t *testing.T, e *executor) { return func(t *testing.T, e *executor) { // the build is assumed to be non-nil across the executor codebase e.Build.Image = common.Image{Name: imageName} } } tests := map[string]executorTestCase{ «custom executor set « + ciJobImageEnv: { config: runnerConfig, adjustExecutor: adjustExecutorFactory(«test_image»), assertCommandFactory: assertCommandFactory(«test_image»), }, «custom executor set empty « + ciJobImageEnv: { config: runnerConfig, adjustExecutor: adjustExecutorFactory(«»), assertCommandFactory: assertCommandFactory(«»), }, «custom executor set expanded « + ciJobImageEnv: { config: runnerConfig, adjustExecutor: func(t *testing.T, e *executor) { e.Build.Variables = 
append(e.Build.Variables, common.JobVariable{ Key: «to_expand», Value: «expanded», }) adjustExecutorFactory(«image:$to_expand»)(t, e) }, assertCommandFactory: assertCommandFactory(«image:expanded»), }, «custom executor set no variable to expand « + ciJobImageEnv: { config: runnerConfig, adjustExecutor: adjustExecutorFactory(«image:$nothing_to_expand»), assertCommandFactory: assertCommandFactory(«image:»), }, } for tn, tt := range tests { t.Run(tn, func(t *testing.T) { defer mockCommandFactory(t, tt)() e, options, _ := prepareExecutor(t, tt) e.Config = *options.Config e.Build = options.Build e.Trace = options.Trace e.BuildLogger = common.NewBuildLogger(e.Trace, e.Build.Log()) if tt.adjustExecutor != nil { tt.adjustExecutor(t, e) } err := e.Prepare(options) assert.NoError(t, err) err = e.Run(common.ExecutorCommand{ Context: context.Background(), }) assert.NoError(t, err) e.Cleanup() }) } } func TestExecutor_ServicesEnv(t *testing.T) { const CIJobServicesEnv = «CUSTOM_ENV_CI_JOB_SERVICES» runnerConfig := getRunnerConfig(&common.CustomConfig{ RunExec: «bash», PrepareExec: «echo», CleanupExec: «bash», }) adjustExecutorServices := func(services common.Services) func(t *testing.T, e *executor) { return func(t *testing.T, e *executor) { e.Build.Services = services } } assertEnvValue := func(expectedServices []jsonService) func( t *testing.T, tt executorTestCase, ctx context.Context, executable string, args []string, options process.CommandOptions, ) { return func( t *testing.T, tt executorTestCase, ctx context.Context, executable string, args []string, options process.CommandOptions, ) { for _, env := range options.Env { pair := strings.Split(env, «=») if pair[0] == CIJobServicesEnv { expectedServicesSerialized, _ := json.Marshal(expectedServices) assert.Equal(t, string(expectedServicesSerialized), pair[1]) break } } } } assertEmptyEnv := func() func( t *testing.T, tt executorTestCase, ctx context.Context, executable string, args []string, options process.CommandOptions, ) 
{ return func( t *testing.T, tt executorTestCase, ctx context.Context, executable string, args []string, options process.CommandOptions, ) { for _, env := range options.Env { pair := strings.Split(env, «=») if pair[0] == CIJobServicesEnv { assert.Equal(t, «», pair[1]) break } } } } tests := map[string]executorTestCase{ «returns only name when service name is the only definition»: { config: runnerConfig, adjustExecutor: adjustExecutorServices(common.Services{ { Name: «ruby:latest», }, }), assertCommandFactory: assertEnvValue( []jsonService{ { Name: «ruby:latest», Alias: «», Entrypoint: nil, Command: nil, }, }, ), }, «returns full service definition»: { config: runnerConfig, adjustExecutor: adjustExecutorServices(common.Services{ { Name: «ruby:latest», Alias: «henk-ruby», Entrypoint: []string{«path», «to», «entrypoint»}, Command: []string{«path», «to», «command»}, }, }), assertCommandFactory: assertEnvValue( []jsonService{ { Name: «ruby:latest», Alias: «henk-ruby», Entrypoint: []string{«path», «to», «entrypoint»}, Command: []string{«path», «to», «command»}, }, }, ), }, «returns both simple and full service definitions»: { config: runnerConfig, adjustExecutor: adjustExecutorServices(common.Services{ { Name: «python:latest», Alias: «henk-python», Entrypoint: []string{«entrypoint.sh»}, Command: []string{«command —test»}, }, { Name: «python:alpine», }, }), assertCommandFactory: assertEnvValue( []jsonService{ { Name: «python:latest», Alias: «henk-python», Entrypoint: []string{«entrypoint.sh»}, Command: []string{«command —test»}, }, { Name: «python:alpine», Alias: «», Entrypoint: nil, Command: nil, }, }, ), }, «does not create env CI_JOB_SERVICES»: { config: runnerConfig, adjustExecutor: adjustExecutorServices(common.Services{}), assertCommandFactory: assertEmptyEnv(), }, } for tn, tt := range tests { t.Run(tn, func(t *testing.T) { defer mockCommandFactory(t, tt)() e, options, _ := prepareExecutor(t, tt) e.Config = *options.Config e.Build = options.Build e.Trace = 
options.Trace e.BuildLogger = common.NewBuildLogger(e.Trace, e.Build.Log()) if tt.adjustExecutor != nil { tt.adjustExecutor(t, e) } err := e.Prepare(options) assert.NoError(t, err) err = e.Run(common.ExecutorCommand{ Context: context.Background(), }) assert.NoError(t, err) e.Cleanup() }) } }

I'm trying to run a build through the gitlab-runner Docker executor:

$ gitlab-runner exec docker build:deb
ERRO[0000] Docker executor: prebuilt image helpers will be loaded from /var/lib/gitlab-runner. 
Running with gitlab-runner 11.2.0 (11.2.0)
Using Docker executor with image debian:buster ...
ERROR: Preparation failed: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: "gitlab-runner-cache": executable file not found in $PATH": unknown (executor_docker.go:412:0s)
Will be retried in 3s ...
Using Docker executor with image debian:buster ...
ERROR: Preparation failed: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: "gitlab-runner-cache": executable file not found in $PATH": unknown (executor_docker.go:412:0s)
Will be retried in 3s ...
Using Docker executor with image debian:buster ...
ERROR: Preparation failed: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: "gitlab-runner-cache": executable file not found in $PATH": unknown (executor_docker.go:412:0s)
Will be retried in 3s ...
ERROR: Job failed (system failure): Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: "gitlab-runner-cache": executable file not found in $PATH": unknown (executor_docker.go:412:0s)
FATAL: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: "gitlab-runner-cache": executable file not found in $PATH": unknown (executor_docker.go:412:0s)

Contents of .gitlab-ci.yml:

stages:
  - build

build:deb:
  stage: build
  image: debian:buster
  tags:
  - deb
  before_script:
  - mkdir build && cd build
  - apt install cmake
  script:
  - cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_EXPORT_COMPILE_COMMANDS=On -DRA_STATIC_LINK=ON ..
  - cmake --build . -- -j 8
  artifacts:
    paths:
    - build/run

How do I fix this error?

GitLab CI Shell Executor failing builds with ERROR: Job failed: exit status 1

Tonight I’ve been playing around with GitLab CI’s shell executor, and my builds have been failing with this error:

After some searching online, it appeared that (similar to the No Such Directory error) this comment pointed to an issue with SKEL files: the suggested solution was to delete .bash_logout from the gitlab-runner user's home directory, though I also removed .bashrc and .profile .
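
In practice, the workaround described above looks something like this (a sketch; the home directory path is an assumption, so adjust RUNNER_HOME for your installation):

```shell
#!/bin/sh
# Remove the SKEL-provided login dotfiles that break the shell executor.
# RUNNER_HOME is a placeholder default; override it if the gitlab-runner
# user's home directory differs on your system.
RUNNER_HOME="${RUNNER_HOME:-/home/gitlab-runner}"
for f in .bash_logout .bashrc .profile; do
  rm -f "${RUNNER_HOME}/${f}"
done
echo "Cleaned login dotfiles in ${RUNNER_HOME}"
```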

How to work around `ERROR: Job failed: exit status 1` errors with GitLab CI’s shell executor.

Written by Jamie Tanna on Wed, 03 Jun 2020 21:13:41 BST , and last updated on Wed, 02 Mar 2022 13:34:19 UTC .

Content for this article is shared under the terms of the Creative Commons Attribution Non Commercial Share Alike 4.0 International, and code is shared under the Apache License 2.0.



[SOLVED] Gitlabrunner: custom executor is missing …

WARNING: custom executor is missing RunExec ERROR: Job failed: custom executor is missing RunExec FATAL: custom executor is missing RunExec

Custom executor is missing RunExec (#4570) · Issues

$ gitlab-runner exec custom test Runtime platform arch=amd64 os=linux pid=25524 revision=de7731dd version=12.1.0 Running with gitlab-runner 12.1.0 (de7731dd) …

GitLab CI does not working : custom executor is missing …

And installed gitlab-runner successfully. Running with gitlab-runner 13.1.0 (6214287e) on testcicd xxxxxx Preparing the "custom" executor WARNING: custom

The Custom executor GitLab

Documentation for GitLab Community Edition, GitLab Enterprise Edition, Omnibus GitLab, and GitLab Runner.

A practical guide to GitLab Runner …

Generated scripts are another key concept of GitLab Runner and hence Custom Executor drivers. Users describe their CI jobs as lists of commands, and GitLab Runner adds some others when executing the Run stage.

Using LXD with the Custom executor GitLab

Using LXD with the Custom executor. In this example, we use LXD to create a container per build and clean it up afterwards. This example uses a bash script for each stage.

Invalid executor specified custom executor (#6619) GitLab

Currently i have problem to register the custom executor Files: base.sh cleanup.sh prepare.sh run.sh Running Containers before gitlab runner register

Gitlab-ci-runner choose executor "Please enter the …

1 Answer. By choosing ‘docker’ as the CI runner executor a docker container will be created to run the builds. If you are unsure about which executor to use …

A practical guide to GitLab Runner Custom Executor drivers

A practical guide to GitLab Runner Custom Executor drivers. In my brand new blog post, I dig into GitLab Runner Custom Executor drivers. I used a developer-to-developer …

A Brief Guide to GitLab CI Runners and Executors Medium

The GitLab Runner receives instructions from the GitLab server in regards to which jobs to run. Each runner must be registered with the GitLab server.

The Custom executor GitLab クリエーションライン株式会社

The Custom executor. GitLab Runner provides the Custom executor for environments that it doesn’t support natively, for example, Podman or Libvirt. This gives you the control …

Support for custom executors · Issue #689 · actions/runner

This allows one to easily support custom container runtimes such as podman, plain runc, singularity, or whatever container runtime you can think of. At its …


Frequently Asked Questions

What is GitLab Runner and custom executor drivers?

Generated scripts are another key concept of GitLab Runner and hence Custom Executor drivers. Users describe their CI jobs as lists of commands, and GitLab Runner adds some others when executing the Run stage.

What is prepare_exec in GitLab Runner?

GitLab Runner will execute the executable that is specified in prepare_exec . This is responsible for setting up the environment (for example, creating the virtual machine or container, or anything else). After this is done, we expect that the environment is ready to run the job. This stage is executed only once, in a job execution.

What is run_args in GitLab Runner?

The path to the script that GitLab Runner creates for the Custom executor to run. Name of the stage. If you have run_args defined, they are the first set of arguments passed to the run_exec executable, then GitLab Runner adds others. For example, suppose we have the following config.toml:

What is a set of executables called in GitLab?

This set of executables is called a Driver. GitLab Runner and the Custom executor are fairly well documented, as you can see in the links provided so far, which is useful for those who want to develop their own drivers.


The Custom executor

Introduced in GitLab Runner 12.1

GitLab Runner provides the Custom executor for environments that it doesn’t support natively, for example, Podman or Libvirt.

This gives you the control to create your own executor by configuring GitLab Runner to use some executable to provision, run, and clean up your environment.

The scripts you configure for the custom executor are called Drivers . For example, you could create a Podman driver, an LXD driver or a Libvirt driver.

Limitations

Below are some current limitations when using the Custom executor:

  • No support for services . See #4358 for more details.
  • No Interactive Web Terminal support.

Configuration

There are a few configuration keys that you can choose from. Some of them are optional.

Below is an example of configuration for the Custom executor using all available configuration keys:
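
A configuration of this shape, with placeholder executable paths and timeout values (only run_exec is strictly required), could look like:

```toml
[runners.custom]
  config_exec = "/path/to/config"
  config_exec_args = [ "Arg1", "Arg2" ]
  config_exec_timeout = 200

  prepare_exec = "/path/to/prepare"
  prepare_exec_args = [ "Arg1", "Arg2" ]
  prepare_exec_timeout = 200

  # run_exec is the only required key; a missing value produces the
  # "custom executor is missing RunExec" error.
  run_exec = "/path/to/run"
  run_args = [ "Arg1", "Arg2" ]

  cleanup_exec = "/path/to/cleanup"
  cleanup_args = [ "Arg1", "Arg2" ]
  cleanup_exec_timeout = 200
```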

For field definitions and which ones are required, see [runners.custom] section configuration.

In addition both builds_dir and cache_dir inside of the [[runners]] are required fields.

Prerequisite software for running a Job

The user must set up the environment, including the following tools, which must be present in the PATH :

  • Git: Used to clone the repositories.
  • Git LFS: Pulls any LFS objects that might be in the repository.
  • GitLab Runner: Used to download/update artifacts and cache.
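
As a quick sanity check, a driver could verify that these tools are reachable before running a job (a sketch; the binary names are the standard ones, adjust as needed):

```shell
#!/bin/sh
# Check that the tools GitLab Runner's generated scripts rely on are in PATH.
# Collect the report in a variable so it can be logged or inspected later.
REPORT=$(for bin in git git-lfs gitlab-runner; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "found: $bin"
  else
    echo "missing: $bin"
  fi
done)
echo "$REPORT"
```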

Stages

The Custom executor provides stages that let you configure details of the job, prepare and clean up the environment, and run the job script within it. Each stage is responsible for a specific part of the job lifecycle and has its own considerations.

Each stage executed by the Custom executor runs at the same point in the job lifecycle at which a built-in GitLab Runner executor would execute it.

For each step that will be executed, specific environment variables are exposed to the executable, which can be used to get information about the specific Job that is running. All stages will have the following environment variables available to them:

  • Standard CI/CD environment variables, including predefined variables.
  • All environment variables provided by the Custom Runner host system.

Both CI/CD environment variables and predefined variables are prefixed with CUSTOM_ENV_ to prevent conflicts with system environment variables. For example, CI_BUILDS_DIR will be available as CUSTOM_ENV_CI_BUILDS_DIR .

The stages run in the following sequence:

  1. config_exec
  2. prepare_exec
  3. run_exec
  4. cleanup_exec

Config

The Config stage is executed by config_exec .

Sometimes you might want to set some settings at execution time, for example setting the build directory depending on the project ID. GitLab Runner reads the STDOUT of config_exec and expects a valid JSON string with specific keys.

Any additional keys inside of the JSON string will be ignored. If it’s not a valid JSON string the stage will fail and be retried two more times.

The recognized JSON keys are:

  • builds_dir (string): The base directory where the working directory of the job will be created.
  • cache_dir (string): The base directory where the local cache will be stored.
  • builds_dir_is_shared (bool): Defines whether the environment is shared between concurrent jobs or not.
  • hostname (string): The hostname to associate with the job’s “metadata” stored by the Runner. If undefined, the hostname is not set.
  • driver.name (string): The user-defined name for the driver. Printed with the Using custom executor... line. If undefined, no information about the driver is printed.
  • driver.version (string): The user-defined version for the driver. Printed with the Using custom executor... line. If undefined, only the name information is printed.
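
For illustration, a minimal config_exec driver emitting some of these keys could be a shell script like the one below (a sketch; the directory layout and driver name are assumptions):

```shell
#!/bin/sh
# Hypothetical config_exec driver: emit job-specific settings as JSON on
# STDOUT. GitLab Runner parses this output; any extra keys are ignored.
# CUSTOM_ENV_CI_PROJECT_ID is provided by the Runner at job time; the
# default here only keeps the script runnable stand-alone.
PROJECT_ID="${CUSTOM_ENV_CI_PROJECT_ID:-0}"

CONFIG_JSON=$(cat <<EOF
{
  "builds_dir": "/builds/project-${PROJECT_ID}",
  "cache_dir": "/cache/project-${PROJECT_ID}",
  "builds_dir_is_shared": false,
  "driver": { "name": "example-driver", "version": "v0.0.1" }
}
EOF
)

echo "$CONFIG_JSON"
```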

The STDERR of the executable will print to the job log.

The user can set config_exec_timeout if they want to set a deadline for how long GitLab Runner should wait to return the JSON string before terminating the process.

If any of the config_exec_args are defined, these will be added in order to the executable defined in config_exec . For example we have the config.toml content below:
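
The config.toml fragment this example refers to would look something like (reconstructed to match the resulting command line; the path is a placeholder):

```toml
[runners.custom]
  config_exec = "/path/to/config"
  config_exec_args = [ "Arg1", "Arg2" ]
```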

GitLab Runner would execute it as /path/to/config Arg1 Arg2 .

Prepare

The Prepare stage is executed by prepare_exec .

At this point, GitLab Runner knows everything about the job (where and how it’s going to run). The only thing left is for the environment to be set up so the job can run. GitLab Runner will execute the executable that is specified in prepare_exec .

This is responsible for setting up the environment (for example, creating the virtual machine or container, or anything else). After this is done, we expect that the environment is ready to run the job.

This stage is executed only once, in a job execution.

The user can set prepare_exec_timeout if they want to set a deadline for how long GitLab Runner should wait to prepare the environment before terminating the process.

The STDOUT and STDERR returned from this executable will print to the job log.

If any of the prepare_exec_args are defined, these will be added in order to the executable defined in prepare_exec . For example we have the config.toml content below:
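
The config.toml fragment this example refers to would look something like (reconstructed to match the resulting command line; the path is a placeholder):

```toml
[runners.custom]
  prepare_exec = "/path/to/bin"
  prepare_exec_args = [ "Arg1", "Arg2" ]
```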

GitLab Runner would execute it as /path/to/bin Arg1 Arg2 .
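
As a concrete sketch, a prepare_exec driver might create a per-job workspace (hypothetical; real drivers usually start a VM or container at this point instead):

```shell
#!/bin/sh
# Hypothetical prepare_exec driver: set up the environment the job will run
# in. Here that is just a per-job directory; CUSTOM_ENV_CI_JOB_ID comes from
# GitLab Runner, with a stand-alone default for local experimentation.
set -e
JOB_DIR="/tmp/custom-executor/job-${CUSTOM_ENV_CI_JOB_ID:-0}"
mkdir -p "${JOB_DIR}"
echo "Prepared workspace at ${JOB_DIR}"
```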

Run

The Run stage is executed by run_exec .

The STDOUT and STDERR returned from this executable will print to the job log.

Unlike the other stages, the run_exec stage is executed multiple times, since it is split into the sub-stages listed below in sequential order:

  1. prepare_script
  2. get_sources
  3. restore_cache
  4. download_artifacts
  5. step_*
  6. build_script
  7. step_*
  8. after_script
  9. archive_cache
  10. upload_artifacts_on_success OR upload_artifacts_on_failure

For each stage mentioned above, the run_exec executable will be executed with:

  • The usual environment variables.
  • Two arguments:
    • The path to the script that GitLab Runner creates for the Custom executor to run.
    • Name of the stage.
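
Put together, a minimal run_exec driver could look like this (a sketch that simply runs the generated script on the local host; a real driver would typically copy the script into the prepared environment and execute it there):

```shell
#!/bin/sh
# Hypothetical run_exec driver. GitLab Runner calls it once per sub-stage
# with two trailing arguments: the generated script path and the stage name.
run_stage() {
  script="$1"  # path to the script GitLab Runner generated for this stage
  stage="$2"   # sub-stage name, e.g. get_sources or build_script
  echo "Running stage: ${stage}" >&2
  sh "${script}"  # the script's exit status is propagated to the Runner
}

# When invoked by GitLab Runner, the two arguments are passed through;
# the defaults only keep the script runnable stand-alone.
run_stage "${1:-/dev/null}" "${2:-prepare_script}"
```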

If you have run_args defined, they are the first set of arguments passed to the run_exec executable, then GitLab Runner adds others. For example, suppose we have the following config.toml :
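
A matching config.toml fragment (reconstructed; the path is a placeholder) might be:

```toml
[runners.custom]
  run_exec = "/path/to/run"
  run_args = [ "Arg1", "Arg2" ]
```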

GitLab Runner will execute the executable with any run_args first, followed by two additional arguments: the path to the generated job script and the name of the stage.

This executable should be responsible for executing the scripts that are specified in the first argument. They contain all the scripts any GitLab Runner executor would run normally to clone, download artifacts, run user scripts and all the other steps described below. The scripts can be of the following shells:

  • Bash
  • PowerShell Desktop
  • PowerShell Core
  • Batch (deprecated)

We generate the script using the shell configured by shell inside of [[runners]]. If none is provided, the defaults for the OS platform are used.
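Putting this together, a minimal run_exec driver on a UNIX system might simply execute the generated script with Bash. This is only a sketch, assuming Bash and local execution; a real driver would run the script inside the prepared environment (VM, container, and so on):

```shell
#!/usr/bin/env bash
# Minimal sketch of a run_exec driver (assumes Bash and local execution).
# GitLab Runner calls the driver with two arguments:
#   $1 = path to the script the Runner generated for this sub-stage
#   $2 = name of the sub-stage (e.g. get_sources, build_script)
run_stage() {
  local script="$1"
  local stage="$2"
  echo "Running stage: ${stage}" >&2
  if ! bash "$script"; then
    # Propagate user-script failure so GitLab marks the job as failed.
    exit "${BUILD_FAILURE_EXIT_CODE:-1}"
  fi
}

# A real driver would end with: run_stage "$1" "$2"
```

The stage name in `$2` lets the driver treat sub-stages differently, for example skipping cache steps in an environment that has no cache volume.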

Each script, and its main goal, is described below.

  • prepare_script: Prints simple debug information about which machine the job is running on.
  • get_sources: Prepares the Git configuration and clones/fetches the repository. We suggest you keep this as-is, since you get all of the benefits of the Git strategies that GitLab provides.
  • restore_cache: Extracts the cache, if any is defined. This expects the gitlab-runner binary to be available in $PATH.
  • download_artifacts: Downloads artifacts, if any are defined. This expects the gitlab-runner binary to be available in $PATH.
  • step_*: Generated by GitLab. A set of scripts to execute. It may never be sent to the custom executor. It may have multiple steps, like step_release and step_accessibility. This can be a feature from the .gitlab-ci.yml file.
  • build_script: A combination of before_script and script. In GitLab Runner 14.0 and later, build_script will be replaced with step_script. For more information, see this issue.
  • after_script: The after_script defined for the job. This is always called, even if any of the previous steps failed.
  • archive_cache: Creates an archive of all the cache, if any is defined.
  • upload_artifacts_on_success: Uploads any artifacts that are defined. Only executed when build_script was successful.
  • upload_artifacts_on_failure: Uploads any artifacts that are defined. Only executed when build_script fails.

Cleanup

The Cleanup stage is executed by cleanup_exec .

This final stage is executed even if one of the previous stages failed. The main goal for this stage is to clean up any of the environments that might have been set up. For example, turning off VMs or deleting containers.

The result of cleanup_exec does not affect job statuses. For example, a job will be marked as successful even if the following occurs:

  • Both prepare_exec and run_exec are successful.
  • cleanup_exec fails.

The user can set cleanup_exec_timeout if they want to set a deadline for how long GitLab Runner should wait to clean up the environment before terminating the process.

The STDOUT of this executable will be printed to GitLab Runner logs at a DEBUG level. The STDERR will be printed to the logs at a WARN level.

If any of the cleanup_exec_args are defined, these are appended in order to the executable defined in cleanup_exec. For example, suppose we have the config.toml content below:
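A minimal config.toml fragment of that shape (placeholder path) would be:

```toml
[runners.custom]
  cleanup_exec = "/path/to/bin"
  cleanup_exec_args = [ "Arg1", "Arg2" ]
```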

GitLab Runner would execute it as /path/to/bin Arg1 Arg2 .

Terminating and killing executables

GitLab Runner will try to gracefully terminate an executable under any of the following conditions:

  • config_exec_timeout , prepare_exec_timeout or cleanup_exec_timeout are met.
  • The job times out.
  • The job is cancelled.

When a timeout is reached, a SIGTERM is sent to the executable, and the countdown for exec_terminate_timeout starts. The executable should listen for this signal and clean up any resources. If exec_terminate_timeout passes and the process is still running, a SIGKILL is sent to kill the process, and exec_force_kill_timeout starts. If the process is still running after exec_force_kill_timeout has finished, GitLab Runner abandons the process and does not try to stop or kill it anymore. If both of these timeouts are reached during config_exec, prepare_exec, or run_exec, the build is marked as failed.
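In a Bash-based driver, graceful termination can be handled with a trap on SIGTERM. This is only a sketch; the teardown command is a hypothetical placeholder:

```shell
#!/usr/bin/env bash
# Sketch: graceful termination in a custom executor driver (Bash).
# GitLab Runner sends SIGTERM first; the driver should release resources
# before exec_terminate_timeout elapses and a SIGKILL follows.
terminated=0
on_term() {
  terminated=1
  echo "SIGTERM received: tearing down the environment" >&2
  # Hypothetical teardown, e.g.: docker rm -f "$CONTAINER_ID"
}
trap on_term TERM
```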

As of GitLab 13.1, any child process spawned by the driver also receives the graceful termination process explained above on UNIX-based systems. This is achieved by making the main process a process group leader to which all the child processes belong.

Error handling

There are two types of errors that GitLab Runner can handle differently. These errors are only handled when the executable inside of config_exec, prepare_exec, run_exec, and cleanup_exec exits with these codes. If the user script exits with a non-zero exit code, it should be propagated as one of the error codes below.

If the user script exits with one of these codes, it has to be propagated to the executable's exit code.

Build Failure

GitLab Runner provides the BUILD_FAILURE_EXIT_CODE environment variable, which should be used by the executable as an exit code to inform GitLab Runner that there is a failure in the user's job. If the executable exits with the code from BUILD_FAILURE_EXIT_CODE, the build is marked as failed appropriately in GitLab CI.

If the script that the user defines in the .gitlab-ci.yml file exits with a non-zero code, run_exec should exit with the BUILD_FAILURE_EXIT_CODE value.

System Failure

You can send a system failure to GitLab Runner by exiting the process with the error code specified in SYSTEM_FAILURE_EXIT_CODE. If this error code is returned, GitLab Runner retries certain stages; if none of the retries are successful, the job is marked as failed.
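For example, a prepare_exec driver might report a system failure when provisioning the environment fails, so that the Runner retries the stage. In this sketch, boot_vm is a hypothetical stub standing in for real provisioning:

```shell
#!/usr/bin/env bash
# Sketch: signalling a system failure from a driver (Bash).
# boot_vm is a hypothetical stub; a real driver would start a VM or container.
boot_vm() { return 1; }   # always fails here, for illustration only

provision() {
  if ! boot_vm; then
    echo "environment provisioning failed" >&2
    # SYSTEM_FAILURE_EXIT_CODE is provided by GitLab Runner;
    # a default is used here so the sketch also runs locally.
    return "${SYSTEM_FAILURE_EXIT_CODE:-2}"
  fi
}
```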

Below is a list of which stages are retried, and how many times.

  • prepare_exec: 3 attempts, 3 seconds between retries
  • get_sources: number of attempts from the GET_SOURCES_ATTEMPTS variable (default 1), no wait between retries
  • restore_cache: number of attempts from the RESTORE_CACHE_ATTEMPTS variable (default 1), no wait between retries
  • download_artifacts: number of attempts from the ARTIFACT_DOWNLOAD_ATTEMPTS variable (default 1), no wait between retries

Driver examples

A set of example drivers using the Custom executor can be found in the examples page.


The Shell executor is a simple executor that you use to execute builds locally on the machine where GitLab Runner is installed. It supports all systems on which the Runner can be installed, which means it is possible to use scripts generated for Bash, PowerShell Core, Windows PowerShell, and Windows Batch (deprecated). As mentioned before, Shell is the simplest executor to configure: all dependencies required for your builds need to be installed manually on the same machine that GitLab Runner is installed on.

The following error can occur because the pwsh entry for the shell attribute in the gitlab-runner config.toml does not work on some Windows 10 machines: "There has been a runner system failure, please try again: ERROR: Job failed (system failure): prepare environment: failed to start process: exec: "pwsh": executable file not found in %PATH%". You may also want to see "how to install, register and start GitLab Runner on Windows".

PowerShell Desktop Edition is the default shell when a new runner is registered on Windows using GitLab Runner 12.0-13.12. In 14.0 and later, the default is PowerShell Core Edition. PowerShell doesn't support executing the build in the context of another user.


GitLab Runner supports certain shells. To select a shell, specify it in your config.toml file. In my case, I had to edit the config.toml as follows.


Below is the syntax of the config.toml file that is currently being edited to resolve this issue.

concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "windows10 runner with Shell"
  url = "https://gitlab.com/"
  token = "xxxxxxxxxxxxxxxxxx"
  executor = "shell"
  shell = "pwsh"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]

To resolve the error, I had to replace shell = "pwsh" with shell = "powershell". You may also see this official guide on how this is defined.
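The relevant portion of the corrected config.toml (token redacted) then reads:

```toml
[[runners]]
  name = "windows10 runner with Shell"
  url = "https://gitlab.com/"
  token = "xxxxxxxxxxxxxxxxxx"
  executor = "shell"
  shell = "powershell"
```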


When this is done, GitLab Runner should reload automatically. But to be safe, restart it manually to ensure the change takes effect.

gitlab-runner.exe restart

Alternatively, you can restart it via services.msc.


Note: Generally it’s unsafe to run tests with shell executors. The jobs are run with the user’s permissions (gitlab-runner) and can “steal” code from other projects that are run on this server. Use it only for running builds on a server you trust and own. Kindly refer to this guide for more information.

I hope you found this blog post helpful. If you have any questions, please let me know in the comment section.
