Error: Job failed: command terminated with exit code 1

I followed "Connecting GitLab with a Kubernetes cluster" and "GitLab Runner", and I am now trying to follow the GitLab CI/CD Pipeline Configuration Reference, yet I am running into the following error:

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is
the docker daemon running?

Job log:

Running with gitlab-runner 11.10.1 (1f513601)
  on runner-gitlab-runner-5b8d5bf7db-5phqs 3gRXuKPT
Using Kubernetes namespace: gitlab-managed-apps
Using Kubernetes executor with image docker:latest ...
Waiting for pod gitlab-managed-apps/runner-3grxukpt-project-18-concurrent-1m7ttl to be running, status is Pending
Running on runner-3grxukpt-project-18-concurrent-1m7ttl via runner-gitlab-runner-5b8d5bf7db-5phqs...
Initialized empty Git repository in /builds/X/test/.git/
Fetching changes...
Created fresh repository.
From https://gitlab.X.com/X/test
 * [new branch]      master     -> origin/master
Checking out 72b6895d as master...

Skipping Git submodules setup
$ docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

$ docker build --pull -t "$CI_REGISTRY_IMAGE" .
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
ERROR: Job failed: command terminated with exit code 1

.gitlab-ci.yml:

# This file is a template, and might need editing before it works on your project.
# Official docker image.
image: docker:latest

services:
  - docker:dind

before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY

build-master:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master

build:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
  except:
    - master

Please advise.

asked Jun 8, 2019 at 19:40 by alexus

In my case I had to add the following services and variables to .gitlab-ci.yml:

services:
  - docker:dind

variables:
  DOCKER_HOST: tcp://localhost:2375/
  DOCKER_DRIVER: overlay2
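
For completeness, here is how those additions slot into the original file. This is a sketch based only on the question and the answer above; nothing else is new:

image: docker:latest

services:
  - docker:dind

variables:
  # With the runner's Kubernetes executor, the dind service is reachable on localhost.
  DOCKER_HOST: tcp://localhost:2375/
  DOCKER_DRIVER: overlay2

before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY

build-master:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master

Without DOCKER_HOST, the docker CLI inside the job looks for /var/run/docker.sock, which is exactly the error shown in the log above.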

answered Jun 11, 2019 at 19:58 by alexus

Contents

  1. Upload artifact fails but jobs succeeds
  2. Summary
  3. Steps to reproduce
  4. Actual behavior
  5. Expected behavior
  6. Relevant logs and/or screenshots
  7. Environment description
  8. Used GitLab Runner version
  9. poetry install fails when installing dependencies, from git repositories, in gitlab CI. #2475
  10. Comments
  11. Issue
  12. 0 of 1 updated replicas available — CrashLoopBackOff [AutoDevOps + GKE @ stage:production]
  13. Summary
  14. Steps to reproduce
  15. Example Project
  16. What is the current bug behaviour?
  17. What is the expected correct behaviour?
  18. Relevant logs and/or screenshots
  19. Possible fixes
  20. Using affected:build in CI yields spawnSync error #1886
  21. Comments
  22. Expected Behavior
  23. Current Behavior
  24. Failure Information (for bugs)
  25. Steps to Reproduce
  26. Context
  27. Error When Building Image with Kaniko #1624
  28. Comments

Upload artifact fails but jobs succeeds

Summary

Occasionally, after one of our build jobs is finished, it will start uploading the artifacts, and silently fail: the job succeeds but the artifacts are not uploaded and no errors are reported.

  • Jobs that depend on these artifacts then fail, which requires re-running the whole job (as opposed to just retrying the upload)

Steps to reproduce

  • I have managed to reproduce the behavior (the helper crashing but job succeeding)
    • More details in this project https://gitlab.com/jpsamper/runner-helper-reproducer
    • In production, we usually see it when there are a lot of jobs running/uploading artifacts at the same time
      • We’ve seen it with as few as 10 jobs uploading 1.5GB zipped (4-5GB unzipped) concurrently
  • By using a custom gitlab-runner-helper with a lot more print statements, we have found that the logs stop after invoking r.client.Do (i.e. if we add a print statement right before and right after, the one right after never appears)
    • Naturally, this is the behavior only when something goes wrong; when the artifact is uploaded correctly, we see both print statements

Actual behavior

If I understand correctly, the function call linked above is invoking Do from the net/http package, and that call seems to be crashing the gitlab-runner-helper with no additional error message/return code/etc.

Expected behavior

  • If an artifact upload fails, the job fails or retries
  • An informative error message too, ideally

Relevant logs and/or screenshots

When everything works as expected:

And when it doesn’t:

Environment description

  • GitLab Runner with the Kubernetes executor (not the Docker executor)
  • Latest docker version
  • Default config.toml

Used GitLab Runner version

We’re currently on gitlab-runner 13.3.0 but have been seeing this since at least 12.9.0

Source

poetry install fails when installing dependencies, from git repositories, in gitlab CI. #2475

  • I am on the latest Poetry version.
  • I have searched the issues of this repo and believe that this is not a duplicate.
  • If an exception occurs when executing a command, I executed it again in debug mode ( -vvv option).
  • OS version and name: python3.8 docker image
  • Poetry version: 1.0.5 (installed via pip3 install poetry )
  • Link of a Gist with the contents of your pyproject.toml file: link

Issue

poetry install fails when installing dependencies, from git repositories, in gitlab CI.

It does work on my local machine. Dependency has been added via poetry add git+https://github.com/devopshq/artifactory.git@support-python-38-glob
Traceback from poetry install -vvv :

The text was updated successfully, but these errors were encountered:

It looks a bit like git cannot find the branch. Can you try to clone the repository and check out the branch on the target machine with git alone? Ideally, copy and paste the branch name to avoid overlooking any minor typo.

The target here is a docker container (based on the image we provide) provisioned by Gitlab during the build time.
It works fine when running the docker container locally.
@pawelrubin did you manage to fix it / work around it?

I believe this problem is related to something I’ve experienced.

It looks like poetry will clone the repository with its default name during poetry add, but will try to infer the local repository name from the package name when doing anything else, so in my case I had something like:

which cloned such repository under $HOME/.cache/pypoetry/virtualenvs/myproject-pbdKQ0bZ-py3.6/src/mycompany-sdk/

However, during a subsequent poetry update it failed with:

That makes total sense: there is no mycompany directory there; the repository is called mycompany-sdk, yet the package it contains is indeed named mycompany, which is why poetry thinks it should look for the package there.

There does not appear to be any rush to fix this, so I guess there is a workaround. I wonder what that might be?

Perhaps delete the virtualenv and start over?

I had a similar issue with a dependency from a private GitLab repository that used SSH for cloning (same CalledProcessError). When testing in a local Docker container, I noticed that I had to actively confirm the server fingerprint. As git has no option for doing that automatically, I solved poetry's installation of dependencies in the GitLab CI container by adding the fingerprint to ~/.ssh/known_hosts before running poetry install in my .gitlab-ci.yml. Here's how I did that:

  1. Manually clone my dependency's repository in a clean docker container
  2. In that docker container, get the two lines (not sure why there's more than one) from ~/.ssh/known_hosts
  3. In my .gitlab-ci.yml, add the following lines before calling poetry install:
     - echo "FIRST LINE COPIED IN STEP 2" >> ~/.ssh/known_hosts
     - echo "SECOND LINE COPIED IN STEP 2" >> ~/.ssh/known_hosts

    I’m aware that the problem in the original comment might be something else (given that you, @pawelrubin, were trying to clone via HTTPS from a public repository), but maybe this will help other people ending up here when researching that error message.
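
    For readers wanting a concrete starting point, a before_script along these lines implements the idea above. This is only a sketch: the host name is a placeholder, it assumes the image ships openssh-client, and it uses ssh-keyscan instead of hand-copying lines from a known-good known_hosts.

      before_script:
        - mkdir -p ~/.ssh && chmod 700 ~/.ssh
        # Record the git server's host key so git never stops to ask for confirmation.
        - ssh-keyscan gitlab.example.com >> ~/.ssh/known_hosts
        - pip install poetry
        - poetry install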

    I know this is old, but here is a cleaner alternative that I found for GitHub; I'm guessing GitLab is similar.

    I had a similar issue, and it turns out the problem was that the library I was referring to (sentinelsat) had no master branch, because they renamed their default branch to main. Since a lot of repositories have done that in the last few years, that might be the problem.

    Adding the branch requirement in pyproject.toml worked for me:

    This problem already has an issue at #3366.

    Experiencing this issue w/ a public GitHub repo.

    Here is my Poetry config:

    And here at the GitLab CI Logs:

    Anyone aware of a workaround?

    In case it helps anyone: I had the same "returned non-zero exit status 128" issue. I investigated it by just running this:

    It turned out that /usr/local/src did not exist / wasn’t writable, so totally a problem on my end. It would really be nice if Poetry could log the stderr of Git to make debugging situations like this easier.

    For those wondering: the path is coming from the pip_installer, which uses sys.prefix / "src"

    It turned out that /usr/local/src did not exist / wasn’t writable, so totally a problem on my end.

    @mrcljx Do you think you could explain what you changed? I tried the following:

    but it’s still giving me the same error:

    This dependency is written in pyproject.toml as follows:

    The container is created without an issue if I run everything as root. Are you running Poetry as root?

    Experiencing exactly the same issue with 2 out of 4 github repositories. What the two that are failing have in common is that they both have submodules. Turns out I’d made a change to the way the submodules were being referenced.

    I was able to identify the issue by manually adding git clone --recurse-submodules -- https://git@github.com/xxx/xxx.git /usr/local/src/xxx/xxx into the pipeline to see what the real error was. Turns out the submodules were using the url https://github.com, whereas their parents were accessed through https://git@github.com. I'd applied the following to ensure the PAT was being set for git@github.com:

    This was not applying to the submodule and I got an authentication error when it tried to clone the submodule. I solved this by just updating the url to the submodules in .gitmodules.

    This may not be the same issue as exit status code 1 is rather uninformative, but adding the command manually to your pipeline should reveal the true cause.
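
    The exact configuration the commenter applied is not shown above. One common way to force the authenticated URL for everything, including submodules, is a git insteadOf rewrite; this is a sketch and the token variable name is an assumption:

      # Rewrite plain github.com URLs to the authenticated form so that
      # submodule clones pick up the PAT as well ($GITHUB_PAT is an assumed CI variable).
      git config --global url."https://git:${GITHUB_PAT}@github.com/".insteadOf "https://github.com/"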

    @phenry2 Thank you for the timely comments. I had a similar issue and was able to solve it.

    Source

    0 of 1 updated replicas available — CrashLoopBackOff [AutoDevOps + GKE @ stage:production]

    Summary

    Pipeline job for production stage fails with:

    GKE reports deployment error: 0 of 1 updated replicas available — CrashLoopBackOff

    Steps to reproduce

    1. Connect to an existing GKE cluster which was created:
    • configure API endpoint, CA, token and wildcard CI domain
    • install helm tiller, ingress (update your DNS with the ingress IP), cert-manager, gitlab-runner
    • check your updated DNS for *.ci.example.com: dig @1.1.1.1 xxx.ci.example.com should point to your ingress IP
    2. Enable Auto DevOps
      • use build, test and production stages
      • in the test stage change /bin/herokuish buildpack test into /bin/herokuish version, otherwise it will fail with:
    3. Commit some code and merge the dev branch to master
    4. Wait for the production stage
    5. The deploy function should fail with:

    Example Project

    What is the current bug behaviour?

    1. Pipeline job for production stage fails
    2. POD restarts

    What is the expected correct behaviour?

    Pipeline job for production stage succeeds

    Relevant logs and/or screenshots

    Deployment status:

    CrashLoopBackOff: Container ‘auto-deploy-app’ keeps crashing.

    Deployment status details:

    StackDriver Logs:


    HTTPS Ingress works, no backend though:
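
    (The screenshots and StackDriver logs referenced above are not preserved in this copy.) When debugging a CrashLoopBackOff like this one, a useful first step is to read the pod's own output; the namespace and pod names below are placeholders:

      # Find the pod created by the Auto DevOps production deployment.
      kubectl -n <project-namespace> get pods
      # See why it keeps restarting and what it printed before the last crash.
      kubectl -n <project-namespace> describe pod <auto-deploy-app-pod>
      kubectl -n <project-namespace> logs <auto-deploy-app-pod> --previous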

    Possible fixes

    1. Meanwhile I will try to reproduce it with the "Create new GKE cluster" scenario. Update: the same behaviour.
    2. I will try to reproduce it in different regions if 1. is unsuccessful

    Source

    Using affected:build in CI yields spawnSync error #1886

    Expected Behavior

    When running affected:build in CI, Nx should build the affected apps.

    Current Behavior

    The affected:build command fails before running any tests.

    Failure Information (for bugs)

    Steps to Reproduce

    We are using FROM node:12 as our base docker image in CI.

    Context

    Please provide any relevant information about your setup:

    • version of Nx used: 8.5.1

    The text was updated successfully, but these errors were encountered:

    This repository has a large commit history. I tried the max_old_space_size trick for node, with no change:

    @rpd10 I believe the issue is that some commands exceed the buffer size, which is 200kb.

    Could you run the following three commands to measure the size of the output?
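
    The three commands are not preserved in this copy. Based on the follow-up comment they are the git file-listing commands Nx runs internally; their output size can be gauged roughly like this (the exact command set is an assumption):

      # Size, in bytes, of each listing Nx has to read through a buffer.
      git ls-files --others --exclude-standard | wc -c   # untracked files (the culprit below)
      git ls-files -m | wc -c                            # modified files
      git diff --name-only HEAD | wc -c                  # files changed relative to HEAD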

    Indeed, git ls-files --others --exclude-standard has a huge output, so much so that GitLab bails out: "Job's log exceeded limit of 4194304 bytes".

    We have CI set up to put the .yarn cache into the working directory (I believe for Cypress). I think newer versions of Cypress have better support for caching; I will look into using the default cache location for Cypress and yarn, and report back.

    Moving the yarn cache outside the working directory does fix this problem, affected:build, etc. works now.

    I’m not sure whether there is anything to fix here from the Nx side (maybe a better error message?). I can close this issue unless you would like it to remain open.

    One thing we can do is to bump up the size of the buffer to something like 10mb, and a better error message. Would you be interested in contributing the change? I can help.

    I’ve pushed up a PR for this.

    Today I ran into the same problem using nx affected in my Jenkins pipeline: Error: spawnSync /bin/sh ENOBUFS.

    After some investigation, I saw that git ls-files --others --exclude-standard produced an output with a size of 11 MB because of Yarn's .cache directory inside the workspace. Well, that's definitely above the buffer size from rpd10's PR.

    After some thought, I found a very simple solution: git ls-files --others --exclude-standard ignores everything that is in your .gitignore file. So, to get rid of this error, it's sufficient to put .cache inside your .gitignore file.

    @vsavkin: Would perhaps be worth a hint in the documentation. 😉

    Hope that helps.

    Best wishes,
    Michael

    Had the same issue today in my gitlab ci pipeline. Turned out my .gitignore file was ignored by my .dockerignore file, which made git ls-files --others --exclude-standard return the whole contents of node_modules. Removing .gitignore from the .dockerignore file solved it.

    Thanks for pointing me in the right direction @MichaelKaaden.

    Source

    Error When Building Image with Kaniko #1624

    Actual behavior
    I am getting an error when building image with Kaniko in GitLab CI.

    Our image build script is this:

    Our Dockerfile is this:

    Expected behavior
    It shouldn't give any errors in the dotnet restore phase, because the .csproj file should be there at the path src/our_app_name.csproj; I do not get any errors when building directly with docker build.
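
    The reporter's build script and Dockerfile are not reproduced above. For context, a typical Kaniko job in GitLab CI looks roughly like the sketch below (not the reporter's configuration); a frequent cause of a missing src/our_app_name.csproj during dotnet restore is a --context or COPY path that does not match the repository layout.

      build:
        stage: build
        image:
          name: gcr.io/kaniko-project/executor:debug
          entrypoint: [""]
        script:
          # Build from the repository root so paths like src/our_app_name.csproj resolve.
          - /kaniko/executor
            --context "$CI_PROJECT_DIR"
            --dockerfile "$CI_PROJECT_DIR/Dockerfile"
            --destination "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"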

    To Reproduce
    Steps to reproduce the behavior:

    1. Use given dockerfile and a dotnet project with the same name.
    2. Use given GitLab CI script.

    Additional Information

    • Dockerfile — Provided.
    • Build Context — GitLab CI.
    • Kaniko Image (fully qualified with digest) — gcr.io/kaniko-project/executor:debug with digest sha256:e00dfdd4a44097867c8ef671e5a7f3e31d94bd09406dbdfba8a13a63fc6b8060 .

    Triage Notes for the Maintainers

    Description Yes/No
    Please check if this is a new feature you are proposing
    Please check if the build works in docker but not in kaniko
    Please check if this error is seen when you use the --cache flag
    Please check if your dockerfile is a multistage dockerfile

    The text was updated successfully, but these errors were encountered:

    Source

    Issue created Feb 20, 2018 by George Paoli (@george.linearsistemas)

    ERROR: Job failed: exit code 1

    Summary

    The script executed successfully, but the job fails.

    Steps to reproduce

    Here is my attempt; you can check the job and see the error message: "ERROR: Job failed: exit code 1"

    Actual behavior

    An error message appears: "ERROR: Job failed: exit code 1"

    Expected behavior

    Show the message "code coverage done!" from the last line of the script configured in .gitlab-ci.yml

    Relevant logs and/or screenshots


    .gitlab-ci.yml source:

    image: microsoft/dotnet
    
    stages:
      - build_and_test
    
    build-modulo-materiais:
      stage: build_and_test
      script:
        - cd v1/materiais
        - dotnet build src/api/api.csproj
        - dotnet build test/test.csproj
        - dotnet restore tools/tools.csproj
        - cd tools
        - dotnet minicover instrument --workdir ../ --assemblies test/**/bin/**/*.dll --sources src/**/*.cs
        - dotnet minicover reset
        - dotnet test ../test/test.csproj --no-build
        - dotnet minicover uninstrument --workdir ../
        - dotnet minicover report --workdir ../ --threshold 90 ## JOB FAIL HERE, BUT SCRIPT EXECUTED WITH SUCCESS
        - echo "code coverage done!"

    Environment description

    Are you using shared Runners on GitLab.com? Yes
    Please see the complete job at https://gitlab.com/linear-back/pocs/estrutura-projetos-testes-ci-cd/-/jobs/53438671

    Used GitLab Runner version

    Running with gitlab-runner 10.5.0-rc1 (7a8e43fe)
      on docker-auto-scale (e11ae361)
    Using Docker executor with image microsoft/dotnet ...

    Edited Feb 20, 2018 by George Paoli

    Running with gitlab-runner 11.7.0~beta.3.g41feb4a1 (41feb4a1)
      on Kubernetes Runner dHhx65ta
    Using Kubernetes namespace: gitlab-ci
    Using Kubernetes executor with image docker:stable ...
    
    $ pip3 install docker-compose
    Collecting docker-compose
      Downloading https://files.pythonhosted.org/packages/51/56/5745e66b33846e92a8814466c163f165a26fadad8b33afe381e8b6c3f652/docker_compose-1.24.0-py2.py3-none-any.whl (134kB)
    Collecting cached-property<2,>=1.2.0 (from docker-compose)
      Downloading https://files.pythonhosted.org/packages/3b/86/85c1be2e8db9e13ef9a350aecd6dea292bd612fa288c2f40d035bb750ded/cached_property-1.5.1-py2.py3-none-any.whl
    Collecting docopt<0.7,>=0.6.1 (from docker-compose)
      Downloading https://files.pythonhosted.org/packages/a2/55/8f8cab2afd404cf578136ef2cc5dfb50baa1761b68c9da1fb1e4eed343c9/docopt-0.6.2.tar.gz
    Collecting texttable<0.10,>=0.9.0 (from docker-compose)
      Downloading https://files.pythonhosted.org/packages/02/e1/2565e6b842de7945af0555167d33acfc8a615584ef7abd30d1eae00a4d80/texttable-0.9.1.tar.gz
    Collecting dockerpty<0.5,>=0.4.1 (from docker-compose)
      Downloading https://files.pythonhosted.org/packages/8d/ee/e9ecce4c32204a6738e0a5d5883d3413794d7498fe8b06f44becc028d3ba/dockerpty-0.4.1.tar.gz
    Collecting jsonschema<3,>=2.5.1 (from docker-compose)
      Downloading https://files.pythonhosted.org/packages/77/de/47e35a97b2b05c2fadbec67d44cfcdcd09b8086951b331d82de90d2912da/jsonschema-2.6.0-py2.py3-none-any.whl
    Collecting PyYAML<4.3,>=3.10 (from docker-compose)
      Downloading https://files.pythonhosted.org/packages/9e/a3/1d13970c3f36777c583f136c136f804d70f500168edc1edea6daa7200769/PyYAML-3.13.tar.gz (270kB)
    Collecting requests!=2.11.0,!=2.12.2,!=2.18.0,<2.21,>=2.6.1 (from docker-compose)
      Downloading https://files.pythonhosted.org/packages/ff/17/5cbb026005115301a8fb2f9b0e3e8d32313142fe8b617070e7baad20554f/requests-2.20.1-py2.py3-none-any.whl (57kB)
    Collecting docker[ssh]<4.0,>=3.7.0 (from docker-compose)
      Downloading https://files.pythonhosted.org/packages/48/68/c3afca1a5aa8d2997ec3b8ee822a4d752cf85907b321f07ea86888545152/docker-3.7.2-py2.py3-none-any.whl (134kB)
    Collecting websocket-client<1.0,>=0.32.0 (from docker-compose)
      Downloading https://files.pythonhosted.org/packages/29/19/44753eab1fdb50770ac69605527e8859468f3c0fd7dc5a76dd9c4dbd7906/websocket_client-0.56.0-py2.py3-none-any.whl (200kB)
    Collecting six<2,>=1.3.0 (from docker-compose)
      Downloading https://files.pythonhosted.org/packages/73/fb/00a976f728d0d1fecfe898238ce23f502a721c0ac0ecfedb80e0d88c64e9/six-1.12.0-py2.py3-none-any.whl
    Collecting chardet<3.1.0,>=3.0.2 (from requests!=2.11.0,!=2.12.2,!=2.18.0,<2.21,>=2.6.1->docker-compose)
      Downloading https://files.pythonhosted.org/packages/bc/a9/01ffebfb562e4274b6487b4bb1ddec7ca55ec7510b22e4c51f14098443b8/chardet-3.0.4-py2.py3-none-any.whl (133kB)
    Collecting certifi>=2017.4.17 (from requests!=2.11.0,!=2.12.2,!=2.18.0,<2.21,>=2.6.1->docker-compose)
      Downloading https://files.pythonhosted.org/packages/60/75/f692a584e85b7eaba0e03827b3d51f45f571c2e793dd731e598828d380aa/certifi-2019.3.9-py2.py3-none-any.whl (158kB)
    Collecting urllib3<1.25,>=1.21.1 (from requests!=2.11.0,!=2.12.2,!=2.18.0,<2.21,>=2.6.1->docker-compose)
      Downloading https://files.pythonhosted.org/packages/62/00/ee1d7de624db8ba7090d1226aebefab96a2c71cd5cfa7629d6ad3f61b79e/urllib3-1.24.1-py2.py3-none-any.whl (118kB)
    Collecting idna<2.8,>=2.5 (from requests!=2.11.0,!=2.12.2,!=2.18.0,<2.21,>=2.6.1->docker-compose)
      Downloading https://files.pythonhosted.org/packages/4b/2a/0276479a4b3caeb8a8c1af2f8e4355746a97fab05a372e4a2c6a6b876165/idna-2.7-py2.py3-none-any.whl (58kB)
    Collecting docker-pycreds>=0.4.0 (from docker[ssh]<4.0,>=3.7.0->docker-compose)
      Downloading https://files.pythonhosted.org/packages/f5/e8/f6bd1eee09314e7e6dee49cbe2c5e22314ccdb38db16c9fc72d2fa80d054/docker_pycreds-0.4.0-py2.py3-none-any.whl
    Collecting paramiko>=2.4.2; extra == "ssh" (from docker[ssh]<4.0,>=3.7.0->docker-compose)
      Downloading https://files.pythonhosted.org/packages/cf/ae/94e70d49044ccc234bfdba20114fa947d7ba6eb68a2e452d89b920e62227/paramiko-2.4.2-py2.py3-none-any.whl (193kB)
    Collecting pynacl>=1.0.1 (from paramiko>=2.4.2; extra == "ssh"->docker[ssh]<4.0,>=3.7.0->docker-compose)
      Downloading https://files.pythonhosted.org/packages/61/ab/2ac6dea8489fa713e2b4c6c5b549cc962dd4a842b5998d9e80cf8440b7cd/PyNaCl-1.3.0.tar.gz (3.4MB)
      Installing build dependencies: started
      Installing build dependencies: finished with status 'error'
      Complete output from command /usr/bin/python3.6 -m pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-dx3r45lk --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools wheel "cffi>=1.4.1; python_implementation != 'PyPy'":
      Collecting setuptools
        Downloading https://files.pythonhosted.org/packages/d1/6a/4b2fcefd2ea0868810e92d519dacac1ddc64a2e53ba9e3422c3b62b378a6/setuptools-40.8.0-py2.py3-none-any.whl (575kB)
      Collecting wheel
        Downloading https://files.pythonhosted.org/packages/96/ba/a4702cbb6a3a485239fbe9525443446203f00771af9ac000fa3ef2788201/wheel-0.33.1-py2.py3-none-any.whl
      Collecting cffi>=1.4.1
        Downloading https://files.pythonhosted.org/packages/64/7c/27367b38e6cc3e1f49f193deb761fe75cda9f95da37b67b422e62281fcac/cffi-1.12.2.tar.gz (453kB)
      Collecting pycparser (from cffi>=1.4.1)
        Downloading https://files.pythonhosted.org/packages/68/9e/49196946aee219aead1290e00d1e7fdeab8567783e83e1b9ab5585e6206a/pycparser-2.19.tar.gz (158kB)
      Installing collected packages: setuptools, wheel, pycparser, cffi
        Running setup.py install for pycparser: started
          Running setup.py install for pycparser: finished with status 'done'
        Running setup.py install for cffi: started
          Running setup.py install for cffi: finished with status 'error'
          Complete output from command /usr/bin/python3.6 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-vhx9hxx_/cffi/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('rn', 'n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-hnobiin5/install-record.txt --single-version-externally-managed --prefix /tmp/pip-build-env-dx3r45lk --compile:
          unable to execute 'gcc': No such file or directory
          unable to execute 'gcc': No such file or directory
      
              No working compiler found, or bogus compiler options passed to
              the compiler from Python's standard "distutils" module.  See
              the error messages above.  Likely, the problem is not related
              to CFFI but generic to the setup.py of any Python package that
              tries to compile C code.  (Hints: on OS/X 10.8, for errors about
              -mno-fused-madd see http://stackoverflow.com/questions/22313407/
              Otherwise, see https://wiki.python.org/moin/CompLangPython or
              the IRC channel #python on irc.freenode.net.)
      
              Trying to continue anyway.  If you are trying to install CFFI from
              a build done in a different context, you can ignore this warning.
      
          running install
          running build
          running build_py
          creating build
          creating build/lib.linux-x86_64-3.6
          creating build/lib.linux-x86_64-3.6/cffi
          copying cffi/model.py -> build/lib.linux-x86_64-3.6/cffi
          copying cffi/vengine_gen.py -> build/lib.linux-x86_64-3.6/cffi
          copying cffi/pkgconfig.py -> build/lib.linux-x86_64-3.6/cffi
          copying cffi/setuptools_ext.py -> build/lib.linux-x86_64-3.6/cffi
          copying cffi/cparser.py -> build/lib.linux-x86_64-3.6/cffi
          copying cffi/ffiplatform.py -> build/lib.linux-x86_64-3.6/cffi
          copying cffi/cffi_opcode.py -> build/lib.linux-x86_64-3.6/cffi
          copying cffi/backend_ctypes.py -> build/lib.linux-x86_64-3.6/cffi
          copying cffi/api.py -> build/lib.linux-x86_64-3.6/cffi
          copying cffi/recompiler.py -> build/lib.linux-x86_64-3.6/cffi
          copying cffi/__init__.py -> build/lib.linux-x86_64-3.6/cffi
          copying cffi/lock.py -> build/lib.linux-x86_64-3.6/cffi
          copying cffi/error.py -> build/lib.linux-x86_64-3.6/cffi
          copying cffi/vengine_cpy.py -> build/lib.linux-x86_64-3.6/cffi
          copying cffi/commontypes.py -> build/lib.linux-x86_64-3.6/cffi
          copying cffi/verifier.py -> build/lib.linux-x86_64-3.6/cffi
          copying cffi/_cffi_include.h -> build/lib.linux-x86_64-3.6/cffi
          copying cffi/parse_c_type.h -> build/lib.linux-x86_64-3.6/cffi
          copying cffi/_embedding.h -> build/lib.linux-x86_64-3.6/cffi
          copying cffi/_cffi_errors.h -> build/lib.linux-x86_64-3.6/cffi
          running build_ext
          building '_cffi_backend' extension
          creating build/temp.linux-x86_64-3.6
          creating build/temp.linux-x86_64-3.6/c
          gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Os -fomit-frame-pointer -g -Os -fomit-frame-pointer -g -Os -fomit-frame-pointer -g -DTHREAD_STACK_SIZE=0x100000 -fPIC -I/usr/include/ffi -I/usr/include/libffi -I/usr/include/python3.6m -c c/_cffi_backend.c -o build/temp.linux-x86_64-3.6/c/_cffi_backend.o
          unable to execute 'gcc': No such file or directory
          error: command 'gcc' failed with exit status 1
      
          ----------------------------------------
      Command "/usr/bin/python3.6 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-vhx9hxx_/cffi/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('rn', 'n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-hnobiin5/install-record.txt --single-version-externally-managed --prefix /tmp/pip-build-env-dx3r45lk --compile" failed with error code 1 in /tmp/pip-install-vhx9hxx_/cffi/
      You are using pip version 18.1, however version 19.0.3 is available.
      You should consider upgrading via the 'pip install --upgrade pip' command.
      
      ----------------------------------------
    Command "/usr/bin/python3.6 -m pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-dx3r45lk --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools wheel "cffi>=1.4.1; python_implementation != 'PyPy'"" failed with error code 1 in None
    You are using pip version 18.1, however version 19.0.3 is available.
    You should consider upgrading via the 'pip install --upgrade pip' command.
    ERROR: Job failed: command terminated with exit code 1
    

    Also using a docker alpine image in gitlab-ci. Same issue; rolling docker-compose back to 1.23.2 fixed it for me.
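
    The failure in the log above comes from cffi/PyNaCl needing a C compiler that the Alpine-based docker:stable image does not ship (docker-compose 1.24.0 pulls them in via the docker[ssh]/paramiko dependency visible in the log). Besides pinning docker-compose to 1.23.2, another common workaround, sketched here rather than taken from the thread, is to install the build dependencies first:

      # docker:stable is Alpine-based; add a compiler and headers so the
      # native dependencies (cffi, PyNaCl) can build, then install as before.
      apk add --no-cache gcc musl-dev make python3-dev libffi-dev openssl-dev
      pip3 install docker-compose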

    How to Fix ‘Terminated With Exit Code 1’ Error | Signal 1 (SIGHUP)

    What is Exit Code 1

    Exit Code 1 indicates that a container shut down, either because of an application failure or because the image pointed to an invalid file. In a Unix/Linux operating system, when an application terminates with Exit Code 1, the operating system ends the process using Signal 1, known as SIGHUP.

    In Kubernetes, container exit codes can help you diagnose issues with pods. If a pod is unhealthy or frequently shuts down, you can diagnose the problem using the command kubectl describe pod [POD_NAME].
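
    For example, the exit code of a crashed container shows up in the pod description and can be pulled out directly (the pod name is hypothetical):

      # The exit code appears under the container's "Last State: Terminated" section.
      kubectl describe pod my-pod | grep -A3 "Last State"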

    If you see containers terminated with Exit Code 1, you’ll need to investigate the container and its applications more closely to see what caused the failure. We’ll provide several techniques for diagnosing and debugging Exit Code 1 in containers.

    Why Do Exit Code 1 Errors Occur

    Exit Code 1 means that a container terminated, typically due to an application error or an invalid reference.

    An application error is a programming error in any code running within the container. For example, if a Java library is running within the container, and the library throws a compiler error, the container might terminate with Exit Code 1.

    An invalid reference is a file reference in the image used to run the container, which points to a nonexistent file.

    What is Signal 1 (SIGHUP)?

    In Unix and Linux operating systems, signals help manage the process lifecycle. When a container terminates with Exit Code 1, the operating system terminates the container’s process with Signal 1.

    Signal 1 is also known as SIGHUP, a term that originates from POSIX-compliant terminals. In old terminals based on the RS-232 protocol, SIGHUP was a “hang up” signal indicating that the terminal had shut down.

    To send Signal 1 (SIGHUP) to a Linux process, use the following command:
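
    The command itself was not preserved in this copy of the article; the standard form uses kill with the HUP signal (the PID is just an example):

      kill -HUP 1234    # equivalently: kill -1 1234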

    Diagnosing Exit Code 1

    When a container exits, the container engine’s command-line interface (CLI) displays a line like this. The number in the brackets is the Exit Code.

    To list all containers that exited with an error code or didn’t start correctly:
    If you are using Docker, run docker ps -a and check the STATUS column.
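
    Docker can also filter on the exit code directly, which is handy when many containers have stopped (a sketch):

      # List all containers, including stopped ones, that exited with code 1.
      docker ps -a --filter "exited=1"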

    To diagnose why your container exited, look at the container engine logs:

    • Check if a file listed in the image specification was not found. If so, the container probably exited because of this invalid reference.
    • If there are no invalid references, check the logs for a clue that might indicate which library within the container caused the error, and debug the library.

    Checking Exit Codes in Kubernetes

    If you are running containers as part of a Kubernetes cluster, you can find the exit code by gathering information about a pod.

    Run the kubectl describe pod [POD_NAME] command.

    The result will look something like this:

    DIY Troubleshooting Techniques

    1. Delete And Recreate the Container

    It is a good idea to start your troubleshooting by recreating the container. This can clean out temporary files or other transient conditions that may be causing the error. Deleting and recreating will run the container with a fresh file system.

    To delete and recreate the container:

    • In Docker, use the docker stop command to stop the container, then use docker rm to completely remove the container. Rebuild the container using docker run
    • In Kubernetes, you can manually kill the pod that runs your container using kubectl delete pod [pod-name]. You can then wait for Kubernetes to automatically recreate the pod (depending on your setup), or manually restart it using kubectl run [pod-name] --image=[image-name]
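
    Collected in one place, the commands referenced above look like this; container, pod and image names are placeholders:

      # Docker: stop, remove and recreate the container with a fresh filesystem.
      docker stop <container>
      docker rm <container>
      docker run <image>

      # Kubernetes: delete the pod and let its controller recreate it,
      # or start it again manually.
      kubectl delete pod <pod-name>
      kubectl run <pod-name> --image=<image-name>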

    2. Bashing Into a Container To Troubleshoot Applications
    If your container does not use entrypoints, and you suspect Exit Code 1 is caused by an application problem, you can bash into the container and try to identify which application is causing it to exit.

    To bash into the container and troubleshoot applications:

    1. Bash into the container using a command like the one sketched after this list:
    2. You should now be running in a shell within the container. Run the application you suspect is causing the problem and see if it exits.
    3. If the application exits, check the application’s logs to see whether it exited due to an application error, and what that error was.
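
    The exec command referenced in step 1 is not shown above; for most images it is an interactive shell, for example (the shell path depends on the image):

      # Docker
      docker exec -it <container> /bin/bash      # use /bin/sh on minimal images

      # Kubernetes
      kubectl exec -it <pod-name> -- /bin/bash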

    Note: Another way to troubleshoot an application is to simply run the application, with the same command line, outside the container. For this to be effective, you need to have an environment similar to that inside the container on the local machine.

    3. Experimenting With Application Parameters

    Exit Code 1 is often caused by application errors that cause the application, and the entire container, to exit. If you determine that Exit Code 1 is caused by an application, you can experiment with various configuration options of the application to prevent it from exiting.

    Here is a partial list of application parameters you can try:

    • Allocate more memory to the application
    • Run the application without special switches or flags
    • Make sure that the port the application uses is exposed to the relevant network
    • Change the port used by the application
    • Change environment variables
    • Check for compatibility issues between the application and other libraries, or the underlying operating system

    4. Addressing the PID 1 Problem

    Some Exit 1 errors are caused by the PID 1 problem. In Linux, PID 1 is the “init process” that spawns other processes and sends signals.

    Inside a container, the first process started becomes PID 1. If the application itself runs as PID 1, it is responsible for forwarding signals and reaping child processes; many applications are not written to do this, so the container may not terminate correctly.

    To identify if you have a PID 1 problem

    1. Run docker ps -a or the corresponding command in your container engine. Identify which application was running on the failed container.
    2. Rerun the failed container. While it is running, in the system shell, use the command ps -aux to see currently running processes. The result will look something like this. You can identify your process by looking at the command at the end.
    3. Look at the PID and USER at the beginning of the failing process. If PID is 1, you have a PID 1 problem.

    Possible solutions for the PID 1 problem

    • If the container will not start, try forcing it to start using a lightweight init tool such as tini or dumb-init
    • If you are using docker-compose, add the init parameter to docker-compose.yml (see the sketch after this list)
    • If you are using Kubernetes, run the pod with a shared process namespace (shareProcessNamespace: true)
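
    For the docker-compose case in the list above, the change is a single line per service; service and image names here are placeholders, and Compose file format 2.2+ or 3.7+ is required:

      services:
        app:
          image: <your-image>
          # Run a tiny init process as PID 1 so signals are forwarded and zombies reaped.
          init: true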

    These four techniques are only some of the possible approaches to troubleshooting and solving the Exit Code 1 error. There are many possible causes of Exit Code 1 which are beyond our scope, and additional approaches to resolving the problem.

    Troubleshooting Kubernetes Exit Codes with Komodor

    As a Kubernetes administrator or user, pods or containers terminating unexpectedly can be a pain and can result in severe production issues.

    Exit Code 1 is a prime example of how difficult it can be to identify a specific root cause in Kubernetes because many different problems can cause the same error. The troubleshooting process in Kubernetes is complex and, without the right tools, can be stressful, ineffective, and time-consuming.

    Komodor is a Kubernetes troubleshooting platform that turns hours of guesswork into actionable answers in just a few clicks. Using Komodor, you can monitor, alert on, and troubleshoot exit code 1 events.

    For each K8s resource, Komodor automatically constructs a coherent view, including the relevant deploys, config changes, dependencies, metrics, and past incidents. Komodor seamlessly integrates and utilizes data from cloud providers, source controls, CI/CD pipelines, monitoring tools, and incident response platforms.

    • Discover the root cause automatically with a timeline that tracks all changes in your application and infrastructure.
    • Quickly tackle the issue, with easy-to-follow remediation instructions.
    • Give your entire team a way to troubleshoot independently without escalating.

    Source

    # #python #powershell #gitlab #cicd

    Question:

    My GitLab pipeline, which has been running for almost six months, is now failing unexpectedly.

    Every preceding line executes successfully, and then the following happens:

     Setting up curl (7.52.1-5 deb9u16) ...
    $ curl -s https://deb.nodesource.com/setup_12.x | bash
    Cleaning up project directory and file based variables 
    ERROR: Job failed: exit code 1
     

    I cannot for the life of me figure out what changed. I thought it might be related to this issue, but I am not seeing any network problems, timeouts, etc.

    Here is a slightly obfuscated version of my .gitlab-ci.yml. Obviously, I am using .gitlab-ci.yml to configure my pipelines, and I am using GitLab's shared runners.

     
    image: python:3.6-stretch
    
    variables:
        ACCESS_KEY_ID: **********
        SECRET_ACCESS_KEY: **********
    
    before_script:
      - apt-get update
      - apt-get install -y curl
      - curl -s https://deb.nodesource.com/setup_12.x | bash
      - apt-get install -y nodejs
      - apt-get install -y npm
      - npm install -g serverless
      - pip install  --upgrade awscli
      - python --version
      - nodejs --version
    
    stages:
      - deploy
    
    deploy:
      stage: deploy
    
      only:
      - master   # We will run the CD only when something is going to change in master branch.
    
      script:
        - npm install   # Archive the code repository.
        - pip install -r requirements.txt
    
        - cd services/service1/
        - sls deploy -v --stage production
        - cd ../../
    
        - cd services/service2/
        - sls deploy -v --stage production
        - cd ../../
    
        - cd services/service3/
        - sls deploy -v --stage production
        - cd ../../
    
    
      environment:
        name: master
     

    Comments:

    1. If you are using the shared GitLab runners provided with gitlab.com (as opposed to your own, self-hosted GitLab instance), then you should contact support / raise an issue. This error does not appear to be related to your pipeline definition at all.

    Answer #1:

    That second-to-last line (Cleaning up project directory and file based variables) is always present in a CI/CD job, whether it succeeds or fails.

    Most likely what is happening is that the last command, curl -s https://deb.nodesource.com/setup_12.x | bash, is failing. Unfortunately, since you are downloading a remote script and piping it straight into bash, it is entirely possible for your pipeline to start failing at random, because that script is not guaranteed to be the same every time.

    To verify this, I created a clean Ubuntu virtual machine, ran that curl command, and got the following error: (screenshot not preserved)

    Your best long-term fix is to build a container image that has all the dependencies your CI needs, store it in your GitLab project's container registry, and pull that image every time. Not only will this save CI/CD minutes, since you don't have to run the installs on every job, it also prevents exactly this problem of your dependencies changing underneath you and causing an error. It is also worth noting that you should be very careful about piping an externally downloaded script into bash, because that script could change to include anything, and your CI would simply run it unknowingly.
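
    A minimal sketch of that approach, assuming a ci-base image with node, serverless and the AWS CLI preinstalled has already been pushed to the project's registry (the image name is a placeholder):

      image: $CI_REGISTRY_IMAGE/ci-base:latest   # prebuilt image with node, serverless, awscli

      stages:
        - deploy

      deploy:
        stage: deploy
        only:
          - master
        script:
          - npm install
          - pip install -r requirements.txt
          - cd services/service1/ && sls deploy -v --stage production && cd ../../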
