Docker error 139

I have a Web API project that runs fine when I run it through Visual Studio, and I am able to build the image as well.
But when I run it using the command

docker run -d -t -p 8000:83 8fbf296e2173

it shows no error, and the container is listed in docker ps -a with the status

Exited (139) 1 second ago

Please help me solve this.

asked Apr 4, 2019 at 5:38

arunraj770

Started using WSL 2 and encountered the same issue. The workaround posted here has resolved things for me:
https://github.com/microsoft/WSL/issues/4694#issuecomment-

Add the following to the .wslconfig file at %USERPROFILE%\.wslconfig:

[wsl2]
kernelCommandLine = vsyscall=emulate
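
For what it's worth, the setting only takes effect once the WSL 2 VM restarts. One way to force that from PowerShell:

wsl --shutdown

Then restart Docker Desktop and re-run the container.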

answered Sep 2, 2020 at 14:51

Sam Worley

It’s impossible to say what the root cause is without knowing more about the image that is running. But Exited (139) means that PID 1 of the container was terminated by signal 11 (139 = 128 + 11), which is SIGSEGV, i.e. a segmentation fault. The underlying trigger could be any of several things: a bug in the application, a stack overflow, an incompatible shared library, etc.
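
If you want to confirm which signal ended the container, the recorded exit status can be read back with docker inspect (a minimal sketch; <container-id> is the ID shown by docker ps -a):

$ docker inspect --format '{{.State.ExitCode}} {{.State.OOMKilled}}' <container-id>
139 false

Since 139 - 128 = 11 and signal 11 is SIGSEGV, and OOMKilled is false, this points to a segmentation fault rather than the kernel OOM killer.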

answered Apr 4, 2019 at 5:51

wmorrell

For anyone’s future reference: Docker exit code 139 (128 + 11) means that the container received a SIGSEGV. This is usually the result of an invalid memory reference.

Ref: https://stackoverflow.com/a/35410993/7160815
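
As a quick sanity check, bash itself can translate an exit status above 128 back to the signal name (kill -l treats values above 128 as 128 + signal number):

$ kill -l 139
SEGV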

answered Oct 29, 2021 at 3:49

Savindi

I faced the same issue (error code 139) while trying to connect from my host to port 1433 in Docker. I was able to resolve it by running the commands from an elevated (Administrator) Windows PowerShell.

answered May 20, 2020 at 14:53

Santandar


Labels: question (Usability question, not directly related to an error with the image)

@vkannemacher

Hello,
I am facing an issue when I want to run a MySQL container. I tried the example command I found on Docker Hub:

docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:5.6.24

docker ps -a

Shows that the container exited with code 139
mysql:5.6.24 "/entrypoint.sh mysq…" 13 seconds ago Exited (139) 12 seconds ago some-mysql

And I can’t get a single line of logs: the output of the docker logs command is empty…

@wglambert

What do the docker logs show for the container?

I’m not able to reproduce the issue

$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:5.6.24
d671a4fce32d58e1f6e74ed96bf6ea46404c3822213820f26a3189174c95265b

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
d671a4fce32d        mysql:5.6.24        "/entrypoint.sh mysq…"   38 seconds ago      Up 33 seconds       3306/tcp            some-mysql

Maybe your local copy is corrupted; you could try docker image rm mysql:5.6.24 and re-pulling it, as shown below.
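
That would look something like this; comparing the DIGEST column against the digest published on Docker Hub confirms the local copy is intact:

$ docker image rm mysql:5.6.24
$ docker pull mysql:5.6.24
$ docker images --digests mysql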

@wglambert added the question label on Jun 12, 2019

@vkannemacher

Hi,
That is the problem: docker logs shows absolutely nothing…

docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:5.6.24
Unable to find image 'mysql:5.6.24' locally
5.6.24: Pulling from library/mysql
6e69f355f70e: Pull complete 
a3ed95caeb02: Pull complete 
2207cf04cde9: Pull complete 
0a2e8166cde7: Pull complete 
fce818e7de4b: Pull complete 
f4db7f77aeec: Pull complete 
bf8516093f28: Pull complete 
d34cadab8b95: Pull complete 
b7ed17133bd7: Pull complete 
Digest: sha256:6587cd1219e83d7f491be8be0e57201d3bfe864d525b31ecff53c338f690199f
Status: Downloaded newer image for mysql:5.6.24
2569c1a8cbd284557493e84ab5f5d19bace70a4094a72c87b63f56a68e82a0b9
docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                       PORTS               NAMES
2569c1a8cbd2        mysql:5.6.24        "/entrypoint.sh mysq…"   5 seconds ago       Exited (139) 4 seconds ago                       some-mysql
$ docker logs 2569c1a8cbd2
$

@wglambert

Are you running on x86/amd64 architecture? That’s the only one this image supports
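
An easy way to check (uname -m reports the host architecture, and docker info reports what the daemon is running on):

$ uname -m
x86_64
$ docker info --format '{{.Architecture}}'
x86_64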

@kwisatz

I had the exact same behavior (error 139, no log output) with a mysql:5.5 image.
The only notable thing I did between a working and a crashing container was upgrading from Debian 9 to Debian 10. The mysql image was the only one (so far) that stopped working.
docker rmi-ing the image, destroying the mysql container, and then pulling the image again solved my issue.

@vkannemacher

Hi,
Destroying the container and re-pulling the image did nothing for me… I’m still stuck with the issue.

@wglambert

Are you using Docker for Windows/Mac or Linux Containers on Windows?

An Exited (139) is a segmentation fault, so if it’s not a corrupted download then there’s some other environmental effect causing it, maybe even AppArmor or SELinux.
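
One way to tell the two apart from the Docker host: a real segfault is logged by the kernel, while AppArmor/SELinux denials show up as audit messages. A sketch (run as root on the host; the mysqld line is illustrative):

$ dmesg | grep -iE 'segfault|denied'
[ 1234.567890] mysqld[2814]: segfault at 88 ip ... sp ... error 4 in mysqld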

@vkannemacher

I am currently using Docker for Debian

@enthusiasmus

I also have this problem: exited with code 139 without any further logs. I’ll try upgrading to MySQL 5.7.x; maybe that will resolve the error.

@HouaniFarah

For me it was solved by updating Docker on my machine.

yveszoundi added a commit to rimerosolutions/entrusted that referenced this issue on Jul 7, 2022


What is SIGSEGV

SIGSEGV, also known as a segmentation violation or segmentation fault, is a signal used by Unix-based operating systems (such as Linux). It indicates an attempt by a program to write or read outside its allocated memory—either because of a programming error, a software or hardware compatibility issue, or a malicious attack, such as buffer overflow.

SIGSEGV is indicated by the following codes:

  • In Unix/Linux, SIGSEGV is operating system signal 11
  • In Docker containers, when a container terminates due to a SIGSEGV error, it throws exit code 139

The default action for SIGSEGV is abnormal termination of the process. In addition, the following may take place:

  • A core file is typically generated to enable debugging
  • SIGSEGV signals may be logged in more detail for troubleshooting and security purposes
  • The operating system may perform platform-specific operations
  • The operating system may allow the process itself to handle the segmentation violation

SIGSEGV is a common cause for container termination in Kubernetes. However, Kubernetes does not trigger SIGSEGV directly. To resolve the issue, you will need to debug the problematic container or the underlying host.

SIGSEGV (exit code 139) vs SIGABRT (exit code 134)

SIGSEGV and SIGABRT are two Unix signals that can cause a process to terminate.

SIGSEGV is triggered by the operating system, which detects that a process is carrying out a memory violation, and may terminate it as a result.

SIGABRT (signal abort) is a signal triggered by a process itself. It abnormally terminates the process, closes and flushes open streams. Once it is triggered, it cannot be blocked by the process (similar to SIGKILL, but different in that SIGKILL is triggered by the operating system).

Before the SIGABRT signal is sent, the process may:

  • Call the abort() function in the libc library, which unblocks the SIGABRT signal. The process can then abort itself by triggering SIGABRT
  • Call the assert() macro, which is used in debugging, and aborts the program using SIGABRT if the assertion is false.

Exit codes 139 and 134 parallel SIGSEGV and SIGABRT in Docker containers, as demonstrated below:

  • Docker exit code 139 means the container received a SIGSEGV from the underlying operating system due to a memory violation
  • Docker exit code 134 means the container triggered a SIGABRT and was abnormally terminated
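
Both mappings are easy to reproduce with a throwaway container, since delivering the signal by hand terminates PID 1 the same way a real fault would (a sketch using the public alpine image):

$ docker run --rm alpine sh -c 'kill -SEGV $$'; echo $?
139
$ docker run --rm alpine sh -c 'kill -ABRT $$'; echo $?
134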

What Causes SIGSEGV?

Modern general-purpose computing systems include memory management units (MMUs). An MMU enables memory protection in operating systems like Linux—preventing different processes from accessing or modifying each other’s memory, except via a strictly controlled API. This simplifies troubleshooting and makes processes more resilient, because they are carefully isolated from each other.

A SIGSEGV signal or segmentation error occurs when a process attempts to use a memory address that was not assigned to it by the MMU. This can happen for three common reasons:

  1. Coding error—segmentation violations can occur if a pointer is not initialized properly, or if the process tries to access memory through a pointer to previously freed memory. This will result in a segmentation violation in a specific process or binary file under specific circumstances.
  2. Incompatibility between binaries and libraries—if a process runs a binary file that is not compatible with a shared library, it can result in segmentation violations. For example, if a developer updates a library, changing its binary interface, but does not update the version number, an older binary may be loaded against the newer version. This may result in the older binary trying to access inappropriate memory addresses.
  3. Hardware incompatibility or misconfiguration—if segmentation violations occur frequently across multiple libraries, with no repeating pattern, this may indicate a problem with the memory subsystems on the machine or improper low-level system configuration settings.

Handling SIGSEGV Errors

On a Unix-based operating system, by default, a SIGSEGV signal will result in abnormal termination of the violating process.

Additional actions performed by the operating system

In addition to terminating the process, the operating system may generate core files to assist with debugging, and can also perform other platform-dependent operations. For example, on Linux, you can use grsecurity (a kernel hardening patchset) to log SIGSEGV signals in detail, to monitor for related security risks such as buffer overflow.
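
To actually get that core file from a container, the core size limit must be nonzero and the kernel's core pattern must point somewhere writable. A sketch, assuming a throwaway container named crasher (core_pattern is global to the host kernel):

$ echo '/tmp/core.%e.%p' | sudo tee /proc/sys/kernel/core_pattern
$ docker run --name crasher --ulimit core=-1 alpine sh -c 'kill -SEGV $$'
$ docker cp crasher:/tmp . && docker rm crasher

The core can then be opened with gdb to find the faulting stack.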

Allowing the process to handle SIGSEGV

On Linux and Windows, the operating system allows processes to handle their response to segmentation violations. For example, the program can collect a stack trace with information like processor register values and the memory addresses that were involved in the segmentation fault.

An example of this is segvcatch, a C++ library that supports multiple operating systems, and is able to convert segmentation faults and other hardware related exceptions to software language exceptions. This makes it possible to handle “hard” errors like segmentation violations with simple try/catch code. This makes it possible for software to identify a segmentation violation and correct it during program execution.

Troubleshooting SIGSEGV

When troubleshooting segmentation errors, or testing programs to avoid these errors, there may be a need to intentionally cause a segmentation violation to investigate its impact. Most operating systems make it possible to handle SIGSEGV in such a way that they will allow the program to run even after the segmentation error occurs, to allow for investigation and logging.

Troubleshooting Common Segmentation Faults in Kubernetes

SIGSEGV faults are highly relevant for Kubernetes users and administrators. It is fairly common for a container to fail due to a segmentation violation.

However, unlike other signals such as SIGTERM and SIGKILL, Kubernetes does not trigger a SIGSEGV signal directly. Rather, the host machine on a Kubernetes node can trigger SIGSEGV when a container is caught performing a memory violation. The container then terminates, Kubernetes detects this, and may attempt to restart it depending on the pod configuration.

When a Docker container is terminated by a SIGSEGV signal, it throws exit code 139. This can indicate:

  • An issue with application code in one of the libraries running on the container
  • An incompatibility between different libraries running on the container
  • An incompatibility between those libraries and hardware on the host
  • Issues with the host’s memory management systems or a memory misconfiguration

To debug and resolve a SIGSEGV issue on a container, follow these steps:

  1. Get root access to the host machine, and review the logs to see additional information about the buggy container. A SIGSEGV error looks like the following in kubelet logs:
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x1bdaed0]
  2. Try to identify in which layer of the container image the error occurs—it could be in your specific application code, or lower down in the base image of the container.
  3. Run docker pull [image-id] to pull the image for the container terminated by SIGSEGV.
  4. Make sure that you have debugging tools (e.g. curl or vim) installed, or add them.
  5. Use kubectl to execute into the container. See if you can replicate the SIGSEGV error to confirm which library is causing the issue.
  6. If you have identified the library or libraries causing the memory violation, try to modify your image to fix the library causing the memory violation, or replace it with another library. Very often, updating a library to a newer version, or a version that is compatible with the environment on the host, will resolve the issue.
  7. If you cannot identify a library that is consistently causing the error, the problem may be on the host. Check for problems with the host’s memory configuration or memory hardware.
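
A hedged sketch of steps 1, 4 and 5 above with kubectl (pod and container names are placeholders):

$ kubectl get pod <pod> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'
139
$ kubectl logs <pod> --previous        # logs of the crashed instance, often empty for a hard segfault
$ kubectl exec -it <pod> -- sh         # try to reproduce the crash interactively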

The process above can help you resolve straightforward SIGSEGV errors, but in many cases troubleshooting can become very complex and require non-linear investigation involving multiple components. That’s exactly why we built Komodor – to troubleshoot memory errors and other complex Kubernetes issues before they get out of hand.

Troubleshooting Kubernetes Container Termination with Komodor

As a Kubernetes administrator or user, pods or containers terminating unexpectedly can be a pain, and can result in severe production issues. Container termination can be a result of multiple issues in different components and can be difficult to diagnose. The troubleshooting process in Kubernetes is complex and, without the right tools, can be stressful, ineffective and time-consuming.

Some best practices can help minimize the chances of SIGSEGV or SIGABRT signals affecting your applications, but eventually something will go wrong—simply because it can.

This is the reason why we created Komodor, a tool that helps dev and ops teams stop wasting their precious time looking for needles in (hay)stacks every time things go wrong.

Acting as a single source of truth (SSOT) for all of your k8s troubleshooting needs, Komodor offers:

  • Change intelligence: Every issue is a result of a change. Within seconds we can help you understand exactly who did what and when.
  • In-depth visibility: A complete activity timeline, showing all code and config changes, deployments, alerts, code diffs, pod logs, etc. All within one pane of glass with easy drill-down options.
  • Insights into service dependencies: An easy way to understand cross-service changes and visualize their ripple effects across your entire system.
  • Seamless notifications: Direct integration with your existing communication channels (e.g., Slack) so you’ll have all the information you need, when you need it.

If you are interested in checking out Komodor, use this link to sign up for a Free Trial.

I am trying to run a CentOS Docker Image on my Arch Linux host. Running the following command returns nothing except the 139 error code:

$ docker run -ti centos:centos6 /bin/bash                                                                                                                                
[139] $  

I have the CentOS Docker image:

centos              centos6             0cbf37812bff        2 weeks ago         194MB

and a CentOS container is in the list of containers:

$ docker ps -a|grep cento                                                                                                                                                
2ef0f0d7439c        centos:centos6         "/bin/bash"              5 minutes ago       Exited (139) 5 minutes ago                       elated_turing

Docker logs also returns nothing:

$ docker logs <container id>
$

I have tried other Docker images and they work; it only seems to affect the CentOS image, but I need to use CentOS for my work.

asked Oct 29, 2018 at 9:41

Thomas Crowley

There were changes made to vsyscall handling in the Linux kernel, starting with version 4.11, that caused issues with containers running CentOS 6.x.

Two solutions:

  • Use a CentOS 7.x image
  • Boot the kernel with the parameter vsyscall=emulate

Example with GRUB: modify /etc/default/grub:

GRUB_CMDLINE_LINUX_DEFAULT="vsyscall=emulate"

And then run update-grub

Example with systemd-boot: modify your conf in /boot/loader/entries and add the parameter to the options line:

title Arch Linux
linux /vmlinuz-linux
initrd /initramfs-linux.img
options *EXISTINGPARAMS* vsyscall=emulate
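
After rebooting with either boot loader, you can verify the parameter took effect before retrying the container:

$ grep -o vsyscall=emulate /proc/cmdline
vsyscall=emulate
$ docker run --rm centos:centos6 /bin/true; echo $?
0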

answered Dec 14, 2018 at 10:00

Lucas Declercq

The best solution is to use vsyscall=emulate as advised in the other answer, of course, but if you can’t reboot the machine running the CentOS 6 container (or just a CentOS 6 installation inside a chroot, as in my case) and don’t mind some open-heart surgery, you can also “upgrade” CentOS 6 to use the CentOS 7 glibc version to work around the problem. Note that replacing just libc.so is not enough; you need to copy at least the following files from a CentOS 7 system to the /lib64 directory: ld-2.17.so and lib{c,dl,m,pthread}-2.17.so, and then update the corresponding symlinks, i.e. ld-linux-x86-64.so.2, libc.so.6, libdl.so.2, libm.so.6 and libpthread.so.0, to point to the 2.17 versions instead of the 2.12 ones.
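
A sketch of that surgery, assuming the CentOS 7 files have already been copied into a hypothetical /tmp/glibc217 directory. Be careful with the ordering: while the symlinks are being switched, dynamically linked binaries may stop working, so run this from outside the chroot or with a statically linked shell:

$ cp /tmp/glibc217/ld-2.17.so /tmp/glibc217/lib{c,dl,m,pthread}-2.17.so /lib64/
$ ln -sf ld-2.17.so /lib64/ld-linux-x86-64.so.2
$ ln -sf libc-2.17.so /lib64/libc.so.6
$ ln -sf libdl-2.17.so /lib64/libdl.so.2
$ ln -sf libm-2.17.so /lib64/libm.so.6
$ ln -sf libpthread-2.17.so /lib64/libpthread.so.0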

answered Oct 8, 2020 at 16:22

VZ.

Same thing here. I’m trying to run Cloudera in a docker-compose cluster. The YAML looks like this (the original had a duplicate command key; only the last one takes effect):

cloudera:
    image: cloudera/quickstart:latest
    hostname: cloudera
    privileged: true
    expose:
      - "8020" #
      - "8888" #
      - "9083" #
      - "10000" # hive jdbc
      - "50070" # nn http
    ports:
      - "8888:8888"
    tty: true
    stdin_open: true
    command: bash -c "/usr/bin/docker-quickstart"

When I run docker-compose up, it silently exits with 139. Using --verbose I was able to get the following, which is still not very helpful as far as I can see:

 docker-compose --verbose start cloudera
compose.config.config.find: Using configuration files: ./docker-compose.yml
docker.utils.config.find_config_file: Trying paths: ['/home/user1/.docker/config.json', '/home/user1/.dockercfg']
docker.utils.config.find_config_file: Found file at path: /home/user1/.docker/config.json
docker.auth.load_config: Found 'credsStore' section
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.22/version HTTP/1.1" 200 875
compose.cli.command.get_client: docker-compose version 1.25.5, build 8a1c60f6
docker-py version: 4.1.0
CPython version: 3.7.5
OpenSSL version: OpenSSL 1.1.0l  10 Sep 2019
compose.cli.command.get_client: Docker base_url: http+docker://localhost
compose.cli.command.get_client: Docker version: Platform={'Name': 'Docker Engine - Community'}, Components=[{'Name': 'Engine', 'Version': '19.03.8', 'Details': {'ApiVersion': '1.40', 'Arch': 'amd64', 'BuildTime': '2020-03-11T01:29:16.000000000+00:00', 'Experimental': 'false', 'GitCommit': 'afacb8b', 'GoVersion': 'go1.12.17', 'KernelVersion': '4.19.104-microsoft-standard', 'MinAPIVersion': '1.12', 'Os': 'linux'}}, {'Name': 'containerd', 'Version': 'v1.2.13', 'Details': {'GitCommit': '7ad184331fa3e55e52b890ea95e65ba581ae3429'}}, {'Name': 'runc', 'Version': '1.0.0-rc10', 'Details': {'GitCommit': 'dc9208a3303feef5b3839f4323d9beb36df0a9dd'}}, {'Name': 'docker-init', 'Version': '0.18.0', 'Details': {'GitCommit': 'fec3683'}}], Version=19.03.8, ApiVersion=1.40, MinAPIVersion=1.12, GitCommit=afacb8b, GoVersion=go1.12.17, Os=linux, Arch=amd64, KernelVersion=4.19.104-microsoft-standard, BuildTime=2020-03-11T01:29:16.000000000+00:00
compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- ('project3_default')
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.22/networks/project3_default HTTP/1.1" 200 1716
compose.cli.verbose_proxy.proxy_callable: docker inspect_network -> {'Attachable': False,
 'ConfigFrom': {'Network': ''},
 'ConfigOnly': False,
 'Containers': {'1d5c500c0e0ad94b0c66a4ff9524bfcfe1323c762a4b63444a66822b12e5c60c': {'EndpointID': 'ba49e9233aa1f4cba2ba6569fa36514b7e0cb412fef92e1c0755e35aac3adfb0',
                                                                                     'IPv4Address': '172.18.0.5/16',
                                                                                     'IPv6Address': '',
                                                                                     'MacAddress': '02:42:ac:12:00:05',
                                                                                     'Name': 'project3_zookeeper_1'},
                '2dc3c508b4c8fb6bb8163c968179185c5e1a3b753f08a6ab3c31b0d802ee9a54': {'EndpointID': '2355f170495a5762deb7445f632b68a1e691992816d3ff92d888332540a9d940',
                                                                                     'IPv4Address': '172.18.0.6/16',
...
Starting cloudera ...
compose.parallel.feed_queue: Pending: {<Service: cloudera>}
compose.parallel.feed_queue: Starting producer thread for <Service: cloudera>
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=project3', 'com.docker.compose.service=cloudera', 'com.docker.compose.oneoff=False']})
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.22/containers/json?limit=-1&all=1&size=0&trunc_cmd=0&filters=%7B%22label%22%3A+%5B%22com.docker.compose.project%3Dproject3%22%2C+%22com.docker.compose.service%3Dcloudera%22%2C+%22com.docker.compose.oneoff%3DFalse%22%5D%7D HTTP/1.1" 200 1269
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 1 items)
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('3dd42eaab54bd0612e2ed19bcbda1db8e979edde763136e08869c37da2ef248e')
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.22/containers/3dd42eaab54bd0612e2ed19bcbda1db8e979edde763136e08869c37da2ef248e/json HTTP/1.1" 200 None
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {'AppArmorProfile': '',
 'Args': ['bash', '-c', '/usr/bin/docker-quickstart'],
 'Config': {'AttachStderr': False,
            'AttachStdin': False,
            'AttachStdout': False,
            'Cmd': ['bash', '-c', '/usr/bin/docker-quickstart'],
            'Domainname': '',
            'Entrypoint': ['/usr/bin/docker-entrypoint'],
            'Env': ['PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'],
            'ExposedPorts': {'10000/tcp': {},
...
compose.cli.verbose_proxy.proxy_callable: docker start <- ('3dd42eaab54bd0612e2ed19bcbda1db8e979edde763136e08869c37da2ef248e')
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.22/containers/3dd42eaab54bd0612e2ed19bcbda1db8e979edde763136e08869c37da2ef248e/start HTTP/1.1" 204 0
compose.cli.verbose_proxy.proxy_callable: docker start -> None
compose.parallel.parallel_execute_iter: Finished processing: <Service: cloudera>
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=False, filters={'label': ['com.docker.compose.project=project3', 'com.docker.compose.service=cloudera', 'com.docker.compose.oneoff=False']})
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.22/containers/json?limit=-1&all=0&size=0&trunc_cmd=0&filters=%7B%22label%22%3A+%5B%22com.docker.compose.project%3Dproject3%22%2C+%22com.docker.compose.service%3Dcloudera%22%2C+%22com.docker.compose.oneoff%3DFalse%22%5D%7D HTTP/1.1" 200 1570
Starting cloudera ... done
compose.parallel.feed_queue: Pending: set()

Any help would be great!
