Error processing tar file (exit status 1): archive/tar: invalid tar header


Contents

  1. Archive/tar: invalid tar header message when export or import #660
  2. Comments
  3. Expected behavior
  4. Actual behavior
  5. Information
  6. Steps to reproduce the behavior
  7. Error pulling down image: "archive/tar: invalid tar header" #15561
  8. Comments
  9. Reproduced with Docker 1.7.1:
  10. Reproduced with Docker 1.8.1:
  11. Unable to load cached images: archive/tar: invalid tar header #8720
  12. Comments

Expected behavior

Successful image import.

Actual behavior

"Error response from daemon: Error processing tar file(exit status 1): archive/tar: invalid tar header"

Information

  • Exporting a container with > produces a corrupted file, which then fails on import.
  • Importing an image through a | pipe fails with the same error.
  • Diagnostic ID: 65607FD9-95F8-4697-86E9-82CAC340FD2A/2017-04-20_21-49-26
  • Windows 10 Pro
  • Docker 17.05.0-ce-rc1-wind8 (11189) edge 73d01bb
  • Temporary solution for export is to use: docker export --output="export.tar" container_id
  • Temporary solution for import is to use: docker import export.tar (a runnable sketch of both commands follows after the reproduction steps below)

Steps to reproduce the behavior

  1. docker export container_id > export.tar
  2. cat export.tar | docker import - exampleimagelocal:new
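
A minimal sketch of the file-based workaround from the Information list above; container_id and the image/tag name are placeholders taken from the report, not fixed values:

# Export to a file directly instead of redirecting STDOUT (safe from PowerShell):
docker export --output="export.tar" container_id

# Import from the file directly instead of piping through cat:
docker import export.tar exampleimagelocal:new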


This could be related to a PowerShell limitation: #758

I ran some tests on my machine with the different output types (STDOUT redirection versus the file output switch -o): the sizes differ quite a bit, and when I opened the files in a diff tool it complained about different carriage-return types.

I also opened the two files in a hex editor, and it turns out the version created via STDOUT had a 0x00 byte as every second character. My guess is that this is the classic problem of carriage returns differing between Linux and Windows, combined with Windows (PowerShell) defaulting to UTF-16 rather than UTF-8 for redirected output, which effectively corrupts any character strings or binary data.
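
A rough way to confirm this kind of corruption, assuming a shell where xxd and dd are available (for example Git Bash or WSL); the file name is just an example:

# A mangled archive shows a 0x00 byte after every real byte:
xxd -l 16 alpine.tar

# A valid tar archive carries the magic string "ustar" at byte offset 257:
dd if=alpine.tar bs=1 skip=257 count=5 2>/dev/null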

My system is Windows 10 Pro and I used PowerShell to generate the files. It is worth mentioning that when I saved the file using the file output switch -o, I was able to import it again, but when using STDOUT the import yielded the exact same error.

Steps to reproduce:
1. Open PowerShell
2. docker pull alpine
3. docker save alpine > alpine.tar
4. docker rmi alpine
5. docker load -i alpine.tar (Error processing tar file(exit status 1): archive/tar: invalid tar header)

The same procedure works from a normal command window (cmd), so this is most likely a pure PowerShell problem. However, using the file output switch (-o) when saving, instead of STDOUT, does work from PowerShell, so it is probably best to avoid output-stream redirection altogether when working in PowerShell.
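
In other words, the safe sequence from PowerShell avoids redirection entirely. A sketch of the same repro using only the file switches (the docker commands are identical in PowerShell and cmd):

docker pull alpine
docker save alpine -o alpine.tar   # write the archive directly, no STDOUT redirection
docker rmi alpine
docker load -i alpine.tar          # loads cleanly, unlike the file produced with >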

Files created via PowerShell redirection are also considerably larger. For example:

  • Valid Cassandra image file created with CMD: 323,032 KB
  • Faulty Cassandra image created with PowerShell: 653,040 KB


Source

Error pulling down image: "archive/tar: invalid tar header" #15561

Reproduced with Docker 1.7.1:

Reproduced with Docker 1.8.1:


I'll give you some extra information; maybe this helps:

I'm using the latest version (1.8.1):

I built the image with it (1.8.1 and docker import).

Locally, the image works.

I'm sorry to bother you with this issue, but it's our base image used to build all the others, and currently about 80% of them are broken ;-(

We've started seeing this too. It seems to happen after pushing an image that was imported. After removing the local copy so the pull doesn't no-op, all pulls fail.

@rockyluke I am trying to reproduce this case on the push side. Do you have specific instructions for how this image was first created before it was pushed? Can you also include the kernel version and graph driver?

@vito if you have a reproducible case, it will help as well.

@stevenschlansker the same information would help here too. Can you describe the contents of the directory being imported, plus your graph driver information? Thanks!

This script is what we use to generate our base images; it fails 100% of the time on Docker 1.8.1 and works fine on Docker 1.7.1.

Kernel vanilla 4.0.4, Ubuntu 14.04.3, overlay driver
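
For reference, a rough sketch of the kind of pipeline being described; the registry name is hypothetical, and on Docker 1.8.1 the final pull fails with the invalid tar header error while 1.7.1 is fine:

# build a root filesystem, import it as a base image, push it, then pull it back
tar -c . | docker import - registry.example.com/base:latest
docker push registry.example.com/base:latest
docker rmi registry.example.com/base:latest
docker pull registry.example.com/base:latest   # fails on 1.8.1 with "archive/tar: invalid tar header"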

This is fixed by #15492, which has already been cherry-picked for 1.8.2.

Confirmed fixed in master; the fix will be in 1.8.2.

The bad images which are failing on pull will need to be reimported and repushed before pulling will succeed.

The tar -c . | docker import - approach includes GNU @LongLink entries.
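
For context, GNU tar falls back to @LongLink pseudo-entries when a path does not fit the 100-character name field of the classic header, and those entries are what tripped the affected Docker versions. A rough way to spot candidates and to sidestep them while waiting for the fix (GNU tar flags; a sketch, not the official remedy):

# paths longer than 100 characters need the GNU long-name extension
tar -tf base.tar | awk 'length($0) > 100'

# re-creating the archive in POSIX pax format avoids GNU @LongLink entries
tar --format=posix -c . | docker import - mybase:fixed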

This also means that images pushed this way will need to be re-imported with docker import and re-pushed once you have updated to Docker 1.8.2.

Can I ask what operating system everyone is using? I'm using Windows and getting these errors.

@Sicily-F this is a 6 year old ticket; Docker was not yet available for Windows back then, and the cause of this issue was fixed; if you’re running into an issue, it’s very likely unrelated to this one. If you think it’s a bug and have details that may help triage the issue, please open a new ticket instead


Thank you, I have asked the question in the Docker forum.

Source

Unable to load cached images: archive/tar: invalid tar header #8720

When starting minikube from the Windows CMD prompt I am getting the error below; please assist.

  • minikube v1.12.0 on Microsoft Windows 10 Home Single Language 10.0.19041 Build 19041
  • Using the virtualbox driver based on existing profile
  • Starting control plane node minikube in cluster minikube
  • Updating the running virtualbox "minikube" VM .
  • Found network options:
    • HTTP_PROXY=http://k8s.gcr.io
    • HTTPS_PROXY=https://k8s.gcr.io
    • NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24
    • http_proxy=http://k8s.gcr.io
    • https_proxy=https://k8s.gcr.io
    • no_proxy=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24
      ! This VM is having trouble accessing https://k8s.gcr.io
  • To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
  • Preparing Kubernetes v1.18.3 on Docker 19.03.12 .
    • env HTTP_PROXY=http://k8s.gcr.io
    • env HTTPS_PROXY=https://k8s.gcr.io
    • env NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24
  • Unable to load cached images: loading cached images: Docker load /var/lib/minikube/images/kube-proxy_v1.18.3: loadimage docker.: docker load -i /var/lib/minikube/images/kube-proxy_v1.18.3: Process exited with status 1
    stdout:

stderr:
Error processing tar file(exit status 1): archive/tar: invalid tar header

! initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 2
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks

stderr:
W0714 15:29:11.102241 4241 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
unexpected fault address 0x0
fatal error: fault
[signal SIGSEGV: segmentation violation code=0x80 addr=0x0 pc=0x41887f]

goroutine 1 [running]:
runtime.throw(0x19a1551, 0x5)
/usr/local/go/src/runtime/panic.go:774 +0x72 fp=0xc00061ae28 sp=0xc00061adf8 pc=0x42dc22
runtime.sigpanic()
/usr/local/go/src/runtime/signal_unix.go:401 +0x3de fp=0xc00061ae58 sp=0xc00061ae28 pc=0x44243e
runtime.SetFinalizer(0x18b9220, 0xc0005f4390, 0x1735f40, 0x1a795c0)
/usr/local/go/src/runtime/mfinal.go:410 +0x26f fp=0xc00061af40 sp=0xc00061ae58 pc=0x41887f
os.newProcess(. )
/usr/local/go/src/os/exec.go:26
os.startProcess(0xc0004c4ce0, 0xe, 0xc0002cb110, 0x3, 0x3, 0xc00061b168, 0x0, 0x0, 0x0)
/usr/local/go/src/os/exec_posix.go:55 +0x429 fp=0xc00061b028 sp=0xc00061af40 pc=0x4cc139
os.StartProcess(0xc0004c4ce0, 0xe, 0xc0002cb110, 0x3, 0x3, 0xc00061b168, 0xb, 0x0, 0x0)
/usr/local/go/src/os/exec.go:102 +0x7c fp=0xc00061b080 sp=0xc00061b028 pc=0x4cb9fc
os/exec.(*Cmd).Start(0xc0000da9a0, 0x9c5f01, 0xc0000ce910)
/usr/local/go/src/os/exec/exec.go:416 +0x50c fp=0xc00061b1c0 sp=0xc00061b080 pc=0x9c786c
os/exec.(*Cmd).Run(0xc0000da9a0, 0xc0000ce910, 0x0)
/usr/local/go/src/os/exec/exec.go:338 +0x2b fp=0xc00061b1e8 sp=0xc00061b1c0 pc=0x9c72fb
os/exec.(*Cmd).Output(0xc0000da9a0, 0x9, 0xc00061b290, 0x2, 0x2, 0xc0000da9a0)
/usr/local/go/src/os/exec/exec.go:540 +0x88 fp=0xc00061b240 sp=0xc00061b1e8 pc=0x9c81d8
k8s.io/kubernetes/cmd/kubeadm/app/util/initsystem.SystemdInitSystem.ServiceExists(0x19a6832, 0x9, 0x20)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/util/initsystem/initsystem_unix.go:123 +0x96 fp=0xc00061b2c0 sp=0xc00061b240 pc=0x130bfb6
k8s.io/kubernetes/cmd/kubeadm/app/util/initsystem.(*SystemdInitSystem).ServiceExists(0x2a221d8, 0x19a6832, 0x9, 0x0)
:1 +0x46 fp=0xc00061b2e8 sp=0xc00061b2c0 pc=0x130c456
k8s.io/kubernetes/cmd/kubeadm/app/preflight.FirewalldCheck.Check(0xc0004c4c40, 0x2, 0x2, 0x18edf80, 0xc0002cabd0, 0xc0003ecbe0, 0x11, 0x2a23400, 0x0)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/preflight/checks.go:172 +0x146 fp=0xc00061b380 sp=0xc00061b2e8 pc=0x14d11b6
k8s.io/kubernetes/cmd/kubeadm/app/preflight.(*FirewalldCheck).Check(0xc000537220, 0x19a5f71, 0x9, 0x0, 0x0, 0x0, 0x0)
:1 +0x56 fp=0xc00061b3d8 sp=0xc00061b380 pc=0x14df736
k8s.io/kubernetes/cmd/kubeadm/app/preflight.RunChecks(0xc0000dc2c0, 0x25, 0x2c, 0x1c0cc40, 0xc00000e020, 0xc0002cabd0, 0x1d, 0x2c)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/preflight/checks.go:1069 +0xd9 fp=0xc00061b520 sp=0xc00061b3d8 pc=0x14dc2b9
k8s.io/kubernetes/cmd/kubeadm/app/preflight.RunInitNodeChecks(0x1c4ade0, 0x2a221d8, 0xc0005ad8c0, 0xc0002cabd0, 0x0, 0x26, 0x0)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/preflight/checks.go:948 +0xc0c fp=0xc00061ba80 sp=0xc00061b520 pc=0x14d91bc
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runPreflight(0x1923e60, 0xc000410640, 0x19a6b41, 0x9)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/preflight.go:60 +0x145 fp=0xc00061bb40 sp=0xc00061ba80 pc=0x15d7b15
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1(0xc000144500, 0x0, 0x0)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234 +0x11a fp=0xc00061bbe0 sp=0xc00061bb40 pc=0x15a42ba
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll(0xc0003f0360, 0xc00061bc78, 0x0, 0x3)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422 +0x6e fp=0xc00061bc20 sp=0xc00061bbe0 pc=0x15a358e
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run(0xc0003f0360, 0xc000235a40, 0x0, 0x3, 0xc00061bd20, 0x1)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207 +0x14e fp=0xc00061bcb0 sp=0xc00061bc20 pc=0x15a2b5e
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewCmdInit.func1(0xc000114f00, 0xc000235a40, 0x0, 0x3, 0x0, 0x0)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:147 +0x16b fp=0xc00061bd40 sp=0xc00061bcb0 pc=0x161fe3b
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000114f00, 0xc000235980, 0x3, 0x3, 0xc000114f00, 0xc000235980)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826 +0x460 fp=0xc00061be18 sp=0xc00061bd40 pc=0x6fb130
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0005aac80, 0xc00000e010, 0x1c0cc40, 0xc00000e018)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914 +0x2fb fp=0xc00061bef0 sp=0xc00061be18 pc=0x6fbb7b
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(. )
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run(0x0, 0x16eb700)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50 +0x21c fp=0xc00061bf38 sp=0xc00061bef0 pc=0x16218dc
main.main()
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25 +0x22 fp=0xc00061bf60 sp=0xc00061bf38 pc=0x1621932
runtime.main()
/usr/local/go/src/runtime/proc.go:203 +0x21e fp=0xc00061bfe0 sp=0xc00061bf60 pc=0x42f5be
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc00061bfe8 sp=0xc00061bfe0 pc=0x45a3b1

goroutine 6 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/klog.(*loggingT).flushDaemon(0x2a04d40)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/klog.go:1010 +0x8b
created by k8s.io/kubernetes/vendor/k8s.io/klog.init.0
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/klog.go:411 +0xd6

X Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 2
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks

stderr:
W0714 15:29:15.394503 4265 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
unexpected fault address 0x0
fatal error: fault
[signal SIGSEGV: segmentation violation code=0x80 addr=0x0 pc=0x41887f]

goroutine 1 [running]:
runtime.throw(0x19a1551, 0x5)
/usr/local/go/src/runtime/panic.go:774 +0x72 fp=0xc000512e28 sp=0xc000512df8 pc=0x42dc22
runtime.sigpanic()
/usr/local/go/src/runtime/signal_unix.go:401 +0x3de fp=0xc000512e58 sp=0xc000512e28 pc=0x44243e
runtime.SetFinalizer(0x18b9220, 0xc00062c060, 0x1735f40, 0x1a795c0)
/usr/local/go/src/runtime/mfinal.go:410 +0x26f fp=0xc000512f40 sp=0xc000512e58 pc=0x41887f
os.newProcess(. )
/usr/local/go/src/os/exec.go:26
os.startProcess(0xc0005ed900, 0xe, 0xc00060ad50, 0x3, 0x3, 0xc000513168, 0x0, 0x0, 0x0)
/usr/local/go/src/os/exec_posix.go:55 +0x429 fp=0xc000513028 sp=0xc000512f40 pc=0x4cc139
os.StartProcess(0xc0005ed900, 0xe, 0xc00060ad50, 0x3, 0x3, 0xc000513168, 0xb, 0x0, 0x40)
/usr/local/go/src/os/exec.go:102 +0x7c fp=0xc000513080 sp=0xc000513028 pc=0x4cb9fc
os/exec.(*Cmd).Start(0xc0001451e0, 0x9c5f01, 0xc0000d33b0)
/usr/local/go/src/os/exec/exec.go:416 +0x50c fp=0xc0005131c0 sp=0xc000513080 pc=0x9c786c
os/exec.(*Cmd).Run(0xc0001451e0, 0xc0000d33b0, 0x0)
/usr/local/go/src/os/exec/exec.go:338 +0x2b fp=0xc0005131e8 sp=0xc0005131c0 pc=0x9c72fb
os/exec.(*Cmd).Output(0xc0001451e0, 0x9, 0xc000513290, 0x2, 0x2, 0xc0001451e0)
/usr/local/go/src/os/exec/exec.go:540 +0x88 fp=0xc000513240 sp=0xc0005131e8 pc=0x9c81d8
k8s.io/kubernetes/cmd/kubeadm/app/util/initsystem.SystemdInitSystem.ServiceExists(0x19a6832, 0x9, 0x20)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/util/initsystem/initsystem_unix.go:123 +0x96 fp=0xc0005132c0 sp=0xc000513240 pc=0x130bfb6
k8s.io/kubernetes/cmd/kubeadm/app/util/initsystem.(*SystemdInitSystem).ServiceExists(0x2a221d8, 0x19a6832, 0x9, 0x0)
:1 +0x46 fp=0xc0005132e8 sp=0xc0005132c0 pc=0x130c456
k8s.io/kubernetes/cmd/kubeadm/app/preflight.FirewalldCheck.Check(0xc0005ed880, 0x2, 0x2, 0x18edf80, 0xc00060a8d0, 0xc000628240, 0x11, 0x2a23400, 0x0)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/preflight/checks.go:172 +0x146 fp=0xc000513380 sp=0xc0005132e8 pc=0x14d11b6
k8s.io/kubernetes/cmd/kubeadm/app/preflight.(*FirewalldCheck).Check(0xc000609320, 0x19a5f71, 0x9, 0x0, 0x0, 0x0, 0x0)
:1 +0x56 fp=0xc0005133d8 sp=0xc000513380 pc=0x14df736
k8s.io/kubernetes/cmd/kubeadm/app/preflight.RunChecks(0xc0003b4000, 0x25, 0x2c, 0x1c0cc40, 0xc00000e020, 0xc00060a8d0, 0x1d, 0x2c)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/preflight/checks.go:1069 +0xd9 fp=0xc000513520 sp=0xc0005133d8 pc=0x14dc2b9
k8s.io/kubernetes/cmd/kubeadm/app/preflight.RunInitNodeChecks(0x1c4ade0, 0x2a221d8, 0xc000574d80, 0xc00060a8d0, 0x0, 0x26, 0x0)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/preflight/checks.go:948 +0xc0c fp=0xc000513a80 sp=0xc000513520 pc=0x14d91bc
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runPreflight(0x1923e60, 0xc0005dc140, 0x19a6b41, 0x9)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/preflight.go:60 +0x145 fp=0xc000513b40 sp=0xc000513a80 pc=0x15d7b15
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1(0xc0005fe200, 0x0, 0x0)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234 +0x11a fp=0xc000513be0 sp=0xc000513b40 pc=0x15a42ba
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll(0xc0003fa5a0, 0xc000513c78, 0x0, 0x3)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422 +0x6e fp=0xc000513c20 sp=0xc000513be0 pc=0x15a358e
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run(0xc0003fa5a0, 0xc0002695f0, 0x0, 0x3, 0xc000513d20, 0x1)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207 +0x14e fp=0xc000513cb0 sp=0xc000513c20 pc=0x15a2b5e
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewCmdInit.func1(0xc000115900, 0xc0002695f0, 0x0, 0x3, 0x0, 0x0)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:147 +0x16b fp=0xc000513d40 sp=0xc000513cb0 pc=0x161fe3b
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000115900, 0xc000269560, 0x3, 0x3, 0xc000115900, 0xc000269560)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826 +0x460 fp=0xc000513e18 sp=0xc000513d40 pc=0x6fb130
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0005c0c80, 0xc00000e010, 0x1c0cc40, 0xc00000e018)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914 +0x2fb fp=0xc000513ef0 sp=0xc000513e18 pc=0x6fbb7b
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(. )
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run(0x0, 0x16eb700)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50 +0x21c fp=0xc000513f38 sp=0xc000513ef0 pc=0x16218dc
main.main()
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25 +0x22 fp=0xc000513f60 sp=0xc000513f38 pc=0x1621932
runtime.main()
/usr/local/go/src/runtime/proc.go:203 +0x21e fp=0xc000513fe0 sp=0xc000513f60 pc=0x42f5be
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000513fe8 sp=0xc000513fe0 pc=0x45a3b1

goroutine 6 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/klog.(*loggingT).flushDaemon(0x2a04d40)
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/klog.go:1010 +0x8b
created by k8s.io/kubernetes/vendor/k8s.io/klog.init.0
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/klog.go:411 +0xd6

X failed to start node: startup failed: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 2
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks

stderr:
(kubeadm warning and stack trace identical to the run above)

  • minikube is exiting due to an error. If the above message is not useful, open an issue:
    • https://github.com/kubernetes/minikube/issues/new/choose
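
In this minikube report the cached image tarballs on the host appear to be the corrupted part. One possible (unverified) recovery is to clear the local image cache so minikube downloads the images again; the path below is the default cache location on Linux/macOS, and on Windows the equivalent folder is %USERPROFILE%\.minikube\cache\images:

# stop the cluster, drop the cached image archives, then start again
minikube stop
rm -rf ~/.minikube/cache/images
minikube start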


Source

I'm trying to import a Docker image into Docker on AWS Red Hat Linux (3.10.0-514.el7.x86_64) and am running into the following error:

Error processing tar file(exit status 1): archive/tar: invalid tar header

This same image works fine on my local machine, and in Boot2Docker on Windows also. It’s quite large (2.5 GB), but I’ve verified the checksum on the Red Hat Linux instance, and it’s the same as from the source.

What could be wrong, or how can I resolve it?

asked Nov 16 ’16 at 0:33


I wanted to add that the issue probably occurs because of the difference in STDOUT behaviour between Windows and Unix. Therefore, saving via STDOUT, like:

docker save [image] > file.tar followed by docker load < file.tar

will not work if the save and the load are executed on different operating systems. Always use:

docker save [image] -o file.tar followed by docker load -i file.tar

to prevent these issues. Comparing the TAR files produced by the two methods, you will find that they have completely different sizes (303 MB versus 614 MB for me).
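
A quick sanity check before loading, assuming a shell where tar is available; a structurally valid archive lists cleanly, while a re-encoded one fails immediately:

ls -l file.tar                            # the corrupted copy is roughly twice the size
tar -tf file.tar > /dev/null && echo "tar structure looks OK"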

answered Nov 7 ’18 at 16:20

Ostecke


As a more detailed elaboration on Ostecke's answer:

I've found it's not a Windows-specific issue; it's a PowerShell issue. PowerShell emits two-byte characters to STDOUT, not one-byte characters. If you look at the file in a hex editor you'll notice that the tar header has nulls interleaved with what should be the correct header bytes (and the same throughout the rest of the file). This explains why the file is twice the size.

CMD, on the other hand, does not emit multibyte characters to STDOUT. I've found the STDOUT method of saving works fine across different OSes if you use CMD on Windows.

Using PowerShell, only the -o option is safe:

docker save [image] -o file.tar

Using CMD, either method should work fine.

answered May 1 ’19 at 15:23

The correct way to resolve this problem is this:

When you save the image, use this instruction:

docker save --output=C:\YOUR_PATH\my_docker_image.tar e6f81ac424ae (image id)

And when you try to load this image, use this instruction:

docker load --input C:\YOUR_PATH\my_docker_image.tar

After this you will see your image with the name <none> in the Docker image list; to fix that, use the tag command:

docker tag IMAGE_ID mydockerapplication

answered Dec 4 ’17 at 8:24

daniele3004


The problem was in FTPing the TAR file to my AWS instance: the FTP client was defaulting to ASCII mode instead of binary. Once I set it to binary I had no problems importing the archive.
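
For anyone hitting the same transfer problem, a sketch of forcing binary mode in a classic command-line FTP client, or skipping FTP entirely; the host and user names are placeholders:

# inside an interactive ftp session, before transferring the archive
binary
put my_docker_image.tar

# or use a binary-safe transfer instead
scp my_docker_image.tar ec2-user@my-aws-host:/tmp/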

answered Nov 20 ’18 at 19:30

bicster



  • Question

  • Hi,

    After importing a couple of databases into the Microsoft SQL Server Developer database image, then stopping the container and committing it, I get the following error:

    PS > docker commit ContainerXY my/tag
    Error response from daemon: re-exec error: exit status 1: output: archive/tar: invalid tar header

    The databases are quite big, but it should not fail like that anyway.
    Is there any more info I can give you to help diagnose this issue?

    PS C:\Windows\system32> docker info
    Containers: 2
     Running: 0
     Paused: 0
     Stopped: 2
    Images: 209
    Server Version: 17.03.1-ee-3
    Storage Driver: windowsfilter
     Windows: 
    Logging Driver: json-file
    Plugins: 
     Volume: local
     Network: l2bridge l2tunnel nat null overlay transparent
    Swarm: inactive
    Default Isolation: process
    Kernel Version: 10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)
    Operating System: Windows Server 2016 Standard
    OSType: windows
    Architecture: x86_64
    CPUs: 12
    Total Memory: 63.92 GiB
    Name: S6
    ID: 2OZY:DSZI:JEUU:4ISW:GJ23:F2TA:F5U3:XR4Z:YZKU:SAID:362W:MMXD
    Docker Root Dir: D:SystemDataDocker
    Debug Mode (client): false
    Debug Mode (server): true
     File Descriptors: -1
     Goroutines: 22
     System Time: 2017-06-02T09:56:38.4659153+02:00
     EventsListeners: 0
    Registry: https://index.docker.io/v1/
    Experimental: false
    Insecure Registries:
     127.0.0.0/8
    Live Restore Enabled: false
    
    PS C:\Windows\system32> docker version
    Client:
     Version:      17.03.1-ee-3
     API version:  1.27
     Go version:   go1.7.5
     Git commit:   3fcee33
     Built:        Thu Mar 30 19:31:22 2017
     OS/Arch:      windows/amd64
    
    Server:
     Version:      17.03.1-ee-3
     API version:  1.27 (minimum version 1.24)
     Go version:   go1.7.5
     Git commit:   3fcee33
     Built:        Thu Mar 30 19:31:22 2017
     OS/Arch:      windows/amd64
     Experimental: false
    
    PS C:\Windows\system32>
