Error: End-of-centdir-64 signature not where expected (prepended bytes)


Version info

  • bcbio version (bcbio_nextgen.py --version): 1.2.6
  • OS name and version (lsb_release -ds): Ubuntu 18.04.3 LTS

To Reproduce
Exact bcbio command you have used:

bcbio_nextgen.py upgrade -u skip --datatarget dbnsfp --genomes hg38

Observed behavior
Error message or bcbio output:

2021-02-22 01:53:15 (2.81 MB/s) - ‘dbNSFP4.1a.zip’ saved [30335259650]

error: End-of-centdir-64 signature not where expected (prepended bytes?)
  (attempting to process anyway)
warning [dbNSFP4.1a.zip]:  26040288077 extra bytes at beginning or within zipfile
  (attempting to process anyway)
   skipping: dbNSFP4.1a_variant.chrM.gz  need PK compat. v4.5 (can do v2.1)
Traceback (most recent call last):
  File "/home/local/bcbio/anaconda/bin/bcbio_nextgen.py", line 228, in <module>
    install.upgrade_bcbio(kwargs["args"])
  File "/home/local/bcbio/anaconda/lib/python3.6/site-packages/bcbio/install.py", line 107, in upgrade_bcbio
    upgrade_bcbio_data(args, REMOTES)
  File "/home/local/bcbio/anaconda/lib/python3.6/site-packages/bcbio/install.py", line 359, in upgrade_bcbio_data
    args.cores, ["ggd", "s3", "raw"])
  File "/home/tmpbcbio-install/cloudbiolinux/cloudbio/biodata/genomes.py", line 354, in install_data_local
    _prep_genomes(env, genomes, genome_indexes, ready_approaches, data_filedir)
  File "/home/tmpbcbio-install/cloudbiolinux/cloudbio/biodata/genomes.py", line 480, in _prep_genomes
    retrieve_fn(env, manager, gid, idx)
  File "/home/tmpbcbio-install/cloudbiolinux/cloudbio/biodata/genomes.py", line 875, in _install_with_ggd
    ggd.install_recipe(os.getcwd(), env.system_install, recipe_file, gid)
  File "/home/tmpbcbio-install/cloudbiolinux/cloudbio/biodata/ggd.py", line 30, in install_recipe
    recipe["recipe"]["full"]["recipe_type"], system_install)
  File "/home/tmpbcbio-install/cloudbiolinux/cloudbio/biodata/ggd.py", line 62, in _run_recipe
    subprocess.check_output(["bash", run_file])
  File "/home/local/bcbio/anaconda/lib/python3.6/subprocess.py", line 356, in check_output
    **kwargs).stdout
  File "/home/local/bcbio/anaconda/lib/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['bash', '/home/local/bcbio/genomes/Hsapiens/hg38/txtmp/ggd-run.sh']' returned non-zero exit status 81.

Additional context
I thought the download had been interrupted and the file corrupted during my first attempt, but the same error kept occurring on my second and third attempts.

I think this is related to previous issue #913 (comment), since ggd-run.sh still seems to use unzip instead of p7zip to extract dbNSFP4.1a.zip.

EDIT #1: added the relevant snippet of code from ggd-run.sh

if [ ! -f dbNSFP.txt.gz ]; then
  UNPACK_DIR=`pwd`/tmpunpack
  mkdir -p $UNPACK_DIR
  unzip dbNSFP*.zip "dbNSFP*_variant.chrM.gz" # Potentially problematic line?
  gunzip dbNSFP*_variant.chrM.gz
  head -n1 dbNSFP*_variant.chrM > $UNPACK_DIR/header.txt
  rm dbNSFP*_variant.chrM
  # unzip only files with chromosomal info, eg. skip genes and readme.
  cat $UNPACK_DIR/header.txt > dbNSFP.txt
  unzip -p dbNSFP*.zip "dbNSFP*_variant.chr*.gz" | gunzip -c | grep -v '^#chr' | sort -T $UNPACK_DIR -k1,1 -k2,2n >> dbNSFP.txt
  bgzip dbNSFP.txt
  #extract readme file, used by VEP plugin to add vcf header info
  unzip -p dbNSFP*.zip "*readme.txt" > dbNSFP.readme.txt
fi
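For context, the core of that snippet is a stream-extract, filter, and coordinate-sort pipeline, and the likely fix (per issue #913) is to swap the Zip64-incapable `unzip`/`unzip -p` calls for a p7zip equivalent such as `7z e -so`; that substitution is an assumption, not a tested patch. The toy sketch below reproduces just the merge pipeline with fabricated chromosome files, using only coreutils, so it runs without the 30 GB archive:

```shell
# Toy reproduction of the recipe's merge step (fabricated data, coreutils only).
# In the real script, the `gunzip -c toy_variant.chr[12].gz` line below stands in
# for `unzip -p dbNSFP*.zip "dbNSFP*_variant.chr*.gz" | gunzip -c`; replacing
# that `unzip -p` with `7z e -so` (p7zip) would avoid the Zip64 limitation.
mkdir -p tmpunpack
printf '#chr\tpos\tval\nM\t1\tx\n' | gzip > toy_variant.chrM.gz

# Header comes from one member, as in the original snippet.
gunzip -c toy_variant.chrM.gz | head -n1 > tmpunpack/header.txt

# Two more "chromosome" members, deliberately out of order.
printf '1\t200\tb\n1\t100\ta\n' | gzip > toy_variant.chr1.gz
printf '2\t50\tc\n'             | gzip > toy_variant.chr2.gz

# Merge: header first, then all rows sorted by chromosome and position.
cat tmpunpack/header.txt > dbNSFP_toy.txt
gunzip -c toy_variant.chr[12].gz | grep -v '^#chr' \
  | sort -T tmpunpack -k1,1 -k2,2n >> dbNSFP_toy.txt
cat dbNSFP_toy.txt
```

The output is the header line followed by the variant rows ordered by (chr, pos), which is exactly the layout the recipe later compresses with bgzip and indexes with tabix.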

EDIT #2: added the result of a file integrity check using 7z

7z t dbNSFP4.1a.zip

7-Zip [64] 15.09 beta : Copyright (c) 1999-2015 Igor Pavlov : 2015-10-16
p7zip Version 15.09 beta (locale=en_US.UTF-8,Utf16=on,HugeFiles=on,64 bits,24 CPUs x64)

Scanning the drive for archives:
1 file, 30335259650 bytes (29 GiB)

Testing archive: dbNSFP4.1a.zip
--         
Path = dbNSFP4.1a.zip
Type = zip
Physical Size = 30335259650
64-bit = +

Everything is Ok                     

Files: 36
Size:       30471606961
Compressed: 30335259650

I think the dbNSFP problem is solved.
I don't know if it's OK to post here, but I think I've encountered a Python 2/Python 3 issue this time(?!)
(btw, one of my bcbio_nextgen.py installs uses Python 3.4 and the other uses Python 2.7)

Here's the error message after I removed and reran the dbNSFP installation.

* downloading https://s3.amazonaws.com/gemini-annotations/whole_genome_SNVs.tsv.compressed.gz to /opt/bcbio/gemini_da

Curl failed with non-zero exit code 18. Retrying
Curl failed with non-zero exit code 56. Retrying
Curl failed with non-zero exit code 56. Retrying
Traceback (most recent call last):
  File "/opt/bcbio/anaconda/lib/python2.7/site-packages/gemini/install-data.py", line 179, in <module>
    install_annotation_files(args.anno_dir, args.dl_files, args.extra)
  File "/opt/bcbio/anaconda/lib/python2.7/site-packages/gemini/install-data.py", line 101, in install_annotation_file
    to_dl, anno_dir, cur_config)
  File "/opt/bcbio/anaconda/lib/python2.7/site-packages/gemini/install-data.py", line 133, in _download_anno_files
    cur_config.get("versions", {}).get(orig, 1))
  File "/opt/bcbio/anaconda/lib/python2.7/site-packages/gemini/install-data.py", line 160, in _download_to_dir
    raise ValueError("Failed to download with curl")
ValueError: Failed to download with curl
Traceback (most recent call last):
  File "/opt/bcbio/anaconda/bin/gemini", line 6, in <module>
    gemini.gemini_main.main()
  File "/opt/bcbio/anaconda/lib/python2.7/site-packages/gemini/gemini_main.py", line 1121, in main
    args.func(parser, args)
  File "/opt/bcbio/anaconda/lib/python2.7/site-packages/gemini/gemini_main.py", line 996, in update_fn
    gemini_update.release(parser, args)
  File "/opt/bcbio/anaconda/lib/python2.7/site-packages/gemini/gemini_update.py", line 68, in release
    subprocess.check_call([sys.executable, _get_install_script(), config["annotation_dir"]] + extra_args)
  File "/opt/bcbio/anaconda/lib/python2.7/subprocess.py", line 540, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/opt/bcbio/anaconda/bin/python', '/opt/bcbio/anaconda/lib/python2.7/site-pa
[localhost] local: touch /root/tmp/cloudbiolinux/install_packages.R
[localhost] local: mkdir -p /opt/bcbio/tools/lib/R/site-library
[localhost] local: chown -R root /opt/bcbio/tools/lib/R/site-library
[localhost] local: echo '
    .libPaths(c("/opt/bcbio/tools/lib/R/site-library"))
    library(methods)
    cran.repos <- getOption("repos")
    cran.repos["CRAN" ] <- "http://cran.fhcrc.org/"
    options(repos=cran.repos)
    source("http://bioconductor.org/biocLite.R")
    ' >> "$(echo /root/tmp/cloudbiolinux/install_packages.R)"
[localhost] local: echo '
    repo.installer <- function(repos, install.fn, pkg_name_fn) {

        update.packages(lib.loc="/opt/bcbio/tools/lib/R/site-library", repos=repos, ask=FALSE)

      maybe.install <- function(pname) {
  check_name <- ifelse(is.null(pkg_name_fn), pname, pkg_name_fn(pname))
        if (!(is.element(check_name, installed.packages()[,1])))
          install.fn(pname)
      }
    }
    ' >> "$(echo /root/tmp/cloudbiolinux/install_packages.R)"
[localhost] local: echo '
    std.pkgs <- c("cghFLasso", "devtools", "ggplot2", "gsalib", "matrixStats", "snow", "RColorBrewer")
    std.installer = repo.installer(cran.repos, install.packages, NULL)
    lapply(std.pkgs, std.installer)
    ' >> "$(echo /root/tmp/cloudbiolinux/install_packages.R)"
[localhost] local: echo '
        bioc.pkgs <- c("BiocInstaller", "BubbleTree", "cn.mops", "DEXSeq", "DNAcopy", "GenomicRanges", "IRanges", "rt
        bioc.installer = repo.installer(biocinstallRepos(), biocLite, NULL)
        lapply(bioc.pkgs, bioc.installer)
        ' >> "$(echo /root/tmp/cloudbiolinux/install_packages.R)"
[localhost] local: echo '
        std2.pkgs <- c("PSCBS")
        lapply(std2.pkgs, std.installer)
        ' >> "$(echo /root/tmp/cloudbiolinux/install_packages.R)"
[localhost] local: Rscript /root/tmp/cloudbiolinux/install_packages.R
[localhost] local: rm -f /root/tmp/cloudbiolinux/install_packages.R
[localhost] local: rm -rf /root/tmp/cloudbiolinux
[localhost] local: rm -rf .cpanm
[root@Rome bcbio]# htop
[root@Rome bcbio]# less -S bcbio.toolplus.install.cadd.dbnsfp.log
[localhost] local: rm -f ~/*.dot
[localhost] local: rm -f ~/*.log
Creating manifest of installed packages in /opt/bcbio/manifest
Third party tools upgrade complete.
Installing additional tools
Upgrading bcbio-nextgen data files
Setting up virtual machine
[localhost] local: echo $HOME
[localhost] local: uname -m
[localhost] local: pwd
[localhost] local: mkdir -p '/opt/bcbio/genomes/Hsapiens/GRCh37/variation/9e5abf75-1cb7-3e48-84fd-7817f48269a1'
[localhost] local: wget --continue --no-check-certificate -O dbNSFPv3.0b2c.zip 'ftp://dbnsfp:dbnsfp@dbnsfp.softgeneti
[localhost] local: mv dbNSFPv3.0b2c.zip /opt/bcbio/genomes/Hsapiens/GRCh37/variation
[localhost] local: rm -rf /opt/bcbio/genomes/Hsapiens/GRCh37/variation/9e5abf75-1cb7-3e48-84fd-7817f48269a1
[localhost] local: mkdir -p dbNSFPv3.0b2c
[localhost] local: 7za x /opt/bcbio/genomes/Hsapiens/GRCh37/variation/dbNSFPv3.0b2c.zip -y -odbNSFPv3.0b2c
[localhost] local: cat dbNSFPv3.0b2c/dbNSFP*_variant.chr* | bgzip -c > dbNSFP_v3.0b2c.gz
[localhost] local: rm -f dbNSFPv3.0b2c/* && rmdir dbNSFPv3.0b2c
[localhost] local: rm -f /opt/bcbio/genomes/Hsapiens/GRCh37/variation/dbNSFPv3.0b2c.zip
[localhost] local: tabix -s 1 -b 2 -e 2 -c '#' dbNSFP_v3.0b2c.gz
Traceback (most recent call last):
  File "./tools/bin/bcbio_nextgen.py", line 207, in <module>
    install.upgrade_bcbio(kwargs["args"])
  File "/opt/bcbio/anaconda/lib/python2.7/site-packages/bcbio/install.py", line 94, in upgrade_bcbio
    upgrade_bcbio_data(args, REMOTES)
:

Hlic818

ku21fan

Hello,

Sorry for the inconvenience with the download. The file is huge (8.42 GB), so it is hard to send via email.

Can you try to download it with the following command? (via the original download URL)

wget -O data_CVPR2021.zip https://www.dropbox.com/sh/1s6r4slurc5ei2n/AABJZzmWTCNt6EWVXbQ-QdDUa/data_CVPR2021.zip?dl=0

or with this command (we just reset the download URL of the file):

wget -O data_CVPR2021.zip https://www.dropbox.com/s/o27gunx16usjhgu/data_CVPR2021.zip?dl=0

In our environment, both commands can still download the file data_CVPR2021.zip, so we don't know why the problem happens :(

If you still cannot download this file, I am planning to upload it to Baidu.

Hope it helps.

Hlic818

Dear Dr. Baek,
I’m sorry for disturbing you again.
In our setting, both the following commands
wget -O data_CVPR2021.zip https://www.dropbox.com/sh/1s6r4slurc5ei2n/AABJZzmWTCNt6EWVXbQ-QdDUa/data_CVPR2021.zip?dl=0
wget -O data_CVPR2021.zip https://www.dropbox.com/s/o27gunx16usjhgu/data_CVPR2021.zip?dl=0
failed to download the data. At your convenience, would you please upload it to Baidu as soon as possible? We're desperate for the data.
Thank you for your assistance.
Sincerely,
Xiao Li

At 2021-10-21 18:35:57, «Baek JeongHun» ***@***.***> wrote:


ku21fan

OK. I am going to upload it to Baidu right now.

ku21fan

I uploaded the data to Baidu (password: datm).

The file data_CVPR2021.zip is split into 3 files: data_CVPR2021_split.z01, data_CVPR2021_split.z02, and data_CVPR2021_split.zip.

You should download them all and then run the following commands:

cat data_CVPR2021_split.z* > tmp.zip
unzip tmp.zip

then you will get data_CVPR2021.zip (about 8.5 GB)

Hope it helps :)
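The `cat` rejoin in those instructions works when the pieces are raw byte-level splits of the original archive, so concatenating them in order reproduces it exactly; that this is how the Baidu files were produced is an inference from the instructions, not something verified against them. A toy demonstration of the split/rejoin round trip:

```shell
# Byte-split a file, rejoin the pieces with cat, and confirm the result
# is bit-identical to the original.
head -c 100000 /dev/urandom > original.zip   # stand-in for the big archive
split -b 40000 -d original.zip piece.z       # piece.z00, piece.z01, piece.z02
cat piece.z* > rejoined.zip                  # the glob expands in sorted order
cmp -s original.zip rejoined.zip && echo identical
```

Note this only holds for plain byte splits; multi-part archives created with `zip -s` have per-segment structure and are normally rejoined with `zip -s 0 split.zip --out joined.zip` instead.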

yusirhhh

When I use the above command unzip tmp.zip,
it returns an error: End-of-centdir-64 signature not where expected (prepended bytes?)

Did this happen when you unzipped the file?

ku21fan

@yusirhhh It did not happen to me.
Umm.. can you try to re-download, or check the md5sums of the downloaded files?

The md5sum of each file is as follows:

77c78ac256ffbf3cc5e36c8bd5e00b4d  data_CVPR2021_split.z01
ac4424d99c8c8ccdcb17dd7e2b8b9ae6  data_CVPR2021_split.z02
9c5c71ad13f72f700434bcd438b49d1c  data_CVPR2021_split.zip
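The sums above can be checked mechanically with `md5sum -c`, which reads a `checksum  filename` list and reports OK or FAILED per file. A minimal sketch with a locally created file (the real check would list the three data_CVPR2021_split.* files with the published hashes):

```shell
# Verify files against a checksum list with md5sum -c.
printf 'hello\n' > part.z01        # toy stand-in for a downloaded split part
md5sum part.z01 > checks.md5       # in practice, paste the published sums here
md5sum -c checks.md5               # prints "part.z01: OK" when the hash matches
```

Any file reported FAILED is the one worth re-downloading before retrying the `cat`/`unzip` rejoin.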

yusirhhh

Hello, when performing scene text recognition, the input pictures are processed into lmdb format. I want to do research on handwritten text with this codebase. May I ask whether converting the data to lmdb format has a big impact on training speed? I look forward to your reply.

ku21fan

@yusirhhh
Hello, I have not compared the training speed carefully, but I believe that using the lmdb format is faster than not using it.

Following the convention that the CRNN implementation established, I usually use the lmdb format.
And of course, the lmdb format makes it easy to handle many image files as one DB file.
Thus, I use the lmdb format for both convention and convenience.

So, in my opinion, if you don't need to follow the convention and don't get a speed improvement from lmdb, you may not need to use it.


  1. unzip — read failure while seeking for End-of-centdir-64 signature

    I encountered a problem when I tried to unzip a large .zip file (about 7.8 GB). Below is the error message I got from the shell. Could anyone help me look into this problem? How can I fix it? (If other tools can work, please suggest them.)

    Code:

    unzip VMImageV6.zip
    Archive:  VMImageV6.zip
    fatal error: read failure while seeking for End-of-centdir-64 signature.
      This zipfile is corrupt.
    unzip:  cannot find zipfile directory in one of VMImageV6.zip or
            VMImageV6.zip.zip, and cannot find VMImageV6.zip.ZIP, period.

    Btw, I'm on Lucid Lynx. Thanks in advance!


  2. Re: unzip — read failure while seeking for End-of-centdir-64 signature

    Stating the obvious, it sounds like the .zip file is corrupt. Easy test: use the unzip tool to test the archive with «unzip -t file.zip». You can also test independently of unzip: install the p7zip-full package and use the 7z tool to test the archive: «7z t file.zip».
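If neither unzip nor p7zip is handy, Python's standard-library zipfile module offers a third integrity check, and it understands Zip64. A minimal sketch on a locally built archive:

```shell
# Build a tiny archive with the stdlib zipfile CLI, then test its CRCs.
printf 'payload\n' > member.txt
python3 -m zipfile -c toy.zip member.txt
python3 -m zipfile -t toy.zip   # prints "Done testing" if all CRCs check out
```

Running the same `-t` command against the real corrupt download should instead raise a BadZipFile error, which confirms the archive itself (not the unzip tool) is the problem.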


  3. Re: unzip — read failure while seeking for End-of-centdir-64 signature

    Thank you, gmargo! I installed 7z and tested the archive; it told me that the archive cannot be opened.

    Code:

    7z t VMImageV6.zip
    
    7-Zip 9.04 beta  Copyright (c) 1999-2009 Igor Pavlov  2009-05-30
    p7zip Version 9.04 (locale=zh_CN.utf8,Utf16=on,HugeFiles=on,2 CPUs)
    
    Processing archive: VMImageV6.zip
    
    Error: Can not open file as archive

    To be sure, I re-downloaded the file from the FTP server, and now it works well. Thanks for the help!



For the 2.15.0.19 update, you may need to update pillarsofeternity-thewhitemarch1-gog first, otherwise you’ll get a conflict error for charged_spellbind_biting_winds_ability.unity3d.

Okay, actually, I figured out why the main menu wasn't working… I have two monitors, and the game was mis-detecting my resolution and shifting mouse inputs about two inches to the left. Weird. Switching resolutions fixed it.

[/home/erik/yaourt/yaourt-tmp-erik/aur-pillarsofeternity-gog/src/gog_pillars_of_eternity_2.13.0.17.sh]
error: End-of-centdir-64 signature not where expected (prepended bytes?)
(attempting to process anyway)
warning [/home/erik/yaourt/yaourt-tmp-erik/aur-pillarsofeternity-gog/src/gog_pillars_of_eternity_2.13.0.17.sh]: 835631 extra bytes at beginning or within zipfile
(attempting to process anyway)

The package appears to finish building and installs, but the opening menu does not function: clicking the gauntlet on any menu item does nothing but twitch the gauntlet.

Garment4D

[PDF] | [OpenReview] | [Project Page]

Overview

This is the codebase for our NeurIPS 2021 paper Garment4D: Garment Reconstruction from Point Cloud Sequences.

teaser

For further information, please contact Fangzhou Hong.

News

  • 2021-12 Code release!
  • 2021-09 Garment4D is accepted to NeurIPS 2021.

Getting Started

Please check out the scripts folder for the training scripts. We currently support three types of garments: skirts, T-shirts, and trousers. Taking skirt training as an example, please run seg_pca_skirt2.sh first for the canonical garment reconstruction, and then run seg_pca_lbs_skirt2.sh for the posed garment reconstruction.

TODO

  • Instructions for setting up python environments.
  • Data to run the code.
  • Pre-trained models.

Citation

If you find our work useful in your research, please consider citing the following papers:

@inproceedings{
    hong2021garmentd,
    title={Garment4D: Garment Reconstruction from Point Cloud Sequences},
    author={Fangzhou Hong and Liang Pan and Zhongang Cai and Ziwei Liu},
    booktitle={Thirty-Fifth Conference on Neural Information Processing Systems},
    year={2021},
    url={https://openreview.net/forum?id=aF60hOEwHP}
}

Acknowledgments

In our implementation, we refer to the following open-source codebases:

  • PointNet2.PyTorch
  • pygcn
  • smplx
