Error: invalid zip file with overlapped components (possible zip bomb)

Hi, on Gentoo x86, the unzip test suite is failing with ##### testing unzipsfx (self-extractor) UnZipSFX 6.00 of 20 April 2009, by Info-ZIP (http://www.info-zip.org). error: invalid zip file with overl...

test failure on 32bit systems: error: invalid zip file with overlapped components (possible zip bomb) #2

Comments

on Gentoo x86, unzip test suite is failing with

I tested this git repository, too. Same failure.

Gentoo Bug report with complete build.log: https://bugs.gentoo.org/698694

It does the same on hppa (32 bit, big endian).

Same on 32 bit arm.

Also take a look at the commit comments. You should make sure that those builds are using large file support.

There may be more here that needs fixing besides the 13f0260 commit.

On a RHEL system I was getting error: invalid zip file with overlapped components (possible zip bomb) immediately. I cloned this repo and built from source. This version got much deeper into the zip before reporting the same issue.

So I manually patched out the zip bomb detection by commenting out the three OverlappedComponents-related if statements in extract.c that return PK_BOMB. This patched version was able to extract the zip without any issue.

Here is what unzip -v reports:

I don’t know what zip software was used to create the zip and I’m not certain that it doesn’t actually contain overlapping components. I do know, however, that it wasn’t maliciously constructed, and it does extract without error (presumably passing the CRC checks) when I patched out the zip bomb detection.

Though this is an extremely large file, can you make it available for me to download?

Though this is an extremely large file, can you make it available for me to download?

I’m sorry I can’t. It’s a forensic disk image and contains sensitive data. This is in part why I masked the name and provided the -v output.

I’d be willing to test various patches, though. For example, if you made a debugging patch that reported component offsets and such, I’d be willing to run that and give you the log. I know this isn’t an ideal option.

I have a script called zipdetails that will dump data about the internal structure of zip files. Recently I’ve added logic to detect zip bombs/overlapping entries. Rather than exiting at the first sign of a bad entry, it attempts to log what the problem is and continue.

The zipbomb detection is still a work in progress, but it should allow you to share data about the structure of your zip file without having to publish the full zip file.

Given the sensitive data you are working with, I added a redact option to mask out the filenames in the output.

$ perl zipdetails -v -redact yourfile.zip

If you are happy that this doesn’t output any sensitive data, can you post it here?

I am sufficiently pleased with the redacted output, thank you for adding it. The output is attached. Note that the overlap detection is causing the script to exit with a die on line 2328 in the save() subroutine. I modified the script and changed that die to a warn in hopes of collecting more data.

For whatever reason GitHub is not allowing me to attach the files to this post. I have uploaded them to
http://www.brandonenright.net/

I am sufficiently pleased with the redacted output, thank you for adding it. The output is attached. Note that the overlap detection is causing the script to exit with a die on line 2328 in the save() subroutine. I modified the script and changed that die to a warn in hopes of collecting more data.

That die should be a print. I did say this is a work in progress 🙂

For whatever reason GitHub is not allowing me to attach the files to this post. I have uploaded them to
http://www.brandonenright.net/

Getting late here, but the first problem I see, apart from all the apparent overlaps the script is reporting, is a discrepancy in the length of the third entry. In the local header it is listed as 00E56AFA compressed bytes, but the central directory equivalent lists it as 200E56AFA bytes. Looks like the local entry has been written as a 32-bit value. The central directory suggests it is a 64-bit value.
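The suspected truncation is easy to reproduce in a couple of lines. This is an illustrative sketch, not taken from any of the tools discussed: writing the 64-bit value into a 4-byte little-endian field keeps only the low 32 bits, producing exactly the 00E56AFA seen in the local header while the central directory reports 200E56AFA.

```python
# Illustrative sketch: a 64-bit size silently truncated to a 32-bit field.
import struct

true_size = 0x200E56AFA                               # central directory value
as_32bit = struct.pack("<L", true_size & 0xFFFFFFFF)  # what a 4-byte field can hold
recovered = struct.unpack("<L", as_32bit)[0]
print(f"{recovered:08X}")  # 00E56AFA
```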

Given that my script cannot find the fourth local header it suggests the central header length is incorrect.

In order to see which is true, can you run my script again with the --scan option? This changes the way the script scans the file. With a bit of luck it will show us all the local headers.

The --scan option may take a while to run, so just let it cook.

Getting late here, but the first problem I see, apart from all the apparent overlaps the script is reporting, is a discrepancy in the length of the third entry. In the local header it is listed as 00E56AFA compressed bytes, but the central directory equivalent lists it as 200E56AFA bytes. Looks like the local entry has been written as a 32-bit value. The central directory suggests it is a 64-bit value.

It’s entirely possible that whatever software was used to create this zip is partially broken. If so, it’s a miracle that unzip was able to extract it once I patched out the overlap detection. Would it help if I tracked down the person that made this zip and got the exact software version info? I’m not entirely sure I even can track down the creator since getting this image involved multiple layers of delegation from team to team, spanning multiple timezones. I’m willing to try though.

Given that my script cannot find the fourth local header it suggests the central header length is incorrect.

In order to see which is true, can you run my script again with the --scan option? This changes the way the script scans the file. With a bit of luck it will show us all the local headers.

The --scan option may take a while to run, so just let it cook.

The scan option doesn’t look to be an option. It is spitting out large sections of unredacted filesystem content listed as "comments". It’s also outputting huge chunks of XX. XXX for impossibly long filenames. If I had to guess, I’d say the pattern that --scan looks for can false-positive quite frequently, and the size of the "comment"/"filename" can be quite large. I ran --scan for a few minutes and it had already output more than 5 GB of data (as measured by piping the output to dd of=/dev/null).

If this is a matter of which header size/offset is correct, perhaps instead you can give me a few offsets into the file where I can provide a manually redacted (if needed) hexdump?

Maybe I’m getting into crazy territory here, but perhaps your zipdetails script could be made to backtrack and try different offsets when it fails to find the expected header? If this is a matter of truncating a 64-bit size into a 32-bit field (200E56AFA -> 00E56AFA), then perhaps a middle ground short of a full --scan would be to try the 16 candidate offsets ending in 00E56AFA in turn and backtrack when the expected header isn’t at one of them.
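The backtracking idea sketches easily. `candidate_sizes` below is a hypothetical helper, not part of zipdetails: every 64-bit value sharing the truncated low 32 bits is a candidate to probe.

```python
# Hypothetical helper for the backtracking idea: enumerate the 64-bit
# sizes whose low 32 bits equal the truncated field value 00E56AFA.
def candidate_sizes(truncated: int, high_words: int = 16) -> list[int]:
    """All 64-bit sizes whose low 32 bits equal the truncated value."""
    return [(hi << 32) | truncated for hi in range(high_words)]

for size in candidate_sizes(0x00E56AFA)[:3]:
    print(f"{size:X}")  # E56AFA, 100E56AFA, 200E56AFA
```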

I do greatly appreciate both of your support here. Thanks to both of you for your willingness to troubleshoot my (likely) broken zip file.

Getting late here, but the first problem I see, apart from all the apparent overlaps the script is reporting, is a discrepancy in the length of the third entry. In the local header it is listed as 00E56AFA compressed bytes, but the central directory equivalent lists it as 200E56AFA bytes. Looks like the local entry has been written as a 32-bit value. The central directory suggests it is a 64-bit value.

It looks right to me, since the next four bytes are 02 00 00 00. Your script may be incorrectly interpreting that data descriptor as having four-byte lengths when in fact it has eight-byte lengths. In fact, the central header for that entry has a Zip64 extra field, which, according to the appnote, requires that the data descriptor have eight-byte lengths.
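For illustration, here are both readings of a reconstructed byte sequence. The value 0x200E56AFA comes from the thread; the byte layout is the standard little-endian encoding, which makes the "next four bytes are 02 00 00 00" observation concrete:

```python
# The 64-bit size 0x200E56AFA, little-endian: a 4-byte read yields only
# 00E56AFA, and the following four bytes are 02 00 00 00, as noted above.
import struct

raw = struct.pack("<Q", 0x200E56AFA)    # b'\xfaj\xe5\x00\x02\x00\x00\x00'
four, = struct.unpack_from("<L", raw)   # 32-bit interpretation
eight, = struct.unpack_from("<Q", raw)  # 64-bit interpretation
print(f"{four:X} {eight:X}", raw[4:8].hex())  # E56AFA 200E56AFA 02000000
```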

Thank you Brandon for running and modifying Paul’s script.

The central directory indicates that there are, correctly, 24 bytes for the data descriptor at the end of entry #3. The first problem occurs at the end of entry #4.

Entry #4 has only ten bytes for the data descriptor. The central directory has a Zip64 extended information extra field, so per section 4.3.9.2 of the PKWARE APPNOTE, the compressed and uncompressed sizes must be eight bytes each, regardless of whether you need eight bytes to represent them. It is the offset of the local header, not the sizes, that forces the long data descriptor.

This occurs consistently for all files that are small, but start after the first 4 GB of the zip file. The problem is not there for files that have lengths that require eight bytes to represent them.

I can see how the author of the code might assume that the data descriptor would only use four-byte values for short lengths. But unfortunately that is incorrect.
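The sizing rule Mark describes reduces to a small predicate. This is a sketch, not unzip's actual code:

```python
# Sketch of the data descriptor sizing rule in APPNOTE 4.3.9.1/4.3.9.2:
# CRC is always 4 bytes, the signature is an optional 4 bytes, and the
# two size fields are 8 bytes each whenever the entry's central header
# carries a Zip64 extra field, regardless of the actual sizes.
def data_descriptor_size(central_has_zip64: bool, has_signature: bool) -> int:
    size_field = 8 if central_has_zip64 else 4
    return (4 if has_signature else 0) + 4 + 2 * size_field

# Entry #3's well-formed descriptor (signature + 8-byte sizes) is 24
# bytes, matching the "24 bytes" noted above. A Zip64 entry never gets
# the short 12/16-byte form.
print(data_descriptor_size(True, True))    # 24
print(data_descriptor_size(True, False))   # 20
print(data_descriptor_size(False, True))   # 16
```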

Getting late here, but the first problem I see, apart from all the apparent overlaps the script is reporting, is a discrepancy in the length of the third entry. In the local header it is listed as 00E56AFA compressed bytes, but the central directory equivalent lists it as 200E56AFA bytes. Looks like the local entry has been written as a 32-bit value. The central directory suggests it is a 64-bit value.

It looks right to me, since the next four bytes are 02 00 00 00. Your script may be incorrectly interpreting that data descriptor as having four-byte lengths when in fact it has eight-byte lengths. In fact, the central header for that entry has a Zip64 extra field, which, according to the appnote, requires that the data descriptor have eight-byte lengths.

Yep, you are correct. It looks like the central directory entry for #3 says the file is 64-bit, but the equivalent local header doesn’t have the Zip64 extra field present, yet does have a 64-bit data descriptor. So it is inconsistent.

My script takes each directory entry at face value when decoding, so tries to decode the inconsistent local entry in 32 bit mode.

I’ll update my code to assume that the central directory holds the truth about Zip64.

Getting late here, but the first problem I see, apart from all the apparent overlaps the script is reporting, is a discrepancy in the length of the third entry. In the local header it is listed as 00E56AFA compressed bytes, but the central directory equivalent lists it as 200E56AFA bytes. Looks like the local entry has been written as a 32-bit value. The central directory suggests it is a 64-bit value.

It’s entirely possible that whatever software was used to create this zip is partially broken. If so, it’s a miracle that unzip was able to extract it once I patched out the overlap detection. Would it help if I tracked down the person that made this zip and got the exact software version info? I’m not entirely sure I even can track down the creator since getting this image involved multiple layers of delegation from team to team, spanning multiple timezones. I’m willing to try though.

It would be good to know what tool was used to create the zip file.

Given that my script cannot find the fourth local header it suggests the central header length is incorrect.
In order to see which is true, can you run my script again with the --scan option? This changes the way the script scans the file. With a bit of luck it will show us all the local headers.
The --scan option may take a while to run, so just let it cook.

The scan option doesn’t look to be an option. It is spitting out large sections of unredacted filesystem content listed as "comments". It’s also outputting huge chunks of XX. XXX for impossibly long filenames. If I had to guess, I’d say the pattern that --scan looks for can false-positive quite frequently, and the size of the "comment"/"filename" can be quite large. I ran --scan for a few minutes and it had already output more than 5 GB of data (as measured by piping the output to dd of=/dev/null).

If this is a matter of which header size/offset is correct, perhaps instead you can give me a few offsets into the file where I can provide a manually redacted (if needed) hexdump?

Maybe I’m getting into crazy territory here, but perhaps your zipdetails script could be made to backtrack and try different offsets when it fails to find the expected header? If this is a matter of truncating a 64-bit size into a 32-bit field (200E56AFA -> 00E56AFA), then perhaps a middle ground short of a full --scan would be to try the 16 candidate offsets ending in 00E56AFA in turn and backtrack when the expected header isn’t at one of them.

The scan option is supposed to try really hard to find anything that looks like a zip directory entry: it just starts at the beginning of the file and reads every byte until it finds a directory signature. Then it decodes that entry and scans again. The problem with it is false positives: if the compressed data in the zip file happens to contain a four-byte sequence that matches one of the zip directory headers, it will blindly try to decode it. I suspect that is what you are tripping over.
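To illustrate the false-positive problem, here is a toy sketch (not zipdetails' scanning code): the local-header magic is only four bytes, so it can easily occur by chance inside compressed data, and a raw byte scan cannot tell the difference.

```python
# Toy sketch: a raw scan for the 4-byte local-header magic "PK\x03\x04"
# finds every coincidental occurrence inside opaque data.
SIG = b"PK\x03\x04"

def find_signatures(data: bytes):
    """Yield every offset where the local-header magic occurs."""
    pos = data.find(SIG)
    while pos != -1:
        yield pos
        pos = data.find(SIG, pos + 1)

# Both hits below are inside made-up "compressed" bytes, yet a scanner
# would try to decode a local header at each one.
blob = b"compressed..." + SIG + b"...more compressed data" + SIG
print(list(find_signatures(blob)))  # [13, 40]
```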

I’m fixing the script to deal with the Zip64 inconsistency between the central and local headers, which means you won’t have to run with the scan option. I should have something to try out today. Hopefully that will allow us to get a full picture of the structure of your zip file.

The change should hopefully cope with the mismatch between the central & local headers.

$ perl zipdetails -v -redact yourfile.zip

It would be good to know what tool was used to create the zip file.

I’m going to guess Java’s ZipOutputStream class. I think I’ve seen this before.

I have run Paul’s updated script. The output is looking good.

I’m working on getting details about the library used to make this zip. I likely won’t know until next week.

Damn! The data is getting truncated after local header #4. I see the problem with my code. Hopefully it is sorted now; a new version is available at https://github.com/pmqs/zipdetails/blob/main/bin/zipdetails. Can you give it a go if you get a chance, please?

Damn! The data is getting truncated after local header #4. I see the problem with my code. Hopefully it is sorted now; a new version is available at https://github.com/pmqs/zipdetails/blob/main/bin/zipdetails. Can you give it a go if you get a chance, please?

Absolutely! The new script gives substantially more output:

I’ve had a closer look at the latest output. The only issue I see is with #3, where it is missing a Zip64 extra field.

I created a zip file larger than 4 GB locally with the missing Zip64 extra field, but unzip doesn’t complain about it.

Mark — would a missing Zip64 extra field be enough to trigger an overlapped components error in unzip?

Brandon — another version of my script is available at https://github.com/pmqs/zipdetails/blob/main/bin/zipdetails. This one has better (but not exhaustive) overlap detection in place. It should also explicitly report the missing zip64 field. Can you give it a go please?

Also, regarding the source of the file — do you think it is possible to get an example file that unzip doesn’t like that can be shared?

There is no issue with entry #3. As I said above, the first problem is in entry #4. Entry #3 does indeed have a Zip64 extra field in the central header. Nothing is missing.

Entry #4 has a Zip64 extra field in the central header, so its data descriptor is supposed to have eight-byte lengths. A data descriptor of that length then overlaps the start of local header #5 by eight bytes.

Correct. There is an inconsistency in #3 between the local & central header that isn’t relevant here — just means that a streaming unzip, that relies on the accuracy of the local headers, will have problems.

Entry #4 has a Zip64 extra field in the central header, so its data descriptor is supposed to have eight-byte lengths. A data descriptor of that length then overlaps the start of local header #5 by eight bytes.

The zipbomb logic appears to be in a grey(ish) area. There is an assumption that the data descriptor needs to include 8-byte sizes because the central directory has a Zip64 entry. That sounds fine when there is a well-formed Zip64 entry in the central directory. In this case the Zip64 entry doesn’t contain any size data, so it doesn’t conform to 4.3.9.2. The code is making an assumption based on a well-formed Zip64 entry in the central directory that doesn’t exist.

This use case isn’t malicious, so is there a case for modifying the zipbomb logic to deal with this edge condition? Before the zipbomb logic was added, unzip could infer the intent with this type of file and unzip it.

I’m sorry for the delay in my response. Multiple other emergencies kept grabbing my attention.

Brandon — another version of my script is available at https://github.com/pmqs/zipdetails/blob/main/bin/zipdetails. This one has better (but not exhaustive) overlap detection in place. It should also explicitly report the missing zip64 field. Can you give it a go please?

It sounds like the problem is pretty well understood now but in case it helps here is the output from latest version of your tool:

Also, regarding the source of the file — do you think it is possible to get an example file that unzip doesn’t like that can be shared?

I haven’t yet tracked down the person or the software that made this zip. Even when I do I doubt I’ll be able to get them to zip up a few large benign files for me in the hopes that it’s similarly malformed.

I get the impression though that your zipdetails script isn’t too far away from being able to construct a ‘redacted’ zip that matches the exact structure of an existing zip. Bytes could be copied from the source to the redacted destination, replacing filenames with something like your X. X and (probably?) replacing the deflate/compression stream with some sort of dummy data. I suppose this would be hard to do generically for all different datastream types. I guess it would also be hard in the case where the zip is so malformed that the zipdetails script can’t even figure out what bytes are filename and which are compression stream and so on.

It’s been frustrating to me that I haven’t been able to help more and provide a copy of the zip I have. If I had a way of redacting the zip, or of reproducing it with benign data so that I could share a useful malformed sample, I would.

Correct. There is an inconsistency in #3 between the local & central header that isn’t relevant here — just means that a streaming unzip, that relies on the accuracy of the local headers, will have problems.

It is not required for a local header to contain the same extra blocks as the central header, and in fact it is not possible for a streaming zipper. The zipper does not know at the time the local header is written what the lengths and CRC will be, which is exactly why there is such a thing as a data descriptor after the compressed data. The information needed to populate a Zip64 extra block does not exist at the time the local header is written, so there can’t be one there. (Similarly, those fields in the local header are set to zero.) The Zip64 extra block can only be in the corresponding central header. It must be there if any of the lengths, offset, or disk number don’t fit in their respective slots in the usual central header.

So there is no "inconsistency" that I see.

While the zip format was designed to support streaming zippers, it was not designed to support streaming unzippers. Nevertheless my streaming unzipper sunzip has no problem with this, resolving any ambiguities at the time the data descriptor is encountered, which includes most annoyingly whether the data descriptor signature is present or not.
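That four-way ambiguity can be sketched as follows. The layouts are the standard ones from the APPNOTE; this is not sunzip's actual code: a data descriptor may or may not begin with the optional PK\x07\x08 signature, and its two size fields may be 4 or 8 bytes each, so a streaming unzipper has up to four candidate readings to check against where the next header actually starts.

```python
# Sketch: enumerate the four possible interpretations of a data descriptor.
import struct

DD_SIG = b"PK\x07\x08"

def descriptor_candidates(raw: bytes) -> dict:
    """Map (has_signature, size_width) to (crc, csize, usize) readings."""
    out = {}
    for signed in (False, True):
        if signed:
            if raw[:4] != DD_SIG:
                continue            # signature variant impossible here
            body = raw[4:]
        else:
            body = raw
        for width, fmt in ((4, "<LLL"), (8, "<LQQ")):
            if len(body) >= struct.calcsize(fmt):
                out[(signed, width)] = struct.unpack_from(fmt, body)
    return out

# A descriptor written with the signature and 8-byte sizes:
raw = DD_SIG + struct.pack("<LQQ", 0xDEADBEEF, 0x200E56AFA, 0x300000000)
for key, (crc, csize, usize) in descriptor_candidates(raw).items():
    print(key, f"{crc:X} {csize:X} {usize:X}")
```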

Nor is it a problem for a streaming zipper, since at the time the data descriptor is written the offset, disk, and lengths are known, so it is known whether there will need to be a Zip64 extra block in the central directory. Or the code could elect to always put a Zip64 extra block in the central directory (permitted by the appnote), in which case the code knows that it will be doing that at the time the data descriptor is written, and use eight-byte lengths.

The zipbomb logic appears to be in a grey(ish) area. There is an assumption that the data descriptor needs to include 8-byte sizes because the central directory has a Zip64 entry. That sounds fine when there is a well-formed Zip64 entry in the central directory. In this case the Zip64 entry doesn’t contain any size data, so it doesn’t conform to 4.3.9.2. The code is making an assumption based on a well-formed Zip64 entry in the central directory that doesn’t exist.

It is not a gray area with respect to the appnote, which is clear, and 4.3.9.2 says nothing about the Zip64 entry containing size data. It says that if there is a Zip64 extra field present for the entry, then the data descriptor must have eight-byte lengths. The Zip64 extra field that is there is well-formed, compliant, and required. It is required because the offset doesn’t fit in 32 bits. It is compliant because the offset is the only value that doesn’t fit in its field, and the offset is the only one marked with all-one bits in the header.
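For illustration, the minimal compliant Zip64 extra field in this situation carries only the one value whose fixed central-header slot is set to all ones, the 8-byte local-header offset. The header ID 0x0001 and field ordering follow APPNOTE 4.5.3; the offset value below is a made-up example:

```python
# Build and re-parse a Zip64 extra field that carries only the 8-byte
# local-header offset (per APPNOTE 4.5.3, the field includes only the
# values whose central-header slots are all ones).
import struct

ZIP64_ID = 0x0001

def zip64_offset_only(offset: int) -> bytes:
    """Extra field when only the local-header offset overflows 32 bits."""
    return struct.pack("<HHQ", ZIP64_ID, 8, offset)  # id, data size, offset

extra = zip64_offset_only(0x1_2345_6789)
header_id, data_size = struct.unpack_from("<HH", extra)
offset = struct.unpack_from("<Q", extra, 4)[0]
print(hex(header_id), data_size, hex(offset))  # 0x1 8 0x123456789
```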

This use case isn’t malicious, so is there a case for modifying the zipbomb logic to deal with this edge condition? Before the zipbomb logic was added, unzip could infer the intent with this type of file and unzip it.

I will look at making this a different warning and proceeding, using the same logic sunzip does to permit all four possibilities of signature/no signature, four-byte/eight-byte lengths. The zip bomb detection in unzip turns out to be good at also detecting bugs in zip files, so I do not plan to completely silence it.

I’ve been struggling with this for a couple days so I’m hoping someone on SE can help me.

I’ve downloaded a large file from Dropbox using wget (following command)

wget -O folder.zip https://www.dropbox.com/sh/.../.../dropboxfolder?dl=1

I’m sure it’s a zip because 1) file dropboxfolder.zip yields
dropboxfolder.zip: Zip archive data, at least v2.0 to extract, and 2) the download and extraction work fine on my Windows machine.

When I try to unzip to the current directory using unzip dropboxfolder.zip, on Linux, I get the following output:

warning:  stripped absolute path spec from /  
mapname:  conversion of  failed     
creating: subdir1/
creating: subdir2/
extracting: subdir1/file1.tif 
error: invalid zip file with overlapped components (possible zip bomb)

I’m unsure what the issue is, since as I said it works fine on Windows. Since the zip is rather large (~19GB) I would like to avoid transferring it bit by bit, so I would be very thankful for any help. I’ve run unzip -t but it gives the same error. When listing all the elements in the archive it shows everything as it should be. Could it be an issue with the file being a tif file?

I’m building an Android custom ROM and I’m getting the following error with unzip 6.0-18:

ExternalError: Failed to run command '['unzip', '-o', '-q', '/mnt/Android-Source/Bliss-arcadia-next-testing/out/target/product/lemonadep/obj/PACKAGING/target_files_intermediates/bliss_lemonadep-target_files-eng.srgrusso.zip', '-d', '/mnt/Android-Source/Bliss-arcadia-next-testing/out/soong/.temp/targetfiles-QY1RYM', 'SYSTEM/etc/vintf/*', 'VENDOR/etc/vintf/*', 'ODM/etc/vintf/*', 'SYSTEM_EXT/etc/vintf/*', 'ODM/etc/*', 'META/*', '*/build.prop']' (exit code 12):
error: invalid zip file with overlapped components (possible zip bomb)
 To unzip the file anyway, rerun the command with UNZIP_DISABLE_ZIPBOMB_DETECTION=TRUE environmnent variable

If I export UNZIP_DISABLE_ZIPBOMB_DETECTION=true in my terminal session I can manually unzip it. But my build environment ignores the variable. So I tried adding export UNZIP_DISABLE_ZIPBOMB_DETECTION=true to /etc/environment, but my build environment ignores this too.
The only solution I have been able to find so far is to downgrade to unzip 6.0-13.
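One possible workaround, sketched here as an untested assumption: the traceback format suggests the build driver invokes unzip from Python, so the variable could be forced onto that one child process instead of relying on the shell session or /etc/environment. The archive and destination paths are placeholders.

```python
# Hypothetical sketch: set the override only for the unzip child process,
# so it survives even if the build system scrubs the inherited environment.
import os
import subprocess

def unzip_with_override(archive: str, dest: str) -> None:
    """Run unzip with zip bomb detection disabled for this call only."""
    env = dict(os.environ, UNZIP_DISABLE_ZIPBOMB_DETECTION="TRUE")
    subprocess.run(["unzip", "-o", "-q", archive, "-d", dest],
                   env=env, check=True)
```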

Any thoughts or recommendations?

Thanks


Description


Thomas Deutschmann (RETIRED)


gentoo-dev


2019-10-27 22:10:04 UTC

Created attachment 594202 [details]
build.log

> >>> Test phase: app-arch/unzip-6.0_p25
> make --jobs 5 --load-average 7.95   check
> #####  This is a Unix-specific target.  (Just so you know.)
> #####     Make sure unzip, funzip and unzipsfx are compiled and
> #####     in this directory.
> #####  testing extraction
> Archive:  testmake.zip
>   inflating: testmake.zipinfo
> #####  testing zipinfo (unzip -Z)
> 1,4c1,3
> < Archive:  testmake.zip
> < Zip file size: 527 bytes, number of entries: 2
> < -rw-a--     2.3 ntf      126 tx defX 98-Nov-19 22:46 notes
> < -rw-a--     2.3 ntf      236 tx defX 98-Nov-19 22:46 testmake.zipinfo
> ---
> > Archive:  testmake.zip   527 bytes   2 files
> > -rw-a--     2.3 ntf      126 tx defX 19-Nov-98 22:46 notes
> > -rw-a--     2.3 ntf      236 tx defX 19-Nov-98 22:46 testmake.zipinfo
> #####  WARNING:  zipinfo output doesn't match stored version
> #####     (If the only difference is the file times, compare your
> #####      timezone with the Central European timezone, which is one
> #####      hour east of Greenwich but effectively 2 hours east
> #####      during summer Daylight Savings Time.  The upper two
> #####      lines should correspond to your local time when the
> #####      files were created, on 19 November 1998 at 10:46pm CET.
> #####      If the times are consistent, please ignore this warning.)
> #####  testing unzip -d exdir option
> Archive:  testmake.zip
>   inflating: testun/notes
> This file is part of testmake.zip for UnZip 5.4 and
> later.  It has DOS/OS2/NT style CR-LF line-endings.
> It's pretty short.
> #####  testing unzip -o and funzip (ignore funzip warning)
> funzip warning: zipfile has more than one entry--rest ignored
> #####  testing unzipsfx (self-extractor)
> UnZipSFX 6.00 of 20 April 2009, by Info-ZIP (http://www.info-zip.org).
> error: invalid zip file with overlapped components (possible zip bomb)
> make: *** [Makefile:501: check] Error 12



Portage 2.3.76 (python 3.6.9-final-0, default/linux/x86/17.0, gcc-8.3.0, glibc-2.29-r2, 4.19.72-gentoo-x86 i686)
=================================================================
System uname: Linux-4.19.72-gentoo-x86-i686-Intel-R-_Core-TM-_i7-3770K_CPU_@_3.50GHz-with-gentoo-2.6
KiB Mem:     3106552 total,   1951724 free
KiB Swap:     488276 total,    485964 free
Timestamp of repository gentoo: Sun, 27 Oct 2019 20:45:58 +0000
Head commit of repository gentoo: db01be01382eb338146df36c2dd6a15c6ddf9ebb

sh bash 4.4_p23-r1
ld GNU ld (Gentoo 2.32 p2) 2.32.0
app-shells/bash:          4.4_p23-r1::gentoo
dev-java/java-config:     2.2.0-r4::gentoo
dev-lang/perl:            5.28.2-r1::gentoo
dev-lang/python:          2.7.16::gentoo, 3.6.9::gentoo
dev-util/cmake:           3.14.6::gentoo
sys-apps/baselayout:      2.6-r1::gentoo
sys-apps/openrc:          0.41.2::gentoo
sys-apps/sandbox:         2.13::gentoo
sys-devel/autoconf:       2.13-r1::gentoo, 2.69-r4::gentoo
sys-devel/automake:       1.16.1-r1::gentoo
sys-devel/binutils:       2.32-r1::gentoo
sys-devel/gcc:            8.3.0-r1::gentoo
sys-devel/gcc-config:     2.0::gentoo
sys-devel/libtool:        2.4.6-r3::gentoo
sys-devel/make:           4.2.1-r4::gentoo
sys-kernel/linux-headers: 4.19::gentoo (virtual/os-headers)
sys-libs/glibc:           2.29-r2::gentoo
Repositories:

gentoo
    location: /usr/portage
    sync-type: git
    sync-uri: https://github.com/gentoo-mirror/gentoo.git
    priority: -1000

ABI="x86"
ABI_X86="32"
ACCEPT_KEYWORDS="x86"
ACCEPT_LICENSE="*"
ACCEPT_PROPERTIES="*"
ACCEPT_RESTRICT="*"
ADA_TARGET="gnat_2018"
ANT_HOME="/usr/share/ant"
ARCH="x86"
BROOT=""
CBUILD="i686-pc-linux-gnu"
CFLAGS="-O2 -pipe -march=pentium4m -mtune=pentium4m -Wno-error=jump-misses-init -Wno-error=sign-compare"
CHOST="i686-pc-linux-gnu"
CHOST_x86="i686-pc-linux-gnu"
COLLISION_IGNORE="/lib/modules/*"
CONFIG_PROTECT="/etc /usr/share/config /usr/share/gnupg/qualified.txt"
CPU_FLAGS_X86="mmx mmxext sse sse2"
CXXFLAGS="-O2 -pipe -march=pentium4m -mtune=pentium4m -Wno-error=jump-misses-init -Wno-error=sign-compare"
DEFAULT_ABI="x86"
EDITOR="/usr/bin/mcedit"
ELIBC="glibc"
ENV_UNSET="DBUS_SESSION_BUS_ADDRESS DISPLAY GOBIN PERL5LIB PERL5OPT PERLPREFIX PERL_CORE PERL_MB_OPT PERL_MM_OPT XAUTHORITY XDG_CACHE_HOME XDG_CONFIG_HOME XDG_DATA_HOME XDG_RUNTIME_DIR"
EPREFIX=""
EROOT="/"
ESYSROOT="/"
FCFLAGS="-O2 -march=i686 -pipe"
FEATURES="assume-digests binpkg-docompress binpkg-dostrip binpkg-logs cgroup config-protect-if-modified distlocks downgrade-backup ebuild-locks fixlafiles ipc-sandbox merge-sync multilib-strict network-sandbox news parallel-fetch pid-sandbox preserve-libs protect-owned sandbox sfperms strict unknown-features-warn unmerge-logs unmerge-orphans userfetch userpriv usersandbox usersync xattr"
FFLAGS="-O2 -march=i686 -pipe"
GCC_SPECS=""
GRUB_PLATFORMS="efi-32 pc"
GSETTINGS_BACKEND="dconf"
HOME="/root"
INFOPATH="/usr/share/gcc-data/i686-pc-linux-gnu/8.3.0/info:/usr/share/binutils-data/i686-pc-linux-gnu/2.32/info:/usr/share/info"
INPUT_DEVICES="libinput keyboard mouse"
IUSE_IMPLICIT="abi_x86_32 prefix prefix-guest prefix-stack"
JAVAC="/etc/java-config-2/current-system-vm/bin/javac"
JAVA_HOME="/etc/java-config-2/current-system-vm"
JDK_HOME="/etc/java-config-2/current-system-vm"
KERNEL="linux"
L10N="en en-US de de-DE"
LANG="en_US.UTF-8"
LC_ALL="en_US.UTF-8"
LC_MESSAGES="C"
LC_PAPER="de_DE.UTF-8"
LDFLAGS="-Wl,-O1 -Wl,--as-needed"
LIBDIR_x86="lib"
LINGUAS="en de"
LOGNAME="root"
MAIL="/var/mail/root"
MAKEOPTS="--jobs 5 --load-average 7.95"
MANPAGER="manpager"
MULTILIB_ABIS="x86"
NETBEANS_MODULES="apisupport cnd groovy gsf harness ide identity j2ee java mobility nb php profiler soa visualweb webcommon websvccommon xml"
NOCOLOR="true"
OFFICE_IMPLEMENTATION="libreoffice"
OLDPWD="/usr/portage"
OPENCL_PROFILE="ocl-icd"
OPENGL_PROFILE="xorg-x11"
PAGER="/usr/bin/less"
PATH="/usr/lib/llvm/8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/bin"
PHP_TARGETS="php7-1 php7-2 php7-3"
POSTGRES_TARGETS="postgres10 postgres11"
PWD="/usr/portage/app-arch/unzip"
PYTHONDONTWRITEBYTECODE="1"
PYTHON_SINGLE_TARGET="python3_6"
PYTHON_TARGETS="python2_7 python3_6"
QT_GRAPHICSSYSTEM="raster"
ROOT="/"
ROOTPATH="/usr/lib/llvm/8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/bin"
RUBY_TARGETS="ruby24 ruby25"
SHELL="/bin/bash"
SHLVL="2"
SSH_TTY="/dev/pts/0"
SYSROOT="/"
TERM="tmux-256color"
TMUX="/tmp//tmux-0/default,5222,0"
TMUX_PANE="%4"
TWISTED_DISABLE_WRITING_OF_PLUGIN_CACHE="1"
USER="root"
USERLAND="GNU"
VIDEO_CARDS="vmware"
XDG_CONFIG_DIRS="/etc/xdg"
XDG_DATA_DIRS="/usr/local/share:/usr/share"


Comment 1


Thomas Deutschmann (RETIRED)


gentoo-dev


2019-10-27 22:19:36 UTC

Looks like an x86 problem. I am unable to reproduce on amd64.


Comment 2


Thomas Deutschmann (RETIRED)


gentoo-dev


2019-10-28 00:08:35 UTC

Reported upstream.


Comment 3


Rolf Eike Beer


archtester


2019-10-30 20:30:12 UTC

same on hppa


Comment 4


Alexey



2019-11-03 21:28:29 UTC

I've got this on arm.


Comment 5


ernsteiswuerfel


archtester


2020-01-21 00:28:09 UTC

Same on ppc.


Comment 7


Thomas Deutschmann (RETIRED)


gentoo-dev


2020-03-20 00:59:34 UTC

I have seen this but for a yet unknown reason I have lost my reproducer (i.e. app-arch/unzip-6.0_p25 doesn't fail anymore), so I cannot really verify.


Comment 8


Rolf Eike Beer


archtester


2020-03-25 19:50:24 UTC

Can still reproduce this on hppa, and the patch fixes it.
