Error: EBADF: bad file descriptor, read


Contents

  1. Error: EBADF: bad file descriptor, read when parsing over Open.file() #104
  2. Comments
  3. EBADF error sometimes happened after close event #25335
  4. Comments
  5. Error: EBADF, read #4201
  6. Comments
  7. Error: EBADF: bad file descriptor, read #30
  8. Comments
  9. use of jruby causes “Errno::EBADF: Bad file descriptor” error #5249
  10. Comments

Error: EBADF: bad file descriptor, read when parsing over Open.file() #104

Thank you for Open.file(), which actually saves a lot of time. However, most of the time I get a very strange error. The problem is that it's very random, always raised after a different file:

Here is a sample of how I use it:
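(The author's original snippet was not preserved in this thread; the following is only a rough sketch of the kind of usage described, based on unzipper's documented Open.file() API and a hypothetical archive.zip:)

```js
// Rough sketch (not the issue author's actual code): open the ZIP via its
// central directory and read every entry sequentially with promises.
const unzipper = require('unzipper');

async function parseArchive(path) {
  const directory = await unzipper.Open.file(path); // parses the central directory
  for (const file of directory.files) {             // one entry at a time
    const content = await file.buffer();            // streams just this entry
    console.log(file.path, content.length);
  }
}

parseArchive('archive.zip').catch(console.error);
```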

Do you have any ideas what it can be?


It looks like the problem is in using a promise inside the while loop. I just tested with a simple forEach and it works fine. I wanted to prevent memory leakage, but it seems that's the reason.

I had the same problem.

The way I solved it was through a comment in the Node source (based on 10.15.0).
Following that comment, I kept the stream reference alive for a period of time, and the problem no longer occurred.

Try modifying your code like this:
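(The commenter's exact patch is not shown here; a minimal sketch of the idea, keeping each entry stream referenced for roughly 100 ms so it cannot be garbage-collected while a read is still in flight, might look like this:)

```js
// Sketch of the workaround described above: hold a reference to each entry
// stream for a short time so it stays reachable until its reads have finished.
const keepAlive = new Set();

function readEntry(file) {
  const stream = file.stream();                     // unzipper entry stream
  keepAlive.add(stream);                            // keep it reachable
  setTimeout(() => keepAlive.delete(stream), 100);  // the "magic number"
  return new Promise((resolve, reject) => {
    const chunks = [];
    stream.on('data', (chunk) => chunks.push(chunk));
    stream.on('end', () => resolve(Buffer.concat(chunks)));
    stream.on('error', reject);
  });
}
```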

However, I do not know whether the magic number 100 for the timeout is valid in all environments.

Thank you for reading my comment.

I have the same issue and I believe the issue should be open until solved.

I am able to reproduce this random unzipper crash too. I execute the same routines with yauzl and node-zip-stream as part of a benchmark suite, and only unzipper exhibits this behaviour.

Same story here. @danielweck, can you share an example with yauzl?

Stacktrace / debug info:

node --version
=>
v10.16.3

Thanks @danielweck. Can you give me more detail on how I can replicate the problem on my side, i.e. your test suite? Do you have a branch or a commit hash where I can run the failing tests? It would be really valuable to get to the bottom of this and fix it.

It's worth noting that unzipper.Parse is the legacy implementation within unzipper. The Open methods are far more robust: they stream only the files of interest and properly use the information in the central directory.


Here is a repro script which can be used to compare unzipper (the only lib that crashes in my tests), yauzl and node-zip-stream:
#157

I should point out that I tried various process.nextTick() and setTimeout() techniques in order to prevent premature garbage collection of unzipper streams (in fact, I even tried storing the stream references in a global array, to no avail):

Lines 24 to 28 in b0e3d93

file: function (filename, options) {
  var source = {
    stream: function (offset, length) {
      return fs.createReadStream(filename, { start: offset, end: length && offset + length });
    },

The bug does not occur consistently and is therefore hard to reproduce, which is why a repro script comes in handy.

I suspect that devs who tried the setTimeout() technique "successfully" (as mentioned in this discussion thread) didn't stress-test hard enough, resulting in a false sense of success. Using a script to automate the opening/closing of ZIP entry streams (a sketch follows), I am able to reproduce the bug quite regularly.
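(The actual benchmark/repro code lives in #157 and is not reproduced here; a stress loop in this spirit, repeatedly opening an entry stream and tearing it down immediately, is sketched below:)

```js
// Stress-loop sketch: open and immediately destroy entry streams so that any
// premature fd close eventually surfaces as an EBADF 'error' event.
const unzipper = require('unzipper');

async function stress(path, iterations = 1000) {
  for (let i = 0; i < iterations; i++) {
    const directory = await unzipper.Open.file(path);
    const entry = directory.files[0].stream();
    entry.on('error', (err) => console.error('iteration', i, err)); // EBADF shows up here
    entry.resume();    // start reading
    entry.destroy();   // ...and tear the stream down right away
  }
}

stress('archive.zip').catch(console.error);
```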

Source

EBADF error sometimes happened after close event #25335

  • Version:
    v10.15.0
  • Platform:
    Windows 7 64-bit
  • Subsystem:
    fs

Sometimes an error event occurs after the close event (probably because the file is closed while a read is still in progress), but I cannot tell when the stream is ready for the file to be closed, so I cannot prevent this error.

Here is an example to reproduce:
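(The original example is not included here; a sketch of the pattern being described, a ReadStream destroyed while piped and possibly mid-read, is:)

```js
// Sketch of the reported pattern: destroy a fs.ReadStream while it is piped
// and a read may still be in flight, then watch for a late 'error' event.
const fs = require('fs');
const { PassThrough } = require('stream');

for (let i = 0; i < 100; i++) {
  const readStream = fs.createReadStream(__filename);
  const passthrough = new PassThrough();

  readStream.on('close', () => console.log('close', i));
  readStream.on('error', (err) => console.error('error after close', i, err));

  readStream.pipe(passthrough);
  passthrough.once('data', () => readStream.destroy()); // close the fd mid-read
}
```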

The result can look like this:


Please note that the error can also happen on Linux, but before the close event:

I don’t know if I agree this is a bug. To me it’s expected behavior: the test is simultaneously reading from the file and closing the file descriptor; in other words, it contains a race condition. You should unpipe first and wait for the ‘unpipe’ event before closing.

(That said, I can’t reproduce locally with master, but that’s race conditions for you.)
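(Concretely, the unpipe-then-destroy ordering suggested above would look roughly like this; a sketch, not code from the issue, assuming the ReadStream is piped into a PassThrough as in the repro:)

```js
// Sketch of the suggested ordering: stop piping, wait for the destination's
// 'unpipe' event, and only then destroy the ReadStream (which closes the fd).
function stopReading(readStream, passthrough) {
  passthrough.once('unpipe', () => {
    readStream.destroy(); // no further reads should be scheduled for this pipe
  });
  readStream.unpipe(passthrough);
}
```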

I have to run the example multiple times to reproduce, but in my real case I attempt to destroy the readable stream in the writable's close event (before the read reaches end of file, because an error happens), so shouldn't it already be unpiped? Anyway, I don't see anywhere in the documentation that the destroy method can't be called at any time, so this should at least be documented.
IMHO, destroy should wait for the end of the current read operation (and ignore the read result) before closing the file descriptor.

Sometimes I have to run the example multiple times to reproduce (it's more difficult on Linux); you may increase the count to 1000 to have a better chance of reproducing it.

And this is still reproducible with the unpipe event:

An example of result:

@bnoordhuis It seems that waiting for the readable event can work around this issue (there is probably no read in progress while this event is emitted). It seems to work, but I guess this is not optimal (note that I tried on big files too):

@nicolashenry Your example from #25335 (comment) doesn’t call readStream.unpipe(passthrough) explicitly, it only waits for the ‘unpipe’ event.

@bnoordhuis So the 'unpipe' event is not the result of an unpipe? From what I understood, destroying a stream calls unpipe, since the 'unpipe' event is triggered. :/

Anyway it is still happening with explicit unpipe call:

@bnoordhuis It seems that unpipe is not enough to destroy the ReadStream safely, so do you consider this a bug?

@bnoordhuis Do you need more information about this issue?

If you need more context: I have multiple read streams that can be closed at any time while being piped, and I occasionally get EBADF errors (it's very rare, but it creates some annoying logs).
I tried to unpipe and wait for the 'unpipe' event as you suggested, but it does not change anything.

I think I could do a pull request to fix this, because I already tried forking Node's file ReadStream and patching the _destroy method to wait until the file descriptor is no longer in use before closing it, and that definitely fixed this issue. However, I can't maintain this fork forever. Also, if I do a pull request, I'm sure I can't write a useful test for it, because this is a race condition.

Source

Error: EBADF, read #4201

Happens on random source file modifications and subsequent client refreshes.
Meteor 1.1 for Windows


Is there a full stack trace you can show?

How do I get it?

I mean, please show more context about what was printed to your screen than just one line.

Hmm, this is definitely strange. fs.readFile is failing because an fd becomes invalid between open and fstat?

Maybe a rogue close somewhere else in the process? Would be easier to track down if it were replicable.

Unfortunately there’s nothing more to the error, only this description and timestamp:
W20150415-09:10:20.648(4)? (STDERR) Error: EBADF, read

This error happens on random server restarts and client refreshes.

The error looks like this

Yep, and sometimes it becomes

I also get this sometimes when saving files, I think. I use Mac OS X 10.10.2, Meteor 1.1.0.2 and WebStorm 10 with the option "Use safe write (save changes to a temporary file first)" turned on. I just mention this because maybe it has something to do with producing this error. This option is under "Appearance & Behavior" -> "System settings".

OK, I've got something new:

I am seeing this periodically on Mac, Meteor 1.1.0.2.

I receive the same error in Windows 8.1 x64 and Ubuntu 14.04 x64.
The "EBADF, read" error does not have an effect on the running application. However, the "Error serving static file" results in a file (in my case, an mp3 clip) from the static/ folder not being played.

This is definitely a real thing. I just closed a thread on meteor/windows-preview with 35 comments about this; let's move the main discussion here.

This comes from the StackOverflow thread here.

Essentially, every time the Meteor server refreshes (i.e. after a file change), it spits out this error to the console / command window:

Note that the functionality seems to be unaffected (I was able to run through the entire standard tutorial on the Meteor.com site without problems). Just thought I’d log it and was directed to do so here.

This error is spamming my console too. Doesn’t seem to be affecting anything (everything seems to be working fine), but it is extremely annoying.

I noticed that (at least in 1.2.1) these errors only show up when I open the browser's debug tools.

I can consistently reproduce it: if I empty the browser cache and reload the page with developer tools closed, there are no errors. As soon as I open developer tools I get these errors. The second time I open the developer tools (even after a soft reload), I don't get these errors, because of the browser cache.

Therefore I think these errors are related to the missing locations the source maps refer to: #5142

@sebakerckhof Are you able to create a small consistent minimized reproduction we can experiment with?

This is causing a lot of lost productivity here at Workpop. It's not something we can easily repro: as we're developing, Meteor will cold crash between 3 and 15 times a day, depending on the day, and we have to manually restart it. With the slower 1.2.1 build times it's becoming really impactful, unfortunately. I'm more than happy to run an instrumented tool that could help you debug this with logs or whatever is needed when it happens; that tends to have better outcomes for heisenbugs like this than trying to narrow down repro steps.

PS: You guys have our dev sub for a repro environment, albeit not very minimized 🙂

@glasser How did you deduce that "fs.readFile is failing because an fd becomes invalid between open and fstat"?

So @Slava ‘s stack trace in windows-preview was:

(I just re-enabled and re-disabled issues on windows-preview to get that)

and so what that means is that fs.readFile is returning EBADF from fstat. What EBADF means is that something tried to operate on an fd that is not valid. The only fstat in fs.readFile comes right after the fs.open.

So most likely this means some concurrent part of the process is calling close() on the wrong file descriptor. Probably a double-close: i.e., the buggy code does open() -> 42, close(42), close(42), and in between the two close(42) calls our friend fs.readFile gets its own open() -> 42.

It could also indicate a concurrency bug in how we wrap fs.readFile into files.readFile, but that’s less likely.
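(To illustrate the double-close theory, here is a contrived sketch, not Meteor code: a second close() of an already-released descriptor number can hit whatever unrelated open() has recycled it in the meantime:)

```js
// Contrived sketch of a double-close race: the second fs.close() may land on
// an fd number that an unrelated fs.readFile() has already been handed.
const fs = require('fs');

fs.open(__filename, 'r', (err, fd) => {
  if (err) throw err;
  fs.close(fd, () => {
    // Unrelated code opens a file; the kernel will likely reuse the descriptor
    // number that was just released.
    fs.readFile(__filename, (err2) => {
      // May be EBADF if the rogue close below fired while this read was in progress.
      if (err2) console.error(err2);
    });
    // BUG: second close of the same numeric fd. If it was recycled by the
    // readFile() above, this closes that file out from under it.
    setTimeout(() => fs.close(fd, () => {}), 0);
  });
});
```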

Oh, it's also what @sanjo's trace is showing.

This is happening to me using OS X 10.11.3 with WebStorm 10. It happens the most when I save the same file frequently. I might hit save, then quickly fix a typo, then hit save again. The crash occurs while Meteor is building/refreshing. If I wait for it to finish before I hit save again, it won't happen. I was not able to determine the right timing to make this reliably reproducible, but it happens fairly often.

Full stack trace:

The same happens on OS X.

/Users/firfi/.meteor/packages/coffeescript/.1.0.16.1uhzf8w++os+web.browser+web.cordova/plugin.compileCoffeescript.os/npm/node_modules/meteor/promise/node_modules/meteor/promise/node_modules/meteor-promise/promise_server.js:116
throw error;
^
Error: EBADF, fstat

I can reproduce this issue on macOS Sierra 10.12.1 (have tried both Node v4 and v6). It happens when I try to install the nathantreid:css-modules package. Basic steps:

After a couple of minutes of installing the nathantreid:css-modules package, it consistently fails right at the end with the following stack trace:

Any ideas? This is preventing me from upgrading to 1.4.2.

I’ve been receiving a similar error since last Friday afternoon. About 70% of the time, when I try to run the following command:

Meteor version 1.4.2
Mac OSX Sierra 10.12.1

Not sure if it’s related, but I also haven’t been able to use spacejam since then.

Source

Error: EBADF: bad file descriptor, read #30

Hi, I’m raising this bug a bit early as I’m still in the middle of debugging a pretty serious bug we’ve discovered in the latest release of Ghost and will update as I go trying to figure out what’s happening.

I’m getting an error in extract-zip@1.5.0:

Error: EBADF: bad file descriptor, read

I get a similar error in extract-zip@1.3.0:

This is happening when extracting a particular zip file. It's not my zip, so I can't share it publicly, and I'm still trying to figure out what the distinguishing factor is with this zip. The tripping factor appears to be a .git folder at the moment.

The first thing I did is upgrade, which gave me a lower level error.

Additionally, what I have done is a) research the error message and determine that it is probably something to do with trying to call .close() on a file descriptor that is already closed, and b) write a 50-line version of extract using the example in the README file of the yauzl repository (sketched below) and verify that this does not give me an error.
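(That ~50-line version is not attached to the issue; the yauzl README pattern it refers to is essentially the following sketch, using yauzl's documented lazyEntries mode:)

```js
// Sketch of the yauzl README pattern: lazy entry reading, pulling the next
// entry only after the previous entry's stream has ended.
const yauzl = require('yauzl');

yauzl.open('archive.zip', { lazyEntries: true }, (err, zipfile) => {
  if (err) throw err;
  zipfile.readEntry();
  zipfile.on('entry', (entry) => {
    if (/\/$/.test(entry.fileName)) {
      zipfile.readEntry();                 // directory entry: skip ahead
      return;
    }
    zipfile.openReadStream(entry, (err2, readStream) => {
      if (err2) throw err2;
      readStream.on('end', () => zipfile.readEntry());
      readStream.pipe(process.stdout);     // or an fs.createWriteStream(...)
    });
  });
  zipfile.on('end', () => console.log('done'));
});
```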

Therefore, I’m relatively confident I’ve found a bug, just not sure what it is yet, and wanted to flag it up early in the hope that someone else might have a clue!

This is happening on Node v4.4.7 on my local machine, and also on Ubuntu. It is reproducible when uploading the offending zip to http://gscan.ghost.org. Will be back soon hopefully with a sharable zip that reproduces the error.

/me goes back to debugging.


Source

use of jruby causes “Errno::EBADF: Bad file descriptor” error #5249

This is posted to StackOverflow as well.

I've got a little executable we'll call "decode" (written in C) that takes a block of data on stdin (an image file), converts it, and spits it back to stdout. So from the Linux command line the following command works just fine:

I'm trying to wrap this binary in some Ruby code and eliminate the need for regular file I/O by using Open3.popen3. Here's the relevant section of Ruby code:

Variable f contains the block of data to convert. I was writing it to a file and then calling decode on the written file. When trying to run the above code from irb using JRuby, one gets the following error traceback (slightly sanitized):

The funny thing is that the exact same code works fine unchanged in irb if I’m using the system ruby interpreter, or rubinius (both of which I have installed and can switch between using rbenv).

Can anyone tell me what gives? I'm running Ubuntu Linux 18.04 LTS, and JRuby 9.2.0.0 (2.5.0). JRuby is the platform of choice because of speed and other considerations, so I need to get this working.


EBADF would generally mean that one of the IO streams to the subprocess has been shut down, perhaps prematurely.

Can you put together a small git repo that we can clone and run to see the problem locally?

Alright. I'll work on trying to put together a bit of code that reproduces the error. Might take me into next week. I didn't post until I'd pretty much exhausted Google and cross-tested every other alternative. Stay tuned.

OK, so it didn’t take too long to cook up a demo. Here’s a file-set on Github that should allow you to reproduce the problem.

Run the .rb file in JRuby and you should get the following error:

As I said before, the code runs fine with system ruby and rubinius (no error). With either of those, you’ll get a tiff file you can open with any image viewer program. Thanks for the help in figuring this out.

I think there's a bug in the demo script you provided; it can't find the decode executable. I changed it from Open3.popen3("decode") to Open3.popen3("./decode") and the ENOENT went away.

Unfortunately, I also did not get EBADF:

With my change, does the script give you an EBADF, or do we need to keep looking for a reproduction?

I've updated my script with your change, inserting a "./" in front of decode. I get the same error.

Here’s a file listing in case there’s something wrong there (I can’t see what)

And for good measure I’ve run the executable manually as follows:

user@host:/tmp/jruby-bug$ ls
bugdemo.rb  decode  README.md  test.kob  test.tiff

It produced the expected tiff file without issue. Switching over to system ruby with rbenv global system and running the script again produces the following result:

Ditto for rbx (Rubinius), which I also have installed. Clearly, there's something wrong with my installation of JRuby.

P.S. I've updated the Git repo with the minor tweak to bugdemo.rb.

I found this explanation from a post of yours quite some time ago. I don’t know if it is relevant or not.

In JRuby, because we don't normally have access to the "real" file
descriptor for any IO channel, all our logic for fileno is basically
fake. We keep an artificial list of numbers that map to IO channels
and use that as our file descriptor table. In this case, you’re
pulling in a real file descriptor from the system, which does not
exist in our table, so we raise an error.

In order for us to support arbitrary descriptors in our IO we’d
probably need an implementation of IO that used all low-level C APIs
rather than Java IO APIs. There’s currently no (public) way to get
from a file descriptor to an IO channel in Java APIs.

So I’ll divert this issue by asking: what does this get you that our
built-in tempfile support does not? We do not use MRI’s tempfile.rb;
we’ve implemented our own on top of Java’s tempfile support that
performs quite a bit better. Perhaps there’s a missing feature we can
add.

My point in using pipes is to improve performance over std disk-based file I/O. The code I’ve abstracted for the demo is part of a larger system that performs repeated conversions, hundreds of thousands of them in fact, and needs to do so as quickly as possible. That’s why the code that actually does the converting is written in C and compiled. That’s just step 1 though as I need to get the data to convert to the binary, and the converted data back from it, as quickly and as efficiently as possible.

The earlier version of the code that used temporary files worked, but it also occasionally threw an "Errno::EBADF: Bad file descriptor" error. I'm using multiple threads in a worker pool (Celluloid) to fire off as many of these conversions at once as the system can handle. It smelled to me like the temp-file version was running into a resource problem, but given that standard file I/O is theoretically not as efficient as pipes, I moved to pipes. But I'm not even getting off first base with the new pipe-based code.

If there's a better way to do what I need to do in JRuby, I'm open.

Just tried the bug demo code with JRuby 9.2.1. No joy.

I am traveling but will try to look at this again soon. I suspect it is a difference in how we do IO or handle errors on Linux versus MacOS (I’m testing on MacOS).

@enebo Can you reproduce this?

@voyager131 @headius ok something fascinating here. If I run it using Java 8 it runs quickly. If I run with Java 9 or Java 10 it just sits there. Here is a dump:

I do not seem to get EBADF but it just doesn’t work.

That is indeed interesting. I am running Java 10.

You guys will have to be the judges though of what that might mean or how best to proceed with this new information. I know next to nothing about java, and only use it.

Ah-ha, Java 10. I think that’s a detail I missed, but it could be important. Could you try downloading a Java 8 SDK and see if it works?

The problem on Java 9+ is that we need to conform to some new standards about reflectively digging around inside the JDK classes (something we call "cracking them open"). In the case of process IO, this may mean that we are unable to properly represent native IO because we have to use the JDK IO classes as-is.

I’m pulling 9 to my VM now over a slow connection and then I should be able to confirm this.

Ok, I get a slightly different error than you do when I use Java 9, but it’s failing:

Interestingly, it sometimes succeeds for me.

Ok, some good news!

If I modify your script to just use IO.popen (which doesn’t provide access to stderr) it appears to work every time.

The problem you’re seeing seems to stem from the implementation of popen3 and how we handle it.

OK, more information.

It does appear that popen3 is getting a proper native PID for a directly-launched subprocess, rather than one faked through JDK classes. So my original theory about Java 9 interfering with the process launch seems to not be true.

However if I pause your script inside the popen3 block and check whether a thread has actually been started for wait_thr, I see only the main thread running. This may not be related to your issue, but it seems that Process.detach is having some kind of trouble.

So I have Java 8 installed and working alongside Java 10. I can switch between them on the command line just fine using:

sudo update-alternatives --config java
sudo update-alternatives --config javac

However, this does not change the version of Java that JRuby uses, as jruby --version yields the same result (see above) regardless of the setting I choose using the above commands. I'm working on figuring out how to fix that. Since I use rbenv, it might be nice to have another version of JRuby linked to a different Java or something.

Source

Comments

@ThePlenkov

seebees added a commit to seebees/aws-encryption-sdk-javascript (and later to aws/aws-encryption-sdk-javascript) that referenced this issue on Feb 27, 2020:

`unzipper` would throw an FD error occasionally.
Relevant issue: ZJONSSON/node-unzipper#104
There is a pending PR,
but it seemed simpler to simply move to `yauzl`.

The error:

```
events.js:174
      throw er; // Unhandled 'error' event
      ^

Error: EBADF: bad file descriptor, read
Emitted 'error' event at:
    at lazyFs.read (internal/fs/streams.js:165:12)
    at FSReqWrap.wrapper [as oncomplete] (fs.js:467:17)
```

When using the _read(..) function, it returns -1 (i.e., nothing is read at all).

Quote:

descriptor = _open(FileName,1);

_lseek(descriptor,0,0);
int len(0);
int i = _read(descriptor,&len,1);

From MSDN:

Quote:

the file is not open for reading, or the file is locked, the function returns –1 and sets errno to EBADF

Tracing the call shows that the error passed back is EBADF.
Can someone please explain what this error means here? The call _write(descriptor, "test", 4) writes to the file without any problem (i.e., the file is open and the descriptor is valid).
What can cause EBADF?

6 replies

Zorkus (17 February 2007):


Well, _write writes without problems probably because you opened the file for writing :) and the descriptor was created for writing to the file. To read from the file, pass _O_RDWR as the second parameter to _open.
P.S. In general, I strongly recommend using the named constants in cases like this, even if that is not the actual cause of the error here.

I also have a question about _O_RDWR =)

Quote:

error C2065: ‘_O_RDWR’ : undeclared identifier

Why? <io.h> is included.

I figured it out myself by trial and error =)
_open(fd, _O_RDWR) <=> _open(fd, 2)

But can anyone explain why?!

The_Ice (18 February 2007):

First of all, what exactly is the question? What is "<=>" supposed to mean, less-or-equal or greater? Use the defines, it is much clearer that way. If the function parameter is documented as _O_RDWR, why bother with magic twos at all?

kosfiz (18 February 2007):

Why by trial and error? It is written right in fcntl.h:

#define _O_RDWR 0x0002 /* open for reading and writing */

As for the earlier question ("error C2065: '_O_RDWR': undeclared identifier. Why? <io.h> is included"): because you need to include fcntl.h, since _O_RDWR is not in io.h at all.


  • User365604 posted

    Before I updated to Visual Studio 2019 and Android 10 (Q), I successfully installed third-party apps with my app using the following code.

    PackageInstaller installer = activity.PackageManager.PackageInstaller;
    PackageInstaller.SessionParams sessionParams = new PackageInstaller.SessionParams(PackageInstallMode.FullInstall);
    int sessionId = installer.CreateSession(sessionParams);
    PackageInstaller.Session session = installer.OpenSession(sessionId);
    
    var input = new FileStream(pfad, FileMode.Open, FileAccess.Read);
    var packageInSession = session.OpenWrite("package", 0, -1);
    input.CopyTo(packageInSession);
    packageInSession.Close();
    input.Close();
    packageInSession.Dispose();
    input.Dispose();
    
    //That this is necessary could be a Xamarin bug.
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();
    
    Intent intent = new Intent(activity, activity.Class);
    intent.SetAction("com.example.android.apis.content.SESSION_API_PACKAGE_INSTALLED");
    PendingIntent pendingIntent = PendingIntent.GetActivity(activity, 0, intent, 0);
    IntentSender statusReceiver = pendingIntent.IntentSender;
    
    // Commit the session (this will start the installation workflow).
    session.Commit(statusReceiver);
    

    When I Dispose() the streams, I get an IOException: write failed (EBADF) bad file descriptor, which would indicate a bad APK.

    But this is unlikely, because the same code works in Visual Studio 2017 with the Android 9 target.

    Hope somebody can help me and thank you in advance!

Answers

  • User394637 posted

    I have gotten APK installation working in Android Q with Xamarin.
    There are a couple of things you need to make sure of in order for the APK installation to succeed:
    • Do not use using statements inside the addApkToInstallSession method. The Dispose causes the installation to fail. Use try/catch and Close() instead:

        private static void addApkToInstallSession(Context context, Android.Net.Uri apkUri, PackageInstaller.Session session)
        {
          var packageInSession = session.OpenWrite("package", 0, -1);
          var input = context.ContentResolver.OpenInputStream(apkUri);
    
          if (input != null)
          {
            input.CopyTo(packageInSession);
          }
          else
          {
            throw new Exception("Inputstream is null");
          }
    
          packageInSession.Close();
          input.Close();
    
          //That this is necessary could be a Xamarin bug.
          GC.Collect();
          GC.WaitForPendingFinalizers();
          GC.Collect();
        }
    
    • The Activity where you override the "OnNewIntent" method must have LaunchMode set to LaunchMode.SingleTop
    • The user must have given the application from which you try to install the APK file the necessary permissions to install APKs. You can check whether this is the case by calling PackageManager.CanRequestPackageInstalls(). If this function returns false, you can open the application options window with this code:

      StartActivity(new Intent(
                      Android.Provider.Settings.ActionApplicationDetailsSettings,
                      Android.Net.Uri.Parse("package:" + Android.App.Application.Context.PackageName)));
      

    so the user can easily set the switch to enable this.

    • If you are debugging on a Xiaomi device, you must disable MIUI Optimizations under developer options. Otherwise the installation will fail with a permission denied error.
