sqlite3.OperationalError: disk I/O error


Hi Paul,
I am running FindFungi on the demo ERR675624 FASTQ file on a cluster (CentOS Linux release 7.4.1708). I removed bsub and got the following results.
There are some errors I can't fix; could you help me take a look? Thank you.

  • output files in ./ERR675624/FindFungi/Results

128K Oct 21 09:34 BLAST_Processing
24 bytes Oct 21 11:32 ERR675624.WordCloud.R
0 bytes Oct 21 11:32 Final_Results_ERR675624-lca.sorted.csv
21 bytes Oct 21 11:29 Final_Results_ERR675624-taxids.txt
91K Oct 21 05:48 Final_Results_ERR675624.tsv
2.9K Oct 21 11:30 Final_Results_ERR675624.tsv_AllResults-taxids.txt
11M Oct 21 05:48 Final_Results_ERR675624.tsv_AllResults.tsv
0 bytes Oct 21 11:32 Final_Results_ERR675624_AllResults-lca.sorted.csv

The CSV files are always empty, with the errors shown below.

  • source script code
$ScriptPath/LowestCommonAncestor_V4.sh $Dir/Results/Final_Results_$z.tsv
$ScriptPath/LowestCommonAncestor_V4.sh $Dir/Results/Final_Results_$z.tsv_AllResults.tsv
  • error information (using ete3 3.1.1)
Uploading to /users/username/.etetoolkit/taxa.sqlite
Traceback (most recent call last):
  File "/users/username/tools/FindFungi-v0.23.3/LowestCommonAncestor_V4.py", line 26, in <module>
    ncbi = NCBITaxa()
  File "/users/username/username/tools/conda3/lib/python2.7/site-packages/ete3/ncbi_taxonomy/ncbiquery.py", line 120, in __init__
    self.update_taxonomy_database(taxdump_file)
  File "/users/username/username/tools/conda3/lib/python2.7/site-packages/ete3/ncbi_taxonomy/ncbiquery.py", line 129, in update_taxonomy_database
    update_db(self.dbfile)
  File "/users/username/username/tools/conda3/lib/python2.7/site-packages/ete3/ncbi_taxonomy/ncbiquery.py", line 760, in update_db
    upload_data(dbfile)
  File "/users/username/username/tools/conda3/lib/python2.7/site-packages/ete3/ncbi_taxonomy/ncbiquery.py", line 791, in upload_data
    db.execute(cmd)
sqlite3.OperationalError: disk I/O error
Done
  • error information (using an older version of ete3)
/users/username/tools/FindFungi-v0.23.3//LowestCommonAncestor_V4.sh: line 14: 28870 Segmentation fault      python2.7 ~/tools/FindFungi-v0.23.3/LowestCommonAncestor_V4.py ${1} ${y}-taxids.txt ${y}-lca.csv
Done
/users/username/tools/FindFungi-v0.23.3//LowestCommonAncestor_V4.sh: line 14: 28875 Segmentation fault      python2.7 ~/tools/FindFungi-v0.23.3/LowestCommonAncestor_V4.py ${1} ${y}-taxids.txt ${y}-lca.csv
Done
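Before rerunning, it may be worth checking that ete3 can actually write its database: a "disk I/O error" during the taxa.sqlite upload usually points at a full or non-writable ~/.etetoolkit, or at a filesystem (such as some NFS-mounted cluster home directories) where sqlite3 cannot write or lock files. A minimal Python 3 sketch of that check (the path is the ete3 default; the space figure is only a rough guide):

```python
import os
import shutil
import sqlite3

# ete3 builds taxa.sqlite under ~/.etetoolkit by default.
db_dir = os.path.expanduser("~/.etetoolkit")
os.makedirs(db_dir, exist_ok=True)

# The finished database is large, so very low free space is a red flag.
free_gb = shutil.disk_usage(db_dir).free / 1e9
print("free space at %s: %.1f GB" % (db_dir, free_gb))

# Write a small database through sqlite3 itself to confirm the
# filesystem supports sqlite's I/O and locking.
probe = os.path.join(db_dir, "probe.sqlite")
conn = sqlite3.connect(probe)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()
conn.close()
os.remove(probe)
print("sqlite3 can write to", db_dir)
```

If the probe fails with the same OperationalError, the problem is the filesystem, not FindFungi, and moving the ete3 database to local disk should help.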

Best
Keli

Hi, 

My Spiceworks stop working awhile back, and I haven’t had time to investigate it until today. I’m getting this message in the startup log. Any ideas on how I can resolve this? 

Here’s a sample from the log. 

I, [2017-03-02T08:03:15.233975 #8288] INFO -- : ---------- Setting up to run Spiceworks ----------
I, [2017-03-02T08:03:15.233975 #8288] INFO -- : root => C:/Program Files (x86)/Spiceworks
I, [2017-03-02T08:03:15.233975 #8288] INFO -- : app root => C:/Program Files (x86)/Spiceworks/pkg/gems/spiceworks-7.2.00195
I, [2017-03-02T08:03:15.233975 #8288] INFO -- : port => 9675
I, [2017-03-02T08:03:15.233975 #8288] INFO -- : https port => 9676
I, [2017-03-02T08:03:15.233975 #8288] INFO -- : https required => false
I, [2017-03-02T08:03:15.233975 #8288] INFO -- : version => 7.2.00195
I, [2017-03-02T08:03:15.233975 #8288] INFO -- : environment => production
I, [2017-03-02T08:03:15.233975 #8288] INFO -- : verbose => false
I, [2017-03-02T08:03:15.233975 #8288] INFO -- : Starting spiceworks server (SCGI backend)
I, [2017-03-02T08:19:34.452502 #7992] INFO -- : ---------- Setting up to run Spiceworks ----------
I, [2017-03-02T08:19:34.514895 #7992] INFO -- : root => C:/Program Files (x86)/Spiceworks
I, [2017-03-02T08:19:34.514895 #7992] INFO -- : app root => C:/Program Files (x86)/Spiceworks/pkg/gems/spiceworks-7.2.00195
I, [2017-03-02T08:19:34.514895 #7992] INFO -- : port => 9675
I, [2017-03-02T08:19:34.514895 #7992] INFO -- : https port => 9676
I, [2017-03-02T08:19:34.514895 #7992] INFO -- : https required => false
I, [2017-03-02T08:19:34.514895 #7992] INFO -- : version => 7.2.00195
I, [2017-03-02T08:19:34.514895 #7992] INFO -- : environment => production
I, [2017-03-02T08:19:34.514895 #7992] INFO -- : verbose => false
I, [2017-03-02T08:19:34.514895 #7992] INFO -- : Starting spiceworks server (SCGI backend)
#<SQLite3::IOException: disk I/O error>
C:/Program Files (x86)/Spiceworks/pkg/gems/sqlite3-1.3.8/lib/sqlite3/database.rb:91:in `initialize'
C:/Program Files (x86)/Spiceworks/pkg/gems/sqlite3-1.3.8/lib/sqlite3/database.rb:91:in `new'
C:/Program Files (x86)/Spiceworks/pkg/gems/sqlite3-1.3.8/lib/sqlite3/database.rb:91:in `prepare'
C:/Program Files (x86)/Spiceworks/pkg/gems/sqlite3-1.3.8/lib/sqlite3/database.rb:134:in `execute'
C:/Program Files (x86)/Spiceworks/pkg/gems/spiceworks_lib-7.2.00195/database/safeguard.rb:61:in `spiceworks_versions'
C:/Program Files (x86)/Spiceworks/pkg/gems/spiceworks_lib-7.2.00195/database/safeguard.rb:49:in `is_beta_db?'
C:/Program Files (x86)/Spiceworks/pkg/gems/spiceworks_lib-7.2.00195/database/safeguard.rb:14:in `allowed?'
C:/Program Files (x86)/Spiceworks/pkg/gems/spiceworks_lib-7.2.00195/command/migrates_database.rb:15:in `check_and_migrate_database'
C:/Program Files (x86)/Spiceworks/pkg/gems/spiceworks_lib-7.2.00195/command/scgi_run.rb:12:in `run'
C:/Program Files (x86)/Spiceworks/pkg/gems/spiceworks_lib-7.2.00195/command.rb:13:in `block in run'
C:/Program Files (x86)/Spiceworks/pkg/gems/spiceworks_lib-7.2.00195/command.rb:12:in `each'
C:/Program Files (x86)/Spiceworks/pkg/gems/spiceworks_lib-7.2.00195/command.rb:12:in `run'
C:/Program Files (x86)/Spiceworks/pkg/gems/spiceworks-7.2.00195/bin/loader.rb:46:in `<main>'
spiceworks:in `eval'
spiceworks:in `load'
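The SQLite3::IOException is raised while reading the Spiceworks database during startup, so a first step is to tell a genuinely corrupt database file apart from a transient disk or locking problem using sqlite's built-in integrity check. A minimal Python sketch; the actual Spiceworks database path varies by install, so the example runs on a fresh in-memory database:

```python
import sqlite3

def integrity_check(db_path):
    """Run sqlite's built-in corruption check.

    Returns the string 'ok' for a healthy database, or a description
    of the problems found for a damaged one.
    """
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute("PRAGMA integrity_check").fetchone()[0]
    finally:
        conn.close()

# A fresh database passes; point db_path at the Spiceworks .db file
# (under the Spiceworks data directory) to check the real one.
print(integrity_check(":memory:"))
```

If the check reports problems, restoring the database from a backup is usually easier than repairing it; if it reports 'ok', look at disk health, file locks, or antivirus interference instead.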

Thank you, 

Brian 


    • #1

    Hello,

    An unforeseen event has accelerated my migration to OMV 5 (the thumb drive with the OMV 4 install failed :)). I had previously been using mergerFS without problems, and I managed to successfully restore my shared folders on OMV 5.

    The problem happens when I use the mergerFS volume as a bind mount in Portainer (/srv/…). Plex refuses to start up and the logs show the following:

    Jun 05, 2020 09:05:54.998 [0x7fb5ec726740] ERROR - SQLITE3:(nil), 5386, os_unix.c:37072: (19) mmap(/config/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db-shm) - No such device
    Jun 05, 2020 09:05:54.998 [0x7fb5ec726740] ERROR - SQLITE3:(nil), 5386, disk I/O error in "PRAGMA cache_size=2000"
    Jun 05, 2020 09:05:54.998 [0x7fb5ec726740] ERROR - Database corruption: sqlite3_statement_backend::prepare: disk I/O error for SQL: PRAGMA cache_size=2000

    However, this does not happen if I use the direct path to one of the drives used with mergerFS. I thought my Plex database might be corrupted, so I tried running a new Plex instance with a fresh config folder on my mergerfs volume, and it made no difference.

    The same kind of errors pop up with Home Assistant:

    2020-06-04 23:01:16 ERROR (Recorder) [homeassistant.components.recorder] Error during connection setup: (sqlite3.OperationalError) disk I/O error
    (Background on this error at: http://sqlalche.me/e/e3q8) (retrying in 3 seconds)

    I have tried resetting the permissions with the resetperm plugin but to no avail.
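    Since several unrelated applications fail the same way only on the mergerfs mount, one diagnostic is to check whether that mount supports mmap at all: sqlite3 as used by Plex relies on memory-mapped I/O, and a FUSE mount with page caching disabled rejects mmap with "No such device", which sqlite surfaces as "disk I/O error". A small Python sketch of that probe (the mount path in the comment is the mergerfs pool from the fstab below):

```python
import mmap
import os
import tempfile

def supports_mmap(directory):
    """Return True if files in `directory` can be memory-mapped.

    Creates a temporary one-page file and tries to mmap it; FUSE
    mounts with caching disabled (e.g. mergerfs cache.files=off)
    fail here with ENODEV.
    """
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        os.write(fd, b"\0" * mmap.PAGESIZE)  # file must cover the mapping
        with mmap.mmap(fd, mmap.PAGESIZE):
            return True
    except OSError:
        return False
    finally:
        os.close(fd)
        os.remove(path)

# e.g. supports_mmap("/srv/2e18f54d-6de6-40d2-99c9-0c2b0380d9d6")
# on the mergerfs pool; a normal local filesystem returns True.
print(supports_mmap(tempfile.gettempdir()))
```

    If the probe returns False on the pool but True on the underlying disks, the problem is the mount options rather than any one application's database.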

    Edit: here is my fstab:

    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point>   <type>  <options>       <dump>  <pass>
    # / was on /dev/sda2 during installation
    UUID=be0b5804-7dc0-489f-8df4-620dd6b4b549 /               ext4    errors=remount-ro 0       1
    # /boot/efi was on /dev/sda1 during installation
    UUID=FD52-53CA  /boot/efi       vfat    umask=0077      0       1
    # swap was on /dev/sda3 during installation
    UUID=85d94df3-b52b-45dc-bc4e-933f9433c9df none            swap    sw              0       0
    # >>> [openmediavault]
    /dev/disk/by-label/HDD2         /srv/dev-disk-by-label-HDD2     ext4    defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    /dev/disk/by-label/HDD1         /srv/dev-disk-by-label-HDD1     ext4    defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    /srv/dev-disk-by-label-HDD1:/srv/dev-disk-by-label-HDD2         /srv/2e18f54d-6de6-40d2-99c9-0c2b0380d9d6  fuse.mergerfs   defaults,allow_other,cache.files=off,use_ino,category.create=epmfs,minfreespace=4G,fsname=HDDXL:2e18f54d-6de6-40d2-99c9-0c2b0380d9d6,x-systemd.requires=/srv/dev-disk-by-label-HDD1,x-systemd.requires=/srv/dev-disk-by-label-HDD2       0 0
    # <<< [openmediavault]


    Any help much appreciated,

    Thanks!

    • #2

    OK, so I found a fix, in case someone else encounters the same issue.

    From the mergerfs GitHub page:


    Quote

    Plex doesn’t work with mergerfs

    It does. If you’re trying to put Plex’s config / metadata on mergerfs you have to leave direct_io off because Plex is using sqlite3 which apparently needs mmap. mmap doesn’t work with direct_io. To fix this place the data elsewhere or disable direct_io (with dropcacheonclose=true). Sqlite3 does not need mmap but the developer needs to fall back to standard IO if mmap fails.

    direct_io is deprecated and has been replaced by cache.files. The solution was to remove cache.files=off from the mergerfs options in the OMV GUI.
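    Applied to the fstab above, the corrected mergerfs line would look roughly like this (a sketch: cache.files=off removed, and dropcacheonclose=true added as the mergerfs docs suggest; on OMV the change should be made through the GUI rather than by editing /etc/fstab by hand):

```
/srv/dev-disk-by-label-HDD1:/srv/dev-disk-by-label-HDD2  /srv/2e18f54d-6de6-40d2-99c9-0c2b0380d9d6  fuse.mergerfs  defaults,allow_other,dropcacheonclose=true,use_ino,category.create=epmfs,minfreespace=4G,fsname=HDDXL:2e18f54d-6de6-40d2-99c9-0c2b0380d9d6,x-systemd.requires=/srv/dev-disk-by-label-HDD1,x-systemd.requires=/srv/dev-disk-by-label-HDD2  0 0
```

    After remounting, sqlite-backed apps (Plex, Home Assistant) should be able to mmap their databases on the pool again.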

    I hope it helps someone!

    • #3

    It helped me, thanks!

    I had an issue with urbackup on OMV5 recently: https://forums.urbackup.org/t/…date-broke-something/9383

    Your change plus a reboot, and Bob's your uncle.
