Error 37: No locks available


MySQL — InnoDB: Error number 37 means ‘No locks available’

An error of the following form:

InnoDB: Unable to lock ./admin_forum1/blog_blogs.ibd, error: 37
160519  6:44:58  InnoDB: Error creating file './admin_forum1/blog_blogs.ibd'. 
160519  6:44:58  InnoDB: Operating system error number 37 in a file operation.
InnoDB: Error number 37 means 'No locks available'.
InnoDB: Some operating system error numbers are described at
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/operating-system-error-codes.html
160519  6:44:58  InnoDB: Error creating file './admin_forum1/blog_blogs.ibd'.
160519  6:44:58  InnoDB: Operating system error number 17 in a file operation.
InnoDB: Error number 17 means 'File exists'.
InnoDB: Some operating system error numbers are described at
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/operating-system-error-codes.html
InnoDB: The file already exists though the corresponding table did not
InnoDB: exist in the InnoDB data dictionary. Have you moved InnoDB
InnoDB: .ibd files around without using the SQL commands
InnoDB: DISCARD TABLESPACE and IMPORT TABLESPACE, or did
InnoDB: mysqld crash in the middle of CREATE TABLE? You can
InnoDB: resolve the problem by removing the file './admin_forum1/blog_blogs.ibd'
InnoDB: under the 'datadir' of MySQL. 
160519  6:44:58  InnoDB: Error creating file './admin_forum1/blog_blogs.ibd'.
160519  6:44:58  InnoDB: Operating system error number 17 in a file operation.
InnoDB: Error number 17 means 'File exists'.
InnoDB: Some operating system error numbers are described at
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/operating-system-error-codes.html
InnoDB: The file already exists though the corresponding table did not

The solution is to remove the innodb_file_per_table option from my.cnf and restart the MySQL server.
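
For example, the relevant part of my.cnf changes roughly like this (a sketch; the section name and restart command vary by distribution):

# /etc/my.cnf
[mysqld]
#innodb_file_per_table     <- comment out or delete this line

# then restart the server, e.g.:
service mysql restart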

This error can also be caused by incorrect memory allocation on a VPS with OpenVZ virtualization, so contact your hosting provider.
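
On an OpenVZ container you can often see whether a resource limit is being hit by looking at the failcnt column in /proc/user_beancounters (a quick check; interpreting and raising the limits is up to the provider):

# non-zero values in the failcnt column mean the container has hit a limit
cat /proc/user_beancounters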


Issue

  • NFS client cannot lock files using NFSv3.
  • NFS client can lock a file using NFSv4.x.
  • Attempted file locks fail with Error (37): No locks available.
  • Packet captures show no NLM LOCK Calls.
  • rpcbind on the NFS client is logging messages such as:
Jun 12 14:31:21 client.example.net rpcbind[11809]: cannot get local address for udp: Servname not supported for ai_socktype
Jun 12 14:31:21 client.example.net rpcbind[11809]: cannot get local address for tcp: Servname not supported for ai_socktype
  • rpcbind shows no listening sockets in # lsof output (see the quick checks below)
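
One way to verify the last two symptoms on the client (a sketch; commands assume a typical RHEL system with lsof and rpcinfo installed):

# does rpcbind hold any listening sockets?
lsof -i -a -c rpcbind

# which RPC programs are registered locally?
rpcinfo -p localhost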

Environment

  • Red Hat Enterprise Linux
  • NFSv3


Steps to reproduce the issue

$ spack install gcc    # or whatever spec

Error Message

$ spack --debug --stacktrace install gcc
lib/spack/spack/cmd/__init__.py:122 ==> [2021-04-08-13:03:03.032227] Imported install from built-in commands
lib/spack/spack/config.py:981 ==> [2021-04-08-13:03:03.038651] Reading config file /users/kitayama/projects/spack/etc/spack/defaults/config.yaml
lib/spack/spack/cmd/__init__.py:122 ==> [2021-04-08-13:03:03.089444] Imported install from built-in commands
lib/spack/spack/config.py:981 ==> [2021-04-08-13:03:03.093661] Reading config file /users/kitayama/projects/spack/etc/spack/defaults/repos.yaml
lib/spack/spack/spec.py:2811 ==> [2021-04-08-13:03:05.484900] libtool applying constraint m4@1.4.6:
lib/spack/spack/spec.py:2811 ==> [2021-04-08-13:03:05.487252] libtool applying constraint m4@1.4.6:
lib/spack/spack/spec.py:2811 ==> [2021-04-08-13:03:05.492920] autoconf applying constraint m4@1.4.6:
lib/spack/spack/spec.py:2811 ==> [2021-04-08-13:03:05.519350] ncurses applying constraint pkgconfig
lib/spack/spack/spec.py:2811 ==> [2021-04-08-13:03:05.521281] readline applying constraint ncurses
lib/spack/spack/spec.py:2811 ==> [2021-04-08-13:03:05.523040] gdbm applying constraint readline
lib/spack/spack/spec.py:2811 ==> [2021-04-08-13:03:05.524781] perl applying constraint berkeley-db
lib/spack/spack/spec.py:2811 ==> [2021-04-08-13:03:05.526476] perl applying constraint gdbm
lib/spack/spack/spec.py:2811 ==> [2021-04-08-13:03:05.531062] help2man applying constraint perl
lib/spack/spack/spec.py:2811 ==> [2021-04-08-13:03:05.542263] gettext applying constraint iconv
lib/spack/spack/spec.py:2811 ==> [2021-04-08-13:03:05.544319] help2man applying constraint perl
lib/spack/spack/spec.py:2811 ==> [2021-04-08-13:03:05.546177] help2man applying constraint gettext
lib/spack/spack/spec.py:2811 ==> [2021-04-08-13:03:05.548100] autoconf applying constraint m4@1.4.6:
lib/spack/spack/spec.py:2811 ==> [2021-04-08-13:03:05.549946] autoconf applying constraint perl
lib/spack/spack/spec.py:2811 ==> [2021-04-08-13:03:05.551825] autoconf applying constraint help2man
lib/spack/spack/spec.py:2811 ==> [2021-04-08-13:03:05.557187] automake applying constraint perl
lib/spack/spack/spec.py:2811 ==> [2021-04-08-13:03:05.559126] automake applying constraint autoconf
lib/spack/spack/spec.py:2811 ==> [2021-04-08-13:03:05.561027] gmp applying constraint m4
lib/spack/spack/spec.py:2811 ==> [2021-04-08-13:03:05.562934] gmp applying constraint libtool
lib/spack/spack/spec.py:2811 ==> [2021-04-08-13:03:05.564808] gmp applying constraint autoconf
lib/spack/spack/spec.py:2811 ==> [2021-04-08-13:03:05.566691] gmp applying constraint automake
lib/spack/spack/spec.py:2811 ==> [2021-04-08-13:03:05.574286] diffutils applying constraint iconv
lib/spack/spack/spec.py:2811 ==> [2021-04-08-13:03:05.577084] gcc applying constraint gmp@4.3.2:
lib/spack/spack/spec.py:2811 ==> [2021-04-08-13:03:05.581716] gcc applying constraint diffutils
lib/spack/spack/config.py:981 ==> [2021-04-08-13:03:05.641360] Reading config file /users/kitayama/projects/spack/etc/spack/defaults/packages.yaml
Traceback (most recent call last):
  File "/users/kitayama/projects/spack/bin/spack", line 76, in <module>
    sys.exit(spack.main.main())
  File "/users/kitayama/projects/spack/lib/spack/spack/main.py", line 772, in main
    return _invoke_command(command, parser, args, unknown)
  File "/users/kitayama/projects/spack/lib/spack/spack/main.py", line 496, in _invoke_command
    return_val = command(parser, args)
  File "/users/kitayama/projects/spack/lib/spack/spack/cmd/install.py", line 319, in install
    args.spec, concretize=True, tests=tests)
  File "/users/kitayama/projects/spack/lib/spack/spack/cmd/__init__.py", line 164, in parse_specs
    spec.concretize(tests=tests)  # implies normalize
  File "/users/kitayama/projects/spack/lib/spack/spack/spec.py", line 2573, in concretize
    self._old_concretize(tests)
  File "/users/kitayama/projects/spack/lib/spack/spack/spec.py", line 2372, in _old_concretize
    self._expand_virtual_packages(concretizer),
  File "/users/kitayama/projects/spack/lib/spack/spack/spec.py", line 2275, in _expand_virtual_packages
    candidates = concretizer.choose_virtual_or_external(spec)
  File "/users/kitayama/projects/spack/lib/spack/spack/concretize.py", line 144, in choose_virtual_or_external
    candidates = self._valid_virtuals_and_externals(spec)
  File "/users/kitayama/projects/spack/lib/spack/spack/concretize.py", line 99, in _valid_virtuals_and_externals
    candidates = spack.repo.path.providers_for(spec)
  File "/users/kitayama/projects/spack/lib/spack/spack/repo.py", line 90, in converter
    return function(self, spec_like, *args, **kwargs)
  File "/users/kitayama/projects/spack/lib/spack/spack/repo.py", line 591, in providers_for
    providers = self.provider_index.providers_for(vpkg_spec)
  File "/users/kitayama/projects/spack/lib/spack/spack/repo.py", line 575, in provider_index
    self._provider_index.merge(repo.provider_index)
  File "/users/kitayama/projects/spack/lib/spack/spack/repo.py", line 985, in provider_index
    return self.index['providers']
  File "/users/kitayama/projects/spack/lib/spack/spack/repo.py", line 407, in __getitem__
    self._build_all_indexes()
  File "/users/kitayama/projects/spack/lib/spack/spack/repo.py", line 422, in _build_all_indexes
    self.indexes[name] = self._build_index(name, indexer)
  File "/users/kitayama/projects/spack/lib/spack/spack/repo.py", line 447, in _build_index
    with misc_cache.write_transaction(cache_filename) as (old, new):
  File "/users/kitayama/projects/spack/lib/spack/llnl/util/lock.py", line 566, in __enter__
    if self._enter() and self._acquire_fn:
  File "/users/kitayama/projects/spack/lib/spack/llnl/util/lock.py", line 602, in _enter
    return self._lock.acquire_write(self._timeout)
  File "/users/kitayama/projects/spack/lib/spack/llnl/util/lock.py", line 330, in acquire_write
    wait_time, nattempts = self._lock(fcntl.LOCK_EX, timeout=timeout)
  File "/users/kitayama/projects/spack/lib/spack/spack/util/lock.py", line 31, in _lock
    return super(Lock, self)._lock(op, timeout)
  File "/users/kitayama/projects/spack/lib/spack/llnl/util/lock.py", line 186, in _lock
    if self._poll_lock(op):
  File "/users/kitayama/projects/spack/lib/spack/llnl/util/lock.py", line 210, in _poll_lock
    self._length, self._start, os.SEEK_SET)
OSError: [Errno 37] No locks available
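
The failure comes from fcntl byte-range locking on Spack's cache and index files, which live under the user's home directory; if that home is NFS-mounted without a working lock manager, every fcntl lock attempt fails with ENOLCK. A minimal standalone reproducer, assuming a hypothetical test path on the affected filesystem:

import errno
import fcntl

path = "/users/kitayama/locktest"  # hypothetical test file on the suspect filesystem
with open(path, "w") as f:
    try:
        fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)  # same kind of lock Spack takes
        print("byte-range locking works on this filesystem")
        fcntl.lockf(f, fcntl.LOCK_UN)
    except OSError as err:
        if err.errno == errno.ENOLCK:  # errno 37 on Linux
            print("No locks available: the NFS lock manager is not working")
        else:
            raise

If the NFS locking itself cannot be fixed, recent Spack versions also expose a locks option in config.yaml that disables file locking entirely; check the defaults/config.yaml shipped with your version.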

Information on your system

Additional information

  • I have run spack debug report and reported the version of Spack/Python/Platform
  • I have searched the issues of this repo and believe this is not a duplicate
  • I have run the failing commands in debug mode and reported the output

InnoDB: Unable to lock ./ibdata1, error: 37

Today I started MySQL with its data files on central NAS storage. The error log showed the following messages:

InnoDB: Unable to lock ./ibdata1, error: 37
120514 16:41:29  InnoDB: Operating system error number 37 in a file operation.
InnoDB: Error number 37 means 'No locks available'.
InnoDB: Some operating system error numbers are described at
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/operating-system-error-codes.html
InnoDB: Error in creating or opening ./ibdata1
120514 16:41:29 InnoDB: Could not open or create data files.
120514 16:41:29 InnoDB: If you tried to add new data files, and it failed here,
120514 16:41:29 InnoDB: you should now edit innodb_data_file_path in my.cnf back
120514 16:41:29 InnoDB: to what it was, and remove the new ibdata files InnoDB created
120514 16:41:29 InnoDB: in this failed attempt. InnoDB only wrote those files full of
120514 16:41:29 InnoDB: zeros, but did not yet use them in any way. But be careful: do not
120514 16:41:29 InnoDB: remove old data files which contain your precious data!
120514 16:41:29 [ERROR] Plugin 'InnoDB' init function returned error.
120514 16:41:29 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
120514 16:41:29 [ERROR] Unknown/unsupported storage engine: InnoDB
120514 16:41:29 [ERROR] Aborting
120514 16:41:29 [Note] /usr/sbin/mysqld: Shutdown complete

Solution: add the nolock option to the NFS mount entry in /etc/fstab, as shown below, and restart netfs. MySQL then starts.

wtt.isilon.local:/ifs/wtt/dbdata                /dbdata  nfs     rsize=65536,wsize=65536,nolock
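
To apply the change without a full reboot, remount the filesystem (one way to do it; service names assume a RHEL-style init):

umount /dbdata
mount /dbdata            # re-reads the options from /etc/fstab
# or restart all network filesystems:
service netfs restart

Keep in mind that nolock makes locks local to the client, so this is only safe while a single host works with these data files.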

Yesterday, the usual cron job triggered a Data Pump export against a database on a Linux server. The export failed immediately, and the dump logfile contained errors like the following:


Export: Release 11.2.0.3.0 - Production on Sat Mar 8 05:53:37 2014


Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
;;;
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning and Automatic Storage Management options
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "/oraexp/NTLSNDB/Data_Pump_Export_NTLSNDB_FULL_030814_0553_01.dmp"
ORA-27086: unable to lock file - already in use
Linux-x86_64 Error: 37: No locks available
Additional information: 10


I verified at the database level that the dump directory and its path were correct and that the proper read and write privileges were granted on the directory. Everything was fine at the database end.


I suspected an NFS mount option issue at the OS level. We use a shared NFS mount point across all of the servers, and it must be mounted with the proper options on each server for the database to use it. I could see the mount point was mounted with the options recommended by Oracle.


I then checked the OS-level logs and found that the issue was related to the nfslock service, which was not running on this database server. This service lets the client lock a file on the NFS mount point so it can create the file and perform write operations.


>cat messages | grep lockd
Mar  8 04:03:31 demoserver kernel: lockd: cannot monitor 10.207.80.179
Mar  8 04:03:31 demoserver kernel: lockd: failed to monitor 10.207.80.179
Mar  8 04:20:27 demoserver kernel: lockd: cannot monitor 10.207.80.179
Mar  8 04:20:27 demoserver kernel: lockd: failed to monitor 10.207.80.179
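
You can also check whether the lock-related RPC services (rpc.statd and the kernel lock manager nlockmgr) are registered with the portmapper; a quick check:

rpcinfo -p localhost | egrep 'status|nlockmgr'
service nfslock status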


I then learned that the server had been rebooted a couple of days earlier, and the nfslock service did not start automatically afterwards, so we started it manually. Note that for nfslock to start automatically after a reboot, you must run chkconfig nfslock on; this has since been done, so the service will now start on every reboot.


cat messages | grep rpc
Mar  8 07:01:43 demoserver rpc.statd[12667]: Version 1.0.9 Starting
Mar  8 07:01:49 demoserver rpc.statd[12667]: Caught signal 15, un-registering and exiting.
Mar  8 07:01:49 demoserver rpc.statd[12745]: Version 1.0.9 Starting


You can manage the nfslock service with the following commands:


service nfslock status
service nfslock start
service nfslock stop


After confirming that the service was running and the client could lock files on the NFS file system, we re-triggered the export job, and it completed successfully.



I am trying to install Splunk 4.2.1 on a CentOS 5 (64-bit) box. It starts with no problem, but when I try to connect to SplunkWeb at port 8000, I get this error:

IOError: [Errno 37] No locks available

My limits are set as follows (note that file locks are unlimited):

cm@cmadmin-v04/cm/admin/tools/splunk-4.2.1/bin ) ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 16384
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 65536
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 16384
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

What do I need to change to make this work?

Thanks
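
Note that the file-lock ulimit shown above is already unlimited, so process limits are not the cause; errno 37 (ENOLCK) normally comes from the filesystem's lock manager, most often an NFS mount without working locking, as in the other reports on this page. A quick check, using the install path from the prompt above:

df -PT /cm/admin/tools/splunk-4.2.1    # a filesystem type of "nfs" points at the NFS locking issue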

##########################
## Error
##########################

ORA-27086: unable to lock file - already in use
Linux-x86_64 Error: 37: No locks available

### Full Error

Tue May 20 08:58:08 2014
Control file backup creation failed.
Backup target file size found to be zero.
Errors in file /u01/app/oracle/diag/rdbms/oralin/oralin1/trace/oralin1_ckpt_14497.trc:
ORA-27086: unable to lock file - already in use
Linux-x86_64 Error: 37: No locks available
Additional information: 10

##########################
#  Error Occurred
##########################

The error occurred during an RMAN FULL backup, while backing up the control file, on version 11.2.0.3.0.

##########################
## Command Executed
##########################

Backup of control file

**************************************** Step By Step Analysis ******************************************************

#########################################
# Alert Log
#########################################

Tue May 20 08:58:08 2014
Control file backup creation failed.
Backup target file size found to be zero.
Errors in file /u01/app/oracle/diag/rdbms/oralin/oralin1/trace/oralin1_ckpt_14497.trc:
ORA-27086: unable to lock file - already in use
Linux-x86_64 Error: 37: No locks available
Additional information: 10

#########################################
# Trace File ( /u01/app/oracle/diag/rdbms/oralin/oralin1/trace/oralin1_ckpt_14497.trc )
#########################################

ORA-27086: unable to lock file - already in use

*** 2014-05-20 08:58:08.285
Linux-x86_64 Error: 37: No locks available
Additional information: 10
Control file enqueue hold time tracking dump at time: 36768

#########################################
# 1) Check the Snapshot Controlfile
#########################################

RMAN> show snapshot controlfile name;
using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name oralin are:
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/backup/db/oralin/CONTROLFILE_SNAPSHOT_oralin/snapcf_oralin1.f';

RMAN>

Here, /backup is the shared NFS mount point.
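
You can confirm how /backup is mounted with a quick check:

mount | grep /backup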

=====================================================================================================================

#########################################
# 2) Check NFS lock status
#########################################

service nfslock status

[root@host01 ~]# service nfslock status
rpc.statd is stopped
[root@host01 ~]# 

The NFS lock service is stopped.

[root@host01 ~]# chkconfig --list | grep nfs
nfs             0:off   1:off   2:off   3:off   4:off   5:off   6:off
nfslock         0:off   1:off   2:off   3:off   4:off   5:off   6:off
[root@host01 ~]#

=====================================================================================================================

#########################################
# 3) Reason for Failure
#########################################

The error occurs during the control file autobackup: when RMAN tries to create the snapshot control file on the shared NFS mount point, it reports that it is unable to lock the file.

From step 2 we can see that the nfslock service is stopped, so RMAN cannot obtain a lock.

The server was rebooted recently and the nfslock service did not come back up; step 2 also shows that automatic start is off for every runlevel.

=====================================================================================================================

#########################################
# 4) Reproduce the Error
#########################################

A simple backup of the control file to the NFS mount point reproduces the error.

backup current controlfile format '/backup/db/oralin/CONTROLFILE_SNAPSHOT_oralin/oralin.ctl';

RMAN> backup current controlfile format '/backup/db/oralin/CONTROLFILE_SNAPSHOT_oralin/oralin.ctl';
Starting backup at 20-MAY-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=4 instance=oralin1 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 05/20/2014 09:27:03
ORA-01580: error creating control backup file /backup/db/oralin/CONTROLFILE_SNAPSHOT_oralin/snapcf_oralin1.f
ORA-27086: unable to lock file - already in use
Linux-x86_64 Error: 37: No locks available
Additional information: 10

=====================================================================================================================

##########################
## Solution
##########################

#### Start the NFS lock service on the server

[root@host01 ~]# service nfslock start
Starting NFS statd:                                        [  OK  ]
[root@host01 ~]# service nfslock status
rpc.statd (pid 29693) is running...
[root@host01 ~]#
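
Since step 2 showed that automatic start is off for every runlevel, it is also worth enabling the service at boot so the fix survives the next reboot (SysV init, as used on this host):

chkconfig nfslock on
chkconfig --list | grep nfslock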

=====================================================================================================================
Test the backup; it now completes successfully.
=====================================================================================================================

RMAN> backup current controlfile format '/backup/db/oralin/CONTROLFILE_SNAPSHOT_oralin/oralin.ctl';

Starting backup at 20-MAY-14
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 20-MAY-14
channel ORA_DISK_1: finished piece 1 at 20-MAY-14
piece handle=/backup/db/oralin/CONTROLFILE_SNAPSHOT_oralin/oralin.ctl tag=TAG20140520T092820 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 20-MAY-14
Starting Control File and SPFILE Autobackup at 20-MAY-14
piece handle=/backup/db/oralin/autobackup/c-419595185-20140520-02 comment=NONE
Finished Control File and SPFILE Autobackup at 20-MAY-14
RMAN>

=====================================================================================================================
 Comments are always welcome
=====================================================================================================================
