Multipath error getting device debian

Contents

  • Contents

    1. Overview

      1. Device Mapper
      2. Device Mapper Multipathing
    2. Terminology, Concepts and Usage

      1. Output of multipath command
      2. Terminology
      3. Configuration File (/etc/multipath.conf)

        1. Attribute value overrides
      4. multipath, multipathd command usage
    3. Supported Storage Devices

      1. Devices supported in multipath tools
      2. Devices that have hardware handler in kernel
    4. Install and Boot on a multipathed device

      1. Installation instructions for SLES10
      2. Installation instructions for RHEL5
      3. Moving root/swap from single path device to multipath device
      4. Other Distributions
    5. SCSI Hardware handler

      1. Including hardware handler in RHEL 5.3
      2. Including hardware handler in SLES11 and SLES10 SP3
    6. Tips and Tricks
    7. References

      1. General

1. Overview

The connection from the server through the HBA to the storage controller is referred to as a path. When multiple paths exist to a storage device (LUN) on a storage subsystem, this is referred to as multipath connectivity. It is an enterprise-level storage capability. The main purpose of multipath connectivity is to provide redundant access to the storage devices, i.e. to retain access to a storage device when one or more components in a path fail. Another advantage of multipathing is increased throughput by way of load balancing.

  • Note: Multipathing protects against the failure of path(s) and not the failure of a specific storage device.

A common example of multipath is a SAN-connected storage device. Usually one or more Fibre Channel HBAs from the host are connected to the fabric switch, and the storage controllers are connected to the same switch.

A simple example of multipath could be: 2 HBAs connected to a switch to which the storage controllers are connected. In this case the storage controller can be accessed from either of the HBAs and hence we have multipath connectivity.

In the following diagram each host has 2 HBAs and each storage has 2 controllers. With the given configuration setup each host will have 4 paths to each of the LUNs in the storage.

[Figure fabric1.png: hosts with two HBAs connected through the SAN fabric to storage with two controllers]

In Linux, a SCSI device is configured for a LUN as seen on each path; i.e. if a LUN has 4 paths, one will see four SCSI devices configured for the same device. Doing I/O to a LUN in such an environment is unmanageable:

  • applications/administrators do not know which SCSI device to use
  • there is no guarantee that all applications consistently use the same device
  • in case of a path failure, there is no knowledge to retry the I/O on a different path
  • the storage device specific preferred path is not always used
  • I/O is not spread between multiple valid paths

1.1. Device Mapper

Device mapper is a block subsystem that provides a layering mechanism for block devices. One can write a device mapper target to provide specific functionality on top of a block device.

Currently the following functional layers are available:

  • concatenation
  • mirror
  • striping
  • encryption
  • flaky
  • delay
  • multipath

Multiple device mapper modules can be stacked to get the combined functionality.

See the device mapper documentation for more information on device mapper.
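As a simple illustration of this layering (independent of multipathing), a linear device mapper target can be created by hand with dmsetup; the backing device /dev/sdb and the 100 MiB size below are only assumptions for the example:

# dmsetup create lin0 --table "0 204800 linear /dev/sdb 0"
# dmsetup table lin0
# dmsetup remove lin0

The first command maps 204800 sectors (100 MiB) starting at sector 0 of /dev/sdb to a new block device /dev/mapper/lin0; the other two show and then remove the mapping.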

1.2. Device Mapper Multipathing

The object of this document is to provide details on device mapper multipathing (DM-MP). DM-MP resolves all the issues that arise in accessing a multipathed device in Linux. It also provides a consistent user interface for storage devices provided by multiple vendors. With DM-MP there is only one block device (/dev/mapper/XXX) per LUN; this is the device created by device mapper.

Paths are grouped into priority groups, and one of the priority groups is used for I/O at a time; this group is called active. A path selector selects the path in the priority group to be used for an I/O, based on a load balancing algorithm (for example round-robin).

When an I/O fails on a path, that path gets disabled and the I/O is retried on a different path in the same priority group. If all paths in a priority group fail, a different priority group which is enabled will be selected to send I/O.

DM-MP consists of 4 components:

  1. DM MP kernel module — Kernel module that is responsible for making the multipathing decisions in normal and failure situations.

  2. multipath command — User space tool that handles the initial configuration, listing and deletion of multipathed devices.

  3. multipathd daemon — User space daemon that constantly monitors the paths. It marks a path as failed when it finds the path faulty, and if all the paths in a priority group are faulty it switches to the next enabled priority group. It keeps checking a failed path and, once the path comes back alive, can reactivate it based on the failback policy. It provides a CLI to monitor/manage individual paths. It automatically creates device mapper entries when new devices come into existence.

  4. kpartx — User space command that creates device mapper entries for all the partitions on a multipathed disk/LUN. When the multipath command is invoked, this command automatically gets invoked. For DOS-based partitions this command needs to be run manually.
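For example, assuming a multipathed LUN that device mapper exposes as /dev/mapper/mpath0 (an illustrative name) with a DOS partition table, the partition mappings can be listed and created with:

# kpartx -l /dev/mapper/mpath0
# kpartx -a /dev/mapper/mpath0

After kpartx -a, the partitions appear as /dev/mapper/mpath0p1, /dev/mapper/mpath0p2, and so on.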

2. Terminology, Concepts and Usage

2.1. Output of multipath command

Standard output of multipath command

# multipath -ll
mydev1 (3600a0b800011a1ee0000040646828cc5) dm-1 IBM,1815      FAStT
[size=512M][features=1 queue_if_no_path][hwhandler=1 rdac]
_ round-robin 0 [prio=6][active]
 _ 29:0:0:1 sdf 8:80  [active][ready]
 _ 28:0:1:1 sdl 8:176 [active][ready]
_ round-robin 0 [prio=0][enabled]
 _ 28:0:0:1 sdb 8:16  [active][ghost]
 _ 29:0:1:1 sdq 65:0  [active][ghost]

Annotated output of multipath command

mydev1 (3600a0b800011a1ee0000040646828cc5) dm-1 IBM,1815      FAStT
------  ---------------------------------  ---- --- ---------------
   |               |                         |    |          |-------> Product
   |               |                         |    |------------------> Vendor
   |               |                         |-----------------------> sysfs name
   |               |-------------------------------------------------> WWID of the device
   |------------------------------------------------------ ----------> User defined Alias name

[size=512M][features=1 queue_if_no_path][hwhandler=1 rdac]
 ---------  ---------------------------  ----------------
     |                 |                        |--------------------> Hardware Handler, if any
     |                 |---------------------------------------------> Features supported
     |---------------------------------------------------------------> Size of the DM device

Path Group 1:
_ round-robin 0 [prio=6][active]
-- -------------  ------  ------
 |    |              |      |----------------------------------------> Path group state
 |    |              |-----------------------------------------------> Path group priority
 |    |--------------------------------------------------------------> Path selector and repeat count
 |-------------------------------------------------------------------> Path group level

First path on Path Group 1:
 _ 29:0:0:1 sdf 8:80  [active][ready]
    -------- --- ----   ------  -----
      |      |     |        |      |---------------------------------> Physical Path state
      |      |     |        |----------------------------------------> DM Path state
      |      |     |-------------------------------------------------> Major, minor numbers
      |      |-------------------------------------------------------> Linux device name
      |--------------------------------------------------------------> SCSI information: host, channel, scsi_id and lun

Second path on Path Group 1:
 _ 28:0:1:1 sdl 8:176 [active][ready]

Path Group 2:
_ round-robin 0 [prio=0][enabled]
 _ 28:0:0:1 sdb 8:16  [active][ghost]
 _ 29:0:1:1 sdq 65:0  [active][ghost]

2.2. Terminology

Path
Connection from the server through a HBA to a specific LUN. Without DM-MP, each path would appear as a separate device.
Path Group

Paths are grouped into path groups. At any point of time only one path group will be active. The path selector decides which path in the path group gets to send the next I/O. I/O will be sent only to the active path group.

Path Priority

Each path has a specific priority. A priority callout program provides the priority for a given path. The user space commands use this priority value to choose an active path. In the group_by_prio path grouping policy, path priority is used to group the paths together and change their relative weight with the round robin path selector.

Path Group Priority
Sum of priorities of all non-faulty paths in a path group. By default, the multipathd daemon tries to keep the path group with the highest priority active.
Path Grouping Policy
Determines how the path group(s) are formed using the available paths. There are five different policies:

  1. multibus: One path group is formed with all paths to a LUN. Suitable for devices that are in Active/Active mode.

  2. failover: Each path group will have only one path.
  3. group_by_serial: One path group per storage controller(serial). All paths that connect to the LUN through a controller are assigned to a path group. Suitable for devices that are in Active/Passive mode.

  4. group_by_prio: Paths with same priority will be assigned to a path group.
  5. group_by_node_name: Paths with same target node name will be assigned to a path group.

Note: Setting multibus as the path grouping policy for a storage device in Active/Passive mode will reduce the I/O performance.

Path Selector
A kernel multipath component that determines which path will be chosen for the next I/O. A path selector can implement an appropriate load balancing algorithm. Currently only one path selector exists, which is round-robin.
Path Checker
Functionality in the user space that is used to check the availability of a path. This is implemented as a library function that is used by both multipath command and the multipathd daemon. Currently, there are 3 path checkers:

  1. readsector0: sends a read command to sector 0 at a regular time interval. Produces a lot of error messages in Active/Passive mode; hence, it is suitable only for Active/Active mode.

  2. tur: sends a test unit ready command at regular interval.
  3. rdac: specific to LSI RDAC devices. Sends an inquiry command and sets the status of the path appropriately.
Path States

This refers to the physical state of a path. A path can be in one of the following states:

  1. ready: Path is up and can handle I/O requests.

  2. faulty: Path is down and cannot handle I/O requests.

  3. ghost: Path is a passive path. This state is shown in the passive path in Active/Passive mode.

  4. shaky: Path is up, but temporarily not available for I/O requests.
DM Path States
This refers to the DM module(kernel)’s view of the path’s state. It can be in one of the two states:

  1. active: Last I/O sent to this path successfully completed. Analogous to ready path state.

  2. failed: Last I/O to this path failed. Analogous to faulty path state.

Path Group State
Path Groups can be in one of the following three states:
  1. active: I/O sent to the multipath device will be sent to this path group. Only one path group will be in this state.
  2. enabled: If none of the paths in the active path group is in the ready state, I/O will be sent to these path groups. There can be one or more path groups in this state.

  3. disabled: If none of the paths in the active path group and the enabled path groups is in the ready state, I/O will be sent to these path groups. There can be one or more path groups in this state. This state is available only for certain storage devices.
UID Callout (or) WWID Callout
A standalone program that returns a globally unique identifier for a path. multipath/multipathd invokes this callout and uses the ID returned to coalesce multiple paths to a single multipath device.
Priority Callout
A standalone program that returns the priority for a path. multipath/multipathd invokes this callout and uses the priority value of the paths to determine the active path group.
Hardware Handler
Kernel personality module for storage devices that need special handling. This module is responsible for enabling a path (at the device level) during initialization, failover and failback. It is also responsible for handling device specific sense error codes.
Failover

When all the paths in a path group are in the faulty state, one of the enabled path groups (the one with the highest priority) that has paths in the ready state will be made active. If there are no paths in the ready state in any of the enabled path groups, then one of the disabled path groups (the one with the highest priority) will be made active. Making a new path group active is also referred to as switching of path group. The original active path group's state will be changed to enabled.

Failback

A failed path can come back at any point of time. multipathd keeps checking the path. Once it finds that a path is alive again, it changes the state of the path to ready. If this makes one of the enabled path groups' priority higher than that of the current active path group, multipathd may choose to fail back to the highest priority path group.

Failback Policy

Under failback situations multipathd can do one of the following three things:

  1. immediate: Immediately failback to the highest priority path group.
  2. # of seconds: Wait for the specified number of seconds, for I/O to stabilize, then failback to the highest priority path group.
  3. do nothing: Do nothing, user explicitly fails back to the highest priority path group.

This policy selection can be set by the user through /etc/multipath.conf.
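For example, the failback policy can be set in the defaults section of /etc/multipath.conf; this is only a sketch and the value shown is illustrative:

defaults {
        # one of: immediate, manual (do nothing), or a number of seconds
        failback        immediate
}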

Active/Active
Storage devices with 2 controllers can be configured in this mode. Active/Active means that both controllers can process I/Os.
Active/Passive
Storage devices with 2 controllers can be configured in this mode. Active/Passive means that one of the controllers (active) can process I/Os, while the other one (passive) is in standby mode. I/Os sent to the passive controller will fail.
Alias

A user-friendly and/or user-defined name for a DM device. By default, the WWID is used for the DM device. This is the name that is listed in the /dev/disk/by-name directory. When the user_friendly_names configuration option is set, the alias of a DM device will have the form mpath<n>. The user also has the option of setting a unique alias for each multipath device.
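To see the WWID that the UID callout (described above) returns for a given path, the callout can be run by hand; on the distributions discussed here it is scsi_id, and /block/sda below is just an illustrative path device:

# /sbin/scsi_id -g -u -s /block/sda

Unless user_friendly_names or an alias is configured, the value printed is also the name of the multipath device under /dev/mapper.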

2.3. Configuration File (/etc/multipath.conf)

DM-Multipath allows many of its features to be configured by the user through the configuration file /etc/multipath.conf. The multipath command and multipathd use the configuration information from this file. This file is consulted only during the configuration of multipath devices. In other words, if the user makes any changes to this file, then the multipath command needs to be rerun to configure the multipath devices (i.e. the user has to do multipath -F followed by multipath).

Support for many devices (as listed below) is built into the user space component of DM-Multipath. The user needs to modify this file only if the support for a specific storage device is not built in, or if the user wants to override some of the values.

This file has 5 sections:

  1. System level defaults ("defaults"): Where the user can specify system level default override.

  2. Black listed devices ("blacklist"): User can specify the list of devices they do not want to be under the control of DM-Multipath. These devices will be excluded.

  3. Black list exceptions ("blacklist_exceptions"): Specific devices to be treated as multipath candidates even if they exist in the blacklist.

  4. Storage controller specific settings ("devices"): User specified configuration settings will be applied to devices with specified "Vendor" and "Product" information.

  5. Device specific settings ("multipaths"): User can fine tune configuration settings for individual LUNs.

User can specify the values for the attributes in this file using regular expression syntax.
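A minimal sketch showing all five sections is given below; the WWID and vendor/product strings are taken from the example output earlier in this document and are purely illustrative:

defaults {
        user_friendly_names     yes
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
}
blacklist_exceptions {
        wwid    "3600a0b800011a1ee0000040646828cc5"
}
devices {
        device {
                vendor                  "IBM"
                product                 "1815"
                path_grouping_policy    group_by_prio
                hardware_handler        "1 rdac"
        }
}
multipaths {
        multipath {
                wwid    3600a0b800011a1ee0000040646828cc5
                alias   mydev1
        }
}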

For detailed explanation of the different attributes and allowed values for the attributes please refer to multipath.conf.annotated file.

  • In Mainline, this file is located in the root directory of multipath-tools.
  • In RedHat, this file is located in the directory /usr/share/doc/device-mapper-multipath-X.Y.Z/.

  • In SuSE, this file is located in the directory /usr/share/doc/packages/multipath-tools/

2.3.1. Attribute value overrides

Attribute values are set at multiple levels (internally in the multipath tools and through the multipath.conf file). The following is the order in which attribute values are overridden; each later level overrides the earlier ones.

  1. Global internal defaults, as specified in the man page of multipath.conf.
  2. Device specific internal defaults, as defined in libmultipath/hwtable.c.
  3. Items described in defaults section of /etc/multipath.conf.
  4. Items defined in device section of /etc/multipath.conf.
    • Note that this will completely overwrite the configuration information defined in (2) above. So, even if you want to change or add only one attribute, you have to provide the whole list of attributes for that device.

  5. Items defined in multipaths section of /etc/multipath.conf.

2.4. multipath, multipathd command usage

Man page of multipath/multipathd provides good details on the usage of the tools.

multipathd has an interactive mode option which can be used for querying and managing the paths, and also to check the configuration details that will be used.

While multipathd is running, one can invoke it with the command line multipathd -k. multipathd then enters an interactive mode where the user can issue different commands. Check the man page for the available commands.
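For example (the commands below are the ones documented in the multipathd man page):

# multipathd -k
multipathd> show config
multipathd> show paths
multipathd> show maps
multipathd> quit

show config prints the configuration actually in effect (built-in defaults merged with /etc/multipath.conf), show paths lists the individual paths with their states, and show maps lists the multipath devices.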

3. Supported Storage Devices

This is the list of devices that have configuration information built-in in the multipath tools. Not being in this list does not mean that the specific device is not supported, it just means that there is no built-in configuration in the multipath tools.

Some of the devices do need a hardware handler, which needs to be compiled into the kernel. A device being in this list does not necessarily mean that the hardware handler is present in the kernel. It is also possible that the hardware handler is present in the kernel while the device is not added to the list of supported built-in devices.

3.1. Devices supported in multipath tools

The following is the list of storage devices that have built-in configuration information in the multipath tools.

Vendor

Product

Common Name

Mainline 0.4.7

Mainline 0.4.8

RHEL5

RHEL5 U1

SLES10

SLES10 SP1

3PARdata

VV

YES

YES

YES

YES

YES

YES

APPLE

Xserve RAID

YES

YES

YES

YES

YES

{COMPAQ, HP}

{MSA,HSV}1*

YES

YES

(COMPAQ,HP)

(MSA|HSV)1.0.*

YES

YES

(COMPAQ,HP)

(MSA|HSV)1.1.*

YES

YES

(COMPAQ,HP)

MSA1.*

YES

YES

(COMPAQ,HP)

HSV(1|2).*

YES

YES

DDN

SAN DataDirector

YES

YES

YES

YES

YES

YES

DEC

HSG80

YES

YES

YES

YES

YES

YES

DGC

*

YES

YES

YES

YES

YES

YES

EMC

SYMMETRIX

YES

YES

YES

YES

YES

YES

FSC

CentricStor

YES

YES

YES

YES

YES

YES

GNBD

GNBD

YES

YES

HITACHI

{A6189A,OPEN-}

YES

(HITACHI|HP)

OPEN-.*

YES

YES

YES

YES

YES

HITACHI

DF.*

YES

YES

YES

YES

YES

HP

A6189A

YES

YES

YES

YES

YES

HP

MSA VOLUME

YES

YES

HP

HSV2*

YES

YES

YES

YES

HP

LOGICAL VOLUME.*

YES

HP

DF[456]00

YES

Vendor

Product

Common Name

Mainline 0.4.7

Mainline 0.4.8

RHEL5

RHEL5 U1

SLES10

SLES10 SP1

IBM

ProFibre 4000R

YES

YES

YES

YES

YES

YES

IBM

1742

YES

YES

YES

YES

YES

YES

IBM

3526

YES

YES

YES

YES

YES

IBM

3542

YES

YES

YES

YES

YES

YES

IBM

2105F20

YES

YES

YES

YES

YES

YES

IBM

2105800

YES

YES

YES

YES

YES

IBM

{1750500,2145}

YES

YES

YES

YES

YES

YES

IBM

2107900

YES

YES

YES

YES

YES

YES

IBM

S/390 DASD ECKD

YES

YES

YES

YES

YES

YES

IBM

Nseries.*

YES

YES

YES

YES

YES

NETAPP

LUN.*

YES

YES

YES

YES

YES

YES

Pillar

Axiom 500

YES

YES

YES

YES

YES

YES

Pillar

Axiom.*

YES

SGI

TP9[13]00

YES

YES

YES

YES

YES

YES

SGI

TP9[45]00

YES

YES

YES

YES

YES

YES

SGI

IS.*

YES

YES

STK

OPENstorage D280

YES

YES

YES

YES

YES

YES

SUN

{StorEdge 3510,T4}

YES

YES

YES

YES

YES

YES

This list can be obtained by using the following commands

  1. In RHEL5 and later
    • Make sure multipathd is running and then run
    • # multipathd -k
    • multipathd> show config

  2. In SLES10 and later
    • run
    • # multipath -t

3.2. Devices that have hardware handler in kernel

Some storage devices need special handling for path failover/failback, which means that they need a hardware handler present in the kernel. The following is the list of storage devices that have a hardware handler in the kernel.

Generic controller Name

Storage device Name

Mainline 2.6.22

Mainline 2.6.23

RHEL5

RHEL5 U1

SLES10

SLES10 SP1

LSI Engenio

IBM DS4000 Series, IBM DS3000 Series

YES

YES

YES

EMC CLARiiON

AX/CX-series

YES

YES

YES

YES

YES

YES

HP Storage Works and Fibrecat

YES

4. Install and Boot on a multipathed device

There are advantages to placing your boot/root partitions on the SAN, like avoiding a single point of failure, the disk contents being accessible even if the server is down, etc. This section describes the steps to be taken in the two major distributions to successfully install and boot off a SAN/multipathed device.

4.1. Installation instructions for SLES10

Note: This is tested on SLES10 SP1. If you have any other version, your mileage may vary.

  1. Install the OS in a device that has multiple paths.
    Make sure the root device's "Mount by" option is set to "Device by-id" (this option is available under "expert partitioner" as "fstab options").
    If you are installing on LVM, choose "Mount by" to be "by label".

  2. Complete the installation. Let the system boot up in multiuser mode.
    Make sure the root device, swap device are all referenced by their by-id device node entries instead of /dev/sd* type names. If they are not, fix them first.
    If using LVM, make sure the devices are referenced by LABEL.

  3. Once booted, update /etc/multipath.conf
    If you have to make changes to /etc/multipath.conf, make the changes.
    Note: the option "user_friendly_names" is not supported by the initrd. So, if you have user_friendly_names in your /etc/multipath.conf file, comment it out for now; you can uncomment it later.

  4. Enable multipathing by running the following commands
    • chkconfig boot.multipath on
    • chkconfig multipathd on
  5. Add multipath module to initrd

    Edit the file /etc/sysconfig/kernel and add "dm-multipath" to INITRD_MODULES (see the example at the end of this section).
    Note: If your storage device needs a hardware handler, add the corresponding module to INITRD_MODULES in addition to "dm-multipath". For example, add "dm-rdac" and "dm-multipath" to support IBM's DS4K storage devices.

  6. Run mkinitrd; if required, run lilo. Note: You can uncomment user_friendly_names here if you commented it out above.

  7. Reboot

The system will come up with the root disk on a multipathed device.

Note: You can switch off multipathing to the root device by adding multipath=off to the kernel command line.
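As an illustration of steps 5 and 6 above, the INITRD_MODULES line in /etc/sysconfig/kernel might end up looking like the following (mptspi stands in for whatever adapter modules your system already lists, and dm-rdac is only needed for storage that uses the rdac handler), after which the initrd is rebuilt:

INITRD_MODULES="mptspi dm-multipath dm-rdac"
# mkinitrd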

4.2. Installation instructions for RHEL5

Note: This is tested on RHEL5 U1. If you have any other version, your mileage may vary.

  1. Start the installation with the kernel command line "linux mpath"

    • You will see multipathed devices (/dev/mapper/mpath*) as installation devices.
  2. Finish the installation.
  3. Reboot.

    • If your boot device does not need multipath.conf and does not have a special hardware handler, then you are done.
      If you have either of these, follow the steps below.

  4. Once booted, update multipath.conf file, if needed.
  5. Run mkinitrd; if you need a hardware handler, add it to the initrd with the --with option.

    • # mkinitrd --with=dm-rdac /boot/initrd.final.img $(uname -r)

  6. Replace the initrd in your grub.conf/lilo.conf/yaboot.conf with the newly built initrd.
  7. Reboot.

The system will come up with the root disk on a multipathed device.

Note: You can switch off multipathing to the root device by adding multipath=off to the kernel command line.

  • Note: By default, RedHat disables dm-multipath by blacklisting all devices in /etc/multipath.conf and exempting just your root device. If you do not see your other multipath devices in the output of multipath -ll, then check and fix the blacklist in /etc/multipath.conf.
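For reference, the blacklist generated by the installer typically looks like the sketch below: everything is blacklisted with a WWID wildcard and only the boot device is exempted (the WWID shown is taken from an example later on this page and is illustrative):

blacklist {
        wwid "*"
}
blacklist_exceptions {
        wwid "3600508b400011c130003000000fc0000"
}

To bring other LUNs under DM-Multipath control, either add their WWIDs to blacklist_exceptions or remove the wildcard entry from the blacklist.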

4.3. Moving root/swap from single path device to multipath device

The procedure above (for Red Hat) gives details on how to use linux mpath to install on a multipathed storage device. But what if you have already installed Red Hat on a SCSI disk (instead of a dm device) that has multiple paths?

This section gives details on how to move your root and swap from /dev/sd?? to /dev/mapper/mpath??.

This procedure is tested on RHEL 5.1. If you are trying on a different release of RHEL, your mileage may vary.

  1. Before you start with this procedure, make sure
    • a) your /etc/multipath.conf works properly in your current setup.
      • — appropriate blacklist (make sure your root is not blacklisted)
      • — special config stanzas for your storage
      • — etc.,
    • b) your root device is referred by LABEL rather than the scsi name in both /etc/fstab and in your boot loader configuration file.
  2. run «/sbin/scsi_id -g -u -s /block/$your_disk_name», and save the wwid
    • repeat this step for all disks that are multipathed and used in /etc/fstab (/, /home, /boot, swap etc.,).
  3. save /sbin/mkinitrd, we will be making some changes to this file.
    • cp /sbin/mkinitrd /sbin/mkinitrd.save
  4. edit /sbin/mkinitrd:
    • — look for "use_multipath=0" and change it to "use_multipath=1"
    • — look for the line with "echo Creating multipath devices"
    • — add the following line immediately below the above line (before the for loop)
      • emit "/bin/multipath -v 0 $wwid"
    • replace $wwid with the wwid that was saved in step (2); add one such line for every disk/wwid noted down in step (2) (see the example after this list)
  5. Run mkinitrd to generate a new initrd image
    • — mkinitrd /boot/initrd-mpath-$(uname -r) $(uname -r)
  6. Change the boot loader configuration file:
    • — In your boot loader config file (yaboot.conf or grub.conf or lilo.conf), add a new stanza with the new initrd image and the original kernel.
      • i.e copy the original stanza as is and modify only the initrd line to be initrd-mpath-$(uname -r) (of course with uname -r expanded)
    • — add an option "fastboot" to the kernel command line. It can be added to the "append" string in the stanza.
  7. Reboot.
  8. Run mkinitrd with rootdev to generate a new initrd image
    • — mkinitrd -f --rootdev LABEL=$ROOTLABEL /boot/initrd-mpath-$(uname -r) $(uname -r)
      • Note: -f is to forcefully overwrite the old initrd image
      • Note: Use your root device’s label and _not_ $ROOTLABEL
  9. Remove the file /etc/blkid/blkid.tab, which has labels for non-multipathed block devices.
    • — rm /etc/blkid/blkid.tab
      • Note: Don’t worry about removing this, it will be created the next time system is rebooted.
  10. Remove the option «fastboot» from the kernel command line (added in step (6) above).
  11. restore the original mkinitrd (saved in step (3))
    • mv /sbin/mkinitrd.save /sbin/mkinitrd
    • optionally you can save the modified one for your future reference
  12. Reboot again.
  13. Verification: Your devices should be under dm-multipath’s control. Verify it by running «df» and/or «cat /proc/swaps».
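As an illustration of step 4, if step 2 returned the WWID 3600508b400011c130003000000fc0000 (an illustrative value), the line added to /sbin/mkinitrd below the "echo Creating multipath devices" line would be:

emit "/bin/multipath -v 0 3600508b400011c130003000000fc0000"

with one such emit line for every WWID noted in step 2.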

4.4. Other Distributions

Debian Multipath Support

Unsupported SLES9

5. SCSI Hardware handler

Some of the storage devices, like IBM's DS4K, require special handling for path failover and failback. Until 2.6.27, this was handled at the dm layer by a hardware handler. This functionality is available in RHEL 5.3 and SLES11 currently.

These dm-layer handlers had a drawback: the underlying SCSI layer does not know about them and would send I/O on the passive path, which would fail after a timeout and print extraneous error messages on the console.

This problem was taken care of by moving the hardware handler to the SCSI layer, hence the term SCSI hardware handler. These handlers are modules created under the SCSI directory. These modules need to be included in the initrd image and loaded before the SCSI modules that probe the storage, to take full advantage of them.

Following are the instructions to include these modules in different distributions.

5.1. Including hardware handler in RHEL 5.3

  1. Install the OS as you normally would.
    • If your storage is active/passive (like DS4K), then your installation would be faster if you disable the connectivity of the passive path.
  2. Boot the system.
  3. Create the new initrd image using the following command
    • # mkinitrd /boot/initrd-$(uname -r)-scsi_dh.img $(uname -r) --preload=scsi_dh_rdac
      Note: The above command is for scsi_dh_rdac. If you want to include other hardware handlers, use them instead.

  4. Replace the initrd in your grub.conf/lilo.conf/yaboot.conf with the newly built initrd.
  5. Reboot

5.2. Including hardware handler in SLES11 and SLES10 SP3

Note: If your root device is on a multipathed device, then all the SCSI hardware handlers will be included in the initrd automatically, and the following steps are not needed.

  1. Install the OS as you normally would.
    • If your storage is active/passive (like DS4K), then your installation would be faster if you disable the connectivity of the passive path.
  2. Boot the system.
  3. Edit the file /etc/sysconfig/kernel and add "scsi_dh_rdac" to INITRD_MODULES.

    • Note: If you want to include other hardware handlers, use them instead of scsi_dh_rdac.

    • Note: Insert the hardware handler module before the adapters that will probe for your storage. For example, change "ipr qla2xxx" to "ipr scsi_dh_rdac qla2xxx".

  4. Create the new initrd image using the following command
    • # mkinitrd -k /boot/vmlinux-$(uname -r) -i /boot/initrd-$(uname -r)-scsi_dh -M /boot/System.map-$(uname -r)

  5. Replace the initrd in your grub.conf/lilo.conf/yaboot.conf with the newly built initrd.
  6. Reboot

6. Tips and Tricks

  1. Using alias: By default, multipathed devices are named with the uid (WWID) of the device, which one accesses through /dev/mapper/${uid_name}. When user_friendly_names is used, devices will be named mpath0, mpath1, etc., which may meet one's needs. The user also has the option to define an alias in multipath.conf for each device.

Syntax is:

multipaths {
        multipath {
                wwid    3600a0b800011a2be00001dfa46cf0620
                alias   mydev1
        }
}
  2. Persistent device names: The names (uid_names or mpath names or alias names) that appear in /dev/mapper are persistent across boots, whereas the names dm-0, dm-1 etc. can change between reboots. So, it is advisable to use the device names that appear under /dev/mapper and avoid using the dm-? names.
  3. Restart of tools after changing the multipath.conf file: Once the multipath.conf file is changed, the multipath tools need to be rerun for those configuration values to take effect. One has to kill multipathd, run multipath -F, and then restart multipathd and multipath.
  4. Devices with partitions: Create device partitions before running multipath, as kpartx is configured to create multipathed partitions that way. Partitions on device mpath0 appear as /dev/mapper/mpath0p1, /dev/mapper/mpath0p2, etc.
  5. Using the bindings file in a clustered environment: The bindings file holds the bindings between the device mapper names and the uid of the underlying device. By default the file is /var/lib/multipath/bindings; this can be changed with the multipath command line option -b. In a clustered environment, this file can be created on one node and transferred to the others to get the same names.
    Note that the same effect can also be achieved by using alias and having the same multipath.conf file on all the nodes of the cluster.

  6. Getting the multipath device name corresponding to a SCSI device: If one knows the name of a SCSI device and wants to get the device mapper name associated with it, one could use multipath -l /dev/sda, where sda is the SCSI device. On the other hand, if one knows the device mapper name and wants to know the underlying device names, one could use the same command with the device mapper name, i.e. multipath -l mpath0, where mpath0 is the device mapper name.
  7. When using LVM on dm-multipath devices, it is better to turn LVM scanning off on the underlying SCSI devices. This can be done by changing the filter parameter in /etc/lvm/lvm.conf to filter = [ "a/dev/mapper/.*/", "r/dev/sd.*/" ].
    If your root device is also a multipathed LVM device, then make the above change before you create a new initrd image.

  8. To find out if your device (vendor/product) is supported by the tools by default, do the following.
    • In RHEL: Make sure that multipathd is running, then run
      # multipathd -k
      multipathd> show config
      This command will list all the devices that are built in to the tools.
    • In SLES: run
      # multipath -t
      This would list all the devices that are built in to the tools.
  9. If you have more than 1024 paths, you need to set the configuration parameter max_fds to a number equal to or greater than the number of paths + 32. Otherwise, the multipathd daemon might die with an error (in /var/log/messages) saying that there are too many open files.

  10. When multipath/multipathd starts you might see messages like
     device-mapper: table: 253:0: multipath: error getting device
     device-mapper: ioctl: error adding target to table
    in the console or /var/log/messages. This is due to dm-multipath trying to create multipath devices for your root device and/or other devices that are already mounted or opened.
    You can avoid this by adding a blacklist stanza in your /etc/multipath.conf file for those devices that generate these errors.
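For example, if the messages come from the local (non-multipathed) system disk, a sketch of such a blacklist stanza could look like the following (sda is a placeholder for your local disk; its WWID, as reported by scsi_id, can be used instead of a devnode pattern):

blacklist {
        devnode "^sda$"
}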

7. References

7.1. General

Mainline Documentation for multipath tools

Christophe's "Multipath Implementation" page

LWN Article

Device Mapper Resource Page

Linux Multipathing: Ottawa Linux Symposium 2005

Request-based Dm-Multipathing: Ottawa Linux Symposium 2007

Using the Device-Mapper Multipath feature of Red Hat Enterprise Linux 5.2

Managing Multipath Devices in SLES 10

Debian Installer — Multipath Support


How to troubleshoot device-mapper: table: 253:7: multipath: error getting device?

Version: Red Hat 6.4
Problem:

Responses

]# dmsetup info -C
Name Maj Min Stat Open Targ Event UUID
vg00-LogVol01 253 5 L--w 1 1 0 LVM-eJV2Rjx11AacJLzet6tcb2z2O9PTUku64U1mtUXXTcl72B6Iu6W5V6Hin7uTNOkw
vg00-LogVol00 253 0 L--w 1 1 0 LVM-eJV2Rjx11AacJLzet6tcb2z2O9PTUku6EJrCeqVbJoyKw44HELUYjF9N2dVlHc4Q
mpathd 253 4 L--w 33 1 1 mpath-36001438009b064580000400000710000
mpathc 253 3 L--w 9 1 1 mpath-36001438009b064580000400000650000
mpathb 253 2 L--w 63 1 1 mpath-36001438009b0645800004000006b0000
vg00-LogVol03 253 1 L--w 1 1 0 LVM-eJV2Rjx11AacJLzet6tcb2z2O9PTUku6HogLq442U3F5A2Kj4PDAOm56R1Krex2j
vg00-LogVol02 253 6 L--w 1 1 0 LVM-eJV2Rjx11AacJLzet6tcb2z2O9PTUku6wQS2Nd3n04mFad51Lx1Y8VBdqIxHq0KV
[root@j1

]# vgs
VG #PV #LV #SN Attr VSize VFree
vg00 1 4 0 wz--n- 558.68g 369.23g
[root@j1

]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 vg00 lvm2 a-- 558.68g 369.23g
[root@j1

]# lvs
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
LogVol00 vg00 -wi-ao-- 97.66g
LogVol01 vg00 -wi-ao-- 9.77g
LogVol02 vg00 -wi-ao-- 19.53g
LogVol03 vg00 -wi-ao-- 62.50g
[root@j1

]# lsmod |grep dm_multipath
dm_multipath 17756 4 dm_round_robin
dm_mod 82839 22 dm_multipath,dm_mirror,dm_log

]# grep -v ^$ /etc/multipath.conf
defaults {
udev_dir /dev
polling_interval 10
path_selector "round-robin 0"
path_grouping_policy failover
getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
prio alua
path_checker tur
rr_min_io 100
rr_min_io_rq 1
rr_weight uniform
failback immediate
no_path_retry 12
user_friendly_names yes
}
blacklist {
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]"
devnode "^hd[a-z][[0-9]*]"
devnode "^cciss!c[0-9]d[0-9]"
devnode "^sda[0-9]"
}
devices {
device {
vendor "HP"
product "OPEN-.*"
path_grouping_policy multibus
getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
path_selector "round-robin 0"
path_checker tur
features "0"
hardware_handler "0"
prio const
failback immediate
rr_weight uniform
no_path_retry queue
rr_min_io 1000
rr_min_io_rq 1
}
}

Also, it's better to put output and config files between code tags to preserve the formatting (for readability).

(I added the output you provided into the original question.)

Is your operating system installed on a multipathed device?

If so, then you should have installed RHEL6 using a special installation boot option mpath , otherwise multipathing is not started within initramfs: if your root filesystem is on a multipathed disk, it causes LVM to grab one of the individual paths and lock onto it. When the dm-multipath is started a bit later, it cannot get exclusive access to all individual paths of the multipathed system disk because LVM is already locked onto one of the individual paths.

If this is your problem, it might be better to reinstall the OS unless you’re very familiar with multipathing and LVM: it can be fixed without reinstallation, but the procedure is a bit tricky. The first step is to edit /etc/sysconfig/mkinitrd/multipath to contain:

and rebuilding your initrd using the mkinitrd command. This will cause dm-multipath to be started earlier in the boot sequence, when the system is still running on initrd, before LVM volume groups are activated. As a result, dm-multipath gets to claim the individual paths of the multipath devices first, then LVM can lock on to the multipathed device instead of one of the individual /dev/sd* paths.

For LVM this is not a problem, as it will scan all the disk devices it can find anyway. But for plain old disk partitions (think /boot !), this will make the original device name (typically /dev/sda1) inaccessible, as you should now be using the equivalent multipathed device name, i.e. /dev/mapper/mpath*p1 , or the UUID= syntax in /etc/fstab to specify the filesystem. As a result, the system will probably drop into single user mode because it cannot mount the /boot filesystem.

So be aware of this, and be prepared to get on the system console to fix your /etc/fstab to match the new situation the first time you boot the system after making this change.


5.3.18-3-pve multipath issue

speedlnx


Hello,
since the last upgrade of my servers to the latest kernel 5.3.18-3-pve, multipath stopped working. Everything works great on 5.3.10-1-pve and I can manage my cluster storage with LVM on my SAN. Is there any change in the latest kernel? Is there any option I have to add to make it work?

When booting with 5.3.18-3-pve, multipath -ll gives me blank output and I get this error during boot:

device-mapper: table: 253:9: multipath: error getting device

and during boot of the working kernel I get:

[ 17.617230] scsi 17:0:0:7: Attached scsi generic sg10 type 0
[ 17.617233] scsi 17:0:0:7: Embedded Enclosure Device
[ 17.617604] scsi 17:0:0:7: Power-on or device reset occurred
[ 17.617617] scsi 17:0:0:7: Failed to get diagnostic page 0x1
[ 17.617671] scsi 17:0:0:7: Failed to bind enclosure -19

but everything works out anyway.

My configuration is:

2x Lenovo server with QLogic Corp. ISP2722-based 16/32Gb
1x Lenovo SAN DE2000H

pve-manager/6.1-8/806edfe1 (running kernel: 5.3.10-1-pve) Working kernel.

defaults {
find_multipaths no
polling_interval 2
path_selector "round-robin 0"
path_grouping_policy multibus
uid_attribute ID_SERIAL
rr_min_io 100
failback immediate
no_path_retry queue
user_friendly_names yes
}

blacklist {
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z][[0-9]*]"
devnode "^sda[[0-9]*]"
devnode "^cciss!c[0-9]d[0-9]*"
}

multipaths {
multipath {
wwid 36d039ea00006a630000001855ea5d2ff
alias mpath1
}
multipath {
wwid 36d039ea00006a2480000014d5ea1c997
alias mpath2
}
multipath {
wwid 36d039ea00006a2480000018e5ea610f2
alias mpath3
}
}


Device mapper table 253 2 multipath error getting device

]# l /dev/mapper/
total 0
1217 drwxr-xr-x 2 root 60 Dec 8 17:41 ./
1214 drwxr-xr-x 10 root 14320 Dec 8 17:44 ../
4795 crw------- 1 root 10, 63 Dec 8 17:41 control

]# multipath -v2 -d
create: mpath0 (3600508b400011c130003000000fc0000) COMPAQ,HSV111 (C)COMPAQ
[size=100G][features=0][hwhandler=1 hp-sw]
_ round-robin 0 [prio=4][undef]
_ 0:0:0:1 sda 8:0 [undef][ready]
_ 0:0:1:1 sdb 8:16 [undef][ready]
_ 1:0:0:1 sdc 8:32 [undef][ready]
_ 1:0:1:1 sdd 8:48 [undef][ready]

]# cat /etc/multipath.conf
defaults {
user_friendly_names yes
}

blacklist {
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^(hd|xvd)[a-z]*"
wwid "*"
}

# Make sure our multipath devices are enabled.

blacklist_exceptions {
wwid "3600508b400011c130003000000fc0000"
}

multipaths {
multipath {
wwid "3600508b400011c130003000000fc0000"
alias mpath0
}
}
devices {
device {
vendor "HP|COMPAQ*"
product "HSV1*"
hardware_handler "1 hp-sw"
path_grouping_policy group_by_prio
}
}

The error happens on the "reload" ioctl:
[root@localhost

]# multipath -v9
.
mpath0: pgfailover = -1 (internal default)
mpath0: pgpolicy = failover (internal default)
mpath0: selector = round-robin 0 (internal default)
mpath0: features = 0 (internal default)
mpath0: hwhandler = 1 hp-sw (controller setting)
mpath0: rr_weight = 1 (internal default)
mpath0: minio = 1000 (config file default)
mpath0: no_path_retry = NONE (internal default)
pg_timeout = NONE (internal default)
mpath0: set ACT_CREATE (map does not exist)
libdevmapper: ioctl/libdm-iface.c(1623): device-mapper: reload ioctl failed: Invalid argument
mpath0: domap (0) failure for create/reload map
mpath0: remove multipath map
.

dmesg after that shows several pairs of following messages:


Device mapper table 253 2 multipath error getting device

The connection from the server through the HBA to the storage controller is referred as a path. When multiple paths exists to a storage device(LUN) on a storage subsystem, it is referred as multipath connectivity. It is a enterprise level storage capability. Main purpose of multipath connectivity is to provide redundant access to the storage devices, i.e to have access to the storage device when one or more of the components in a path fail. Another advantage of multipathing is the increased throughput by way of load balancing.

Note: Multipathing protects against the failure of path(s) and not the failure of a specific storage device.

Common example of multipath is a SAN connected storage device. Usually one or more fibre channel HBAs from the host will be connected to the fabric switch and the storage controllers will be connected to the same switch.

A simple example of multipath could be: 2 HBAs connected to a switch to which the storage controllers are connected. In this case the storage controller can be accessed from either of the HBAs and hence we have multipath connectivity.

In the following diagram each host has 2 HBAs and each storage has 2 controllers. With the given configuration setup each host will have 4 paths to each of the LUNs in the storage.

In Linux, a SCSI device is configured for a LUN seen on each path. i.e, if a LUN has 4 paths, then one will see four SCSI devices getting configured for the same device. Doing I/O to a LUN in a such an environment is unmanageable

  • applications/administrators do not know which SCSI device to use
  • all applications consistently using the same device
  • in case of a path failure, knowledge to retry the I/O on a different path
  • always using the storage device specific preferred path
  • spreading I/O between multiple valid paths

1.1. Device Mapper

Device mapper is a block subsystem that provides layering mechanism for block devices. One can write a device mapper to provide a specific functionality on top of a block device.

Currently the following functional layers are available:

  • concatenation
  • mirror
  • striping
  • encryption
  • flaky
  • delay
  • multipath

Multiple device mapper modules can be stacked to get the combined functionality.

Click here for more information on device mapper.

1.2. Device Mapper Multipathing

Object of this document is to provide details on device mapper multipathing (DM-MP). DM-MP resolves all the issues that arise in accessing a multipathed device in Linux. It also provides a consistent user interface for storage devices provided by multiple vendors. There is only one block device (/dev/mapper/XXX) for a LUN. This is the device created by device mapper.

Paths are grouped into priority groups, and one of the priority group will be used for I/O, and is called active. A path selector selects a path in the priority group to be used for an I/O based on some load balancing algorithm (for example round-robin).

When a I/O fails in a path, that path gets disabled and the I/O is retried in a different path in the same priority group. If all paths in a priority group fails, a different priority group which is enabled will be selected to send I/O.

DM-MP consists of 4 components:

DM MP kernel module — Kernel module that is responsible for making the multipathing decisions in normal and failure situations.

multipath command — User space tool that allows the user with initial configuration, listing and deletion of multipathed devices.

multipathd daemon — User space daemon that constantly monitors the paths. It marks a path as failed when it finds the path faulty and if all the paths in a priority group are faulty then it switches to the next enable priority group. It keeps checking the failed path, once the failed path comes alive, based on the failback policy, it can activate the path. It provides an CLI to monitor/manage individual paths. It automatically creates device mapper entries when new devices comes into existence.

kpartx — User space command that creates device mapper entries for all the partitions in a multipathed disk/LUN. When the multipath command is invoked, this command automatically gets invoked. For DOS based partitions this command need to be run manually.

2. Terminology, Concepts and Usage

2.1. Output of multipath command

Standard output of multipath command

Annotated output of multipath command

2.2. Terminology

Path Connection from the server through a HBA to a specific LUN. Without DM-MP, each path would appear as a separate device.

Paths are grouped into a path groups. At any point of time only path group will be active. Path selector decides which path in the path group gets to send the next I/O. I/O will be sent only to the active path.

Each path has a specific priority. A priority callout program provides the priority for a given path. The user space commands use this priority value to choose an active path. In the group_by_prio path grouping policy, path priority is used to group the paths together and change their relative weight with the round robin path selector.

Path Group Priority Sum of priorities of all non-faulty paths in a path group. By default, the multipathd daemon tries to keep the path group with the highest priority active.

Path Grouping Policy Determines how the path group(s) are formed using the available paths. There are five different policies:

multibus: One path group is formed with all paths to a LUN. Suitable for devices that are in Active/Active mode.
failover: Each path group will have only one path.

group_by_serial: One path group per storage controller(serial). All paths that connect to the LUN through a controller are assigned to a path group. Suitable for devices that are in Active/Passive mode.

  • group_by_prio: Paths with same priority will be assigned to a path group.
  • group_by_node_name: Paths with same target node name will be assigned to a path group.
  • Setting multibus as path grouping policy for a storage device in Active/Passive mode will reduce the I/O performance.

    Path Selector A kernel multipath component that determines which path will be chosen for the next I/O. Path selector can have an appropriate load balancing algorithm. Currently one one path selector exists, which is the round-robin.

    Path Checker Functionality in the user space that is used to check the availability of a path. This is implemented as a library function that is used by both multipath command and the multipathd daemon. Currently, there are 3 path checkers:

    readsector0: sends a read command to sector 0 at regular time interval. Produce lot of error messages in Active/Passive mode. Hence, suitable only for Active/Active mode.

  • tur: sends a test unit ready command at regular interval.
  • rdac: specific to the lsi-rdac device. Sends a inquiry command and sets the status of the path appropriately.
  • This refers to the physical state of a path. A path can be in one of the following states:

    ready: Path is up and can handle I/O requests.

    faulty: Path is down and cannot handle I/O requests.

    ghost: Path is a passive path. This state is shown in the passive path in Active/Passive mode.

  • shaky: Path is up, but temporarily not available for I/O requests.
  • DM Path States This refers to the DM module(kernel)’s view of the path’s state. It can be in one of the two states:

    active: Last I/O sent to this path successfully completed. Analogous to ready path state.

    failed: Last I/O to this path failed. Analogous to faulty path state.

    Path Group State Path Groups can be in one of the following three states:

      active: I/O will be sent to the multipath device will be sent to this path group. Only one path group will be in this state.

    enabled: If none of the paths in the active path group is in the ready state, I/O will be sent these path groups. There can be one or more path groups in this state.

  • disabled: In none of the paths in the active path group and enabled path group is in the ready state. I/O will be sent to these path groups. There can be one or more path groups in this state. This state is available only for certain storage devices.
  • UID Callout (or) WWID Callout A standalone program that returns a globally unique identifier for a path. multipath/multipathd invokes this callout and uses the ID returned to coalesce multiple paths to a single multipath device.

    Priority Callout A standalone program that returns the priority for a path. multipath/multipathd invokes this callout and uses the priority value of the paths to determine the active path group.

    Hardware Handler Kernel personality module for storage devices that needs special handling. This module is responsible for enabling a path (at the device level) during initialization, failover and failback. It is also responsible for handling device specific sense error codes.

    When all the paths in a path group are in faulty state, one of the enabled path group (path with highest priority) with any paths in ready state will be made active. If there is no paths in ready state in any of the enabled path groups, then one of the disabled path group (path with highest priority) will be made active. Making a new path group active is also referred as switching of path group. Original active path group’s state will be changed to enabled.

    A failed path can become active at any point of time. multipathd keeps checking the path. Once it finds a path is active, it will change the state of the path to ready. If this action makes one of the enabled path group’s priority to be higher than the current active path group, multipathd may choose to failback to the highest priority path group.

    Under failback situations multipathd can do one of the following three things:

    1. immediate: Immediately failback to the highest priority path group.
    2. # of seconds: Wait for the specified number of seconds, for I/O to stabilize, then failback to the highest priority path group.
    3. do nothing: Do nothing, user explicitly fails back to the highest priority path group.

    This policy selection can be set by the user through /etc/multipath.conf.

    Active/Active Storage devices with 2 controller can be configured in this mode. Active/Active means that both the controllers can process I/Os.

    Active/Passive Storage devices with 2 controller can be configured in this mode. Active/Passive means that one of the controllers(active) can process I/Os, and the other one(passive) is in a standby mode. I/Os to the passive controller will fail.

    A user friendly and/or user defined name for a DM device. By default, WWID is used for the DM device. This is the name that is listed in /dev/disk/by-name directory. When the user_friendly_names configuration option is set, the alias of a DM device will have the form of mpath . User also has the option of setting a unique alias for each multipath device.

    2.3. Configuration File (/etc/multipath.conf)

    DM-Multipath allows many of the feature to be user configurable using the configuration file /etc/multipath.conf. multipath command and multipathd uses the configuration information from this file. This file is consulted only during the configuration of multipath devices. In other words, if the user makes any changes to this file, then the multipath command need to be rerun to configure the multipath devices (i.e the user has to do multipath -F followed by multipath).

    Support for many of the devices (as listed below) is inbuilt in the user space component of DM-Multipath. If the support for a specific storage device is not inbuilt or the user wants to override some of the values only then the user need to modify this file.

    This file has 5 sections:

    System level defaults («defaults«): Where the user can specify system level default override.

    Black listed devices («blacklist«): User can specify the list of devices they do not want to be under the control of DM-Multipath. These devices will be excluded.

    Black list exceptions («blacklist_exceptions«): Specific devices to be treated as multipath candidates even if they exist in the blacklist.

    Storage controller specific settings («devices«): User specified configuration settings will be applied to devices with specified «Vendor» and «Product» information.

    Device specific settings («multipaths«): User can fine tune configuration settings for individual LUNs.

    User can specify the values for the attributes in this file using regular expression syntax.

    For detailed explanation of the different attributes and allowed values for the attributes please refer to multipath.conf.annotated file.

      In Mainline, this file is located in the root directory of multipath-tools.

    In RedHat, this file is located in the directory /usr/share/doc/device-mapper-multipath-X.Y.Z/.

  • In SuSE, this file is located in the directory /usr/share/doc/packages/multipath-tools/
  • 2.3.1. Attribute value overrides

    Attribute values are set at multiple levels (internally in multipath tools and through multipath.conf file). Following is the order in which the attribute values will be overwritten.

    1. Global internal defaults, as specified in the man page of multipath.conf.
    2. Device specific internal defaults, as defined in libmultipath/hwtable.c.
    3. Items described in the defaults section of /etc/multipath.conf.
    4. Items defined in the devices section of /etc/multipath.conf.
    5. Items defined in the multipaths section of /etc/multipath.conf.

    Note that (4) will completely overwrite the configuration information defined in (2) above. So, even if you want to change or add only one attribute, you have to provide the whole list of attributes for that device (see the example below).
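
    For example, suppose a storage device already has built-in settings in hwtable.c and you only want to change its path_grouping_policy: the devices stanza must still carry the complete set of attributes you want in effect. A sketch (the vendor/product strings and the other attribute values are illustrative assumptions, not the actual built-in values of any device):

      devices {
              device {
                      vendor                  "VENDOR"
                      product                 "PRODUCT"
                      path_checker            tur
                      failback                immediate
                      # the single attribute you actually wanted to change
                      path_grouping_policy    multibus
              }
      }
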
    2.4. multipath, multipathd command usage

    The man pages of multipath and multipathd provide good details on the usage of the tools.

    multipathd has an interactive mode which can be used for querying and managing the paths, and also for checking the configuration details that will be used.

    While multipathd is running, invoke it again as multipathd -k. This enters an interactive command-line mode where the user can issue different commands. Check the man page for the list of available commands.
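
    A short sketch of such an interactive session (show config, show paths and show maps are standard interactive commands; the output is omitted here):

      # multipathd -k
      multipathd> show config
      multipathd> show paths
      multipathd> show maps
      multipathd> quit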

    3. Supported Storage Devices

    This is the list of devices that have configuration information built into the multipath tools. Not being in this list does not mean that a specific device is not supported; it just means that there is no built-in configuration for it in the multipath tools.

    Some of the devices need a hardware handler, which has to be compiled into the kernel. A device being in this list does not necessarily mean that the corresponding hardware handler is present in the kernel, and it is possible that the hardware handler is present in the kernel but the device is not in the list of built-in supported devices.

    3.1. Devices supported in multipath tools

    The following storage devices have configuration information built into the multipath tools.

    (In the original document this is a table with a column for the common name and one column each for Mainline 0.4.7, Mainline 0.4.8, RHEL5 U1 and SLES10 SP1, indicating in which releases the built-in entry is present; the per-release cells did not survive conversion. The entries listed include:)

    • 3PARdata
    • Xserve RAID
    • (COMPAQ,HP) (MSA|HSV)1.0.*
    • (COMPAQ,HP) (MSA|HSV)1.1.*
    • (COMPAQ,HP)
    • SYMMETRIX
    • (HITACHI|HP)
    • MSA VOLUME
    • LOGICAL VOLUME.*
    • S/390 DASD ECKD
    • Axiom 500
    • OPENstorage D280

    This list can be obtained by using the following commands:

    1. In RHEL5 and later, make sure multipathd is running and then run

      # multipathd -k
      multipathd> show config

    2. In SLES10 and later, run

      # multipath -t
    3.2. Devices that have hardware handler in kernel

    Some storage devices need special handling for path failover/failback, which means that they need a hardware handler present in the kernel. The following storage devices have a hardware handler in the kernel.

    (In the original document this is a table listing, for each generic controller name and storage device name, the availability of the hardware handler in Mainline 2.6.22, Mainline 2.6.23, RHEL5 U1 and SLES10 SP1; the per-release cells did not survive conversion. The entries listed include:)

    • LSI Engenio: IBM DS4000 Series, IBM DS3000 Series
    • EMC CLARiiON: AX/CX-series
    • HP Storage Works and Fibrecat

    4. Install and Boot on a multipathed device

    There are advantages in placing your boot/root partitions on the SAN, such as avoiding a single point of failure and keeping the disk contents accessible even if the server is down. This section describes the steps to be taken in the two major distributions to successfully install on, and boot from, a SAN/multipathed device.

    4.1. Installation instructions for SLES10

    Note: This is tested on SLES10 SP1. If you have any other version, your mileage may vary.

    Install the OS on a device that has multiple paths.
    Make sure the root device's "Mount by" option is set to "Device by-id" (this option is available under "expert partitioner" as "fstab options").
    If you are installing on LVM, choose "Mount by" to be "by label".

    Complete the installation. Let the system boot up in multiuser mode.
    Make sure the root device and swap device are all referenced by their by-id device node entries instead of /dev/sd* type names. If they are not, fix them first.
    If using LVM, make sure the devices are referenced by LABEL.

    Once booted, update /etc/multipath.conf if you need to make any changes.
    Note: the option "user_friendly_names" is not supported in the initrd. So, if you have user_friendly_names in your /etc/multipath.conf file, comment it out for now; you can uncomment it later.

  • Enable multipathing by running the following commands
    • chkconfig boot.multipath on
    • chkconfig multipathd on
  • Add multipath module to initrd

    Edit the file /etc/sysconfig/kernel and add "dm-multipath" to INITRD_MODULES (a sketch of these steps follows at the end of this section).
    Note: If your storage device needs a hardware handler, add the corresponding module to INITRD_MODULES in addition to "dm-multipath". For example, add "dm-rdac" and "dm-multipath" to support IBM's DS4K storage devices.

    Run mkinitrd; if required, run lilo. Note: You can uncomment user_friendly_names if you commented it out above.

  • Reboot
  • The system will come up with the root disk on a multipathed device.

    Note: You can switch off multipathing to the root device by adding multipath=off to the kernel command line.
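
    A condensed sketch of the commands behind the steps above on SLES10 (the INITRD_MODULES value is only an example for a DS4K-style setup; keep whatever modules your system already lists and use the hardware handler module appropriate for your storage):

      # chkconfig boot.multipath on
      # chkconfig multipathd on

      (in /etc/sysconfig/kernel, an example edit)
      INITRD_MODULES="dm-rdac dm-multipath"

      # mkinitrd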

    4.2. Installation instructions for RHEL5

    Note: This is tested on RHEL5 U1. If you have any other version, your mileage may vary.

    Start the installation with the kernel command line "linux mpath".

    • You will see multipathed devices (/dev/mapper/mpath*) as installation devices.
  • Finish the installation.

    If your boot device does not need any multipath.conf changes and does not need a special hardware handler, then you are done.
    If you need either of these, follow the steps below.

    Once booted, update multipath.conf file, if needed.

    Run mkinitrd; if you need a hardware handler, add it to the initrd with the --with option.

    # mkinitrd /boot/initrd.final.img --with=dm-rdac

  • Replace the initrd in your grub.conf/lilo.conf/yaboot.conf with the newly built initrd (see the example stanza at the end of this section).
  • Reboot.
  • The system will come up with the root disk on a multipathed device.

    Note: You can switch off multipathing to the root device by adding multipath=off to the kernel command line.

    Note: By default, RedHat disables dm-multipath by blacklisting all devices in /etc/multipath.conf; only your root device is exempted from the blacklist. If you do not see your other multipath devices in the output of multipath -ll, check and fix the blacklist in /etc/multipath.conf.
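
    As referenced above, a grub.conf stanza pointing at the newly built initrd might look roughly like this (the kernel version, root device and LABEL are assumptions for illustration only):

      title Red Hat Enterprise Linux Server (2.6.18-53.el5) with dm-multipath
              root (hd0,0)
              kernel /vmlinuz-2.6.18-53.el5 ro root=LABEL=/ rhgb quiet
              initrd /initrd.final.img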

    4.3. Moving root/swap from single path device to multipath device

    The procedure above (Red Hat) gives details on how to use linux mpath to install on a multipathed storage device. But what if you have already installed Red Hat on a SCSI disk (instead of a dm device) that has multiple paths?

    This section gives details on how to move your root and swap from /dev/sd?? to /dev/mapper/mpath??.

    This procedure is tested on RHEL 5.1. If you are trying on a different release of RHEL, your mileage may vary.

    1. Before you start with this procedure, make sure
      • a) your /etc/multipath.conf works properly in your current setup.
        • — appropriate blacklist (make sure your root is not blacklisted)
        • — special config stanzas for your storage
        • — etc.,
      • b) your root device is referred to by LABEL rather than the SCSI name in both /etc/fstab and in your boot loader configuration file.
    2. run "/sbin/scsi_id -g -u -s /block/$your_disk_name", and save the wwid
      • repeat this step for all disks that are multipathed and used in /etc/fstab (/, /home, /boot, swap etc.,).
    3. save /sbin/mkinitrd, we will be making some changes to this file.
      • cp /sbin/mkinitrd /sbin/mkinitrd.save
    4. edit /sbin/mkinitrd:
      • — look for "use_multipath=0" and change it to "use_multipath=1"
      • — look for the line with "echo Creating multipath devices"
      • — add the following line immediately below the above line (before the for loop)
        • emit "/bin/multipath -v 0 $wwid"
      • use the wwids that were saved in step (2); add one such line for every disk/wwid noted down in step (2)
    5. Run mkinitrd to generate a new initrd image
      • — mkinitrd /boot/initrd-mpath-$(uname -r) $(uname -r)
    6. Change the boot loader configuration file:
      • — In your boot loader config file (yaboot.conf or grub.conf or lilo.conf), add a new stanza with the new initrd image and the original kernel.
        • i.e copy the original stanza as is and modify only the initrd line to be initrd-mpath-$(uname -r) (of course with uname -r expanded)
      • — add an option "fastboot" to the kernel command line. It can be added to the "append" string in the stanza.
    7. Reboot.
    8. Run mkinitrd with --rootdev to generate a new initrd image
      • — mkinitrd -f --rootdev LABEL=$ROOTLABEL /boot/initrd-mpath-$(uname -r) $(uname -r)
        • Note: -f is to forcefully overwrite the old initrd image
        • Note: Use your root device’s label and _not_ $ROOTLABEL
    9. Remove the file /etc/blkid/blkid.tab, which has labels for non-multipathed block devices.
      • — rm /etc/blkid/blkid.tab
        • Note: Don’t worry about removing this, it will be created the next time system is rebooted.
    10. Remove the option "fastboot" from the kernel command line (added in step (6) above).
    11. restore the original mkinitrd (saved in step (3))
      • mv /sbin/mkinitrd.save /sbin/mkinitrd
      • optionally you can save the modified one for your future reference
    12. Reboot again.
    13. Verification: Your devices should be under dm-multipath's control. Verify it by running "df" and/or "cat /proc/swaps".
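
    A quick way to carry out the verification in the last step (all standard commands; the exact output depends on your setup):

      # df
      # cat /proc/swaps
      # multipath -ll

    The root and swap entries should now show up on /dev/mapper/mpath* devices instead of /dev/sd* devices.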

    4.4. Other Distributions

    5. SCSI Hardware handler

    Some of the storage devices, like IBM's DS4K, require special handling for path failover and failback. Until 2.6.27, this was handled at the dm layer by a hardware handler.

    The dm-layer handlers had the drawback that the underlying SCSI layer did not know about them and would still send I/O down the passive path, which would fail only after a timeout and would print extraneous error messages on the console.

    This problem was taken care of by moving the hardware handler to the SCSI layer, hence the term SCSI hardware handler. These handlers are modules created under the SCSI directory. To take full advantage of them, these modules need to be included in the initrd image and loaded before the SCSI adapter modules are inserted. This functionality is currently available in RHEL 5.3 and SLES11.

    Following are the instructions to include these modules in different distributions.

    5.1. Including hardware handler in RHEL 5.3

    1. Install the OS as you normally would.
      • If your storage is active/passive (like DS4K), then your installation would be faster if you disable the connectivity of the passive path.
    2. Boot the system.
    3. Create the new initrd image using the following command

    # mkinitrd /boot/initrd-$(uname -r)-scsi_dh.img $(uname -r) --preload=scsi_dh_rdac
    Note: The above command is for scsi_dh_rdac. If you want to include other hardware handlers, use them instead.

  • Replace the initrd in your grub.conf/lilo.conf/yaboot.conf with the newly built initrd.
  • Reboot
    5.2. Including hardware handler in SLES11 and SLES10 SP3

    Note: If your root device is on a multipathed device, then all the SCSI hardware handlers will be included in the initrd and the following steps are not needed.

    1. Install the OS as you normally would.
      • If your storage is active/passive (like DS4K), then your installation would be faster if you disable the connectivity of the passive path.
    2. Boot the system.

    Edit the file /etc/sysconfig/kernel and add "scsi_dh_rdac" to INITRD_MODULES.

    Note: If you want to include other hardware handlers, use them instead of scsi_dh_rdac.

    Note: Insert the hardware handler module before the adapters that will probe for your storage. For example, change "ipr qla2xxx" to "ipr scsi_dh_rdac qla2xxx".

    Create the new initrd image using the following command

      # mkinitrd -k /boot/vmlinux-$(uname -r) -i /boot/initrd-$(uname -r)-scsi_dh -M /boot/System.map-$(uname -r)

  • Replace the initrd in your grub.conf/lilo.conf/yaboot.conf with the newly built initrd.
  • Reboot
    6. Tips and Tricks

    1. Using alias: By default, the multipathed devices are named with the uid (WWID) of the device, which one accesses under /dev/mapper/. When user_friendly_names is used, devices will be named mpath0, mpath1, etc., which may meet one's needs. The user also has the option to define an alias in multipath.conf for each device (see the sketch after this list).
    2. Persistent device names: The names (uid names, mpath names or alias names) that appear in /dev/mapper are persistent across boots, whereas the names dm-0, dm-1, etc. can change between reboots. So, it is advisable to use the device names that appear under /dev/mapper and avoid using the dm-? names.
    3. Restart of tools after changing the multipath.conf file: Once the multipath.conf file is changed, the multipath tools need to be rerun for the new configuration values to take effect. One has to kill multipathd, run multipath -F, and then restart multipathd and run multipath.
    4. Devices with partitions: Create the device partitions before running multipath, as kpartx is configured to create multipathed partitions that way. Partitions on device mpath0 appear as /dev/mapper/mpath0p1, /dev/mapper/mpath0p2, etc.
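
    As mentioned in the alias tip above, a minimal multipaths stanza sketch (the WWID and the alias name are placeholders):

      multipaths {
              multipath {
                      wwid    36006016012345678000000000000beef
                      alias   oradata1
              }
      }

    The device would then appear as /dev/mapper/oradata1.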

    Using the bindings file in a clustered environment: The bindings file holds the bindings between the device mapper names and the uid of the underlying device. By default the file is /var/lib/multipath/bindings; this can be changed with the multipath command line option -b. In a clustered environment, this file can be created on one node and transferred to the other nodes to get the same names everywhere.
    Note that the same effect can also be achieved by using aliases and having the same multipath.conf file on all the nodes of the cluster.
    Getting the multipath device name corresponding to a SCSI device: If one knows the name of a SCSI device and wants to get the device mapper name associated with it, they could use multipath -l /dev/sda, where sda is the SCSI device. Conversely, if one knows the device mapper name and wants to know the underlying device names, they could use the same command with the device mapper name, i.e. multipath -l mpath0, where mpath0 is the device mapper name.

    When using LVM on dm-multipath devices, it is better to turn lvm scanning off on the underlying SCSI devices. This can be done by changing the filter parameter in /etc/lvm/lvm.conf to be filter = [ "a/dev/mapper/.*/", "r/dev/sd.*/" ].
    If your root device is also a multipathed lvm device, then make the above change before you create a new initrd image.
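
    In context, the relevant part of /etc/lvm/lvm.conf would then look roughly like this (only the filter line is taken from the tip above; the rest is a sketch):

      devices {
              # accept multipath devices, reject the underlying SCSI paths
              filter = [ "a/dev/mapper/.*/", "r/dev/sd.*/" ]
      }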

  • To find out if your device (vendor/product) is supported by the tools by default, do the following.
    • In RHEL: make sure that multipathd is running, then run multipathd -k and issue show config at the prompt. This will list all the devices that are built into the tools.
    • In SLES: run multipath -t. This will list all the devices that are built into the tools.

      If you have more than 1024 paths, you need to set the configuration parameter max_fds to a number equal to or greater than the number of paths + 32. Otherwise, the multipathd daemon might die with an error (in /var/log/messages) saying that there are too many files open.
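
      For example, for a setup with roughly 2000 paths, a defaults entry like the following would satisfy the paths + 32 rule (the value 2048 is just an illustration):

        defaults {
                max_fds 2048
        }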

    When multipath/multipathd starts, you might see error messages in the console or /var/log/messages. These are due to dm-multipath trying to create multipath devices for your root device and/or other devices that are already mounted or opened.



    [    0.000000] Initializing cgroup subsys cpuset
    [    0.000000] Initializing cgroup subsys cpu
    [    0.000000] Initializing cgroup subsys cpuacct
    [    0.000000] Linux version 3.16.0-4-686-pae (debian-kernel@lists.debian.org) (gcc version 4.8.4 (Debian 4.8.4-1) ) #1 SMP Debian 3.16.7-ckt9-3~deb8u1 (2015-04-24)
    [    0.000000] Disabled fast string operations
    [    0.000000] e820: BIOS-provided physical RAM map:
    [    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
    [    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
    [    0.000000] BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
    [    0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000003ffcffff] usable
    [    0.000000] BIOS-e820: [mem 0x000000003ffd0000-0x000000003ffddfff] ACPI data
    [    0.000000] BIOS-e820: [mem 0x000000003ffde000-0x000000003fffffff] ACPI NVS
    [    0.000000] BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
    [    0.000000] BIOS-e820: [mem 0x00000000ffb80000-0x00000000ffffffff] reserved
    [    0.000000] NX (Execute Disable) protection: active
    [    0.000000] SMBIOS 2.3 present.
    [    0.000000] DMI: ASUSTeK Computer Inc.  A6JC                /A6JC      , BIOS A6JCMAS.216  05/10/2006
    [    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
    [    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
    [    0.000000] e820: last_pfn = 0x3ffd0 max_arch_pfn = 0x1000000
    [    0.000000] MTRR default type: uncachable
    [    0.000000] MTRR fixed ranges enabled:
    [    0.000000]   00000-9FFFF write-back
    [    0.000000]   A0000-BFFFF uncachable
    [    0.000000]   C0000-CFFFF write-protect
    [    0.000000]   D0000-DFFFF uncachable
    [    0.000000]   E0000-EFFFF write-through
    [    0.000000]   F0000-FFFFF write-protect
    [    0.000000] MTRR variable ranges enabled:
    [    0.000000]   0 base 000000000 mask FC0000000 write-back
    [    0.000000]   1 disabled
    [    0.000000]   2 disabled
    [    0.000000]   3 disabled
    [    0.000000]   4 disabled
    [    0.000000]   5 disabled
    [    0.000000]   6 disabled
    [    0.000000]   7 disabled
    [    0.000000] PAT not supported by CPU.
    [    0.000000] found SMP MP-table at [mem 0x000ff780-0x000ff78f] mapped at [c00ff780]
    [    0.000000] initial memory mapped: [mem 0x00000000-0x01bfffff]
    [    0.000000] Base memory trampoline at [c009b000] 9b000 size 16384
    [    0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
    [    0.000000]  [mem 0x00000000-0x000fffff] page 4k
    [    0.000000] init_memory_mapping: [mem 0x36200000-0x363fffff]
    [    0.000000]  [mem 0x36200000-0x363fffff] page 2M
    [    0.000000] init_memory_mapping: [mem 0x34000000-0x361fffff]
    [    0.000000]  [mem 0x34000000-0x361fffff] page 2M
    [    0.000000] init_memory_mapping: [mem 0x00100000-0x33ffffff]
    [    0.000000]  [mem 0x00100000-0x001fffff] page 4k
    [    0.000000]  [mem 0x00200000-0x33ffffff] page 2M
    [    0.000000] init_memory_mapping: [mem 0x36400000-0x375fdfff]
    [    0.000000]  [mem 0x36400000-0x373fffff] page 2M
    [    0.000000]  [mem 0x37400000-0x375fdfff] page 4k
    [    0.000000] BRK [0x01798000, 0x01798fff] PGTABLE
    [    0.000000] BRK [0x01799000, 0x0179afff] PGTABLE
    [    0.000000] BRK [0x0179b000, 0x0179bfff] PGTABLE
    [    0.000000] BRK [0x0179c000, 0x0179cfff] PGTABLE
    [    0.000000] RAMDISK: [mem 0x3641c000-0x37205fff]
    [    0.000000] ACPI: Early table checksum verification disabled
    [    0.000000] ACPI: RSDP 0x000F79B0 000014 (v00 ACPIAM)
    [    0.000000] ACPI: RSDT 0x3FFD0000 000038 (v01 A M I  OEMRSDT  05000610 MSFT 00000097)
    [    0.000000] ACPI: FACP 0x3FFD0200 000084 (v02 A M I  OEMFACP  05000610 MSFT 00000097)
    [    0.000000] ACPI: DSDT 0x3FFD0460 007BCC (v01 A6JC0  A6JC0216 00000216 INTL 02002026)
    [    0.000000] ACPI: FACS 0x3FFDE000 000040
    [    0.000000] ACPI: APIC 0x3FFD0390 00005C (v01 A M I  OEMAPIC  05000610 MSFT 00000097)
    [    0.000000] ACPI: MCFG 0x3FFD03F0 00003C (v01 A M I  OEMMCFG  05000610 MSFT 00000097)
    [    0.000000] ACPI: BOOT 0x3FFD0430 000028 (v01 A M I  OEMBOOT  05000610 MSFT 00000097)
    [    0.000000] ACPI: OEMB 0x3FFDE040 000046 (v01 A M I  AMI_OEM  05000610 MSFT 00000097)
    [    0.000000] ACPI: Local APIC address 0xfee00000
    [    0.000000] 137MB HIGHMEM available.
    [    0.000000] 885MB LOWMEM available.
    [    0.000000]   mapped low ram: 0 - 375fe000
    [    0.000000]   low ram: 0 - 375fe000
    [    0.000000] BRK [0x0179d000, 0x0179dfff] PGTABLE
    [    0.000000] Zone ranges:
    [    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
    [    0.000000]   Normal   [mem 0x01000000-0x375fdfff]
    [    0.000000]   HighMem  [mem 0x375fe000-0x3ffcffff]
    [    0.000000] Movable zone start for each node
    [    0.000000] Early memory node ranges
    [    0.000000]   node   0: [mem 0x00001000-0x0009efff]
    [    0.000000]   node   0: [mem 0x00100000-0x3ffcffff]
    [    0.000000] On node 0 totalpages: 261998
    [    0.000000] free_area_init_node: node 0, pgdat c1658ec0, node_mem_map f5c1c020
    [    0.000000]   DMA zone: 32 pages used for memmap
    [    0.000000]   DMA zone: 0 pages reserved
    [    0.000000]   DMA zone: 3998 pages, LIFO batch:0
    [    0.000000]   Normal zone: 1740 pages used for memmap
    [    0.000000]   Normal zone: 222718 pages, LIFO batch:31
    [    0.000000]   HighMem zone: 276 pages used for memmap
    [    0.000000]   HighMem zone: 35282 pages, LIFO batch:7
    [    0.000000] Using APIC driver default
    [    0.000000] ACPI: PM-Timer IO Port: 0x808
    [    0.000000] ACPI: Local APIC address 0xfee00000
    [    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
    [    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x01] enabled)
    [    0.000000] ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
    [    0.000000] IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
    [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
    [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
    [    0.000000] ACPI: IRQ0 used by override.
    [    0.000000] ACPI: IRQ2 used by override.
    [    0.000000] ACPI: IRQ9 used by override.
    [    0.000000] Using ACPI (MADT) for SMP configuration information
    [    0.000000] smpboot: Allowing 2 CPUs, 0 hotplug CPUs
    [    0.000000] nr_irqs_gsi: 40
    [    0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
    [    0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000dffff]
    [    0.000000] PM: Registered nosave memory: [mem 0x000e0000-0x000fffff]
    [    0.000000] e820: [mem 0x40000000-0xfedfffff] available for PCI devices
    [    0.000000] Booting paravirtualized kernel on bare hardware
    [    0.000000] setup_percpu: NR_CPUS:32 nr_cpumask_bits:32 nr_cpu_ids:2 nr_node_ids:1
    [    0.000000] PERCPU: Embedded 14 pages/cpu @f75db000 s34752 r0 d22592 u57344
    [    0.000000] pcpu-alloc: s34752 r0 d22592 u57344 alloc=14*4096
    [    0.000000] pcpu-alloc: [0] 0 [0] 1 
    [    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 260226
    [    0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.16.0-4-686-pae root=UUID=bf5b31e0-4d64-48a7-8f3b-afd3d5f67521 ro quiet init=/lib/sysvinit/init
    [    0.000000] PID hash table entries: 4096 (order: 2, 16384 bytes)
    [    0.000000] Dentry cache hash table entries: 131072 (order: 7, 524288 bytes)
    [    0.000000] Inode-cache hash table entries: 65536 (order: 6, 262144 bytes)
    [    0.000000] Initializing CPU#0
    [    0.000000] Initializing HighMem for node 0 (000375fe:0003ffd0)
    [    0.000000] Initializing Movable for node 0 (00000000:00000000)
    [    0.000000] Memory: 1016756K/1047992K available (4598K kernel code, 522K rwdata, 1448K rodata, 656K init, 460K bss, 31236K reserved, 141128K highmem)
    [    0.000000] virtual kernel memory layout:
        fixmap  : 0xffd36000 - 0xfffff000   (2852 kB)
        pkmap   : 0xff600000 - 0xff800000   (2048 kB)
        vmalloc : 0xf7dfe000 - 0xff5fe000   ( 120 MB)
        lowmem  : 0xc0000000 - 0xf75fe000   ( 885 MB)
          .init : 0xc166d000 - 0xc1711000   ( 656 kB)
          .data : 0xc147dc40 - 0xc166b880   (1975 kB)
          .text : 0xc1000000 - 0xc147dc40   (4599 kB)
    [    0.000000] Checking if this processor honours the WP bit even in supervisor mode...Ok.
    [    0.000000] Hierarchical RCU implementation.
    [    0.000000]    RCU dyntick-idle grace-period acceleration is enabled.
    [    0.000000]    RCU restricting CPUs from NR_CPUS=32 to nr_cpu_ids=2.
    [    0.000000] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
    [    0.000000] NR_IRQS:2304 nr_irqs:512 16
    [    0.000000] CPU 0 irqstacks, hard=f5806000 soft=f5808000
    [    0.000000] Console: colour VGA+ 80x25
    [    0.000000] console [tty0] enabled
    [    0.000000] tsc: Fast TSC calibration using PIT
    [    0.000000] tsc: Detected 1729.062 MHz processor
    [    0.004013] Calibrating delay loop (skipped), value calculated using timer frequency.. 3458.12 BogoMIPS (lpj=6916248)
    [    0.004018] pid_max: default: 32768 minimum: 301
    [    0.004036] ACPI: Core revision 20140424
    [    0.018401] ACPI: All ACPI Tables successfully acquired
    [    0.020047] Security Framework initialized
    [    0.020066] AppArmor: AppArmor disabled by boot time parameter
    [    0.020069] Yama: disabled by default; enable with sysctl kernel.yama.*
    [    0.020095] Mount-cache hash table entries: 2048 (order: 1, 8192 bytes)
    [    0.020099] Mountpoint-cache hash table entries: 2048 (order: 1, 8192 bytes)
    [    0.020512] Initializing cgroup subsys memory
    [    0.020522] Initializing cgroup subsys devices
    [    0.020536] Initializing cgroup subsys freezer
    [    0.020542] Initializing cgroup subsys net_cls
    [    0.020553] Initializing cgroup subsys blkio
    [    0.020562] Initializing cgroup subsys perf_event
    [    0.020566] Initializing cgroup subsys net_prio
    [    0.020603] Disabled fast string operations
    [    0.020607] CPU: Physical Processor ID: 0
    [    0.020609] CPU: Processor Core ID: 0
    [    0.020616] mce: CPU supports 6 MCE banks
    [    0.020629] CPU0: Thermal monitoring enabled (TM2)
    [    0.020642] Last level iTLB entries: 4KB 128, 2MB 0, 4MB 2
    Last level dTLB entries: 4KB 128, 2MB 0, 4MB 8, 1GB 0
    tlb_flushall_shift: 6
    [    0.020842] Freeing SMP alternatives memory: 20K (c1711000 - c1716000)
    [    0.025268] ftrace: allocating 20747 entries in 41 pages
    [    0.032124] Enabling APIC mode:  Flat.  Using 1 I/O APICs
    [    0.032548] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
    [    0.072954] smpboot: CPU0: Genuine Intel(R) CPU           T2250  @ 1.73GHz (fam: 06, model: 0e, stepping: 08)
    [    0.076000] Performance Events: Core events, core PMU driver.
    [    0.076000] ... version:                1
    [    0.076000] ... bit width:              40
    [    0.076000] ... generic registers:      2
    [    0.076000] ... value mask:             000000ffffffffff
    [    0.076000] ... max period:             000000007fffffff
    [    0.076000] ... fixed-purpose events:   0
    [    0.076000] ... event mask:             0000000000000003
    [    0.076000] CPU 1 irqstacks, hard=f5942000 soft=f5944000
    [    0.076000] x86: Booting SMP configuration:
    [    0.076000] .... node  #0, CPUs:      #1
    [    0.008000] Initializing CPU#1
    [    0.008000] Disabled fast string operations
    [    0.085525] NMI watchdog: enabled on all CPUs, permanently consumes one hw-PMU counter.
    [    0.087599] x86: Booted up 1 node, 2 CPUs
    [    0.087603] smpboot: Total of 2 processors activated (6916.24 BogoMIPS)
    [    0.088151] devtmpfs: initialized
    [    0.088395] PM: Registering ACPI NVS region [mem 0x3ffde000-0x3fffffff] (139264 bytes)
    [    0.089821] pinctrl core: initialized pinctrl subsystem
    [    0.089821] NET: Registered protocol family 16
    [    0.089821] cpuidle: using governor ladder
    [    0.089821] cpuidle: using governor menu
    [    0.089821] ACPI: bus type PCI registered
    [    0.089821] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
    [    0.089821] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
    [    0.089821] PCI: not using MMCONFIG
    [    0.089821] PCI : PCI BIOS area is rw and x. Use pci=nobios if you want it NX.
    [    0.089821] PCI: PCI BIOS revision 3.00 entry at 0xf0031, last bus=5
    [    0.089821] PCI: Using configuration type 1 for base access
    [    0.089821] mtrr: your CPUs had inconsistent variable MTRR settings
    [    0.089821] mtrr: probably your BIOS does not setup all CPUs.
    [    0.089821] mtrr: corrected configuration.
    [    0.104078] ACPI: Added _OSI(Module Device)
    [    0.104082] ACPI: Added _OSI(Processor Device)
    [    0.104085] ACPI: Added _OSI(3.0 _SCP Extensions)
    [    0.104087] ACPI: Added _OSI(Processor Aggregator Device)
    [    0.109060] ACPI: Executed 1 blocks of module-level executable AML code
    [    0.112850] ACPI: Dynamic OEM Table Load:
    [    0.112860] ACPI: SSDT 0xF59CA400 000382 (v01 AMI    CPU1PM   00000001 INTL 02002026)
    [    0.113607] ACPI: Dynamic OEM Table Load:
    [    0.113613] ACPI BIOS Warning (bug): Incorrect checksum in table [SSDT] - 0xEC, should be 0x6C (20140424/tbprint-218)
    [    0.113620] ACPI: SSDT 0xF59CA000 000382 (v01 AMI    CPU2PM   00000001 INTL 02002026)
    [    0.116511] ACPI: Interpreter enabled
    [    0.116536] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [_S1_] (20140424/hwxface-580)
    [    0.116543] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [_S2_] (20140424/hwxface-580)
    [    0.116564] ACPI: (supports S0 S3 S4 S5)
    [    0.116567] ACPI: Using IOAPIC for interrupt routing
    [    0.116599] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
    [    0.119898] PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in ACPI motherboard resources
    [    0.119904] PCI: MMCONFIG for 0000 [bus00-3f] at [mem 0xe0000000-0xe3ffffff] (base 0xe0000000) (size reduced!)
    [    0.119906] PCI: Using MMCONFIG for extended config space
    [    0.119933] PCI: Ignoring host bridge windows from ACPI; if necessary, use "pci=use_crs" and report a bug
    [    0.121787] ACPI: Power Resource [GFAN] (off)
    [    0.128947] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
    [    0.128956] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
    [    0.128964] acpi PNP0A08:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
    [    0.129118] acpi PNP0A08:00: host bridge window [io  0x0000-0x0cf7] (ignored)
    [    0.129122] acpi PNP0A08:00: host bridge window [io  0x0d00-0xffff] (ignored)
    [    0.129126] acpi PNP0A08:00: host bridge window [mem 0x000a0000-0x000bffff] (ignored)
    [    0.129129] acpi PNP0A08:00: host bridge window [mem 0x000d0000-0x000dffff] (ignored)
    [    0.129133] acpi PNP0A08:00: host bridge window [mem 0x40000000-0xffffffff] (ignored)
    [    0.129136] PCI: root bus 00: using default resources
    [    0.129141] acpi PNP0A08:00: [Firmware Info]: MMCONFIG for domain 0000 [bus 00-3f] only partially covers this bridge
    [    0.129410] PCI host bridge to bus 0000:00
    [    0.129415] pci_bus 0000:00: root bus resource [bus 00-ff]
    [    0.129419] pci_bus 0000:00: root bus resource [io  0x0000-0xffff]
    [    0.129423] pci_bus 0000:00: root bus resource [mem 0x00000000-0xffffffff]
    [    0.129436] pci 0000:00:00.0: [8086:27a0] type 00 class 0x060000
    [    0.129604] pci 0000:00:01.0: [8086:27a1] type 01 class 0x060400
    [    0.129665] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
    [    0.129736] pci 0000:00:01.0: System wakeup disabled by ACPI
    [    0.129864] pci 0000:00:1b.0: [8086:27d8] type 00 class 0x040300
    [    0.129890] pci 0000:00:1b.0: reg 0x10: [mem 0xfebfc000-0xfebfffff 64bit]
    [    0.129997] pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
    [    0.130069] pci 0000:00:1b.0: System wakeup disabled by ACPI
    [    0.130158] pci 0000:00:1c.0: [8086:27d0] type 01 class 0x060400
    [    0.130271] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
    [    0.130345] pci 0000:00:1c.0: System wakeup disabled by ACPI
    [    0.130431] pci 0000:00:1c.3: [8086:27d6] type 01 class 0x060400
    [    0.130543] pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
    [    0.130620] pci 0000:00:1c.3: System wakeup disabled by ACPI
    [    0.130706] pci 0000:00:1d.0: [8086:27c8] type 00 class 0x0c0300
    [    0.130762] pci 0000:00:1d.0: reg 0x20: [io  0xec00-0xec1f]
    [    0.130881] pci 0000:00:1d.0: System wakeup disabled by ACPI
    [    0.130963] pci 0000:00:1d.1: [8086:27c9] type 00 class 0x0c0300
    [    0.131019] pci 0000:00:1d.1: reg 0x20: [io  0xe880-0xe89f]
    [    0.131135] pci 0000:00:1d.1: System wakeup disabled by ACPI
    [    0.131218] pci 0000:00:1d.2: [8086:27ca] type 00 class 0x0c0300
    [    0.131274] pci 0000:00:1d.2: reg 0x20: [io  0xe800-0xe81f]
    [    0.131423] pci 0000:00:1d.3: [8086:27cb] type 00 class 0x0c0300
    [    0.131479] pci 0000:00:1d.3: reg 0x20: [io  0xe480-0xe49f]
    [    0.131640] pci 0000:00:1d.7: [8086:27cc] type 00 class 0x0c0320
    [    0.131667] pci 0000:00:1d.7: reg 0x10: [mem 0xfebfbc00-0xfebfbfff]
    [    0.131779] pci 0000:00:1d.7: PME# supported from D0 D3hot D3cold
    [    0.131848] pci 0000:00:1d.7: System wakeup disabled by ACPI
    [    0.131933] pci 0000:00:1e.0: [8086:2448] type 01 class 0x060401
    [    0.132069] pci 0000:00:1e.0: System wakeup disabled by ACPI
    [    0.132158] pci 0000:00:1f.0: [8086:27b9] type 00 class 0x060100
    [    0.132264] pci 0000:00:1f.0: Force enabled HPET at 0xfed00000
    [    0.132275] pci 0000:00:1f.0: can't claim BAR 13 [io  0x0800-0x087f]: address conflict with ACPI CPU throttle [io  0x0810-0x0815]
    [    0.132282] pci 0000:00:1f.0: quirk: [io  0x0480-0x04bf] claimed by ICH6 GPIO
    [    0.132291] pci 0000:00:1f.0: ICH7 LPC Generic IO decode 3 PIO at 0250 (mask 000f)
    [    0.132450] pci 0000:00:1f.1: [8086:27df] type 00 class 0x01018a
    [    0.132469] pci 0000:00:1f.1: reg 0x10: [io  0x0000-0x0007]
    [    0.132482] pci 0000:00:1f.1: reg 0x14: [io  0x0000-0x0003]
    [    0.132495] pci 0000:00:1f.1: reg 0x18: [io  0x08f0-0x08f7]
    [    0.132508] pci 0000:00:1f.1: reg 0x1c: [io  0x08f8-0x08fb]
    [    0.132521] pci 0000:00:1f.1: reg 0x20: [io  0xffa0-0xffaf]
    [    0.132549] pci 0000:00:1f.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
    [    0.132552] pci 0000:00:1f.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
    [    0.132555] pci 0000:00:1f.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
    [    0.132559] pci 0000:00:1f.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
    [    0.132789] pci 0000:01:00.0: [10de:01d7] type 00 class 0x030000
    [    0.132811] pci 0000:01:00.0: reg 0x10: [mem 0xfd000000-0xfdffffff]
    [    0.132831] pci 0000:01:00.0: reg 0x14: [mem 0xc0000000-0xcfffffff 64bit pref]
    [    0.132850] pci 0000:01:00.0: reg 0x1c: [mem 0xfc000000-0xfcffffff 64bit]
    [    0.132874] pci 0000:01:00.0: reg 0x30: [mem 0xfbfe0000-0xfbffffff pref]
    [    0.133029] pci 0000:01:00.0: disabling ASPM on pre-1.1 PCIe device.  You can enable it with 'pcie_aspm=force'
    [    0.133041] pci 0000:00:01.0: PCI bridge to [bus 01]
    [    0.133047] pci 0000:00:01.0:   bridge window [mem 0xf9f00000-0xfdffffff]
    [    0.133054] pci 0000:00:01.0:   bridge window [mem 0xbdf00000-0xddefffff 64bit pref]
    [    0.133191] pci 0000:02:00.0: [10ec:8168] type 00 class 0x020000
    [    0.133220] pci 0000:02:00.0: reg 0x10: [io  0xc800-0xc8ff]
    [    0.133259] pci 0000:02:00.0: reg 0x18: [mem 0xfe0ff000-0xfe0fffff 64bit]
    [    0.133305] pci 0000:02:00.0: reg 0x30: [mem 0xfe0e0000-0xfe0effff pref]
    [    0.133430] pci 0000:02:00.0: supports D1 D2
    [    0.133433] pci 0000:02:00.0: PME# supported from D1 D2 D3hot D3cold
    [    0.133542] pci 0000:02:00.0: disabling ASPM on pre-1.1 PCIe device.  You can enable it with 'pcie_aspm=force'
    [    0.133555] pci 0000:00:1c.0: PCI bridge to [bus 02]
    [    0.133561] pci 0000:00:1c.0:   bridge window [io  0xc000-0xcfff]
    [    0.133567] pci 0000:00:1c.0:   bridge window [mem 0xfe000000-0xfe0fffff]
    [    0.133745] pci 0000:03:00.0: [8086:4222] type 00 class 0x028000
    [    0.133811] pci 0000:03:00.0: reg 0x10: [mem 0xfe1ff000-0xfe1fffff]
    [    0.134245] pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
    [    0.134396] pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device.  You can enable it with 'pcie_aspm=force'
    [    0.134424] pci 0000:00:1c.3: PCI bridge to [bus 03]
    [    0.134432] pci 0000:00:1c.3:   bridge window [mem 0xfe100000-0xfe1fffff]
    [    0.134551] pci 0000:04:01.0: [1180:0476] type 02 class 0x060700
    [    0.134577] pci 0000:04:01.0: reg 0x10: [mem 0x00000000-0x00000fff]
    [    0.134615] pci 0000:04:01.0: supports D1 D2
    [    0.134618] pci 0000:04:01.0: PME# supported from D0 D1 D2 D3hot D3cold
    [    0.134656] pci 0000:04:01.0: System wakeup disabled by ACPI
    [    0.134743] pci 0000:04:01.1: [1180:0552] type 00 class 0x0c0010
    [    0.134767] pci 0000:04:01.1: reg 0x10: [mem 0xfeaff800-0xfeafffff]
    [    0.134868] pci 0000:04:01.1: PME# supported from D0 D3hot D3cold
    [    0.134901] pci 0000:04:01.1: System wakeup disabled by ACPI
    [    0.134988] pci 0000:04:01.2: [1180:0822] type 00 class 0x080500
    [    0.135012] pci 0000:04:01.2: reg 0x10: [mem 0xfeaff400-0xfeaff4ff]
    [    0.135112] pci 0000:04:01.2: supports D1 D2
    [    0.135115] pci 0000:04:01.2: PME# supported from D0 D1 D2 D3hot D3cold
    [    0.135229] pci 0000:04:01.3: [1180:0592] type 00 class 0x088000
    [    0.135252] pci 0000:04:01.3: reg 0x10: [mem 0xfeaff000-0xfeaff0ff]
    [    0.135352] pci 0000:04:01.3: supports D1 D2
    [    0.135355] pci 0000:04:01.3: PME# supported from D0 D1 D2 D3hot D3cold
    [    0.135506] pci 0000:00:1e.0: PCI bridge to [bus 04-05] (subtractive decode)
    [    0.135512] pci 0000:00:1e.0:   bridge window [io  0xd000-0xdfff]
    [    0.135518] pci 0000:00:1e.0:   bridge window [mem 0xfe200000-0xfeafffff]
    [    0.135527] pci 0000:00:1e.0:   bridge window [mem 0xddf00000-0xdfefffff 64bit pref]
    [    0.135531] pci 0000:00:1e.0:   bridge window [io  0x0000-0xffff] (subtractive decode)
    [    0.135534] pci 0000:00:1e.0:   bridge window [mem 0x00000000-0xffffffff] (subtractive decode)
    [    0.135613] pci_bus 0000:05: busn_res: can not insert [bus 05-ff] under [bus 04-05] (conflicts with (null) [bus 04-05])
    [    0.135621] pci_bus 0000:05: busn_res: [bus 05-ff] end is updated to 08
    [    0.135625] pci_bus 0000:05: busn_res: can not insert [bus 05-08] under [bus 04-05] (conflicts with (null) [bus 04-05])
    [    0.135631] pci_bus 0000:05: [bus 05-08] partially hidden behind transparent bridge 0000:04 [bus 04-05]
    [    0.135637] pci 0000:00:1e.0: bridge has subordinate 05 but max busn 08
    [    0.135672] pci_bus 0000:00: on NUMA node 0
    [    0.136268] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 7 *11 12)
    [    0.136356] ACPI: PCI Interrupt Link [LNKB] (IRQs *3 4 5 6 7 12)
    [    0.136442] ACPI: PCI Interrupt Link [LNKC] (IRQs 3 *4 5 6 7 12)
    [    0.136527] ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 *5 6 7 12)
    [    0.136612] ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 7 12) *0, disabled.
    [    0.136699] ACPI: PCI Interrupt Link [LNKF] (IRQs *3 4 5 6 7 12)
    [    0.136784] ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 5 *6 7 12)
    [    0.136869] ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 *7 12)
    [    0.137186] ACPI: Enabled 4 GPEs in block 00 to 1F
    [    0.137287] ACPI : EC: GPE = 0x1c, I/O: command/status = 0x66, data = 0x62
    [    0.137418] vgaarb: setting as boot device: PCI:0000:01:00.0
    [    0.137418] vgaarb: device added: PCI:0000:01:00.0,decodes=io+mem,owns=io+mem,locks=none
    [    0.137418] vgaarb: loaded
    [    0.137418] vgaarb: bridge control possible 0000:01:00.0
    [    0.137418] PCI: Using ACPI for IRQ routing
    [    0.137418] PCI: pci_cache_line_size set to 64 bytes
    [    0.137418] e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
    [    0.137418] e820: reserve RAM buffer [mem 0x3ffd0000-0x3fffffff]
    [    0.137418] hpet clockevent registered
    [    0.137418] HPET: 3 timers in total, 0 timers will be used for per-cpu timer
    [    0.137418] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
    [    0.137418] hpet0: 3 comparators, 64-bit 14.318180 MHz counter
    [    0.140032] Switched to clocksource hpet
    [    0.150422] pnp: PnP ACPI init
    [    0.150448] ACPI: bus type PNP registered
    [    0.150592] system 00:00: [mem 0xfed13000-0xfed19fff] has been reserved
    [    0.150599] system 00:00: Plug and Play ACPI device, IDs PNP0c01 (active)
    [    0.150751] pnp 00:01: Plug and Play ACPI device, IDs PNP0b00 (active)
    [    0.150847] pnp 00:02: Plug and Play ACPI device, IDs PNP0303 PNP030b (active)
    [    0.150964] pnp 00:03: Plug and Play ACPI device, IDs SYN0a06 SYN0a00 SYN0002 PNP0f03 PNP0f13 PNP0f12 (active)
    [    0.151213] system 00:04: [io  0x04d0-0x04d1] has been reserved
    [    0.151219] system 00:04: [io  0x0800-0x087f] could not be reserved
    [    0.151223] system 00:04: [io  0x0480-0x04bf] has been reserved
    [    0.151227] system 00:04: [mem 0xfed1c000-0xfed1ffff] has been reserved
    [    0.151231] system 00:04: [mem 0xfed20000-0xfed8ffff] has been reserved
    [    0.151236] system 00:04: Plug and Play ACPI device, IDs PNP0c02 (active)
    [    0.151378] system 00:05: [mem 0xffb80000-0xffbfffff] has been reserved
    [    0.151383] system 00:05: [mem 0xfff80000-0xffffffff] has been reserved
    [    0.151387] system 00:05: Plug and Play ACPI device, IDs PNP0c02 (active)
    [    0.151514] system 00:06: [mem 0xffc00000-0xfff7ffff] has been reserved
    [    0.151519] system 00:06: Plug and Play ACPI device, IDs PNP0c02 (active)
    [    0.151655] system 00:07: [io  0x0400-0x041f] has been reserved
    [    0.151660] system 00:07: [mem 0xfec00000-0xfec00fff] could not be reserved
    [    0.151664] system 00:07: [mem 0xfee00000-0xfee00fff] has been reserved
    [    0.151668] system 00:07: [mem 0xfec10000-0xfec17fff] has been reserved
    [    0.151672] system 00:07: [mem 0xfec28000-0xfec2ffff] has been reserved
    [    0.151677] system 00:07: Plug and Play ACPI device, IDs PNP0c02 (active)
    [    0.151798] system 00:08: [mem 0xe0000000-0xe3ffffff] has been reserved
    [    0.151803] system 00:08: Plug and Play ACPI device, IDs PNP0c02 (active)
    [    0.152147] system 00:09: [mem 0x00000000-0x0009ffff] could not be reserved
    [    0.152152] system 00:09: [mem 0x000c0000-0x000cffff] could not be reserved
    [    0.152156] system 00:09: [mem 0x000e0000-0x000fffff] could not be reserved
    [    0.152160] system 00:09: [mem 0x00100000-0x3fffffff] could not be reserved
    [    0.152164] system 00:09: Plug and Play ACPI device, IDs PNP0c01 (active)
    [    0.152374] pnp: PnP ACPI: found 10 devices
    [    0.152377] ACPI: bus type PNP unregistered
    [    0.152385] PnPBIOS: Disabled by ACPI PNP
    [    0.191376] pci 0000:00:1c.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000
    [    0.191389] pci 0000:00:1c.3: bridge window [io  0x1000-0x0fff] to [bus 03] add_size 1000
    [    0.191394] pci 0000:00:1c.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000
    [    0.191417] pci 0000:00:1f.0: BAR 13: [io  0x0800-0x087f] has bogus alignment
    [    0.191422] pci 0000:00:1c.0: res[15]=[mem 0x00100000-0x000fffff 64bit pref] get_res_add_size add_size 200000
    [    0.191426] pci 0000:00:1c.3: res[15]=[mem 0x00100000-0x000fffff 64bit pref] get_res_add_size add_size 200000
    [    0.191430] pci 0000:00:1c.3: res[13]=[io  0x1000-0x0fff] get_res_add_size add_size 1000
    [    0.191440] pci 0000:00:1c.0: BAR 15: assigned [mem 0x40000000-0x401fffff 64bit pref]
    [    0.191445] pci 0000:00:1c.3: BAR 15: assigned [mem 0x40200000-0x403fffff 64bit pref]
    [    0.191451] pci 0000:00:1c.3: BAR 13: assigned [io  0x1000-0x1fff]
    [    0.191455] pci 0000:00:01.0: PCI bridge to [bus 01]
    [    0.191461] pci 0000:00:01.0:   bridge window [mem 0xf9f00000-0xfdffffff]
    [    0.191466] pci 0000:00:01.0:   bridge window [mem 0xbdf00000-0xddefffff 64bit pref]
    [    0.191472] pci 0000:00:1c.0: PCI bridge to [bus 02]
    [    0.191477] pci 0000:00:1c.0:   bridge window [io  0xc000-0xcfff]
    [    0.191484] pci 0000:00:1c.0:   bridge window [mem 0xfe000000-0xfe0fffff]
    [    0.191490] pci 0000:00:1c.0:   bridge window [mem 0x40000000-0x401fffff 64bit pref]
    [    0.191499] pci 0000:00:1c.3: PCI bridge to [bus 03]
    [    0.191503] pci 0000:00:1c.3:   bridge window [io  0x1000-0x1fff]
    [    0.191511] pci 0000:00:1c.3:   bridge window [mem 0xfe100000-0xfe1fffff]
    [    0.191517] pci 0000:00:1c.3:   bridge window [mem 0x40200000-0x403fffff 64bit pref]
    [    0.191528] pci 0000:04:01.0: res[15]=[mem 0x04000000-0x03ffffff pref] get_res_add_size add_size 4000000
    [    0.191532] pci 0000:04:01.0: res[16]=[mem 0x04000000-0x03ffffff] get_res_add_size add_size 4000000
    [    0.191536] pci 0000:04:01.0: res[13]=[io  0x0100-0x00ff] get_res_add_size add_size 100
    [    0.191540] pci 0000:04:01.0: res[14]=[io  0x0100-0x00ff] get_res_add_size add_size 100
    [    0.191548] pci 0000:04:01.0: BAR 0: assigned [mem 0x44000000-0x44000fff]
    [    0.191558] pci 0000:04:01.0: BAR 15: assigned [mem 0xe4000000-0xe7ffffff pref]
    [    0.191566] pci 0000:04:01.0: BAR 16: assigned [mem 0x48000000-0x4bffffff]
    [    0.191569] pci 0000:04:01.0: BAR 13: assigned [io  0xd000-0xd0ff]
    [    0.191573] pci 0000:04:01.0: BAR 14: assigned [io  0xd400-0xd4ff]
    [    0.191578] pci 0000:04:01.0: CardBus bridge to [bus 05-08]
    [    0.191581] pci 0000:04:01.0:   bridge window [io  0xd000-0xd0ff]
    [    0.191587] pci 0000:04:01.0:   bridge window [io  0xd400-0xd4ff]
    [    0.191594] pci 0000:04:01.0:   bridge window [mem 0xe4000000-0xe7ffffff pref]
    [    0.191600] pci 0000:04:01.0:   bridge window [mem 0x48000000-0x4bffffff]
    [    0.191606] pci 0000:00:1e.0: PCI bridge to [bus 04-05]
    [    0.191610] pci 0000:00:1e.0:   bridge window [io  0xd000-0xdfff]
    [    0.191618] pci 0000:00:1e.0:   bridge window [mem 0xfe200000-0xfeafffff]
    [    0.191624] pci 0000:00:1e.0:   bridge window [mem 0xddf00000-0xdfefffff 64bit pref]
    [    0.191633] pci_bus 0000:00: resource 4 [io  0x0000-0xffff]
    [    0.191636] pci_bus 0000:00: resource 5 [mem 0x00000000-0xffffffff]
    [    0.191640] pci_bus 0000:01: resource 1 [mem 0xf9f00000-0xfdffffff]
    [    0.191644] pci_bus 0000:01: resource 2 [mem 0xbdf00000-0xddefffff 64bit pref]
    [    0.191648] pci_bus 0000:02: resource 0 [io  0xc000-0xcfff]
    [    0.191651] pci_bus 0000:02: resource 1 [mem 0xfe000000-0xfe0fffff]
    [    0.191655] pci_bus 0000:02: resource 2 [mem 0x40000000-0x401fffff 64bit pref]
    [    0.191658] pci_bus 0000:03: resource 0 [io  0x1000-0x1fff]
    [    0.191662] pci_bus 0000:03: resource 1 [mem 0xfe100000-0xfe1fffff]
    [    0.191665] pci_bus 0000:03: resource 2 [mem 0x40200000-0x403fffff 64bit pref]
    [    0.191669] pci_bus 0000:04: resource 0 [io  0xd000-0xdfff]
    [    0.191672] pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfeafffff]
    [    0.191676] pci_bus 0000:04: resource 2 [mem 0xddf00000-0xdfefffff 64bit pref]
    [    0.191679] pci_bus 0000:04: resource 4 [io  0x0000-0xffff]
    [    0.191683] pci_bus 0000:04: resource 5 [mem 0x00000000-0xffffffff]
    [    0.191686] pci_bus 0000:05: resource 0 [io  0xd000-0xd0ff]
    [    0.191690] pci_bus 0000:05: resource 1 [io  0xd400-0xd4ff]
    [    0.191693] pci_bus 0000:05: resource 2 [mem 0xe4000000-0xe7ffffff pref]
    [    0.191697] pci_bus 0000:05: resource 3 [mem 0x48000000-0x4bffffff]
    [    0.191767] NET: Registered protocol family 2
    [    0.192124] TCP established hash table entries: 8192 (order: 3, 32768 bytes)
    [    0.192153] TCP bind hash table entries: 8192 (order: 4, 65536 bytes)
    [    0.192201] TCP: Hash tables configured (established 8192 bind 8192)
    [    0.192243] TCP: reno registered
    [    0.192247] UDP hash table entries: 512 (order: 2, 16384 bytes)
    [    0.192260] UDP-Lite hash table entries: 512 (order: 2, 16384 bytes)
    [    0.192344] NET: Registered protocol family 1
    [    0.193227] pci 0000:01:00.0: Video device with shadowed ROM
    [    0.193255] PCI: CLS 32 bytes, default 64
    [    0.193326] Unpacking initramfs...
    [    0.636809] Freeing initrd memory: 14248K (f641c000 - f7206000)
    [    0.636932] Simple Boot Flag at 0x52 set to 0x1
    [    0.637188] microcode: CPU0 sig=0x6e8, pf=0x20, revision=0x39
    [    0.637202] microcode: CPU1 sig=0x6e8, pf=0x20, revision=0x39
    [    0.637332] microcode: Microcode Update Driver: v2.00 <tigran@aivazian.fsnet.co.uk>, Peter Oruba
    [    0.637830] futex hash table entries: 512 (order: 3, 32768 bytes)
    [    0.637891] audit: initializing netlink subsys (disabled)
    [    0.637921] audit: type=2000 audit(1430292856.636:1): initialized
    [    0.638415] HugeTLB registered 2 MB page size, pre-allocated 0 pages
    [    0.638441] zbud: loaded
    [    0.638594] VFS: Disk quotas dquot_6.5.2
    [    0.638619] Dquot-cache hash table entries: 1024 (order 0, 4096 bytes)
    [    0.638709] msgmni has been set to 1738
    [    0.639274] alg: No test for stdrng (krng)
    [    0.639313] bounce: pool size: 64 pages
    [    0.639337] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 252)
    [    0.639412] io scheduler noop registered
    [    0.639416] io scheduler deadline registered
    [    0.639478] io scheduler cfq registered (default)
    [    0.639764] pcieport 0000:00:01.0: irq 40 for MSI/MSI-X
    [    0.639943] pcieport 0000:00:1c.0: irq 41 for MSI/MSI-X
    [    0.640083] pcieport 0000:00:1c.3: enabling device (0106 -> 0107)
    [    0.640182] pcieport 0000:00:1c.3: irq 42 for MSI/MSI-X
    [    0.640345] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
    [    0.640380] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
    [    0.640418] intel_idle: does not run on family 6 model 14
    [    0.640488] GHES: HEST is not enabled!
    [    0.640520] isapnp: Scanning for PnP cards...
    [    0.994020] isapnp: No Plug & Play device found
    [    0.994151] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
    [    0.994901] Linux agpgart interface v0.103
    [    0.995342] i8042: PNP: PS/2 Controller [PNP0303:PS2K,PNP0f03:PS2M] at 0x60,0x64 irq 1,12
    [    0.997944] i8042: Detected active multiplexing controller, rev 1.1
    [    0.999106] serio: i8042 KBD port at 0x60,0x64 irq 1
    [    0.999114] serio: i8042 AUX0 port at 0x60,0x64 irq 12
    [    0.999172] serio: i8042 AUX1 port at 0x60,0x64 irq 12
    [    0.999222] serio: i8042 AUX2 port at 0x60,0x64 irq 12
    [    0.999272] serio: i8042 AUX3 port at 0x60,0x64 irq 12
    [    0.999515] mousedev: PS/2 mouse device common for all mice
    [    0.999601] rtc_cmos 00:01: RTC can wake from S4
    [    0.999790] rtc_cmos 00:01: rtc core: registered rtc_cmos as rtc0
    [    0.999823] rtc_cmos 00:01: alarms up to one month, 114 bytes nvram, hpet irqs
    [    0.999848] ledtrig-cpu: registered to indicate activity on CPUs
    [    1.000458] TCP: cubic registered
    [    1.000508] NET: Registered protocol family 10
    [    1.000862] mip6: Mobile IPv6
    [    1.000868] NET: Registered protocol family 17
    [    1.000876] mpls_gso: MPLS GSO support
    [    1.001156] Using IPI No-Shortcut mode
    [    1.001385] registered taskstats version 1
    [    1.002006] rtc_cmos 00:01: setting system clock to 2015-04-29 07:34:17 UTC (1430292857)
    [    1.002092] PM: Hibernation image not present or could not be loaded.
    [    1.002512] Freeing unused kernel memory: 656K (c166d000 - c1711000)
    [    1.002573] Write protecting the kernel text: 4600k
    [    1.002638] Write protecting the kernel read-only data: 1452k
    [    1.002640] NX-protecting the kernel data: 3592k
    [    1.016109] systemd-udevd[60]: starting version 215
    [    1.016691] random: systemd-udevd urandom read with 0 bits of entropy available
    [    1.030544] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
    [    1.032573] ACPI: bus type USB registered
    [    1.032627] usbcore: registered new interface driver usbfs
    [    1.032648] usbcore: registered new interface driver hub
    [    1.037964] usbcore: registered new device driver usb
    [    1.039523] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
    [    1.040450] uhci_hcd: USB Universal Host Controller Interface driver
    [    1.040676] uhci_hcd 0000:00:1d.0: UHCI Host Controller
    [    1.040689] uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 1
    [    1.040701] uhci_hcd 0000:00:1d.0: detected 2 ports
    [    1.040749] uhci_hcd 0000:00:1d.0: irq 23, io base 0x0000ec00
    [    1.041116] usb usb1: New USB device found, idVendor=1d6b, idProduct=0001
    [    1.041120] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
    [    1.041123] usb usb1: Product: UHCI Host Controller
    [    1.041126] usb usb1: Manufacturer: Linux 3.16.0-4-686-pae uhci_hcd
    [    1.041129] usb usb1: SerialNumber: 0000:00:1d.0
    [    1.041300] ACPI: Fan [FN00] (off)
    [    1.041400] hub 1-0:1.0: USB hub found
    [    1.041492] ehci-pci: EHCI PCI platform driver
    [    1.041568] hub 1-0:1.0: 2 ports detected
    [    1.041931] uhci_hcd 0000:00:1d.1: UHCI Host Controller
    [    1.041942] uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 2
    [    1.041953] uhci_hcd 0000:00:1d.1: detected 2 ports
    [    1.041996] uhci_hcd 0000:00:1d.1: irq 19, io base 0x0000e880
    [    1.042064] usb usb2: New USB device found, idVendor=1d6b, idProduct=0001
    [    1.042068] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
    [    1.042071] usb usb2: Product: UHCI Host Controller
    [    1.042074] usb usb2: Manufacturer: Linux 3.16.0-4-686-pae uhci_hcd
    [    1.042077] usb usb2: SerialNumber: 0000:00:1d.1
    [    1.042782] hub 2-0:1.0: USB hub found
    [    1.042798] hub 2-0:1.0: 2 ports detected
    [    1.043090] uhci_hcd 0000:00:1d.2: UHCI Host Controller
    [    1.043100] uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 3
    [    1.043109] uhci_hcd 0000:00:1d.2: detected 2 ports
    [    1.043145] uhci_hcd 0000:00:1d.2: irq 18, io base 0x0000e800
    [    1.043212] usb usb3: New USB device found, idVendor=1d6b, idProduct=0001
    [    1.043216] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
    [    1.043219] usb usb3: Product: UHCI Host Controller
    [    1.043222] usb usb3: Manufacturer: Linux 3.16.0-4-686-pae uhci_hcd
    [    1.043225] usb usb3: SerialNumber: 0000:00:1d.2
    [    1.043423] hub 3-0:1.0: USB hub found
    [    1.043433] hub 3-0:1.0: 2 ports detected
    [    1.043695] uhci_hcd 0000:00:1d.3: UHCI Host Controller
    [    1.043704] uhci_hcd 0000:00:1d.3: new USB bus registered, assigned bus number 4
    [    1.043882] uhci_hcd 0000:00:1d.3: detected 2 ports
    [    1.043920] uhci_hcd 0000:00:1d.3: irq 22, io base 0x0000e480
    [    1.044116] usb usb4: New USB device found, idVendor=1d6b, idProduct=0001
    [    1.044120] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
    [    1.044124] usb usb4: Product: UHCI Host Controller
    [    1.044127] usb usb4: Manufacturer: Linux 3.16.0-4-686-pae uhci_hcd
    [    1.044130] usb usb4: SerialNumber: 0000:00:1d.3
    [    1.044322] hub 4-0:1.0: USB hub found
    [    1.044332] hub 4-0:1.0: 2 ports detected
    [    1.045753] ehci-pci 0000:00:1d.7: EHCI Host Controller
    [    1.045764] ehci-pci 0000:00:1d.7: new USB bus registered, assigned bus number 5
    [    1.045783] ehci-pci 0000:00:1d.7: debug port 1
    [    1.049705] ehci-pci 0000:00:1d.7: cache line size of 32 is not supported
    [    1.049716] ehci-pci 0000:00:1d.7: irq 23, io mem 0xfebfbc00
    [    1.057749] r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded
    [    1.057761] r8169 0000:02:00.0: can't disable ASPM; OS doesn't have ASPM control
    [    1.060920] sdhci: Secure Digital Host Controller Interface driver
    [    1.060925] sdhci: Copyright(c) Pierre Ossman
    [    1.061302] sdhci-pci 0000:04:01.2: SDHCI controller found [1180:0822] (rev 17)
    [    1.061363] ehci-pci 0000:00:1d.7: USB 2.0 started, EHCI 1.00
    [    1.061749] r8169 0000:02:00.0: irq 43 for MSI/MSI-X
    [    1.062043] r8169 0000:02:00.0 eth0: RTL8168b/8111b at 0xf7e0c000, 00:17:31:cf:01:90, XID 18000000 IRQ 43
    [    1.062048] r8169 0000:02:00.0 eth0: jumbo features [frames: 4080 bytes, tx checksumming: ko]
    [    1.062600] sdhci-pci 0000:04:01.2: Will use DMA mode even though HW doesn't fully claim to support it.
    [    1.062608] mmc0: no vqmmc regulator found
    [    1.062611] mmc0: no vmmc regulator found
    [    1.062789] usb usb5: New USB device found, idVendor=1d6b, idProduct=0002
    [    1.062793] usb usb5: New USB device strings: Mfr=3, Product=2, SerialNumber=1
    [    1.062797] usb usb5: Product: EHCI Host Controller
    [    1.062800] usb usb5: Manufacturer: Linux 3.16.0-4-686-pae ehci_hcd
    [    1.062803] usb usb5: SerialNumber: 0000:00:1d.7
    [    1.063009] hub 5-0:1.0: USB hub found
    [    1.063021] hub 5-0:1.0: 8 ports detected
    [    1.063611] sdhci-pci 0000:04:01.2: Will use DMA mode even though HW doesn't fully claim to support it.
    [    1.066627] SCSI subsystem initialized
    [    1.067708] thermal LNXTHERM:00: registered as thermal_zone0
    [    1.067713] ACPI: Thermal Zone [THRM] (72 C)
    [    1.069821] libata version 3.00 loaded.
    [    1.072326] mmc0: SDHCI controller on PCI [0000:04:01.2] using DMA
    [    1.084133] hub 1-0:1.0: USB hub found
    [    1.084148] hub 1-0:1.0: 2 ports detected
    [    1.108131] hub 2-0:1.0: USB hub found
    [    1.108146] hub 2-0:1.0: 2 ports detected
    [    1.132108] hub 3-0:1.0: USB hub found
    [    1.132123] hub 3-0:1.0: 2 ports detected
    [    1.136061] firewire_ohci 0000:04:01.1: added OHCI v1.0 device as card 0, 4 IR + 4 IT contexts, quirks 0x11
    [    1.156100] hub 4-0:1.0: USB hub found
    [    1.156114] hub 4-0:1.0: 2 ports detected
    [    1.156422] ata_piix 0000:00:1f.1: version 2.13
    [    1.158183] scsi0 : ata_piix
    [    1.158363] scsi1 : ata_piix
    [    1.158458] ata1: PATA max UDMA/100 cmd 0x1f0 ctl 0x3f6 bmdma 0xffa0 irq 14
    [    1.158462] ata2: PATA max UDMA/100 cmd 0x170 ctl 0x376 bmdma 0xffa8 irq 15
    [    1.158659] ata2: port disabled--ignoring
    [    1.330176] ata1.00: ATA-8: SAMSUNG HM160HC, LQ100-10, max UDMA/100
    [    1.330181] ata1.00: 312581808 sectors, multi 16: LBA48 
    [    1.330189] ata1.01: ATAPI: HL-DT-ST DVDRAM GMA-4082N, HJ02, max UDMA/33
    [    1.346054] ata1.00: configured for UDMA/100
    [    1.360241] ata1.01: configured for UDMA/33
    [    1.362717] scsi 0:0:0:0: Direct-Access     ATA      SAMSUNG HM160HC  0-10 PQ: 0 ANSI: 5
    [    1.366616] scsi 0:0:1:0: CD-ROM            HL-DT-ST DVDRAM GMA-4082N HJ02 PQ: 0 ANSI: 5
    [    1.387353] sr0: scsi3-mmc drive: 24x/24x writer dvd-ram cd/rw xa/form2 cdda tray
    [    1.387360] sd 0:0:0:0: [sda] 312581808 512-byte logical blocks: (160 GB/149 GiB)
    [    1.387364] cdrom: Uniform CD-ROM driver Revision: 3.20
    [    1.387438] sd 0:0:0:0: [sda] Write Protect is off
    [    1.387443] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
    [    1.387476] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    [    1.387596] sr 0:0:1:0: Attached scsi CD-ROM sr0
    [    1.388407] sd 0:0:0:0: Attached scsi generic sg0 type 0
    [    1.388570] sr 0:0:1:0: Attached scsi generic sg1 type 5
    [    1.428050] usb 5-7: new high-speed USB device number 3 using ehci-pci
    [    1.512630]  sda: sda1 sda2 sda3 sda4 < sda5 sda6 sda7 >
    [    1.531400] sd 0:0:0:0: [sda] Attached SCSI disk
    [    1.578525] usb 5-7: New USB device found, idVendor=0402, idProduct=5602
    [    1.578534] usb 5-7: New USB device strings: Mfr=0, Product=1, SerialNumber=0
    [    1.578541] usb 5-7: Product: USB2.0 Camera
    [    1.636041] tsc: Refined TSC clocksource calibration: 1728.999 MHz
    [    1.636159] firewire_core 0000:04:01.1: created device fw0: GUID 00e01800035ca992, S400
    [    1.944044] usb 2-1: new low-speed USB device number 2 using uhci_hcd
    [    2.119049] usb 2-1: New USB device found, idVendor=046d, idProduct=c016
    [    2.119055] usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
    [    2.119059] usb 2-1: Product: Optical USB Mouse
    [    2.119062] usb 2-1: Manufacturer: Logitech
    [    2.123666] hidraw: raw HID events driver (C) Jiri Kosina
    [    2.137957] usbcore: registered new interface driver usbhid
    [    2.137961] usbhid: USB HID core driver
    [    2.139546] input: Logitech Optical USB Mouse as /devices/pci0000:00/0000:00:1d.1/usb2/2-1/2-1:1.0/0003:046D:C016.0001/input/input5
    [    2.139682] hid-generic 0003:046D:C016.0001: input,hidraw0: USB HID v1.10 Mouse [Logitech Optical USB Mouse] on usb-0000:00:1d.1-1/input0
    [    2.300612] PM: Starting manual resume from disk
    [    2.300620] PM: Hibernation image partition 8:7 present
    [    2.300622] PM: Looking for hibernation image.
    [    2.300938] PM: Image not found (code -22)
    [    2.300941] PM: Hibernation image not present or could not be loaded.
    [    2.479333] EXT4-fs (sda2): mounted filesystem with ordered data mode. Opts: (null)
    [    2.636235] Switched to clocksource tsc
    [    2.974821] random: nonblocking pool is initialized
    [    4.419355] systemd-udevd[302]: starting version 215
    [    6.603760] input: Lid Switch as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0D:00/input/input6
    [    6.604963] ACPI: Lid Switch [LID]
    [    6.605060] input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input7
    [    6.605065] ACPI: Power Button [PWRB]
    [    6.605151] input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input8
    [    6.605156] ACPI: Sleep Button [SLPB]
    [    6.605245] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input9
    [    6.605249] ACPI: Power Button [PWRF]
    [    6.760640] ACPI: AC Adapter [AC0] (on-line)
    [    6.809892] tsc: Marking TSC unstable due to TSC halts in idle
    [    6.809901] ACPI: acpi_idle registered with cpuidle
    [    6.810356] Switched to clocksource hpet
    [    6.816049] ACPI: Video Device [VGA] (multi-head: yes  rom: no  post: no)
    [    6.816251] input: Video Bus as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/LNXVIDEO:00/input/input10
    [    6.892442] ACPI: Battery Slot [BAT0] (battery present)
    [    6.976320] asus_laptop: Asus Laptop Support version 0.42
    [    6.976429] asus_laptop:   A6JC model detected
    [    6.979689] input: Asus Laptop extra buttons as /devices/platform/asus_laptop/input/input11
    [    7.103044] r592: driver successfully loaded
    [    7.112093] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
    [    7.214278] yenta_cardbus 0000:04:01.0: CardBus bridge found [1043:1237]
    [    7.340872] yenta_cardbus 0000:04:01.0: ISA IRQ mask 0x0cb8, PCI irq 17
    [    7.340884] yenta_cardbus 0000:04:01.0: Socket status: 30000006
    [    7.340895] pci_bus 0000:04: Raising subordinate bus# of parent bus (#04) from #05 to #08
    [    7.340912] yenta_cardbus 0000:04:01.0: pcmcia: parent PCI bridge window: [io  0xd000-0xdfff]
    [    7.340921] pcmcia_socket pcmcia_socket0: cs: IO port probe 0xd000-0xdfff:
    [    7.342343]  excluding 0xd000-0xd0ff 0xd3b0-0xd3df 0xd400-0xd4ff 0xd7b0-0xd7df
    [    7.346497] intel_rng: FWH not detected
    [    7.347619]  0xdbb0-0xdbdf 0xdfb0-0xdfdf
    [    7.349113] yenta_cardbus 0000:04:01.0: pcmcia: parent PCI bridge window: [mem 0xfe200000-0xfeafffff]
    [    7.349119] pcmcia_socket pcmcia_socket0: cs: memory probe 0xfe200000-0xfeafffff:
    [    7.349135]  excluding 0xfea70000-0xfeafffff
    [    7.349142] yenta_cardbus 0000:04:01.0: pcmcia: parent PCI bridge window: [mem 0xddf00000-0xdfefffff 64bit pref]
    [    7.349148] pcmcia_socket pcmcia_socket0: cs: memory probe 0xddf00000-0xdfefffff:
    [    7.349160]  excluding 0xddf00000-0xdfefffff
    [    7.404998] ACPI Warning: SystemIO range 0x00000828-0x0000082f conflicts with OpRegion 0x00000800-0x0000084f (GPIS) (20140424/utaddress-258)
    [    7.405017] ACPI Warning: SystemIO range 0x00000828-0x0000082f conflicts with OpRegion 0x00000800-0x0000084f (PMIO) (20140424/utaddress-258)
    [    7.405029] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
    [    7.405038] ACPI Warning: SystemIO range 0x000004b0-0x000004bf conflicts with OpRegion 0x00000480-0x000004bf (GPIO) (20140424/utaddress-258)
    [    7.405050] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
    [    7.405055] ACPI Warning: SystemIO range 0x00000480-0x000004af conflicts with OpRegion 0x00000480-0x000004bf (GPIO) (20140424/utaddress-258)
    [    7.405066] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
    [    7.405071] lpc_ich: Resource conflict(s) found affecting gpio_ich
    [    7.488273] leds_ss4200: no LED devices found
    [    7.612708] snd_hda_intel 0000:00:1b.0: irq 44 for MSI/MSI-X
    [    8.210120] sound hdaudioC0D0: autoconfig: line_outs=1 (0x14/0x0/0x0/0x0/0x0) type:hp
    [    8.210133] sound hdaudioC0D0:    speaker_outs=0 (0x0/0x0/0x0/0x0/0x0)
    [    8.210142] sound hdaudioC0D0:    hp_outs=0 (0x0/0x0/0x0/0x0/0x0)
    [    8.210148] sound hdaudioC0D0:    mono: mono_out=0x0
    [    8.210155] sound hdaudioC0D0:    dig-out=0x1e/0x0
    [    8.210161] sound hdaudioC0D0:    inputs:
    [    8.210168] sound hdaudioC0D0:      Mic=0x18
    [    8.210175] sound hdaudioC0D0:      Line=0x1a
    [    8.210182] sound hdaudioC0D0:      CD=0x1c
    [    8.300563] input: PC Speaker as /devices/platform/pcspkr/input/input16
    [    8.406778] cfg80211: Calling CRDA to update world regulatory domain
    [    8.562079] iTCO_vendor_support: vendor-support=0
    [    8.576384] [drm] Initialized drm 1.1.0 20060810
    [    8.580479] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.11
    [    8.580544] iTCO_wdt: Found a ICH7-M or ICH7-U TCO device (Version=2, TCOBASE=0x0860)
    [    8.580772] iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
    [    9.068836] input: HDA Digital PCBeep as /devices/pci0000:00/0000:00:1b.0/sound/card0/hdaudioC0D0/input12
    [    9.095385] pcmcia_socket pcmcia_socket0: cs: IO port probe 0x100-0x3af:
    [    9.096392]  excluding 0x170-0x177 0x1f0-0x1f7 0x370-0x377
    [    9.097096] pcmcia_socket pcmcia_socket0: cs: IO port probe 0x3e0-0x4ff:
    [    9.097336]  excluding 0x3f0-0x3f7 0x400-0x41f 0x480-0x4bf 0x4d0-0x4d7
    [    9.097560] pcmcia_socket pcmcia_socket0: cs: IO port probe 0x820-0x8ff:
    [    9.097756]  excluding 0x820-0x87f
    [    9.097945] pcmcia_socket pcmcia_socket0: cs: IO port probe 0xc00-0xcf7:
    [    9.098549]  clean.
    [    9.098573] pcmcia_socket pcmcia_socket0: cs: memory probe 0x0c0000-0x0fffff:
    [    9.098580]  excluding 0xc0000-0xcffff 0xe0000-0xfffff
    [    9.098611] pcmcia_socket pcmcia_socket0: cs: memory probe 0xa0000000-0xa0ffffff:
    [    9.098630]  clean.
    [    9.098651] pcmcia_socket pcmcia_socket0: cs: memory probe 0x60000000-0x60ffffff:
    [    9.098670]  clean.
    [    9.098690] pcmcia_socket pcmcia_socket0: cs: IO port probe 0xa00-0xaff:
    [    9.099314]  clean.
    [    9.107253] psmouse serio4: synaptics: Touchpad model: 1, fw: 6.1, id: 0xa3a0b3, caps: 0xa04713/0x10008/0x0, board id: 3655, fw id: 30712
    [    9.144469] input: SynPS/2 Synaptics TouchPad as /devices/platform/i8042/serio4/input/input17
    [    9.273433] iwl3945: Intel(R) PRO/Wireless 3945ABG/BG Network Connection driver for Linux, in-tree:s
    [    9.273443] iwl3945: Copyright(c) 2003-2011 Intel Corporation
    [    9.273541] iwl3945 0000:03:00.0: can't disable ASPM; OS doesn't have ASPM control
    [    9.327089] iwl3945 0000:03:00.0: Tunable channels: 13 802.11bg, 23 802.11a channels
    [    9.327102] iwl3945 0000:03:00.0: Detected Intel Wireless WiFi Link 3945ABG
    [    9.327183] iwl3945 0000:03:00.0: irq 45 for MSI/MSI-X
    [    9.464203] ieee80211 phy0: Selected rate control algorithm 'iwl-3945-rs'
    [    9.604366] wmi: Mapper loaded
    [    9.866869] device-mapper: uevent: version 1.0.3
    [    9.867405] device-mapper: ioctl: 4.27.0-ioctl (2013-10-30) initialised: dm-devel@redhat.com
    [   10.330846] nouveau  [  DEVICE][0000:01:00.0] BOOT0  : 0x046700a3
    [   10.330858] nouveau  [  DEVICE][0000:01:00.0] Chipset: G72 (NV46)
    [   10.330865] nouveau  [  DEVICE][0000:01:00.0] Family : NV40
    [   10.331024] nouveau  [   VBIOS][0000:01:00.0] checking PRAMIN for image...
    [   10.416722] nouveau  [   VBIOS][0000:01:00.0] ... appears to be valid
    [   10.416727] nouveau  [   VBIOS][0000:01:00.0] using image from PRAMIN
    [   10.416915] nouveau  [   VBIOS][0000:01:00.0] BIT signature found
    [   10.416920] nouveau  [   VBIOS][0000:01:00.0] version 05.72.22.41.92
    [   10.417246] nouveau 0000:01:00.0: irq 46 for MSI/MSI-X
    [   10.417260] nouveau  [     PMC][0000:01:00.0] MSI interrupts enabled
    [   10.417297] nouveau  [     PFB][0000:01:00.0] RAM type: DDR2
    [   10.417300] nouveau  [     PFB][0000:01:00.0] RAM size: 128 MiB
    [   10.417303] nouveau  [     PFB][0000:01:00.0]    ZCOMP: 0 tags
    [   10.467638] nouveau  [  PTHERM][0000:01:00.0] FAN control: none / external
    [   10.467653] nouveau  [  PTHERM][0000:01:00.0] fan management: automatic
    [   10.467657] nouveau  [  PTHERM][0000:01:00.0] internal sensor: yes
    [   10.487548] nouveau  [     CLK][0000:01:00.0] 20: core 100 MHz shader 100 MHz memory 270 MHz
    [   10.487555] nouveau  [     CLK][0000:01:00.0] 21: core 200 MHz shader 200 MHz memory 400 MHz
    [   10.487561] nouveau  [     CLK][0000:01:00.0] 22: core 350 MHz shader 350 MHz memory 600 MHz
    [   10.487572] nouveau  [     CLK][0000:01:00.0] --: core 199 MHz memory 391 MHz 
    [   10.487737] [TTM] Zone  kernel: Available graphics memory: 445276 kiB
    [   10.487740] [TTM] Zone highmem: Available graphics memory: 515840 kiB
    [   10.487742] [TTM] Initializing pool allocator
    [   10.487752] [TTM] Initializing DMA pool allocator
    [   10.488961] nouveau  [     DRM] VRAM: 124 MiB
    [   10.488967] nouveau  [     DRM] GART: 512 MiB
    [   10.488975] nouveau  [     DRM] TMDS table version 1.1
    [   10.488978] nouveau W[     DRM] TMDS table script pointers not stubbed
    [   10.488981] nouveau  [     DRM] DCB version 3.0
    [   10.488985] nouveau  [     DRM] DCB outp 00: 03005323 00000004
    [   10.488989] nouveau  [     DRM] DCB outp 01: 01010300 00000028
    [   10.488992] nouveau  [     DRM] DCB outp 02: 04026312 00000000
    [   10.488996] nouveau  [     DRM] DCB outp 03: 020333f1 0080c070
    [   10.488999] nouveau  [     DRM] DCB conn 00: 0000
    [   10.489003] nouveau  [     DRM] DCB conn 01: 0130
    [   10.489006] nouveau  [     DRM] DCB conn 02: 0210
    [   10.489009] nouveau  [     DRM] DCB conn 03: 0211
    [   10.489012] nouveau  [     DRM] DCB conn 04: 0213
    [   10.489015] nouveau  [     DRM] DCB conn 05: 0340
    [   10.489017] nouveau  [     DRM] DCB conn 06: 1431
    [   10.489227] nouveau  [     DRM] Saving VGA fonts
    [   10.550882] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
    [   10.550888] [drm] Driver supports precise vblank timestamp query.
    [   10.550896] nouveau  [     DRM] 0xD518: Parsing digital output script table
    [   10.604277] nouveau  [     DRM] MM: using M2MF for buffer copies
    [   10.604295] nouveau  [     DRM] Calling LVDS script 6:
    [   10.604301] nouveau  [     DRM] 0xD259: Parsing digital output script table
    [   10.882197] nouveau  [     DRM] Setting dpms mode 3 on TV encoder (output 3)
    [   10.972372] nouveau  [     DRM] allocated 1280x800 fb: 0x9000, bo f4393400
    [   10.972616] fbcon: nouveaufb (fb0) is primary device
    [   10.994186] nouveau  [     DRM] Calling LVDS script 2:
    [   10.994193] nouveau  [     DRM] 0xD2BA: Parsing digital output script table
    [   11.125216] nouveau  [     DRM] Calling LVDS script 5:
    [   11.125221] nouveau  [     DRM] 0xD24A: Parsing digital output script table
    [   11.127047] Console: switching to colour frame buffer device 160x50
    [   11.128767] nouveau 0000:01:00.0: fb0: nouveaufb frame buffer device
    [   11.128771] nouveau 0000:01:00.0: registered panic notifier
    [   11.148063] [drm] Initialized nouveau 1.1.2 20120801 for 0000:01:00.0 on minor 0
    [   11.898936] media: Linux media interface: v0.10
    [   11.933580] Linux video capture interface: v2.00
    [   11.969990] gspca_main: v2.14.0 registered
    [   12.004769] gspca_main: ALi m5602-2.14.0 probing 0402:5602
    [   12.084788] gspca_m5602: Detected an ov9650 sensor
    [   12.152785] usbcore: registered new interface driver ALi m5602
    [   42.852123] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
    [   42.852163] ata1.00: failed command: READ DMA
    [   42.852194] ata1.00: cmd c8/00:08:70:7f:8f/00:00:00:00:00/e2 tag 0 dma 4096 in
             res 40/00:fe:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
    [   42.852234] ata1.00: status: { DRDY }
    [   47.892052] ata1: link is slow to respond, please be patient (ready=0)
    [   52.876053] ata1: device not ready (errno=-16), forcing hardreset
    [   52.876070] ata1: soft resetting link
    [   53.066143] ata1.00: configured for UDMA/100
    [   53.080292] ata1.01: configured for UDMA/33
    [   53.082610] ata1.00: device reported invalid CHS sector 0
    [   53.082628] ata1: EH complete
    [   53.143338] EXT4-fs (sda2): re-mounted. Opts: (null)
    [   64.381215] EXT4-fs (sda2): re-mounted. Opts: errors=remount-ro
    [   64.670963] device-mapper: multipath: version 1.7.0 loaded
    [   64.722874] lp: driver loaded but no devices found
    [   64.745387] ppdev: user-space parallel port driver
    [   64.822392] device-mapper: multipath service-time: version 0.2.0 loaded
    [   64.822819] device-mapper: table: 254:0: multipath: error getting device
    [   64.822863] device-mapper: ioctl: error adding target to table
    [   64.861754] loop: module loaded
    [   64.916614] fuse init (API version 7.23)
    [   66.148809] EXT4-fs (sda6): mounting ext3 file system using the ext4 subsystem
    [   66.184744] EXT4-fs (sda6): mounted filesystem with ordered data mode. Opts: user_xattr
    [   66.212129] FAT-fs (sda5): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
    [   66.268792] FAT-fs (sda5): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.
    [   66.303617] REISERFS (device sda3): found reiserfs format "3.6" with standard journal
    [   66.303636] REISERFS (device sda3): using ordered data mode
    [   66.303640] reiserfs: using flush barriers
    [   66.304044] REISERFS (device sda3): journal params: device sda3, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
    [   66.304707] REISERFS (device sda3): checking transaction log (sda3)
    [   66.381986] REISERFS (device sda3): Using r5 hash to sort names
    [   67.085371] r8169 0000:02:00.0 eth0: link down
    [   67.085388] r8169 0000:02:00.0 eth0: link down
    [   67.085421] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
    [   67.980519] RPC: Registered named UNIX socket transport module.
    [   67.980524] RPC: Registered udp transport module.
    [   67.980527] RPC: Registered tcp transport module.
    [   67.980529] RPC: Registered tcp NFSv4.1 backchannel transport module.
    [   67.993848] FS-Cache: Loaded
    [   68.030294] FS-Cache: Netfs 'nfs' registered for caching
    [   68.084223] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
    [   68.810325] r8169 0000:02:00.0 eth0: link up
    [   68.810346] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
    [   69.052096] input: ACPI Virtual Keyboard Device as /devices/virtual/input/input18
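    The key lines in this log are the pair at about 64.8 s: "device-mapper: table: 254:0: multipath: error getting device" followed by "device-mapper: ioctl: error adding target to table". The multipath target reports "error getting device" when it cannot open one of the path devices listed in the table it was asked to load (for example because that device node does not exist or cannot be opened), so the map 254:0 is never created. As a generic diagnostic sketch (nothing below is specific to the machine that produced this log), one can compare what the multipath tools try to assemble against the block devices that actually exist:

    Code:

    # verbose dry run of path discovery: shows which devices are examined,
    # which are blacklisted and how WWIDs are grouped into maps
    multipath -d -v3

    # list the block devices the kernel currently knows about
    lsblk

    # show the device-mapper tables that were loaded successfully
    dmsetup table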

    • #1

    Hello,
    since the last upgrade of my servers to the latest kernel, 5.3.18-3-pve, multipath has stopped working. Everything works great on 5.3.10-1-pve and I can manage my cluster storage with LVM on my SAN. Did anything change in the latest kernel? Is there an option I have to add to make it work?

    When booting with 5.3.18-3-pve, multipath -ll gives me blank output and I get this error during boot:

    device-mapper: table: 253:9: multipath: error getting device

    and during boot with the working kernel I get:

    [ 17.617230] scsi 17:0:0:7: Attached scsi generic sg10 type 0
    [ 17.617233] scsi 17:0:0:7: Embedded Enclosure Device
    [ 17.617604] scsi 17:0:0:7: Power-on or device reset occurred
    [ 17.617617] scsi 17:0:0:7: Failed to get diagnostic page 0x1
    [ 17.617671] scsi 17:0:0:7: Failed to bind enclosure -19

    but everything works out anyway.

    My configuration is:

    2x Lenovo servers with QLogic Corp. ISP2722-based 16/32Gb FC HBAs
    1x Lenovo SAN DE2000H

    pve-manager/6.1-8/806edfe1 (running kernel: 5.3.10-1-pve) Working kernel!!!

    multipath.conf

    defaults {
        find_multipaths no
        polling_interval 2
        path_selector "round-robin 0"
        path_grouping_policy multibus
        uid_attribute ID_SERIAL
        rr_min_io 100
        failback immediate
        no_path_retry queue
        user_friendly_names yes
    }

    blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z][[0-9]*]"
        devnode "^sda[[0-9]*]"
        devnode "^cciss!c[0-9]d[0-9]*"
    }

    multipaths {
        multipath {
            wwid 36d039ea00006a630000001855ea5d2ff
            alias mpath1
        }
        multipath {
            wwid 36d039ea00006a2480000014d5ea1c997
            alias mpath2
        }
        multipath {
            wwid 36d039ea00006a2480000018e5ea610f2
            alias mpath3
        }
    }
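    As a quick sanity check, commands along these lines show how the multipath tools interpret a configuration like the one above (an illustrative sketch; the exact output depends on the multipath-tools version):

    Code:

    # dump the configuration the tools will actually use
    # (built-in defaults merged with /etc/multipath.conf)
    multipath -t

    # ask the running daemon which paths it has discovered
    multipathd -k"show paths"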

    Stoiko Ivanov


    • #2

    You could try installing the pve-kernel-5.4 meta package and see whether multipath works with that kernel.
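    If you want to try it, a minimal sketch of the upgrade, assuming the standard Proxmox 6 repositories are configured:

    Code:

    apt update
    apt install pve-kernel-5.4
    reboot
    # after rebooting, confirm the running kernel
    uname -r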

    • #3

    It’s a production server… do you think it is safe to play with 5.4?

    Stoiko Ivanov


    • #4

    It depends on your workload and your hardware, I guess. It has been out for almost 2 months now and we haven’t heard of any serious regressions compared to 5.3 (and it will become the new default kernel within the next few weeks).
    Maybe you can try it during one of your next upgrade windows.

    • #5

    It doesn’t work

    [Fri May 8 00:05:32 2020] device-mapper: multipath round-robin: version 1.2.0 loaded
    [Fri May 8 00:05:32 2020] device-mapper: table: 253:5: multipath: error getting device
    [Fri May 8 00:05:32 2020] device-mapper: ioctl: error adding target to table

    • #6

    Still the same error on the latest version (5.4.78-2-pve). Any news for my FC controller?

    Last edited: Feb 21, 2021

    • #7

    No news for your specific issue. However, this FC array is an OEM-ed NetApp appliance sold under the Lenovo umbrella, from what I can tell. It could be helpful to search for similar error messages in the context of NetApp appliances. Most FC arrays I’ve worked with in the past needed some tuning based on vendor recommendations; I had a similar experience with IBM V7000 storage systems, where some multipathing settings had to be adapted to get them working reliably.

    For example, some ONTAP-based Lenovo appliances recommend configuring the blacklisting correctly:
    i.e. https://thinksystem.lenovofiles.com…stem_storage_de_himg_11.60.2/IC_pdf_file.html
    Also, searching for the error message turns up Red Hat KB articles with similar recommendations (https://access.redhat.com/solutions/38538).

    Usually these kinds of storage systems are only available under a support contract, so it might be worth bumping the storage vendor too.

    • #8

    Thank you for your response. As you say, it’s a Lenovo card connected to a Lenovo DE storage. I think I can’t get support from the vendor because Debian/Proxmox is not a certified OS for this kind of hardware. Anyway, I will try to open a ticket. I will look at your links in the meantime…

    • #9

    I should add that the Lenovo documentation (previous post) has an article called "Configuring DM-Multipath" that I can’t link to directly. It is quite likely that your boot drive is internal and thus not multipathed; try to identify the drives the host sees and then blacklist all devices that are not located on the Lenovo SAN. It might also be worth testing "find_multipaths yes" instead of no. But please do read the man pages and test this on a node that has no running VMs first.
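    A minimal sketch of such a blacklist, assuming the internal boot drive should be excluded and only the SAN LUNs multipathed; the vendor/product strings are assumptions and must be checked against what the host actually reports:

    Code:

    blacklist {
        # blacklist everything by default ...
        devnode ".*"
    }
    blacklist_exceptions {
        # ... and only let the SAN LUNs through
        # ("LENOVO"/"DE_Series" are example strings; verify with
        # cat /sys/block/sdX/device/vendor and .../model)
        device {
            vendor "LENOVO"
            product "DE_Series"
        }
    }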

    All in all: I unfortunately know this situation very well, where storage vendors officially only support RHEL, SUSE, VMware and Windows. SUSE might be worth a shot, since SLES 15 SP2 ships with kernel 5.3. If you can "show proof" that the same issue appears on a supported distribution, they usually should help you. But I think the multipath blacklisting is worth a shot first.

    • #10

    I resolved it myself.

    I downloaded http://filedownloads.cavium.com/Files/TempDownlods/98262/qla2xxx-src-v10.02.04.00-k.tar.gz,

    installed the kernel headers and build-essential,
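    (The actual build commands are not in the post; for an out-of-tree qla2xxx source tree the sequence typically looks roughly like this sketch, where the unpacked directory name is an assumption and the tarball may ship its own Makefile or build script.)

    Code:

    tar xzf qla2xxx-src-v10.02.04.00-k.tar.gz
    cd qla2xxx-src-v10.02.04.00-k          # directory name may differ
    # build against the running kernel and install the module
    make -C /lib/modules/$(uname -r)/build M=$PWD modules
    make -C /lib/modules/$(uname -r)/build M=$PWD modules_install
    depmod -a
    # make sure the new module also ends up in the initramfs
    update-initramfs -u -k all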

    and built and installed the modules. Then I removed multipath:

    Code:

    apt purge multipath-tools
    
    rm -fr /etc/multipath*
    
    apt purge multipath-tools-boot

    I blacklisted everything in lvm.conf except my local drive (sda*) and rebooted.
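    The exact lvm.conf change is not shown; the usual mechanism is a global_filter in /etc/lvm/lvm.conf that accepts only the local disk and the multipath maps, roughly like this sketch (the patterns are assumptions to adapt):

    Code:

    # /etc/lvm/lvm.conf
    devices {
        # accept the internal disk and the multipath maps, reject everything else
        global_filter = [ "a|^/dev/sda|", "a|^/dev/mapper/mpath|", "r|.*|" ]
    }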

    After the reboot I installed multipath again and reconfigured it:

    Code:

    apt install multipath-tools
    
    multipath -v2 -ll
    
    update-initramfs -u -k all
    
    update-grub2
    
    systemctl mask systemd-udev-settle.service
    
    reboot

    • #11

    Could you elaborate on why this led to the solution? It sounds interesting, at first sight, that downloading the driver source from QLogic fixed your issue. If there is a source for your findings, it could be helpful for someone else in the future.

    Side note: QLogic was bought by Cavium in 2016, which itself was bought by Marvell in 2018; that is why the download points to Cavium…

    • #12

    I didn’t find much on the net, I just had to use my head ;)
    The source modules did the trick. I think the qla2xxx source used in kernels newer than 5.3.10 is different from the previous one. I can’t say for sure, but with the modules compiled from the Cavium download my controller started working again. With the original module (the one shipped with the pve kernel) I can’t see any disks from the storage. I don’t have time to dig further into the kernel source; I just have to remember to recompile the modules at every kernel upgrade if it doesn’t work out of the box!
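    One way to avoid recompiling by hand after every kernel upgrade is DKMS; a minimal sketch, assuming the driver source is unpacked to /usr/src/qla2xxx-10.02.04.00 and ships a usable Makefile:

    Code:

    apt install dkms
    # /usr/src/qla2xxx-10.02.04.00/dkms.conf (minimal example):
    #   PACKAGE_NAME="qla2xxx"
    #   PACKAGE_VERSION="10.02.04.00"
    #   BUILT_MODULE_NAME[0]="qla2xxx"
    #   DEST_MODULE_LOCATION[0]="/updates"
    #   AUTOINSTALL="yes"
    dkms add -m qla2xxx -v 10.02.04.00
    dkms build -m qla2xxx -v 10.02.04.00
    dkms install -m qla2xxx -v 10.02.04.00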

    • #13

    Can you tell me for what use case you are using the Lenovo SAN DE2000H?
    Furthermore, is it possible to use ZFS or Ceph on the storage pools of the Lenovo DE2000H?
    I would prefer not to use hardware RAID anymore, also not on storage boxes like the Lenovo DE2000H.
