wipefs: error: /dev/sdb: probing initialization failed: Device or resource busy


I am trying to format a drive using the Ubuntu Disks utility. When I select and try to format the drive I get

Error wiping device. Command-line wipefs -a "/dev/sdb" exited with non-zero exit status 1: wipefs: error: /dev/sdb: probing initialization failed: Device or resource busy (udisks-error-quark, 0)

Nothing I know of is using it, and I am doing this from a Live CD boot. What should I do?

asked Jun 18, 2017 at 12:54 by Steve

Use the -f (force) option:

wipefs -af /dev/sdb
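Forcing bypasses the exclusive-open check that produced the "busy" error, so it is worth confirming first that nothing is actually using the disk. A minimal check sketch, assuming the target is /dev/sdb:

lsblk /dev/sdb                # any entry in the MOUNTPOINT column means a partition is still mounted
sudo fuser -vm /dev/sdb*      # lists processes that keep the device or its partitions busy
cat /proc/mdstat              # shows leftover RAID arrays that may still hold the disk
sudo wipefs -af /dev/sdb      # force the wipe once nothing is using the device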

answered Apr 25, 2018 at 14:43 by Bogdan Adrian Velica

Unmount the disk and all the partitions on it:

sudo umount /dev/sdb*

Then retry the wipe.
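A minimal end-to-end sketch of that approach, assuming the target disk is /dev/sdb:

sudo umount /dev/sdb?*        # unmount every partition; "not mounted" messages are harmless
lsblk /dev/sdb                # confirm the MOUNTPOINT column is now empty
sudo wipefs -a /dev/sdb       # retry the wipe without forcing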

Use a GParted live CD (or some other distribution containing GParted) to wipe the partition.


Just wanted to add, in my case I had attached 4 drives that were previously in a RAID on that machine. I had never stopped the existing RAID after disconnecting the drives, so I had to:

mdadm --stop /dev/mdX, replacing X with whatever your previous RAID was.
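A minimal sketch of that cleanup, assuming the leftover array shows up as md127 and the member disk is /dev/sdb (both names are examples):

cat /proc/mdstat                        # lists assembled arrays, e.g. "md127 : inactive sdb[0] ..."
sudo mdadm --stop /dev/md127            # stop the leftover array
sudo mdadm --zero-superblock /dev/sdb   # optional and destructive: erase the old RAID metadata on the member disk
sudo wipefs -a /dev/sdb                 # the wipe should now succeed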

answered Oct 14, 2021 at 0:22 by NessDan

You have to unmount the drive. Run lsblk to see where the drive is mounted, then umount it. For example, when trying to run wipefs on sdc:

lsblk output:

sda           8:0    0 476.9G  0 disk 
├─sda1        8:1    0 476.4G  0 part 
└─sda2        8:2    0   523M  0 part 
sdb           8:16   0 698.6G  0 disk 
└─sdb1        8:17   0 698.6G  0 part 
sdc           8:32   1  28.9G  0 disk 
├─sdc1        8:33   1   748M  0 part /run/media/user/ARCH_202109
└─sdc2        8:34   1    84M  0 part 

then I had to run:
sudo umount /run/media/user/ARCH_202109

and then I could run wipefs --all /dev/sdc

answered Jul 20, 2022 at 22:32 by aurelia

verify/debian-unstable
Ooops, it happened again


Traceback (most recent call last):
  File "check-storage-luks", line 54, in testLuks
    "mount_point": mount_point_secret })
  File "storagelib.py", line 136, in dialog
    self.dialog_wait_close()
  File "storagelib.py", line 121, in dialog_wait_close
    self.browser.wait_not_present('#dialog')
  File "testlib.py", line 218, in wait_not_present
    return self.wait_js_func('!ph_is_present', selector)
  File "testlib.py", line 209, in wait_js_func
    return self.phantom.wait("%s(%s)" % (func, ','.join(map(jsquote, args))))
  File "testlib.py", line 659, in <lambda>
    return lambda *args: self._invoke(name, *args)
  File "testlib.py", line 682, in _invoke
    raise Error(res['error'])
Error: timeout

First occurrence: 2016-04-08T15:03:46.172228
Times recorded: 11
Latest occurrences:

  • 2016-04-12T10:07:07.534642 | revision 69fb681, logs
  • 2016-04-13T11:07:34.532459 | revision dca8dda1b28fd0dcfaad11c408c94010e9813737, logs
  • 2016-04-13T13:45:05.747475 | revision 5b242d9bfdcc5535002da2484f5187fecb4877ef, logs
  • 2016-04-19T04:37:18.421431 | revision 7d2faa8, logs
  • 2016-04-20T01:08:32.611423 | revision ff08333, logs
  • 2016-04-20T05:44:37.294238 | revision 59b9176, logs
  • 2016-04-21T05:26:54.479038 | revision 7bf5a77, logs
  • 2016-04-21T13:16:49.269333 | revision 3ccd2df672ae98d097fd44f507efce9c393ea413, logs
  • 2016-04-21T19:51:12.539850 | revision 1e414444d4d85d1fde0780ffc7ea23d8c6a4c255, logs
  • 2016-04-29T14:57:43.750241 | revision a2cdff0b3a3acdcc285cb57c8930d03431b1fd97, logs

Traceback (most recent call last):
  File "check-storage-luks", line 54, in testLuks
    "mount_point": mount_point_secret })
  File "storagelib.py", line 136, in dialog
    self.dialog_wait_close()
  File "storagelib.py", line 121, in dialog_wait_close
    self.browser.wait_not_present('#dialog')
  File "testlib.py", line 219, in wait_not_present
    return self.wait_js_func('!ph_is_present', selector)
  File "testlib.py", line 210, in wait_js_func
    return self.phantom.wait("%s(%s)" % (func, ','.join(map(jsquote, args))))
  File "testlib.py", line 660, in <lambda>
    return lambda *args: self._invoke(name, *args)
  File "testlib.py", line 683, in _invoke
    raise Error(res['error'])
Error: timeout

First occurrence: 2016-04-30T20:55:03.668014 | revision e94f7b0a3ef0e6e5ef99170d6a8412cd0bfc27ea, logs
Times recorded: 24
Latest occurrences:

  • 2016-06-02T11:37:55.960040 | revision 7322765, logs
  • 2016-06-10T07:17:42.298637 | revision 88bb2fa, logs
  • 2016-06-13T07:21:28.410289 | revision 085a9914ad3d1eee982fb28714ac4d87607702a0, logs
  • 2016-06-13T16:32:44.861617 | revision 4ed7535326b59934d7805e84988f20357da0b40c, logs
  • 2016-06-15T10:41:17.447893 | revision 8bbd784, logs
  • 2016-06-16T13:28:42.947973 | revision c07ad2df86c40073c0c40d56a5bedfa482c7ea9d, logs
  • 2016-06-16T14:31:48.757338 | revision 059b600, logs
  • 2016-06-17T07:30:47.572519 | revision db61788d2d1ec8bf6a2bc375279ffa9afaf817ce, logs
  • 2016-06-18T19:12:20.753936 | revision fcbca0e5faa4bbd560990ea52e09d91dfce77b34, logs
  • 2016-06-19T09:25:56.123206 | revision ce26c50, logs

Traceback (most recent call last):
  File "check-storage-luks", line 54, in testLuks
    "mount_point": mount_point_secret })
  File "storagelib.py", line 136, in dialog
    self.dialog_wait_close()
  File "storagelib.py", line 121, in dialog_wait_close
    self.browser.wait_not_present('#dialog')
  File "testlib.py", line 219, in wait_not_present
    return self.wait_js_func('!ph_is_present', selector)
  File "testlib.py", line 210, in wait_js_func
    return self.phantom.wait("%s(%s)" % (func, ','.join(map(jsquote, args))))
  File "testlib.py", line 664, in <lambda>
    return lambda *args: self._invoke(name, *args)
  File "testlib.py", line 687, in _invoke
    raise Error(res['error'])
Error: timeout

First occurrence: 2016-06-20T14:35:27.872918 | revision c9bea0c, logs
Times recorded: 8
Latest occurrences:

  • 2016-06-20T14:35:27.872918 | revision c9bea0c, logs
  • 2016-06-26T04:18:51.474358 | revision 9a1db1f113fa4c80993e01498f3276b9009d206d, logs
  • 2016-06-27T10:27:26.022476 | revision b4dedaf2a54a7fd9dbc66fd6d6450af7c771efc4, logs
  • 2016-06-28T15:38:57.708301 | revision 24deff78ca55dedc2d8bf1ff75a5f20ec06f1811, logs
  • 2016-07-05T16:47:58.236197 | revision 6b07f561c0bcfd1287c6ebae8eeaf960122a9291, logs
  • 2016-07-05T18:51:16.404648 | revision bc4632d, logs
  • 2016-07-07T11:29:18.494343 | revision d984ba7, logs
  • 2016-07-07T15:48:28.650102 | revision 2b40b16, logs

Traceback (most recent call last):
  File "check-storage-luks", line 54, in testLuks
    "mount_point": mount_point_secret })
  File "storagelib.py", line 136, in dialog
    self.dialog_wait_close()
  File "storagelib.py", line 121, in dialog_wait_close
    self.browser.wait_not_present('#dialog')
  File "testlib.py", line 220, in wait_not_present
    return self.wait_js_func('!ph_is_present', selector)
  File "testlib.py", line 211, in wait_js_func
    return self.phantom.wait("%s(%s)" % (func, ','.join(map(jsquote, args))))
  File "testlib.py", line 665, in <lambda>
    return lambda *args: self._invoke(name, *args)
  File "testlib.py", line 688, in _invoke
    raise Error(res['error'])
Error: timeout

First occurrence: 2016-07-06T17:18:08.888300 | revision 9eb7b60, logs
Times recorded: 17
Latest occurrences:

  • 2016-07-13T06:49:15.268544 | revision 0f11c05, logs
  • 2016-07-13T09:30:20.963495 | revision 18dbdc67414d0a1af7f4475de4322d0b20c9c49e, logs
  • 2016-07-14T06:01:27.749648 | revision 27ace9b7380e76089a1f9cb2b173d979987fd894, logs
  • 2016-07-14T13:30:50.534010 | revision 4c5fab3cf88605b4b9f1f662a2443db48682d7b0, logs
  • 2016-07-14T15:13:26.261100 | revision daf0eb4, logs
  • 2016-07-14T16:03:17.891168 | revision f179e03, logs
  • 2016-07-14T17:06:38.499991 | revision 4c5fab3cf88605b4b9f1f662a2443db48682d7b0, logs
  • 2016-07-15T10:35:44.817129 | revision 75f729c, logs
  • 2016-07-15T13:52:09.310913 | revision d4d4f9a, logs
  • 2016-07-15T16:57:57.764340 | revision 9d5d820, logs

Traceback (most recent call last):
  File "check-storage-luks", line 54, in testLuks
    "mount_point": mount_point_secret })
  File "storagelib.py", line 136, in dialog
    self.dialog_wait_close()
  File "storagelib.py", line 121, in dialog_wait_close
    self.browser.wait_not_present('#dialog')
  File "testlib.py", line 220, in wait_not_present
    return self.wait_js_func('!ph_is_present', selector)
  File "testlib.py", line 211, in wait_js_func
    return self.phantom.wait("%s(%s)" % (func, ','.join(map(jsquote, args))))
  File "testlib.py", line 686, in <lambda>
    return lambda *args: self._invoke(name, *args)
  File "testlib.py", line 709, in _invoke
    raise Error(res['error'])
Error: timeout

First occurrence: 2016-07-20T18:04:26.099493 | revision 658e9da9911c6128440818c60b1d834b19fb1fc5, logs
Times recorded: 7
Latest occurrences:

  • 2016-07-20T18:04:26.099493 | revision 658e9da9911c6128440818c60b1d834b19fb1fc5, logs
  • 2016-07-22T08:44:40.203810 | revision 2eb169920c6218387c47e4181b918ceda5dfaac7, logs
  • 2016-07-22T09:59:23.997013 | revision 1b16d3e9c2eeae36dd6017707215817c18075389, logs
  • 2016-07-29T12:14:13.430129 | revision 6dda17b, logs
  • 2016-08-02T08:12:20.135325 | revision e142ebd, logs
  • 2016-08-04T21:46:37.532799 | revision 0aa607b, logs
  • 2016-08-04T22:14:21.015228 | revision 12a6fa1, logs

Traceback (most recent call last):
  File "check-storage-luks", line 54, in testLuks
    "mount_point": mount_point_secret })
  File "storagelib.py", line 136, in dialog
    self.dialog_wait_close()
  File "storagelib.py", line 121, in dialog_wait_close
    self.browser.wait_not_present('#dialog')
  File "testlib.py", line 220, in wait_not_present
    return self.wait_js_func('!ph_is_present', selector)
  File "testlib.py", line 211, in wait_js_func
    return self.phantom.wait("%s(%s)" % (func, ','.join(map(jsquote, args))))
  File "testlib.py", line 684, in <lambda>
    return lambda *args: self._invoke(name, *args)
  File "testlib.py", line 707, in _invoke
    raise Error(res['error'])
Error: timeout

First occurrence: 2016-08-11T11:30:46.182149 | revision 4e55902, logs
Times recorded: 13
Latest occurrences:

  • 2016-08-15T08:17:18.351647 | revision 1cc1bf13a572f185e0cb9e3cba470b5865a6e1de, logs
  • 2016-08-15T10:11:17.770973 | revision 41980ad, logs
  • 2016-08-15T18:26:47.233037 | revision bd38cef78a1dae6d2cf050206846b97bb958eea2, logs
  • 2016-08-15T23:23:57.733298 | revision b7d67acb346c5a56e4da5bdbc959a317b41ff8f5, logs
  • 2016-08-16T11:11:34.615477 | revision 1e01cb8, logs
  • 2016-08-16T15:17:17.749060 | revision 1d58f23, logs
  • 2016-08-16T22:35:26.482543 | revision c3c8641161e67093c394eebada6d16595842b6b5, logs
  • 2016-08-17T09:48:52.835999 | revision 0a52b5a885896ddd4c17e73706a86ec62b7124cd, logs
  • 2016-08-17T12:02:05.624437 | revision f790c18, logs
  • 2016-08-18T09:10:25.938058 | revision ea06d33a0e0888a63744a037ea4b40128e23b776, logs

Traceback (most recent call last):
  File "check-storage-luks", line 54, in testLuks
    "mount_point": mount_point_secret })
  File "storagelib.py", line 136, in dialog
    self.dialog_wait_close()
  File "storagelib.py", line 121, in dialog_wait_close
    self.browser.wait_not_present('#dialog')
  File "testlib.py", line 220, in wait_not_present
    return self.wait_js_func('!ph_is_present', selector)
  File "testlib.py", line 211, in wait_js_func
    return self.phantom.wait("%s(%s)" % (func, ','.join(map(jsquote, args))))
  File "testlib.py", line 685, in <lambda>
    return lambda *args: self._invoke(name, *args)
  File "testlib.py", line 708, in _invoke
    raise Error(res['error'])
Error: timeout

First occurrence: 2016-08-17T11:34:37.880123 | revision 6b395ac, logs
Times recorded: 11
Latest occurrences:

  • 2016-08-18T12:48:31.195144 | revision de1148b5c6945830a2ae99c0bdc2c820e233055d, logs
  • 2016-08-18T13:20:24.912350 | revision 95fb16537ae34d929ff8e24294e5e88404109346, logs
  • 2016-08-19T10:04:34.069168 | revision 8ff288d06bb491461150eb61cca741765a2a206b, logs
  • 2016-08-20T05:32:52.874635 | revision 9b6449d6cc55d038751060902761a7de1aab809d, logs
  • 2016-08-22T08:09:37.245454 | revision 65fe2dc82260758d8771259cf05533b88d2822f2, logs
  • 2016-08-22T11:37:24.077654 | revision 54b0ee4, logs
  • 2016-08-22T12:39:57.763819 | revision 9ec5962bab05446b7a5eac45505158b48f8aae32, logs
  • 2016-08-22T15:08:52.973509 | revision 7466d61, logs
  • 2016-08-22T16:27:55.769968 | revision acb89f9, logs
  • 2016-08-22T18:56:22.387717 | revision c55f7f4, logs

Traceback (most recent call last):
  File "check-storage-luks", line 54, in testLuks
    "mount_point": mount_point_secret })
  File "storagelib.py", line 136, in dialog
    self.dialog_wait_close()
  File "storagelib.py", line 121, in dialog_wait_close
    self.browser.wait_not_present('#dialog')
  File "testlib.py", line 220, in wait_not_present
    return self.wait_js_func('!ph_is_present', selector)
  File "testlib.py", line 211, in wait_js_func
    return self.phantom.wait("%s(%s)" % (func, ','.join(map(jsquote, args))))
  File "testlib.py", line 682, in <lambda>
    return lambda *args: self._invoke(name, *args)
  File "testlib.py", line 705, in _invoke
    raise Error(res['error'])
Error: timeout

First occurrence: 2016-08-22T17:07:34.539406 | revision 5ece9c4, logs
Times recorded: 46
Latest occurrences:

  • 2016-09-07T11:21:10.021615 | revision 0577e74, logs
  • 2016-09-08T08:44:44.201062 | revision b78f507a4006d3a64f2d235c0ba50e6e13a83a65, logs
  • 2016-09-08T12:40:42.985273 | revision 7089bea, logs
  • 2016-09-08T20:36:08.448943 | revision ca895051d5be1a67068e383a9c66633ee788f502, logs
  • 2016-09-08T23:54:35.011637 | revision 7089bea, logs
  • 2016-09-09T08:04:22.437442 | revision a7b9e6a, logs
  • 2016-09-09T09:44:16.539524 | revision 3157b01, logs
  • 2016-09-09T11:49:18.857154 | revision f7416b20389eeacc6569b92e765bfdedc9a465f0, logs
  • 2016-09-10T06:03:49.615747 | revision 8ea28a9, logs
  • 2016-09-12T16:04:26.047250 | revision 19b2355, logs

Traceback (most recent call last):
  File "check-storage-luks", line 54, in testLuks
    "mount_point": mount_point_secret })
  File "storagelib.py", line 137, in dialog
    self.dialog_wait_close()
  File "storagelib.py", line 122, in dialog_wait_close
    self.browser.wait_not_present('#dialog')
  File "testlib.py", line 220, in wait_not_present
    return self.wait_js_func('!ph_is_present', selector)
  File "testlib.py", line 211, in wait_js_func
    return self.phantom.wait("%s(%s)" % (func, ','.join(map(jsquote, args))))
  File "testlib.py", line 682, in <lambda>
    return lambda *args: self._invoke(name, *args)
  File "testlib.py", line 705, in _invoke
    raise Error(res['error'])
Error: timeout

First occurrence: 2016-09-12T10:01:06.868166 | revision 2fd55ed, logs
Times recorded: 3
Latest occurrences:

  • 2016-09-12T10:01:06.868166 | revision 2fd55ed, logs
  • 2016-09-13T10:06:43.402949 | revision b4d9011, logs
  • 2016-09-13T12:56:28.885771 | revision b4d9011, logs

Traceback (most recent call last):
  File "check-storage-luks", line 54, in testLuks
    "mount_point": mount_point_secret })
  File "storagelib.py", line 137, in dialog
    self.dialog_wait_close()
  File "storagelib.py", line 122, in dialog_wait_close
    self.browser.wait_not_present('#dialog')
  File "testlib.py", line 220, in wait_not_present
    return self.wait_js_func('!ph_is_present', selector)
  File "testlib.py", line 211, in wait_js_func
    return self.phantom.wait("%s(%s)" % (func, ','.join(map(jsquote, args))))
  File "testlib.py", line 685, in <lambda>
    return lambda *args: self._invoke(name, *args)
  File "testlib.py", line 708, in _invoke
    raise Error(res['error'])
Error: timeout

First occurrence: 2016-09-14T17:43:35.360474 | revision ede1ddb, logs
Times recorded: 9
Latest occurrences:

  • 2016-09-14T17:43:35.360474 | revision ede1ddb, logs
  • 2016-09-15T11:21:22.070649 | revision 9292468, logs
  • 2016-09-19T14:04:46.902530 | revision b120ba0, logs
  • 2016-09-20T11:37:58.154587 | revision 6eea3b7c45c40ee53f2d83e042f12411c15a2a87, logs
  • 2016-09-20T20:02:52.430148 | revision 2e8886be47de06f964e90581786df5d328591b1c, logs
  • 2016-09-21T21:32:55.182120 | revision 7b5088f, logs
  • 2016-09-23T13:46:36.988731 | revision 4da7c4c, logs
  • 2016-09-23T15:30:26.414888 | revision 9232d26, logs
  • 2016-09-24T10:05:33.681014 | revision 640303f, logs

Traceback (most recent call last):
  File "check-storage-luks", line 54, in testLuks
    "mount_point": mount_point_secret })
  File "storagelib.py", line 148, in dialog
    self.dialog_wait_close()
  File "storagelib.py", line 133, in dialog_wait_close
    self.browser.wait_not_present('#dialog')
  File "testlib.py", line 220, in wait_not_present
    return self.wait_js_func('!ph_is_present', selector)
  File "testlib.py", line 211, in wait_js_func
    return self.phantom.wait("%s(%s)" % (func, ','.join(map(jsquote, args))))
  File "testlib.py", line 685, in <lambda>
    return lambda *args: self._invoke(name, *args)
  File "testlib.py", line 708, in _invoke
    raise Error(res['error'])
Error: timeout

First occurrence: 2016-09-14T19:02:32.911084 | revision e9065d739ab8c8b93189754003bfa70addc30587, logs
Times recorded: 15
Latest occurrences:

  • 2016-09-29T16:14:47.342831 | revision 1c56338, logs
  • 2016-09-30T07:50:35.413698 | revision f5702132e046a71c8d8f6b09625cde2fd812a26b, logs
  • 2016-09-30T18:58:19.152244 | revision 7859a76, logs
  • 2016-10-03T09:24:58.522838 | revision 95ba820, logs
  • 2016-10-03T09:47:38.923702 | revision a93b7b9, logs
  • 2016-10-03T16:42:23.477549 | revision a93b7b9, logs
  • 2016-10-06T23:44:08.707824 | revision 46de24e6df8077d47cf67738f3d955eb3de53168, logs
  • 2016-10-07T00:55:28.968472 | revision 8fac15f6cc077247ceb2baa2ed6965a3394f6f2c, logs
  • 2016-10-07T10:17:41.786293 | revision 617544c, logs
  • 2016-10-07T10:39:36.398700 | revision 5ddecde, logs

Traceback (most recent call last):
  File "check-storage-luks", line 54, in testLuks
    "mount_point": mount_point_secret })
  File "storagelib.py", line 148, in dialog
    self.dialog_wait_close()
  File "storagelib.py", line 133, in dialog_wait_close
    self.browser.wait_not_present('#dialog')
  File "testlib.py", line 220, in wait_not_present
    return self.wait_js_func('!ph_is_present', selector)
  File "testlib.py", line 211, in wait_js_func
    return self.phantom.wait("%s(%s)" % (func, ','.join(map(jsquote, args))))
  File "testlib.py", line 688, in <lambda>
    return lambda *args: self._invoke(name, *args)
  File "testlib.py", line 711, in _invoke
    raise Error(res['error'])
Error: timeout

First occurrence: 2016-10-10T14:09:44.971918 | revision d71587deb0630f54d5506c37380f176ba1b6ee2e, logs
Times recorded: 20
Latest occurrences:

  • 2016-10-18T18:52:30.697509 | revision b691563eee58563af58907221c5ed1adf74bd176, logs
  • 2016-10-19T06:32:01.986008 | revision 789de1d, logs
  • 2016-10-19T07:12:22.514345 | revision b7ec348, logs
  • 2016-10-19T19:41:32.055845 | revision 277408a, logs
  • 2016-10-20T08:11:54.586677 | revision 1b5371b, logs
  • 2016-10-20T11:59:32.132988 | revision 277408a, logs
  • 2016-10-20T14:01:34.502842 | revision 92858f8, logs
  • 2016-10-20T15:17:06.546478 | revision 277408a, logs
  • 2016-10-21T06:33:36.117854 | revision deeed30, logs
  • 2016-10-21T09:51:33.881848 | revision 7e194a1, logs

Traceback (most recent call last):
  File "check-storage-luks", line 54, in testLuks
    "mount_point": mount_point_secret })
  File "storagelib.py", line 174, in dialog
    self.dialog_wait_close()
  File "storagelib.py", line 159, in dialog_wait_close
    self.browser.wait_not_present('#dialog')
  File "testlib.py", line 220, in wait_not_present
    return self.wait_js_func('!ph_is_present', selector)
  File "testlib.py", line 211, in wait_js_func
    return self.phantom.wait("%s(%s)" % (func, ','.join(map(jsquote, args))))
  File "testlib.py", line 688, in <lambda>
    return lambda *args: self._invoke(name, *args)
  File "testlib.py", line 711, in _invoke
    raise Error(res['error'])
Error: timeout

First occurrence: 2016-10-13T10:29:25.311371 | revision 541ec9afc9b77396b9a8f1ef714d4e668afbe0a8, logs
Times recorded: 1
Latest occurrences:

  • 2016-10-13T10:29:25.311371 | revision 541ec9afc9b77396b9a8f1ef714d4e668afbe0a8, logs

Traceback (most recent call last):
  File "check-storage-luks", line 54, in testLuks
    "mount_point": mount_point_secret })
  File "storagelib.py", line 148, in dialog
    self.dialog_wait_close()
  File "storagelib.py", line 133, in dialog_wait_close
    self.browser.wait_not_present('#dialog')
  File "testlib.py", line 220, in wait_not_present
    return self.wait_js_func('!ph_is_present', selector)
  File "testlib.py", line 211, in wait_js_func
    return self.phantom.wait("%s(%s)" % (func, ','.join(map(jsquote, args))))
  File "testlib.py", line 688, in <lambda>
    return lambda *args: self._invoke(name, *args)
  File "testlib.py", line 711, in _invoke
    raise Error(res['error'])
Error: timeout
Error wiping newly created partition /dev/sda1: Command-line `wipefs -a "/dev/sda1"' exited with non-zero exit status 1: wipefs: error: /dev/sda1: probing initialization failed: No such file or directory

First occurrence: 2016-10-21T11:09:36.249730 | revision 578c4ed, logs
Times recorded: 2
Latest occurrences:

  • 2016-10-21T11:09:36.249730 | revision 578c4ed, logs
  • 2016-10-21T14:00:42.558445 | revision 578c4ed, logs

Traceback (most recent call last):
  File "check-storage-luks", line 54, in testLuks
    "mount_point": mount_point_secret })
  File "storagelib.py", line 148, in dialog
    self.dialog_wait_close()
  File "storagelib.py", line 133, in dialog_wait_close
    self.browser.wait_not_present('#dialog')
  File "testlib.py", line 220, in wait_not_present
    return self.wait_js_func('!ph_is_present', selector)
  File "testlib.py", line 211, in wait_js_func
    return self.phantom.wait("%s(%s)" % (func, ','.join(map(jsquote, args))))
  File "testlib.py", line 689, in <lambda>
    return lambda *args: self._invoke(name, *args)
  File "testlib.py", line 712, in _invoke
    raise Error(res['error'])
Error: timeout

First occurrence: 2016-10-21T19:26:38.098683 | revision 7c849a3d6c8dc568f8c0f5cb03bf85b8a3b33ce2, logs
Times recorded: 5
Latest occurrences:

  • 2016-10-21T19:26:38.098683 | revision 7c849a3d6c8dc568f8c0f5cb03bf85b8a3b33ce2, logs
  • 2016-10-22T05:43:55.394826 | revision 266dad0, logs
  • 2016-10-22T06:24:15.879675 | revision 1ca1204be2e6035dd0fae4ff160da90e4a7e4061, logs
  • 2016-10-22T07:32:31.070050 | revision 1ca1204be2e6035dd0fae4ff160da90e4a7e4061, logs
  • 2016-10-22T07:44:56.359439 | revision eeedaf6, logs

Traceback (most recent call last):
  File "check-storage-luks", line 54, in testLuks
    "mount_point": mount_point_secret })
  File "storagelib.py", line 148, in dialog
    self.dialog_wait_close()
  File "storagelib.py", line 133, in dialog_wait_close
    self.browser.wait_not_present('#dialog')
  File "testlib.py", line 220, in wait_not_present
    return self.wait_js_func('!ph_is_present', selector)
  File "testlib.py", line 211, in wait_js_func
    return self.phantom.wait("%s(%s)" % (func, ','.join(map(jsquote, args))))
  File "testlib.py", line 689, in <lambda>
    return lambda *args: self._invoke(name, *args)
  File "testlib.py", line 712, in _invoke
    raise Error(res['error'])
Error: timeout
Error wiping newly created partition /dev/sda1: Command-line `wipefs -a "/dev/sda1"' exited with non-zero exit status 1: wipefs: error: /dev/sda1: probing initialization failed: No such file or directory

First occurrence: 2016-10-24T12:14:33.798975 | revision 3a7a912, logs
Times recorded: 5
Latest occurrences:

  • 2016-10-24T12:14:33.798975 | revision 3a7a912, logs
  • 2016-10-24T14:59:05.751888 | revision 1c17e2ed7d8284e0ca19cf60620db7d3721db9fe, logs
  • 2016-10-24T16:17:31.113439 | revision 5e19453, logs
  • 2016-10-25T09:04:12.315759 | revision 06b07a1, logs
  • 2016-10-25T13:18:06.914398 | revision e194b3d, logs

Traceback (most recent call last):
  File "check-storage-luks", line 54, in testLuks
    "mount_point": mount_point_secret })
  File "storagelib.py", line 151, in dialog
    self.dialog_wait_close()
  File "storagelib.py", line 136, in dialog_wait_close
    self.browser.wait_not_present('#dialog')
  File "testlib.py", line 220, in wait_not_present
    return self.wait_js_func('!ph_is_present', selector)
  File "testlib.py", line 211, in wait_js_func
    return self.phantom.wait("%s(%s)" % (func, ','.join(map(jsquote, args))))
  File "testlib.py", line 698, in <lambda>
    return lambda *args: self._invoke(name, *args)
  File "testlib.py", line 721, in _invoke
    raise Error(res['error'])
Error: timeout
Message recipient disconnected from message bus without replying

First occurrence: 2016-11-07T14:04:54.445363 | revision fc0a764, logs
Times recorded: 1
Latest occurrences:

  • 2016-11-07T14:04:54.445363 | revision fc0a764, logs


Description (Richard W.M. Jones, 2012-11-03 17:14:18 UTC)

Description of problem:

We use wipefs as part of the libguestfs tests, and I've noticed
two (related) changes recently.  One is not a bug, the other
seems to be a bug.

First change: You can no longer wipe a mounted filesystem, eg:
 mount /dev/sda1 /foo
 wipefs -a /dev/sda1
fails with:
 /dev/sda1: probing initialization failed: Device or resource busy

That is obviously NOT a bug.  You don't want to be able to
wipe a filesystem which the kernel has mounted.

However a second error looks like it is a bug:

 wipefs -a /dev/sda
 /dev/sda: probing initialization failed: Device or resource busy

In this second case, /dev/sda contains partitions, but nothing is
mounted.  Obviously I want to erase the partitions which is the
whole point of running wipefs(!)

Version-Release number of selected component (if applicable):

util-linux.x86_64 0:2.22.1-3.fc19
(for other packages, see:
http://kojipkgs.fedoraproject.org//work/tasks/1830/4651830/root.log)

How reproducible:

100%

Steps to Reproduce:
1. Run the libguestfs tests on Rawhide.

Additional info:

Example of the second failure:
http://kojipkgs.fedoraproject.org//work/tasks/1830/4651830/build.log


Comment 1 (Richard W.M. Jones, 2012-11-03 18:29:55 UTC)

This worked with util-linux 2.22.1-1.fc19 & coreutils 8.17.
It was when I upgraded to util-linux 2.22.1-3 & coreutils 8.20
that it broke.


Comment 4 (Karel Zak, 2012-11-20 13:47:41 UTC)

(In reply to comment #0)
> However a second error looks like it is a bug:
> 
>  wipefs -a /dev/sda
>  /dev/sda: probing initialization failed: Device or resource busy
> 
> In this second case, /dev/sda contains partitions, but nothing is
> mounted. 

I have doubts that nothing is mounted... partitioned device, 
nothing mounted:

  # strace -e open ./wipefs --no-act -a /dev/sdb 2>&1 | grep sdb
  open("/dev/sdb", O_RDWR|O_EXCL)         = 3

  success!

partitioned device, sdb1 mounted:

  # mount /dev/sdb1 /mnt/test

  # strace -e open ./wipefs --no-act -a /dev/sdb 2>&1 | grep sdb
  open("/dev/sdb", O_RDWR|O_EXCL)         = -1 EBUSY (Device or resource busy)

It means that the partition table itself has no impact on O_EXCL; the problem appears only if any partition is mounted.

It all seems correct and expected ... I don't think we want to allow deleting the partition table while any partition is actively used.
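A rough way to see what keeps that exclusive open from succeeding (the device names below are just examples):

  lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT /dev/sdb   # mounted partitions show a mount point
  ls /sys/block/sdb/sdb1/holders/                 # non-empty output means dm/md devices sit on top of sdb1
  fuser -vm /dev/sdb1                             # processes keeping the partition busy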


Comment 6 (Karel Zak, 2012-11-22 10:40:25 UTC)

Fixed by upstream commit 2968c3fc7388f88b8debe64d61d9785601c16436.

If you are setting up LUKS on a USB drive connected to your laptop, you may run into two errors at the start.

The errors are as follows:

Probing initialization failed: Device or resource busy

You will see this error when you use the wipefs tool to clean the signatures from the USB drive.

[Screenshot: wipefs -a failing with "probing initialization failed: Device or resource busy"]

You can solve this problem by using the --force option that comes with the wipefs tool.

[Screenshot: wipefs --force output reporting the erased bytes]

It will clean the signatures from the device and display a message telling you how many bytes were erased.

But I don't like to do things forcefully, so I have used another way to format the device, which I will explain later. Let's focus on the other error, which occurs when you are creating the LUKS partition. The error is as follows:

Cannot format device /dev/sdb1 which is still in use.

You will see this error when you try to create the LUKS partition.

[Screenshot: the LUKS format command failing because the device is still in use]

You cannot use --force here, as there is no option to run this command forcefully.

I know you have a lot of questions, like:

What happens when we use the wipefs tool to clean signatures? Why are we unable to clean the signatures from the device?

Why are we unable to create the LUKS partition?

You will get the answers to all your questions when we discuss the solution.

Now, let’s discuss the solution.

When you connect your USB drive to your laptop, it gets mounted automatically. That is why the system shows the error when you try to use the wipefs tool on it, and it is the same reason you get the error while creating the LUKS partition. For cleaning signatures there is a force option, so we could use it, but there is no such option when creating the LUKS partition.

Now let's follow the standard way rather than forcing it. It is not cumbersome: you just need to run a simple command that unmounts the USB drive. After that you will not face any error while cleaning the signatures or while creating the partition.

See the following picture, and you will get all your answers.

[Screenshot: unmounting the USB drive, then cleaning the signatures and creating the LUKS partition without errors]
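A minimal command sketch of that standard way, assuming the USB drive shows up as /dev/sdb with its filesystem on /dev/sdb1 (adjust the names for your system):

sudo umount /dev/sdb1                      # or unmount via the mount point shown by lsblk
sudo wipefs -a /dev/sdb1                   # now succeeds without --force
sudo cryptsetup luksFormat /dev/sdb1       # create the LUKS container on the partition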

I hope this post is beneficial for you. If you are trying to encrypt your USB drives using LUKS, see the linked blog post, where I have explained the steps to encrypt USB drives using LUKS.

Leave a reply here, or you can contact me to ask questions. Suggestions are always welcome. For more updates, keep visiting.

Thank you.

  • #1

Hi all

My cluster consists of 6 nodes with 3 OSDs each (18 OSDs total), pve 6.2-6 and ceph 14.2.9. BTW, it’s been up and running fine for 7 months now and went through all updates flawlessly so far.

However, after rebooting the nodes one after the other when updating to 6.2-6, the 3 OSDs on one node didn't come up again. After Ceph was back to a clean state, with the 3 OSDs "out", I decided to destroy them and waited for the clean state again. Then (on the respective node) I tried

  • ceph-volume lvm zap /dev/sda --destroy

which failed, returning:

Code:

Traceback (most recent call last):
  File "/usr/sbin/ceph-volume", line 6, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 83, in <module>
    __import__('pkg_resources.extern.packaging.specifiers')
  File "/usr/lib/python2.7/dist-packages/pkg_resources/extern/__init__.py", line 43, in load_module
    __import__(extant)
ValueError: bad marshal data (unknown type code)

The attempt to add the OSD anyway using

  • pveceph osd create /dev/sda

also failed, returning:

Code:

wipe disk/partition: /dev/sda
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.0918 s, 192 MB/s
Traceback (most recent call last):
  File "/sbin/ceph-volume", line 6, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 83, in <module>
    __import__('pkg_resources.extern.packaging.specifiers')
  File "/usr/lib/python2.7/dist-packages/pkg_resources/extern/__init__.py", line 43, in load_module
    __import__(extant)
ValueError: bad marshal data (unknown type code)
command 'ceph-volume lvm create --cluster-fsid a8d6705a-74c4-4904-9000-0db5742043fc --data /dev/sda' failed: exit code 1

The same happens with the other two HDDs in that node (/dev/sdb and /dev/sdc). So I'm kind of stuck, and I'd appreciate any hint and help on this :)

Kind regards
lucentwolf

Last edited: Jul 8, 2020

Alwin (Proxmox Retired Staff)


  • #2

You may have leftover LVs. They and the VGs containing them need to be removed first.
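A rough sketch of that cleanup; the VG name below is a placeholder, so double-check with vgs before removing anything:

Code:

lvs                                      # list leftover logical volumes and their volume groups
vgs                                      # Ceph OSD volume groups are usually named ceph-<uuid>
vgremove ceph-<uuid>                     # placeholder name; removes the VG and the LV inside it after confirmation
ceph-volume lvm zap /dev/sda --destroy   # retry the zap once the old volumes are gone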

  • #3

Hello,

Sorry in advance; to avoid opening a new similar topic, I'll write here.
I have an issue with the ceph-volume lvm zap command too.

I used PVE 6.0.1 with Ceph for testing purposes. Now I have to migrate this cluster to production. I made a clean install (PVE + upgrade + Ceph) on each of 4 nodes, created the cluster, and ran into the problem that I can't add OSDs to the new Ceph cluster. All disks are in use ("No disk unused" in the GUI).
lsblk shows that the disks still carry the old Ceph volumes:

Code:

root@pve-01:~# lsblk
NAME                                                                                                  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                                                                                     8:0    0 232.9G  0 disk
|-sda1                                                                                                  8:1    0  1007K  0 part
|-sda2                                                                                                  8:2    0   512M  0 part
`-sda3                                                                                                  8:3    0 232.4G  0 part
  |-pve-swap                                                                                          253:5    0     8G  0 lvm  [SWAP]
  |-pve-root                                                                                          253:6    0    58G  0 lvm  /
  |-pve-data_tmeta                                                                                    253:7    0   1.5G  0 lvm 
  | `-pve-data                                                                                        253:9    0 147.4G  0 lvm 
  `-pve-data_tdata                                                                                    253:8    0 147.4G  0 lvm 
    `-pve-data                                                                                        253:9    0 147.4G  0 lvm 
sdb                                                                                                     8:16   0 931.5G  0 disk
`-ceph--c8f6fde5--3a68--418b--b3ba--2aaf8b4b75c5-osd--block--b7726b15--fd54--45d9--8a07--2729bda9c414 253:4    0 931.5G  0 lvm 
sdc                                                                                                     8:32   0 931.5G  0 disk
`-ceph--23342ddd--b606--40a7--8a3a--6400485cc7a2-osd--block--a12e2660--d544--4c94--af59--e91cfce06eb7 253:3    0 931.5G  0 lvm 
sdd                                                                                                     8:48   0 931.5G  0 disk
`-ceph--8108d3bd--9ef7--4c1b--b5ae--1912b56dbbff-osd--block--01cb0cdc--a0e6--43fb--b6df--f5598e099899 253:2    0 931.5G  0 lvm 
sde                                                                                                     8:64   0 931.5G  0 disk
`-ceph--67670145--0dff--4a95--9b55--7b48b6af7d0f-osd--block--ae4bb4f4--e187--42f5--a2b3--3e843536d18d 253:1    0 931.5G  0 lvm 
sdf                                                                                                     8:80   0 931.5G  0 disk
`-ceph--0bf050a7--1961--4e49--8e42--c2c5907bbc21-osd--block--2294241e--78ba--483c--8610--ad5c031c1750 253:0    0 931.5G  0 lvm

I tried to prepare the disks with ceph-volume lvm zap:

Code:

root@pve-01:~# ceph-volume lvm zap /dev/sdb --destroy
--> Zapping: /dev/sdb
 stderr: wipefs: error: /dev/sdb: probing initialization failed: Device or resource busy
--> failed to wipefs device, will try again to workaround probable race condition

Also I already tried sgdisk -Z /dev/sdb, wipefs -af /dev/sdb/, and dd if=/dev/zero of=/dev/sdb bs=500M count=2048. No luck…

How do I destroy the old Ceph data and IDs so that I can create new OSDs?

  • #4

Hi all, especially Alwin

LV means logical volume, right? Checking with

  • ceph-volume lvm list

returns basically the same error:

Code:

Traceback (most recent call last):
  File "/usr/sbin/ceph-volume", line 6, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 83, in <module>
    __import__('pkg_resources.extern.packaging.specifiers')
  File "/usr/lib/python2.7/dist-packages/pkg_resources/extern/__init__.py", line 43, in load_module
    __import__(extant)
ValueError: bad marshal data (unknown type code)

To me it looks like a flaw in the python setup…

lucentwolf

  • #6

Hi Pravednik, I tried
  • creating a new GPT on each drive; then
  • removing all partition tables from the drives
…same issue.

Following up on Alwin's hint, vgdisplay shows one VG named 'pve' (no others). Is that the one I need to remove?

  • #7

lucentwolf, did you reboot the node after removing the partitions? I tried to mount the disks after removing the partitions and got the same error. Only after rebooting the node were all disks available for Ceph.

  • #8

Pravednik, sure, I rebooted numerous times :-(
[EDIT]: I also rebooted the other nodes, just in case it would make a difference…

Last edited: Jul 7, 2020

Alwin (Proxmox Retired Staff)


  • #9

To me it looks like a flaw in the python setup…

Did you upgrade all Ceph related packages?

Following up on Alwin's hint, vgdisplay shows one VG named 'pve' (no others). Is that the one I need to remove?

No, otherwise the OS and local-lvm will be lost. ;)

What’s the output of pveceph osd create /dev/<disk>?

  • #10

Hi Alwin

…appreciate your involvement; all packages report to be up to date (no action after 'apt update' or 'apt dist-upgrade'). PVE and Ceph versions are identical to the remaining nodes (see initial post), and python --version gives 2.7.16 on all nodes.

  • pveceph osd create /dev/sda

reports

Code:

create OSD on /dev/sda (bluestore)
wipe disk/partition: /dev/sda
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.14345 s, 183 MB/s
Traceback (most recent call last):
  File "/sbin/ceph-volume", line 6, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 83, in <module>
    __import__('pkg_resources.extern.packaging.specifiers')
  File "/usr/lib/python2.7/dist-packages/pkg_resources/extern/__init__.py", line 43, in load_module
    __import__(extant)
ValueError: bad marshal data (unknown type code)
command 'ceph-volume lvm create --cluster-fsid a8d6705a-74c4-4904-9000-0db5742043fc --data /dev/sda' failed: exit code 1

Alwin

Alwin

Proxmox Retired Staff


  • #11

ValueError: bad marshal data (unknown type code) command 'ceph-volume lvm create --cluster-fsid a8d6705a-74c4-4904-9000-0db5742043fc --data /dev/sda' failed: exit code 1

ceph-volume is in the ceph-osd package; you may reinstall that package.

  • #12

OK, tried apt remove ceph-osd, then apt install ceph-osd, and rebooted. Same error :-(

Alwin

Alwin

Proxmox Retired Staff


  • #13

OK, tried apt remove ceph-osd, then apt install ceph-osd, and rebooted. Same error :-(

What’s the output of pveversion -v?

  • #14

That’s at first glance identical to the other nodes:

Code:

proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve)
pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754)
pve-kernel-5.4: 6.2-4
pve-kernel-helper: 6.2-4
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
pve-kernel-5.3.18-1-pve: 5.3.18-1
pve-kernel-5.3.13-2-pve: 5.3.13-2
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 14.2.9-pve1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve2
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-3
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-8
pve-cluster: 6.1-8
pve-container: 3.1-8
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-3
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1

  • #15

Oops, just noticed: all nodes report
ceph: 14.2.9-pve1
whereas the affected node has no such line in pveversion -v.
[EDIT]
However, in the UI under Ceph/OSD the node shows version 14.2.9 (as all the others)

Alwin (Proxmox Retired Staff)


  • #16

oops: Just noticed: All nodes report
ceph: 14.2.9-pve1
whereas the affected node has no such line in pveversion -v

Then Ceph is not installed on that node, or at least not the meta-package. Run pveceph install to get it installed.

  • #17

Well, I did, and it said "1 newly installed" (i.e. ceph). However, it is still the same issue.

Alwin (Proxmox Retired Staff)


  • #19

Sorry, I need to keep nagging you about this…

I did indeed stumble over the linked Stack Overflow thread; however, I don't quite understand how to fix it. Recap:

  • Already running a simple ceph-volume (without arguments) results in the same "ValueError…", whereas on the other nodes I get the "Available subcommands" help displayed
  • That "ValueError: bad marshal data (unknown type code)" happens if a python 2.7 .pyc is loaded by python 3.5 (and there seems to be a regression in 3.7)
  • The Stack Overflow thread you mentioned says "…reinstall the python application" or "…remove the .pyc"; but
  • Ceph & ceph-osd are re-installed (purge, autoremove, pveceph install), so the potentially included .pyc should be fine.

So, honestly, I’m stuck. Would a re-installation of the python packages be a viable option?

Alwin (Proxmox Retired Staff)


  • #20

  • The Stack Overflow thread you mentioned says "…reinstall the python application" or "…remove the .pyc"; but

Try to manually remove the .pyc files; they may not have been cleared by the package re-installation. And is the node on the latest available packages (no missing updates)?
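A sketch of that cleanup; the path and package names are assumptions based on the traceback, which points at /usr/lib/python2.7/dist-packages/pkg_resources:

Code:

find /usr/lib/python2.7/dist-packages -name '*.pyc' -delete    # remove stale byte-compiled files
apt install --reinstall ceph-osd python-pkg-resources          # restore the packages' own files
ceph-volume --help                                             # should print the subcommand help again instead of the ValueError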

I have reformatted an external HDD (using GParted) because it was NTFS and write permission was denied. I then tried FAT32, but its maximum file size is too small, and there is no exFAT option. I would also like password protection, if possible, when the drive is plugged in.

How can I get the drive to allow me to write, and have a password if possible? What is wrong in my ext4 process? With a primary ext4 partition on the external HDD, everything is visible, mounted, and openable, but not writable.

I have just tried this using the commands below, and it did not work either.

Device     Boot Start       End   Sectors   Size Id Type
/dev/sdb1        2048 625141759 625139712 298.1G  b W95 FAT32
$ sudo  wipefs -a /dev/sdb1
wipefs: error: /dev/sdb1: probing initialisation failed: Device or resource busy
david@david-HP-15-Notebook-PC:~$ sudo  wipefs -a /dev/sdb1
/dev/sdb1: 8 bytes were erased at offset 0x00000052 (vfat): 46 41 54 33 32 20 20 20
/dev/sdb1: 1 byte was erased at offset 0x00000000 (vfat): eb
/dev/sdb1: 2 bytes were erased at offset 0x000001fe (vfat): 55 aa
david@david-HP-15-Notebook-PC:~$ sudo fdisk /dev/sdb1

Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognised partition table.
Created a new DOS disklabel with disk identifier 0x5fd1458f.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): 

Using default response p.
Partition number (1-4, default 1): 
First sector (2048-625139711, default 2048): 
Last sector, +sectors or +size{K,M,G,T,P} (2048-625139711, default 625139711): 

Created a new partition 1 of type 'Linux' and of size 298.1 GiB.

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): 7
Changed type of partition 'Linux' to 'HPFS/NTFS/exFAT'.

**Command (m for help): w
The partition table has been altered.
Failed to add partition 1 to system: Invalid argument**

The kernel still uses the old partitions. The new table will be used at the next reboot. 
Synching disks.

It says w gives an invalid argument, so what do I do with that? Is this option for exFAT partitioning not available on Linux Mint, or has the command changed?

Now this issue appears with the previously used command:

$ sudo  wipefs -a /dev/sdb1
wipefs: /dev/sdb1: ignoring nested "dos" partition table on non-whole disk device
wipefs: Use the --force option to force erase.

What I typed came from a website.

(exFAT is apparently appropriate for use across multiple systems, so it is preferred; NTFS is already available in GParted, and ext4 is my last-choice option.) Is there some other issue with exFAT that I don't know of, or is this just not doable in Linux?
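For what it's worth, a minimal sketch of one way to end up with an exFAT partition spanning the whole disk, assuming the external drive is /dev/sdb, everything on it is expendable, and the exFAT tools (the exfatprogs or exfat-utils package) are installed:

sudo umount /dev/sdb?*                                  # make sure nothing on the disk is mounted
sudo wipefs -a /dev/sdb                                 # wipe signatures on the whole disk, not just sdb1
sudo parted --script /dev/sdb mklabel msdos             # write a fresh partition table
sudo parted --script /dev/sdb mkpart primary 1MiB 100%  # one partition spanning the disk
sudo partprobe /dev/sdb                                 # make the kernel re-read the table without rebooting
sudo mkfs.exfat /dev/sdb1                               # format the new partition as exFAT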
