Error "Read-only file system during write" on /dev/disks

Recently I wanted to install ESXi 6.5 on a USB disk and also use that disk as a datastore to store VMs on. In the end I couldn't get any VMs to run off the USB disk, but I did spend some time getting the disk presented as a datastore, so I wanted to post that part here.

Installing ESXi 6.5 to a USB disk is straightforward.

This blog post is a good reference on making a USB disk visible as a datastore, but it covers a disk without ESXi installed on it – i.e. the USB disk is used entirely as a datastore. In my case the disk already had partitions on it, so I had to deviate a bit from those instructions. That meant some mucking about with partedUtil, which is the ESXi command-line way of fiddling with partition tables (fdisk, while still present, is no longer supported as it doesn't handle GPT).

1. First, connect to the ESXi host via SSH.

2. Shut down the USB arbitrator service (it is used to present USB devices to VMs): /etc/init.d/usbarbitrator stop (Update: a reader emailed me to say this step and the next are not needed and in fact can stop the USB disk from being made visible. Skip them first, and only come back to them if skipping doesn't help. :)

3. Permanently disable this service too: chkconfig usbarbitrator off
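
If you do end up needing these two steps, they can be run back to back from the SSH session (the same commands as above; chkconfig just makes the change survive a reboot):

/etc/init.d/usbarbitrator stop    # stop the arbitrator for this boot
chkconfig usbarbitrator off       # keep it disabled across reboots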

4. Now find the USB disk device from /dev/disks. This can be done via an ls -al. In my case the device was called /dev/disks/t10.SanDisk00Cruzer_Switch0000004C531001441121115514.
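
A quick way to spot the right device is to filter the listing on the vendor string (the grep pattern here assumes a SanDisk stick like mine; adjust it for your drive):

ls -al /dev/disks/ | grep -i sandisk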

So far so good?

To find the partitions on this device use the partedUtil getptbl command. Example output from my case:

[root@esx01:~] partedUtil getptbl /dev/disks/t10.SanDisk00Cruzer_Switch0000004C531001441121115514

gpt

7625 255 63 122508544

1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128

5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0

8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

9 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0

The “gpt” indicates this is a GPT partition table. The four numbers after that give the number of cylinders (7625), heads (255), sectors per track (63), and the total number of sectors (122508544). Multiplying cylinders x heads x sectors per track should give a similar figure (122495625).

An entry such as 9 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0 means the following:

  • partition number 9
  • starting at sector 1843200
  • ending at sector 7086079
  • of GUID 9D27538040AD11DBBF97000C2911D1B8, type vmkDiagnostic (you can get a list of all known GUIDs and types via the partedUtil showGuids command)
  • attribute 0

In my case, since the usable number of sectors is 122495625 (taking the product of the CHS figures) and the last partition ends at sector 7086079, I have free space where I can create a new partition. This is what I'd like to expose to the ESXi host.

There seems to be a gap of 33 sectors between partitions (at least between 6 and 7, and 7 and 8 – I didn't check them all :)). So my new partition should start at sector 7086112 (7086079 + 33) and end at sector 122495624 (122495625 - 1; we leave the last sector alone). The VMFS partition GUID is AA31E02A400F11DB9590000C2911D1B8, so my entry would look something like this: 10 7086112 122495624 AA31E02A400F11DB9590000C2911D1B8 0.
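
If you'd rather let the shell do the arithmetic, the same numbers fall out like this (values taken straight from the table above):

echo $((7086079 + 33))           # 7086112   -> start sector for the new partition
echo $((7625 * 255 * 63 - 1))    # 122495624 -> end sector (CHS product minus one)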

But we can’t do that at the moment as the disk is read-only. If I try making any changes to the disk it will throw an error like this:

Error: Readonly file system during write on /dev/disks/t10.SanDisk00Cruzer_Switch0000004C531001441121115514

SetPtableGpt: Unable to commit to disk

From a VMware forum post I learnt that this is because the disk has a coredump partition (the vmkDiagnostic partitions we saw above). We need to disable that first.

5. Disable the coredump partition: esxcli system coredump partition set --enable false
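
To confirm there is no longer an active dump partition before touching the partition table (a quick sanity check, not part of the original steps):

esxcli system coredump partition get    # the Active field should now be empty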

6. Delete the coredump partitions:

partedUtil delete /dev/disks/t10.SanDisk00Cruzer_Switch0000004C531001441121115514 9

partedUtil delete /dev/disks/t10.SanDisk00Cruzer_Switch0000004C531001441121115514 7

7. Output the partition table again:

[root@esx01:~] partedUtil getptbl /dev/disks/t10.SanDisk00Cruzer_Switch0000004C531001441121115514

gpt

7625 255 63 122508544

1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128

5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

With partitions 7 and 9 gone, the free space now starts right after partition 8 (which ends at sector 1843199), so instead of the partition 10 I planned earlier I can reuse number 9, starting at sector 1843232 (1843199 + 33). The entry I want to add is: 9 1843232 122495624 AA31E02A400F11DB9590000C2911D1B8 0.

8. Set the partition table. Note that you must include the existing partitions as well, because the command replaces the entire table.

[root@esx01:~] partedUtil setptbl /dev/disks/t10.SanDisk00Cruzer_Switch0000004C531001441121115514 gpt "1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B 128" "5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 0" "6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 0" "8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 0" "9 1843232 122495624 AA31E02A400F11DB9590000C2911D1B8 0"
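
For readability, the same command can be wrapped with shell line continuations (identical arguments, nothing new):

partedUtil setptbl /dev/disks/t10.SanDisk00Cruzer_Switch0000004C531001441121115514 gpt \
  "1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B 128" \
  "5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 0" \
  "6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 0" \
  "8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 0" \
  "9 1843232 122495624 AA31E02A400F11DB9590000C2911D1B8 0"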

That’s it. Now partition 9 will be created.

All the partitions also have direct entries under /dev/disks. Here are the entries in my case after the above changes:

-rw-------    1 root     root     62724374528 Nov 14 14:29 t10.SanDisk00Cruzer_Switch0000004C531001441121115514

-rw-------    1 root     root         4161536 Nov 14 14:29 t10.SanDisk00Cruzer_Switch0000004C531001441121115514:1

-rw-------    1 root     root       262127616 Nov 14 14:29 t10.SanDisk00Cruzer_Switch0000004C531001441121115514:5

-rw-------    1 root     root       262127616 Nov 14 14:29 t10.SanDisk00Cruzer_Switch0000004C531001441121115514:6

-rw-------    1 root     root       299876352 Nov 14 14:29 t10.SanDisk00Cruzer_Switch0000004C531001441121115514:8

-rw-------    1 root     root     61774025216 Nov 14 14:29 t10.SanDisk00Cruzer_Switch0000004C531001441121115514:9

lrwxrwxrwx    1 root     root            52 Nov 14 14:29 vml.0000000000766d68626133383a303a30 -> t10.SanDisk00Cruzer_Switch0000004C531001441121115514

lrwxrwxrwx    1 root     root            54 Nov 14 14:29 vml.0000000000766d68626133383a303a30:1 -> t10.SanDisk00Cruzer_Switch0000004C531001441121115514:1

lrwxrwxrwx    1 root     root            54 Nov 14 14:29 vml.0000000000766d68626133383a303a30:5 -> t10.SanDisk00Cruzer_Switch0000004C531001441121115514:5

lrwxrwxrwx    1 root     root            54 Nov 14 14:29 vml.0000000000766d68626133383a303a30:6 -> t10.SanDisk00Cruzer_Switch0000004C531001441121115514:6

lrwxrwxrwx    1 root     root            54 Nov 14 14:29 vml.0000000000766d68626133383a303a30:8 -> t10.SanDisk00Cruzer_Switch0000004C531001441121115514:8

lrwxrwxrwx    1 root     root            54 Nov 14 14:29 vml.0000000000766d68626133383a303a30:9 -> t10.SanDisk00Cruzer_Switch0000004C531001441121115514:9

Not sure what the “vml” entries are.

9. The next step is to create the datastore.

[root@esx01:~] vmkfstools -C vmfs6 -S USB-Datastore /dev/disks/t10.SanDisk00Cruzer_Switch0000004C531001441121115514:9

create fs deviceName:'/dev/disks/t10.SanDisk00Cruzer_Switch0000004C531001441121115514:9', fsShortName:'vmfs6', fsName:'USB-Datastore'

deviceFullPath:/dev/disks/t10.SanDisk00Cruzer_Switch0000004C531001441121115514:9 deviceFile:t10.SanDisk00Cruzer_Switch0000004C531001441121115514:9

ATS on device /dev/disks/t10.SanDisk00Cruzer_Switch0000004C531001441121115514:9: not supported

.

Checking if remote hosts are using this device as a valid file system. This may take a few seconds...

Creating vmfs6 file system on "t10.SanDisk00Cruzer_Switch0000004C531001441121115514:9" with blockSize 1048576, unmapGranularity 1048576, unmapPriority default and volume label "USB-Datastore".

Successfully created new volume: 5a0afdeb-6fb1e7ee-e3a5-1cc1de7a06d0

That’s it! Now ESXi will see a datastore called “USB-Datastore” formatted with VMFS6. :)
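
To double-check from the CLI as well (not part of the original steps, just a sanity check), the stock filesystem listing should now include the new volume:

esxcli storage filesystem list | grep USB-Datastore    # should show the VMFS-6 volume and its UUID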

How to Format and Create VMFS5 Volume using the CLI in ESXi 5

VMware always recommends formatting and creating a new VMFS volume using the vSphere Client, as it automatically aligns your VMFS volume. However, if you do not have access to the vSphere Client or you want to format additional VMFS volumes via a kickstart, you can do so using the CLI and partedUtil under /sbin.

~ # /sbin/partedUtil
Not enough arguments
Usage:
Get Partitions : get <diskName>
Set Partitions : set <diskName ["partNum startSector endSector type attr"]*>
Delete Partition : delete <diskName partNum>
Resize Partition : resize <diskName partNum newStartSector newEndSector>
Get Partitions : getptbl <diskName>
Set Partitions : setptbl <diskName label ["partNum startSector endSector type/guid attr"]*>
Fix Partition Table : fix <diskName>
Create New Label (all existing data will be lost): mklabel <diskName label>
Show commonly used partition type guids : showGuids

With ESXi 5, an MBR (Master Boot Record) partition table is no longer used; it has been replaced with a GPT (GUID Partition Table) partition table. There is also only one block size of 1 MB, versus the 2, 4 and 8 MB block sizes that were available in ESX(i) 4.x.

We can view the partitions of a device by using the "getptbl" option and ensure we don’t have an existing VMFS volume:

~ # /sbin/partedUtil "getptbl" "/vmfs/devices/disks/mpx.vmhba1:C0:T2:L0"
gpt
652 255 63 10485760

Next we will need to create a partition by using the "setptbl" option:

/sbin/partedUtil "setptbl" "/vmfs/devices/disks/mpx.vmhba1:C0:T2:L0" "gpt" "1 2048 10474379 AA31E02A400F11DB9590000C2911D1B8 0"

The "setptbl" option accepts 3 arguments:

  • diskName
  • label
  • partitionNumber startSector endSector type/GUID attribute

The diskName in this example is the full path to the device which is /vmfs/devices/disks/mpx.vmhba1:C0:T2:L0

The label will be gpt

The last argument is actually a string comprised of 5 individual parameters:

  • partitionNumber — Pretty straightforward
  • startSector — This will always be 2048 for 1 MB alignment for VMFS5
  • endSector — This will need to be calculated based on the size of your device (see the sketch just after this list)
  • type/GUID — This is the GUID key for a particular partition type; for VMFS it will always be AA31E02A400F11DB9590000C2911D1B8
  • attribute — The partition attribute; it is 0 in all of the VMFS examples here
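
As a quick sketch of that end-sector calculation (the device path is the example one used above; the geometry is the second line of the getptbl output):

DEVICE=/vmfs/devices/disks/mpx.vmhba1:C0:T2:L0                  # example device from above
GEOMETRY=$(partedUtil getptbl ${DEVICE} | sed -n '2p')          # e.g. "652 255 63 10485760"
END_SECTOR=$(echo ${GEOMETRY} | awk '{print $1 * $2 * $3 - 1}') # 652 * 255 * 63 - 1 = 10474379
echo ${END_SECTOR}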

To view all GUID types, you can use the "showGuids" option:

~ # /sbin/partedUtil showGuids
Partition Type       GUID
vmfs                 AA31E02A400F11DB9590000C2911D1B8
vmkDiagnostic        9D27538040AD11DBBF97000C2911D1B8
VMware Reserved      9198EFFC31C011DB8F78000C2911D1B8
Basic Data           EBD0A0A2B9E5443387C068B6B72699C7
Linux Swap           0657FD6DA4AB43C484E50933C84B4F4F
Linux Lvm            E6D6D379F50744C2A23C238F2A3DF928
Linux Raid           A19D880F05FC4D3BA006743F0F84911E
Efi System           C12A7328F81F11D2BA4B00A0C93EC93B
Microsoft Reserved   E3C9E3160B5C4DB8817DF92DF00215AE
Unused Entry         00000000000000000000000000000000

Once you have the 3 arguments specified, we can now create the partition:

~ # /sbin/partedUtil "setptbl" "/vmfs/devices/disks/mpx.vmhba1:C0:T2:L0" "gpt" "1 2048 10474379 AA31E02A400F11DB9590000C2911D1B8 0"
gpt
0 0 0 0
1 2048 10474379 AA31E02A400F11DB9590000C2911D1B8 0

UPDATE (01/15) — Here is a quick shell snippet that you can use to automatically calculate the end sector as well as create the VMFS5 volume:

partedUtil mklabel ${DEVICE} msdos
END_SECTOR=$(eval expr $(partedUtil getptbl ${DEVICE} | tail -1 | awk '{print $1 " \* " $2 " \* " $3}') - 1)
/sbin/partedUtil "setptbl" "${DEVICE}" "gpt" "1 2048 ${END_SECTOR} AA31E02A400F11DB9590000C2911D1B8 0"
/sbin/vmkfstools -C vmfs5 -b 1m -S $(hostname -s)-local-datastore ${DEVICE}:1

Note: You can also use the above to create a VMFS-based datastore running on a USB device; however, that is not officially supported by VMware, and performance with a USB-based device will vary depending on the hardware and the speed of the USB connection.

We can verify by running the "getptbl" option on the device that we formatted:

~ # /sbin/partedUtil "getptbl" "/vmfs/devices/disks/mpx.vmhba1:C0:T2:L0"
gpt
652 255 63 10485760
1 2048 10474379 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

Finally, we will create the VMFS volume using our favorite vmkfstools; the syntax is the same as in previous releases of ESX(i):

~ # /sbin/vmkfstools -C vmfs5 -b 1m -S himalaya-SSD-storage-3 /vmfs/devices/disks/mpx.vmhba1:C0:T2:L0:1
Checking if remote hosts are using this device as a valid file system. This may take a few seconds…
Creating vmfs5 file system on "mpx.vmhba1:C0:T2:L0:1" with blockSize 1048576 and volume label "himalaya-SSD-storage-3".
Successfully created new volume: 4dfdb7b0-8c0dcdb5-e574-0050568f0111

Now you can refresh the vSphere Client or run vim-cmd hostsvc/datastore/refresh to view the new datastore that was created.

Hi,

Platform: one machine with 8 hard drives in RAID 5 => 1 LUN of 2 TB, vSphere 5.0.0 Update 1

We want to deploy an Oracle RAC 10g. For this purpose we are following this document: https://www.vmware.com/files/pdf/solutions/oracle/Oracle_Databases_VMware_RAC_Deployment_Guide.pdf

In the prerequisites, we need to have several datastores.

In the vSphere Client, we can't create several datastores: only one of 2 TB (the default), or, if we delete the default one, we can create a smaller one, but then the remaining space isn't available anymore.

So we decided to create the datastores manually, by following this document: Manually creating a VMFS volume using vmkfstools -C (1009829)

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1009829

The command we want to run is this one:

vmkfstools -C vmfs5 -b 1m -S NewDatastore /vmfs/devices/disks/naa.60901234567890123456789012345678:1

In this document, we have to make a partition for each datastore we want to create.

Layout of the disks

~ # ls -lah /vmfs/devices/disks/

drwxr-xr-x    1 root     root          512 Jun 26 12:13 .

drwxr-xr-x    1 root     root          512 Jun 26 12:13 ..

-rw-------    1 root     root         1.9T Jun 26 12:13 naa.600605b00581fb6018c09f852081693a

-rw-------    1 root     root         4.0M Jun 26 12:13 naa.600605b00581fb6018c09f852081693a:1

-rw-------    1 root     root         4.0G Jun 26 12:13 naa.600605b00581fb6018c09f852081693a:2

-rw-------    1 root     root        50.0G Jun 26 12:13 naa.600605b00581fb6018c09f852081693a:3

-rw-------    1 root     root       250.0M Jun 26 12:13 naa.600605b00581fb6018c09f852081693a:5

-rw-------    1 root     root       250.0M Jun 26 12:13 naa.600605b00581fb6018c09f852081693a:6

-rw-------    1 root     root       110.0M Jun 26 12:13 naa.600605b00581fb6018c09f852081693a:7

-rw-------    1 root     root       286.0M Jun 26 12:13 naa.600605b00581fb6018c09f852081693a:8

lrwxrwxrwx    1 root     root           36 Jun 26 12:13 vml.0200000000600605b00581fb6018c09f852081693a536572766552 -> naa.600605b00581fb6018c09f852081693a

lrwxrwxrwx    1 root     root           38 Jun 26 12:13 vml.0200000000600605b00581fb6018c09f852081693a536572766552:1 -> naa.600605b00581fb6018c09f852081693a:1

lrwxrwxrwx    1 root     root           38 Jun 26 12:13 vml.0200000000600605b00581fb6018c09f852081693a536572766552:2 -> naa.600605b00581fb6018c09f852081693a:2

lrwxrwxrwx    1 root     root           38 Jun 26 12:13 vml.0200000000600605b00581fb6018c09f852081693a536572766552:3 -> naa.600605b00581fb6018c09f852081693a:3

lrwxrwxrwx    1 root     root           38 Jun 26 12:13 vml.0200000000600605b00581fb6018c09f852081693a536572766552:5 -> naa.600605b00581fb6018c09f852081693a:5

lrwxrwxrwx    1 root     root           38 Jun 26 12:13 vml.0200000000600605b00581fb6018c09f852081693a536572766552:6 -> naa.600605b00581fb6018c09f852081693a:6

lrwxrwxrwx    1 root     root           38 Jun 26 12:13 vml.0200000000600605b00581fb6018c09f852081693a536572766552:7 -> naa.600605b00581fb6018c09f852081693a:7

lrwxrwxrwx    1 root     root           38 Jun 26 12:13 vml.0200000000600605b00581fb6018c09f852081693a536572766552:8 -> naa.600605b00581fb6018c09f852081693a:8

Edit disk settings with fdisk => Failure.

~ # fdisk -u /vmfs/devices/disks/naa.600605b00581fb6018c09f852081693a

Found valid GPT with protective MBR; using GPT

fdisk: Sorry, can’t handle GPT partitions, use partedUtil

Edit disk settings with partedUtil.

~ # partedUtil getptbl "/vmfs/devices/disks/naa.600605b00581fb6018c09f852081693a"

gpt

254458 255 63 4087881728

1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128

5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0

8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

2 1843200 10229759 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

3 10229760 115087359 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

~ #

We tried to create a partition with partedUtil.

~ # partedUtil setptbl "/vmfs/devices/disks/naa.600605b00581fb6018c09f852081693a" gpt "9 115087360 157030400 AA31E02A400F11DB9590000C2911D1B8 0"

gpt

0 0 0 0

9 115087360 157030400 AA31E02A400F11DB9590000C2911D1B8 0

Error: Read-only file system during write on /dev/disks/naa.600605b00581fb6018c09f852081693a

SetPtableGpt: Unable to commit to disk

Questions:

Why is the disk in read-only mode?

How can we change that?

Do you have any advice?

Thanks.

Regards.

Ludovic.

PS: Please forgive my English, I'm French.

I've recently been doing some scripting work with the Ultimate Deployment Appliance (UDA), which was developed by Carl Thijsen of the Netherlands. The reason for this work is to make it easy for me to switch between different versions of EVO:RAIL on my SuperMicro systems. I want to be able to flip easily between different builds, and it seemed like the easiest way to do this remotely was with my old faithful, the UDA. This means I can run EVO:RAIL 1.2.1, which is based on vSphere 5.5, and then rebuild the physical systems around our newer builds, which incidentally use vSphere 6.0.

Anyway, I encountered an odd error when scripting the install of VMware ESXi 5.5, one I hadn't seen with VMware ESXi 6.0. The error said: Error: Read-only file system during write on /dev/disks/naa.blah.blah.blah.

Normally, the lines:

clearpart --alldrives --overwritevmfs
install --firstdisk=ST300MM0026,local --overwritevmfs

would be enough to wipe any existing installation and VMFS volume (incidentally, "ST300MM0026" is the boot disk, a Seagate drive). However, that didn't seem to work here, and the installer wasn't happy. I had to modify the 'clearpart' line like so:

clearpart --firstdisk=ST300MM0026 --overwritevmfs
install --firstdisk=ST300MM0026,local --overwritevmfs
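
For context, here is a minimal kickstart sketch showing where those two lines sit; the rootpw and network values are placeholders of mine, not from the original script:

vmaccepteula
rootpw VMware1!                                       # placeholder password
clearpart --firstdisk=ST300MM0026 --overwritevmfs     # wipe only the named Seagate boot disk
install --firstdisk=ST300MM0026,local --overwritevmfs
network --bootproto=dhcp --device=vmnic0              # placeholder: DHCP on the first NIC
reboot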

I think what was happening was that clearpart wasn't seeing the drive properly, and specifying it by model number allowed the VMFS partition to be properly cleared.

Anyway, I doubt this will matter to most people, but I thought I would share in case someone else sees this…

UPDATE: Well, after automating the install of VMware ESXi 5.5, I decided to flip back to VMware ESXi 6.0 and encountered the exact same error. So now both my 5.5 and 6.0 scripts include the change to clearpart.


Posted August 4, 2015 by Michelle Laverick in category vSphere.
