Using large vDisks in OnApp – Part 3: how to deal with failed resize transactions


Rostyslav Yatsyshyn 
L3 Support Engineer

This is the third and final article in a series about using large vDisks on OnApp KVM compute resources. In the first two articles, I explained how to add a new vDisk and resize it beyond 2TB, and how to resize an existing vDisk beyond 2TB.

This time, we’ll deal with an issue you may encounter when resizing an existing vDisk: the resize operation can fail, usually because a partition already exists on the vDisk when you increase its size beyond 2TB.

As a result, you may see this sort of failure log in the OnApp GUI.

If this happens to you, don’t panic! The partition has been deleted, but all of the data is still there.

Your output logs may differ, but the key point is that the last command executed before the ‘Fatal’ record was the one recreating the resized partition after the original had been removed. In other words, the partition is gone, but the logical volume has already been enlarged. From inside the virtual server, it looks like this:

root@onappgpttest:~# lsblk
fd0 2:0 1 4K 0 disk
vda 253:0 0 5G 0 disk
└─vda1 253:1 0 5G 0 part /
vdb 253:16 0 1G 0 disk [SWAP]
vdc 253:32 0 2.1T 0 disk

How to deal with failed resize transactions

* Note. The following procedure applies to KVM compute resources and Linux-based VSs with vDisks migrated to GPT. For other cases, please create a support ticket.

To deal with this issue, follow these steps:


1. Stop the affected virtual server from the OnApp Control Panel.


2. To resize the logical volume back to its original size, run the following from the Control Panel:

curl --request PUT «» --data
curl --request PUT «» --data
curl --request PUT «» --data
curl --request PUT «» --data

The first API request is required simply to ensure that the vDisk is offline. The second request brings the vDisk online on the same compute resource. The third resizes the vDisk to its previous size. Finally, the last request takes the vDisk offline again.

Notes: – the current HV’s IP address and API port number. 8080 is used by default.

/lvm/Datastore/onapp-va2rypo0fhhihz/VDisk/sjeiqnfehkfebd – the Datastore and vDisk identifiers

{"state":4,"new_size":2100} – resizes the logical volume to 2100 MB
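Putting these pieces together, the resize request can be sketched as below. Only the Datastore/vDisk path and the payload come from the notes above; the HV address (10.0.0.1) and the http:// scheme are assumptions for illustration, so substitute the values from your own failed transaction. Printing the command with echo lets you review it before actually sending it:

```shell
# Hypothetical HV address -- replace with your compute resource's IP and API port.
HV="10.0.0.1:8080"
# Datastore/vDisk path and payload as shown in the notes above:
DISK_PATH="/lvm/Datastore/onapp-va2rypo0fhhihz/VDisk/sjeiqnfehkfebd"
PAYLOAD='{"state":4,"new_size":2100}'   # roll the logical volume back to 2100 MB

# Print the request instead of executing it, so it can be reviewed first:
echo curl --request PUT "http://${HV}${DISK_PATH}" --data "${PAYLOAD}"
```

Once the printed command looks right, remove the leading `echo` to send the real request.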


All the information required for running the correct API requests can be found in the failed transaction log.


3. Start the virtual server from the GUI and connect to it via SSH.


4. Create the primary partition from the original first sector to the end of the vDisk. The command depends on which sector was used as the first sector of the removed partition: 2048 or 1024.

For a first sector of 1024:

root@onappgpttest:~# sgdisk --set-alignment=1024 -n 1:1024:$ENDSECTOR /dev/vdc

For a first sector of 2048:

root@onappgpttest:~# sgdisk -N 1 /dev/vdc
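The `$ENDSECTOR` variable in the first command must hold the last usable sector of the vDisk. On the live system, `sgdisk -E /dev/vdc` reports the end of the largest free block, which on an empty disk is the last usable sector. As a cross-check, it can also be derived from the disk size, assuming 512-byte sectors and the standard 33-sector GPT backup area at the end of the disk (this helper function is an illustration, not part of OnApp's tooling):

```shell
# Compute the last usable GPT sector from the disk size in bytes.
# Assumes 512-byte sectors and the standard GPT layout: the backup header (1
# sector) plus the backup partition table (32 sectors) occupy the disk's tail,
# so the last usable sector is total_sectors - 34 (indices start at 0).
last_usable_sector() {
  local disk_bytes=$1 sector_size=${2:-512}
  echo $(( disk_bytes / sector_size - 34 ))
}

# On the live system (requires root):
#   ENDSECTOR=$(sgdisk -E /dev/vdc)
# Offline arithmetic for a hypothetical 1 GiB disk:
last_usable_sector 1073741824   # -> 2097118
```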


*Note. If you are not sure which was the first sector of the removed partition, you can still find it in the latest failed resize transaction.

Example 1: the beginning of the partition is 1049kB, which corresponds to sector 2048 (see part 1 of this article series).


Example 2: the beginning of the partition is 524kB, which corresponds to sector 1024 (see part 2 of this article series).
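The two examples above can be turned into a small lookup helper. This is just a convenience sketch, assuming the transaction log reports the partition start as a parted-style offset rounded to kB and that the disk uses 512-byte sectors:

```shell
# Map the "beginning of partition" offset from the failed transaction log
# to the first sector of the removed partition (512-byte sectors assumed).
first_sector_for_offset() {
  case "$1" in
    1049kB) echo 2048 ;;   # 2048 * 512 = 1048576 bytes, rounds to 1049kB
    524kB)  echo 1024 ;;   # 1024 * 512 = 524288 bytes, rounds to 524kB
    *)      echo "unrecognized offset: $1" >&2; return 1 ;;
  esac
}
```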


5. Finally, run e2fsck to check the filesystem on the recreated partition.

root@onappgpttest:~# e2fsck -f /dev/vdc1
e2fsck 1.42.13 (17-May-2015)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/vdc1: 13/137625600 files (0.0% non-contiguous), 336688823/550502139 blocks
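If you would like to see what a clean e2fsck run looks like before touching the real partition, you can rehearse the check on a scratch image file. No root access or real vDisk is needed; the /tmp path and 16 MB size here are arbitrary choices for this sketch:

```shell
# Rehearsal on a throwaway image file -- safe to run anywhere with e2fsprogs.
dd if=/dev/zero of=/tmp/gpt-test.img bs=1M count=16 status=none
mke2fs -q -F -t ext4 /tmp/gpt-test.img        # create a scratch ext4 filesystem
e2fsck -f -n /tmp/gpt-test.img && echo "filesystem clean"
```

The `-n` flag opens the filesystem read-only; on the real partition, run `e2fsck -f /dev/vdc1` as in the example above so that any inconsistencies can actually be repaired.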

The vDisk resize has been rolled back to its previous state, and the vDisk is now ready for further use.


That’s all for now. If you come across any issues with vDisk resizing, raise a ticket with support. I hope this series of tech blogs has been helpful!