I recently had to copy the partition table of a 3TB disk in a situation where tools such as sfdisk could not be installed.
Since the length of a GPT is dependent on the number of partitions, you need to do some investigation first.
In this case, it was a ‘QNAP’ server that had fdisk (no GPT support) and parted.
On a working drive, run
parted -ms /dev/sda print
Note the number of partitions.
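For reference, the machine-readable output looks roughly like this (the figures below are illustrative, not from the actual QNAP); the partition count is simply the number of lines after the two header lines:

```shell
# Illustrative 'parted -ms /dev/sda print' output for a 4-partition GPT disk
out='BYT;
/dev/sda:3001GB:scsi:512:512:gpt:Example 3TB Disk;
1:20.5kB:543MB:543MB:ext3::;
2:543MB:1086MB:543MB:linux-swap(v1)::;
3:1086MB:2995GB:2994GB:ext4::raid;
4:2995GB:3001GB:5369MB:ext2::;'

# Everything after the first two lines is one partition per line
N=$(printf '%s\n' "$out" | tail -n +3 | wc -l)
echo "$N"
```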
Formula: bytes to copy = (128 × N) + 1024
Where N is the number of partitions. The 1024 covers the 512-byte protective MBR at LBA 0 plus the 512-byte GPT header at LBA 1, and each partition entry is 128 bytes. In this case I had 4 partitions, so I end up with a value of 1536.
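The arithmetic can be checked directly in the shell:

```shell
# Bytes to copy = protective MBR (512) + GPT header (512) + N * 128-byte entries
N=4
echo $(( 512 + 512 + 128 * N ))   # 1536
```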
dd if=/dev/sda of=GPT_TABLE bs=1 count=1536
You now have a backup of a valid partition table that you can apply to another drive:
dd if=GPT_TABLE of=/dev/sdb bs=1 count=1536
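If you want to sanity-check the round trip before touching a real disk, the same dd invocations can be rehearsed on scratch image files (the filenames here are made up for the demo):

```shell
# Fake 'source disk' with 1536 random bytes, and a blank 'target disk'
dd if=/dev/urandom of=src.img bs=1536 count=1 2>/dev/null
dd if=/dev/zero    of=dst.img bs=1536 count=1 2>/dev/null

# Same backup/restore commands as above, just pointed at the images
dd if=src.img   of=GPT_TABLE bs=1 count=1536 2>/dev/null
dd if=GPT_TABLE of=dst.img   bs=1 count=1536 conv=notrunc 2>/dev/null

cmp src.img dst.img && echo "tables match"
```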
Once this is done, you can manually re-add the drive to the array.
mdadm --manage /dev/md0 --add /dev/sdb3
If you are wondering how we determined which sd[a-z] device was which, we hot-swapped the drive to generate logs identifying it.
Now why this supposedly automated RAID product required this…
On RHEL 6.4, after the dd, parted shows sdb as normal, but lsblk won’t show the partitions, so mdadm fails. Any advice?
Found the fix for it:
Need to run partprobe after dd like:
partprobe /dev/sdb
to update the OS with the device files for the new GPT partitions.
dd won’t do it for you.
Why should we read 1 byte at a time? Maybe ‘bs=1536 count=1’ works better?
Hey Alex,
It doesn’t really matter which method you use when dealing with such small sizes, but you are correct that that way would work as well. 🙂
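To Alex’s point, both invocations read the same 1536 bytes; the only difference is that bs=1536 issues a single read/write instead of 1536 of them. This is easy to verify on any scratch file (names made up for the demo):

```shell
# A file to stand in for the disk
dd if=/dev/urandom of=disk.img bs=4096 count=1 2>/dev/null
dd if=disk.img of=t1 bs=1 count=1536 2>/dev/null     # byte-at-a-time
dd if=disk.img of=t2 bs=1536 count=1 2>/dev/null     # one full block
cmp t1 t2 && echo "identical"
```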
You have forgotten about the backup copy of the GPT at the end of the disk. I do not have very extensive knowledge in this matter, but I will share what I know. The copied partition table also includes the address of this backup GPT (now nonexistent on the destination disk); it should also sit at the end of the disk, but the new disk can have a different size (substantially or not) in LBA sectors. The copy also has the same GUIDs as the source. I do not know what consequences this can have for tools, OSes, or a potential recovery process, but I think (I’m not sure) your destination disk does not adhere to the GPT spec. Also, the back-of-envelope calculation of the GPT size may be incorrect in the future, since the size of each entry is specified in the GPT header.
I think usage of sgdisk would be better idea.
That is a great point!
This situation was weird in that there were no GPT tools available and no packages readily available to install them with, so I had to make do with dd.
If I get a chance, I’ll see if it’s as simple as writing the same table to the very end of the drive.
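For completeness: when sgdisk (from the gdisk package) can be installed, it maintains both the primary and backup GPT, recomputes the CRCs, and can give the copy fresh GUIDs. It also works on plain image files, so the workflow can be sketched without real disks (filenames here are made up):

```shell
# Build a small GPT-partitioned image to act as the 'source disk'
truncate -s 10M src.img
sgdisk --clear --new=1:0:+1M --new=2:0:0 src.img

# Replicate its table onto a same-sized 'target disk', then make the GUIDs unique
truncate -s 10M dst.img
sgdisk --replicate=dst.img src.img
sgdisk --randomize-guids dst.img
sgdisk --verify dst.img
```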
Thanks, very useful! I miss the old days of bs=512 for a normal MBR.
Cheers