Linux copy GPT partition table with dd

I recently had to copy the partition table of a 3TB disk in a situation where tools such as sfdisk could not be installed.

Since the amount of GPT data you need to copy depends on the number of partitions, you need to do a little investigation first.

In this case, it was a ‘QNAP’ server that had fdisk (no GPT support) and parted.

On a working drive, run

parted -ms /dev/sda print

Note the number of partitions.

Formula = (128 * N) + 1024

Where N is the number of partitions you have. The 1024 bytes account for the protective MBR (LBA 0) and the GPT header (LBA 1), each 512 bytes long, and each partition entry in the table is 128 bytes. In this case I had 4 partitions, so I end up with a value of 1536.
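If you would rather not count by hand, a rough sketch like this works (it assumes parted's machine-readable output: a "BYT;" line, a disk line, then one line per partition):

N=$(parted -ms /dev/sda print | tail -n +3 | grep -c .)
echo $(( 128 * N + 1024 ))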

dd if=/dev/sda of=GPT_TABLE bs=1 count=1536

You now have a backup of a valid partition table you can apply to another drive

dd if=GPT_TABLE of=/dev/sdb bs=1 count=1536
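Before re-adding the drive, it does not hurt to confirm the copied table looks sane. Note that this method only copies the primary GPT at the start of the disk, so parted may complain about the backup table at the end of the disk:

parted -ms /dev/sdb print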

Once this is done, you can manually re-add the drive to the array.

mdadm --manage /dev/md0 --add /dev/sdb3

If you are wondering how we determined which sd[a-z] device the new drive was: we hot-swapped it and watched the kernel logs to see which device name was assigned.
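Something like the following shows which device node the kernel assigned when the disk was inserted (the exact messages vary by kernel and controller):

dmesg | tail -20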

Now why this supposedly automated RAID product required this…

Recover rpool ZFS pools and snapshots on Solaris 10 SPARC

I assume you’ve already taken a recursive ZFS snapshot and sent the snapshot/pool off to your local NFS server.
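If you have not taken that backup yet, it looks roughly like this (the snapshot name and destination path are just examples, with the NFS share mounted somewhere convenient):

zfs snapshot -r rpool@backup
zfs send -R rpool@backup > /mnt/archive/rpool-backup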

In this example, I have a Netra X1 with 2 drives (mirrored)

Now you need to boot your Solaris 10 CD/DVD, either from physical media or over the network


boot net -s

or


boot cdrom -s

Important: Make sure your disks are labeled correctly (SMI/VTOC, not EFI); a SPARC ZFS root pool will not boot from an EFI-labeled disk.


format -e c0t0d0s0
>label
>0 (For SMI)
>modify
>Free Space Hog (Select s0, not s6)

Do the same for your other disk as well (In my case, c0t2d0s0)
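You can double-check that the label took by printing the VTOC; an SMI-labeled disk will show a normal slice table here:

prtvtoc /dev/rdsk/c0t0d0s2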

Create your rpool


zpool create -f -o failmode=continue -R /a -m legacy -o cachefile=/etc/zfs/zpool.cache rpool mirror c0t0d0s0 c0t2d0s0

Mount NFS


mkdir /tmp/a
mount 192.168.1.1:/archives /tmp/a

ZFS Receive


zfs receive -dvuF rpool < /tmp/a/archive/rpool-backup
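At this point you can verify that the datasets and snapshots came across:

zfs list -r rpool
zfs list -t snapshot -r rpool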

Set which pool you want to boot from


zpool set bootfs=rpool/ROOT/s10s_u8wos_08a rpool
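You can confirm the property stuck with

zpool get bootfs rpool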

Install boot blocks (SPARC)


installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t0d0s0
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t2d0s0

Reboot


reboot -- disk

View processes (PID) using disk I/O

Install the sysstat package

apt-get install sysstat

# iostat -m
Linux 2.6.27-14-generic (slowaris)      07/28/2009      _x86_64_

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           4.88    6.65    3.03    1.49    0.00   83.95

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
md0             221.24         0.64         0.78    1765002    2159490

We can see that I/O isn’t terribly high here, but there are 221 transfers per second going on.
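Keep in mind that the first report from iostat shows averages since boot. For current activity, give it an interval so it prints a fresh report every few seconds:

iostat -m 5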

To check what processes are causing I/O

echo 1 > /proc/sys/vm/block_dump

tail -f /var/log/syslog | grep md0

Jul 28 08:20:12 server kernel: [2752582.434647] kvm(17362): READ block 1017744552 on md0
Jul 28 08:20:12 server kernel: [2752582.502401] kvm(17362): READ block 615283608 on md0
Jul 28 08:20:13 server kernel: [2752582.634622] kvm(17362): READ block 1017744576 on md0
Jul 28 08:20:14 server kernel: [2752583.964709] kvm(17362): READ block 1017744608 on md0
Jul 28 08:20:14 server kernel: [2752584.372889] kvm(1868): dirtied inode 17367041 (live-default-32.img) on md0
Jul 28 08:20:14 server kernel: [2752584.372908] kvm(1868): dirtied inode 17367041 (live-default-32.img) on md0

We see that kvm is causing some disk I/O; now we know where to start investigating!
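If you want a rough tally of which processes are generating the most I/O, something like this over the kernel ring buffer should do the trick (it assumes the message format above, where the process name is the second field after the timestamp):

dmesg | grep -E 'READ block|WRITE block|dirtied inode' | awk '{print $2}' | sort | uniq -c | sort -rn | head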

To turn off these messages

echo 0 > /proc/sys/vm/block_dump

RAID on Debian Etch

If you’d like to do some RAID 0/1 in Debian Etch with some SATA drives you can do this in only a few simple steps.

cfdisk all your drives; create your partition(s) and make the type linux-raid-autodetect (Type: FD).

mknod /dev/md0 b 9 0

mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1

You can do level 1 the same way. Keep in mind that if you do multiple md devices, change the last number to the number of the device (mknod /dev/md1 b 9 1, for example).
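Once an array has been created, you can confirm it assembled and watch the initial sync:

cat /proc/mdstat
mdadm --detail /dev/md0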

For RAID 5, do

mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1

Just keep in mind that RAID 5 needs three or more drives to work.

If you’d like to add more drives to a RAID 5, you can do so easily.

mdadm --add /dev/md0 /dev/sdd1
mdadm --grow /dev/md0 --raid-devices=4
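Growing a RAID 5 kicks off a reshape that can take hours on large drives; let it finish before touching anything on top of the array. You can watch its progress with

watch cat /proc/mdstat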

Once the reshape is complete, you need to resize whatever sits on top of md0 so it can use the new space; the commands below assume md0 is an LVM physical volume.

apt-get install lvm2 lvm-common

pvresize /dev/md0

pvdisplay

The physical volume should now span the full size of the array.
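If there is a logical volume with an ext3 filesystem on top of that physical volume, you would then grow those as well; the volume group/LV names and the size below are only examples:

lvextend -L +100G /dev/vg0/data
resize2fs /dev/vg0/data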

For a brand-new array (not one you have just grown), all you have to do now is create a filesystem on it

mkfs.ext3 /dev/md0