I would like to clarify the right way to deploy MapR on disks larger than 2TB.
Having installed a new 3TB Seagate Barracuda disk (STBD3000100), I found that both "fdisk -l" and the MapR Control System recognize only the first 2.2TB, consistent with the DOS (MBR) partition table limitation:
[root@hostname ~]# fdisk -l
........
Disk /dev/sdb: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table
[root@hostname ~]#
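The 2199023255040-byte figure fdisk reports above is exactly the DOS/MBR ceiling: the partition table stores sector counts in 32-bit fields of 512-byte sectors, so the largest addressable size is (2^32 - 1) * 512 bytes. A quick sanity check in the shell:

```shell
# DOS/MBR stores LBA start and length in 32-bit fields, so the cap is
# (2^32 - 1) sectors of 512 bytes each:
echo $(( (4294967296 - 1) * 512 ))   # prints 2199023255040
```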
Seagate's documentation directs me to their "beyond 2TB" support site, which says that larger disks are automatically recognized by Linux kernels "v2.6.35 or newer"; that excludes CentOS 5.x:
http://www.seagate.com/www/en-us/support/beyond-2tb/
http://knowledge.seagate.com/articles/en_US/FAQ/218575en
One possible solution would be to format the disk as GPT using parted and, if that succeeds in creating a 3TB GPT partition (/dev/sdb1), install MapR-FS there. I have done this kind of formatting on another (non-MapR) box, and here is how fdisk sees that format:
[root@hostname ~]# fdisk -l
........
WARNING: GPT (GUID Partition Table) detected on '/dev/sdd'! The util fdisk doesn't support GPT. Use GNU Parted.
Note: sector size is 4096 (not 512)
WARNING: The size of this disk is 3.0 TB (3000592977920 bytes).
DOS partition table format can not be used on drives for volumes
larger than 2.2 TB (2199023255040 bytes). Use parted(1) and GUID
partition table format (GPT).
Disk /dev/sdd: 3000.5 GB, 3000592977920 bytes
255 heads, 63 sectors/track, 45600 cylinders
Units = cylinders of 16065 * 4096 = 65802240 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1       45601  2930266576   ee  EFI GPT
[root@hostname ~]#
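For completeness, the GPT labeling described above can be sketched with parted roughly as follows. The device name is just an illustration, and since mklabel/mkpart destroy the existing partition table, this sketch only echoes the commands; drop the wrapper to run them for real:

```shell
# Hypothetical sketch of GPT-labeling a >2.2TB data disk with parted.
# DISK is a placeholder -- adjust before use. Commands are echoed rather
# than executed because they wipe the partition table.
DISK=/dev/sdb
run() { echo "+ $*"; }                          # remove wrapper to execute
run parted -s "$DISK" mklabel gpt               # write a GPT label
run parted -s "$DISK" mkpart primary 0% 100%    # one partition spanning the disk
run parted -s "$DISK" print                     # verify the result
```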
My question is whether installing MapR-FS on large GPT partitions is supported/tested and, if not, what the alternative solution is. The reason I am even asking is that the MapR Control System only sees the disk (/dev/sdb) as 2TB...
The MapR Control System uses hdparm to figure out various things about the drive, and hdparm may have the same limitation as fdisk. The formatting code, however, does nothing more than open the provided pathname and use "stat" to read the size of the disk. Give it a try and let us know if it works for you.
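That kind of partition-table-independent size check can be sketched like this (the helper name and device path are placeholders for illustration, not MapR's actual code):

```shell
# Minimal sketch of a size check that does not depend on the partition
# table. dev_size and /dev/sdb are hypothetical, not MapR internals.
dev_size() {
    if [ -b "$1" ]; then
        blockdev --getsize64 "$1"   # block device: byte size via ioctl
    else
        stat -c %s "$1"             # regular file: st_size from stat(2)
    fi
}
[ -e /dev/sdb ] && dev_size /dev/sdb || true   # full size even without a valid MBR
```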