I had to mod WatchDrives last night after installing a 4TB external drive on the backup server. dd was throwing an unexpected error:

> dd if=/dev/rdisk7 of=/dev/null iseek=0 count=1

dd: /dev/rdisk7: Invalid argument
0+0 records in
0+0 records out
0 bytes transferred in 0.000056 secs (0 bytes/sec)

And yes, /dev/rdisk7 is a valid character device. I spent some time verifying that before I eventually tracked down the cause: manufacturers have started bumping up the block size on large devices. dd defaults to 512 bytes per block, and until now every drive I'd seen used exactly that. Normally WD sets a larger block size for the actual scans, usually 1024*1024 (1 "computer" megabyte), and then sets count= to the amount of data to test-read, in megabytes. So that command was working fine. But the command above was only used to guarantee the disk was spun up before running the performance test, and it fell back to dd's default block size of 512. That's what triggered the unusual error message.
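For illustration, the scan read described above can be sketched as a small shell wrapper. The function name and exact invocation here are mine, not WatchDrives' actual code:

```shell
# Hypothetical sketch of the WD performance scan described above:
# read N "computer" megabytes through dd with a 1 MiB block size,
# so count= is simply the number of megabytes to test-read.
scan_mb() {
  dev="$1"   # raw device, e.g. /dev/rdisk7
  mb="$2"    # megabytes to read
  dd if="$dev" of=/dev/null bs=1048576 count="$mb"
}
```

Because bs is a full megabyte, it's always a multiple of the drive's sector size, which is why the scan itself never hit the "Invalid argument" error.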

> dd if=/dev/rdisk7 of=/dev/null iseek=0 count=1 bs=4096

1+0 records in
1+0 records out
4096 bytes transferred in 0.000587 secs (6978013 bytes/sec)

This drive (a Seagate 4TB external) has a block size of 4096, and dd will throw an error if it tries to use a block size smaller than that. I thought there was a way to query the block size from the command line, but I haven't found it yet. WD has another routine that was failing for the same reason. It's used to "probe" a device to see how big it is: it starts out in the petabytes and hops around in a split-half search to find the exact last readable block. It too was using the default block size, and for that sort of function, knowing the exact block size really is necessary. So I had to tack a bit onto the start that tries different sizes until it finds one that works, and then goes from there. I realize that diskutil can provide that information, but it doesn't on OS X 10.4, which one of the systems WD runs on still uses, so this method is necessary.
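Here's a rough sketch of the two pieces described above, written as shell functions. The names and the exact search bounds are mine, not WatchDrives': probe_bs tries successively larger block sizes until dd accepts one, and find_last_block then does the split-half search, treating a zero-byte read as "past the end":

```shell
# Hypothetical helper: find the smallest block size dd will accept on a
# device. On a 4K-sector drive, bs=512..2048 fail with "Invalid argument"
# and 4096 is the first size that reads cleanly.
probe_bs() {
  dev="$1"
  for bs in 512 1024 2048 4096 8192; do
    if dd if="$dev" of=/dev/null count=1 bs="$bs" 2>/dev/null; then
      echo "$bs"
      return 0
    fi
  done
  return 1
}

# Hypothetical sketch of the split-half size probe: start with an absurdly
# high upper bound and binary-search for the last readable block index.
# (skip= is the portable spelling of the iseek= used above; a read that
# returns zero bytes counts as "past the end".)
find_last_block() {
  dev="$1"; bs="$2"
  lo=0                  # block 0 is assumed readable
  hi=$((1 << 40))       # far beyond any real device
  while [ $((hi - lo)) -gt 1 ]; do
    mid=$(( (lo + hi) / 2 ))
    if [ "$(dd if="$dev" bs="$bs" skip="$mid" count=1 2>/dev/null | wc -c)" -gt 0 ]; then
      lo=$mid
    else
      hi=$mid
    fi
  done
  echo "$lo"            # index of the last readable block
}
```

Each probe is a single one-block read, so even starting the search in the petabytes only costs a few dozen dd invocations.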

This post is more to "make the information available" via Google etc. than anything else. Maybe it'll help someone else someday when they hit this deceptive error message. I suppose odd problems cropping up from time to time as drive sizes increase is unavoidable. (In the case of that Seagate, perhaps its firmware had issues managing indexes wider than 30 bits.)
I work for the Department of Redundancy Department