Let's be clear about what's happening.

dd (a name usually traced to the "DD", or Data Definition, statement in IBM's JCL) is just a copy command. It's similar to cat, except that instead of copying a byte at a time (relying on the underlying I/O system to use a more reasonable block size when appropriate), it copies blocks made up of records. The command was originally envisioned for copying from one magnetic tape to another, possibly changing the blocking factor along the way.
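
In its simplest use it is just a block-at-a-time copy. A minimal sketch, with placeholder file names:

    # Copy a file in 64 KiB blocks; somefile and copyfile are placeholders.
    dd if=somefile of=copyfile bs=64k

With no re-blocking or conversion options, that is essentially "cat somefile > copyfile" with the block size spelled out.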

Disks have sectors, typically 512 bytes, but you can read across sector boundaries with impunity. That lets you, for example, write a disk file a sector at a time and then read it back 17 sectors at a time. Magnetic tape doesn't work that way. You can write blocks of any size, even as small as a single byte, but you must read them back exactly as you wrote them. If you were writing punched-card images to tape, they would be 80-byte records, but you might write them in 20-record blocks. If the next program to read the tape cannot handle blocks bigger than 1200 bytes (15 records at 80 bytes each), you'd need to re-block the tape. Or if you want to copy those 80-byte records to disk, expanding each record to 96 bytes by appending 16 nulls because the next program expects 96-byte records, that would be another kind of re-blocking. Either way, dd to the rescue. Copying data from one device (or file) to another, possibly changing record size and/or blocking factor along the way, is what dd does.
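
As a sketch of those two jobs (device and file names are placeholders, and the padding trick assumes a raw tape device, where each read returns exactly one physical block):

    # Re-block: read 1600-byte blocks (20 records), write 1200-byte blocks (15 records).
    dd if=/dev/rmt0 of=/dev/rmt1 ibs=1600 obs=1200

    # Expand 80-byte records to 96 bytes: each tape read returns one 80-byte block,
    # and conv=sync pads it with nulls up to ibs before it goes out.
    dd if=/dev/rmt0 of=records.dat ibs=96 obs=96 conv=sync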

What dd does not do is look for or correct errors. It assumes it will be able to read and write the devices. The underlying I/O system will let it know if that assumption should turn out to have been overly optimistic.
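
It does, however, exit non-zero when a read or write fails, so a script can at least notice. A minimal sketch, with a placeholder file name:

    # dd passes along whatever error the I/O system reported and exits non-zero.
    if ! dd if=somefile of=/dev/null bs=64k; then
        echo "dd hit an I/O error" >&2
    fi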

So, what happens if, as in this case, you're using dd to copy from a disk device to /dev/null, without re-blocking? /dev/null is always writeable, and will never report a write error. Reading from disk, on the other hand, is more nuanced.
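
For concreteness, the scan in question is a single command. A sketch, assuming GNU dd and a placeholder device name:

    # Read the entire device once and discard the data; a largish block size keeps
    # the request count down. status=progress is a GNU extension; drop it elsewhere.
    dd if=/dev/sdX of=/dev/null bs=1M status=progress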

When dd reads from disk, the controller on the disk attempts to read however many sectors are requested (breaking the request down at track boundaries, and often rounding up each part to a full track, in anticipation that if you want anything in a track you'll probably want something else in the track soon). Then it applies error correction.

Error correction? Why, certainly. Although we think of data on disk as being digital, the signal coming from the read head is actually analog, and has to be digitized. The digital result is exact, but the analog input is anything but. Every time you read the same sector, you will see a different analog signal. Ideally, these analog signals all digitize the same, but not always. Fortunately, an obscene number of parity bits are added to the data as it's written, and in most cases errors can be corrected simply by examining the parity bits.

Disk errors are extremely common. Typical figures (from a few decades ago, when I last looked into the matter) are that you'll get on the order of one read error per million reads, even if the disk is perfectly healthy. Error correction, relying on the parity bits, fixes most of these errors, bringing the error rate down to something like one in a trillion, even on a healthy disk. (I don't know what modern error rates are, but the push to greater and greater densities means drives are always designed to push their limits. The spurious error rate is not likely to be any better, and may be much worse, with modern technology.)

If the drive's controller cannot correct the error even with the help of the parity bits, it'll re-read the sector. The new analog signal may digitize to something it can correct.

What happens next depends on how many retries are needed before that happens. The exact cutoff values are specific to the controller's firmware, and will vary from manufacturer to manufacturer and possibly from one model to another. More retries call for more drastic action, and the possible actions, in increasing order of severity, are:
  • Do nothing. You've got the correct data with no or only a few retries, so call it a glitch and ignore it.
  • Rewrite the sector. It's not just the analog read signal that can be imprecise. The write signal is also analog, and re-writing the data may magnetize the disk ever so slightly differently. To find out, try re-reading the data.
  • Look for a spare sector, write the now-correct data there, and remap this sector to that one.
  • Give up. Report a read error to whatever program was trying to read the data.
At some point along the way, S.M.A.R.T. may be notified.
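
If you want to see what the drive has recorded, smartctl (from smartmontools) is the usual way to ask; the device name below is a placeholder:

    smartctl -H /dev/sdX    # overall health verdict
    smartctl -A /dev/sdX    # attribute table; watch Reallocated_Sector_Ct,
                            # Current_Pending_Sector and Offline_Uncorrectable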

Can an error go away? Certainly. If the drive gives up after 256 retries (for example), and you re-try the read again, the drive might succeed with only 103 additional retries. That lets it re-map the sector to a spare, and problem solved. The disk is probably failing, though. Even if the error goes away, the disk itself may also be going away soon.

What you're hoping will happen when you scan the disk using dd is that marginally bad sectors will be just bad enough to make the drive take action, either re-writing the sector or mapping it to a spare. dd won't see or report any errors, but the sector is now healthier. (The disk, on the other hand, may still be failing.) dd is not itself doing any of this repair. It's just giving the drive controller an opportunity to notice and hopefully address any incipient problem areas.
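
One caveat: by default dd stops at the first hard read error, which cuts the scan short. A variant that keeps going, again with a placeholder device name:

    # conv=noerror continues past unreadable blocks instead of stopping; sync pads
    # each failed or short block so later data keeps its offset (irrelevant with
    # /dev/null as the output, but harmless).
    dd if=/dev/sdX of=/dev/null bs=64k conv=noerror,sync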

There are programs that do a more thorough scan. It's possible to talk to the firmware on the drive, telling it "for the time being, do not attempt any error recovery at all. Read the sector once, and show me all the data, including the parity bits." That lets the program apply its own policy for dealing with transient (or permanent) errors.

Absent such a program, though, dd is a reasonably good poor man's substitute. It won't do as thorough a job as a real disk scan utility, but it might still be helpful.
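
Two commonly available tools in that more thorough direction, neither of which is quite the raw-read program described above, are the drive's own self-test (driven by smartctl) and a host-side surface scan with badblocks; the device name is a placeholder:

    smartctl -t long /dev/sdX        # start the drive's extended self-test
    smartctl -l selftest /dev/sdX    # read back the results when it finishes

    badblocks -sv /dev/sdX           # host-side read-only scan, with progress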