differs by a pretty substantial 22GB
Cluster overhead, usually. On average, every file wastes half of one cluster at the end. Divide the cluster size by 2, multiply by the number of files (usually 500,000+), and that's your statistical average wasted space due to cluster overhead. Files also require directory entries; last I looked, each file took an average of around 39 bytes, and bigger directories consume overhead of their own. It adds up. Most "used space" counters only count logical bytes of file content, not (cluster size)*(clusters used). The only time you typically get an accurate total is when querying a device-level figure, like a volume's total used space or total free space.
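The back-of-envelope math looks like this (a sketch; the 4 KB cluster size and 500,000-file count are just example figures, and the 39-byte directory entry is the rough average mentioned above):

```python
# Estimate invisible overhead from cluster slack plus directory entries.
# Assumption: on average each file wastes half of its final cluster.
def estimated_slack(cluster_size_bytes, file_count, dir_entry_bytes=39):
    slack = (cluster_size_bytes // 2) * file_count   # half a cluster per file
    dir_overhead = dir_entry_bytes * file_count      # directory entry space
    return slack + dir_overhead

# 500,000 files on a filesystem with 4 KB clusters:
waste = estimated_slack(4096, 500_000)
print(f"{waste / 2**30:.2f} GiB")  # roughly 0.97 GiB of overhead
```

Half a million small files quietly eats about a gigabyte before a byte-counting tool sees anything, and the gap grows linearly with cluster size.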
I had a laptop here last month that had almost no disk space left, and I was having trouble finding where it had gone. It was short about 18 GB, which was a lot more than I'd have expected. Turned out to be a crapton of very small temp files in /var/tmp/ that some loopy app had dumped there; I had temp folders with 56,000 items in them, all those little files, each with their own cluster overhead. Summing apparent file sizes would not find the used space. I ended up writing a script to find DIRECTORIES that were large in physical size (clusters allocated, not content byte count), where I would then look to see if there was a lilliputian problem again. I'd find a directory that du, going by apparent sizes, claimed held 28 MB of bytes, when it was actually consuming several GB worth of clusters.
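A minimal version of that directory-hunting script could look like this (a sketch, not the original script; it compares each file's logical size, st_size, against the clusters actually allocated, st_blocks * 512 per POSIX, and flags directories with a big gap):

```python
import os

def directory_slack(root):
    """Return (apparent_bytes, allocated_bytes) for files directly in root."""
    apparent = allocated = 0
    for entry in os.scandir(root):
        if entry.is_file(follow_symlinks=False):
            st = entry.stat(follow_symlinks=False)
            apparent += st.st_size            # logical bytes of content
            allocated += st.st_blocks * 512   # clusters actually reserved
    return apparent, allocated

def find_lilliputian_dirs(top, ratio=4, min_alloc=1 << 20):
    """Print directories whose allocation dwarfs their content."""
    for dirpath, _dirs, _files in os.walk(top):
        apparent, allocated = directory_slack(dirpath)
        if allocated >= min_alloc and allocated > ratio * max(apparent, 1):
            print(f"{dirpath}: {apparent} bytes in {allocated} bytes of clusters")
```

Running find_lilliputian_dirs("/var/tmp") on that laptop would have surfaced the 56,000-item folders directly: a few MB of bytes sitting in GBs of clusters.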
Also somewhat related, larger hard drives are now starting to show up that use block sizes bigger than 512 bytes (some flash drives in particular). The block size establishes both the smallest possible cluster size and the cluster size increment. A Seagate 4 TB I bought recently used 4 KB disk blocks instead of 512-byte ones, and that causes a few programs to have problems... like a few of my disk scripts!
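One way to keep a script from tripping over that is to ask the filesystem for its block size instead of hard-coding 512. A sketch using Python's os.statvfs (f_frsize is the fundamental block size; on a 4K-native drive it will typically come back as 4096):

```python
import os

def volume_geometry(path="/"):
    """Query block size and capacity for the filesystem holding path."""
    vfs = os.statvfs(path)
    return {
        "block_size": vfs.f_frsize,                  # fundamental block size
        "total_bytes": vfs.f_frsize * vfs.f_blocks,  # volume capacity
        "free_bytes": vfs.f_frsize * vfs.f_bfree,    # free space
    }

geom = volume_geometry("/")
print(geom["block_size"])
```

Any cluster math (like the slack estimate earlier) should use the queried value rather than assuming 512-byte blocks.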