Garsh.

So, it looks like your problems were confined to /usr (or so we hope, I guess), since nothing turned up inside either /bin or /sbin. And, out of almost 30,000 files in the /usr folder, you found 138 bad items in /usr/bin and 1803 in /usr/share. Okay, nice.

Comparing ls -ldF@ on your du versus mine, we see...
Code:
-r-xr-xr-x@ 1 root  wheel  0 Jul 14 2009 /usr/bin/du*
	com.apple.ResourceFork	13356 
	com.apple.decmpfs	1 

for yours, and...
Code:
-r-xr-xr-x  1 root  wheel  48432 Feb 11 17:26 /usr/bin/du*

for mine.

Apparently that com.apple.decmpfs extended attribute is part of Snowy's new file-storage compression thingamajig (and we recall "com.apple.ResourceFork" from back in the day). So, if one were to guess, it looks like maybe those items got installed in a compressed state... and then got stuck that way somehow. ::shrug:: I dunno.
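For what it's worth, there's another way to spot such items: on 10.6 a properly compressed file carries a "compressed" entry in its file-flags column, which ls can show with the -O switch. (Just a sketch, using du as the guinea pig again; a damaged item may or may not report the flag.)
Code:
# show file flags alongside the long listing; a healthy HFS+-compressed
# item should include "compressed" in the flags column
ls -lO /usr/bin/du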




Originally Posted By: artie505
(I just noticed that final entry

find: fts_read: Cannot allocate memory

Any idea what it's all about?)

The -exec ... {} + portion of the find command seems to have tripped over those results of yours (which add up to around 150 KB worth of pathnames to be processed). I'm not sure exactly where it normally breaks down (at 100 KB maybe?).
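(In theory the {} + form batches its arguments to stay under the kernel's argument-space ceiling, which one can at least look up... so the real culprit here may be something else entirely.)
Code:
# report the max combined size of arguments + environment
# that the kernel will accept for a single exec call
getconf ARG_MAX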

The safe way to prepare for a really large results list is to first write the output from find into a temp file... and then read (and/or pipe) that temp file into whatever command -exec was calling. [Such techniques lend themselves more to an elaborate shell script, as opposed to just a simple "one-liner" posted in a forum for hasty troubleshooting attempts.]
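Something along these lines, say (just a sketch: the temp-file path is made up, and -size 0 stands in for whatever test the earlier one-liner actually used):
Code:
# write NUL-delimited results to a temp file first...
find /usr -type f -size 0 -print0 > /tmp/found.list
# ...then let xargs feed them to ls in batches that fit the arg limit
xargs -0 ls -ldF@ < /tmp/found.list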

[Another option, perhaps, is to use the old -exec ... {} \; method. That usually runs slower, and it doesn't produce nearly as "neat" an output (especially with ls, where the columns no longer line up properly), because it invokes a separate command for *each* found item.]
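Side by side, the two flavors look like this (again with -size 0 as a stand-in criterion):
Code:
# one ls per match: slow, and each listing is formatted on its own
find /usr/bin -type f -size 0 -exec ls -ldF@ {} \;
# many matches per ls: faster, and the columns line up across the batch
find /usr/bin -type f -size 0 -exec ls -ldF@ {} +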

Last edited by Hal Itosis; 05/31/10 05:41 AM.