fts_read is the function that walks a directory hierarchy. "Cannot allocate memory" is errno 12, ENOMEM. If you're getting that, it usually means a malloc somewhere inside the call is failing. I suppose there's a small possibility that the paths themselves are using up all the memory available to the process, but since 150 KB is hardly anything, I doubt that. It could be failing to fork a child with too large a VM footprint, but the fork seems to be succeeding, since the ls program is actually getting run. The argument list could be too long, except that it isn't (and that would normally show up as E2BIG, "Argument list too long", rather than ENOMEM): ARG_MAX is 262144 bytes (256 KiB), which is longer than 150 KB.
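
If you want to verify that limit on your own system, getconf will report it; on Mac OS X it should print 262144:

getconf ARG_MAX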

Something is clearly wrong, though, since I get the same error on my machine, and my list of zero-length files in /usr, /bin, and /sbin is only 2924 bytes long. That argument list is nowhere near any limit; I can even copy and paste it straight into ls -l@ at the shell and it works fine. The find command, however, does not.
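
If you want to check the size of your own list, a pipe through wc will do it. The search criteria here are just my guess at the ones under discussion; substitute whatever you're actually using:

# counts the bytes in the newline-separated list of zero-length files
find /usr /bin /sbin -type f -size 0 | wc -c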

Something is returning ENOMEM in the code, which most likely means something is using up the process's available memory. Or it could be a bug in the code. Who knows. If you really want to make the find command run faster by consolidating arguments, the best way to do it is probably just to pipe to xargs. That way, you get the additional benefit that if the argument list does grow past ARG_MAX, xargs splits it across multiple invocations instead of running the command once for each file.

find <path> <search criteria> | xargs ls -l@
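
One caveat: plain xargs splits its input on whitespace, so any paths containing spaces will break it. If that's a concern, the null-delimited variant is safer; both find's -print0 and xargs -0 are available on Mac OS X:

find <path> <search criteria> -print0 | xargs -0 ls -l@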
