Depends on the GC algorithm used. Tracing collectors only visit reachable objects; unreachable ones are never touched.
Reference counting does more or less the opposite: when you drop the last reference to something, it walks the newly unreachable objects to deallocate them.
One of the problems with this is that reference counting touches all that memory right at the end of its lifetime, when you're done with it and it's likely cold in cache.
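A minimal sketch of that cascade, using CPython (which is reference counted). The `Node` class and `__del__` hooks here are just illustrative scaffolding to record deallocation order:

```python
freed = []

class Node:
    def __init__(self, name, child=None):
        self.name = name
        self.child = child

    def __del__(self):  # runs when the refcount hits zero (CPython)
        freed.append(self.name)

root = Node("root", Node("leaf"))

# Dropping the last reference to `root` frees it immediately, which drops
# the last reference to "leaf", freeing it too. The runtime walks each
# dying object in the chain right at the end of its lifetime.
del root
print(freed)  # ['root', 'leaf']
```

Note the order: the deallocator has to touch every object in the dead subgraph, at exactly the moment the program is done with it.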
> And of course, not all kinds of object lend themselves to garbage collection - for instance, file descriptors, since you can't guarantee when or if they'll ever close. So you have to build your own reference counting system on top of your garbage collected system.
This is not a typical solution.
Java threw finalizers into the mix, and everyone overused them at first before realizing that finalizers suck. It's bad enough that a known workaround for "too many open files" in a Java program is to invoke the GC by hand. Languages designed since then typically provide some kind of scoped construct for closing file descriptors; C# and Go both do.
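For a concrete sketch of such a scoped construct: C# has `using` and Go has `defer`, and Python's `with` statement works the same way. The descriptor is released when the block exits, with no GC involved (the file path below is just a throwaway example):

```python
import os
import tempfile

# Throwaway scratch file purely for illustration.
path = os.path.join(tempfile.gettempdir(), "scoped_example.txt")

# The `with` block ties the file descriptor's lifetime to lexical scope:
# close() runs as soon as the block exits, deterministically.
with open(path, "w") as f:
    f.write("hello")

assert f.closed  # already closed here; we never waited on a collector
os.remove(path)
```

The point is that the expensive-to-leak resource (the descriptor) gets deterministic cleanup, while the cheap resource (the object's memory) can still be left to the GC.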
Garbage collection does not need to be used to collect all objects.
> There's trade-offs, yes, but the trade-off is simply that garbage collected languages refuse to provide the compiler and the runtime all the information they need to know in order to do their jobs - and a massive 30 year long rube goldberg machine was built around closing that gap.
When I hear rhetoric like this, all I think is, "Oh, this person really hates GC, and thinks everyone else should hate GC."
Embedded in statements like this are usually some assumptions that should be challenged, such as "memory should be freed as soon as it is no longer needed".