Yes. C programs have been doing this for over 40 years now. A leak-free C program has a matching free for every malloc, which means its authors know exactly when everything gets allocated and freed.
For a simple example, take a text editor (sure, in practice you would likely allocate a much bigger buffer) that allocates an object for each line of text and adds the buffer’s pointer to a list of allocations to free - the freeing happens when the user closes the open text file’s window. While you do know that every allocation will be freed, and you know their relative order, you don’t know anything more specific - will the user open multiple such files first, close them in some random order, etc.
> Do you honestly claim that you know when deallocations happen in any codebase full of conditionals depending on outside effects (user input, network, etc)?
And the answer to that is yes. You even admitted that here:
> While you do know that every allocation will be freed and know their relative order
This is a lot more specific than "the GC will free this memory at some indeterminate point that it deems acceptable".
Additionally, in your specific example:
> this freeing happens when the user closes the open text file window
So you can plan for that. You can pop up a saving screen if you run tests and realize that the deallocations take a bit of time. With GC, it's luck of the draw. I'm speaking from experience.
I wrote a tool that used Roslyn to transpile a custom format at our company. It was very important that it ran fast, since the algorithm was going to run in a time-sensitive situation. And it had to free its memory as soon as it was done using it, since the tool was hot-swapping DLLs and I needed to make sure I wasn't getting name collisions from old DLLs that were still waiting on the GC to fully unload. I tried so many different ways to tell the C# GC to collect the object tree that I knew was no longer necessary, but it was seemingly impossible.
Microsoft even has a page dedicated to debugging why an assembly won't unload and it reads:
> The difficult case is when the root is a static variable or a GC handle.[0]
Now you may say that this whole problem only arises because of my weird specific use case, which is true, but I couldn't change the requirements; they were hard requirements from my company. This could all easily be solved if a GC allowed you to define the concept of ownership. I had references hanging around that didn't matter, because the objects holding the references didn't own that memory.
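For contrast, C codebases express that ownership idea by convention: exactly one owner frees, everything else merely borrows the pointer and can never keep the allocation alive. A minimal sketch (the `document`/`view` names are invented for illustration):

```c
#include <stdlib.h>

/* Ownership by convention: the document owns its buffer and is the
 * only code that frees it. A view borrows the pointer; the existence
 * of a view does not delay the owner's deallocation. */
typedef struct { char *buf; } document;    /* owner */
typedef struct { const char *buf; } view;  /* non-owning reference */

static document doc_create(size_t n) {
    document d = { malloc(n) };
    return d;
}

static void doc_destroy(document *d) {
    free(d->buf);   /* deallocation happens exactly here, views or not */
    d->buf = NULL;
}
```

The convention is enforced by discipline rather than the compiler, but it is precisely the "this reference doesn't own that memory" distinction a tracing GC has no way to express.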
All that to say: yes, you can know exactly when your memory will be deallocated in a language like C. In a language like C#, you're left to the whims of the GC, which can be a deal breaker in lots of cases.
What really gets me, too, is that languages like C# end up creating DI frameworks with the "novel" concept of lifetime requirements to make sure object lifetimes are properly scoped. What the heck? It's got a GC. Why go through all that trouble if the GC just cleans it up for you? If I have to think about object lifetimes, I may as well switch to a language that makes that explicit rather than one that obfuscates it as much as possible.
[0]: https://docs.microsoft.com/en-us/dotnet/standard/assembly/un...
But I don’t think that giving up the comfort/performance of a good GC is a good tradeoff in all the other cases. The same way Rust et alia can opt into some form of GC with (A)RC, GC languages can have escape hatches as well.