That’s just not comparable. To use your figure: if a process uses 400KB of RAM, using the remaining 240KB for something else doesn’t degrade the performance of the first process (assuming nothing tries to use more than the available RAM). Each unit of RAM is independent, no?
Context isn’t like that: every additional token of context you use causes a drop in LLM performance.
More RAM == more processes can be supported with no degradation
More context != more stuff can be done with no degradation