No, this is about normal tokens. While a SOTA LLM emits one token at a time, it already has a high-level plan of what it is going to say many tokens ahead. This is in reply to the GP, who thinks an LLM can somehow produce coherent and thoughtful sentences while never seeing more than one token ahead.
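To make the point concrete, here's a minimal toy sketch (not a real LLM; the "plan" is made explicit as a list, where in a real model it would be implicit in the hidden activations). Decoding emits exactly one token per step, but every step re-reads the entire prefix, so nothing about the one-token-at-a-time interface prevents the model from carrying a plan that spans many future tokens:

```python
# Hypothetical multi-token "plan"; in a real LLM this lives in activations.
PLAN = ["the", "cat", "sat", "on", "the", "mat"]

def toy_model(prefix):
    """Stand-in for an LLM forward pass: given the whole prefix so far,
    return the next token (or None to stop)."""
    return PLAN[len(prefix)] if len(prefix) < len(PLAN) else None

def generate(model, max_steps=16):
    prefix = []
    for _ in range(max_steps):
        tok = model(prefix)   # conditions on ALL tokens emitted so far
        if tok is None:
            break
        prefix.append(tok)    # emits exactly one token per step
    return prefix

print(" ".join(generate(toy_model)))  # the cat sat on the mat
```

The interface is one token per step; the conditioning context, and whatever plan is encoded in the model's state, is not.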