My unspoken thought-objects are wordless concepts, sounds, and images, with words only loosely attached to them. It takes additional effort to serialize thought-objects into sequences of words, and this serialization is lossy; it would not be if I were thinking essentially in language.
I am comfortable asserting that an LLM like GPT-4 can only think in language: for an LLM, there is no distinction between what it can conceive of and what it can express.