It is probably possible to do this with fine-tuning.
Once the context window is full, its contents could be converted into additional training examples and fed to a fine-tuning process, which then retrains the model. (The OpenAI API for fine-tuning is here: https://platform.openai.com/docs/guides/fine-tuning)
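A minimal sketch of the first step, turning a filled context window into training examples. This assumes the context is a simple alternating list of user/assistant turns and targets OpenAI's chat-style JSONL training format (one `{"messages": [...]}` object per line); the pairing strategy and example shapes here are assumptions, not a prescribed pipeline.

```python
import json

def context_to_examples(turns):
    """Convert an alternating list of (user, assistant) turns from a full
    context window into fine-tuning examples in OpenAI's chat JSONL shape.
    Assumes turns[0] is a user message and turns alternate strictly."""
    examples = []
    for i in range(0, len(turns) - 1, 2):
        examples.append({
            "messages": [
                {"role": "user", "content": turns[i]},
                {"role": "assistant", "content": turns[i + 1]},
            ]
        })
    return examples

def to_jsonl(examples):
    """Serialize examples as JSONL: one JSON object per line."""
    return "\n".join(json.dumps(e) for e in examples)

# Example: memories accumulated during a conversation.
turns = [
    "My name is Ada.",
    "Nice to meet you, Ada!",
    "What is my name?",
    "Your name is Ada.",
]
print(to_jsonl(context_to_examples(turns)))
```

The resulting JSONL file would then be uploaded and used to launch a fine-tuning job via the API linked above; whether self-generated transcripts like this actually make good training data is an open question.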
It would be a bit like sleeping. Whenever the context window fills up, the model would have to go offline for a while to move memories from its context window (short-term memory) to its network weights (long-term memory).