Yeah, LLMs seem exceptionally good at summarizing large amounts of structured data when the instruction comes at the end of the prompt, like that YT demonstrates.
But in a back-and-forth conversation, where the previous chunks of dialogue are prepended as context to each new interaction, it rapidly loses track of where you told it to focus its attention.
The manner in which the context is used seems to make a huge difference.
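One way to see the difference is in how the message list gets assembled each turn. This is a minimal sketch, not any particular API: `call_model` is a hypothetical stand-in for whatever LLM call you use, and the two helpers contrast burying the instruction at the top of a growing transcript versus re-stating it near the end of the context on every turn.

```python
def call_model(messages):
    # Hypothetical stub: a real version would send `messages` to an LLM API.
    return f"reply to: {messages[-1]['content']}"

def naive_history(instruction, turns):
    # Instruction appears once at the top, then gets buried as the
    # transcript grows -- the failure mode described above.
    messages = [{"role": "system", "content": instruction}]
    for user_msg in turns:
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": call_model(messages)})
    return messages

def pinned_instruction(instruction, turns):
    # Re-inject the instruction at the end of the context on every turn,
    # near where the model's attention tends to land, instead of
    # storing it once at the top of the transcript.
    messages = []
    for user_msg in turns:
        messages.append({"role": "user", "content": user_msg})
        prompt = messages + [{"role": "system", "content": instruction}]
        messages.append({"role": "assistant", "content": call_model(prompt)})
    return messages
```

In the pinned version the instruction never drifts toward the front of the context window as the conversation grows, which is roughly what "the manner in which the context is used" comes down to here.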