The HTML file is just a big JSON blob with some JS rendering, so I wrote this bash one-liner which adds the timestamp before the conversation title:

sed -i 's|"<h4>" + conversation.title + "</h4>"|"<h4>" + new Date(conversation.create_time*1000).toISOString().slice(0, 10) + " @ " + conversation.title + "</h4>"|' chat.html

It uses the pagefind project so it can be hosted on a static host, and I made a fork of pagefind which encrypts the indexes, so you can host your private chats wherever you like: they're encrypted at rest and decrypted client-side in the browser.
(You still have to trust the server as the html itself can be modified, but at least your data is encrypted at rest.)
One of the goals is to allow me to delete all my data from chatgpt and claude regularly while still having a private searchable history.
It's early but the basics work, and it can handle both chatgpt and claude (which is another benefit as I don't always remember where I had something).
Check out this project I've been working on, which lets you do the same in your browser, entirely client-side.
https://github.com/TomzxCode/llm-conversations-viewer
Curious to get your experience trying it!
Look for this API call in Dev Tools: https://chatgpt.com/backend-api/conversation/<uuid>
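If you save that response to a file, you can pull out per-message timestamps with a few lines of Python. This is a sketch based on observed payloads: the field names (`mapping`, `message`, `create_time`, `content.parts`) are undocumented and may change without notice.

```python
import json
from datetime import datetime, timezone

def message_times(conversation_json: str):
    """Yield (iso_timestamp, author_role, text_preview) for each message
    in a chatgpt.com /backend-api/conversation/<uuid> payload.
    Field names are assumptions based on observed responses."""
    data = json.loads(conversation_json)
    for node in data.get("mapping", {}).values():
        msg = node.get("message")
        if not msg or not msg.get("create_time"):
            continue  # root/system nodes often lack a timestamp
        ts = datetime.fromtimestamp(msg["create_time"], tz=timezone.utc)
        author = msg.get("author", {}).get("role", "?")
        parts = msg.get("content", {}).get("parts", [""])
        yield ts.isoformat(), author, str(parts[0])[:60]

# Minimal fabricated payload, just to show the shape:
sample = json.dumps({"mapping": {"n1": {"message": {
    "create_time": 1700000000,
    "author": {"role": "user"},
    "content": {"parts": ["hello"]}}}}})
print(list(message_times(sample)))
```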
I also don't think it would be impossible to give the LLM access to the timestamps through a tool call, so it's not constantly polluting the chat context.
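A minimal sketch of what that could look like with OpenAI-style function calling. Everything here is hypothetical (the tool name, the message store, the idea that the provider would expose this at all); the point is only that the model would fetch a timestamp on demand instead of having one injected into every message.

```python
from datetime import datetime, timezone

# Hypothetical tool schema (OpenAI-style function calling). The model
# would invoke it only when the user asks "when did you say X?",
# so timestamps don't pollute the regular chat context.
GET_MESSAGE_TIMESTAMP = {
    "type": "function",
    "function": {
        "name": "get_message_timestamp",
        "description": "Return the UTC timestamp of a message in this "
                       "conversation, addressed by index.",
        "parameters": {
            "type": "object",
            "properties": {"index": {"type": "integer"}},
            "required": ["index"],
        },
    },
}

# Illustrative server-side handler over a fabricated message store.
MESSAGES = [{"create_time": 1700000000, "text": "let's make tacos Tuesday"}]

def get_message_timestamp(index: int) -> str:
    ts = MESSAGES[index]["create_time"]
    return datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()

print(get_message_timestamp(0))  # → 2023-11-14T22:13:20+00:00
```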
This has been requested consistently since early 2023 on the OpenAI community forum, with hundreds of comments and upvotes and deleted threads, yet remains unimplemented.
Can any of you think of a reason (UX-wise) for it not to be displayed?
Not a joke. To capture a wide audience you want to avoid numbers, among other technical niceties.
Surely there's enough people working in product development here to recognise this pattern of never getting around to fixing low-hanging fruit in a product.
Hogwash.
I can imagine a legal one. If the LLM messes big time[1], timestamps could help build the case against it, and make investigation work easier.
[1] https://www.ap.org/news-highlights/spotlights/2025/new-study...
Ie “remember on Tuesday how you said that you were going to make tacos for dinner”.
Would an LLM be able to reason about its internal state? My understanding is that they don't, really. If you correct them they just go "ah, you're right"; they don't say "oh, I had an incorrect assumption here before, and with this new information I now understand it this way."
If I chatted with an LLM and said "remember on Tuesday when you said X", I suspect it wouldn't really flow.
User Engagement Maximization At Any Cost
Obviously there's a point at which a session becomes too long, but I suspect there's a sweet spot somewhere that they optimize for.
I often observe (whether accurately or not) that among the several indicators I suspect of engagement maximization is a tendency for vital information to be withheld, while longer, more complex procedures get higher priority than simpler, cleaner solutions.
Of course, all sorts of emergent behaviors could convey such impressions falsely. But I do believe an awful lot of psychology and clever manipulation have been provided as tools for the system.
I have a lot of evidence for this and much more, but I realize it may merely be coincidence. That said, many truly fascinating, fully identifiable patterns from pathological psychology can be seen: DARVO, gaslighting, and basically everything one would see with a psychotic interlocutor.
Edit: Much of the above was observed after putting the system under scrutiny. On one astonishing and memorable occasion, GPT recommended I call a suicide hotline because I questioned its veracity and logic.
[1] <https://github.com/asgeirtj/system_prompts_leaks/blob/main/O...>
It's irresponsible for OpenAI to let this issue be solved by extensions.
https://github.com/Hangzhi/chatgpt-timestamp-extension
https://chromewebstore.google.com/detail/kdjfhglijhebcchcfkk...
That's something even the most barebones open-source wrappers have had since 2022. Probably even before, because the ERP stuff people played with predates ChatGPT by about two years (even if it was very simple).
Gemini btw too.
Though I'm not sure whether they snuck it in as part of an A/B test, because the last time I checked was in October and I'm pretty sure it wasn't there.
Just edit a message and it’s a new branch.
Another possible reason is that they want to discourage users from using the product in a certain way (one big conversation) because that’s bad for content management.
This keeps the UI clean, but makes it easy to get the timestamp when you want it.
Claude's mobile app doesn't have this feature. But there is a simple, logical place to put it. When you long-press one of your prompts, it pops up a menu and one line could be added to it:
Dec 17, 2025, 10:26 AM [I added this here]
Copy Message
Select Text
Edit
ChatGPT could simply do the same thing for both web and mobile.

I regularly use multiple LLM services including Claude, ChatGPT, and Gemini, among others. ChatGPT's output has the most unusual formatting of them all. I've resorted to passing answers through another LLM just to get proper formatting.
It's not enough to turn me off using it, but I do wish they prioritized improving their interface.
I’m not suggesting this is sufficient, I’m just noting there is somewhere in the user interface that it is displayed.
I'll have to look into the extension described in the link. Thank you for sharing. It's nice to know it's a shared problem.
When you remove temporal markers, you increase cognitive smoothing and post-hoc rationalization. That’s fine for casual chat, but risky for long-running, reflective, or sensitive threads where timing is part of the meaning.
It’s a minor UI omission with outsized effects on context integrity. In systems that increasingly shape how people think, temporal grounding shouldn’t be optional or hidden in the DOM.
Now you’re going to the doctor and you forgot exactly when the pain started. You remember that you asked ChatGPT about the pain the day it started.
So you look for the chat, and discover there are no dates. It feels like such an obvious thing that’s missing.
Let’s not over complicate things. There aren’t that many considerations. It’s just a date. It doesn’t need to be stuffed into the context of the chat. Not sure why quality or length of chat would need to be affected?
The painful slowness of long chats (especially in thinking mode for some reason) demonstrates this.
Back in April 2025, Altman mentioned people saying "thank you" was adding “tens of millions of dollars” to their infra costs. Wondering if adding per-message timestamps would cost even more.
I would be very surprised if they don’t already store date/time metadata. If they do, it’s just a matter of exposing it.
I just asked ChatGPT this:
> Suppose ChatGPT does not currently store the timestamp of each message in conversations internally at all. Based on public numbers/estimates, calculate how much money it will cost OpenAI per year to display the timestamp information in every message, considering storage/bandwidth etc
The answer it gave was $40K-$50K. I am too dumb and inexperienced to go through everything and verify if it makes sense, but anyone who knows better is welcome to fact check this.
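For what it's worth, the raw storage side is easy to sanity-check. Every number below is an assumption I picked for illustration, not an OpenAI figure, but even with generous inputs, storing one 64-bit epoch timestamp per message is vanishingly cheap; whatever real cost exists would be in bandwidth, UI work, and migration, not storage.

```python
# Back-of-envelope storage cost of per-message timestamps.
# All inputs are assumptions chosen for illustration.
messages_per_day = 2_000_000_000       # assumed total messages/day
bytes_per_timestamp = 8                # one 64-bit epoch value
storage_cost_per_gb_month = 0.02       # assumed object-storage price, USD

gb_per_year = messages_per_day * 365 * bytes_per_timestamp / 1e9
# Pricing the full year's accumulation for all 12 months gives a
# rough upper bound:
yearly_cost = gb_per_year * storage_cost_per_gb_month * 12
print(f"{gb_per_year:,.0f} GB/yr -> ${yearly_cost:,.0f}/yr")
```

Under these assumptions it comes out to a few thousand dollars a year at most, so the $40K-$50K answer would have to be dominated by something other than storage.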
if response == "thank you": print("you're welcome")
It just isn't even close at this point for my uses across multiple domains.
It even makes me sad because I would much rather use chatGPT than Google but if you plotted my use of chatGPT it is not looking good.
As the companies sprint towards AGI as the goal the floor for acceptable customer service has never been lower. These two concepts are not unrelated.
Claude Sonnet is my favorite, despite occasionally going into absurd levels of enthusiasm.
Opus is... Very moody and ambiguous. Maybe that helps with complex or creative tasks. For conversational use I have found it to be a bit of a downer.