I wasn’t happy with existing log viewers - most of them force a specific log format, are tied to ingestion pipelines, or are just a small part of a larger platform. Others didn’t display logs the way I wanted.
So I decided to build my own lightweight, flexible log viewer - one that actually fits my needs.
Check it out:
Video demo: https://www.youtube.com/watch?v=5IItMOXwugY
GitHub: https://github.com/iamtelescope/telescope
Live demo: https://telescope.humanuser.net
Discord: https://discord.gg/rXpjDnEc
How do I get my logs (e.g. local text files from disk like nginx logs, or files that need transformation like systemd journal logs) into ClickHouse in a way that's useful for Telescope?
What kind of indices do I have to configure so that queries are fast? Ideally with some examples.
How can I make full-text substring search queries fast (e.g. "unexpected error 123")? When I filter with a regex, is that still fast / does it use indices?
From the docs it isn't quite clear to me how to configure the system so that I can just put a couple TB of logs into it and have queries be fast.
Thanks!
I will consider providing a how-to guide on setting up log storage in ClickHouse, but I’m afraid I won’t be able to cover all possible scenarios. This is a highly specific topic that depends on the infrastructure and needs of each organization.
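That said, as a rough starting point, a minimal ClickHouse table for logs might look like the sketch below. All names, TTL, and index parameters here are illustrative, not a one-size-fits-all recommendation:

```sql
-- Illustrative schema; adjust columns, TTL, and index parameters to your data.
CREATE TABLE logs
(
    timestamp DateTime64(3),
    level     LowCardinality(String),
    service   LowCardinality(String),
    message   String,
    -- Bloom-filter skip index so token searches in message can skip granules
    INDEX message_tokens message TYPE tokenbf_v1(32768, 3, 0) GRANULARITY 4
)
ENGINE = MergeTree
PARTITION BY toDate(timestamp)
ORDER BY (service, timestamp)
TTL toDateTime(timestamp) + INTERVAL 30 DAY;
```

The ORDER BY key determines which range scans are cheap (here: per-service time ranges), and the tokenbf_v1 skip index lets token-based searches like hasToken() avoid reading most of the table.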
If you’re looking for an all-in-one solution that can both collect and visualize logs, you might want to check out https://www.highlight.io or https://signoz.io or other similar projects.
And also, by the way, I’m not trying to create a "Grafana Loki killer" or a "killer" of any other tool. This is just an open source project - I simply want to build a great log viewer without worrying about how to attract users from Grafana Loki or Elastic or any other tool/product.
My perspective:
A lot of people who operate servers (including me) just want to view and search their logs -- fast and conveniently. Your tool provides that. They don't care whether the backend uses ClickHouse or Postgres or whatever; that's just a pesky detail. They understand they may have to deal with it to some extent, but they don't want to become experts in those systems, or figure everything out by themselves, just to read their logs.
Also, those are general-purpose databases, so they don't tell the user how best to set them up so your tool can produce results fast and conveniently. So currently, neither side helps the user with that.
That's why it's best if your tool's docs give some basic tips on how to achieve the most commonly desired goals: some basic way to get logs into the backend DB (if there's a standard way to do that for text log files and journald, it's probably fine to just link it), and docs on what indices Telescope needs to be faster than grep for typical log search tasks (ideally with a quick snippet or link on how to set those up, for people who haven't used ClickHouse before).
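For journald specifically, the transformation I mean is small. Here's a sketch in Python: the input field names follow systemd's JSON export format (`journalctl -o json`), but the output row shape is just illustrative, not anything Telescope prescribes:

```python
import json
from datetime import datetime, timezone

def journal_entry_to_row(line: str) -> tuple:
    """Convert one line of `journalctl -o json` output into a
    (timestamp, level, service, message) row for a log table."""
    entry = json.loads(line)
    # __REALTIME_TIMESTAMP is microseconds since the epoch, as a string.
    ts = datetime.fromtimestamp(
        int(entry["__REALTIME_TIMESTAMP"]) / 1_000_000, tz=timezone.utc
    )
    # PRIORITY is a syslog level 0-7 (string); map a few common values.
    levels = {"3": "error", "4": "warning", "6": "info", "7": "debug"}
    level = levels.get(entry.get("PRIORITY", ""), "unknown")
    service = entry.get("_SYSTEMD_UNIT", entry.get("SYSLOG_IDENTIFIER", ""))
    message = entry.get("MESSAGE", "")
    return (ts, level, service, message)

# A hand-made sample entry, shaped like journald's JSON export:
sample = json.dumps({
    "__REALTIME_TIMESTAMP": "1700000000000000",
    "PRIORITY": "3",
    "_SYSTEMD_UNIT": "nginx.service",
    "MESSAGE": "unexpected error 123",
})
row = journal_entry_to_row(sample)
print(row[1], row[2], row[3])  # error nginx.service unexpected error 123
```

Piping `journalctl -o json -f` through something like this and batching inserts into ClickHouse is all the "ingestion pipeline" a small setup really needs.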
So overall, it's fine if the tool doesn't do everything. But it should say what it needs to work well.
It doesn't do full-text search indices. So if you just search for some word across all your logs (to find e.g. when a rare error happened), it is very slow (it runs the equivalent of grep, at 500 MB/s on my machine). If you have a couple TB, that can take an hour or more!
As you say, even plain grep is usually faster for such plain linear search.
I want full-text indices so that such searches take milliseconds, or a couple seconds at most.
If I search "telescope logs" on Google, that's the top result for me.
Viewer looks pretty good though. Reminds me of DataDog UI, but not as slow. Will play around more, thanks!
Regarding performance - 95% of Telescope's speed depends on how fast your ClickHouse responds. If you have a well-optimized schema and use the right indexes, Telescope's overhead will be minimal.
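For the word-search case mentioned in this thread, here's a hedged example of a query that a tokenbf_v1 bloom-filter skip index on the message column can accelerate (table and column names are illustrative):

```sql
-- hasToken() can use a tokenbf_v1 skip index to skip whole granules;
-- arbitrary substring (LIKE '%...%') and regex (match()) searches need an
-- ngrambf_v1 index instead, and may still fall back to a full scan.
SELECT timestamp, message
FROM logs
WHERE hasToken(message, 'unexpected')
  AND timestamp >= now() - INTERVAL 1 DAY
ORDER BY timestamp DESC
LIMIT 100;
```

These are approximate (bloom-filter) indexes, so they prune data rather than point at exact rows, but in practice that's often the difference between scanning terabytes and scanning gigabytes.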
I need a central place, something simple where I can actually read the contents of the logs generated by the dozens of services that I run for clients, etc., instead of stupidly SSH'ing into every server.
Does this fit the use case?
I tried Loki once but it was painful to set up and more geared toward aggregating events and stats.
I’m interested in learning more about the software installation experience.
note: I'm part of the Multiplayer team.
If you're looking for this kind of UI, also check out Coroot https://github.com/coroot/coroot which has an awesome UI for logs and OpenTelemetry traces and also stores data in ClickHouse.
I'm one of the authors of an existing log viewer (HyperDX) and was curious if we were one of those platforms that didn't fit your needs? Always love learning what use cases inspire different approaches.
You can think of it as just one part of a logging platform, where a full platform might consist of multiple components like a UI, ingestion engine, logging agent, and storage. In this setup, Telescope is only the UI.
At the moment, I have no plans to support arbitrary data visualization in Telescope, as I believe there are better BI-like tools for that scenario.
Is there any open source tool that does the same?
Telescope can work with any table in ClickHouse. Of course, not every single ClickHouse type has been tested, but there shouldn’t be any issues with the most common ones.
If you want, you can check how it works with the OTEL schema in the live demo here: https://telescope.humanuser.net/sources/otel-demo/explore
Also, what service did you use to make the video, if you don't mind my asking?
I haven't tested the new JSON format in ClickHouse yet, but even if something doesn't work at the moment, fixing it should be trivial.
As for the video service, it wasn’t actually a service but rather a set of local tools:
- Video capture/screenshots - macOS default tools
- Screenshot editing - GIMP
- Voice generation - https://elevenlabs.io/app/speech-synthesis/text-to-speech
- Editing/montage - DaVinci Resolve 19
Just curious, what is the most challenging thing in your opinion when building such a log viewer?
For me, the most challenging parts are still ahead - live tailing and a plugin system to support different storage backends beyond just ClickHouse. Those will be interesting problems to solve! What was the biggest challenge for you?
From my perspective, a ClickHouse-based setup can be cheaper and possibly faster under certain conditions. Here’s a comparison made by ClickHouse Inc.: https://clickhouse.com/blog/clickhouse_vs_elasticsearch_the_...
My motto is "Know your data". I’m not a big fan of schemaless setups - I believe in designing a proper database schema rather than just pushing data into a black hole and hoping the system will handle it.
Regarding the name, "Telescope" is also the name of a Neovim fuzzy finder[0] that dominates the ecosystem there. Other results appear by searching "telescope github".
OSS o11y platform built on ClickHouse & OTel.