Snowflake released a white paper back before its IPO that mentioned this same feature (secretly exploding JSON into columns). That explains why Snowflake feels faster than it should: they've quietly done a lot of amazing things and just shipped it all as a polished product, like Apple.
https://www.singlestore.com/blog/json-builtins-over-columnst...
For a column-store database, dealing with JSON as strings is a big perf hit.
Are there any setups for reliable data ingestion into Clickhouse that don't involve spinning up Kafka & Zookeeper?
Vector is a relatively simple ingest tool that supports lots of sources and sinks. It's easy to run: just a config file and a single binary, and you're set. But it can do a fair amount of ETL (e.g. enriching or reshaping JSON), including some more advanced pipeline operators like joining multiple streams into one. It may not be as advanced as some dedicated ETL tools, but it covers a lot of ground.
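For reference, a minimal Vector pipeline looks something like the config below. This is just a sketch: the log path, database, and table names are illustrative, not from any real setup, and assume a local ClickHouse listening on its HTTP port.

```toml
# Sketch: tail JSON log files, reshape them, batch-insert into ClickHouse.
# Paths and table names are illustrative.

[sources.app_logs]
type = "file"
include = ["/var/log/app/*.json"]

[transforms.parse]
type = "remap"
inputs = ["app_logs"]
source = '''
. = parse_json!(.message)
.ingested_at = now()
'''

[sinks.ch]
type = "clickhouse"
inputs = ["parse"]
endpoint = "http://localhost:8123"
database = "default"
table = "logs"
# Vector batches inserts, which suits ClickHouse's preference for few, large writes.
batch.timeout_secs = 5
```

The nice part is that the buffering/batching knobs live in the sink config, so the producers don't have to be smart about it.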
Since you mention Kafka, I would also mention Redpanda, which is Kafka-compatible but much easier to run. No Java, no ZooKeeper. I think you'd still want Vector here, with Vector connecting Redpanda to ClickHouse. Then you don't need the buffering that Vector provides, and Vector would only act as the "router" that pulls from Redpanda and ingests into ClickHouse.
Another option is RudderStack, which we also use for other purposes. It's a richer tool with a full UI for setting up pipelines, and so on.
Are you looking for setups for OSS ClickHouse or managed ClickHouse services that solve it?
Both Tinybird & ClickHouse Cloud are managed ClickHouse services that include ingest connectors without needing Kafka.
Estuary (an ETL tool) just released Dekaf which lets them appear as a Kafka broker by exposing a Kafka-compatible API, so you can connect it with ClickHouse as if it was Kafka, without actually having Kafka (though I'm not sure if this is in the open source Estuary Flow project or not, I have a feeling not)
If you just want to play with CH, you can always use clickhouse-local or chDB, which are more like DuckDB, running without a server, and work great for just talking to local files. If you don't need streams and are just working with files, you can also use them as an in-process/serverless transform engine - file arrives, read with chDB, process it however you need, export it as CH binary format, insert directly into your main CH. Nice little pattern that can run on a VM or in Lambdas.
> Estuary (an ETL tool) just released Dekaf which lets them appear as a Kafka broker by exposing a Kafka-compatible API
This is definitely an improvement, but if it looks like kafka and sounds like kafka, I get a bit sus.
> If you just want to play with CH, you can always use clickhouse-local
I've done that, but getting from this to "streaming data" is where I get stuck.
> If you don't need streams
Afraid streams are what I'm dealing with...
That's the same stage I get stuck every time.
I have data emitters (in this example let's say my household IoT devices, feeding a MQTT broker then HomeAssistant).
I have where I want the data to end up (Clickhouse, Database, S3, whatever).
How do I get the data from A to B, so there are no duplicate rows (if the ACK for an upload isn't received when the upload succeeded), no missing rows (the data is retried if an upload fails), and some protection if the local system goes down (data isn't ephemeral)?
The easiest I've found is writing data locally to files (JSON, parquet, whatever), new file every 5 minutes and sync the older files to S3.
But then I'm stuck again. How do I continually load new files from S3 without any repetition or edge cases? And did I really need the intermediate files?
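One way to square that circle is to make the sink idempotent and give every batch a deterministic ID, so a retry after a lost ACK replays harmlessly instead of duplicating. Here's a toy sketch of the idea in Python; `FlakySink` is a stand-in for ClickHouse/S3, and all the names are made up for illustration:

```python
import hashlib
import json

class FlakySink:
    """Stand-in for the remote store (ClickHouse, S3, ...).
    Writes land, but the ACK is sometimes 'lost', like a dropped response."""
    def __init__(self, ack_failures):
        self.rows = {}              # batch_id -> rows (idempotent upsert)
        self.ack_failures = ack_failures

    def write(self, batch_id, rows):
        self.rows[batch_id] = rows  # replaying the same batch is a no-op
        if self.ack_failures > 0:
            self.ack_failures -= 1
            raise TimeoutError("ack lost")  # the write landed, the ACK didn't

def upload_all(batches, sink, max_retries=5):
    for rows in batches:
        # Deterministic batch ID: same data -> same ID, so retries dedup.
        payload = json.dumps(rows, sort_keys=True).encode()
        batch_id = hashlib.sha256(payload).hexdigest()
        for _ in range(max_retries):
            try:
                sink.write(batch_id, rows)
                break
            except TimeoutError:
                continue  # retry; duplicate replays are absorbed by the ID
        # real code would persist the batch and alert if retries are exhausted

batches = [[{"t": 1, "v": 20.5}], [{"t": 2, "v": 21.0}]]
sink = FlakySink(ack_failures=1)
upload_all(batches, sink)
print(sum(len(r) for r in sink.rows.values()))  # 2: no dupes despite a lost ACK
```

This is essentially what ClickHouse's own insert deduplication on replicated tables does (identical insert blocks are deduped by hash), so "dumb retries + dedup key" gets you exactly-once-ish delivery without Kafka.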
Duplicates get merged out, and errors can be handled at the HTTP level. (Admittedly, one bad row in a big batch post is a pain, but I don't see that much.)
There are plenty of ways to do it with batching, but if I assume you want real-time "insert into table" style or a direct "ch.write(data)", then no. There is no way as far as I know without batching. This is one of the main reasons we stopped using CH about 3 years ago for our last project, financial data analytics tooling. CH doesn't have a write-ahead log (WAL), so your data producers need to be smart, or you need a "queue" type service to deal with it, whether it's S3 or Kafka or Kinesis, to allow batching.
That's probably based on outdated information. You need Kafka if you want something else to handle batching for you, but ClickHouse can handle batching itself using asynchronous inserts:
https://clickhouse.com/blog/asynchronous-data-inserts-in-cli...
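For reference, it's just a settings flag on the insert; the server buffers small inserts and flushes them as larger parts. A sketch (the `events` table and values are illustrative):

```sql
-- Let the server buffer small inserts and flush them as larger parts.
-- Table name and values are illustrative.
INSERT INTO events
SETTINGS async_insert = 1, wait_for_async_insert = 1
VALUES ('2024-01-01 00:00:00', 'click', 42);
```

With `wait_for_async_insert = 1` the client still gets an ACK only after the buffer is flushed, which matters for the delivery-guarantee discussion above.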
What makes ClickHouse different such that you're unable to load data into it?
https://docs.databend.com/sql/sql-commands/dml/dml-copy-into...
I'm so excited for this! One of my major bugbears with storing logs in Elasticsearch is the set-type-on-first-seen-occurrence headache.
Hope to see this leave experimental support soon!
For example, consider the documents {"value": 42} and {"value": "foo"}. To index this, index {"value::int": 42} and {"value::str": "foo"} instead. Now you have two distinct fields that don't conflict with each other.
To search this, the logical choice would be to first make sure that the query language is typed. So a query like value=42 would know to search the int field, while a query like value="42" would look in the string field. There's never any situation where there's any ambiguity about which data type is to be searched. KQL doesn't have this, but that's one of their many design mistakes.
You can do the same for any data type, including arrays and objects. There is absolutely no downside; I've successfully implemented it for a specific project. (OK, one downside: more fields. But that's the nature of the beast. These are, after all, distinct sets of data.)
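The type-suffix scheme described above can be sketched in a few lines of Python; the suffix names are just the ones used in the example, nothing standard:

```python
import json

# Map a document's fields to type-suffixed index fields so that
# {"value": 42} and {"value": "foo"} never collide in one field.
SUFFIXES = {int: "int", float: "float", str: "str"}

def index_fields(doc):
    indexed = {}
    for key, val in doc.items():
        # bool is a subclass of int in Python, so it must be checked first
        if isinstance(val, bool):
            suffix = "bool"
        else:
            suffix = SUFFIXES.get(type(val), "json")  # arrays/objects: raw JSON
        if suffix == "json":
            indexed[f"{key}::json"] = json.dumps(val)
        else:
            indexed[f"{key}::{suffix}"] = val
    return indexed

print(index_fields({"value": 42}))     # {'value::int': 42}
print(index_fields({"value": "foo"}))  # {'value::str': 'foo'}
```

A typed query language then picks the field from the literal's type: `value=42` searches `value::int`, `value="42"` searches `value::str`.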
But now all my queries that look for “value” don’t work. And I’ve got two columns in my report where I only want one.
[1] https://docs.pinot.apache.org/basics/indexing/star-tree-inde...
This is incorrect. ClickHouse is designed for distributed setups from the beginning, including cross-DC installations. It has been used on large production clusters even before it was open-sourced. When it became open-source in June 2016, the largest cluster was 394 machines across 6 data-centers with 25 ms RTT between the most distant data-centers.
> for example if you add additional nodes it is not easy to redistribute data.
This is precisely one of the issues I predict we'll face with our cluster as we're ramping up OTEL data and it's being sent to a small cluster, and I'm deathly afraid that it will continue sending to every shard in equal measure without moving existing data around. I cannot find any good method of redistributing the load other than "use the third-party backup program and pray it doesn't shit the bed".
(disclaimer: I have not used either yet)
Plus it has a MySQL-flavoured client connector where Clickhouse does its own thing, so may be easier to integrate with some existing tools.
Spark can do analysis on huge quantities of data, and so can Microsoft Fabric. What Pinot can do that those tools can't is extremely low latency (milliseconds vs. seconds), concurrency (1000s of queries per second), and ability to update data in real-time.
Excellent intro video on Pinot: https://www.youtube.com/watch?v=_lqdfq2c9cQ
Now, it wasn't a good schema to start with, and there was a factor of 3 or 4 in size that could have been pulled out, but ClickHouse was a factor of 20 better in on-disk size for what we were doing.
For truly big data (terabytes per month) we rely on BigQuery. For smaller data that is more OLTP write heavy we are using psql… but I think there is room in the middle.
https://clickhouse.com/blog/a-new-powerful-json-data-type-fo...
CREATE TABLE mk3
ENGINE = MergeTree
ORDER BY (account_id, resource_type)
SETTINGS allow_nullable_key = 1
AS SELECT
*,
CAST(content, 'JSON') AS content_json
FROM file('Downloads/data_snapshot.parquet')
Query id: 8ddf1377-7440-4b4d-bb8d-955cd0f2b723
↑ Progress: 239.57 thousand rows, 110.38 MB (172.49 thousand rows/s., 79.48 MB/s.) 22%
Elapsed: 4.104 sec. Processed 239.57 thousand rows, 110.38 MB (58.37 thousand rows/s., 26.89 MB/s.)
Received exception:
Code: 107. DB::ErrnoException: Cannot open file /var/folders/mc/gndsp71j6zz64pm7j2wz_6lh0000gn/T/clickhouse-local-503e1494-c3fb-4a5e-9514-be5ba7940fec/data/default/mk3/tmp_insert_all_1_1_0/content_json.plan.features.available.core/audio.dynamic_structure.bin: , errno: 2, strerror: No such file or directory. (FILE_DOESNT_EXIST)

Well, if you want to do things exactly how JS does it, then storing them all as float is correct. However, the JSON standard doesn't say it needs to be done the same way as JS.
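Python's json module is a handy illustration of the non-JS reading of the spec: it keeps integers and floats as distinct types, where a JS engine would store both as a double.

```python
import json

# The JSON grammar doesn't mandate JS number semantics; Python's parser
# keeps 42 and 42.0 as distinct types, unlike a JS engine.
a = json.loads('{"value": 42}')["value"]
b = json.loads('{"value": 42.0}')["value"]
print(type(a).__name__, type(b).__name__)  # int float
```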
It turned out to be a great idea that inspired other databases.
Since we've been using ClickHouse long before this JSON functionality was available (or even before the earlier version of this, called `Object('json')`, was available), we ended up setting up a job that would materialize JSON fields out of a JSON blob into materialized columns, based on query patterns against the keys in the blob. Then, once those materialized columns were created, we would just route the queries to those columns at runtime if they were available. This saved us a _ton_ of CPU and IO utilization. Even though ClickHouse uses some really fast SIMD JSON functions, the best way to make a computer go faster is to make it do less, and this new JSON type does exactly that, and it's so turnkey!
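That pattern can be sketched with a materialized column; the table, column, and JSON key names here are illustrative, not anyone's actual schema:

```sql
-- Promote a hot JSON key to its own dense column so queries skip JSON parsing.
-- Table/column/key names are illustrative.
ALTER TABLE events
    ADD COLUMN user_id_mat UInt64
    MATERIALIZED JSONExtractUInt(properties, 'user_id');

-- Queries rewritten at runtime then read the column instead of the blob:
SELECT count() FROM events WHERE user_id_mat = 123;
```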
https://posthog.com/handbook/engineering/databases/materiali...
The team over at ClickHouse Inc., as well as the community behind it, moves surprisingly fast. I can't recommend it enough, and I'm excited for everything else on the roadmap here, especially what's on the horizon with Parquet and Iceberg support.
It's common knowledge that 'postgres is all you need' - but if you somehow reach the stage of 'postgres isn't all I need and I have hard proof' this should be the next tech you look at.
Also, clickhouse-local is rather amazing at csv processing using sql. Highly recommended for when you are fed up with google sheets or even excel.
If I were still there, I'd be pushing a lot harder to finally throw away the legacy system (which has lost so many people it's basically ossified, anyway) and just "rebase" it all onto clickhouse and pyspark sparksql. We would throw away so much shitty cruft, and a lot of the newer mapreduce and RDD code is pretty portable to the point that it could be plugged into RDD's pipe() method.
Anyway. My current job, we just stood up a new product that, from day 1, was ingesting billions of rows (event data) (~nothing for clickhouse, to be clear. but obviously way too much for pg). And it's just chugging along. Clickhouse is definitely in my toolbox right after postgres, as you state.
I'd throw in one more to round it out however. The three rings of power are Postgres, ClickHouse and NATS. Postgres is the most powerful ring however and lots of times all you need.
Typically when you discover the need for OLAP DB is when you reach that scale, so I'm personally not sure what the real use case for DuckDB is to be completely honest.
Can someone briefly explain how or if adding data types to JSON - a standardized grammar - leaves something that still qualifies as JSON?
I have no problem with people creating supersets of JSON, but if my standard lib JSON parser can't read your "JSON" then wouldn't it be better to call it something like "CH-JSON"?
If I am wildly missing something, I'm happy to be schooled. The end result certainly sounds cool, even though I haven't needed ClickHouse yet.
The first is JSON as a data encoding, ie. the particular syntax involving braces and quotes and commas and string escapes.
The second is JSON as a data type, ie. a value which may be a string, number, bool, null, array of such values, or map from string to such values. The JSON data type is the set of values which can be represented by the JSON data encoding.
The article describes an optimized storage format for storing values which have the JSON data type. It is not related to JSON the data encoding, except in that it allows input and output using that encoding.
This is the same thing as postgres' JSONB type, which is also an optimized storage format for values of the JSON data type (internally it uses a binary representation).
The data remains standard JSON and so standard JSON parsers wouldn’t be affected since the optimizations are part of the storage layer and not the JSON structure itself.
No, not really.
The blog post talks about storing JSON data in a column-oriented database.
The blog post talks about importing data from JSON docs into their database. Prior to this, they stored JSON documents in their database like any standard off-the-shelf database does. Now they parse the JSON document when importing, and they store those values in their column-oriented database as key-value pairs, and preserve type information.
The silly part is that this all sounds like an intern project: someone tasked with adding support for importing data stored in JSON files into a column-oriented database, along with an exporter to match. But no, it seems an ETL job now counts as inventing JSON.
[1] https://www.postgresql.org/docs/current/datatype-json.html
I had to scroll way down the article, passing over tons of what feel like astroturfing comments advertising a vendor and their product line, to see the very first comment pointing out the elephant in the room.
I agree, whatever is described in the blog post is clearly not JSON. It's a data interchange format, and it might be mappable to JSON under the right circumstances, but JSON it is not. It's not even a superset or a subset.
I mean, by the same line of reasoning, TOML, CSV, and YAML are all JSON. Come on. Even BSON is described as a different format that can be transcoded to JSON.
The article reads like a cheap attempt to gather attention to a format that otherwise would not justify it.