After 90 days without edits, BigQuery moves a partition to long-term storage pricing, which works out to about half the cost of standard object storage, as long as you use partitioned tables and don't edit the data.
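A rough back-of-envelope sketch of that claim. The per-GB prices below are assumed list prices (check the current pricing pages); the point is just that long-term BigQuery storage lands at roughly half the standard object-storage rate:

```python
# Assumed list prices in USD per GB-month; not authoritative, verify
# against the current BigQuery and Cloud Storage pricing pages.
BQ_ACTIVE = 0.02      # BigQuery active storage
BQ_LONG_TERM = 0.01   # BigQuery long-term storage (partition untouched 90 days)
GCS_STANDARD = 0.02   # Cloud Storage, standard class

gb = 1000  # e.g. 1 TB of log data

bq_monthly = gb * BQ_LONG_TERM      # 10.0 USD/month
gcs_monthly = gb * GCS_STANDARD     # 20.0 USD/month
print(bq_monthly, gcs_monthly)      # roughly half the cost
```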
> How does Logflare's approach contrast with other entrants like axiom.co/99 who are leveraging blob stores (Cloudflare R2) for storage and serverless for querying for lower costs?
Haven't really looked at their arch, but BigQuery kind of does that for us.
> Multiple pluggable storage/query backends (like Clickhouse) is all good, but is there a default that Logflare is going to recommend / settle on?
tbd
> Are there plans to go beyond just APM with Logflare (like metrics and traces, for instance)?
Yes. You can send any JSON payload to Logflare and it will handle it. Official OpenTelemetry support is coming, but it should already work if your library can send data over as JSON, and that includes metrics.
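To illustrate the "any JSON payload" point, here is a minimal sketch of building a metrics-style event. The payload shape (`message` plus a free-form `metadata` object) and the commented-out endpoint are assumptions for illustration; check the Logflare docs for the actual ingestion endpoint and headers:

```python
import json

def build_logflare_event(message, metadata=None):
    """Build a JSON event of the shape Logflare ingests.

    Logflare accepts arbitrary JSON, so `metadata` can be any
    JSON-serializable structure, including metric values.
    """
    event = {"message": message, "metadata": metadata or {}}
    return json.dumps(event)

# A metrics-style payload: arbitrary nested JSON is fine.
payload = build_logflare_event(
    "request completed",
    {"latency_ms": 42, "status": 200, "path": "/api/health"},
)
print(payload)

# Sending it is an ordinary HTTP POST; endpoint and header names
# below are assumptions, consult the Logflare docs:
# requests.post(INGEST_URL, data=payload,
#               headers={"Content-Type": "application/json",
#                        "X-API-KEY": API_KEY})
```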
> I guess, at some level, this product signals a move away from Postgres-for-everything stance?
Postgres will usually last you a very long time, but at some point, with lots of this kind of data, you'll really want an OLAP store.
With Supabase Wrappers you'll be able to easily access your analytics store from Postgres.
https://supabase.com/blog/postgres-foreign-data-wrappers-rus...