Writing in a single thread removes a whole host of problems in understanding (and implementing) how data changes over time. (And a busy MVCC sql db spends 75% of its time doing coordination, not actual writes, so a single thread applying a queue of transactions in sequence can be faster than your gut feeling might tell you.)
Transactions as first-class entities of the system mean you can easily attach metadata to every change explaining who made it and why, so you'll never again have to wonder "hmm, why does that column have that value, and how did it happen?". Once you get used to this, doing UPDATE in SQL feels pretty weird, as the default mode of operation for your _business data_ is to destroy it in place, with no trace of who changed it or why!
Having the value of the entire database at a point in time available to your business logic as a (lazy) immutable value you can run queries on opens up completely new ways of writing code, and lets your database follow "functional core, imperative shell". Someone needs to have the working set of your database in memory, why shouldn't it be your app server and business logic?
Looking forward to seeing what this does for the adoption of Datomic!
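To make the "database as a value" point concrete, here's a rough Python sketch (hypothetical names, nothing like Datomic's actual API): queries are pure functions over an immutable snapshot, and older snapshots remain valid forever.

```python
# Hypothetical sketch: treating "the database at time t" as an immutable value.
# `db_t1` and `db_t2` stand in for snapshots of the same logical database.

def balances_over(db, threshold):
    """Pure query: no connection, no side effects; a value in, a value out."""
    return sorted(name for name, acct in db.items() if acct["balance"] > threshold)

db_t1 = {"alice": {"balance": 100}, "bob": {"balance": 300}}
db_t2 = {**db_t1, "alice": {"balance": 500}}  # a "later" value; t1 is untouched

rich_at_t1 = balances_over(db_t1, 200)  # ["bob"]
rich_at_t2 = balances_over(db_t2, 200)  # ["alice", "bob"]
```

That's the "functional core": business logic only ever sees values, and the imperative shell is whoever produces the next snapshot.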
This one confused me. The obvious reason why you don't want the whole working set of your database in the app server's memory is because you have lots of app servers, whereas you only have one database[1]. This suggests that you put the working set of the database in the database, so that you still only need the one copy, not in the app servers where you'd need N copies of it.
The rest of your post makes sense to me but the thing about keeping the database's working set in your app server's memory does not. That's something we specifically work to avoid.
[1] Still talking about "non-webscale" office usage here, that's the world I live in as well. One big central database server, lots of apps and app servers strewn about.
So in a traditional DB you might have a DBA set up a reporting database so the operational one is not affected. Using Datomic, the reporting service gets a Datomic peer with its own copy of the DB, without any extra DBA work and without affecting any web services. This also works nicely for batch jobs, or in any situation where you don't want different services to affect each other's performance.
It's true that a lot more memory gets used, but memory is relatively cheap; when hosting in the cloud, the biggest cost is usually the vCPUs. And in a typical Clojure/Datomic web application you don't need to put cache services like Redis in front of your DB.
The assumption here is that the usual bottleneck for most information systems and business applications is reading and querying data.
But if you have a handful of app servers, it's much more reasonable. The relatively low-scale back-office systems I tend to work with typically have 2, max 3. Also, spinning up an extra instance to do some data crunching does not affect the performance of the app servers, as they don't have to coordinate.
There's also the performance and practicality benefits you get from not having to do round-trips in order to query. You can now actually do 100 queries in a loop, instead of having to formulate everything as a single query.
And if you have many different apps operating on the same DB, it becomes a benefit as well. Each app server will only have the _actual_ working set it queries in memory, not the sum of the working sets across all of the apps.
If this becomes a problem, you can always architect your way around it, by having two beefy API app servers that your 10s or 100s of other app servers talk to.
Having the applications keep a cached version of the db means that when one of them runs a complex or resource intensive query, it's not affecting everyone else.
So is any cloud-managed DB offering, and at that scale we're talking very small costs anyway.
Why datomic instead?
I don’t think I agree with this as stated. It is too squishy and subjective to say “perfect”.
More broadly, the above is not and should not be a cognitive “anchor point” for reasonable use cases for Datomic. Making that kind of claim requires a lot more analysis and persuasion.
This is Ions in the Cloud version, or the in-process peer library in the on-prem version.
It doesn't feel like the people behind Datomic actually want to have users outside of the Clojure world, which will be rather limiting to adoption.
You could have one or more append-only tables that store events/transactions/whatever you want to call them, and then materialized views (or whatever) that gather that history into a "current state" of "entities", as needed
If eventual-consistency is acceptable, it seems like you could aggressively cache and/or distribute reads. Maybe you could even do clever stuff like recomputing state only from the last event you had, instead of from scratch every time
How bad of an idea is this?
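For what it's worth, here's a minimal sketch of that scheme in Python with SQLite (the table and column names are made up): events are only ever appended, and "current state" is derived from them on demand.

```python
import sqlite3

# Hypothetical sketch of the append-only scheme described above.
# Events are never updated or deleted; current state is derived.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE events (
    seq INTEGER PRIMARY KEY AUTOINCREMENT,
    entity TEXT, attr TEXT, value TEXT, actor TEXT)""")

def record(entity, attr, value, actor):
    conn.execute("INSERT INTO events (entity, attr, value, actor) VALUES (?,?,?,?)",
                 (entity, attr, value, actor))

record("order-1", "status", "placed",  "alice")
record("order-1", "status", "shipped", "ops-bot")

# "Materialized view": latest value per (entity, attr). Incremental
# recomputation would remember the last seq seen and fold only newer events.
current = dict(conn.execute("""
    SELECT entity || '/' || attr, value FROM events e
    WHERE seq = (SELECT MAX(seq) FROM events
                 WHERE entity = e.entity AND attr = e.attr)""").fetchall())

# The full history (who did what, in order) is always queryable.
history = conn.execute(
    "SELECT value, actor FROM events WHERE entity='order-1' ORDER BY seq").fetchall()
```

The idea itself isn't bad at all; it's roughly event sourcing, and the hard parts show up later (schema evolution of old events, compaction, and keeping derived views fast).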
How do they scale it for Nubank? (millions of users)
However, if you want to paginate data that you need to sort first, and the data isn't sorted the way you want in the index, you have to read all of the data first, and then sort it. But this is also what a database server would need to do :)
So, "events" in Datomic are structured and Datomic uses them to give you query powers, they're not opaque blobs of data.
It's a good idea to version your schema changes in git using something like Liquibase; that gets rid of at least some of those pains. Liquibase works on a wide variety of databases, even graph databases like Neo4j.
I got the same feeling in Erlang many times, once write operations start getting parallel you worry about atomic operations, and making an Erlang process centralize writes through its message queue always feels natural and easy to reason about.
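The single-writer pattern is easy to sketch outside Erlang too. Here's a hypothetical Python version where all mutations funnel through one queue and are applied by one thread, so no write ever races another:

```python
import queue
import threading

# Hypothetical sketch of the single-writer pattern: producers enqueue
# transactions concurrently; exactly one thread applies them in order.
store = {}
writes = queue.Queue()

def writer():
    while True:
        tx = writes.get()
        if tx is None:          # shutdown sentinel
            break
        key, delta = tx
        store[key] = store.get(key, 0) + delta  # safe: only this thread writes

t = threading.Thread(target=writer)
t.start()

# Many producers could call put() concurrently; application order is queue order.
for _ in range(1000):
    writes.put(("counter", 1))
writes.put(None)
t.join()
```

No locks around `store` are needed because only the writer thread ever touches it; that's the whole appeal of serializing writes through one process.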
Releasing only binaries, while I understand people being grumpy about it, seems like an interesting way of keeping their options open going forwards. Since it was always closed source, it now being 'closed source but free' is still a net win.
The Datomic/Cognitect/NuBank relationship is an interesting symbiotic dynamic and while I'm sure we can all think of ways it might go horribly wrong in future I rather hope it doesn't.
Based on experience with Prolog, I always thought using Datalog in a database like Datomic would mean being able to model your data model using stored queries as a very expressive way of creating "classes". And that by modeling your data model using nested such queries, you alleviate the need for an ORM, and all the boilerplate and duplication of defining classes both in SQL and as objects in OO code ... since you already modelled your data model in the database.
Does Datomic live up to that vision?
Datomic also support rules, including recursive rules. I wrote a library to do OWL-style inference about classes of entities using rules. You can see an example here (https://github.com/cognitect-labs/onto/blob/master/src/onto/...). This is a rule that infers all the classes that apply to an entity from an attribute that assigns it one class.
I would also say that building an "entity type definition" system as entities in Datomic is almost the first thing every dev tries after the tutorial. It works... but you almost never _need_ it later.
I was more thinking of the means to define your data "classes" (or whatever it is called on this context) though, rather than how it is passed around.
Datomic Cloud is slow, expensive, resource intensive, and designed in the baroque style of massively over-complicated CloudFormation astronautics. It's hard to diagnose performance issues and impossible to back up. We ran into one scenario where apparently we weren't quick enough to migrate to the latest version, AWS had dropped support for $runtime in Lambda, and it became impossible to upgrade the CloudFormation template. We had to write application code to export/reimport prod data from one cluster to another; there was no other migration path (and yes, we were talking to their enterprise support).
We migrated to Postgres and are now using a 10th of the compute resources. Our p99 response times went from 1.3-1.5s to under 300ms once all the read traffic was cut over.
Mother Postgres can do no wrong.
Still, Datomic seems like a cool idea.
There were some cool ideas in Datomic Cloud, like IONs and its integrated deployment CLI. But the dev workflow with Datomic Pro in the REPL, potentially connected to your live or staging database is much more interactive and fun than waiting for CodeDeploy. I guess there is a reason Datomic Pro is the featured product on datomic.com again. It appears that Cognitect took a big bet with Datomic Cloud and it didn't take off. Soon after the NuBank acquisition happened. That being said, Datomic Cloud was not a bad idea, it just turned out that Datomic Pro/onPrem is much easier to use. Also of all their APIs, the "Peer API" of Pro is just the best IME, especially with `d/entity` vs. "pull" etc.
Datomic's killer feature is time travel.
Did you simply not use that feature once you moved off Datomic (and if so why'd you pick Datomic in the first place)? Or are you using Postgres using some extension to add in?
Our data model is not large and we had a very complete test suite already, so it was easy to produce another implementation backed by postgres, RAM, etc.
We use https://django-simple-history.readthedocs.io/en/latest/ (with some custom tooling for diff generation) for audit logs and resettability, and while you can't move an entire set of tables back in time simultaneously, it's usually sufficient for understanding data history.
> Mother Postgres can do no wrong.
I'll say that Postgres is usually the answer for the vast majority of use-cases. Even when you think you need something else to do something different, it's probably still a good enough solution. I've seen teams pitching other systems just because they wanted to push a bunch of JSON. Guess what: PG can handle that fine and can even run SQL queries against it. PG can also access other database systems with its foreign data wrappers (https://wiki.postgresql.org/wiki/Foreign_data_wrappers).
The main difficulty is that horizontally scaling it is not trivial (although not impossible, and third-party offerings can improve on that).
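On the JSON point, here's a hedged illustration using SQLite's built-in JSON functions as a stand-in (Postgres would use `jsonb` columns and operators like `->>` instead; the table and data are made up):

```python
import sqlite3

# Plain SQL over JSON documents, no dedicated document store required.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO docs (body) VALUES (?)", [
    ('{"kind": "invoice", "total": 120}',),
    ('{"kind": "invoice", "total": 80}',),
    ('{"kind": "receipt", "total": 50}',),
])

# Filter and project on fields inside the JSON blobs.
big_invoices = conn.execute("""
    SELECT json_extract(body, '$.total') FROM docs
    WHERE json_extract(body, '$.kind') = 'invoice'
      AND json_extract(body, '$.total') > 100
""").fetchall()
```

Postgres additionally lets you index those expressions (GIN indexes on `jsonb`), which is usually the point where "just use PG" wins over adding a document database.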
Don't misunderstand me. There are plenty of times when something else is the right choice. I'm just saying, when I have a say in the matter, folks need to clear that bar -- "tell me why tool xyz is going to be so much better than postgres for this use case that it justifies the overhead of adding another piece of software infrastructure."
Like, you want to add a document database? Obviously Mongo, Elasticsearch, etc are "best of breed." But Postgres is pretty capable and this team is already good at it. Are we ever going to have so many documents that e.g. Elasticsearch's mostly-effortless horizontal scaling even comes into play? If you don't ever see yourself scaling past 1,000 documents then adding a new piece of infra is a total joke. I see that kind of thing all the time. I can't tell if developers truly do not understand scale, or if they simply do not give a f--- and simply want to play with shiny new toys and enrich their resumes.
I mean, I've literally had devops guys telling me we need a Redis cluster even though we were only storing a few kilobytes of data, that was read dozens of times daily with zero plans to scale. That could have been a f'in Postgres table. Devops guy defended that choice hard even when pressed by mgmt to reduce AWS spend. WTF?
You should give TerminusDB a go (https://terminusdb.com/), it's really OSS, the cloud version is cheap, fast, there are not tons of baroque settings, and it's easy to backup using clone.
TerminusDB is a graph database with a git-like model, with push/pull/clone semantics, as well as a Datalog query language.
Simple, eloquent, damn true.
I guess they don't claim to be open source, they're claiming to be free, which is - in itself - awesome.
Last time I checked, you couldn't push binaries to Maven Central without also releasing the source. That may have changed.
EDIT: I was wrong. They actually released the binaries under the Apache licence, not the source code. Which is, to put it mildly, deceptive. I don't even have an idea what that actually means.
They don't say anything about the source code being published. That's why (to me) this is so interesting. I've never seen binaries released without source code before.
But Maven Central has strict rules around what can be published there. I just double checked and it's a requirement to publish the source as well as the binaries:
https://central.sonatype.org/publish/requirements/#supply-ja...
https://vvvvalvalval.github.io/posts/2018-11-12-datomic-even...
This is why I'd rather use XTDB [1], a database similar to Datomic in spirit, but with bitemporality baked in.
Datomic is an operational database management system - designed for transactional, domain-specific data. It is not designed to be a data warehouse, nor a high-churn high-throughput system (such as a time-series database or log store).
It is a good fit for systems that store valuable information of record, require developer and operational flexibility, need history and audit capabilities, and require read scalability.
(via https://docs.datomic.com/pro/getting-started/brief-overview....)

There is some SQL temporal support, but it's not great and varies a lot. Also, since it's not native to the storage, it hides a lot of complexity under the rug, making it not great.
Many financial systems use Event Sourcing (OOP + ORM). I had to suffer this at a previous employer.
See https://vvvvalvalval.github.io/posts/2018-11-12-datomic-even...
- mutable data vs immutable data
- tables, row-based vs triple store, attribute-based (EAV/RDF)
- table schemas vs attribute schemas
- relational connections vs graph connections
- SQL vs datalog
- nested queries vs flat queries and rules
- remote access vs client side index
etc.
It allows you to write queries in a pull style; access can be trigger-based, Datalog, or raw index access. It's immutable by default and allows historical queries. It allows metadata on the transactions themselves.
A lot of the time, users build much of that themselves or rely on frameworks to do it.
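A toy sketch of the EAV-plus-reified-transactions model (purely illustrative, nothing like Datomic's actual API): every fact is an (entity, attribute, value, tx) tuple, and the transaction itself carries "who and why" metadata.

```python
# Hypothetical "datom" log: facts are appended, never overwritten.
datoms = []   # list of (entity, attribute, value, tx)
tx_meta = {}  # tx id -> metadata about the change itself

def transact(facts, who, why):
    tx = len(tx_meta) + 1
    tx_meta[tx] = {"who": who, "why": why}
    datoms.extend((e, a, v, tx) for e, a, v in facts)
    return tx

def as_of(tx):
    """Value of the whole database as of transaction `tx`:
    the latest value per (entity, attribute) among facts with t <= tx."""
    return {(e, a): v for e, a, v, t in datoms if t <= tx}

t1 = transact([("user-7", "email", "a@old.com")], who="alice", why="signup")
t2 = transact([("user-7", "email", "a@new.com")], who="bob", why="support ticket")

then = as_of(t1)[("user-7", "email")]   # the old email; history is never lost
now = as_of(t2)[("user-7", "email")]    # the current email
blame = tx_meta[t2]                     # who made the change, and why
```

A real triple store adds indexes over these tuples (EAVT, AEVT, etc.) so that both "current value" and "as of" lookups are cheap, but the data model is this simple.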
> Datomic binaries are provided under the Apache 2 license which grants all the same rights to a work delivered in object form.
So... no?
(I say that, but "Datomic binaries" presumably refers to compiled JVM class files; and JVM bytecode is notoriously easy to decompile back to legible source code, with almost all identifiers intact. Would Apache-licensing a binary, imply that you have the right to decompile it, publish an Apache-licensed source-code repo of said decompilation, and then run your own FOSS project off of that?)
I watched a lot of that and used Clojure fulltime for five years. Wonder what he's up to these days.
Edit: Oh, there are streaming tickets for $20.
How did that work out for you? Usually following a hype cycle, there is a negative hype cycle i.e. Mongo is webscale, then Mongo is a buggy mess.
Clojure seemed to just fade away. Did it turn out well or are there interesting pitfalls that make it not as great as advertised?
The best things about Clojure are things you don't really appreciate until you've already done the work to learn them.
For example, I never would have known how amazing it was to evaluate code inside the editor until I did the work of learning Emacs + evil-mode + nrepl/cider + whatever so that I could spin up my http server in-process and then modify code without restarting everything. Even today I'm doing `nodemon index.ts` like a goofball.
I stopped using Clojure simply when I met someone who wanted to build some big projects with me and, despite appreciating that Clojure was probably amazing, they simply couldn't be bothered to learn it. Fair enough. It was when Javascript was just getting `yield` coroutines (before async/await) which finally made Javascript bearable for me enough to switch to it for server work.
Clojure just has increasingly compelling languages and ecosystems to compete with, yet it has a huge ramp-up, being a Lisp, that makes it hard for people to choose it.
Just consider how, to even write Clojure comfortably, you really need something like Paredit. Otherwise, what exactly are you going to do if you want to nest or unnest a (form) deeper in the AST? Manually rebalance parens? Cut and paste it? Only Paredit lets you easily move a (form) around the AST. It's amazing, but it's yet another tool you have to learn just to truly evaluate Clojure.
That said, I've been (and currently am) a Clojure engineer for the past 5 years and am loving it. There are quite a lot of jobs out there, more each time I look, a healthy ecosystem, and a friendly community. It doesn't hurt that it's among the highest-paid programming languages as well.
> How did that work out for you?
For the personal projects, it's been incredibly useful. The language fits the way I think, and being built on the JVM, it has both a performant runtime and lots of access to a wide ecosystem of libraries.
The Clojure-specific library ecosystem has been accused of being stagnant. I tend to take a more charitable view. The libraries I've used have tended to be smaller in scope and more well defined in terms of feature set. This makes it easier for them to converge on a feature-complete state, and many of them have done just that. If you don't mind assembling multiple smaller libraries into the useful whole you need, this can provide a stable platform on which to build and develop expertise and higher level libraries.
For larger scale commercial work, it's a harder sell. As you've pointed out, Clojure is not hugely popular, so it's fundamentally a minority language. This can make VC's touchy about funding. This is true to the extent I'm aware of at least one organization that started moving away from Clojure for that reason.
There's also the shape of the learning curve. It can be hard to get started with Clojure because of the issues around the syntax and associated tooling. The more piecemeal aspect of the library ecosystem can then make it harder to hit the early successes a larger framework-oriented approach can give you out of the box. You can get there, but it at least takes more initial effort. The same is true for all the abstractive power of Clojure (and other Lisps). Abstractions are nice, but they take time to develop and the payoff is on a considerable lag. The useful rule about waiting to abstract until after you see 2 or 3 instances of a pattern means you need to at least have spent enough time to see those 2 or 3 instances (and maybe a few more) before you really start to see the payoff in your own code.
The net of all this is that it's a language that may make it more difficult to get funding, will be initially somewhat confusing to most developers, and the payoff may well be deferred to the point you don't see it before you give up (either out of frustration or due to external factors). All in all, a considerable set of headwinds.
So what does that mean? It's probably better for projects on a longer time horizon that have a team willing and able to put in extended effort to effectively use the language. (And if the team is not self-funded, good to have a funder with some ability to accept the risk of a non-conventional solution). Not saying these projects don't exist, just that they're not common enough to build a 'popular/mass-market' ecosystem on.
Also the ability to annotate transactions is awesome.
So many goodies.
Here's a good summary:
https://medium.com/@val.vvalval/what-datomic-brings-to-busin...
I think they went commercial way too fast, and needed a freemium model to actually get market share.
I had a few projects it would have been cool on, but I just did postgres instead and won in the long run.
There's a reasonably interesting writeup of the tech details that helps show off some of Datomic's value: https://www.zsolt.blog/2021/01/Roam-Data-Structure-Query.htm... https://news.ycombinator.com/item?id=29295532
I hope more companies consider this unusual arrangement, at least as an alternative to other approaches. Permissively licensed binaries can come in handy, though they certainly come with risks. For example, Microsoft released the binaries for its WebView2 SDK under the BSD license; this is nice of course, but the side effect is that we can (and did) reverse engineer the loader binary back to source code. I suspect that's unlikely to happen for any substantially large commercial product, and I am not a lawyer so I can't be sure this isn't still legally dubious, but it's still worth considering: the protections of a EULA are completely gone here if you just distribute binaries under a vanilla permissive open source license.
Were they hoping that people wouldn't notice?
The products even have github repos! There's just no source in them.
It's open source cosplay.
update: So, I asked ChatGPT and this is what I got:
----
ChatGPT> Do you know how Datomic and XTDB differ?
Yes, Datomic and XTDB are both database systems, but they differ in a few ways:
1. Architecture: Datomic follows a client-server architecture, where the client sends requests to the server for data manipulation. On the other hand, XTDB is designed as a distributed database system, where data is automatically partitioned across multiple nodes for high availability and scalability.
2. Query Language: Datomic uses the Datalog query language, which is similar to Prolog, for querying the database. XTDB, on the other hand, provides a modern and expressive query language based on Clojure.
3. Data Model: Datomic stores data in the form of entities, attributes, and values. XTDB uses a document-based data model, where each document is a self-contained unit of data.
4. Distributed Transactions: XTDB supports distributed transactions while Datomic does not. This means that XTDB can guarantee the consistency of data across multiple nodes in a distributed system, while Datomic cannot.
5. Open Source: XTDB is open-source, while Datomic is a commercial product, which requires a license to use.
Overall, both Datomic and XTDB have their own unique features and advantages, and the choice between the two depends on the specific requirements and use case of the application.
----
I dunno, it sounds like XTDB is a clear winner here, no?
> You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that (...)
Applying this only to their binaries directly contradicts what the license says.
A copyright license is a copyright license: in theory, all a copyright license does is give you additional rights to use something. Using a license like Apache 2 for binaries is somewhat unconventional, but it's totally possible. It (obviously) does not give you access to the source code, and I think this could never work with the GPL and other copyleft licenses because they use wording that implies you need to distrubute the source code, which you don't have.
The copyright owner, of course, has ownership, so their obligations don't really change by virtue of giving someone a copyright license. As far as I know, they could give someone a license to use something that is completely invalid and could never actually be used, and they can definitely do things like stop distributing under one license and switch to another.

They own the source code, and they own the binaries (I believe the binaries would be considered a sort of derivative work in copyright terms, but again, not a lawyer). So when they distribute a binary under a given license, it's unrelated to any particular distribution of source code.

The only time this gets complex is when the ownership of an asset is split among many disparate parties, at which point everyone is pretty much beholden to the copyright licenses, like open source projects without CLAs. But if they own the source code entirely, they could, for example, distribute some source code under GPL, but then distribute modified binaries under a commercial license with a EULA, and not redistribute the modified source code, since it's their code to license to others, not a license they are subjected to themselves.
It's certainly weird for the binary license to be Apache, rather than some proprietary EULA, though.
> You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form
This is actually an interesting question. But I can’t see how a binary only distribution would be in the spirit of the Apache license.
I can feel the internal open vs. closed source argument from here.
To some, the answer is “open source” no matter the question. Hello wagging tail, meet dog.
> Datomic binaries are provided under the Apache 2 license which grants all the same rights to a work delivered in object form.
That doesn't answer the question at all. I assume the answer is no, because otherwise they would just say yes, and have a link to the source code somewhere. But that is such a weird, and possibly duplicitous way to answer.
This is cool as well. It's a CloudFormation template based product you can deploy from AWS Marketplace.
In this case Datomic maintains development control over their product and "source of truth" is still themselves, and the implicit assumption is that you enthusiastically use their product for free with no strings attached because you respect them as the source of truth.
Freeware has been a thing for mere four decades now.
Using it was pretty nice at the scale of a small startup with a motivated team, but scaling it up organizationally-speaking was a challenge due to Datalog's relative idiosyncrasy and poor tooling around the database itself. This was compounded by the parallel challenge of keeping a Clojure codebase from going spaghetti-shaped, which happens in that language when teams scale without a lot of "convention and discipline"--it may be easier to manage otherwise. All of that said, this was years ago so maybe things have changed.
At this point I'd choose either PostgreSQL or SQLite for any project I'm getting started with, as they are both rock-solid, full-featured projects with great tooling and widespread adoption. If things need to scale a basic PostgreSQL setup can usually handle a lot until you need to move to e.g. RDS or whatever, and I'm probably biased but I think SQL is not really that much worse than Datalog for common use-cases. Datalog is nice though, don't get me wrong.
EDIT: one point I forgot to make: the killer feature of being an immutable data store that lets you go back in time is in fact super cool, and it's probably exactly what some organizations need, but it is also costly, and I suspect the number of organizations who really need that functionality is pretty small. The place I was at certainly didn't, which is probably part of the reason for the friction I experienced.
[1] https://docs.datomic.com/pro/api/io-stats.html [2] https://docs.datomic.com/pro/api/query-stats.html
- It is possible to make queries against the database PLUS additional data not yet added, that is, "what if" queries
- Having a stable database-as-value is really useful for paginating results; you don't have to worry about new values being inserted into your results during execution, the way you do with traditional databases, no matter how long (minutes, hours, even days) you take to traverse the data
- Reified transactions makes it possible to store extra data with each transaction, trivially, such as who made the update and why
- Immutability is amazing for caching at all layers
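The pagination point is easy to demonstrate with a toy Python sketch, where a frozen "snapshot" plays the role of the stable database value and a plain list plays the role of a live, mutating table:

```python
# Hypothetical sketch: pages cut from one frozen snapshot never shift or
# duplicate, no matter what concurrent writes do to the live data.
live = list(range(10))          # mutable, shared "current" data
snapshot = tuple(live)          # the value we paginate over, frozen at query time

def page(db, page_no, size=4):
    return list(db[page_no * size:(page_no + 1) * size])

first = page(snapshot, 0)       # [0, 1, 2, 3]
live.insert(0, -1)              # a concurrent write lands between page reads
second = page(snapshot, 1)      # [4, 5, 6, 7]: unaffected by the insert
naive_second = page(live, 1)    # [3, 4, 5, 6]: offset pagination repeats row 3
```

With naive offset pagination over mutable data, row 3 appears on both page one and page two; against the snapshot, every row appears exactly once regardless of how long the traversal takes.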
The development experience is extremely nice using Clojure. I've used it for two other projects and it has been very reliable. My latest project didn't really need any of its features compared to a traditional RDBMS, but I opted for it anyway so I don't have to write SQL.
> “Datomic added to DynamoDB was the only option that didn’t force us to sacrifice features or add the expense of developer time. Without it, we would have had to push back a lot more, as the features would have been too difficult.”
(https://www.datomic.com/the-brigades-story.html)
like, what? effectively useless information.
some of the other testimonials mention keeping revision history, which is neat, but why Datomic vs. others? it's pretty easy to keep revision history with other databases too.
[1] https://github.com/terminusdb/terminusdb [2] https://github.com/vaticle/typedb
> Is it Open Source?
Datomic binaries are provided under the Apache 2 license which grants all the same rights to a work delivered in object form.
Datomic will continue to be developed at Nubank, where it is a critical piece of our infrastructure.
"No, the source is not available, and the product will continue to be developed by us, internally. However, binaries are provided..."
Doesn't this mean, that, as soon as I (somehow) get hold of the source code, I can distribute it as I want?
If you want historical audit trails, make them intentional and subject to the same rules and patterns as your regular data.
My view is that Datomic is a novel upstart in the persistence space. Most of their competition - Postgres, Mongo, Cassandra - is open-source, so they're just shooting themselves in the foot. The "pay us extra for convenient hosting and consulting" model isn't threatened by open-source in the slightest.
The only thing I can think of is that they're trying to compete with Oracle/Db2/SQL server, but I can't imagine an enterprise eyeing any of those solutions ever giving Datomic a chance.
I always wonder if this sort of move portends an exit of some of the core technical team, who would very much like to fork the codebase and move on, but in this case with only the binaries being opened up, it feels more as though they want some more people to try Datomic out. Databases such as Neo4J do this as well - free to run, but you'll probably want to pay for support.
Actually, you get the best of both worlds. Plenish is a library that allows you to sync the content of Datomic to Postgres: https://github.com/lambdaisland/plenish
> This section only applies to Datomic 990-9202 and lower. Newer versions of Datomic Cloud will be free of licensing related costs, and you will only pay for the hardware that you use to run the system.
2007 - the Clojure programming language is announced by Rich Hickey and gains quite a bit of traction over the next 5 or 6 years. It never becomes a "top 5" language, but it could still today be arguably considered a mainstream language. It's been endorsed as "the best general purpose programming language out there" by "Uncle" Bob Martin[1] (author of Clean Code) and Gene Kim[2] (auther of The Phoenix Project, the seminal DevOps book). The fact that Rich spent two years working on it without pay and without the commercial backing many other languages enjoy is a real testament to his commitment and his vision. A Clojure-related emacs package[3] quotes Rich when it starts a REPL: "Design is about pulling things apart."
2012 - the Datomic database is announced by Rich Hickey's company. The database is praised for its ingenuity and its "time travel" features. It was designed to be deployed anywhere in the beginning, but, over time, it became difficult to deploy outside of AWS environments and even the AWS deployment path was quite cumbersome--the Datomic marketing page used to feature a maze-like diagram of all the AWS-specific features needed to make the thing work (it would be nice to find a link to that picture); I'd think most companies would have trouble digesting that and integrating it into their technology "stack".
2020 - Nubank (a Brazilian fintech backed by at least one US venture firm and a large production user of Datomic) acquires Rich Hickey's company. It appears Datomic never gained much use outside of a handful of companies. Making it free of charge (2023) may be the cost-effective thing to do in such a situation, if it costs more to handle billing and payments than they bring in. The reason they're not releasing the source code could be a legal one, or simply the fact that open sourcing a large piece of software takes a lot of effort, something a for-profit financial services company like Nubank doesn't prioritize (rightly so).
1: https://blog.cleancoder.com/uncle-bob/2019/08/22/WhyClojure.... 2: https://itrevolution.com/articles/love-letter-to-clojure-par... 3: https://github.com/clojure-emacs/cider/blob/master/cider-uti...
so, less than useful if you want to study and modify datomic; you may have the legal "right to repair" but not the practical possibility
I will also argue that 90% of those don't need this. Just seeing the term "web scale" makes me shy away.
What have you thought about or read so far?
Hope it goes open-source as well later on.
Thanks guys!