As I was working on a side project, I noticed I wanted to use SQLite like a document database on the server. So I built DocuLite. DocuLite lets you use SQLite like Firebase Firestore. It's written in TypeScript and is an adapter on top of sqlite3 and sqlite.
Reasons:
1) Using an SQL database meant less flexibility and slower iteration.
2) Alternative, proven Document Databases only offered client/server support.
3) No network. Having SQLite server-side next to the application is extremely fast.
4) Replicating Firestore's API makes it easy to use.
5) Listeners and real-time updates enhance UX greatly.
6) SQLite is a proven, stable, and well-liked standard, and apparently one of the most-deployed software modules right now. (src: https://www.sqlite.org/mostdeployed.html)
What do you think? Feel free to comment with questions, remarks, and thoughts.
Happy to hear them.
Thanks
The npm sqlite package used to have sqlite3 as a direct dependency in older major versions and most of the support issues were against sqlite3 instead of sqlite. Taking that dependency out and having the user inject it into sqlite instead removed 99% of the support issues. It's also really nice that sqlite has no dependencies at all.
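The injection idea can be sketched in a few lines. This is a minimal illustration of the pattern, not DocuLite's or sqlite's actual API; `Driver` and `Doculite` here are hypothetical names, and the stub driver just records statements:

```typescript
// The library defines only an interface; it never imports a concrete driver,
// so it has zero runtime dependencies of its own.
interface Driver {
  run(sql: string): void;
}

class Doculite {
  // The user supplies the driver instance at construction time.
  constructor(private driver: Driver) {}

  createCollection(name: string): void {
    this.driver.run(`CREATE TABLE IF NOT EXISTS ${name} (value TEXT)`);
  }
}

// Any object satisfying the interface works -- here, a stub for demonstration.
const statements: string[] = [];
const stubDriver: Driver = { run: (sql) => statements.push(sql) };

new Doculite(stubDriver).createCollection("users");
```

In real use, the injected object would be whatever `open()` from the sqlite package (or a better-sqlite3 instance behind an adapter) hands back.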
If you go the adapter pattern route, you can support other sqlite libraries like better-sqlite3. Sometimes sqlite/sqlite3 doesn't fit a user's use-case and alternative libraries do.
Same deal with the pub/sub mechanism. You have the Typescript types defined to create abstractions. Would be nice to see adapters for Redis streams / kafka / etc, as in-memory pub/sub may not cut it after a certain point.
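To make the pub/sub point concrete, here is one way such an abstraction could look. The `PubSub` interface and class names are hypothetical, and the in-memory default is built on Node's standard `EventEmitter`:

```typescript
import { EventEmitter } from "node:events";

// Hypothetical abstraction -- not DocuLite's actual types.
interface PubSub {
  publish(topic: string, payload: unknown): void;
  subscribe(topic: string, handler: (payload: unknown) => void): void;
}

// In-memory default; fine within a single process.
class InMemoryPubSub implements PubSub {
  private emitter = new EventEmitter();
  publish(topic: string, payload: unknown): void {
    this.emitter.emit(topic, payload);
  }
  subscribe(topic: string, handler: (payload: unknown) => void): void {
    this.emitter.on(topic, handler);
  }
}

// A RedisPubSub or KafkaPubSub would implement the same interface,
// so listener code never has to change when the transport does.
const bus: PubSub = new InMemoryPubSub();
const seen: unknown[] = [];
bus.subscribe("users/abc", (doc) => seen.push(doc));
bus.publish("users/abc", { id: "abc", name: "Ada" });
```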
Great start on your library!
So exporting one Database that lets people pick which driver they want to use (incl. better-sqlite3) might be a simple and user-friendly solution, and it's something people have asked for.
- https://github.com/haxtra/kvstore-sqlite (Basic key-value store for SQLite databases.)
- https://github.com/haxtra/super-sqlite3 (Fast SQLite library with optional full db encryption, simple query builder, and a host of utility features, all in one neat package.)
- https://github.com/haxtra/live-object (Standard javascript object with built-in JSON serialization to file. Dreams do come true sometimes.)
All from github user: https://github.com/haxtra
I think the super-sqlite3 source might also be an inspiration for the 'driver' topic: "super-sqlite3 is a thin wrapper around better-sqlite3-multiple-ciphers, which extends better-sqlite3 (the fastest SQLite library for node.js) with full database encryption using SQLite3MultipleCiphers. super-sqlite3 then adds its own query builder and other convenience features."
And do check out this user's XRay (JavaScript object browser component) library for your preferred component framework.
In my bookmarks I also found these other related and interesting links: - https://dgl.cx/2020/06/sqlite-json-support (An article about SQLite as a document database, using the relatively new 'generated columns' feature of SQLite 3.31.0, which you seem to be using)
- https://www.npmjs.com/package/best.db (easy and quick storage)
- https://tinybase.org (This project seems to be an even more similar idea to Doculite) https://github.com/tinyplex/tinybase (The reactive data store for local-first apps.)
Good luck with your project!
Do you have examples of what you mean?
Do you just mean in the code or is this something about how it is imported as a dependency in package.json?
https://github.com/kriasoft/node-sqlite
In the src/index.ts file, the `open()` function takes in an instance of sqlite3, rather than the file importing it from the sqlite3 package itself.
An adapter / driver pattern would extend this where you can interchange usage of either the sqlite package or an alternative.
For example, let's say you have two SQLite drivers. They both allow you to execute a statement, but their methods and maybe parameters are named differently. One might expose an `exec()`, while the other exposes a `run()` to do the same thing.
Your codebase is probably coded against one or the other, which means you can't freely interchange the drivers.
So to support this, you create an interface with methods that describe what you want to do. In the above case, you might describe a method called `executeStatement()`.
Then you create two classes, one for each driver. They both implement `executeStatement()`, but under the hood, they'll run `exec()` and `run()` in their implementations.
So your main code now accepts an instance of anything that implements your interface, and instead of calling `exec()` directly, you'll be calling `executeStatement()`, which under the hood calls `exec()`.
So it goes like this:
- Define common interface for interacting with different database libraries (the abstraction)
- Implementation class using that interface (the drivers)
- Your constructor takes in anything that conforms to that interface
- The user creates an instance of your implementation (eg SqliteDriver) and feeds in the instance of the driver (eg `new SqliteDriver(<output of npm sqlite open()>)`)
- Your code calls the interface methods in place of the actual database calls instead
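The steps above can be sketched like this. `DriverA` and `DriverB` are imaginary drivers standing in for, say, sqlite and better-sqlite3, and they record statements so the example is self-contained:

```typescript
// The common abstraction
interface DatabaseDriver {
  executeStatement(sql: string): void;
}

// Two imaginary drivers with differently-named methods
class DriverA {
  log: string[] = [];
  exec(sql: string): void { this.log.push(sql); }
}

class DriverB {
  log: string[] = [];
  run(sql: string): void { this.log.push(sql); }
}

// One adapter per driver, each implementing the interface
class DriverAAdapter implements DatabaseDriver {
  constructor(private db: DriverA) {}
  executeStatement(sql: string): void { this.db.exec(sql); }
}

class DriverBAdapter implements DatabaseDriver {
  constructor(private db: DriverB) {}
  executeStatement(sql: string): void { this.db.run(sql); }
}

// Main code only ever sees the interface
class Database {
  constructor(private driver: DatabaseDriver) {}
  setRaw(sql: string): void { this.driver.executeStatement(sql); }
}

const a = new DriverA();
new Database(new DriverAAdapter(a)).setRaw("INSERT INTO users ...");
const b = new DriverB();
new Database(new DriverBAdapter(b)).setRaw("INSERT INTO users ...");
```

`Database` never knows whether `exec()` or `run()` ran underneath; swapping drivers is just a matter of passing a different adapter.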
You had something like:
db.collection('users').set(..)
So rather than calling SQLite `run()` directly, that `set()` call would invoke your interface method `executeStatement()` (which calls `run()`) instead.

It looks like the "Bridge" pattern is what you want here:
https://www.phind.com/agent?cache=cll1zw1n60010la08mn89dd8g
The generated code is Java, but the idea and concept is the exact same that I've described above.
The way _I_ would expect to do this is something like this:
const ref = db.collection('page').doc('foo');
while (true) {
  const current = await ref.get();
  try {
    await ref.set({ likes: current.likes + 1 }, { when: { likes: current.likes } });
    break;
  } catch {
    continue;
  }
}
If `set()` attempts to write to the ref when the conditions in `when` are not matched exactly, the write should fail and you should have to retry the operation. In this example, the `set()` call increments the likes value by one, but specifies that the write is only valid if `likes` equals the value the client read. In the scenario I described, one of the two concurrent requests would fail, retry the write, and succeed on the second go.

Consider the case where a user is submitting an e-commerce order. You want to mark their order as processed and submit it for fulfillment. If you read the order to check whether it's already submitted, two requests to submit it made at almost the same time (e.g., hitting the button twice) will both read that it's unfulfilled and each try to submit it.
By doing an atomic write you can be sure that at most one request submits the order.
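The compare-and-set semantics that a `when` option would need can be modeled in a few lines. Note this is purely a sketch of the semantics against an in-memory map; `Store` is a hypothetical class, not anything DocuLite ships:

```typescript
type Doc = Record<string, unknown>;

// Minimal in-memory model of the conditional-write ("when") semantics.
class Store {
  private docs = new Map<string, Doc>();

  get(id: string): Doc {
    return this.docs.get(id) ?? {};
  }

  // The write succeeds only if every field in `when` matches the stored doc.
  set(id: string, value: Doc, opts?: { when?: Doc }): void {
    const current = this.get(id);
    if (opts?.when) {
      for (const [key, expected] of Object.entries(opts.when)) {
        if (current[key] !== expected) throw new Error("precondition failed");
      }
    }
    this.docs.set(id, { ...current, ...value });
  }
}

const store = new Store();
store.set("foo", { likes: 0 });

// Two requests read likes: 0 concurrently.
const a = store.get("foo").likes as number;
const b = store.get("foo").likes as number;

store.set("foo", { likes: a + 1 }, { when: { likes: a } }); // first write wins

let conflicted = false;
try {
  store.set("foo", { likes: b + 1 }, { when: { likes: b } }); // stale precondition
} catch {
  conflicted = true; // this request would re-read and retry
}
```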
https://cloud.google.com/firestore/docs/samples/firestore-da...
What are you using for this today?
If you're using SQLite, you can use exclusive transactions to perform the read+write, but I'm sure there's probably a more efficient way to go about it. You can craft an UPDATE that selects on the primary key and the condition and then use sqlite3_changes() to get back the number of records modified (failing the operation if it's zero), but that may not be possible with your setup.
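For the e-commerce case, the conditional UPDATE would look like `UPDATE orders SET status = 'processed' WHERE id = ? AND status = 'pending'`, after which you check the changes count (sqlite3_changes() in C, or the `changes` field better-sqlite3 returns from `run()`). Here is the same logic modeled on a plain array so it runs standalone; the function name and shape are illustrative only:

```typescript
interface Order { id: number; status: string }

// Simulates: UPDATE orders SET status = 'processed'
//            WHERE id = ? AND status = 'pending'
// returning the number of rows changed, as a real driver would report.
function submitOrder(orders: Order[], id: number): number {
  let changes = 0;
  for (const o of orders) {
    if (o.id === id && o.status === "pending") {
      o.status = "processed";
      changes++;
    }
  }
  return changes; // the caller rejects the request when this is 0
}

const orders: Order[] = [{ id: 42, status: "pending" }];
const first = submitOrder(orders, 42);  // 1 row changed: order submitted
const second = submitOrder(orders, 42); // 0 rows changed: already processed
```

Because the condition and the write happen in one statement, at most one of the two concurrent submits ever sees a nonzero change count.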
https://github.com/WiseLibs/better-sqlite3#why-should-i-use-...
As for on the server, no. SQLite is a C library, not a separate application; the work happens inside the Node process. Regardless of how you do it, any call into SQLite is going to block. Adding promises or callbacks on top of that is just wasting CPU cycles, unlike reading from the filesystem or making a network request, where the work is offloaded to a process outside of Node (and hence why it makes sense to let Node do other things instead of waiting).
In fact, if you synchronously read and write within a single function with no awaits or timeouts in between, you don't have to worry about atomicity: no other request is being handled in the meantime.
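That single-threaded guarantee (and how an `await` breaks it) can be demonstrated without any database at all. This sketch uses `await Promise.resolve()` as a stand-in for an awaited driver call:

```typescript
let likes = 0;

// Unsafe: an await between read and write lets another request interleave.
async function incrementWithAwait(): Promise<void> {
  const read = likes;      // read
  await Promise.resolve(); // stand-in for an awaited driver call
  likes = read + 1;        // write back a possibly stale value
}

// Safe: no await between read and write, so nothing can interleave.
function incrementSync(): void {
  likes = likes + 1;
}

await Promise.all([incrementWithAwait(), incrementWithAwait()]);
const afterAsync = likes; // 1 -- one update was lost

likes = 0;
incrementSync();
incrementSync();
const afterSync = likes; // 2 -- both updates applied
```

Both async calls read `likes` as 0 before either writes, so one increment is lost; the synchronous version cannot be interrupted between read and write.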
https://www.sqlite.org/json1.html
Edit: where is the database file stored? A parameter for the Database() constructor seems obvious, but not seeing it in the basic sample.
It looks like your tables have a single value column and an id generated column that extracts $.id from that value:
CREATE TABLE IF NOT EXISTS ${collection} (
value TEXT,
id TEXT GENERATED ALWAYS AS (json_extract(value, "$.id")) VIRTUAL NOT NULL
)
GENERATED ALWAYS AS was added in a relatively recent SQLite version, 2020-01-22 (3.31.0). Do you have a feel for how likely it is for Node.js users to be stuck on an older version? I've had a lot of concern about Python users who are on a stale SQLite for my own projects.

Honestly, that sounds absolutely frightening to my ear. There are Redis, Mongo, Orient, and plenty of other document-based rapid-development databases that will be a substantially better solution than turning SQLite into Firebase.
Yes, you can use data change notification callbacks. It is a cool feature of SQLite, but did you know that their performance is a big concern in a large database (a document database grows very quickly by design)? What about COUNTs? Batching operations? Deadlocks? This will get out of hand quickly, because a hammer is being used as a shovel here.
You will unveil more problems with SQLite for this job as you dig deeper. What you really want is fast writes and dumb querying by document id, whereas SQLite gives you a full querying suite that you don't utilize but still have to pay for with slower writes. This is a classic system-design problem. Just try rewriting your collection and document abstractions on top of a proper document data store and you will see how much less complicated it is.
Addressing main points like implementing atomic transactions (read and write operation on a doc) seems warranted since it exists in Firebase as well.
- https://github.com/Rolands-Laucis/Socio
- https://www.youtube.com/watch?v=5MxAg-h38VA&list=PLuzV40bvrS... (video updates of the code and functionality)
b) You can't use IndexedDB on the server, so you wouldn't be able to write sync code that runs on both the client and the server
Modern offline-first applications include a local backend every bit as complex and demanding as a server-based backend.
> 6) SQLite is a proven, stable, and well-liked standard.
How does adding this adapter on top affect the stability?
Have you looked at SurrealDB? - https://surrealdb.com/
Seems like it would provide you with what you need.
We currently use Realm for this use case. It’s a local database just like SQLite, but with native support for documents and listeners. Did you try that out?
I've tried the second, but the time comes when you need to (re)order items, which gets clumsy.
how is this implemented?