Unlike PostgREST, Hasura, and some other dynamic tools, you can "eject" at this point if you'd like and continue development without the generator, in a language you already know. But I'm working on exposing Lua-based hooks you could carry across whatever backend language you choose to generate, to avoid the need to eject.
It has built-in support for paginated bulk GET requests with filtering, sorting, and limiting, plus built-in bcrypt password authentication and optional SQL filters, specified in configuration, for authorizing particular endpoints based on session and request metadata.
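For context, a generated bulk GET endpoint like that typically just maps query parameters onto SQL clauses. A rough Python sketch of the translation; the parameter names (`filter_column`, `sort`, `limit`, `offset`) and the default page size are illustrative assumptions, not dbcore's actual API:

```python
# Hypothetical translation of bulk-GET query params into a parameterized
# SELECT. A real generator would whitelist column names against the schema
# before interpolating them, to avoid SQL injection.

def build_bulk_query(table, params):
    """Build (sql, args) for a paginated, filtered, sorted bulk read."""
    sql = f"SELECT * FROM {table}"
    args = []
    if "filter_column" in params:
        sql += f" WHERE {params['filter_column']} = ?"
        args.append(params["filter_value"])
    if "sort" in params:
        direction = "DESC" if params.get("desc") else "ASC"
        sql += f" ORDER BY {params['sort']} {direction}"
    limit = int(params.get("limit", 25))   # default page size: an assumption
    offset = int(params.get("offset", 0))
    sql += " LIMIT ? OFFSET ?"
    args += [limit, offset]
    return sql, args

sql, args = build_bulk_query("notes", {"sort": "created_at", "limit": "10"})
print(sql)   # SELECT * FROM notes ORDER BY created_at ASC LIMIT ? OFFSET ?
print(args)  # [10, 0]
```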
Still very much a work in progress, but the goal is to push the envelope on application boilerplate.
Screenshots are of the example/notes project in the repo.
My goal in building this is to let myself prototype real, complex applications more rapidly. It's not there yet, but I've got such an application in mind and am building toward supporting it as I develop this.
If I were to use something like this, it'd be for rapid development initially for prototyping purposes. Then I'd transition to something more bespoke as needed.
If you know all your requirements up front, then starting bespoke from the beginning may be the better route.
Rails' generators are a pretty good starting point here, if a little more verbose than I'd like. They're great for getting the boilerplatey bits off the ground and letting you focus on the bits that are unique to what you're doing.
It also integrates with graphql-codegen, so you can generate client code for Apollo and others.
With Strapi you configure your DB and get code generated in JS that supports a standard CRUD REST API. If you want to add business logic, you can override any particular endpoint you want. Their docs even include the default implementation for easy copy/paste.
I would love to see research in this space continue, I think it's the future of bringing non-technical people into the product development process (if you can understand building a workflow with Excel/Google Sheets/Airtable, you can understand building an API). I'm excited to check out your project.
The UI lets you build out the schema, and all the CRUD interfaces and CRUD API endpoints come pre-built.
But the code is still all there, and you can add additional controller actions, model hooks, and services.
I haven't run into any hard limitations yet; I've been very impressed.
This project doesn't seem to have any inbuilt AuthZ functionality, so unless your database has it built in (as Postgres does), or if you need something that isn't possible in-database, I guess you just... can't.
The biggest problem is that unless the main language you’re familiar with is PL/pgSQL, you’ll eventually run into the roadblock that is having everything reside in the DB.
In my case, I simply could not figure out how to use the Users table without the password being returned in every query. I could turn off the automatically generated queries that Postgraphile made, but at that point, why bother with it?
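One common in-database workaround is to expose a view that omits the password column and point generated queries at the view instead of the table. A minimal sketch using Python's stdlib sqlite3 (the same idea works in Postgres with a view plus column-level grants; the table and column names here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'a@example.com', '$2b$12$hash')")

# Expose only the safe columns through a view; generated queries
# would then target public_users, never users directly.
conn.execute("CREATE VIEW public_users AS SELECT id, email FROM users")

row = conn.execute("SELECT * FROM public_users").fetchone()
print(row)  # (1, 'a@example.com') -- no password_hash column
```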
Another problem is that there isn't a mature, well-established method of maintaining development, staging, and production database schemas on projects with more than one developer.
Graphile maintains https://github.com/graphile/migrate; however, it seems incredibly brittle and is built around a workflow that will absolutely break production if you deviate from it even slightly.
I really liked using it! I just hope some of these developer ergonomics issues could be better handled.
This is a little limited in its current form, and I'm working to expand SQL filters to map to HTTP status codes, so you can say: this request needs a session, otherwise it's a 401; and it also needs to match another filter, otherwise it's a 403. But this other endpoint is fine to serve without a session if the object being requested is marked public in the db.
There's a lot to think through especially when extending these filters to bulk methods.
https://github.com/eatonphil/dbcore/blob/master/examples/not...
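The filter-to-status-code idea described above could evaluate roughly like this. This is a sketch of the described design, not actual dbcore code; the filter shapes and field names are made up:

```python
# Hypothetical evaluation of ordered (check, failure_status) authorization
# filters. Each check sees the session and the requested object; the first
# failing check decides the HTTP status, otherwise the request is allowed.

def authorize(filters, session, obj):
    for check, failure_status in filters:
        if not check(session, obj):
            return failure_status
    return 200

note_filters = [
    # Need a session, unless the object is marked public in the db -> 401
    (lambda s, o: s is not None or o.get("public"), 401),
    # If there is a session, it must own the object -> 403
    (lambda s, o: s is None or s["user_id"] == o["owner_id"], 403),
]

print(authorize(note_filters, None, {"public": True, "owner_id": 2}))            # 200
print(authorize(note_filters, None, {"public": False, "owner_id": 2}))           # 401
print(authorize(note_filters, {"user_id": 1}, {"public": False, "owner_id": 2})) # 403
```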
The main use case is giving a data scientist or another application access to the results of a few arbitrary queries without giving them full access to the database. So it's a bit like giving them access to a SQL view, but without them needing to set up a driver, etc. to connect.
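That pattern, exposing a handful of named queries rather than the whole database, can be sketched in a few lines with stdlib sqlite3. The consumer can only run queries you've named, never arbitrary SQL; the query name and schema here are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, region TEXT, total REAL);
    INSERT INTO orders VALUES (1, 'eu', 10.0), (2, 'us', 25.0), (3, 'eu', 5.0);
""")

# The only queries consumers may run; everything else is off-limits.
NAMED_QUERIES = {
    "revenue_by_region":
        "SELECT region, SUM(total) FROM orders GROUP BY region ORDER BY region",
}

def run_named_query(name):
    if name not in NAMED_QUERIES:
        raise KeyError(f"no such query: {name}")
    return conn.execute(NAMED_QUERIES[name]).fetchall()

print(run_named_query("revenue_by_region"))  # [('eu', 15.0), ('us', 25.0)]
```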
I work for a large corporation. They want to implement the Bezos mandate [1]. No direct database access between teams, API abstraction for everything.
OK, now let's think in onion layers (or hexagonal/clean architecture if you like). Think of layers of services with different purposes - data services (containing no application/business logic), application services (for business logic, orchestration, process), and UI/UX services (to power differing end user experiences).
Data services don't have to do much - be the data/repository layer, expose productive interfaces for CRUD. Need to read across data sources? Think of federated data services that can combine data on the fly, perhaps like GraphQL.
These kinds of tools are perfect for the first layer of services that abstract the database world from the application world. Just simple services, even ones that effectively let you mimic what SQL queries can do (filter, sort, page, etc.). Individual record-oriented interfaces, and bulk-oriented interfaces. The query side of CQRS (command query responsibility segregation).
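The shape of such a data service is deliberately thin. A minimal Python sketch of the two interface styles, record-oriented and bulk, with an in-memory dict standing in for the real storage (all names here are illustrative):

```python
# Illustrative data-service layer: no business logic, just the repository
# interface. Record-oriented get/put plus a bulk query that mimics
# SQL-ish filter/sort/page semantics.

class NotesDataService:
    def __init__(self):
        self._rows = {}

    # Record-oriented interface
    def get(self, id):
        return self._rows.get(id)

    def put(self, id, row):
        self._rows[id] = row

    # Bulk interface: the query side of CQRS
    def query(self, filter=None, sort_key=None, limit=None, offset=0):
        rows = [r for r in self._rows.values() if filter is None or filter(r)]
        if sort_key:
            rows.sort(key=sort_key)
        end = offset + limit if limit is not None else None
        return rows[offset:end]

svc = NotesDataService()
svc.put(1, {"id": 1, "title": "b"})
svc.put(2, {"id": 2, "title": "a"})
print(svc.query(sort_key=lambda r: r["title"], limit=1))  # [{'id': 2, 'title': 'a'}]
```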
Many will say, "I don't need all this complexity and layers" - and sure, for smaller or simpler applications, probably not!
But, if you have to operate on any kind of larger scale, with multiple data sources, systems, etc., you end up needing the layers. And these types of tools automate some of the lower layers.
Perhaps when we talk about this, we shouldn't be focusing on "oh it's too complicated", and instead building frameworks or reference architectures that automate away the complexity - so it looks easy again, but now it is more flexible, perhaps easier to scale.
I believe we are still on the cusp of an almost fully defined, service-based architecture (microservices and serverless were just one part of the continuing development of that story). Federation is another part of that story, oft ignored. Thinking of the onion as service layers is another part. Erasing the network boundary as a concern through much higher-speed internetworking is another part.
Eventually we may come to see, that it is all a big "system", some parts just aren't connected to each other directly.
Sorry, got a bit rant-y at the end there :) Just passionate about sharing this world view with others - as I continue to see this architecture developing!
[EDIT] I wanted to add, it's not just that the use case for this is in a data service layer for automation - from a logical perspective I mean. In big companies such as the one I work for, we never get the resources we need, ironically. We are overwhelmed with demands, and must operate under the Bezos mandate rules. Tools such as Octo are not a panacea, but, they are a good compromise if you have to move fast, they are time-and-cost-savers. And they can get you surprisingly far.
[1] https://www.calnewport.com/blog/2018/09/18/the-human-api-man...
If you like this, check out OctoSQL[0], also in Go. OctoSQL lets you query multiple databases, files, and event streams like Kafka using SQL from your command line, not as a server, so it's a fairly different use case, but you should check it out nevertheless!
The naming clash is funny.
I'm wondering what role Badger will play going forward. It would also make a great additional KV backend, btw.
We're considering moving to a more in-memory model, as we're not sure if the badger storage idea was a good one and worth it.
TBH we're still not quite sure in what direction we'll be continuing. Though we're surely gonna be developing it further.
But currently we're considering a rewrite with multiple assumptions changed (column-oriented).
I think there's certainly space for this project, i.e. hand-written queries, on any database (Postg[REST|raphile] both only work with Postgres of course, not sure about Hasura). Not sure it will succeed without support for more forms of Serverless deployment, primarily Lambda.
Also not keen on the passwords being kept in a plaintext file; someone will check that into git. OpenFaaS has secret support which you could use, Amal. So does Knative.
We're in a similar space: we take DB parameters as input and generate CRUD APIs with auth + ACL, and the APIs are then packed into a single Lambda function. There's support for the Serverless framework as well.
? Enter the database password: ${DB_PASSWORD}
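For what it's worth, reading the secret from the environment at startup (rather than a checked-in file) is a one-liner in most languages. A Python sketch; the variable name DB_PASSWORD just matches the prompt above:

```python
import os

def get_db_password(env=os.environ):
    """Read the secret from the environment so it never lands in git.
    Fail fast if it's missing rather than falling back to a default."""
    password = env.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("DB_PASSWORD is not set; refusing to start")
    return password

# e.g. run as: DB_PASSWORD=s3cret python app.py
print(get_db_password({"DB_PASSWORD": "s3cret"}))  # s3cret
```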
Are folks using these kind of things for non-trivial production applications?