It's all fun and games in the greenfield happy path. Made the wrong modification to someone's bank account? Just update the code so you no longer make that modification. Leave the bad data in place. Or, if you want to fix the bad data, maybe reach out to the customer to ask what their balance should be. Or check the dev logs (you do make your dev logs a better source of truth than the database, don't you?). Once you 'know' what the balance should be, just use your CRUD operation to set things right, a.k.a. "create money".
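The alternative the sarcasm points at is an append-only ledger, where a correction is a new compensating entry rather than an UPDATE, so the audit trail itself is the source of truth. A minimal sketch (table and names are illustrative, not any real bank's schema):

```python
import sqlite3

# Hypothetical append-only ledger: rows are only ever INSERTed,
# so the full history of what happened is always queryable.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ledger (account TEXT, amount INTEGER, reason TEXT)")

con.execute("INSERT INTO ledger VALUES ('alice', 100, 'deposit')")
con.execute("INSERT INTO ledger VALUES ('alice', -30, 'erroneous fee')")
# Instead of UPDATE-ing a balance column, append a compensating entry:
con.execute("INSERT INTO ledger VALUES ('alice', 30, 'reversal of erroneous fee')")

# The balance is derived, not stored, so it can't silently drift.
(balance,) = con.execute(
    "SELECT SUM(amount) FROM ledger WHERE account = 'alice'").fetchone()
print(balance)  # 100
```

With this shape, "fixing the bad data" is just another well-documented transaction instead of a mystery UPDATE.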
I agree with the article that exposing R and U interfaces on all entities is a completely natural, human way of thinking about it. It allows for completely intuitive patterns like "check-then-act" and "read-modify-write" (which are also the names of race conditions).
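The classic lost-update flavour of read-modify-write, with the two "threads" interleaved by hand so the outcome is deterministic (names are illustrative):

```python
# Two concurrent deposits of 50 against a starting balance of 100.
# Each does read -> modify -> write; the interleaving loses one of them.
balance = 100

t1_read = balance           # thread 1 reads 100
t2_read = balance           # thread 2 reads 100 (before thread 1 writes)
balance = t1_read + 50      # thread 1 writes 150
balance = t2_read + 50      # thread 2 writes 150 -- thread 1's deposit is lost

print(balance)  # 150, not the 200 the customer expected
```

A single atomic `UPDATE accounts SET balance = balance + 50` (or a lock around the whole read-modify-write) avoids this, which is exactly what a generic R-then-U interface makes easy to forget.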
EDIT: I forgot to comment on the obvious fallback here. If you really screw things up, it might be possible to restore your database to an earlier state, and just disappear all the money which moved through the system after the database snapshot was created.
If you want to do CRUD but without the headaches of U and D you can do this:
https://en.wikipedia.org/wiki/Bitemporal_modeling
It may seem scary at first and requires more thought, but the benefits are very real, even if the history is not exposed to users in any way.
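A minimal sketch of the bitemporal idea (illustrative dicts, not a library API): each fact carries a valid-time range (when it was true in the real world) and a transaction-time range (when the database believed it). Nothing is updated or deleted; corrections close old rows and append new ones.

```python
from datetime import date

rows = [
    # Balance 100, valid from January, recorded in January,
    # superseded (recorded_to closed) in March.
    {"balance": 100, "valid_from": date(2024, 1, 1), "valid_to": None,
     "recorded_from": date(2024, 1, 1), "recorded_to": date(2024, 3, 1)},
    # In March we learned the balance had really been 90 since January.
    {"balance": 90, "valid_from": date(2024, 1, 1), "valid_to": None,
     "recorded_from": date(2024, 3, 1), "recorded_to": None},
]

def as_of(rows, valid_at, recorded_at):
    """What did we believe at `recorded_at` about the state at `valid_at`?"""
    for r in rows:
        valid = (r["valid_from"] <= valid_at and
                 (r["valid_to"] is None or valid_at < r["valid_to"]))
        known = (r["recorded_from"] <= recorded_at and
                 (r["recorded_to"] is None or recorded_at < r["recorded_to"]))
        if valid and known:
            return r["balance"]

print(as_of(rows, date(2024, 2, 1), date(2024, 2, 1)))  # 100: what we believed in Feb
print(as_of(rows, date(2024, 2, 1), date(2024, 4, 1)))  # 90: what we believe now about Feb
```

The payoff is that "what did we think the balance was when we sent that statement?" becomes an ordinary query rather than archaeology.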
We're all pretty keen for it. Looks great!
And people storing other people's personal data end up with fines if they don't remove the data on request. The whole situation got so bad that Timescale had several bugs which prevented U and D of single records; you could only delete a shard. At least C and R are fast.
Therefore, I would suggest using "CRUD" terminology mostly in the more technical parts of the application (e.g. some adapter that communicates with a database) and using business terms (from the "ubiquitous language", as it is called in domain-driven design) otherwise.
I once had a coworker arguing against DDD ideas with the killer argument that it was all just "CRUD". But it was in no way useful to think about the problem in those terms. Later it turned out that we had quite a lot of business logic, and "CRUD" wouldn't have been very helpful for expressing it.
So, really, all computing boils down to a Turing machine. No need to learn any other technology.
Of course everything that works with a database is CRUD because what else can you do with a database apart from create, retrieve, update and delete data?
I think that if you're doing your job right in interface design, the structure of your database shouldn't be immediately apparent to the user. The database design process and the interface design process should be completely separate.
All business logic must happen somewhere. In a CRUD framework, much of it is just pushed out to the edges, onto whoever consumes the CRUD API, in order to keep the CRUD part clean. A trade-off I'm not sure I agree with.
That's true. But NOT everything is CRUD.
Validation is not CRUD (in most cases you do not want your DB to validate your data, and even when the DB does do validation, the V is not in CRUD). Form submissions are not CRUD. Queues (AMQP, SQS, pub-sub) are notoriously not CRUD. Scaling your cluster is not CRUD. Deploying your software is not CRUD.
I could go on...
Think ChatGPT: people aren't there to do CRUD, they are there to talk to the model. Saving the results and what they write is a useful feature, so CRUD is still needed, but it's not the main attraction.
Or any other kind of server that computes something, like a game server, as long as you want to write a server program instead of just storing data.
I for one spend a fair amount of time refactoring reports and other complex requests for which EF & friends generate absolute monsters, because the data model is optimised for just plonking data in and pulling it back a few objects at a time, with no thought to the larger access patterns the users are going to need later. Sometimes that means just improving the access SQL (replacing the generated mess with something more hand-crafted), sometimes it is as simple as improving the data's indexing strategy, and sometimes the whole schema needs a jolly good visit from the panel beater.
A particular specialty is taking report/dashboard computations (both system-wide and per-user) that have no business taking so long over _that_ size of data from "takes so long we'll have to precompute overnight and display from cache" to hundreds or even tens of milliseconds, otherwise known as "wow, that is quick enough to run live!".
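A toy version of the kind of rewrite I mean (hypothetical schema, sqlite3 standing in for SQL Server): the generated code pulls objects back one at a time, where a single aggregate does the whole dashboard in one pass.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total INTEGER);
    INSERT INTO orders (user_id, total) VALUES (1, 10), (1, 20), (2, 5);
""")

# ORM-ish pattern: one query per user. Fine for "a few objects at a time",
# disastrous for a system-wide report over a big table.
totals_slow = {}
for (user_id,) in con.execute("SELECT DISTINCT user_id FROM orders"):
    (s,) = con.execute(
        "SELECT SUM(total) FROM orders WHERE user_id = ?", (user_id,)).fetchone()
    totals_slow[user_id] = s

# Hand-crafted replacement: one pass, one aggregate, index-friendly.
totals_fast = dict(con.execute(
    "SELECT user_id, SUM(total) FROM orders GROUP BY user_id"))

print(totals_fast)  # {1: 30, 2: 5}
```

Same answer, but the second form is one round trip instead of N+1, which is most of the difference between "precompute overnight" and "run live".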
This is exacerbated by devs initially working on local on-prem SQL Server instances, with dedicated many-core CPUs and blindingly fast SSDs, and the product then being run in production on AzureSQL, where for cost reasons several DBs are crammed into a standard-class elastic pool: CPU & memory are more limited, and IO for anything not already in the buffer pool is several orders of magnitude slower than local (think "an elderly asthmatic arthritis-riddled ant librarian fetching you an encyclopaedia from back store" slow).
The other big "oh, it worked well in dev" cause is that even when people dev/test against something akin to the final production infrastructure, they do that testing with an amount of data that some clients will generate in days, hours, or even minutes (and that is ignoring the amount that will just arrive in one go as part of an initial on-boarding migration for some clients).
Glorified-Predictive-Text generated EF code is not currently helping any of this.
</rant> :-)
For trolls, it's quaternary to a point: one, two, three, many, many-one, many-two . . . many-many-many-three, lots