I find this ends up being, like, 1 or 2% of queries. It's also very hard if not impossible to guess which queries will end up in that group.
You're better off building it with the ORM first and breaking out SQL later when you are trying to performance optimize.
There is also a small % of queries which use some feature of your database engine which the ORM won't support.
I tend to use the ORM for CUD operations, and just write raw SQL for SELECTs unless they're super-simple.
> It's also very hard if not impossible to guess which queries will end up in that group.

> You're better off building it with the ORM first and breaking out SQL later when you are trying to performance optimize.
I disagree on this. It's almost no extra work to just write these optimised in the first place (if you're not trying to squash everything into an ORM workflow). So it makes sense just to write them all optimised. Ditto for doing batch inserts and updates rather than looped inserts/updates (although you can usually use the ORM for this).
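The batch-insert point is easy to illustrate. A minimal sketch using Python's stdlib `sqlite3` (table and data invented for the example); the same shape applies to any DB driver with an `executemany`-style API:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

rows = [(i, f"user{i}") for i in range(1000)]

# Looped inserts: one statement (and, over a network, one round-trip) per row.
# for row in rows:
#     conn.execute("INSERT INTO users (id, name) VALUES (?, ?)", row)

# Batch insert: a single executemany call, committed once.
conn.executemany("INSERT INTO users (id, name) VALUES (?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)  # 1000
```

Against a remote database the difference is dominated by round-trips, so the batched version wins by orders of magnitude, not percentages.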
Not everyone is capable of quickly optimizing SQL, and I don't think it's an absolutely necessary skill to build a decent application. Junior devs can pick up on this skill over time and as long as they can manage to avoid any obvious footguns, using the ORM is fine most of the time.
Writing queries that are optimized in the first place just means now you have to maintain a bunch of SQL and you have to rely on anyone modifying that SQL later understanding those optimizations. Sometimes it's necessary, but if it's not, I think it's much nicer to stay in ORM-land even if the query might not be optimal.
I mean that's true, but equally not everyone is capable of using an ORM. I don't think SQL is inherently any harder to learn.
At my last job, I had juniors who had never used SQL at all productive in SQL within a couple of weeks, and using "complex" SQL like JSON aggregation and window functions within a few months. They were a little intimidated by it when they started, but didn't find it too hard to learn in the end.
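For the curious, that "complex" SQL is less scary than it sounds. A sketch in SQLite (invented orders table; assumes a build with window functions, SQLite ≥ 3.25, and the JSON functions, which the standard Python builds include):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (customer TEXT, amount INTEGER);
INSERT INTO orders VALUES ('alice', 10), ('alice', 30), ('bob', 20);
""")

# Window function: rank each customer's orders by amount, per customer.
ranked = conn.execute("""
    SELECT customer, amount,
           ROW_NUMBER() OVER (PARTITION BY customer ORDER BY amount DESC) AS rnk
    FROM orders
    ORDER BY customer, rnk
""").fetchall()

# JSON aggregation: collapse each customer's amounts into one JSON array.
agg = conn.execute("""
    SELECT customer, json_group_array(amount)
    FROM orders
    GROUP BY customer
    ORDER BY customer
""").fetchall()

amounts = sorted(json.loads(agg[0][1]))  # alice's amounts, order-independent
print(ranked)
print(agg)
```

The Postgres equivalents (`row_number()`, `json_agg`) read almost identically, which is part of the argument for learning the SQL rather than each ORM's wrapper for it.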
Using a case-insensitive filter (e.g. Django's `iexact`) against a DB with case-sensitive collation (the default in Postgres)? Django will helpfully cast both the column and your query value to UPPER to match them for you, and the former wrecks indexing.
Checking if a string ends with something else? Goodbye, index.
I _think_ the latter can be worked around in PG with a GIN index, but I’m not positive (I work with MySQL much more). And in any case, you’d have to know to create that, and I imagine most devs won’t.
Fixing seemingly tiny things like that has a massive impact on query speed against large tables.
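One engine-agnostic workaround for the suffix-match case is to store (or, in Postgres, index an expression on) the reversed string, which turns "ends with" into a prefix match a plain B-tree index can serve. A sketch in SQLite with an invented files table (whether SQLite's LIKE actually uses the index depends on collation settings; the point is the reversal trick itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (name TEXT, rev_name TEXT)")
conn.execute("CREATE INDEX idx_rev ON files (rev_name)")

for name in ("report.pdf", "photo.jpg", "notes.pdf"):
    # Store the reversed name alongside the original.
    conn.execute("INSERT INTO files VALUES (?, ?)", (name, name[::-1]))

# "name LIKE '%.pdf'" has a leading wildcard, so no B-tree index helps.
# The reversed form is a prefix match: rev_name LIKE 'fdp.%'.
suffix = ".pdf"
pdfs = conn.execute(
    "SELECT name FROM files WHERE rev_name LIKE ? || '%' ORDER BY name",
    (suffix[::-1],),
).fetchall()
print(pdfs)  # [('notes.pdf',), ('report.pdf',)]
```

In Postgres you'd create the index as `CREATE INDEX ON files (reverse(name))` and query `WHERE reverse(name) LIKE reverse('.pdf') || '%'` instead of materialising a second column.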
What is it that you do here that can't be handled by, say, django's workhorses - filter and select_related?
If it's impossible to write 90% of your queries in an ORM my suspicion would be that you're either not using the ORM correctly or you're using a crappy ORM.
- Complex joins
- Complex WHERE clauses with mixes of AND and OR (with parentheses)
- JSON aggregation
- Window functions
tend to require quite heavyweight syntax in ORMs (e.g. nested lambda functions), whereas the corresponding SQL introduces much less noise.
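As a small illustration of the syntax gap, here is the mixed AND/OR case as raw SQL (invented accounts table; in Django the same filter needs nested `Q(...) & Q(...) | Q(...)` objects):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (name TEXT, status TEXT, score INTEGER);
INSERT INTO accounts VALUES
  ('a', 'active', 15), ('b', 'active', 5), ('c', 'vip', 0);
""")

# The parenthesised mix of AND and OR reads exactly as intended.
rows = conn.execute("""
    SELECT name FROM accounts
    WHERE (status = 'active' AND score > 10) OR status = 'vip'
    ORDER BY name
""").fetchall()
print(rows)  # [('a',), ('c',)]
```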
It's basically just another case of a dedicated language being nicer to use than a DSL embedded into a general-purpose language. Normally it's not worth creating a whole language just for nicer syntax, but in the case of SQL the language already exists! So why not use it?
That adds up, with almost no downside most of the time.
This approach is more bottom-up. You end up with unidirectional data flow, better separation of concerns, data coupling instead of object dependencies, and better performance right off the bat.
The cost? In my experience just some basic familiarity with SQL.