I wish writing SQL queries were more popular than using ORMs
Some backend libraries let you write SQL queries as they are and deliver them to the database. They still handle making the connection, pooling, etc.
ORMs introduce a different API for making SQL queries, with the aim of making it easier. But I always find them subpar to SQL, and they often lack advanced features (and sometimes features that aren't even that advanced).
It also means that every time I use an ORM, I have to learn that ORM's API.
SQL is already a high-level language that abstracts the inner workings of the database. So the promised ease of use doesn't beat SQL for me, and I don't like abstracting over an already high-level abstraction.
Alright, I admit, there are a few advantages:
if I don't know SQL and don't plan on learning it, it's easier to learn an ORM
if I want better out-of-the-box syntax highlighting (since SQL queries may otherwise be treated as plain strings)
if I want to use constructs similar to my programming language's (classes, functions, etc.)
But ultimately I find these benefits far outweighed by the benefits of pure SQL.
An ORM lets you use plain objects instead of untyped strings. I'll take typed anything over untyped anything, every day.
An ORM lets you use multiple database backends. For example, you don't need to spawn a local Postgres server and clean/migrate it after each test suite; you can just use an in-memory SQLite for that. OK, this has some gotchas, but it's a massive improvement in productivity.
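For what it's worth, that in-memory SQLite test setup looks roughly like this with a recent SQLAlchemy (the `User` model is a made-up stand-in, not anything from a real project):

```python
# Minimal sketch of "in-memory SQLite for tests", assuming SQLAlchemy 1.4+.
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class User(Base):  # throwaway model for illustration
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite:///:memory:")  # no local Postgres to spawn or clean up
Base.metadata.create_all(engine)              # schema comes straight from the models

with Session(engine) as session:
    session.add(User(name="test"))
    session.commit()
    assert session.query(User).count() == 1
```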
I too want my query results in objects, but thankfully libraries like sqlx for Go can do this without the extra overhead of an ORM: you give them a SELECT query and they spit out hydrated objects.
As far as multiple DBs go, you can accomplish the same thing as long as you write ANSI-standard SQL queries.
I've used ORMs heavily in the past and might still for a quick project or for the "command" side of a CQRS app. But I've seen too much bad performance once people move away from CRUD operations to reports via an ORM.
Even something as ubiquitous as JSON is not handled the same way across databases; the same goes for dates and UUIDs. I'm not even mentioning migration scripts. As soon as you start writing raw SQL, I'm pretty sure you will hit a compatibility issue.
I was specifically talking about Python; I can't argue with Go. OK, you have a valid point on performance, I'll have to keep an eye on that. But I'm satisfied with it for our CRUD API.
I was about to write the same thing. Really the object thing is the whole reason to use ORMs.
Using plain SQL is a compatibility and migration nightmare in medium-sized and bigger projects. If anything, using plain SQL is just bad software design, at least in an OOP context.
Better than an ORM is to use a query builder. You get the expressiveness of SQL with the safety and convenience of an ORM.
Most developers that use ORMs create poorly performing monstrosities, and most developers who write raw SQL create brittle, unsafe and unmaintainable software. There is a happy medium here.
I also find ORMs and query builders much easier to debug than most native SQL database queries, mostly because native SQL error messages tend to be some of the most unhelpful, least descriptive crap out there, and ORMs help a bit with that.
Seriously, fuck MySQL error messages. 9 times out of 10 shit boils down to "you got some sort of error somewhere roughly over there, go fix".
I'm always curious about this particular feature/argument. From the angle of "I can unit test more easily because the interface is abstracted, so I can test with no database": great. (Though there would be a debate about time saved on tests versus live production efficiency lost to badly formed automatic SQL.)
For anything else, I have to wonder how often applications have actual back-end technologies change to that degree. "How many times in your career did you actually replace MSSQL with Oracle?" Because in 30 years of professional coding for me, it has been never. If you have that big of a change, you are probably changing the core language/version and OS being hosted on, so everything changes.
Some of us have had to support multiple database targets. So I don't know about changing the database under a running application, but a good abstraction has made it easier to extend support and add clients, because we could quickly and easily add new database providers.
If you are building software where the customer is the deployer, being flexible about which database can be used is a pretty big step. Without it you could turn off potential customers that already have existing infrastructure.
Working in a data-intensive context, I saw such migrations very often: from and to Oracle, MS SQL, Postgres, SAS, Exasol, Hadoop, Parquet, Kafka. Abstraction, even beyond ORMs, is extremely helpful.
Unfortunately, in most real-world scenarios companies don't value abstraction, because it takes time that can't be justified in PI plannings and reviews. So people write whatever is quicker, and migrations become complete rewrites. A lot of money, time and resources wasted reinventing the wheel.
Truth is, whoever pays doesn't care, otherwise they'd do it differently. They deserve the waste of money and resources.
On the other hand, now that I think of it, I've never seen a really impactful OS migration. The biggest OS migration I've seen is from CentOS or SUSE to RHEL... In the field I work in, non-Unix OSes are always a bad choice anyway.
Yeah, I have my own stuff that lets me do MSSQL, DynamoDB, REST/HATEOAS, regular hash maps, and some obscure databases (FilePro).
I throw them in a tree structure and perform depth-first searches for resources. Some of them have support for change data capture streaming as well (e.g. SQLNotifications, DynamoDB Streams, WebSockets).
DynamoDB was a rough one to optimize because I have to write code to pick the best index. You don't do that with SQL.
The code on the backend is the same as on the frontend, just with a different tree. The frontend queries against REST and a cache layer; the backend queries against anything, REST included.
Composable querying/pushdown is nice, but transaction management is huge. It's not easy to correctly implement a way to share transactions between methods and between repository classes. But the alternative is that your transactions are limited to individual methods (or you don't use them at all, and you risk leaving your database in an inconsistent state unless you clean up manually).
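One common way to get that sharing is to let the caller own the transaction and pass the connection into each repository method; a minimal sketch with Python's stdlib sqlite3, all names hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("CREATE TABLE ledger (account_id INTEGER, amount INTEGER)")
conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 100)")

class AccountRepo:
    def debit(self, conn, account_id, amount):
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                     (amount, account_id))

class LedgerRepo:
    def record(self, conn, account_id, amount):
        conn.execute("INSERT INTO ledger (account_id, amount) VALUES (?, ?)",
                     (account_id, amount))

def transfer_out(conn, account_id, amount):
    # The caller owns the transaction: both repository calls commit or roll back together.
    with conn:  # sqlite3 connection-as-context-manager = one transaction
        AccountRepo().debit(conn, account_id, amount)
        LedgerRepo().record(conn, account_id, -amount)

transfer_out(conn, 1, 25)
```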
I agree. If you have a relational database and an object-oriented programming language you're going to have to map data one way or another.
That being said, using an object-oriented language doesn't necessarily mean the data abstraction needs to be objects too. Python is object-oriented, yet Pandas is a very popular relational abstraction for it.
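A tiny illustration of that, assuming a throwaway SQLite database: the query results land in DataFrames and the relational work (joins, grouping) continues there rather than on objects.

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER, name TEXT);
    CREATE TABLE orders (user_id INTEGER, total INTEGER);
    INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO orders VALUES (1, 20), (1, 35), (2, 10);
""")

users = pd.read_sql_query("SELECT * FROM users", conn)
orders = pd.read_sql_query("SELECT * FROM orders", conn)

# Relational-style join and aggregation, but on DataFrames instead of objects.
totals = users.merge(orders, left_on="id", right_on="user_id").groupby("name")["total"].sum()
print(totals)
```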
SQL injection safety
Parameterized queries are native to the database engine. They're going to be available regardless of what you use on the client side.
(Well, if the database implements them... having flashbacks to back when MySQL didn't, and it taught a couple of generations of programmers extremely bad "sanitization" practices.)
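In case it helps anyone reading along, this is all a parameterized query is; the placeholder syntax varies by driver (? for sqlite3, %s for psycopg, etc.), and the table here is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")

email = "alice@example.com'; DROP TABLE users; --"  # hostile input

# Unsafe: splicing the value into the SQL text lets it rewrite the query.
# query = f"SELECT id FROM users WHERE email = '{email}'"

# Safe: the value travels separately from the SQL, so it can't change the query's structure.
rows = conn.execute("SELECT id FROM users WHERE email = ?", (email,)).fetchall()
```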
query composition
Check out the active record pattern. It's a thin layer over SQL that lets you put together a query programmatically (and nothing more).
connection builders
This is very database specific and many ORMs don't do a great job of it. If anything this is a con for ORMs not a pro.
transaction management
Again, very hit and miss. Each database has particular quirks and you need to know so much about them to use transactions effectively that it negates any insulation that the ORM provides.
I'm also a big fan of raw SQL. Most ORMs are fine for CRUD stuff, but the moment you want to start using the "relational" part of the database (which... is kind of the whole point) they start to irritate me. They also aren't free: if you're lucky, you pay at compile time (Rust's Diesel), but I think a lot of ORMs do everything at runtime via reflection and the like.
For CRUD stuff, I usually just define some interface(s) that take a query and manually bind/extract struct fields. This definitely wouldn't scale, but it's fine when you only have a handful of tables, and it keeps the abstraction/performance tradeoff low.
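Sketched in Python rather than Go, the same idea is basically one small hand-written mapping function per table (names invented for the example):

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str

def fetch_users(conn) -> list[User]:
    # The "manual binding": one explicit row-to-object mapping, no ORM machinery.
    rows = conn.execute("SELECT id, name FROM users").fetchall()
    return [User(id=r[0], name=r[1]) for r in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
print(fetch_users(conn))
```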
Agree 100%. Especially when you're doing more complicated queries, working with an ORM adds so much complexity and obfuscation. In my experience, if you're doing much of anything outside CRUD, they add more work than they save.
I also tend to doubt their performance claims, especially since you can easily end up mapping much more data than you need when using an ORM.
I think ORMs are a great example of people thinking absolutely everything needs to be object oriented. OO is great for a lot of things and I love using it, but there are also places where it creates more problems than it solves.
I once had the task of stripping an ODM out of a large project, reverting to the native driver, because of its (extremely) poor performance. Plus the fun of profiling the project to prove the ODM was to blame. I also empathize with the "supposed to make things simpler, makes them more complicated instead" point you make.
From many experiences, I hate ORM/ODMs and am immediately suspicious of anyone who likes them.
Since working with SQLAlchemy a lot (specifically its SQL compiler, not its ORM), I don't want to work with SQL any other way. I want to be able to extract column definitions into named variables, reuse queries as columns in other queries, etc. I don't want to concatenate SQL strings ever again.
Having a DSL or even a full language which compiles to SQL is clearly the superior way to work with SQL.
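A small taste of what that composition looks like with SQLAlchemy's expression language, assuming 1.4+ (the tables and the threshold are made up):

```python
from sqlalchemy import column, table, select, func

orders = table("orders", column("id"), column("user_id"), column("total"))

# A column expression extracted into a named variable and reused.
order_total = func.sum(orders.c.total).label("order_total")

per_user = (
    select(orders.c.user_id, order_total)
    .group_by(orders.c.user_id)
    .subquery()
)

# The previous query reused inside another one.
big_spenders = select(per_user.c.user_id).where(per_user.c.order_total > 1000)

print(big_spenders)  # prints the generated SQL; no string concatenation anywhere
```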
I've worked on only one project that used what I guess is an ORM-like pattern, and I have to say it was actually really nice. The code was JavaScript, and there was a mapping:
Class <-> DB table
Field <-> DB column
Row <-> Object
For each class, there was a big mapping table which indicated which database-backed fields needed to exist in that class, and then there was automated code that (1) could create or update the database to match the specified schema (2) would create helper methods in each class for basic data functions -- the options being "Create me a new non-database-backed object X" "I've set up the new object, insert it into the DB" "give me an iterator of all database-backed objects matching this arbitrary query", "update the appropriate row with the changes I've made to this object", "delete this object from the DB," and "I'm doing something unusual, just run this SQL query".
I honestly really liked it, it made things smooth. Maybe it was the lack of hesitation about dropping back to SQL for stuff where you needed SQL, but I never had issues with it and it seemed to me like it made life pretty straightforward.
They're nice if they also migrate your db schema. That way you define your schema once and use it both to setup your db and interact with it via code. I do write raw sql for more complex queries, e.g. when there's recursion.
I find ORMs exist best in a mid-sized project, most valuable in a CQRS context.
For anything small, they massively over complicate the architecture. For the large enterprise systems, they always seem to choke on an already large and complex domain.
So a mid-size project, maybe with less than a hundred or so data objects, works best with an ORM. In that way, they've also been most productive mainly for the CUD part of the CRUD approach. I'd rather write my domain logic with the speed and safety of an ORM during writes, but leverage the flexibility and expressiveness of SQL when I'm crafting efficient read queries.
The SQL generation is great. It means you can quickly get up and running. If the ORM is well designed, it should perform well for the majority of queries.
The other massive bonus is the object mapping. This can be an absolute pain in the ass, especially between datasets and classes.
I find SQL to be easy enough to write without needing generation. It is very well documented, and it is very declarative and English-like. More than any ORM, imo.
Completely agree. Most ORMs focus on hiding SQL away (for good reasons, such as portability and type safety), but I wish there were more approaching it in the reverse. That is, have the user write schemas, queries and migrations in SQL, and generate models with typesafe APIs in return. I'm only aware of SQLDelight in this space, but it's such a great idea to have the source of truth be actual SQL, and a build time generator and validator working alongside you.
I absolutely prefer using an ORM for querying but I'm definitely never letting the ORM create the schema for me. I will always do that myself and generate the ORM definitions from SQL, and I will never use an ORM that doesn't have that as an option.
I find SQL, especially prepared statements, to be essentially a function call with a string of text containing identifiers that get swapped out, then a bunch of arguments that get swapped in.
Perhaps there are better SQL libraries that make this more fluent.
But that is so close to an ORM, I feel like I might as well use an ORM.
TL;DR you can't be an expert at every aspect of coding, so I let the big boys handle SQL and don't torture the world with my abysmal SQL code.
I've seen enough bad SQL to claim you're wrong (I write bad SQL myself, so if you write SQL like I do, you're bad at it).
Seriously, the large majority of devs write terrible SQL and don't know how to optimise queries in any way. They just mash together a query with whichever JOIN they learned first. NATURAL JOIN? Sure, don't mind if I do! Might end up being a LEFT JOIN, RIGHT JOIN, or INNER JOIN, but at least I got my data back right?
Off the top of your head, do you know all the joins that exist, when to use which one, and which ones are aliases for another? Do you know how to write optimal JOINs when querying data with multiple relations?
When writing similar queries, do you think most are going to copy-paste something that worked and adapt it? What if you find out that it could be optimised? Then you'll have to search for all queries that look somewhat similar and fix those.
When you create an index for a table, are you going to tell me you are going to read up on the different types each time to make sure you're using the one that makes sense? Postgres has 6, MySQL only has 2 tbf depending on storage engine, but what about other DBs? If you write something for one DB and a client or user wants to host it with another, what will your code look like afterwards?
Others have brought up models in code, so that's already discussed, but what about migrations? Do you think it's time well spent writing every single migration yourself? I had the distinct pleasure of dealing with hand-written migrations that were copy-pasted, modified columns that had nothing to do with the changed models, weren't wrapped in a transaction, and failed halfway through, and then I got to track down which migration had actually failed. These were seasoned developers who completely forgot to put any migrations in transactions. They had to learn the hard way.
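For the transaction point specifically, the rule is cheap to follow once it's habit; a sketch assuming psycopg2 and Postgres (where DDL is transactional, unlike MySQL), with placeholder connection details and statements:

```python
import psycopg2

MIGRATION = [
    "ALTER TABLE users ADD COLUMN last_login TIMESTAMP",
    "CREATE INDEX idx_users_last_login ON users (last_login)",
]

conn = psycopg2.connect("dbname=app")  # placeholder DSN
try:
    # `with conn:` = one transaction: commit if every statement succeeds, rollback otherwise,
    # so a migration can never be left half-applied.
    with conn, conn.cursor() as cur:
        for stmt in MIGRATION:
            cur.execute(stmt)
finally:
    conn.close()
```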
I used to be an ORM-hater, but my experience with Django has changed my mind, somewhat. I still think there are projects where ORM is unnecessary or even harmful, but for some projects, being able to lean on an ORM to create simple queries/updates or to handle DB migrations is a big time saver. And you can always fall back to hand-written SQL when you need to as long as the ORM allows it, which it absolutely should.
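The fallback in Django looks roughly like this (the model and SQL are invented for the example):

```python
from django.db import connection
from myapp.models import Invoice  # hypothetical app/model

# Raw SQL that still hydrates model instances:
overdue = Invoice.objects.raw(
    "SELECT * FROM myapp_invoice WHERE due_date < now() AND paid = false"
)

# Or drop all the way down to a cursor for reporting-style queries:
with connection.cursor() as cur:
    cur.execute("SELECT customer_id, sum(total) FROM myapp_invoice GROUP BY customer_id")
    totals = cur.fetchall()
```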
ORMs introduce a different API for making SQL queries, with the aim of making it easier.
I wouldn't say that, but rather that they strive to keep everything contained in one language/stack/deployment workflow, with the benefit of code reusability. For instance, it's completely idiotic, if you ask me, that your models' definition and validation code get duplicated in 3 different application layers (front/API/DB) in as many different languages.
ORMs are not a 100% solution, but do wonders for the first 98% while providing escape hatches for whatever weird case you might encounter, and are overall a net positive in my book. Moreover, while I totally agree that having DB/storage-layer knowledge is super valuable, SQL isn't exactly a flawless language and there's been about 50 years of programming language research since it was invented.
This is a project I am already keeping a close eye on, but I would rather qualify it as a "better SQL" than as an alternative to your typical (framework's) ORM. For instance, it won't morph CRUD operations and data migrations into a language/stack that's native to the rest of the project (and by extension, imply learning another language/stack/set of tools...)
I've been able to write unit tests for SQL within the database to address testing important business logic that exists in SQL. The test fixtures just become stored (version controlled) database scripts to set needed test data in place in the test DB. Then we still mock over the db call in the code for unit tests as usual.
It's more effort up front, but I find it much easier to maintain complex DB interactions inside the DB, isolated from the downstream consumer code.
Obviously, there's an art to knowing when this is needed, or appropriate. I've worked for organizations where almost everything important was a performant SQL query. In that org, maintenance got dramatically simpler and the product more reliable when we started writing SQL tests after moving important DB work directly into the DB.
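A stripped-down version of that setup, with the fixture inline instead of a version-controlled .sql file, and a made-up query under test:

```python
import sqlite3

FIXTURE = """
CREATE TABLE orders (id INTEGER, status TEXT, total INTEGER);
INSERT INTO orders VALUES (1, 'paid', 50), (2, 'refunded', 50), (3, 'paid', 30);
"""

QUERY_UNDER_TEST = "SELECT sum(total) FROM orders WHERE status = 'paid'"

def test_paid_total():
    conn = sqlite3.connect(":memory:")
    conn.executescript(FIXTURE)  # in the real setup this comes from a stored .sql script
    (total,) = conn.execute(QUERY_UNDER_TEST).fetchone()
    assert total == 80

test_paid_total()
```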
You can only depend on an ORM up to a point. Beyond that you have to use Arel (relational algebra in Ruby), execute prepared SQL statements, triggers and functions.
I use an ORM for concise, easier-to-read and more maintainable code. E.g. joining three or more tables in SQL is cumbersome and verbose, writing multiple related queries is too time consuming, etc.
I went from relational algebra to SQL, to ORMs, to vendor-specific SQL.