It's hard to point those out without coming across as a hater, and because they might seem obvious. Trying anyway:
With the way transactions and durability work in SQLite, a normal setup will not do more than on the order of ~100 transactions per second without adding extra layers of complexity. That's completely fine for its intended use case of being an embedded, lightweight database, and of course it's not due to bad engineering decisions, but due to inherent tradeoffs in how transactions are handled (durability means an fsync on every commit, so commit rate is bounded by disk sync latency).
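A quick sketch of what that means in practice (my own illustration, not from TFA): the slow pattern is one transaction per insert, each paying the commit/fsync cost; batching many inserts into one transaction amortizes it away.

```python
import os
import sqlite3
import tempfile

# Throwaway on-disk database (the fsync cost only shows up on real disks,
# not with ":memory:"). Path and table name are just for the demo.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (x INTEGER)")

# Slow pattern: one transaction (and thus one durable commit) per row.
# This is the shape that tops out around ~100/s on ordinary hardware.
for i in range(100):
    with conn:  # each `with` block commits separately
        conn.execute("INSERT INTO t VALUES (?)", (i,))

# Fast pattern: one transaction for the whole batch, a single commit.
with conn:
    conn.executemany("INSERT INTO t VALUES (?)",
                     [(i,) for i in range(100, 10000)])

print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 10000
```

Same number of rows either way; the difference is only how many durable commits you pay for.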
So basically it's all fine until you try to use SQLite for something it is not a good fit for, like a large volume of inserts. Other solutions (Postgres/MySQL/Elastic/etc.) are conceptually better equipped for that use case out of the box. Not a shortcoming of SQLite per se, but a practical shortcoming that comes up when you try to use it for the wrong job.
Case in point is TFA, which takes the theoretical SQLite database size limit (140TB) and runs with it, claiming that's the useful limit on how much data you can store in SQLite. LOL! That value is where SQLite page IDs start overflowing, not an estimate of how much data can be usefully handled in a single SQLite database.
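For what it's worth, the oft-quoted ~140TB figure falls out of simple arithmetic (my reading of where it comes from): roughly 2^31 addressable pages at the maximum page size of 64 KiB. It's an addressing limit, not a performance envelope.

```python
# ~2**31 pages * 64 KiB/page = the theoretical file-size ceiling.
max_pages = 2 ** 31           # order of the page-count limit
page_size = 64 * 1024         # maximum SQLite page size, in bytes
print(max_pages * page_size)  # 140737488355328 bytes ~= 140 TB
```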
SQLite is, by the nature of what it is, not a good fit for big data volumes. Try loading 100TB of data into SQLite and running even a single query. Even assuming you have a hyper-fast SSD, a single query will take days to complete! But it still gets mentioned as a potential solution for that problem every so often. In this post, even...
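Back-of-envelope (my numbers, not TFA's): even granting an optimistic 1 GB/s effective single-threaded read rate, a full scan of 100TB is already more than a day; real page-at-a-time B-tree traversal is far slower, so multiple days is entirely plausible.

```python
# Full-scan time for 100 TB at an assumed 1 GB/s effective read rate.
data_bytes = 100 * 10 ** 12   # 100 TB
rate = 10 ** 9                # 1 GB/s, generous for single-threaded access
hours = data_bytes / rate / 3600
print(round(hours, 1))        # 27.8 -- over a day even at this best case
```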
So you could consider those practical shortcomings or ill-advised usage; it depends on your definition/perspective, I guess. At any rate, SQLite, even if generally excellent, is not the panacea it sometimes gets sold as here on HN. No database is; there are just too many tradeoffs involved.