In a networked environment, which includes the web, it is typical to expose your database over the network. In the olden days clients spoke SQL over the network directly, but there are a number of pitfalls to this approach: SQL was designed for use on mainframes, and understandably it doesn't translate well to the constraints and failure modes of a network.
To alleviate those pitfalls, we started adding middle databases (often called web apps, API services, REST services, etc.) that proxied the database through protocols better suited to the realities and limitations of the network. Clients were then updated to talk to the middle database instead, so the hacks required to make SQL usable over the network could be centralized in one spot, greatly reducing the management burden.
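To make the shape of that concrete, here's a toy sketch of such a middle service: a tiny HTTP endpoint proxying a SQLite file. Everything in it (the port, the app.db path, accepting raw SQL in the request body) is hypothetical and purely illustrative; a real service would pool connections and never accept arbitrary SQL from clients.

    # A minimal sketch of the "middle database" pattern: an HTTP
    # service proxying a SQLite file. All names here are hypothetical.
    import json
    import sqlite3
    from http.server import BaseHTTPRequestHandler, HTTPServer

    DB_PATH = "app.db"  # hypothetical database file

    class QueryHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Read the SQL statement from the request body.
            length = int(self.headers.get("Content-Length", 0))
            sql = self.rfile.read(length).decode("utf-8")
            try:
                # One connection per request keeps the sketch simple;
                # a real service would pool and validate input.
                with sqlite3.connect(DB_PATH) as conn:
                    rows = conn.execute(sql).fetchall()
                body = json.dumps(rows).encode("utf-8")
                self.send_response(200)
            except sqlite3.Error as exc:
                body = json.dumps({"error": str(exc)}).encode("utf-8")
                self.send_response(400)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), QueryHandler).serve_forever()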
But having two database servers is pretty silly when you think about it, especially when the "backend" database's protocol isn't suitable for the network[1]. Enter the realization that if you use something like SQLite, you don't need another, separate database server. You can have one database server[2] that speaks a network-friendly API. Except SQLite itself has a number of limitations that make it poorly suited to being the backing engine of your network-first DBMS.
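The payoff of collapsing the two tiers looks like this from the client's side: it talks to the one database server directly over its network-friendly API, with no middle service in between. This reuses the hypothetical endpoint from the sketch above.

    # A minimal sketch of a client hitting the network-friendly
    # database server directly (same hypothetical endpoint as above).
    import urllib.request

    req = urllib.request.Request(
        "http://localhost:8080",
        data=b"SELECT 1, 'hello'",
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode("utf-8"))  # e.g. [[1, "hello"]]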
That is what the article is about: pointing out SQLite's limitations, and how Turso plans to overcome them. If your use case isn't "web app", SQLite is already going to do the job just fine.
[1] After all, if it were suited for networks, you wouldn't need the middle service. Clients would already be talking to that database directly instead.
[2] As in one logical database server. In practice, you may use a cluster of servers to provide that logical representation.