I work on embedded computers, mostly with around 64K of RAM, using C99. Any form of allocation is forbidden, so I implemented a string library that works with what we call views: I hold length and content in preallocated arrays. Each string has room for exactly 127 characters and is also zero-terminated to satisfy C-API needs, and my tables hold between 16 and 64 strings depending on the project. There is even a safety zero at index 127, enforced in every operation. This system allows a fast, copy-free workflow, and ownership is always obvious: a string is never owned. I even have different "arenas" for different parts of the system that can be cleared independently. I use the same approach in a desktop context, just scaled up in length and count. It combines views, zero termination, clear ownership, and arena-like management all at once.
Man, I really don't miss working in C++. Used to be my daily driver until I ended up in C# land. I understand why C++ is the way it is, I understand why it's still around and the purposes it serves, but in terms of the experience of using the language... I wouldn't want to go back.
System APIs that require a null-terminated string are also painful to use from other languages, where strings are not null-terminated by default. They essentially force you to copy the string and append a null terminator before every call.
It's usually the case that the more strident someone is in a blog post decrying innovation, the more wrong he is. The current article is no exception.
It's possible to define your own string_view workalike that has a c_str() and binds to anything string-like that has a c_str(). It's a few hundred lines of code. You don't have to live with the double indirection.
The zero-terminated string is by far C's worst design decision. It is single-handedly the cause of most performance, correctness, and security bugs, including many high-profile CVEs. I really do wish Pascal strings had caught on earlier and platform/kernel APIs had used them, instead of an unqualified pointer-to-char that then hides an O(n) string traversal (by the platform) to find the null byte.
There are then questions about the size of the length prefix, with a simple solution: make it a platform-specific detail and use the machine word. 16-bit platforms get strings up to ~2^16 long, 32-bit platforms get 2^32 (a 4 GB string, more than 1000× as long as the entire Lord of the Rings trilogy), and 64-bit platforms get 2^64 (~10^19).
Edit: I think a lot of commenters are focusing on the 'Pascalness' of Pascal strings, which I was using as an umbrella term for length-prefixed strings.