Hacker News

socketcluster today at 12:52 AM

I've also become something of a text maximalist. It is the natural meeting point in human-machine communication. The optimal balance of efficiency, flexibility and transparency.

You can store everything as a string; base64 for binary, JSON for data, HTML for layout, CSS for styling, SQL for queries... Nothing gets closer to the mythical silver-bullet that developers have been chasing since the birth of the industry.

The holy grail of programming has been staring us in the face for decades, and yet we still keep inventing new data structures and complex tools to transfer data... All to save like 30% bandwidth; an advantage which is almost fully cancelled out once you GZIP the base64 string, which most HTTP servers do automatically anyway.
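
If you want to sanity-check that GZIP claim for yourself, here's a rough Python sketch (payload and field names are made up) that builds the base64-in-JSON form and compares the raw, text and gzipped sizes:

    import base64, gzip, json, os

    payload = os.urandom(64 * 1024)   # stand-in for an arbitrary binary blob

    # "Everything is a string": the binary goes in as base64, wrapped in JSON.
    doc = json.dumps({"filename": "blob.bin",
                      "data": base64.b64encode(payload).decode("ascii")})

    print("raw binary:   ", len(payload))
    print("base64 + JSON:", len(doc.encode("utf-8")))
    print("after gzip:   ", len(gzip.compress(doc.encode("utf-8"))))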

Same story with ProtoBuf. All this complexity is added to make everything binary. For what goal? Did anyone ever ask this question? To save 20% bandwidth, which, again, is an advantage lost after GZIP... For the negligible added CPU cost of deserialization, you completely lose human readability.

In this industry, there are tools and abstractions which are not given the respect they deserve and the humble string is definitely one of them.


Replies

bccdee today at 2:13 PM

> For the negligible added CPU cost of deserialization, you completely lose human readability.

You could turn that around & say that, for the negligible human cost of using a tool to read the messages, your entire system becomes slower.

After all, as soon as you gzip your JSON, it ceases to be human-readable. Now you have to un-gzip it first. Piping a message through a command to read it is not actually such a big deal.
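The "tool" can be a few lines of Python, e.g. a hypothetical readmsg.py that un-gzips stdin and pretty-prints the JSON:

    # readmsg.py - un-gzip stdin and pretty-print the JSON inside.
    import gzip, json, sys

    raw = gzip.decompress(sys.stdin.buffer.read())
    json.dump(json.loads(raw), sys.stdout, indent=2, sort_keys=True)
    print()

Then something like "cat message.gz | python readmsg.py" gets your human readability back.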

astrobe_ today at 12:07 PM

> The optimal balance of efficiency, flexibility and transparency.

You know the rule, "pick 2 out of 3". For a CPU, converting the string "123" into a number would be a pain in the arse, if it had one. Oh, and hexadecimal is even worse BTW; octal is the most favorable case (among "common" bases).
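
A rough Python sketch of the work that text-to-integer conversion implies, versus just reading the bytes of a fixed-width int (illustrative only):

    import struct

    def parse_decimal(s: str) -> int:
        # One validate, one multiply and one add per character.
        n = 0
        for ch in s:
            d = ord(ch) - ord("0")
            if not 0 <= d <= 9:
                raise ValueError(f"not a decimal digit: {ch!r}")
            n = n * 10 + d
        return n

    assert parse_decimal("123") == 123
    # A binary protocol just reads the 4 bytes of a little-endian int32.
    assert struct.unpack("<i", (123).to_bytes(4, "little"))[0] == 123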

Flexibility is a bit of a problem too - I think people have generally walked back from Postel's law [1], and text-only protocols are big "customers" of it because of their extreme variability. When you end up using regexps to filter inputs, your solution has become a problem [2] [3].

30% more bandwidth is absolutely huge. I think it is representative of certain developers who have been spoiled by grotesquely overpowered machines and have no idea of the value of bytes, bauds and CPU cycles. HTTP/3 switched to binary for even less than that.

The argument that you can make up for text's increased size by compressing base64 is erroneous; one saves bandwidth and processing power on both sides if one can do without compression entirely. Also, with compressed base64 you've already lost readability on the wire (or off the wire, since comms are usually encrypted anyway).

[1] https://en.wikipedia.org/wiki/Robustness_principle

[2] https://blog.codinghorror.com/regular-expressions-now-you-ha...

[3] https://en.wikipedia.org/wiki/ReDoS

yegle today at 2:04 AM

As someone whose daily job is to move protobuf messages around, I don't think protobuf is a good example to support your point :-)

AFAICT, the binary format of a protobuf message is strictly there to provide a strong forward/backward compatibility guarantee. Apart from that, the text proto format and even the JSON format are both versatile, and commonly used as configuration languages (i.e. when humans need to interact with the file).
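
A minimal sketch of the text proto side, assuming a config_pb2 module generated from a hypothetical Config message with name and replicas fields:

    from google.protobuf import text_format
    import config_pb2  # hypothetical module generated by protoc --python_out

    # Parse the human-editable text form into a message object...
    cfg = text_format.Parse('name: "frontend"\nreplicas: 3\n', config_pb2.Config())

    # ...and the very same message round-trips to the compact binary wire form.
    wire = cfg.SerializeToString()             # binary, for machines
    text = text_format.MessageToString(cfg)    # text proto, for humans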

beej71 today at 3:13 AM

I've moved away from DOCish or PDF for storage to text (usually markdown) with Makefiles to build with Typst or whatever. Grep works, git likes it, and I can easily extract it to other formats.

My old 1995 MS thesis was written in Lotus Word Pro, and last I looked there was nothing left that could read it. (I could try Wine, perhaps. Or I could quickly OCR it from paper.) Anyway, I wish it were plain text!

whatevermom5 today at 1:14 AM

Base64 and JSON take a lot of CPU to decode; this is where Protobuf shines (for example). Bandwidth is one thing, but the most expensive resources are RAM and CPU, and it makes sense to optimize for them by using "binary" protocols.

For example, when you gzip a Base64-encoded picture, you end up 1. encoding it in base64 (which takes a *lot* of CPU) and then 2. compressing it again (even though JPEG is already compressed).

I think what it boils down to is scale; if you are running a small shop and performance is not critical, sure, do everything in HTTP/1.1 if that makes you more productive. But when numbers start mattering, designing binary protocols from scratch can save a lot of $ in my experience.
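
As a sketch of what "from scratch" can mean (the frame layout here is invented purely for illustration), a toy length-prefixed binary framing in Python:

    import struct

    # Toy frame: uint16 message type + uint32 payload length, then the raw payload.
    HEADER = struct.Struct("!HI")   # network byte order

    def encode(msg_type: int, payload: bytes) -> bytes:
        return HEADER.pack(msg_type, len(payload)) + payload

    def decode(frame: bytes) -> tuple[int, bytes]:
        msg_type, length = HEADER.unpack_from(frame)
        return msg_type, frame[HEADER.size:HEADER.size + length]

    frame = encode(7, b"\x89PNG...")   # no base64, no JSON parse on the hot path
    assert decode(frame) == (7, b"\x89PNG...")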

handfuloflight today at 1:45 AM

I marvel at the constraint and freedom of the string.

makeitdouble today at 6:51 AM

The text-based counterpart to protobuf is not base64 or JSON. We'd be looking at either CSV or length-delimited fields.

Many large-scale systems are in the same camp as you, with text files flowing around their batch processors like crazy, but there's absolutely no flexibility or transparency there.

JSON and/or base64 are more targeted at either low-volume or high-latency systems. Once you hit a scale where optimizing a few bits directly saves a significant amount of money, self-labeled fields are just out of the question.

8n4vidtmkvmk today at 7:59 AM

The value of protobuf is not to save a few bytes on the wire. First, it requires a schema, which is immensely valuable for large teams, and second, it helps prevent issues with binary skew when your services aren't all deployed at the same millisecond.

ozim today at 12:22 PM

I think you want ZSTD instead of GZIP nowadays.

the8472 today at 12:58 AM

Shipping base64 in JSON instead of a multipart POST is very bad for stream-processing. In theory one could stream-process JSON and base64... but only the JSON keys that come before the payload would be available at the point where you need to make decisions about what to do with the data.
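
A toy Python sketch of the difference (payload and field names invented): with base64-in-JSON the whole document typically has to be buffered and parsed before the payload is reachable, while a streamed/multipart body can be passed through chunk by chunk once the part headers have been seen:

    import base64, io, json

    # base64-in-JSON: buffer and parse everything before touching the payload.
    body = json.dumps({"kind": "image",
                       "data": base64.b64encode(b"x" * 1024).decode("ascii")})
    doc = json.loads(body)                  # whole message held in memory
    payload = base64.b64decode(doc["data"])

    # Streamed/multipart: the routing decision is made from headers up front,
    # then the body flows through in fixed-size chunks with constant memory.
    stream = io.BytesIO(b"x" * 1024)        # stand-in for the incoming part body
    sink = io.BytesIO()
    while chunk := stream.read(64):
        sink.write(chunk)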
