
arethuza last Wednesday at 10:35 AM

What happens if the message entered by the user into the Client is more than 4096 bytes?


Replies

nlitened last Wednesday at 11:05 AM

From what I see, the code reads “messages” from the TCP socket stream incorrectly, and will fail randomly in production with messages longer than 1500 bytes, and sometimes even with shorter ones.

Instead, the TCP socket must be treated as a stream of bytes, using either a delimiter as the message boundary (like \n, while escaping any newlines inside the JSON) or a length prefix written before the message bytes themselves, so the reader knows how many bytes to read before it has a full message.
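
For illustration, a minimal sketch of the length-prefix approach in Python (the helper names send_message/recv_message are hypothetical, not from the code under discussion):

    import json
    import socket
    import struct

    def send_message(sock: socket.socket, obj) -> None:
        # Serialize, then prepend the payload length as a 4-byte big-endian integer.
        payload = json.dumps(obj).encode("utf-8")
        sock.sendall(struct.pack(">I", len(payload)) + payload)

    def recv_exactly(sock: socket.socket, n: int) -> bytes:
        # recv() may return fewer bytes than requested, so loop until n bytes arrive.
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("socket closed mid-message")
            buf += chunk
        return buf

    def recv_message(sock: socket.socket):
        # Read the 4-byte length header first, then exactly that many payload bytes.
        (length,) = struct.unpack(">I", recv_exactly(sock, 4))
        return json.loads(recv_exactly(sock, length))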

Edit: to clarify, the TCP protocol does not guarantee that if you write some bytes in one go, they will be read in one go as well. Instead, they may be split into multiple “reads”, or glued together with the preceding chunk, or both. It’s a “stream of bytes” protocol: it only guarantees that written bytes arrive one after another in the same order.

So the “naive” message separation used in the code above (read a chunk and assume it’s the entire message that was written) will work in manual tests, and likely even in local automated tests, but will randomly break when exposed to real network conditions.
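
To make the correct behaviour concrete: a reader has to buffer whatever recv() returns and only hand out complete messages. A rough newline-delimited sketch (again a Python assumption on my part, and it assumes each JSON message is written with a trailing \n and no raw newlines inside):

    import json
    import socket

    def iter_messages(sock: socket.socket):
        # Accumulate raw bytes; a single recv() may contain a partial message,
        # several messages glued together, or both.
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break  # peer closed the connection
            buf += chunk
            # Emit every complete (newline-terminated) message in the buffer;
            # any trailing partial message stays buffered for the next recv().
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                yield json.loads(line)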
