Hacker News

ssl-3 · yesterday at 6:39 PM

Right.

So we have 20 verifiers running at 500 MHz, and this stack of verifiers is trustworthy. It does reliably-good work.

We also have a single 10 GHz CPU core, and this CPU core is not trustworthy. It does spotty work (hence the verifiers).

And both of these things (the stack of verifiers, the single CPU core) top out at exactly the same aggregate throughput: 20 × 500 MHz = 10 GHz. (Because otherwise, the verifiers can't keep up with the CPU's output.)

Sounds great! Except I can get even better performance from this system by just skipping the 10 GHz CPU core and doing all the work on the verifiers instead.

("Even better"? Yep. Unlike that glitch-ass CPU core, the verifiers' output is trustworthy. And the verifiers accomplish this reliable work without that extra step of occasionally wasting clock cycles to get things wrong.

If we know what the right answer is, then we already know the right answer. We don't need to have Mr. Spaz compute it in parallel -- or at all.)


Replies

firefly2000 · yesterday at 8:46 PM

If the workload were perfectly parallelizable, your claim would be true. However, if it has serial dependency chains, it is absolutely worth it to compute them quickly and unreliably, then verify every step in parallel.
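
To make that concrete, here is a minimal Python sketch of the speculate-then-verify pattern (fast_unreliable_f, slow_reliable_f, and run_chain are made-up names, and the thread pool merely stands in for the stack of 20 verifiers):

    from concurrent.futures import ThreadPoolExecutor

    def slow_reliable_f(x):
        # Trusted verifier: slow, but always correct.
        return 3 * x + 1

    def fast_unreliable_f(x):
        # Fast core: same function, but imagine it occasionally glitches.
        return 3 * x + 1

    def run_chain(x0, n):
        # 1. The fast core races through the serial chain x[i+1] = f(x[i]),
        #    producing n guesses one after another.
        guesses = [x0]
        for _ in range(n):
            guesses.append(fast_unreliable_f(guesses[-1]))

        # 2. The verifiers recheck every link independently: does
        #    f(guesses[i]) == guesses[i+1]? No check depends on another,
        #    so all 20 verifiers can work simultaneously.
        with ThreadPoolExecutor(max_workers=20) as pool:
            checks = list(pool.map(slow_reliable_f, guesses[:-1]))

        # 3. Keep the longest verified prefix; speculation would resume
        #    from the first mismatch.
        for i, expected in enumerate(checks):
            if guesses[i + 1] != expected:
                return guesses[: i + 1]
        return guesses

    print(run_chain(1, 10))

The serial chain itself never lets more than one unit work at a time, but the checks in step 2 have no dependence on each other -- that is the parallelism the verifier stack exploits, and why the fast-but-flaky core still earns its keep.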

ant6n · today at 12:05 AM

You can run the verification 100-way parallel, with no dependence between the checks, but you can't do that with the general computation itself.

moffkalast · yesterday at 6:49 PM

Haha, well, you do have a point there. I guess I had the P != NP kind of verification in my head, where it's easy to check whether something is right but not as easy to compute the result. If one could build these verifiers on some kind of checksum basis or something, it might still make sense, but I'm not sure if that's possible.
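
It turns out to be possible for at least some problems. Freivalds' algorithm checks a claimed matrix product A·B = C in randomized O(n^2) time using random "checksum" vectors, while computing the product outright costs O(n^3). A rough sketch (the function name freivalds_check is mine):

    import numpy as np

    def freivalds_check(A, B, C, rounds=16):
        # Each round: draw a random 0/1 vector r and compare A @ (B @ r)
        # against C @ r. Both sides are plain matrix-vector products: O(n^2).
        n = C.shape[1]
        for _ in range(rounds):
            r = np.random.randint(0, 2, size=(n, 1))
            if not np.array_equal(A @ (B @ r), C @ r):
                return False  # definitely not equal
        # If A @ B != C, this slips through with probability <= 2**-rounds.
        return True

    A = np.random.randint(0, 10, (4, 4))
    B = np.random.randint(0, 10, (4, 4))
    print(freivalds_check(A, B, A @ B))      # True
    print(freivalds_check(A, B, A @ B + 1))  # almost certainly False

So the verifier doesn't have to redo the work: a cheap randomized spot-check can be asymptotically faster than the computation it vouches for.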