Hacker News

Go Proposal: Secret Mode

157 points by enz · last Tuesday at 9:10 PM · 67 comments

Comments

Someone · yesterday at 9:48 PM

FTA: “Heap allocations made by the function are erased as soon as the garbage collector decides they are no longer reachable”

I think that means this proposal adds a very specific form of finalisers to Go.

How is that implemented efficiently? I can think of doing something akin to NSAutoreleasePool (https://developer.apple.com/documentation/foundation/nsautor...), with all allocations inside a `secret.Do` block going into a separate section of the heap (like a new generation), and, on exit of the block, the runtime doing a GC cycle, collecting and clearing every now-inaccessible object in that section of the heap.

It can’t do that, though, because the article also says:

“Heap allocations are only erased if the program drops all references to them, and then the garbage collector notices that those references are gone. The program controls the first part, but the second part depends on when the runtime decides to act”

whereas the scheme I described would guarantee that the garbage collector eagerly erases any heap allocations that can be freed.

Also, the requirement that "the program drops all references to them" means this is not a 100% free lunch. You can't simply wrap your code in a `secret.Do` and expect it to be free of leaking secrets.

dpifke · last Wednesday at 12:27 AM

Related: https://pkg.go.dev/crypto/subtle#WithDataIndependentTiming (added in 1.25)

And an in-progress proposal to make these various "bubble" functions have consistent semantics: https://github.com/golang/go/issues/76477

(As an aside, the linked blog series is great, but if you're interested in new Go features, I've found it really helpful to also subscribe to https://go.dev/issue/33502 to get the weekly proposal updates straight from the source. Reading the debates on some of these proposals provides a huge level of insight into the evolution of Go.)

__turbobrew__ · today at 1:19 AM

> Protection does not cover any global variables that f writes

Seems like this should raise a compile-time error or panic at runtime.

fsmv · last Tuesday at 10:42 PM

One thing that makes me unsure about this proposal is the silent downgrading on unsupported platforms. People might think they're safe when they're not.

Go has the best support for cryptography of any language

voodooEntity · yesterday at 5:40 PM

OK, I kind of get the idea, and with some modification it might be quite handy, but I wonder why it's currently deemed an "unsolvable" issue.

It may sound naive, but for packages holding data like the session-related values mentioned, or anything else that should not persist (until the next global GC), why not just scramble the values before ending the current action?

And don't get me wrong: yes, that implies extra computation, but until a solution is practical and built in, I'd recommend scrambling such variables with new data, so that however long they persist, a dump would just return your "random" scramble and nothing actually relevant.

nixpulvis · today at 12:02 AM

I looked into this a bit for a Rust project I'm working on; it's surprisingly difficult to be confident once you get all the way down to the CPU.

https://github.com/rust-lang/rust/issues/17046

https://github.com/conradkleinespel/rpassword/issues/100#iss...

robmccoll · yesterday at 10:01 PM

This is interesting, but how do you bootstrap it? How does this little software enclave get key material in without it transiting untrusted memory? From a file? I guess the attacker this is guarding against can read parts of memory remotely but doesn't have RCE. Seems like a better approach would be an explicitly separate allocator and message-passing boundaries. Maybe a new way to launch an isolated goroutine with limited copying channels.

raggi · yesterday at 9:01 PM

This seems like it might be expensive (though plausibly complete), so I wonder if it'll actually benchmark with low enough overhead to be practical. We already struggle with a lack of optimization in some of the named target use cases; that said, this also means there's room to make up.

hamburglar · yesterday at 9:11 PM

Personally, I’m more interested in what a process can do to protect a small amount of secret material longer-term, such as using wired memory and trust zones. I was hoping this would be an abstraction for that.

cafxx · today at 12:03 AM

I find this example mildly infuriating/amusing:

    func Encrypt(message []byte) ([]byte, error) {
        var ciphertext []byte
        var encErr error
    
        secret.Do(func() {
            // ...
        })
        
        return ciphertext, encErr
    }
As written, it suggests that for PFS it is somehow critical that the ephemeral key (not the long-term one) is zeroed out, while the plaintext message, i.e. the thing the example allegedly wants to keep secret, is apparently fine to sit outside the whole `secret` machinery and remain in memory potentially "forever".

I get that the example is simplified (what it should actually protect is the long-term key, not the ephemeral one)... so, yeah, it's just a bad example.

teeray · yesterday at 8:43 PM

I wonder if people will start using this as magic security sauce.

_1tan · yesterday at 10:01 PM

Seems neat, anything similar in Java?

maxloh · yesterday at 5:02 PM

> The new runtime/secret package lets you run a function in secret mode. After the function finishes, it immediately erases (zeroes out) the registers and stack it used.

I don't understand. Why do you need it in a garbage-collected language?

My impression was that you cannot access registers at all in these languages; that's handled by the compiler instead.

burnt-resistor · yesterday at 9:48 PM

Consumer-grade hardware generally lacks real confidentiality-assurance features. Such a software feature implemented in user space is moot without the ability to control context switching, rendering it mostly security theater. Security-critical bits should be done in a dedicated crypto processor that has tamper self-zeroing and self-contained RAM, or at the very least in the kernel, outside the reach of user-space processes. No matter how much marketing or blog hype is offered, it's lipstick on a pig. They've essentially implemented a soft, insecure HSM.

Big thumbs down from me.

jeffrallen · yesterday at 8:57 PM

Wow, this is so neat. I spent some time thinking about this problem years ago, and never thought of such an elegant solution.

leoh · yesterday at 8:25 PM

Kind of stupid it didn’t have something like this to begin with tbh. It really is an incredible oversight when one steps back. I am fully ready to be downvoted to hell for this, but rust ftw.
