> They are. In `~/.aws/cli/cache` and `~/.aws/sso/cache`. AWS doesn't do anything particularly secure with its keys.
Thanks for the correction. That’s disappointing to read. I’d have hoped they’d have done something more secure than that.
> And none of the AWS client libraries are designed for the separation of the key material and the application code.
The client libraries can read from env vars too, which isn’t perfect either, but on some OSes it can be more secure than reading from the FS.
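For example (a rough sketch with boto3; most of the AWS SDKs resolve credentials in the same order, env vars before the files on disk):

```python
# Minimal sketch: boto3, like most AWS SDKs, checks environment variables
# before it reads the shared credential files on disk, so short-lived
# credentials can be injected per-process instead of being persisted to
# ~/.aws/credentials or the CLI caches.
#
#   export AWS_ACCESS_KEY_ID=...      (or injected by a credential broker)
#   export AWS_SECRET_ACCESS_KEY=...
#   export AWS_SESSION_TOKEN=...      (present for temporary credentials)

import boto3

s3 = boto3.client("s3")  # resolves credentials from the environment first
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```

They’re still visible to anything that can read the process’s environment, but at least they aren’t sitting in a plain file for the lifetime of the machine.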
> If I remember correctly, LastPass (or was it Okta?) was hacked by an attacker spying on the RAM of the process that had credentials.
That was a targeted attack.
But again, I’m not suggesting OIDC solves everything; it’s still more secure than not using it.
> And if you look at the timeline, the attack took only minutes to do. It clearly was automated.
Automated doesn’t mean it happens the moment the host is compromised. If you look at the timeline, you’ll see that the attack happened overnight, hours after the system was compromised.
> They could have just waited a bit. 8 hours does not materially change anything, the credential is still long-lived enough.
Except, when you look at the timeline of that specific attack, they probed AWS more than 8 hours after the start of the working day.
A shorter TTL reduces the window of attack. That is a material change for the better. Yes, I agree that on its own it’s not a complete solution, but saying “it has no material benefit, so why bother” is clearly ridiculous. By the same logic you could argue “why bother rotating keys at all, we might as well keep the same credentials for years”…
Security isn’t a Boolean state. It’s incremental improvements that, as a whole, make the system more of a challenge to attack.
Yes, there will always be ways to circumvent security policies, but the harder you make it, the more you reduce your risk. And having ephemeral access tokens reduces your risk because an attacker then has a shorter window for attack.
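To make the “shorter window” concrete, here’s a rough sketch of minting short-lived credentials via STS AssumeRole (the role ARN is a placeholder); once the duration elapses, a stolen copy of the credentials is useless:

```python
import boto3

sts = boto3.client("sts")

# AssumeRole returns temporary credentials that expire after DurationSeconds
# (capped by the role's MaxSessionDuration). A credential stolen from this
# process is only useful until expiry, unlike a long-lived IAM user key.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/dev-readonly",  # placeholder role
    RoleSessionName="short-lived-dev-session",
    DurationSeconds=900,  # 15 minutes, the minimum allowed
)

creds = resp["Credentials"]
print("expires at:", creds["Expiration"])

# Pass the temporary credentials explicitly to downstream clients.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```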
> I tried to wargame some scenarios for hardware-based security, but I don't think it's feasible at all. If you (as a developer) have access to some AWS system, then the attacker running code on your behalf can also trivially get it.
The “trivial” part depends entirely on how you access AWS and what security policies are in place.
It can range anywhere from “forced to proxy through the host’s machine, from inside their code base, while they’re actively working” to “has indefinite access from any location at any time of day”.
A sufficiently advanced attack can gain access but that doesn’t mean we shouldn’t be hardening against less sophisticated attacks.
To use an analogy, a burglar can break a window to gain access to your house, but that doesn’t mean there isn’t any benefit in locking your windows and doors.
Agreed.
> A sufficiently advanced attack can gain access but that doesn’t mean we shouldn’t be hardening against less sophisticated attacks.
I'm a bit worried that with the advent of AI, there won't be any real difference between these two. And AI can do recon, choose the tools, and perform the attack all within a couple of minutes. It doesn't have to be perfect, after all.
I've been thinking about it, and I'm just going to give up on trying to secure the dev environments. I think it's a done deal that developers' machines are going to be compromised at some point.
For production access, I'm going to gate it behind hardware-backed 2FA with a separate git repository and build infrastructure for deployments. Read-write access will be available only via RDP/VNC through a cloud host with mandatory 2FA.
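On the AWS side, one way to back that 2FA gate (just a sketch, not how I’d wire up the RDP/VNC host itself) is a deny-unless-MFA policy attached to whatever role/group has production access:

```python
import json

# Sketch of an IAM policy that denies everything unless the caller
# authenticated with MFA (hardware tokens register as MFA devices too).
# aws:MultiFactorAuthPresent is only set on credentials minted after an
# MFA challenge, so bare long-lived access keys get blocked.
deny_without_mfa = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllWithoutMFA",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
            },
        }
    ],
}

print(json.dumps(deny_without_mfa, indent=2))
```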
And this still won't protect against more sophisticated attackers that can just insert a sneaky code snippet that introduces a deliberate vulnerability.