The specs were derived from the code, not from the original requirements. So this is "we modeled what the code does, then found the code doesn't do what we modeled." That's circular unless the model captures intent that the code doesn't, and intent is exactly what you lose when you reverse-engineer specs. Would love to see this applied to a codebase where the original requirements still exist.
Still my all time favorite snippet of code.
TC BANKCALL # TEMPORARY, I HOPE HOPE HOPE
CADR STOPRATE # TEMPORARY, I HOPE HOPE HOPE
TC DOWNFLAG # PERMIT X-AXIS OVERRIDE
https://github.com/chrislgarry/Apollo-11/blob/master/Luminar...

Has someone verified this was an actual bug?
One of AI's strengths is definitely exploration, e.g. in finding bugs, but it still has a high false positive rate. Depending on the context, that matters or it won't.
Also, one has to be aware that there are a lot of bugs that AI won't find but humans would.
I don’t have the expertise to verify this bug actually happened, but I’m curious.
Software that ran on 4KB of memory and got humans to the moon still has undiscovered bugs in it. That says something about the complexity hiding in even the smallest codebases.
Is this bug the reason why the toilet malfunctioned?
Super interesting. I wish this article wasn’t written by an LLM though. It feels soulless and plastic.
An application of their specification language, https://juxt.github.io/allium/
It seems the difference between this and conventional specification languages is that Allium's specs are in natural language, and enforcement is by LLM. This places it in a middle ground between unstructured plan files, and formal specification languages. I can see this as a low friction way to improve code quality.
Fascinating read. Well done. Everyone involved in the Apollo program was amazing and had many unsung heroes.
> Rust’s ownership system makes lock leaks a compile-time error.
Rust specifically does not forbid deadlocks, including deadlocks caused by resource leaks. There are many ways in safe Rust to deliberately leak memory: either by creating reference count cycles, or by calling the explicit .leak() methods on various memory-allocating structures in std. It's also not entirely useless to do this: if you want a &'static reference from heap memory, Box::leak() does exactly that.
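A minimal sketch of that last point (function name is mine, not from the thread): leaking a heap allocation in entirely safe Rust to obtain a &'static reference.

```rust
// Box::leak deliberately leaks the allocation and hands back a reference
// with 'static lifetime. No unsafe is required; the memory is simply
// never freed.
fn make_static_str(s: String) -> &'static str {
    Box::leak(s.into_boxed_str())
}

fn main() {
    let s: &'static str = make_static_str(String::from("hello"));
    println!("{}", s);
}
```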
Now, that being said, actually writing code that holds a LockGuard forever is difficult, but that's mainly because the Rust type system is incomplete in ways that primarily inconvenience programmers without compromising the safety or meaning of programs. The borrow checker runs separately from type checking, so there's no way to write a type that both owns a lock and holds its guard at the same time. Only stack frames and async types, both generated by compiler magic, can own a LockGuard. You would have to spawn a thread that takes the lock and loops indefinitely[0].
[0] Panicking in the thread does not deadlock the lock. Rust's std locks are designed to mark themselves as poisoned if a LockGuard is unwound by a panic, and any attempt to lock them will yield an error instead of deadlocking. You can, of course, clear the poison condition in safe Rust if you are willing to recover from potentially inconsistent data half-written by a panicked thread. Most people just unwrap the lock error, though.
Someone please amend the title and add "using claude code" because that's customary nowadays.
This is so insightfully and powerfully written I had literal chills running down my spine by the end.
What a horrible world we live in where the author of great writing like this has to sit and be accused of "being AI slop" simply because they use grammar and rhetoric well.
Are there any consequences for the Artemis 2 mission (ironic)?
> The specs were derived from the code itself
Oh dear. I strongly suggest the author look "specification" up in a dictionary.
For anyone who liked this, I highly suggest you take a look at the CuriousMarc YouTube channel, where he chronicles lots of efforts to preserve and understand several parts of the Apollo AGC, with a team of really technically competent and passionate collaborators.
One of the more interesting things they have been working on is a potential re-interpretation of the infamous 1202 alarm. It is, as of this writing, popularly described as something related to nonsensical readings from a sensor which could be (and was) safely ignored during the actual moon landing. However, if I remember correctly, some of their investigation revealed that there were actually many conditions under which that alarm would have been extremely critical and would likely have doomed the astronauts. It is super fascinating.