Everybody seems to be missing the forest for the trees on this.
There is absolutely no "sign extension" in the C standard (go ahead, search it). "Sign extension" is a feature of some assembly instructions on some architectures; the C language itself has no such notion.
Citing integer promotion from the standard is justified, but it's just one part (perhaps even the smaller part) of the picture. The crucial bit is not quoted in the article: the specification of the bitwise shift operators. Namely:
> The integer promotions are performed on each of the operands. The type of the result is that of the promoted left operand. [...]
> The result of E1 << E2 is E1 left-shifted E2 bit positions; vacated bits are filled with zeros. If E1 has an unsigned type, the value of the result is E1×2^E2, reduced modulo one more than the maximum value representable in the result type. If E1 has a signed type and nonnegative value, and E1×2^E2 is representable in the result type, then that is the resulting value; otherwise, the behavior is undefined.
What happens here is that "base2" (of type uint8_t, which is "unsigned char" in this environment) gets promoted to "int", and then left-shifted by 24 bits. You get undefined behavior because, while "base2" (after promotion) has a signed type ("int") and nonnegative value, E1×2^E2 (i.e., base2 × 2^24) is NOT representable in the result type ("int").
What happens during the conversion to "uint64_t" afterwards is irrelevant; even the particulars of the sign bit of "int", and how you end up with a negative "int" from the shift, are irrelevant; you got your UB right inside the invalid left-shift. How said UB happens to materialize on this particular C implementation may perhaps be explained in terms of sign extension of the underlying ISA -- but do that separately; be absolutely clear about what is what.
The article fails to mention the root cause (violating the rules for the bitwise left-shift operator) and fails to name the key consequence (undefined behavior); instead, it leads with not-a-thing ("sign-extension bug in C"). I'm displeased.
BTW this bug (invalid left shift of a signed integer) is common, sadly.
It's incredibly common for people talking about C online or even in books (blog posts, side notes, tutorials, guides) to make mistakes like these.
C seems to be one of those languages where people think they know it based on prior and adjacent experience. But it is not a language which can be learned based on experience alone. The language is full of cases where things will go badly wrong in a way which is neither obvious nor immediately evident. The negative side effects of what you did often only become evident long after you "learn" it as something you "can" do.
If you want to write C for anything where any security, safety, or reliability requirement needs to be met, you should commit to this strategy: Do not write any code whose behaviour you are not absolutely certain you could justify by referencing the standard or, where you rely on a specific definition of implementation-defined, unspecified, or even undefined behaviour (e.g. -ftrapv), the implementation documentation.
If you cannot commit to such a (rightfully mentally arduous) policy, you have no business writing C.
The same can actually be applied to C++ and Bash.
That's very interesting; I'm only familiar with the C++ standard, where bit shifts are defined in terms of multiplications and divisions by powers of 2: https://eel.is/c++draft/expr.shift
So it seems in regard to bit shifts, C++ behaves slightly differently (it seems to have less UB) than C.
It was implementation-defined for shifting negative numbers, but the standard now (since C++20) specifies two's-complement behaviour for this and the related previously implementation-defined cases.
The root problem is actually that the C language allows implicit conversions from an unsigned type to a signed type and from a signed type to an unsigned type, and in certain contexts such implicit conversions are actually mandated by the standard, like in the buggy expression from the parent article.
It does not matter what the relationship between the sizes of such types is; there will always be values of the operand that cannot be represented in the result type.
Saying that the behavior is sometimes undefined is not acceptable. Any implicit conversion of this kind must be an error. Whenever a conversion between signed and unsigned or unsigned and signed is desired, it must be explicit.
This may be the worst mistake that has ever been made in the design of the C language and it has not been corrected even after 50 years.
Making this an error would indeed produce a deluge of error messages in many carelessly written legacy programs, but the conversion of such programs is trivial, and it is extremely likely that many of the cases where compilers currently signal no error can cause bugs in certain corner cases, like the one in the parent article.