> Nobody forces you to use a real Unix timestamp.
Besides the UUIDv7 specification, that is? Otherwise you have some arbitrary kind of UUID.
> I would not count on the first 48 bits being a "real" timestamp.
I agree; that is exactly the hazard under discussion: it comes from encoding something that may or may not be real data into an otherwise opaque identifier.
I personally don't agree as dogmatically with the grandparent post that extraneous data should _not_ be incorporated into primary-key identifiers, but I also disagree that "just use UUIDv7 and treat UUIDs as opaque" is an entirely workable solution either.
I mean, any 32-bit unsigned integer is a valid Unix timestamp up until 7 February 2106, and, by extension, so is any u64, for a far longer time.
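To illustrate (Python; `0xDEADBEEF` is just an arbitrary bit pattern picked for the demo):

```python
from datetime import datetime, timezone

# An arbitrary 32-bit pattern still decodes to a plausible-looking date.
print(datetime.fromtimestamp(0xDEADBEEF, tz=timezone.utc))
# -> a moment in 2088, well within the unsigned 32-bit range
```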
The only promise of Unix timestamps is that they never go backwards; they always increase. That is a property of a sequence of UUIDs, not of any particular instance. At most, one might argue that an "utterly valid" UUIDv7 should not contain a timestamp from the far future. But I don't see why it can't be any time in the past, as long as the timestamp part does not decrease.
The timestamp aspect may be part of an additional interface contract, e.g. "we guarantee that this value is a UUIDv7 whose UTC timestamp is no more than a second off." But I assume that most sane engineers won't offer such a guarantee. The useful guarantee is the non-decreasing nature of the prefix, which allows for sorting.
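If someone did want that sorting guarantee without trusting the clock, here's a minimal sketch of what it could look like (Python; `uuid7_monotonic` is a hypothetical helper, not a stdlib function):

```python
import os
import threading
import time
import uuid

_lock = threading.Lock()
_last_ms = 0

def uuid7_monotonic() -> uuid.UUID:
    """Build a UUIDv7 whose 48-bit millisecond prefix never
    decreases, even if the wall clock steps backwards."""
    global _last_ms
    with _lock:
        now_ms = time.time_ns() // 1_000_000
        # Clamp: never emit a smaller timestamp than the last one.
        _last_ms = max(_last_ms, now_ms)
        ms = _last_ms
    value = (ms << 80) | int.from_bytes(os.urandom(10), "big")
    # Stamp the version (7) and variant (0b10) bits per RFC 9562.
    value = (value & ~(0xF << 76)) | (0x7 << 76)
    value = (value & ~(0x3 << 62)) | (0x2 << 62)
    return uuid.UUID(int=value)
```

RFC 9562 also describes counter schemes for ordering within the same millisecond; this sketch only preserves the non-decreasing prefix.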
That is like the HTML specification -- nobody ever puts up a web page that is not conformant. ;p
The idea behind putting a timestamp prefix on the ID was B-tree efficiency. But lots of people generate IDs client-side, where you can't trust the clock, and that shouldn't matter, because it is just an ID, not a way of registering time.
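To make the B-tree point concrete, a small demo (Python; `uuid7_prefix_ms` and `_demo_uuid7` are names made up for the demo) showing that sorting by raw bytes keeps the timestamp prefixes in order, which is the locality the index benefits from regardless of whether the embedded time is trustworthy:

```python
import os
import time
import uuid

def uuid7_prefix_ms(u: uuid.UUID) -> int:
    # The top 48 bits of a UUIDv7 are the millisecond timestamp.
    return u.int >> 80

def _demo_uuid7() -> uuid.UUID:
    # A v7-shaped value for the demo (version/variant bits omitted,
    # since only the prefix matters here).
    ms = time.time_ns() // 1_000_000
    return uuid.UUID(int=(ms << 80) | int.from_bytes(os.urandom(10), "big"))

ids = [_demo_uuid7() for _ in range(1000)]
# Byte order agrees with prefix order, so fresh inserts cluster at
# the right edge of the index instead of landing on random pages.
prefixes = [uuid7_prefix_ms(u) for u in sorted(ids, key=lambda u: u.bytes)]
assert prefixes == sorted(prefixes)
```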