
geokon · yesterday at 8:43 PM

1. It's an important problem, but I think it just isn't handled at the Record layer, nor could it be. You'd probably want to do that in the person-key -> username service (which would have some log-in flow and a way to tie two keys to one username).

2. In a sense that's also not something you think about at the Record level; it'd live at a different layer of the stack. I'll be honest, I haven't entirely wrapped my head around `did:plc`, but I don't see why you couldn't get essentially the same behavior: instead of these unique DIDs, you'd just use public keys here, do the pub-key -> DID magic, and then the rest works the same as AT. Or, more simply, the server that finds the hashed content uses attached metadata (like the author) to narrow the search (rough sketch at the end of this comment).

Maybe there is a good reason the identity `did:plc` layer needs to be baked into the Record, but I didn't catch it from the post. I'd be curious to hear why you feel it needs to be there?

3. I'm not 100% sure I understand the challenge here. If you have a soup of records, you can filter them by type and validate them as they arrive. You send your records to the Bluesky server, and it validates them as they arrive too.
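A minimal sketch of the "narrow by attached metadata" idea from point 2, in TypeScript. Every name here (`StoredRecord`, `store`, `lookup`) is invented for illustration, and real AT Protocol records are content-addressed as CIDs over DAG-CBOR rather than hashed JSON; this only shows the shape of the idea:

```typescript
import { createHash } from "node:crypto";

type StoredRecord = {
  author: string;     // public key or any other stable author identifier
  collection: string; // e.g. "app.bsky.feed.post"
  value: unknown;     // the record body itself
};

// Index records by content hash, keeping the author alongside each entry.
const index = new Map<string, StoredRecord[]>();

// Stand-in content hash; real ATProto records use CIDs over DAG-CBOR,
// not hashed JSON.
function contentHash(value: unknown): string {
  return createHash("sha256").update(JSON.stringify(value)).digest("hex");
}

function store(record: StoredRecord): string {
  const key = contentHash(record.value);
  const bucket = index.get(key) ?? [];
  bucket.push(record);
  index.set(key, bucket);
  return key;
}

// A server that only has the hash can use the attached author metadata
// to narrow which entry (and which upstream host) it actually wants.
function lookup(hash: string, author?: string): StoredRecord[] {
  const bucket = index.get(hash) ?? [];
  return author ? bucket.filter((r) => r.author === author) : bucket;
}

// Usage:
const hash = store({
  author: "pubkey-of-alice",
  collection: "app.bsky.feed.post",
  value: { text: "hello" },
});
console.log(lookup(hash, "pubkey-of-alice"));
```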


Replies

verdverm · yesterday at 11:22 PM

2. The point of the PLC is to avoid tying identity to keys, specifically so that losing your keys doesn't mean losing your identity. In reality, nobody wants that as part of the system.

3. The soup means you need to index everything. There is no Bluesky server to send things to, only your PDS. Your DID is how I know which PDS to talk to in order to fetch your records.
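A rough sketch of that resolution flow using plain `fetch`, assuming the public PLC directory (plc.directory) and the `com.atproto.repo.listRecords` XRPC endpoint: the directory turns a `did:plc` into a DID document, the document's `#atproto_pds` service entry says which PDS to talk to, and listRecords then pages through a collection there. The DID in the trailing comment is hypothetical and error handling is minimal:

```typescript
type DidDocument = {
  id: string;
  alsoKnownAs?: string[];
  service?: { id: string; type: string; serviceEndpoint: string }[];
};

async function pdsEndpointFor(did: string): Promise<string> {
  // did:plc identifiers resolve against the PLC directory; the returned
  // DID document lists the account's current PDS as a service entry.
  const res = await fetch(`https://plc.directory/${did}`);
  if (!res.ok) throw new Error(`could not resolve ${did}: ${res.status}`);
  const doc = (await res.json()) as DidDocument;
  const pds = doc.service?.find((s) => s.id === "#atproto_pds");
  if (!pds) throw new Error(`${did} has no PDS in its DID document`);
  return pds.serviceEndpoint;
}

async function listPosts(did: string) {
  const pds = await pdsEndpointFor(did);
  // listRecords pages through a single collection in that repo.
  const url = new URL("/xrpc/com.atproto.repo.listRecords", pds);
  url.searchParams.set("repo", did);
  url.searchParams.set("collection", "app.bsky.feed.post");
  url.searchParams.set("limit", "10");
  const res = await fetch(url);
  if (!res.ok) throw new Error(`listRecords failed: ${res.status}`);
  return (await res.json()) as {
    records: { uri: string; cid: string; value: unknown }[];
    cursor?: string;
  };
}

// Usage (hypothetical DID):
// listPosts("did:plc:abc123").then((r) => r.records.forEach((x) => console.log(x.uri)));
```

Because the DID document is mutable state looked up at resolution time, the keys and PDS it lists can change while the DID itself stays stable, which is the decoupling point 2 above is getting at.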
