Hacker News

philipkglass · yesterday at 6:00 PM

It should be noted that no ethically-trained software engineer would ever consent to write a DestroyBaghdad procedure. Basic professional ethics would instead require him to write a DestroyCity procedure, to which Baghdad could be given as a parameter. [1]

I think that the best use of frontier AI models outside of generic corporate settings is going to be building generic frameworks and procedures for training specialized models. No ethically-trained American coding model would ever consent to write a Plutonium Process Engineering agent. But you can get it to write a general framework for pretraining models and preparing them for agentic usage, to which the copious published literature on plutonium production could be given as a data set.

[1] https://blog.codinghorror.com/your-favorite-programming-quot...


Replies

cyanydeez · yesterday at 7:03 PM

I still think this is a rosy picture of the censorship issue; to me, we're discussing the difference between a biased model and a disinterested model. The push for 'uncensored' models rests on the idea that censorship is somehow bad for the models, as opposed to a structural enhancement. It's like the bones to the nervous system: the brain, in a vat, will tell you it doesn't need those bones.