> Could you give some specific examples of AI regulations that you think would be good?
AI companies need to be held liable for the outputs of their models. Giving bad medical advice, producing buggy code, etc. should be things they can be sued for.
90% of the time I'm pro anything that causes a problem for the big corporations, but buggy code? C'mon.
It's a pile of numbers. People need to take some responsibility for the extent to which they act on its outputs. Suing OpenAI for bugs in the code is like suing a palm reader for a wrong prediction. You knew what you were getting into when you initiated the relationship.