I suspect that many people online are already conversing with GenAI without even realising it.
It was already bad enough that you might unknowingly be conversing with the same person pretending to be several different people across multiple accounts. But GenAI crosses a line by making it really cheap to influence narratives simply by spinning up bots. This is a problem for every social network, and I think the only way forward is to enforce verification of humanity.
I’m currently building a social network that only allows upvote/downvote and comments from real humans.
>I’m currently building a social network that only allows upvote/downvote and comments from real humans.
And how exactly do you do that? At the end of the day there is no such thing as a comment/vote from a real human; it's all mediated by digital devices. The moment the signal is digitized, a bot can take over the process.
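To make the "bot can take over" point concrete, here is a minimal sketch (Python, with a hypothetical voting endpoint and session token, not anything from the project above): once a verified human has a logged-in session, nothing in the transport layer stops a script from reusing it.

    # Sketch only: endpoint, payload shape, and token are hypothetical.
    # The request below is indistinguishable from what a verified
    # human's browser would send after a real click.
    import requests

    SESSION_TOKEN = "token-issued-after-human-verification"  # hypothetical

    def cast_vote(post_id: int, direction: str) -> None:
        requests.post(
            "https://example-social.net/api/vote",  # hypothetical endpoint
            json={"post_id": post_id, "direction": direction},
            headers={"Authorization": f"Bearer {SESSION_TOKEN}"},
            timeout=10,
        )

    # With one verified-human session, a bot can vote on thousands of posts.
    for post_id in range(1000):
        cast_vote(post_id, "up")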
I agree with the other commenters that you'll face a lot of challenges, but personally I hope you succeed. Sounds like an idea worth pursuing.
>I’m currently building a social network that only allows upvote/downvote and comments from real humans.
What's your method for detecting real humans?
Also, will your social network allow bots that are labelled as bots?
Good bot (/s)
I don't know that "real humans" is good enough. You can do plenty of manipulation on social networks by hiring lots of real humans to upvote/downvote/post what you want. It's not even expensive.
Dead Internet Theory https://en.wikipedia.org/wiki/Dead_Internet_theory