About 20 years ago, when I was in university, I was looking for part-time work, and posting links in the replies of random blogs was one of the jobs that could get you some money. The job came with Excel sheets listing where to post those links and what to post. I was more interested in automating the process. There was a CAPTCHA involved somewhere in it. We didn't see it as spamming; it was an 'ad posting' job or something.
I don't remember a lot; this just reminded me of my time doing it. I don't remember actually posting many links myself, because it was too boring: lots of manual work for not much benefit.
I knew many link spammers circa 2008, and for a while people were excited by XRumer
https://en.wikipedia.org/wiki/XRumer
which was a lot better than other products on the market, solved difficult problems like CAPTCHAs and email verification links, and was famous for a "conversational" advertising campaign that generated results like
https://www.garagejournal.com/forum/threads/give-me-link-to-...
"Remember, there are no technological solutions to social problems."
is something I want to counter with "there are no social solutions to technological problems". The looming situation pointed out by the Club of Rome in 1973 (https://en.wikipedia.org/wiki/The_Limits_to_Growth)
would be difficult enough to solve even in a socially cohesive society run by philosopher kings. Practically, you have a choice between democracies, which have a probability of zero of being adequate to the task (it runs against the axioms of political science: like a perpetual motion machine that violates the first and second laws of thermodynamics, until the old professor chimes in and says it must violate the third too), and autocracies, which might get lucky 10⁻¹² of the time. Even if the tech fix [1] has only a 10⁻³ chance of successfully kicking the can down the road, I'd take that chance.
[1] Say: a liquid salt (not liquid metal) fast breeder reactor with a supercritical CO2 power cycle.
This has also been absolutely rampant on Reddit in recent months.
This is a big problem for WordPress, but custom engines with simple client-side checks (JS-based) get close to zero spam. Those spammers use technology-fingerprinting services to obtain lists of blogs, and they look for popular blog engines only.
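A minimal sketch of the kind of JS-based check described, under my own assumptions (the field name, prefix, and timing threshold are all hypothetical): the page's JavaScript fills in a hidden field that bots POSTing the form directly never set, and the server also rejects submissions that arrive implausibly fast.

```javascript
// Client side (conceptually): a hidden "proof" field is populated by
// script after the page renders, e.g.
//
//   <input type="hidden" name="js_proof" value="">
//   <script>
//     document.querySelector('[name=js_proof]').value = 'human-' + Date.now();
//   </script>
//
// Bots that scrape the form and POST it directly never run that script.

// Server side: accept the comment only if the proof field is present
// and the form wasn't submitted suspiciously fast (threshold is a guess).
function looksLikeHuman(form, renderedAtMs, submittedAtMs) {
  if (!form.js_proof || !form.js_proof.startsWith('human-')) return false;
  // A real visitor takes at least a few seconds to write a comment.
  if (submittedAtMs - renderedAtMs < 3000) return false;
  return true;
}

// A direct-POST bot omits the field entirely and is rejected.
console.log(looksLikeHuman({}, 0, 10000));                      // false
// A browser that ran the script and took 10s passes.
console.log(looksLikeHuman({ js_proof: 'human-1' }, 0, 10000)); // true
```

Because the check is custom to one site, fingerprinting services that only target popular engines never bother to script around it, which matches the near-zero-spam experience described above.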
Ironically, one reply to the blog post is... spam:
Jack Beagle @blog the ones in your screenshot are pretty good because they are a bit more conversational. I use <product> myself because generally these types of spam messages will be trying to promote something specific but outside of the second message in your example it might have still snuck through. As the LLMs get better the spam messages will certainly get better.
Nice. I run a site that depends on user submitted content, and it's really interesting to observe how some people try to get around the guardrails. Not sure if your tool does this, but I would perform some additional checks for comments that have links in them.
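One way such an extra check for link-bearing comments might look (a sketch; the regex and threshold are my own assumptions, not anything from the tool discussed): count the URLs in a comment and flag it for review once the count crosses a limit.

```javascript
// Hypothetical extra check: extract URLs from a comment and flag
// comments that carry more links than a configurable threshold.
function linkCheck(comment, maxLinks = 1) {
  // Naive URL matcher; a production system would use a real parser.
  const urls = comment.match(/https?:\/\/\S+/g) || [];
  return {
    links: urls.length,
    needsReview: urls.length > maxLinks,
  };
}

console.log(linkCheck('great post, thanks!').needsReview);  // false
console.log(
  linkCheck('buy here http://a.example and http://b.example').needsReview
);                                                          // true
```

Flagging rather than outright rejecting keeps legitimate link-sharing commenters from being silently censored.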
This has been a thing since blogs became widespread 25+ years ago, especially with the advent of WordPress. It was even a "commonly accepted" SEO tactic for a while.
The post timing is the main giveaway; surely it wouldn't be that hard to space out these spam posts. The volume of automated comments being spammed across all social platforms isn't quite at a tipping point yet, but it has increased significantly.
Bots would win over all anti-spam, anti-slop measures. All blog posts and comments everywhere would be filled with spam and slop. That's when humanity turns its head away from screens, back toward other humans nearby, and starts talking to each other, while the bot-infested ocean of slop and spam keeps bubbling.
The scariest part is that humans are starting to use AI to generate spam comments, which in turn get used to train the models. Will the language capabilities of these models just keep getting worse?
Feels like we are getting closer and closer to https://xkcd.com/810/
Any reply that just pads out the comment with a summary of TFA looks like spam. There's no inkling of an excuse for making the comment, so it just has to regurgitate what TFA is about.
Text generation is now cheap, so I expect this problem to worsen. I hate to write it, but on platforms that aspire to be a modern agora, I don't see any solution other than identity verification ...
I also see a ton of this here on HN as the political topics have ramped up.
Not enough people flag those when the content aligns with their bias. It's even less likely to get flagged when it's a double whammy of politics and AI. Being loosely about AI shouldn't give it a free pass.
I subscribe to a handful of investment-related YouTube channels. This pattern has been common for years. A bot will reply with a comment loosely related to the video, about how something worked for them. Another bot will reply asking how they did that. Another bot (not the original commenter) will reply that they worked with so-and-so or invested in such-and-such, and then there will be maybe four or five more comments responding to that. All obvious bot accounts.
It's obvious on these channels because genuine comments rarely attract replies (when they do, it's almost always from the channel owner), so these long bot reply chains stand out. It's so obvious, in fact, that I'm surprised YouTube hasn't done something to address it.