This is one of those things that don't translate well to legal reality, because then you have to define what an "algorithm" is.
Is adding advertisements an algorithm?
Is including likes an algorithm?
Is automatically starting the next video after a previous one has finished an algorithm?
Is infinite scroll an algorithm?
Etc
This is some kind of a meme where people believe things can’t be defined in legal terms and therefore can’t be regulated. These people are usually not lawyers.
Does anyone know where it’s coming from? I can certainly believe that incompetent jurisdictions have a ton of issues with people misapplying the law and using loopholes.
"By algorithm" can be easily defined.
An easy benchmark to set up: any feed that displays data in a way other than the following is considered an editorial choice, and the platform is thus liable as a publisher:
1. In chronological order, filtered only based on user-selected options.
2. In any other order explicitly selected by the user.
An exception can be made to allow filtering out content that violates the platform's terms and conditions.
Alternatively there can be no exception, effectively making these platforms unworkable. This is also a choice. We do not need these platforms, including this one.
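To make that benchmark concrete, here's a minimal sketch of a feed that would pass it; the data model and function names are made up for illustration, not taken from any real platform:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: float  # unix time
    text: str

def compliant_feed(posts, user_filters=(), user_sort_key=None):
    """Feed satisfying the proposed benchmark: only user-selected
    filters are applied, and ordering is reverse-chronological unless
    the user explicitly chose another key."""
    # 1. Filter only on options the user explicitly selected.
    visible = [p for p in posts if all(f(p) for f in user_filters)]
    # 2. Chronological order, or an order the user explicitly picked.
    key = user_sort_key or (lambda p: p.timestamp)
    return sorted(visible, key=key, reverse=user_sort_key is None)
```

Anything outside that shape (engagement ranking, injected recommendations) would count as an editorial choice under the proposal.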
"Algorithm" is a method of selecting the content to display. You're listing presentation types, not selection types. Presentation has nothing to do with supervised selection. Selecting the next video in the infinite scroll would be the algorithm, not the infinite scrolling mechanism itself.
Instead, a regulation could mandate the administration of an anonymized, unbiased eval test at the end of every week/bi-week/month, just like instruments for psych evaluation (e.g. "Do you feel your <mental-health-metric> has become worse in the last <time-period>, on <scale>? Did you have <mental-health-marker> after watching content on social media?").
The regulation could then mandate that, after calibration and correction, the feed be pulled back by retraining the algorithm to adjust it in a rapid A/B test.
This is all doable by the companies themselves, but since they won't, the key is to mandate it and publish the aggregate results regularly, like making it part of the quarterly shareholder SEC reporting requirements or something.
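A hedged sketch of how the aggregate trigger might work; every name, scale, and threshold here is an assumption, purely to make the mechanism concrete:

```python
import statistics

def needs_pullback(responses, baseline, threshold=0.2):
    """Hypothetical periodic check: `responses` are anonymized 1-5
    scores (5 = metric got much worse). Flag the feed for an
    adjustment cycle when the aggregate drifts above the calibrated
    baseline by more than `threshold`."""
    return statistics.mean(responses) - baseline > threshold

# Example: baseline calibrated at 2.8; this week's responses average 3.4,
# so the platform would have to run its corrective A/B retraining cycle.
print(needs_pullback([3, 4, 3, 4, 3], baseline=2.8))  # True
```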
Everything other than sorting the list of entities by a standard measurement unit (time, length, mass, temperature, amount) needs to be covered by this law.
The moment you add other entities to the list (e.g. ads in between posts), it's also subject to the same restrictions.
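Under that rule, the line might fall roughly like this (a hypothetical sketch, not legal language):

```python
def sort_by_standard_unit(items, unit_key):
    """Compliant: ordering only by a standard measurement (time,
    length, mass, temperature, amount) the user can verify."""
    return sorted(items, key=unit_key)

def interleave_ads(posts, ads, every=4):
    """Covered by the rule: inserting ads changes the ordering itself,
    so the composite feed is no longer a pure sort and would be subject
    to the same restrictions as any other curated feed."""
    feed = []
    for i, post in enumerate(posts, start=1):
        feed.append(post)
        if i % every == 0 and ads:
            feed.append(ads.pop(0))
    return feed
```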
New York did a pretty good job in their law that limits addictive feeds. Here's what their law says:
> "Addictive feed" shall mean a website, online service, online application, or mobile application, or a portion thereof, in which multiple pieces of media generated or shared by users of a website, online service, online application, or mobile application, either concurrently or sequentially, are recommended, selected, or prioritized for display to a user based, in whole or in part, on information associated with the user or the user's device, unless any of the following conditions are met, alone or in combination with one another:
> (a) the recommendation, prioritization, or selection is based on information that is not persistently associated with the user or user's device, and does not concern the user's previous interactions with media generated or shared by other users;
> (b) the recommendation, prioritization, or selection is based on user-selected privacy or accessibility settings, or technical information concerning the user's device;
> (c) the user expressly and unambiguously requested the specific media, media by the author, creator, or poster of media the user has subscribed to, or media shared by users to a page or group the user has subscribed to, provided that the media is not recommended, selected, or prioritized for display based, in whole or in part, on other information associated with the user or the user's device that is not otherwise permissible under this subdivision;
> (d) the user expressly and unambiguously requested that specific media, media by a specified author, creator, or poster of media the user has subscribed to, or media shared by users to a page or group the user has subscribed to pursuant to paragraph (c) of this subdivision, be blocked, prioritized or deprioritized for display, provided that the media is not recommended, selected, or prioritized for display based, in whole or in part, on other information associated with the user or the user's device that is not otherwise permissible under this subdivision;
> (e) the media are direct and private communications;
> (f) the media are recommended, selected, or prioritized only in response to a specific search inquiry by the user;
> (g) the media recommended, selected, or prioritized for display is exclusively next in a pre-existing sequence from the same author, creator, poster, or source; or
> (h) the recommendation, prioritization, or selection is necessary to comply with the provisions of this article and any regulations promulgated pursuant to this article.
This doesn't differ much from the legal reality that I've seen. Terms need to be defined, yes. It will require work to do so. And that work should be done even if it's a bother.
Ok so then the "algorithm" must be made available to authorities (or even better, the public at large) and be approved or rejected by a court or under a law. Obviously an algorithm based on "engagement" or "narrative" should be rejected with prejudice every time.
I don't see a single difficult example here. The answer is "NO." It's strange that you couldn't even find one.
I mean "Is including likes an algorithm?" You might as well ask if having a dog in the video is an algorithm. Any question about "likes" would be if you're manipulating the video selection based on likes, or is the user given a control to manipulate the video selection based on likes. If it's you it's an algorithm. If it's the user, it's a control. If you lie about the likes, then it's an algorithm. If you're transparent about the likes, then it is a control.
The other ones aren't even worth discussing. You might as well ask if having a blue logo is an algorithm, or if Comic Sans is an algorithm. "It's all so complicated!"
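The algorithm-vs-control distinction drawn above fits in a few lines (again, a hypothetical sketch, with made-up names):

```python
# Control: the user explicitly picks the ordering; the platform just obeys.
def user_controlled_feed(posts, sort_key):  # sort_key chosen in the UI
    return sorted(posts, key=sort_key)

# Algorithm: the platform picks the ordering (e.g. by engagement score),
# independent of anything the user asked for.
def platform_ranked_feed(posts, engagement_score):
    return sorted(posts, key=engagement_score, reverse=True)
```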
-----
edit: that being said, the EU does not care about this issue at all, and has had plenty of mandate and plenty of time to have done something about it if it did. They are also going to say "it's all so complicated." Because their problem is the unpopularity of center-left neolib governments that are just barely holding on with extreme minority support through bureaucratic means, because they wrote the regulations. They want to keep what came for British Labour during the recent council elections from coming for them.
So I guarantee that content will somehow become an "algorithm." The goal is to keep people who don't like them from speaking to each other.
This kind of complex legislation already exists in many areas of the law, revenue collection being the most obvious one. We could choose to treat "societal harm" the way we treat "tax collection".
I'm not saying there aren't infinite edge cases and second-order effects, but we tolerate those already for many things. I'm not pretending this is simple or even desirable; I'm merely stating it's possible if we want to do it.
My biggest fear is that (like the UK Online Safety Act) this acts to favour the huge corporations, because they are the only ones that can afford a team of lawyers. Any legislation should aim to carve out exceptions to avoid indirectly helping monopolies.