EU AI regulation isn't there to stop AI innovation; it's there to restrict where and when AI can be used in decisions that affect people. For example, you can't deny someone healthcare, a bank account, a rental, unemployment payments, or a job just because "computer says NO"[1].
I don't understand how people can be against this kind of regulation, especially knowing how biased and discriminatory AI can be made to be. It's also a convenient scapegoat for poor policies implemented by lazy people in charge: "you see, your honor, it wasn't our policies and implementation that were discriminatory and ruined lives, it was the AI's fault, not ours".
The 2019 Directive on Copyright in the Digital Single Market, articles 3 and 4. The two regulations kinda complement each other.
PS: by private I mean licensed under any license other than "free for all", or completely private.
1) commenters who read the article and are generally in favor, seeing it as neither vague nor broad, and celebrating it instead as targeted legislation for current problems that can be updated.
2) commenters who did not read the article, and are having exactly the knee-jerk reaction the person you replied to is describing.
Here are some examples of the second sort of comment:
> EU legislators are totally detached from reality, it can be seen that they do not understand what is the matter with AI, for them it is just "another IT tool" that can be "regulated". As always: US innovates, EU regulates.
> EU tech legislation is comical at this point. A bunch of rules that almost nobody follows and at best they fine FAANG companies a few hours of revenue.
Note how neither actually mentions anything substantial beyond the headline.
I've had it on my list to try integrating Hume.ai (https://www.hume.ai/) into a prototype educational environment I've been playing with. The entirety of their product is emotion detection, so this must be concerning for them.
My own desire is to experiment with something that is entirely complementary to the learner, not coercive, guided by the learner and not providing any external assessment. In this context I feel some ethical confidence in using a wide array of inputs, including emotional assessment. But obviously I see how this could also be misused, or even how what I am experimenting with could be redirected in small ways to break ethical boundaries.
While Hume is a separate stack dedicated to emotional perception, this technology is also embedded elsewhere. GPT's vision capabilities are pretty good at interpreting expressions. If LLMs gain audio abilities, they might be even better at emotion perception. I don't think you can really separate audio input from emotional perception, and it's not clear whether those emotional markers are intentional or unintentional cues.
> The notion of emotion recognition system for the purpose of this regulation should be defined as an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data.
Elsewhere it specifically calls out "emotion recognition" as being of "limited risk" (calling for transparency), and elsewhere it kind of implies it is "high risk" (as being part of the "annex"), though maybe it's just calling out the use of emotion recognition in those high-risk areas (e.g., credit scores).
But it doesn't seem to actually define "emotion recognition." (Though someone else says it involves biometric data, which seems in line with everything else in the regulation.)
All that said, it seems like under the law you could actually make emotion recognition systems, even for education, it's just that education institutions and workplaces couldn't use them. (Though that's a pretty big blocker for an educational tool!)
[1] https://www.europarl.europa.eu/topics/en/article/20230601STO...
Links to a 2019 article. It would probably be good to get some more recent numbers. I think even a ChatGPT wrapper "uses" AI, although such companies did not develop it and have no moat.
Can you explain why you think this is comical?
On reading the text, I'm not convinced that they actually are. Copyright of the training data is only mentioned once in the act that I can find, here:
> Any use of copyright protected content requires the authorization of the rightholder concerned unless relevant copyright exceptions and limitations apply. Directive (EU) 2019/790 introduced exceptions and limitations allowing reproductions and extractions of works or other subject matter, for the purposes of text and data mining, under certain conditions.
Initially "Any use of copyright protected content requires the authorization of the rightholder concerned" sounds like a strong anti-scraping stance, but then the "unless relevant copyright exceptions and limitations apply" makes it nothing more than a restatement of how copyright works in general. The question is whether any exceptions/limitations do apply, and the fact that they immediately point to the DSM directive's copyright exception for text and data mining implies they see it as sufficient for machine learning datasets.
The "certain conditions" essentially just means following robots.txt if it's for commercial purposes, which all scrapers I'm aware of already do regardless.