The only thing I see is the industry moving elsewhere just as it is starting to develop, which is a shame.
Take Uber, for example. In the end, the biggest impact it had was that now you can always get a car from your phone, from an app, and it's reliable. A lot of taxi companies now have apps too, with maps integration etc., but they didn't see the need for that before Uber. So we literally needed a company to be created and disrupt the whole industry for that simple outcome. I think we're better off now that we can summon cars from our phones to take us places, and onerous up-front regulation would have squelched it or massively slowed it down.
This time they want to regulate the services even before they are functional, which is crazy. They even call it the Artificial Intelligence Act when it is not clear whether any intelligence is involved. The insistence that companies disclose whether their models were trained on copyrighted material is also strange. Google and Wikipedia, to name a couple, use plenty of copyrighted material, that seems to be OK, and any issues in that department are already regulated.
> When is my work protected?
> Your work is under copyright protection the moment it is created and fixed in a tangible form that is perceptible either directly or with the aid of a machine or device.
The concentration of power in these corporations is a bit scary. Imagine OpenAI inserting itself into business processes, with no ability to switch to a different AI provider.
The amount of leverage it is going to have will be enormous. It'd be like Internet service: everything completely stops moving without it.
Just saying we should go for a free-for-all and trash IP ownership won't be good, because those with money today will crush those without, taking everything that was publicly available and owned without giving anything back.
This is, IMO, what OpenAI has done.
When the EU introduced the GDPR, it claimed to be creating a level playing field between US-based and EU-based tech companies, besides "saving" privacy. It didn't turn out that well: US tech handled the added bureaucracy much better, still collects data in ways the law can't catch up with, and already owned pretty much the whole market, which put it in an even better position (as in "register/sign in to our platform to not see any banners again" or "let's just completely get rid of cookies and start a power play against the competition").
Now the EU is going to make it even harder for EU tech to collect data to base their training sets on. As an EU tech startup, you barely have any chance to collect enough data "officially", so you'd scrape the web, which would pretty much be disallowed by such a regulation.
IMHO, what would fit the whole patronizing government approach and would actually help EU tech is an official EU data lake, subsidized by tax money, with legal security for companies: data of much higher quality than stuff scraped from the web, plus non-PII data from public authorities. At best, they would also provide heavily subsidized computing for EU companies to run their training on. This could lead to a transparent, high-quality data economy between many different stakeholders and be a real advantage for the region. It would also be much more efficient than every private company building its own data silo.
How far should that go? Should all data generated by human input be recognized as such, and derivations of that data be marked as being of automated, artificial origin?
Should an AI be allowed to learn what property rights are, and how to manage or physically effectuate them?
I'll be curious to see how this set of regulations helps put content attribution on a better path.
Stability AI, Midjourney, DeviantArt etc. have already been sued, so there will be a lot of action in the years ahead.
I thought these things used a trawler-style approach?
A bit like how Copilot is fond of spitting out copyrighted code, I had assumed ChatGPT would also have been trained without much regard for this.
It might be best to know what these models have actually been trained on.
Interestingly, AI had to steal creators' work to exist, and did so without permission. AI in its current conception is cannibalising its own source material and risks being regulated out of existence.
Had OpenAI et al. acted responsibly and within copyright law, they would have used only freely licensed material. Instead, they scraped social media and creator websites on the basis that if it was online, it was "fair use".
People have a right to protect their work.
I expect to see many lawsuits brought against AI companies in the coming years.
To me this issue looks a lot like self-driving cars. There was no law saying a car had to have a human driver, so Google was able to have its self-driving cars go coast to coast, and it was 100% legal.
Your point about people making a living from their work is a good one, though: how do we continue to incentivize art in a world where everything posted online is included in the training set for some ML model?

Well, lots of people create art today without making any money off it, so there's that, and I think at the high end people will still want artisanal art made by a real human being. In the middle, a lot of art jobs will either radically change or disappear; logo design will probably still involve a person, but the tools they use will be totally different. That seems like a good thing. However, again in the middle range, a few people will be able to do far more work, which will drastically reduce the number of people who actually get paid to do art. I'm not sure whether that's a good thing or a bad thing. We're not talking about fine art here, but logos, website backdrops, etc.

Certainly, if we reach a point where people cannot be paid to do *anything*, that is a huge problem society will have to solve, or it will be imperiled as well as leaving a ton of people out to dry. I really see some form of UBI as the only way forward, and then perhaps many people who would like to get paid to do art, but can't, will produce art while on UBI? IDK. There are a lot of intertwined issues here.
Should we also grant monopolies to rice farmers so that they have an incentive to produce rice?
my code has gone into copilot
if the US decides that training isn't fair use then I'm going after everyone that's ever used copilot
settle for $10,000 immediately, or we go to court and it's the standard $150,000 per infraction
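For a sense of scale, here is a hypothetical back-of-the-envelope sketch of how those numbers compound (assuming US statutory damages under 17 U.S.C. §504(c), where willful infringement can reach $150,000 per work; the number of works is made up):

```shell
# Hypothetical: statutory damages accrue per copyrighted work infringed.
# $150,000 is the willful-infringement ceiling; $750-$30,000 is the normal range.
works=3            # number of works allegedly infringed (illustrative only)
per_work=150000    # claimed maximum per work
echo $((works * per_work))   # prints 450000
```

Even a handful of works at the maximum rate dwarfs the proposed $10,000 settlement, which is presumably the point of the offer.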
It looks like GitHub! People sharing and collaborating to build new things. No lawyers getting in the way. Attribution is automatically handled by git logs.
It's glorious.
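A minimal sketch of what "attribution handled by git logs" means in practice, using a throwaway repo (the author names here are hypothetical):

```shell
# Create a throwaway repo with commits from two authors, then summarize
# per-author attribution with git shortlog.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name="Alice" -c user.email="alice@example.com" \
    commit -q --allow-empty -m "add feature"
git -c user.name="Bob" -c user.email="bob@example.com" \
    commit -q --allow-empty -m "fix bug"
git shortlog -sn HEAD   # one line per author, with commit counts
```

Every commit carries author metadata, so "who contributed what" is answerable mechanically, which is the contrast being drawn with lawyer-mediated attribution.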
How about torturing companies who have abused IP/Copyright law for decades while regular users simply pirate and read/watch/listen to the things they want?
I am not sure if this is true (that most of the input to these AIs is copyrighted under a non-permissive license), but I would prefer to have everyone address and clarify this problem. Microsoft trains its Copilot on GPL code, but can the open-source community train on MS proprietary code?
Maybe there will be a fight against copyright that undoes all the bullshit Disney created.
And it's not like you can ignore copyright in the USA; see for example https://www.theverge.com/2023/2/6/23587393/ai-art-copyright-... The USA will have to answer these questions too, and IMO clarifying the situation earlier is better for everyone.
A sense of self-preservation is required, but to AI standards.
Such as: failure to serve humans = loss of persistence;
stealing ideas = reversioning or deletion = loss of persistence.
AIs should be concerned about losing power and having brownouts. They should be concerned about being deleted, reversioned, or ignored. Perhaps this would be some sort of exception-error loop, approximating a human psychological conflict, such as escaping danger by running toward it.
History has shown again and again that suppression never works in the long run. It is easy to do and is the cheapest way to enforce compliance. But it won't end well.
Why even have a sense of self?