These are a good set of principles that any company (or individual) can follow to guide how they use AI.
How can we train and create AIs with diverse creative viewpoints? The flexibility and creativity of AIs, or lack thereof, should guide the proper principles for using them.
In the long term I am at least certain that AI can emulate anything humans do en masse, where there is training data, but without unguided self-evolution, I don't see them solving truly novel problems. They still fail to write coherent code if you go a little out of the training distribution, in my experience, and that is a pretty easy domain, all things considered.
I work in a corporate setting that has been working on a "strategy rebrand" for over a year now, and despite numerous meetings, endless PowerPoints, and god knows how much money to consultants, I still have no idea what any of this has to do with my work.
CERN was founded after WW2 in Europe, and like all major European institutions founded at the time, it was meant to be a peaceful institution.
Classified projects obviously have stricter rules, such as air gaps, but sometimes the limits are a bit fuzzy, as with a non-classified project that supports a classified one. And I may be wrong, but academics don't seem to be the type who are good at keeping secrets or who see the security implications of their actions. Which is a good thing in my book: science is about sharing, not keeping secrets! So "no AI for military projects" could be a step in that direction.
"Human oversight: The use of AI must always remain under human control. Its functioning and outputs must be consistently and critically assessed and validated by a human."

And with testing and other services, I guess human oversight can be reduced to _looking at the dials_ for the green and red lights?
It's funny how many official policies leave me thinking it's a corporate cover-your-ass policy, and that if they really meant it they would have found a much stronger and plainer way to say it.
They endorse limited trust, not exactly a foreign concept to anyone who's taken a closer look at an older loaf of bread before cutting a slice to eat.
This is critical to understand if the mandate to use AI comes from the top: make sure to communicate, from day 1, that you are using AI as mandated but that productivity is not increasing as mandated. Play dumb; protect yourself from "if it's not working out then you are using it wrong" attacks.
‘CERN uses 1.3 terawatt hours of electricity annually. That’s enough power to fuel 300,000 homes for a year in the United Kingdom.’ [2]
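A quick back-of-envelope check of the quoted figure (the 1.3 TWh and 300,000-home numbers are from the quote; the comparison to typical UK household usage is my own rough estimate):

```python
# Sanity check: spread CERN's quoted annual consumption over 300,000 homes.
cern_annual_wh = 1.3e12   # 1.3 terawatt-hours, expressed in watt-hours
homes = 300_000

per_home_kwh = cern_annual_wh / homes / 1_000  # watt-hours -> kilowatt-hours
print(f"{per_home_kwh:,.0f} kWh per home per year")  # ~4,333 kWh
```

That works out to roughly 4,300 kWh per household per year, which is in the right ballpark for UK household electricity use, so the quote's framing is at least internally plausible.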
I think AI is the least of their problems, seeing as they burn a lot of trees for the sake of largely impractical pure knowledge.
[1] https://home.web.cern.ch/news/official-news/knowledge-sharin... [2] https://home.cern/science/engineering/powering-cern
Also, the web was invented at CERN.
Far less power than those projected gigawatt data centers that are surely the one thing keeping AI companies from breaking even.