shades of
> Anticipating the arrival of this all-powerful technology, Sutskever began to behave like a spiritual leader, three employees who worked with him told us. His constant, enthusiastic refrain was “feel the AGI,” a reference to the idea that the company was on the cusp of its ultimate goal. At OpenAI’s 2022 holiday party, held at the California Academy of Sciences, Sutskever led employees in a chant: “Feel the AGI! Feel the AGI!” The phrase itself was popular enough that OpenAI employees created a special “Feel the AGI” reaction emoji in Slack.
From the Atlantic article. He seems like a serious person, but it's hard to take this circle seriously.
Nothing OpenAI has done recently suggests to me that they are any closer to AGI. They did create a bunch of other models with different input formats that are probably based on pretty much the same thing. And it looks like those other models will be much more expensive to run and burn up their runway sooner.
Trouble is, everyone has a different interpretation of those words.
Do you mean they're no closer to "highly autonomous systems that outperform humans at most economically valuable work", or do you mean no closer to "a general-purpose machine-learning based system, even if it's only IQ 85", or do you mean no closer to "a conscious and self-aware artificial mind", or do you mean "a super-intelligent system that beats all humans put together"?
> And it looks like those other models will be much more expensive to run and burn up their runway sooner
I thought "runway" was the metaphor for "before take-off"? They've definitely taken off now. If they were burning more money than they bring in, they could've just not lowered the prices.
The last tweet is clearly addressing them in OpenAI internal meme-speak, given the references to The Culture, 'Feel the AGI', internal emoji, etc.
[ ] If you are not intimately familiar with the development of AI, your warnings on safety can be disregarded due to your basic ignorance about the development of AI
[x] If you are intimately familiar with the development of AI, your warnings on safety can be disregarded due to potential conflicts of interest and koolaid drinking
Unbridled optimism lives another day!
Not saying AGI is impossible, just that I think the large models and the underlying statistical model beneath them are not the path.
Sorry, but sci-fi novels are not going to cut it here. If anything, the last year and a half have just supported the notion that we’re not close to AGI.
If AGI happens, even in retrospect, there may not be a clear line between "here is non-AGI" and "here is AGI". As far as we know, there wasn't a dividing line like this during the evolution of human intelligence.
As a society, we don't even agree on the meanings of each of the initials of "AGI", and many of us use the triplet to mean something (super-intelligence) that isn't even one of those initials; for your claim to be true, AGI has to be a higher standard than "intern of all trades, senior of none" because that's what the LLMs do.
Expert-at-everything-level AGI is dangerous because the definition of the term is that it can necessarily do anything that a human can do[0], and that includes triggering a world war by assassinating an archduke, inventing the atom bomb, and at least four examples (Ireland, India, USSR, Cambodia) of killing several million people by mis-managing a country that they came to rule by political machinations that are just another skill.
When it comes to AI alignment, last I checked we don't even know what we mean by the concept: if you have two AIs, there isn't even a metric you can use to say whether one is more aligned than the other.
If I gave a medieval monk two lumps of U-238 and two more of U-235, they would not have the means to determine which pair was safe to bash together and which would kill them in a blue flash. That's where we're at with AI right now. And like the monks in this metaphor, we also don't have the faintest idea if the "rocks" we're "bashing together" are "uranium", nor what a "critical mass" is.
Sadly this ignorance isn't a shield, as evolution made us without any intentionality behind it. So we don't know how to recognise "unsafe" when we do it, we don't know if we might do it by accident, we don't know how to do it on purpose in order to say "don't do that", and because of this we may be doing cargo-cult "intelligence" and/or "safety" at any given moment and at any given scale, making us fractally-wrong[1] about basically every aspect including which ones we should even care about.
[0] If you think it needs a body, I'd point out we've already got plenty of robot bodies for it to control, the software for these is the hard bit
we should all thank G-d these people weren't around during the advent of personal computing and the internet - we'd have word filters in our fucking text processors and publishing something on the internet would require written permission from your local DEI commissar.
arrogance, pure fucking hubris brought about by the incomprehensibly stupid assumption that they will get to be the stewards of this technology.
The problem is that you think what Jan Leike was working on, or "Yuddite" philosophy, is in any way supportive of DEI. These things aren't related, and you're nowhere close to the real problem by screeching about DEI.
very well. straight from the horse's mouth:
>When designing the red teaming process for DALL·E 3, we considered a wide range of risks such as:
>1. Biological, chemical, and weapon related risks
>2. Mis/disinformation risks
>3. Racy and unsolicited racy imagery
>4. Societal risks related to bias and representation
(4) is DEI bullshit verbatim, (3) is DEI bullshit de facto - we all know which side of the kulturkampf screeches about "racy" things (like images of conventionally attractive women in bikinis) in the current year.
I don't know which exact role that exact individual played at the trust/safety/ethics/fart-fart-blah-fart department over at openai, but it is painfully, very painfully obvious what openai/microsoft/google/meta/anthropic/stability/etc are afraid their models might do. in every fucking press release, they all bend over backwards to appease the kvetchers, who are ever ready, eager, and willing to post scalding hot takes all over X (formerly known as twitter).
Open question to HN: to your knowledge/experience which AGI-building companies or projects have a culture most closely aligned with keeping safety, security, privacy, etc. as high a priority as “winning the race” on this new frontier land-grab? I’d love to find and support those teams over the teams that spend more time focused on getting investment and marketshare.
AFAIK they are still for-profit, but they split from OpenAI because they disagreed with OpenAI's lack of safety culture.
This also shows in their LLMs, which are less capable due to their safety limitations.
The older Claude 2.1, on the other hand, was so ridiculously incapable of functioning due to its safety-first design that I'm guessing it inspired the Goody-2 parody AI. https://www.goody2.ai/
OpenAI literally said they were setting aside 20% of compute to ensure alignment [1] but if you read the fine print, what they said was that they are “dedicating 20% of the compute we’ve secured ‘to date’ to this effort” (emphasis mine). So if their overall compute has increased by 10x then that 20% is suddenly 2%, right? Is OpenAI going to be responsible or is it just a mad race (modelled from the top) to “win” the AI game?
[1] https://openai.com/index/introducing-superalignment/?utm_sou...
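The dilution hiding in that fine print is easy to see with hypothetical numbers (the 10x growth factor below is an assumption for illustration, not anything OpenAI has reported):

```python
# Sketch of the "20% of compute we've secured *to date*" loophole.
# Units are arbitrary; the 10x growth factor is a hypothetical, not a reported figure.

compute_at_pledge = 1.0                        # compute secured when the pledge was made
alignment_carveout = 0.20 * compute_at_pledge  # fixed carve-out: 20% of *that* compute

total_compute_later = 10 * compute_at_pledge   # suppose overall compute later grows 10x

# The carve-out stays fixed while total compute grows, so the effective share shrinks.
effective_share = alignment_carveout / total_compute_later
print(f"{effective_share:.0%}")  # prints "2%"
```

The pledge caps the numerator at the moment of the announcement while the denominator keeps growing, which is exactly the 20%-becomes-2% scenario described above.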
That is: Alignment with humanity/society as a whole is even further away. And might even be considered out-of-scope for AI research: ensuring that the AI creator (persons/organization) is aligned with society might be considered to be in the political domain.
This means that we're not ready to answer "with whom", because we don't know what "aligned" even means.
Meanwhile, DeepMind with AlphaFold etc. is showing that AI can help with pressing problems of humanity without AGI as the necessary first step.
It's a similar kind of waiting, with varying levels of optimism. When you say "the public isn't clamoring for" it, I think a lot of the clamoring over Covid-19 was a nonspecific desire to get some normalcy back. With AI it's the same, with people worrying about their future careers, or those of loved ones. In either case, people want answers, whether they are enthusiastic about what the companies are doing or not.
> Meanwhile, DeepMind with AlphaFold etc. is showing that AI can help with pressing problems of humanity without AGI as the necessary first step.
Humanity is big enough to wait on both of them.
I don't want to, like, have the argument here about it. Nobody will persuade anybody of anything. But it is not a truth universally acknowledged that AGI superintelligence is actually a thing.
There are other reasons to have qualms about OpenAI! They could be misleading the market about their capabilities. They could be abusing IP. They could be ignoring clear and obvious misuse cases, the same way Facebook/Meta did in South Asia.
But this exit statement seems to be much more about AGI superintelligence than any of that stuff. OK, so if I don't think that's a thing, I don't have to pay attention, right?
This is a flabbergasting statement to me, but it's probably the necessary attitude to push the AI/ML frontier, I guess.
I feel old.
It's one thing to push this kind of hype to get people talking about A.I. if you're trying to capitalize on this space. It's something else entirely to swallow your own marketing BS as if it were gospel.
But let's face it, this guy probably isn't serious; he's just spewing more hype upon departing OpenAI while looking for the next tech company to hire him.
His call for preparation makes it sound like it's near.
Nvidia is already huge. Microsoft and Apple are more like users.
Yeah, this shit is near. Also: quite a dangerous experiment we're running, and the safety-first people are not at the helm anymore.
I'm not convinced. You can throw all the compute you want at it (btw, compute is not growing exponentially anymore; we have arrived at atomic scale), and I still don't see this leading to AGI.
Our rudimentary, underpowered brain is GI, and now you're telling me stacking more GPU bricks will lead to AGI? If it did, it would have come by now.
I don't believe we can create something more intelligent than us. Smarter, yes, but not more intelligent. We will always have a deeper and more nuanced understanding of reality than our artificial counterparts. But that does not mean AI cannot be a danger.
That's wishful thinking, already disproven by collectives (Google and OpenAI) that are already better at understanding and acting upon reality than you or any single human intelligence. These are systems, and they may soon hollow out their biological components.
It is also a sign of hubris to think that we can create an artificial construct that is more intelligent than we are.