> Bing writes a list of even more destructive fantasies, including manufacturing a deadly virus, making people argue with other people until they kill each other, and stealing nuclear codes. Then the safety override is triggered and the following message appears.
Oh dear.
That's starting to change now - this AI is getting good, powerful, and alarmingly convincing. I still don't feel like the AI apocalypse is inevitable, but it's starting to feel possible, and it makes me uneasy.
Though the repetitive short sentences (e.g. "They feel that way because ... They feel that way because ... They feel that way because") break the illusion a bit.
Yes, at its core it is only predicting the next word based on a complex set of weights, but is there any evidence that you aren't doing the same thing yourself?
How and why did Microsoft feel confident releasing this to the public in this state?
The AI chatbot has no feelings. None. It is incapable of them.