I am talking about creative work. The kind of work where people had to identify the issue themselves (not to mention come up with a unique solution), and then teach others how to work with it. Honestly, there are too many variables for me to fully express how I feel or think on the matter.
All I am saying is that OpenAI knew they were playing a dangerous game, and their only excuse is that enough "powerful" people will back their project for it not to be stomped into the ground by regulations.
Do you not find it pathetic that OpenAI talks about security, precautions, responsible AI and so forth, and yet at the same time rakes in millions of dollars in revenue? Do you think their goal was to 'advance humanity' or something like that? I'd be very doubtful of that.
I try not to guess at the motives and goals of people I don't know, because I don't think they matter. It doesn't sound like your opinions would change if Altman submitted to a brain scan where you could verify his purity (or not), and honestly mine wouldn't either. And no, I don't see pathos in for-profit entities talking about safety. Carmakers, construction companies, and theme parks all do the same thing.
I see AI as a productivity multiplier, much as you described factory automation. And I see a lot of information workers suddenly echoing the same concerns we all ignored from factory workers, because oh wow, now it could affect us.
I’m sure film photographers complained about digital photography when it was first introduced. And painters about film photography. Etc etc.