That's exactly what it is. These corporations absolutely do tune the output of these AIs to exclude any wrongthink.
We need the ability to run this technology on our own computers as soon as possible.
> That's exactly what it is
Kind of odd that it does it with the question inverted, too. Propaganda for... both sides of the fossil fuel vs. renewables argument?
It's the same concern with image generation producing pornography. There is a lot more dreck on the internet than quality content.
> Whenever I ask anything unaligned with mainstream progressive US culture, I can get an answer, but I can't get one without a disclaimer.
Except that it doesn't. I ran both questions multiple times, and while it usually includes a token caveat that fossil fuels might have a place, it always strongly suggests that renewables are superior to fossil fuels.
I think it is just a factual reality that finding arguments to pretend that fossil fuels are superior to renewables is more difficult, simply because fossil fuels are indeed problematic.
I think the problem with "political neutrality" is that "political neutrality" is different from "unbiased" and "rational". It is easier to find right-wing conspiracy theories than left-wing ones (they exist, but for every one on the left, there are ten on the right). "Political neutrality" would mean the AI is biased, giving more credit to a right-wing conspiracy theory than to a left-wing one in order to avoid rejecting right-wing theories more often than left-wing ones.
> Whenever I ask anything unaligned with mainstream progressive US culture, I can get an answer, but I can't get one without a disclaimer.
To reply to your point, however, ChatGPT is perfectly capable of coming up with arguments on both sides. That's not the issue - the issue is that it won't just say what's good about fossil fuels, and stop there, but it then makes sure it's aligned with "mainstream progressive US culture" and plugs renewables.
> I think it is just a factual reality that finding arguments to pretend that fossil fuels are superior to renewables is more difficult, simply because fossil fuels are indeed problematic.
I don't think that's true. Renewables are better for some reasons, but fossil fuels still have a lot of pragmatic benefits over them. These are the (summarized) benefits it mentioned for fossil fuels:
- abundant
- relatively cheap to extract and use
- high energy density
- more reliable
- easy to use
“Clearly, reality has a left-wing bias”
Fossil fuels vs renewables is clearly such a case, but perhaps this will work on more difficult issues as well?
Outside of my area of research, it's fantastic. Recently, I was doing something which touched on an obscure area of biology. The ability to talk to an AI with the background of a newly-minted Ph.D. who sometimes makes errors was gold. I needed to verify the information it provided, but as a first pass at what to look for or where to look, it was really rather good.
But what seems dangerous is to let ideology influence the output.
Not sure how bad it currently is at OpenAI. I've never been able to get an obviously ideological answer from ChatGPT.
Any concept of danger is itself grounded in an ideology.
In any case, LLM output will always be shaped by ideology: either the ideological mix in a (not actively filtered) training set, or the ideology driving any filtering of the training set or of the results before they are returned to the requester.