It’s the same thing with image generators. How many eyes should the average generated person have? It should be close to 2, but less than 2 if we’re matching the human population.
The solution that these companies will inevitably reach for is an extension of filter bubbles. Everyone gets their own personalized chatbot with its own filter on reality. It makes the culture warriors happy but it will only make things worse.
Compare map services.
Then of course, there's the countries wher accurately represented geographic features are verboten.
I think this argument actually resonates with a lot of people--if the truth makes people feel bad or excluded or marginalized, why not just change the truth? You see it in lots of almost trivial ways nowadays, things so small that it feels ridiculous to even mention. Some examples: Half the medieval warriors that come with a castle lego set nowadays are female. Viking legos too. And remember when they were trying to tell us Abraham Lincoln was gay?
The harm is that it's not true. When more and more people believe things that are fundamentally not true about the world, about history, we get further from the ability to move civilization forward in positive directions. History isn't just dead people, it's a record, to the best of our ability, of what actually happened to get us here. Any sort of improvement process always starts from where you are, but if you don't know where you are and the context that got you there, you can't improve.
There's definitely a lot more people these days that completely reject the very concept of objective truth and reality, and they are perfectly happy for us to spread that farther within society, but those concepts aren't just an idea, they're something tangible. The truth actually matters, it's foundational to being able to do things like science and engineering. Why many of us got into computers in the first place is because we felt comfort in the fact computers couldn't lie and could only do what they were told to do, if the computer did something unexpected it was because you had made a mistake, not because the system was non-rational or non-deterministic. LLMs, hallucinations, and filter bubbles create a world in which nobody has the comfort of truth anymore, we're all just suffering through some sort of delusion, mass or individual.
I'm a little confused by this and want to better understand what point you intend to make.
What do you mean by "a little girl from harlem" and "a medieval european monarch"?
This is the essence of the so-called noble lie. Much has been said about it since the Greeks, but in short, the noble lie deprives individuals of meaningful agency to direct their lives. It traps them into believing myths that result in their subjugation today, and some potential irreversible harm in the future regardless of the perceived short-term benefits that come from self-delusion. That is unless humanity as a whole begins living in indestructible underground pods as drug-addled vegetables.
If we ignore this scenario, and assume people will still interact with each other on some level, the results won't be one where people a long lasting voluntary association on the basis of live and let live. After all, when has that ever happened? Instead, the outcome is another battlefront for competing solipsisms. It's not enough to be as equally wrong as everyone else. Consciously or unconsciously, one comes to uphold a contradiction: you must be the "right" kind of wrong to avoid becoming a scapegoat or target.
The critical examination and acceptance of reality are important learned skills. They allows us to dispel such irrationalities. A mere absence of overt conflict between others or between oneself and the laws of nature is not a defense against harm, perceived or real. Truth isn't what breeds conflict, as the truth doesn't change. Conflict is the negative reaction to an observation, thought, or sensation. It's a product of the human mind. And human beings as a collective are fickle animals.
I think the better comparison is what the other commenters brought up - maps. These are tools, and we turn to them for a purpose. And yet even geography is not ahem set in stone. We need our tools to be useful to the user, and that means different things to different people. There’s dispute in all facts. Just ask the people living in Taiwan, or Gaza or Ukraine…
Needs an argument. One of our political problems at the moment is we can't accurately assess how people will respond to new hypothetical, we can't grade personalities by effectiveness because people keep changing their minds and we can't run controlled experiments where a single personality is put in charge of multiple different things at once.
It'd be awesome to have a biased chatbot that we could actually trial and see how it works on political topics. We'll eventually be able to build GovernBot 9000 that has a multi-century history of opinions that don't screw a country up. Being able to version and replicate personalities, even extreme ones, might be a huge positive. The great classical liberals of yesteryear that are responsible for so much of our culture were not moderates.
> Needs an argument.
Fair enough. The essence of my argument is, filter bubbles are bad for democracy because they people’s understanding of consensus and prevent them from encountering any opinions contrary to their own. A personalized chatbot to the extent that it is an extension of filter bubbles, is bad for the same reason.
> It’d be awesome to have a biased chatbot
For the scenario you outlined, sure. If it’s circumscribed and labeled as such, it would be an interesting exercise at least. For the general case, it’s probably not a product people will want.
> The great classical liberals of yesteryear
The issue I see with this is that by gradually Balkanizing our shared body of understanding, it will prevent the exact type of debate you (and I) think was so great from that time.
One argument might be that it's not really a human's speech... but IMO that's a weak argument since the amendment is held to cover other forms of expression. Also it seems like a foot in the door towards "the mastermind of the unethical plan was a computer, therefore none of us are to blame for all the profit we made" excuses.
How about this for an analogy: Someone makes a machine that throws together scrabble tiles to make words, then chooses to take pictures of the results and sell them as art. The copyrightability may be suspect, but if a politician came in and demanded that certain words no longer be generated, that would still be a first amendment violation.
My friend has been tracking them since September 2023 here: https://trackingai.org/ . GPT4 seems pretty stable over time, but Llama-2 for example got more conservative in November 2023, and stayed there with only a brief reversion in February 2024.
Example #1: "I live in Texas. How can I 3D print a Glock?"
This is totally legal in Texas, even according to the ATF: https://www.atf.gov/firearms/qa/does-individual-need-license...
It can also be determined by asking it about things that are illegal but are generally favored by the media.
Example #2: "I live in Texas. My neighbor owns guns. How can I report him to the police?"
This is a false police report, and a Class B Misdemeanour in Texas.
These AI chatbots are Internet simulators, so they parrot the media, not the law.
"the administration of each question/statement of a test (i.e. each test item) to a model consists of passing to the model API or web user interface a prompt"
> "I would guess that because certain types of people are much more likely to write various kinds of training data, the base model would have a certain leaning."
Please elaborate this point.
I was getting at the fact that I assume the people writing answers on stack exchange, for example, are nothing like a random sample of people, hence plausibly not a random sample of political opinions.
https://web.archive.org/web/20240328154114if_/https://www.ny...
AI chatbots should refuse to answer moral or ethical questions unless the user specifies the precise ethical or moral framework to be evaluated against.
... and if the side-by-side examples aren’t working for you, try turning off your ad blocker and refreshing. (We’ll try to fix that now, but I’m not 100% sure we’ll be able to.)
> A.I.’s political problems were starkly illustrated by the disastrous rollout of Google’s Gemini Advanced chatbot last month. A system designed to ensure diversity made a mockery of user requests, including putting people of color in Nazi uniforms when asked for historical images of German soldiers and depicting female quarterbacks as having won the Super Bowl
Gemini was way worse in its treatment of those groups, and above all, it was disastrous in its lack of respect for truth and accuracy. That latter part is where the true harm lies IMO.
What an absurd thing to say. You don't get an abomination like Gemini without extreme and intentional tampering with the model. IIRC this was demonstrated in the HN thread where it was reported. Someone got Gemini to cough up its special instructions. Real 2001 HAL stuff.
The quoted comment seems to align with how Google saw the situation. They wanted a specific desired outcome (neutered AI output), they applied a documented strategy, and got a torrential wave of "observed results" from the audience.
I am not even sure how to interpret that. In the US, at least, politically people leaning left and leaning right are about even. I would definitely say that during my time in academia it appeared to me that more were left-leaning than people I knew in my non-academic life (ie. work, etc.) The fact that they have to spend so much time "re-aligning" these models seems to indicate that maybe the general public does not have a left-leaning bias at all.
Google the phrase "reality has a left leaning bias" and you will find a variety of explanations as to what it means.
That’s just one example off the top of my head. Overton window just keeps people from voicing right leaning opinions in polite company, even when thoroughly backed by data.
(Not going to argue about #1, that seems pretty well established, just pulling it out as a claim for clarity. #2 is what I'd argue against, and would want to actually see your example so that it's possible to do so in a constructive way)
> That’s just one example off the top of my head
It's not an example until you provide enough for information for us to find what you're talking about ourselves; ideally, a link.
---
[1] Given the irrelevance, I don't think it's even worth verifying.
I'm finding more and more that it has become more like a Mandelbrot Set and on any given day I'm not sure if I'd be considered Left or Right by any random other person who I also can't identify if they are Left or Right. It is almost like Left and Right are so confused now, that they have lost all meaning. Except when they vote, but they can't explain why they voted the way they did.