The vast majority of the images you link to show kids with five fingers, as well as five-fingered baseball gloves. The apparent cases of four fingers are due to orientation.
Your "1." also shows Marcie with five fingers. You see Charlie Brown with four fingers because he's holding a baseball. In "2." he's also holding a baseball. You would not see all five fingers from that angle, because drawing them would look strange.
In your unlabeled "0." there are plenty of kids with five fingers. There are some with fewer, but they are holding things or drawn in a way that suggests we are seeing the hand from the side.
I don't understand your hesitancy. Your own samples should be enough for you to decisively conclude that the Google AI's claim that Peanuts was "traditionally drawn with four fingers (or three fingers and a thumb) on each hand" is wrong. If not, it sure seems like you trust Google AI over your own eyes. Why are you so hesitant to agree?
My point is that you don't need to consult secondary sources when the primary sources are easily available.
When this came up a few days ago, I spot checked the complete works of Peanuts, from a collection on archive.org at https://archive.org/details/peanutscomics19502000/Volume%201... . The consistent pattern across the nearly 50 years of Peanuts is the kids have five fingers unless obscured by orientation or objects.
You can do that yourself, and triple-check that Google AI's answer is clearly wrong.
Thus, I think it's a good example of how fact checking with LLMs can lead people astray. The large negative externalities I mentioned, combined with LLMs' well-known tendency to make incorrect statements, make them a very poor starting point when the primary source, at least in this case, is so easy to access.
If most of the sources are wrong, and LLMs are being trained on those, isn't it logical that the latter will also likely output that same wrong information?
How would you know that most of the sources are wrong, unless you yourself had checked most of the sources?