Well, the original purpose was to do OCR for things like the NYT's archives and other libraries. The part where you identify road signs & traffic lights was supposedly to train self-driving cars. Now, it's apparently just more analytics & tracking for Google to sell you things. [1]
But, since LLMs are so error prone & AI companies don't seem to want to pay humans to verify either that the data going into LLM training is valid or that the output is accurate, I could see something like a forced CAPTCHA being used to verify LLM data with unpaid labor.
It's just a dystopian thought I had. I probably shouldn't have said it out loud (it might give them ideas).
[1]
https://www.techradar.com/pro/security/a-tracking-cookie-far...