Can anyone make an argument against it? Or do you just downvote because you don't agree?
- It's been used unethically for psychological and medical purposes (with insufficient testing and insufficient consent, and possible psychological and physical harms).
- It has been used to distort educational attainment and undermine the current basis of some credentials as a result.
- It has been used to create synthetic content that has been released unmarked into the internet distorting and biasing future models trained on that content.
- It has been used to support criminal activity (scams).
- It has been used to create propaganda & fake news.
- It has devalued and replaced the work of people who relied on that work for their incomes.
I'm going to go ahead and call this a positive. If the means for measuring ability in some fields is beaten by a stochastic parrot then these fields need to adapt their methods so that testing measures understanding in a variety of ways.
I'm only slightly bitter because I was always rubbish at long form essays. Thankfully in CS these were mostly an afterthought.
I feel like the invention of calculators probably came with the same worries about how kids would ever learn to count.
Many people (myself included) would argue that is true for almost all technological progress and adds more value to society as a whole than it takes away.
Obviously the comparisons are not exact, and have been made many times already, but you can pick any one of countless examples that devalued certain workers' wages but made many more people better off.
- The fact that it's happened before doesn't make it OK (especially for the folks it happens to).
- Many more people may be better off, and it may eventually be a social good, but this is not certain.
- there is no mechanism for any redistribution or support for the people suddenly and unexpectedly displaced.
These are behaviours and traits of the user, not the tool.
Neither thing is evil, or good, but the choice of what is used and what is available to use for a particular task has moral significance.
I know a law firm that tried ChatGPT to write a legal letter, and they were shocked that it used the same structure they were told to use in law school (little surprise here, actually).
https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-f...
It was total nonsense anyway, and the path to dismissal was obvious and straightforward, starting with jurisdiction, so I'm not sure how effective it would be in a "real" situation. I definitely see it being great for boilerplate or templating though.
Depends on what you define as positive impact. Helping programmers write boilerplate code faster? Summarizing a document for lazy fuckers who can't get themselves to read two pages? OK, not sure if this is what I would consider "positive impact".
For a list of negative impacts, see the sister comments. I'd also like to add that the energy usage of LLMs like ChatGPT is immensely high, and this at a time when we need to cut carbon emissions. And it's mostly used for shits and giggles by some boomers.
Of course saving time for 100 million people is positive.