The people that do so are largely doing it for their own self-gain (e.g., self-promotion) or because it makes them feel important. I had a very low stress job for a few years and ended up as a moderator for over a dozen large subreddits, including a few defaults. Socializing with Reddit's prominent moderators was enlightening.
It's hard to imagine "ideology" being relevant to the vast majority of reddit... Do you really think the moderators of ELI5 or PeopleFuckingDying or some obscure porn reddit or whatever are primarily concerned with "ideology"?
I used to help moderate a poker forum. I was a professional poker player, and an extremely active user of the forums. I don't recall pushing an ideology beyond "keep discussions constructive and topical."
The person you just replied to was a mod. Are you implying that their work was somehow about pushing an ideology?
But they have no problem digging into downvoted comments and deleting them, even if the system already did the job for them (put the downvoted stuff at bottom and hidden).
And if there was really some big conspiracy to skirt around this system they'd have to organize on a platform outside of reddit, ensure everyone is always accessing through VPNs so reddit doesn't notice multiple accounts modding from the same IP, and hope no one ever defects and exposes the underground moderation ring.
At least you know it's just one person, spread across multiple contexts.
I had a string of unusual behaviors when I ran /u/dontbenebby, culminating in involuntarily being made the moderator of several Snapchat-related subreddits around the time that Reddit let you view analytics, and things I was posting were getting six- or seven-figure views as I dodged literal assassination attempts every time I tried to take a peaceful walk in the woods.
For context, I was (in)famous for not logging IPs, or even numbers of pageviews, as far back as when I dropped that Facebook zero day on my blog and virtually planted myself in the middle of the protests against Mahmoud Ahmadinejad, and then went on to lecture a class full of CMU students that they should use strong anonymity tools and careful opsec if organizing protests in oppressive regimes like Tehran or Times Square as I threw up an image of a dead protester on the screen.
I meant what I said then, and I mean it now.
And maybe I spoke offline with whoever made me the moderator of a subreddit I never visited, for an application I have never used? In that case, let me share with the whole class the three core points my art was intended to drive home:
1.) They are going to nuke Penn Quarter, not Pittsburgh.
2.) It is not my problem if you drop dead of a heart attack because you fucked around and found out.
3.) I am an alumnus -- that means I can do whatever I want.
Anyways, I'm off to read a book and "do email".
Cheers!!
- Greg.
I gave up on it when I got banned from certain subreddits for posting quotes from congressional testimony. If you post anything that deviates in the slightest from the moderator's viewpoint, you get banned.
The end result is an echo chamber that's getting tighter and smaller, excluding any diversity of opinion. It's no way to run a business.
All of the like-minded echo-chambers. They seem to have no appetite for certain heresies and have walked back Aaron Swartz' original emphasis on free speech as a virtue.
The UK politics subreddit used to be one of my favourite subreddits back in the early 2010s. Back then it was quite a small community, and while we had differences of opinion I think it's fair to say we enjoyed each other's company. But around the time of the Brexit vote, then Trump shortly after that, the subreddit started getting flooded with reactionary, low-effort comments, and anyone who tried to provide a nuanced opinion or alternative viewpoint was typically downvoted and insulted.
I, along with a few other long-time commenters, was mostly in favour of Brexit at the time, so we would constantly be downvoted and insulted whenever we wrote anything in favour of Brexit. And the worst was when a post made it to /r/all, because then you'd get an even larger flood of low-effort commenters just downvoting and insulting everyone with a different opinion.
And this wasn't even just minor insults; this was people telling me to kill myself and that I'm a horrible person, literally every day. I'm not sure how much this was a political-subreddit thing vs Reddit generally, but it was honestly ridiculous the stuff people would say to me there.
Needless to say, I left the community shortly after 2016, but I've seen similar things play out across the site since. There seems to be no room for a difference of opinion there anymore. The mods, if anything, are just an amalgamation of the average Redditor.
Or for the 30 seconds it will take you to make a new account.
According to the "dead internet theory" they already are. I'm inclined to believe that a lot of the political discourse on there is bot driven.
See previous discussion: https://news.ycombinator.com/item?id=18881827
That mod has become toxic and imho should either apologize or be removed asap.
In the UK Reddit has pushed a new subreddit called "HeyUK". It has turned up in the subscription feeds for some (all?) UK users automatically without the user asking for it or adding it. If you remove it from your list of subreddits the posts will still show up in your feed as "sponsored". As far as I can see this new subreddit is seeded with just cross posts from other UK subreddits and is created/pushed by Reddit itself.
The big issue I have is that this is just another subreddit with 15-odd random people who are the mods. These people have the unilateral power to shape discourse and be the arbiter of what is "UK" and what isn't.
Reddit is getting a bit too big, this feels very strange. On the swing-back we then have Reddit not banning the "jailbait" subreddit until it made major US news.
I have no idea what's going on with social media anymore, I'm just left with the overwhelming feeling that the people with the voice and the power are not the best of us.
I actually was permanently banned from Reddit last night, for "spreading hatred/violence" in a video game subreddit, because I said "I didn't know shooting a guy in the nuts would kill him". It kind of caught me off guard.
The "just start your own" strategy rarely works, and at the end of the day you're still on a platform that can ban your little upstart for any reason it wants. That's assuming the people in the main subreddit even learn that your alternative exists, as the mods don't want people leaving their little fiefdom and can ban you for mentioning it. This mindset is also counterproductive because it advocates for completely and totally giving up any effort to address the problem in the main subreddit. It lets the troublesome moderators run unchallenged and can make things even worse.
No? That’s not how Reddit works, you join the subreddits you want to join
I begged them to help me pin down what the subreddit was about since the submissions were all over the place, and some people seemed to think it was for a certain type of robot content and others a different type. Most ignored the question.
I tried to share articles and videos of actual leading-edge robots that I thought were awesome. Generally these were ignored, along with most such things. Occasionally a video of a real robot would randomly become popular for some reason. The worst, most-repeated robot sketches would often receive many votes. Anything even remotely erotic went straight to the top.
They seemed to like art quite a bit, but often the voting was the opposite of what it should have been. Like artwork that was clearly derivative or low quality was top billing for the day, and amazing work was ignored.
Then there was someone who really wanted to use it for some channel that was obviously kind of a stealth marketing system. I repeatedly warned everyone about it and tried to discourage it, but the only feedback from anyone was that they liked the content and I was overreacting.
Due to the incredibly poor judgement of the people voting in the sub, I got fed up and left.
With all Reddit-like forums, there are big secondary factors in what bubbles to the top. Just as examples: the timing can be critical [0], as can the current top posts [1], the current trends, and even just whether the people who like that particular style are browsing /new at just the right time to give the post some starting traction. It's a bit like the difference between weather and climate: you really need to sample a lot to get an accurate picture.
[0] If a lot of users are online when the post is new it will earn a lot more votes. Someone on a data subreddit actually modeled which times are the best to post content for maximum visibility. EDIT: There's even a website for this now: https://dashboard.laterforreddit.com/analysis/
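To see why timing matters so much, here is a sketch of the "hot" ranking that Reddit open-sourced years ago (the live site has since diverged, so treat this as illustrative, not the current algorithm):

```python
from datetime import datetime, timedelta
from math import log10

EPOCH = datetime(1970, 1, 1)  # naive UTC

def hot(ups: int, downs: int, date: datetime) -> float:
    """Hot score, following Reddit's formerly open-source ranking code.

    Net score contributes logarithmically, while age contributes
    linearly, so freshness quickly dominates raw vote counts.
    """
    s = ups - downs
    order = log10(max(abs(s), 1))
    sign = 1 if s > 0 else -1 if s < 0 else 0
    seconds = (date - EPOCH).total_seconds() - 1134028003  # Reddit's epoch offset
    return round(sign * order + seconds / 45000, 7)
```

Under this formula, every 45,000 seconds (12.5 hours) of age costs a post one full order of magnitude of net score, which is why the first hour or two after posting matters far more than total votes.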
[1] "Blocking" the top spots can easily happen. Just look at what stays at the top of HN on a Sunday vs. when Elon stirs something up at Twitter yet again.
They care about how it makes them feel.
Smaller forums in the past tended to deal with this by having some groups that were strongly moderated, with a strict set of rules and no/low tolerance for them being broken, and then more 'general' discussions where the rules were lessened.
If a bunch of people showed up tomorrow on Hacker News, who mostly wanted to discuss Pokemon, they'd probably be shown the door. I don't see why a person running a subreddit shouldn't be able to do the same thing.
It doesn't seem all that different to me than whoever is first to claim a company name, a domain name, or when we go back further in time, land.
This already is a huge step forward compared to Reddit, where you aren't even sure which subreddits are owned by the corporation / official groups, or if they're "fan run". Are you sure /r/Ubuntu is a fan-run subreddit, or does it have official Ubuntu communications?
I didn't realize this was a thing outside of Libera, and maybe Freenode still. But then, I haven't been on other networks in quite some time.
I didn't think that we had the expertise to be the judges of truth in that area (it was during those few weeks where the government was saying "don't use masks", and these moderators agreed with that), and the relationship quickly deteriorated.
It might be possible to use this method to crowdsource things like creating subreddit rules and removing comments that break those rules.
Quadratic voting, with your vote weight being a function of your activity on the subreddit to mitigate bot voting, et cetera.
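A minimal sketch of what that might look like (the function names and the activity-capping rule are hypothetical, not anything Reddit implements): influence grows only with the square root of credits spent, and the spendable budget is capped by on-subreddit activity so a farm of idle bot accounts has almost nothing to spend.

```python
from math import sqrt

def vote_weight(credits_spent: int, activity_score: int) -> float:
    """Quadratic-voting sketch: doubling influence costs 4x the credits.

    `activity_score` is a hypothetical per-subreddit activity measure
    (comments, post age, etc.) that caps the budget, so fresh or idle
    bot accounts cannot buy outsized influence.
    """
    budget = min(credits_spent, activity_score)
    return sqrt(max(budget, 0))
```

The square root is what makes vote-buying (or bot-farming) expensive: 100x the influence costs 10,000x the credits.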
There's not much to do aside from start a rival subreddit (with a less popular name), or just give up on Reddit entirely and go elsewhere.
Welcome to the internet, the same thing could be said about every web forum since the 90s.
> There's not much to do aside from start a rival subreddit
Yes, that's the system working as intended. You don't have any right to the subreddit any more than the people who got there first. There have been many successful offshoot subreddits that go on to eclipse their predecessors in size if the new subreddit is actually better.
What is “ownership” in this case? Total private tyranny, with delegation of authority? Does it transition to collectively and more democratically managing the channel, as Stack Exchange does?
The only thing getting there first gives you is prime real estate in the namespace and the network effects of having an existing community.
The name of the subreddit shouldn't matter much at all. For each category there are several subreddits but people don't actively move to the subreddits with the best moderators.
For aggregators as a whole, it's the same. Places like https://tildes.net/ don't have many visitors even though Reddit's flaws should incentivize significant amounts of users to try other aggregators.
It's the same with banning ChatGPT from StackOverflow: Who cares and who notices? Art is either evoking some feeling or not and it's different for everybody. An answer on SO is either helpful or not. Who cares how it was written? ChatGPT can easily say something more helpful than me, stable diffusion can easily make something I'd rather have on my wall than Da Vinci's Mona Lisa (or anything more along my preferences). Why do we care so much? What's "real art" anyway?
I always liked a colleague's mousepad. It said: "Is this art or can we throw this out?" Always makes me smile.
People were posting low-quality rambling bullshit, sometimes completely off-base, without even bothering with the most basic of smell-tests. People occasionally post low-quality rambling bullshit too, or things that are off-base, but with ChatGPT you can post 100 answers in an hour.
It's a matter of scale. The ban wasn't pre-emptive, it was reactive in response to a real observed problem with people lazily Ctrl+C/Ctrl+V spamming poor quality nonsense from ChatGPT.
While technically not allowed, you can still use ChatGPT on Stack Overflow: just make sure it's correct, copy-edit things a bit to remove some of the waffling and repetition that ChatGPT tends to generate, and no one will even notice.
I'm less involved in the art community, but I would imagine that most communities are at least in part about people who create things for the joy of creating things, and then share that in the community for the joy of sharing. I don't have anything against AI art, but if lots of people start lazily spamming that kind of stuff then you've kind of lost your community. It's not so much about what is or isn't art, it's about having a community.
That said, this mod is clearly being an ass about it.
The biggest problem with ChatGPT is that when it's wrong, it's confidently wrong and cannot quantify its uncertainty in any way (maybe it's too human-like in that respect...) Furthermore, the whole idea of a reputation economy collapses if reputation becomes "too cheap to meter".
I wish 90% of real people answers were correct...
I see AI-generated art as being similar to taking performance-enhancing drugs in sport, or using something that's against the rules in motorsports. Outsiders don't really care, because they just see someone performing at the same level as the others by using clever tech, but if you're part of the group then you will care much more.
As an unpopular opinion: I don't watch races, but maybe I would watch that. I mean, Roborace is pretty cool (but it's mostly about the tech rather than racing, and IIRC it's all strictly driverless), I wonder how a man-machine pair would do.
Similarly, I find most sports boring and hardly worth watching (unless in good company, but then it's not really about the sports). But if some day there were Olympic Games NG+ without any doping or gear restrictions, I'd most likely check those out with great interest. I do realize there would be tons of drawbacks and nuances involved (and more than a few people screaming about how wrong this is), but that would be a) interesting on a personal level - watching some folks who trained their whole life isn't really resonating with me; I'm happy for them being beyond good, but I can't say I really care - and b) likely to produce enormous effects on society, in terms of medical and technological advances, just like the space races of the past (and hopefully the future) - I'd surely cheer for that.
Then, I'm a proponent of machine assistance in computer games, too. In my opinion, human bodies and minds are inherent sources of unfair advantage and machine assistance - if equally available to everyone - is the greatest equalizer. Though, of course, I acknowledge that a lot of games are designed solely or mostly around imperfections of human performance (mechanical or perception).
I'm a software dev attempting to learn art. I recently joined Mastodon in relation to this, and it's quite the hot topic there. Many, many artists are pissed about how their work is being used to train models for corporate profit, as well as potentially undermining their living/passion/etc. I've actually seen some cool art in protest of "AI"... usually involving malformed hands, which the artist community has gravitated towards as the representation of current AI capabilities.
I think it matters how it is created, personally. Not because the author of an individual piece of art is important to me, but rather because once AI moves into a problem space and can effectively and accurately "solve" that problem space the displacement of humans will be surreal. How it affects people is the important thing to me. I'll be interested to see how we manage to recognize this reality as AI improves.
If you submit an answer yourself and it’s wrong, if someone begins the process of critiquing it or editing it, they can engage in a dialogue with you in order to make this happen. You can explain how you came up with your answer, and they can help you debug your thinking. Seeing this process unfold over a couple comments is often one of the most enlightening things on SO.
How is this supposed to happen if you submit a ChatGPT answer which you have just accepted on blind faith and maybe don’t even understand?
"Who cares if the diagnosis is done by a medical expert or someone pulling out random drugs they tried before? As long as I feel better immediately after taking them, who cares how they were prescribed?"
The case on SO is clearly different, as ChatGPT might answer incorrectly or answer with something containing subtle bugs. There's also a good chance that you won't be immediately able to spot those bugs, as, if you were sufficiently knowledgeable in the topic yourself, you would have most likely not asked that question.
The case for art is a bit different, as there is no technically correct way to do it, but there is still a value to the way it is created. Would you think the first picture drawn by your child is worth the same as any other bad painting? Would you agree that a perfect copy of the Mona Lisa has equal value to the actual object? If no, it should be pretty easy to see why a painting generated by an AI is different from one created by a human.
Playing devil's advocate here, but this also clearly applies to human answers.
As a test I ran some questions through ChatGPT a few weeks ago; both popular ones and just some random ones from the homepage (including some not-very-good questions). In only one case would I say that it was "good enough" to post. All the rest ranged from "mostly correct but with huge omissions" to "used 5 paragraphs to explain what could be one or two sentences" to "this is not even remotely correct in any way, but at a glance it actually looks kinda correct". The "looks kinda correct" can be pretty misleading, because there have been a few answers where I went "wait, is this actually a thing? I didn't know about that!" and when investigating further it turned out it wasn't a thing at all and ChatGPT was just trolling me.
- Humans actually try to answer the question at hand, whereas ChatGPT is only auto-completing text. This might lead to similar results in a lot of cases, but the goal of ChatGPT is not to produce correct SO answers.
- Related, humans will (usually) back off or correct answers if errors are pointed out to them. ChatGPT (at the moment) doesn't.
- Humans will (usually) not attempt to answer questions they don't feel like they are qualified to answer. ChatGPT will.
I'm not saying that SO is/was perfect as-is, but ChatGPT is - in my opinion - overall not likely to improve SOs quality as of now. Since it is currently banned on SO, it seems like this is not only my opinion.
I don't think the average person cares at all.
Nice to hear the artist has gotten a more positive response in /r/drawing.
So either 1) It's not AI-generated art, or 2) It is AI-generated art and the artist is a master at prompting.
Either way they should be celebrated.
For example, if I include "anatomically correct fingers" it significantly decreases the number of images with wildly creative ideas for how human fingers should be drawn.
Negative prompting works too. "deformed fingers" or "inaccurately drawn anatomy" can go a long way.
[1] https://cdn.midjourney.com/6f52a6e9-b3f2-4830-81b1-84c8f8ca4...
[2] https://cdn.midjourney.com/361e143f-5121-4bff-ada9-069c2e400...
If it takes a human a month to paint something beautiful, and 1 minute for AI, it's really hard to compete with AI.
The best we have is Midjourney V4, and it's getting quite close.
Disney uses a robot Spiderman stunt double to be able to launch it into the air and do aerial acrobatics, edging into the dance example.
Human artists who are just highly skilled executors of bad taste are going to be decimated by AI.
Stuff like this never had any artistic value in the first place, so it makes perfect sense to me that a bot would create it rather than a person.
Lawsuits need to destroy these models stolen from the public.
No need. Soon there will be so much AI generated "art" that human-made art will largely be pushed out of popular consciousness.
A model randomly remixing human works to reach an approximated result that matches a few words inserted by a human cannot be put on the same plane as a human expressing a concept or inner feelings using knowledge they acquired by looking at other works.
When I look at art (be it paintings, stories, animation, or any other medium) I always think about the people behind it, what stories are they trying to tell, what life lessons and ideals are they trying to bring across the screen.
A text prompt absolutely cannot convey the same amount of meaning as a work of art that takes actual time to make and refine, and however amazing the output may look, it is inherently meaningless, with no human intent behind it to make something great and original.
You don't get to define what is art. Art can and does move people who have absolutely no idea of what the process to create it was.
You have your way to experience art, it isn't the only way. In fact you have a limited ability to learn about the process, how much of it are you making up in your head?
in the fine art world, for now, that's been what's driven value. and when I am in that world to move cash around reliably, that's what I consider and I almost don't consider the aesthetics at all.
outside of the fine art world, there basically is no art market and it is purely aesthetics. the process and intent is irrelevant, only the result and content. I just want cool looking things.
Since it is not possible for you to invalidate my view, it's also what really matters.
AI art has its own process. It can be quite therapeutic.
I think artists need to be fair about AI: is there any artist who created their style without ever studying other artists? That is highly improbable, because humans need to observe to create art. There is even a saying: "Good artists copy; great artists steal."
Just like how it is a bad idea to train github copilot on copyrighted code, or how the same company that made Stable Diffusion promised that they will not use copyrighted music for training (because they're scared of the music industry), copyrighted art should not be used on training sets without permission.
This is going to cause a lawsuit somewhere down the line, even if images only contribute a few bits each, signatures are still seen and this could make an argument in a court case.
It would be fair to everyone to only use public domain material for training sets.
Humans learn actual techniques; they understand what the elements in their art actually mean; when they copy, they do so with intention (whether malicious or not). ML approaches to content creation are incapable of intention, because intention is the product of a conscious mind, and regardless of how similar some of the data structures involved in them are to certain models of the human brain, not one of the existing ML projects even remotely approaches anything we could term consciousness. (Nor is that even their purpose.)
What about the fact that these models aren't just randomly spitting out and taking credit for random images? This seems the most salient point to me — if I used a paintbrush to create a copyright-violating clone of some notable artwork or IP and tried to pass it off as my own, I'd be breaking the law. We wouldn't try to ban paint and canvas and the human arm because it has the potential to create something that infringes on copyright, we'd enforce the actual act.
If these models make this kind of infringement easy, then they are bad products and their users will run the risk of going to court. The whole thing seems like a non-issue.
2.0 uses a new text encoder trained from scratch and it just did not capture the same famous names as the OpenAI CLIP used in 1.x.
Programmers are only freaking out about copyright because they naively believe AI can never automate them out of a job, whereas everyone else (artists included) deserves what's coming to them.
It's the same concern in both cases, but different interpretations of the stakes involved.
The same is true of other institutions, such as congresses and parliaments. Note that politicians run on the basis of what new laws they've gotten passed far more often than what laws they've blocked.
There's something to be said for the idea of a branch of government whose function is limited to repealing laws.
Pretty sure that happens to about 10% of Reddit users every year.
https://en.wikipedia.org/wiki/Hot_metal_typesetting#/media/F...
Citation needed.
> natural world has a lot of repeated elements and hand painting each one detracts from time that could be spent on more expressive aspects of the work.
I can't agree. By using these repeated elements, an artist can add an additional level of impression for the viewer, while an AI will probably just use a random distribution in this situation.
> combined work of artists and AI can produce greater art than the artist alone
An internet court is needed for these cases, like courts in the real world, and supported by them. And an internet police, which makes sure the court rulings are obeyed. Also supported by real world police, if necessary.
This is not new and it is similar to speed painting, and all these prompters using Stable Diffusion cannot do such a thing.
Problem solved and job done.
https://www.seattleweekly.com/news/seattles-reddit-community...
We have no metric or insight into this. The percentage will keep increasing as it's cheap and very economically beneficial for companies to use.
True art is something that can’t be replicated by AI. You will have no doubt once you see it. It still exists even with the proliferation of AI art.
It’s like the difference between a random picture and a meme. The meme looks like a picture, but it captures an emotion or essential human truth that you connect with upon looking at it, whereas a picture is just a random picture that could look like a meme but has no real meaning to it. You will know what I’m talking about.
What now. Is my inner experience of art not valid? How do we reconcile this?
Ignoring your assertion about what art is, I have to ask: what happens when you can't tell the difference?
I can compare it to a relationship where a parent praises their child when they do something wrong, instead of scolding them and explaining what is right. So in the end the child is doing totally batshit crazy things and thinks that is normal.
Or imagine that you get into a bubble where your sense of excess heat is distorted and you can touch red-hot iron without any severe pain, yet your hands are slowly crumbling away like charcoal.
The patterns there are very very strange.
When was it ever like that for artists? The most complex music to perform isn't necessarily the "best", and neither is it like that for art.
As an example from visual art, abstract art is sometimes very simple, yet has a profound impact on people, and it was never about being "technical", "complex" or "hard to reproduce".
So, the challenge is still to dig deep and make something interesting and relevant. The hurdles are removed. The gates are wide open. You can make anything. So… what will you actually make?
For centuries. Everything you just said about abstract art encompasses the entire reason it was important and prompted backlash that continues to this day.
I don't think it's possible to judge these questions you present at a glance anyway.