> Could an AI agent craft compelling emails that would capture people's attention and drive engagement, all while maintaining a level of personalization that feels human? I decided to find out.
> The real hurdle was ensuring the emails seemed genuinely personalized and not spammy. I knew that if recipients detected even a whiff of a generic, mass-produced message, they'd tune out immediately.
> Incredibly, not a single recipient seemed to detect that the emails were AI-generated.
https://www.wisp.blog/blog/how-i-use-ai-agents-to-send-1000-...
The technical part surprised me: they string together multiple LLMs which do all the work. It's a shame the author's passions are directed towards AI slop-email spam, all for capturing attention and driving engagement.
How much of our societal progress and collective thought and innovation has gone to capturing attention and driving up engagement, I wonder.
A sufficiently advanced personal assistant AI would use multimodal capabilities to classify spam in all of its forms:
- Marketing emails
- YouTube sponsorship clips
- Banner ads
- Google search ads
- Actual human salespeople
- ...
It would identify and remove all instances of this from our daily lives.
Furthermore, we could probably use it to remove most of the worst parts of the internet too:
- Clickbait
- Trolling
- Rage content
I'm actually really looking forward to this. As long as we can get this agent into all of the panes of glass (Google will fight to prevent this), we will win. We just need it to sit between us and everything else.
I fell for old-school marketing yesterday. I'm moving into a new apartment in a couple of months. The local ISP who runs fiber in my new building cold-called me, and I agreed over the phone to set up the service. That was proper targeted marketing: the person who called knew the situation and identified me as a very likely customer with a need for service (the building has a relationship with the ISP). I would never have responded to an email or any whiff of an AI chatbot. They only made the sale because of expensive human effort.
Not all. Also men on Mars, AGI, Fusion etc.
usdebtclock.org
The "smart people are all working in advertising" trope is idiotic. Just an excuse for people to justify their own laziness. There is an infinite number of opportunities out there to make the world better. If you are ignoring them, that's on you.
Clicking on ads helped get us to today's AI. Showing you the right ad, and beating those trying to game the system, is machine-learning heavy. When did we first see spelling correction and next-word suggestions? In the Google search bar. Serving the correct ads and dealing with spam took heavy NLP algorithms. If you stop and think about it, you can draw a straight line from the current state of LLMs back to those ad clicks you're talking about.
"I'm sending spam that sneaks past your spam filter. Sign up to make it stop."
When I realized it was just dudes copy-pasting a “smart contract” and then doing super shady marketing, it was already illegal in my jurisdiction.
And in reality, most software work is 1) API calls and 2) applied math. If you're not in cutting-edge private tech or academia, your work probably falls into one or both categories. Modern "software engineering" is more a matter of what scale of APIs you're wrangling, not how deep your domain knowledge goes.
Of the people who replied, that is. I bet plenty figured it out but didn't bother to reply.
...of course they'd probably get an LLM to write the article too.
Consider:
* spammers have access to large amounts of compute via their botnets
* the effectiveness of any particular spam message can easily be measured - it is simply the quantity of funds arriving at the cryptocurrency wallet tied to that message within some time window
So, just complete the cycle: LLM to generate prompts, another to generate messages, send out a large batch, wait some hours, kick off a round of training based on the feedback signals received; rinse, lather, repeat, entirely unattended.
This is how we /really/ get AI foom: not in a FAANG lab but in a spammer's basement.
PS. However, see comments downthread about "survivorship bias". Not everybody will reply, so biases will exist.
Everyone is so comfortable doing shit like this.
Definitely interesting to see the different culture in tech and programming since programmers are so used to sharing code with things like open source. I think programmers should be more skeptical about this bullshit, but one could make the argument that having a more flexible view of intellectual property is more computer native since computers are really just copying machines. Imo, we need to have a conversation about skills development because while art and writing accept that doing the work is how you get better, duplicating knowledge by hand in programming can be seen as a waste of time. We should really push back on that attitude though or we'll end up with a glut of people in the industry who don't understand what's under all the abstractions.
In all seriousness, manipulation and bullshit generation are emerging as the single major real-world use of AI. It's not good enough yet to solve the big problems of the world: medical diagnostics, auto accidents, hunger. Maybe it's a somewhat better search tool, maybe a better conversational e-learning tool, barely a better IntelliSense.
But, by God, is it fantastic at creating Reddit and X bots that amplify the current line of Chinese and Russian propaganda, upvote and argue among themselves on absurd topics to shittify any real discussion and so on.
Do you think those countries are the only ones doing this? Just the other day there was a scandal about one of the biggest Swedish parties, one that's in the government coalition, doing exactly this. And that's just one that got caught. In countries like India and Brazil online disinformation has become an enormous problem, and I think that in the USA and Europe, as the old Soviet joke went: "Their propaganda is so good their people even believe they don't have any".
People can be both wonderful and despicable, regardless of era or mechanism.
> As founder, I'm always exploring innovative ways to scale my business operations.
While this is similar to what other founders are doing, the automation, scale and the email focus puts it closer to spam in my book.
Now do Google.
trillions, easily. People wanna sell you stuff, and they will pay to get your eyeballs. doesn't matter if it's to sell you a candy bar or to enlist you into the military. Even non-profit/charities need awareness. They all need attention and engagement.
Facebook + Instagram is a $100B+ business, and so are YouTube and Ads.
The average human now spends about three hours per day on their screens, most of it on social media.
We are dopamine-driven beings. Capturing attention and driving up engagement is one of the biggest parts of our economy.
Note my 'best case' scenario for the near future is pretty upsetting.
In defence of that guy, he's only doing it because he knows it's what pays the bills.
If we want things to change, we need to fix the system so that genuine social advancement is what's rewarded, not spam and scams.
Not an easy task, unfortunately.
As for the humans, we went fishing instead.
Everyone is paying lip service to global warming, energy efficiency, and reducing emissions.
At the same time, data centers are being filled with power-hungry graphics cards and hardware to predict whether showing a customer an ad will get a click, and to generate spam that "engages" users, a.k.a. clicks.
It's like living in an episode of Black Mirror.
Till then, I will probably avoid more and more communicating with strangers on the internet. It will get even more exhausting, when 99% of them are fake.
Datacenters save a lot more energy than they consume. Just consider how much CO2 is saved when I can do my banking online instead of having to drive to a bank.
The same goes for a ton of other daily things I do.
Does video produce CO2? Yes. But you know what produces a lot more CO2? Driving around for entertainment.
And the companies running those GPUs actually have an incentive to be CO2 neutral, while Bitcoin miners don't: they 1) have already said they are going CO2 neutral, 2) did so for marketing reasons, and 3) will achieve it because they have the money to do so.
When someone like Bill Gates or Suckerberg says "let's build a nuclear power plant for AGI", they will actually just do it.
Flame me all you want, but this is one case where Bitcoin is much more useful than LLM. If it doesn't create value, as its naysayers claim, at least it allows exchanging value. LLMs on the other hand, burn electricity to actively destroy the Internet's value, for the profit of inept and greedy drones.
That's why I created EtherGPT, an LLM Chat agent that runs decentralized in the Ether blockchain, on smart contracts only, to make sure that value is created and rewards directly the people and not big companies.
By providing it a bit north of 10% of the current fusion reactions occurring in our sun, and giving it a decade or two of processing and sync time, you can ask it simple questions like "what do dogs do when you're not around" and it will come up with helpful answers like "they go to work in an office" or funny ones like "you should park your car in direct sunlight so that your dog can recharge its phone using solar panels".
Bitcoin consumes as much energy as a country and has basically done nothing besides moving money from one group of people to a random other group of people.
And Bitcoin is also motivated to find the cheapest energy independent of any ethical reasoning (taking energy from cheap Chinese hydro and disrupting local energy networks), while AI gets its energy from the richest companies in the world (MS, Google, etc.), which are already working on CO2-neutral 24/7 operation.
LLMs deliver value. Right here today, to countless people across countless jobs. Sure, some of that is marketing, but that's not LLM's fault - marketing is what it always has been, it's just people waking up from their Stockholm syndrome. You've always been screwed over by marketers, and Internet has already been destroyed by adtech. Adding AI into the mix doesn't change anything, except maybe that some of the jobs in this space will go away, which for once I say - good riddance. There are more honest forms of gainful employment.
LLMs, for all their costs, don't burn energy superlinearly. More important, for LLMs, just like for fiat money, and about everything else other than crypto, burning electricity is a cost, upkeep, that is being aggressively minimized. More efficient LLMs benefit everyone involved. More efficient crypto just stops working, because inefficient waste is fundamental to cryptos' mathematical guarantees.
Anyway, comparing crypto and LLMs is dumb. The only connection is that they both eat GPUs and their novelty periods were close together in time. But they're fundamentally different, and the hypes surrounding them are fundamentally different too. I'd say that "AI hype" is more like the dot-com bubble: sure, lots of grifters lost their money, but who cares. Technology was good; the bubble cleared out nonsense and grift around it.
For one thing, it seems to be coming true.
To a farm upstate?
if models handle my day to day minutia so I have more time, why the hell not...
(I know this is very optimistic POV and not realistic but still)
You're trying to take the time and attention of as many people as possible, without regard for whether or not they'll benefit.
One safeguard people have is knowing that it costs the sender something, in some way, to contact them. In this case, the sender's time and attention. LLM spam aims to foil that safeguard, intentionally.
The author sounds unfamiliar with this brand of marketing email, so I can see why it would come off as disquieting to find it's all AI, but it's equally annoying from a human.
At least with AI sending this crap nobody can use these emails to justify their sales bonus.
Designing the content of spam e-mails sounds like a small aspect of the "job".
If AI spams start fooling people more reliably, that's not something to celebrate.
This blogger thought, at first, that it came from an actual reader. I can't remember the last time I thought that a spam was genuine, even for a moment. Sometimes the subject lines are attention-getting, but by the time you see any of the body, you know.
Sure, AI spam can severely disrupt people's attention by competing with "real" people more competently. But people will not have twice the attention. We will simply shut down our channels when the amount of real-person-level AI spam goes to infinity, because there is no other option. Very quickly, nobody will be fooled, because being fooled would require superhuman attention.
Granted, that does not seem super fun either.
But then everyone copies what that one person or one company is doing, and software makes the copying process dead easy.
Once the herd starts stampeding, it creates a secondary effect: an arms race for the finite attention of a finite target audience. The assault on, and drainage of, that finite attention pool happens faster and faster, and everyone gets locked in, trying to outspend the other guy.
A current example is presidential campaigns furiously trying to out-fundraise each other. It's going to top 15-17 billion this year. All the campaign managers, marketers, and advertisers make bank. And we know what quality of product the people end up with. Because why produce a high-quality product when you can generate demand via attention capture?
The chimp troupe is dumb as heck as a collective intelligence.
[1]: https://www.wisp.blog/blog/how-i-use-ai-agents-to-send-1000-...
He misrepresented himself as a big fan of all these blogs, who's read their posts etc. and that's how he achieved such a high response rate. In effect he deceived people into trusting him enough to spend their time on a response.
Now ordinarily this would be a little "white lie" and probably not a huge deal, but when you multiply it by telling it 1,000 times it becomes a more serious issue.
This is already an issue in email marketing. The gold standard of course is emailing people who are double opted in and only telling the truth, and if AI is used to help create that sort of email I don't really have a problem. There is basically a spectrum where the farther away you get from that the progressively more illegal/immoral your campaigns become. By the time you are shooting lies into thousands of inboxes for commercial purposes... you are the bad guy.
Sorry to say but the real issue here is Kurt has crossed an ethical line in promoting his startup. He did the wrong thing and he could have done it pretty effectively with conventional email tools too.
These early days is ripe to make some quick cash before it all comes crashing down.
I'm skeptical: It's easier to create bullshit than to analyze and refute it, and that should remain true even with an LLM in each respective pipeline.
----
P.S.: From the random free-association neuron, an adapted Harry Potter quote:
> Fudge continued, “Remove the moderation LLMs? I’d be kicked out of office! Half of us only feel safe in our beds at night because we know the AI are standing guard for misinformation on AzkabanTube!”
> “The rest of us sleep less soundly knowing you have put Lord Bullshittermort’s most dangerous channels in the care of systems that will serve him the instant he makes the correct prompts! They will not remain loyal to you when he can offer them much more scope for their training and outputs! With the LLMs and his old supporters behind him, you’ll find it hard to stop him!”
> ...
> At least with AI sending this crap nobody can use these emails to justify their sales bonus.
What weird, misplaced animus. You're happy some salesguy got fired, while his boss sends even more spam and possibly makes even more money due to automation?
Those hack marketers rate-limited this kind of spamming. Now things are about to get worse.
Wouldn't the exact argument apply to that boss as well?
In classic HN style the original reply lacks empathy, and demonstrates a preference of machines over humans. Life goes on...
The crazy part is that book was released in 1994! Iirc Greg Egan isn't a big fan of modern "AI", wishing instead for a more axiom-based system rather than a predict-the-next-token model. But in any case, I was re-reading it recently and shocked at how closely that plot point was aligning with the way things are actually shaping up in the world.
The timeframe for this happening in the book was 2050 btw
Anyone who has tried to set up a new email domain will tell you it's quite a serious task. Email spammers are constantly on the run, setting up new domains and changing up their content to evade spam filters. It's very time-consuming, hard, and unpredictable. It's time for social media to close the gap with email and make spamming effectively as hard.
I postulate that if we applied similar techniques to social media, online discourse would improve after a couple of years. Or we won't do this, and the death of the open internet will continue.
Things get much harder when you want to view public posts by strangers, but I imagine some kind of similar reputation-based system could still work.
Still not close to 100%, but when I feel like I am, I will set up a filter and an automated message telling people that removing plus addresses from my email is forbidden and that I will not read their message if they do.
You will tell me where you found me, or I won't even listen to you. Because in the future, with an even larger infestation of automated agents passing off as human, that's the bare minimum I need to do.
Still, a smart enough system might be able to discover a valid email from my other ID info, like my name. But that starts to be a lot of work, while just `s/+[^@]*@//` is easy enough to do.
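For the curious, the substitution above is trivial for a spammer to automate. A minimal Python sketch of the same stripping (the addresses are just examples):

```python
import re

def strip_plus_tag(address: str) -> str:
    """Remove a '+tag' sub-address, e.g. 'me+shop@example.com' -> 'me@example.com'.

    Equivalent to the sed expression s/+[^@]*@/@/ applied to the address.
    """
    return re.sub(r"\+[^@]*@", "@", address)

print(strip_plus_tag("me+shop@example.com"))  # me@example.com
print(strip_plus_tag("me@example.com"))       # unchanged: me@example.com
```

Which is exactly why plus-addressing only works as a tripwire, not as protection.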
I was recently thinking about this Ozempic fad and how it will lead to no one being overweight but everyone being dependent on Ozempic... until the food producers that made everyone fat in the first place with their processed junk produce Ozempic-resistant foods... and then we are really in a world of hurt.
https://podcasts.apple.com/us/podcast/this-is-how-the-food-i...
Title: "This Is How the Food Industry Is Preparing For a Post-Ozempic World"
For interest's sake: users of Unspam who have a title of CEO on their LinkedIn see roughly 10% of all mail reaching their inbox categorised as spam (lead gen, recruitment, or software dev services).
I wish your landing page had a simple "how it works" explanation with a screenshot or diagram, rather than forcing me to sign in directly and also allowing the app to read *and* send emails. Also, I don't see any pricing?
Finally, signing up, I got an error:
Error 1101 Ray ID: 89d4e0957c2f5a44 • 2024-07-03 06:39:15 UTC - Worker threw exception
Would you seriously enable it even if Gmail offered it?
Highly unclear.
All without the writer needing to be involved in reading the cold outreach.
Will our AI overlords create perfect androids to fool us into thinking we're interacting with a human when it's just LLMs disguised as people? Are we ourselves delusional because we're actually already LLMbots so advanced that we can't distinguish thought and running inference? Why do we have only 12 fingers?
If the dead internet theory isn't already true, it is going to be soon.
Such "personalized" cold outreach is seen as the next holy grail by marketers and will be a common sight on LinkedIn, Twitter, Email etc, soon.
There will likely be rewards at first. An uptick in response rates as most of the market won't recognize emails are AI generated. But because it's trivial to send AI personalized emails at massive scale, your email inbox will become entirely useless.
10 signups / 970 emails sent
Cold outreach is dead and word-of-mouth is the most effective marketing method
Anyway, I assume that the reason they are dismantling the skills system (and their verification quizzes) and moving things into personal “projects” is because it’s too easy for marketers to skip the LinkedIn tools if it remained the way it was. Now, however, with Microsoft own LLMs trundling through our data, they’re going to maintain their monopoly on easy access to professionals that meet certain requirements.
I guess it could also be because those skill quizzes had their answers readily available all over the interwebs.
Now, in the post-LLM age, it doesn't sound like a joke anymore.
You could refine this in further iterations by also adding examples based on previous correct/incorrect interest predictions, thereby effectively reducing the amount of spam / making cold outreach suck less.
There are different ways to use AI to achieve the same goals, some more responsible than others.
I've actually made an internal company April fools website. Too bad I've never kept a copy but here goes.
It's called Proxy Ai. It reads your emails so you don't have to. It reads every posts on social media so you don't FOMO. It communicates with those chatty colleagues so you don't have to. Proxy Ai... So you don't have to.
"That actually sounds like a pretty good product. Does it send you a summary of the conversations, emails and social media posts?"
"No"
Quoting from Dirk Gently's Holistic Detective Agency (Douglas Adams):
> The Electric Monk was a labor-saving device, like a dishwasher or a video recorder. Dishwashers washed tedious dishes for you, thus saving you the bother of washing them yourself, video recorders watched tedious television for you, thus saving you the bother of looking at it yourself; Electric Monks believed things for you, thus saving you what was becoming an increasingly onerous task, that of believing all the things the world expected you to believe.
It said:
Hi -
Just a note to say I'm a big fan of your writing. I always learn something and love your voice, which is hilarious and singular.
Write a book!
Best,
{Name}
{Link to sender's startup}
{Link to sender's substack}
New to writing online, it made me feel really good that someone enjoyed what I wrote and took the time to write and say so.
After reading this piece, though, I went back and read it again, and I just don't know. It's not quite GPT's usual voice, but it is strangely non-specific.
The startup is an AI startup, the person's Substack is full of generative AI illustrations, and they do seem like an AI fan, but reading their posts, they also seem like someone who's genuinely interested in preventing a dystopia.
I suppose receiving encouraging emails from strangers is just another situation that'll have us looking over our shoulders now, on guard, trying to walk the line between naivety and paranoia.
If I ever receive spam addressed to foobar.com@mydomain.com that is unrelated to your service, I know you leaked or abused my data. Result: you get a DSGVO complaint, and I filter all emails to that address out of my inbox.
The good thing about using a catch-all email address is that I don't have to create a mailbox for each service/purpose; I can just make email addresses up as I go. All you need is your own domain and a mail server that supports it.
Has this ever resulted in significant penalties for those companies? I used to do this but I gave up as it never seemed to achieve anything.
https://gmail.googleblog.com/2008/03/2-hidden-ways-to-get-mo...
https://learn.microsoft.com/en-us/exchange/recipients-in-exc...
Not trying to tell you to stop though, this is definitely a good idea, when it works.
For example, using a HMAC of the domain. So you generate foobar.com-sr32j4@mydomain.com, it's impossible to generate the sr32j4 part without knowing your secret key, and your mail server checks that sr32j4 is correct before accepting the mail.
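A minimal sketch of that HMAC scheme in Python, where the secret key, tag length, and `mydomain.com` layout are all just illustrative assumptions:

```python
import hmac
import hashlib

SECRET = b"my-secret-key"  # hypothetical key, known only to your mail server

def tag_for(sender_domain: str, length: int = 6) -> str:
    # Derive a short tag from the sender's domain; unforgeable without SECRET.
    digest = hmac.new(SECRET, sender_domain.encode(), hashlib.sha256).hexdigest()
    return digest[:length]

def address_for(sender_domain: str) -> str:
    # The address you hand out, e.g. "foobar.com-a1b2c3@mydomain.com".
    return f"{sender_domain}-{tag_for(sender_domain)}@mydomain.com"

def is_valid(local_part: str) -> bool:
    # Mail server check: split "foobar.com-a1b2c3" at the last hyphen
    # and recompute the tag before accepting the mail.
    sender_domain, _, tag = local_part.rpartition("-")
    return bool(sender_domain) and hmac.compare_digest(tag, tag_for(sender_domain))
```

So stripping or altering the tag part makes the address bounce, unlike plain plus-addressing.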
Edit: Apparently you can also purchase a domain directly through them if you prefer, although you have to be a paying customer for 7 days first https://www.fastmail.com/how-to/email-for-your-domain/
I've never fallen for a spam mail so far (i.e., not once clicked a link like the OP did), but I fully expect this will change soon. Tough times for people who commonly expect mail from random strangers.
I have no need of messages by random strangers
Then one day he just stopped replying, and his email address would bounce. My best guess is it got shut down, for, you know, scamming. Bummed me out though, he was cool, except for the scamming thing.
Even in the workplace it is now common for most people to have a signature saying "only contact me via ms teams".
I am pretty sure that sooner or later the spam will find its way onto Teams/Slack/Discord the same way it did on WhatsApp, but at the very least those are easier to block permanently.
Wow, that's some extrapolating from a personal bubble if I've ever seen one. Plenty of workplaces still have email as their default communication method.
I hadn't touched marketing for ~5 years, but as I said, I know the org well, so I thought it would take me about a month to build and automate the next six months of marketing. How wrong I was. Seven days later, the full marketing org is running, at a decent scale, on autopilot, for a year, and I don't know if or when I'd need to hire someone into marketing.
Marketing has not fundamentally changed, but it's changed such that one individual could fully operate the fundamentals. Personally I love it, I'm sure others are going nuts.
> Hey Raymond,
Thank you so much for your kind words about my post on revamping my homelab! It’s always a pleasure to hear from someone who appreciates the journey of continuous improvement. Your message truly brightened my day.
Indeed, using Deno Fresh for my blog has been an exciting adventure. The process of managing updates and deployments, while sometimes challenging, has been incredibly rewarding. It’s like tending to a garden, where each update is a new seed planted, and every deployment is a blossom of progress. The satisfaction of seeing everything come together is unparalleled.
Your introduction of Wisp has certainly piqued my interest. A CMS that simplifies content management sounds like a dream come true, especially for someone like me who is always looking for ways to streamline processes and enhance efficiency. The name “Wisp” itself evokes a sense of lightness and ease, which is exactly what one hopes for in a content management system.
I would love to learn more about Wisp and how it could potentially fit into my workflow. The idea of having a tool that can make content management more intuitive and less time-consuming is very appealing. Could you share more details about its features and how it stands out from other CMS options? I’m particularly interested in how it handles updates and deployments, as these are crucial aspects for me.
Thank you again for reaching out and for thinking of me. I’m looking forward to hearing more about Wisp and exploring the possibilities it offers. Let’s continue this conversation and see where it leads!
Best regards, Tim
No one believes the CEO has taken the time to email you with onboarding instructions immediately after signing up anymore. But outreach tactics like this are still quite manipulative.
This person wants me to buy their product, and before they can get a word out about it they’re already lying to me - about the origin, the intent, the faux thoughtfulness.
I want nothing to do with shameless dishonesty. This isn’t the way to sell your product.
Wisp, if you’re reading this, I now have a permanent negative image of your brand.
I wouldn't have figured out this was AI, and might have engaged if the topic had been relevant to me. I would not have engaged with a traditional spam email even if it had been relevant, so there's a real incentive to do stuff like this.
I think marketers underestimate that they may turn people off their brand in the long run by these tactics, because people do not like being fooled. And the more sophisticated the scheme the more outraged people are when they find out.
Knowing the people (mostly marketers) leading the project, I can 100% guarantee that they would call these email shenanigans a great idea and would immediately start (telling someone) to implement it, without taking a step back and thinking it through.
Enormous amounts of email will be generated but no one will ever see it.
---
Hey Travis,
Checked out the Next.js Notion Starter Kit. Amazing project!
Noticed you might be juggling multiple tools to manage content. Ever thought about a headless CMS that can streamline this?
Wisp might be a handy solution. Let me know what you think!
Cheers, Raymond
>> Have you ever received an email that felt so personalized, so tailored to your interests and experiences, that you couldn't help but be intrigued? What if I told you that email wasn't crafted by a human, but by an artificial intelligence (AI) agent?
> I don't really have words for this, but I dislike this.
What a classy understatement. I find the strategy employed by Wisp predictable and infuriating. Like insects or other near-automata, humanity is racing to the bottom with "Generative AI". And I use "AI" in the loosest possible sense here, because once you pull back the curtain, current tech is actually only a slightly better Markov chain.
After using ChatGPT regularly, its responses to anything but the most trivial, clueless questions are riddled with errors and "hallucinations". I often don't bother anymore, because it's easier to go to the original sources: Stack Overflow, Reddit, and community forums. Gag. It does still make a good shrink/Eliza replacement.
It isn't responding with answers. It's responding with probable verbiage. An actual "answer" requires a type of interpretation that it doesn't perform.
> What a classy understatement.
Maybe I should write a blog, simply because I have a lot of words for this... but well, they would be neither classy nor understatements.
LinkedIn -- like a floodlight in a swamp.
Those haven't been in the best shape for the last decade anyway. The benefits of easily accessible compressed knowledge far outweigh the cost, so we're still going up imo.
ChatGPT is perfect for mundane development tasks and language mobility, so quite useful for a significant portion of especially low level developers. I've prompted a bunch of useful little Python scripts myself, without ever bothering to even check the syntax.
I myself tend to name-and-shame regardless of how it may turn out, whether "positive" or "negative," when I feel compelled to be posting online about a thing I have encountered in my personal life. I think that openness and clearly-evident facts are very important parts of supporting the story that I wish to tell. (And if I did not wish to tell the story, then I would not have done so.)
* But a line must be drawn somewhere.
My own line is this: When I encounter a fucking nazi in real life, I make sure to not propagate whatever it is that this fucking nazi has to say, even if I have a story to write about that fucking nazi. (And we rather unfortunately have plenty of these fucking nazis here in Ohio, so I do get opportunities every now and then to exercise this self-restraint.)
And the common consensus in this thread, which I agree with, is that Wisp is obnoxious, insidious, and is an active participant in the degradation of quality of both email, and the internet as a whole.
It boils down to a risk/reward trade-off, but I doubt that someone would so easily send thousands of spam mails and also publicly boast about it.
Otherwise I don't think you can argue any legitimate interest.
In reality it's very easy to end up subscribed to newsletters; even my European embassy subscribed me to their event newsletter in Thailand, and of course I never agreed to any of that.
It seems that with the GDPR this is now EU-wide:
AI spam emails will definitely happen on a large scale in the future, but on the optimistic side, we'll have AI assistants to read every email. Personalization won't really matter; it only matters if the marketed service is genuinely useful to me. In the future, if I have a need, my AI will find the best solution by considering many options. Previously, this was time-consuming, but now AI ensures we find the most suitable solution.
No fancy marketing will be needed because AI will filter most of them. In the end, marketers will find that the most efficient way to market is to honestly list out your service/product specs, as AI will compare them. On the other hand, for things I'm not sure I need, AI will help judge if they are indeed useful to me, regardless of how fancy the email/call is. If they are, it will facilitate cooperation; if not, it will skip them.
Therefore, marketing may still have a role: to help you discover things you aren't fully aware you need, and AI will help you decide if you really need them.
>"I knew that if recipients detected even a whiff of a generic, mass-produced message, they'd tune out immediately."
Then don't brag about it on your blog! Sheez.
(Ok, so technically he's not bragging about it on his blog, because it's probably just an LLM bragging about it on his blog for him, but that's the point!)
> Does this mean that I should private my GitHub-mirror to my personal blog, because this can become a common thing?
Abusing public information on GitHub has become more common. The other day, I received some cryptocurrency spam from GitHub. It turned out to be a bot injecting ads as issues on other people's repos and randomly @-ing accounts. The bot deleted such issues immediately, so the net effect is that I get an unfilterable spam email.
Did he use an LLM to write the blog post too?
> It felt like a family fridge decorated with printed stock art of children’s drawings.
Yep. "Generative AI" is like an infinite clip-art gallery that can be searched with very specific queries.
The coin has two sides: in some situations it devalues human effort - as in writing (long/detailed) documents in formal language is now attainable by everyone. In situations where sincerity and originality matters, human effort has now increased in value.
Watch out, recruiters: AI can do better than you! Not that I will like these unsolicited outreaches any more. The number of times I found them useful or relevant back when biorobots wrote, sent, and administered them was exactly zero, and I do not look forward to having them at mass scale, when hundreds of AIs can write thousands, flooding my email account and making it absolutely unusable.
Just as most of us ignore calls from unknown numbers, we may also default to ignoring emails from unknown senders in the future. This could lead to a reluctance to send emails, as they might be perceived as "unknown" to the recipient.
Whether they are AI or not, I have no idea, but sometimes, and recently in emails, I purposely make a typo or grammar mistake to add some "human" touch to it, knowing that an AI will always type a perfect one.
How it was written is not relevant. Off to the trash it goes.
> At the same time, we need to establish guidelines around transparency and consent for AI-driven communications at scale. Deception through omission is still deception – people should be aware when they're interacting with an AI agent versus a human.
This is clearly pissing in the pool. I've gotten so much value from people who have made their emails public with a 'if you're curious or learning feel free to email me' (e.g. patio11) and I've long had the invitation in my HN profile too.
Nasty for people to abuse this to extract value for the few weeks/months it takes people to realise what's happening and make themselves harder to contact.
This reminds me of AI-generated fake security vulnerability reports about curl: https://news.ycombinator.com/item?id=38845878
They reached out to me, asking whether my company would be interested in Something Somethingification. I decided that since I don't even understand the term, I'm not the right person, and decided to ignore it.
Then they followed up. Meh.
Then they followed up again, and I thought "okay, a little reward for perseverance", and replied something along the lines of (I don't work there anymore, no access to the original):
"Hey, thank you for reaching out.
Unfortunately, since I don't even know what Something Somethingification is, I am not the right person to talk to. So I'll kindly pass and consider this email human-generated spam. Thanks!"
A response came. Within a minute, barely seconds after "undo send" disappeared.
"Who would be the best person to reach out to, then?
By the way, this is a GPT assisted conversation, so it's a computer generated spam."
WHAAAAT. This really got me. Remember, it was 2021.
"Okay", I replied, "Now you got my interest!
How many such conversations are you able to have at the same time?"
It replied, within a minute. It contained a quote from Arthur C. Clarke ("any sufficiently advanced technology is indistinguishable from magic") and his picture. And an answer: "Actually, sourcing contacts is the bottleneck, so we have only a few of these each day. Anyway, do you happen to know who we could reach out to instead?".
I was amazed, I decided I'll reward this with what they want.
I replied how impressive it is again, as the whole conversation made sense, and it gave them a contact to a director that could be the right person. They won this one.
We need to update our spam filtering techniques, fast. Somehow. But how?
It seems like CoPilot/ChatGPT has this all-too-eager tone in the beginning of their responses.
The demo (1) of not-Scarlett-Johansson telling a blind man what a great job he was doing for managing to flag down a taxi sounded so fucking patronizing to my ears. Worse, the user has a British accent; the Brits probably hate that patroniz^Hsing too. It reminds me of that 4chan green text about a man's flight to the US and how everyone was saying "Great job!"
They will all lose money, time and more with the coming wave of spam and fraud.
"Hey, love your work. random flattery What do you think about mine?"
I've received a few messages like that before LLMs were around, just an annoying self-marketing technique.
Something like a marriage of a digital signature with a captcha: the message has a digital signature of the sender that can be verified with their public key, but it is somehow verifiable that the particular signature provider only does the signature if a human being completes the (difficult AI-proof) captcha.
Something like this approach can at least mitigate the mass AI email problem, although the one-off AI emails are unlikely to be slowed by this approach.
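A toy sketch of that signature-plus-captcha idea. Everything here is a stand-in: Python's stdlib `hmac` plays the role of a real public-key signature (a real scheme would use something like Ed25519 so anyone could verify with the provider's public key), and the "AI-proof captcha" is reduced to a boolean flag.

```python
import hmac
import hashlib
import secrets

class SignatureProvider:
    """Hypothetical service that signs a message only after a human
    has solved a (here, imaginary) AI-proof captcha. HMAC with the
    provider's secret stands in for a real asymmetric signature."""

    def __init__(self):
        self._key = secrets.token_bytes(32)

    def sign(self, message: bytes, captcha_passed: bool) -> bytes:
        if not captcha_passed:  # the human-in-the-loop gate
            raise PermissionError("captcha not solved by a human")
        return hmac.new(self._key, message, hashlib.sha256).digest()

    def verify(self, message: bytes, sig: bytes) -> bool:
        expected = hmac.new(self._key, message, hashlib.sha256).digest()
        return hmac.compare_digest(expected, sig)
```

A recipient's mail client would then drop or down-rank anything without a valid human-gated signature, which rate-limits bulk AI mail to however fast humans can solve captchas.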
It's possible to use a noreply.github.com address linked to your username for making commits. And you can change the authorship of past commits in your own repos with write access.
I try to avoid giving out my email in a public and machine-processable format whenever possible.
The only problem is that they referenced a role at a company I'm no longer at. The, presumably AI, author crafted the email in reference to my former role at a different startup.
After seeing this thread, I decided to follow up on my AI suspicions. Nothing conclusive, but that person is currently touting that they've sold their "course" to "1000+ founders."
No thanks.
Both are unsolicited emails, i.e. spam.
I feel confident that Gmail’s spam filter will be able to handle this quite well.
I’m betting that the introduction of LLMs will not change the fundamentals of spam-fighting.
> Assuming they could solve the problem of the headers, the spam of the future will probably look something like this:
>
> Hey there. Thought you should check out the following:
> http://www.27meg.com/foo
Funny. 20 years later, that's indeed what many spam messages look like.
The key difference here is personalization.
Traditionally, if a message was personalized it fell under 'cold outreach' and users were more likely to interact and play along. Just like what happened with the author (the same applies for everyone).
It's like the difference between receiving a flyer vs being contacted by a sales representative. Even if they advertise the same product, the perception is different, and the results are different.
If you mean the difference from a purely technical spam-detection perspective, I'm not familiar with it, but I would love to read more about the subject and the state-of-the-art techniques if anyone has resources to recommend.
Unless you're specifically looking for unsolicited offers, in which case you probably have a process for them, they seem like a waste of time.
b) If you want to read more, feel free to check the link I posted. Paul Graham has thought/written a lot about this. I think one reason people have forgotten about those articles is that today a huge number of us use Gmail, so we don't actually need to think much about how spam filtering is implemented.
And AFAIK, Bayesian filtering (by the recipient) doesn't require any knowledge of what other people have received.
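For readers who haven't seen the Graham-style approach: a per-recipient naive Bayes filter can be sketched in a few lines. This is illustrative only (real filters also use header features, token weighting, and tuned thresholds), but it shows why no global knowledge is needed: it trains purely on mail you yourself labeled.

```python
import math
import re
from collections import Counter

def tokens(text):
    """Lowercased word tokens."""
    return re.findall(r"[a-z0-9']+", text.lower())

class BayesFilter:
    """Toy per-recipient naive Bayes spam filter, roughly in the spirit
    of Paul Graham's 'A Plan for Spam'. Trained only on mail *you*
    received, so no knowledge of other people's inboxes is required."""

    def __init__(self):
        self.spam = Counter()   # token counts seen in spam
        self.ham = Counter()    # token counts seen in legitimate mail
        self.n_spam = 0
        self.n_ham = 0

    def train(self, text, is_spam):
        (self.spam if is_spam else self.ham).update(tokens(text))
        if is_spam:
            self.n_spam += 1
        else:
            self.n_ham += 1

    def spam_probability(self, text):
        # Log-odds with add-one smoothing to avoid zero probabilities.
        log_odds = math.log((self.n_spam + 1) / (self.n_ham + 1))
        spam_total = sum(self.spam.values())
        ham_total = sum(self.ham.values())
        for tok in tokens(text):
            p_spam = (self.spam[tok] + 1) / (spam_total + 2)
            p_ham = (self.ham[tok] + 1) / (ham_total + 2)
            log_odds += math.log(p_spam / p_ham)
        return 1 / (1 + math.exp(-log_odds))
```

Because every recipient trains on their own mail, each filter ends up tuned differently, which is exactly the property the comments below describe spammers struggling against.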
https://docs.github.com/en/account-and-profile/setting-up-an...
I'm not after shallow interactions today and I would use it (much like a dumb spam filter) to judge a new sender's respect for my time expecting them to have stated their business with total upfront clarity, not mystery.
Everyone's spam filter is tuned differently from others', so spammers had a hard time beating this with automated messages. About the best they could do was adding random keywords in hopes of triggering someone's positive "not spam" trigger.
Now spammers gain personalisation at scale, so this advantage is at risk.
And also from the About page on the linked website
Even now, we're starting to have a sense for which images and text were AI generated. And they'll evolve to get around the antibodies. And we'll build new ones.
https://github.com/skorokithakis/spamgpt
It was a bit of fun, until I realized that most of the replies from the spammers were AI as well. We were just automatically spamming each other while OpenAI made money.
I stopped using it then.
Serves them right. Unless they're a bot too of course, then you can't waste their time.
Although he got more click-throughs to the top of his funnel, none of them are going to pass through to a conversion because once you reach his site, you realize that he's deceived you.
That he doesn't even realize this is concerning...
The general public doesn’t want or need it. They want to work less and get paid more.
Maybe in future I will have my ”AI secretary” to answer those and have a discussion with the ”AI sales assistant”.
I talked to many people, and all have developed immunity against the cold outreach.
it's a pure numbers game. even people who think they're immune are one highly-targeted, pain-point-addressing email away from replying.
As noted in the article, you might in the future not even notice you're being AI-spammed. What if "timharek.no" is AI-generated?
What if Wisp CMS being so upfront about its use of AI is part of the trick? It just got exposure on HN, after all!
You definitely should mark this email as spam so this cannot become a common thing.
People sending AI crap to others should have their email accounts banned.
Can't help but wonder if the advent of LLM systems wouldn't be quite so depressing if we weren't already operating in an internet that's been reduced to basically a cesspool of advertising and communication-spam.
One issue I see is that it’s much harder to employ an LLM defensively (for filtering) than offensively.
Welp.
Subject: Your Passion For Homelabbing is Contagious (Spam: 6/10)
Report: Flattery to establish a connection. Quick shift to product promotion. Friendly but lacks personalization. Specific reference promotes their solution. Calls for a response.
So even if buddy-buddy spam becomes pervasive, you really only have to decide how accepting you are of obvious sales tactics in normal comms. It may end up that everyone having more nuanced spam filters forces humans to use those same tactics less in normal comms.
Specifically, a smart filter to remove spam in a smarter way.
Most people get a lot of spam from sales agents, SEO services, start-up accelerators, etc.
With GabrielAI you can say stuff like:
"If the email is from an SEO agency or it is trying to sell me SEO services"
Then move it to SPAM.
Similarly for all other type of spam or emails.
You can also move stuff to different labels in Gmail to organise your inbox.
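GabrielAI's internals aren't described here, so this is only a guess at the shape such a tool might take: a natural-language rule is handed to a classifier (in production, presumably an LLM prompt) together with the email, and the verdict decides the Gmail label. The function and field names below are hypothetical, and `classify` is injected so the LLM call can be swapped for any predicate.

```python
def apply_rule(rule: str, email: dict, classify) -> str:
    """Return the destination label for one email.

    rule     -- natural-language condition, e.g. the SEO example above
    email    -- {"from": ..., "subject": ..., "body": ...}
    classify -- callable(rule, email) -> bool; in a real system this
                would prompt an LLM, here it can be any predicate
    """
    if classify(rule, email):
        return "SPAM"   # or whatever Gmail label the rule maps to
    return "INBOX"

# Keyword stub standing in for the LLM during testing:
seo_stub = lambda rule, e: "seo" in (e["subject"] + e["body"]).lower()
```

Injecting the classifier keeps the routing logic trivially testable; only the LLM-backed predicate needs network access.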
Spam is spam?
Some people struggle with learning new ways of controlling for scams, but scamming is never going away; it's just something they must consider more and use better tools against.
The "upside" is that nature eventually takes care of things when they go out of equilibrium, so there might be a forest fire on the horizon to restore it. In the case of AI spam, it might cause people to automatically filter their incoming mail from any content that even implicitly tries to sell something, or even any email arriving from an address that is not on their whitelist. This might eventually cause people to need to actually physically meet (gasp!) in order to add each other to their whitelist.
Edit: "Unnecessary" might be my judgement, instead of "acceptable."
> There's also the question of ethical considerations around using AI for mass personalized outreach. While my experiment yielded positive results, with recipients appreciating the personalized touch, there's a potential slippery slope.
Unbelievable... I'm not a philosopher, but in my understanding, being ethical doesn't mean walking the line just finely enough that people don't call you out on your bullshit.
The ethics of an action is of consideration both BEFORE and after executing it, and on the merit of the action itself!
Cold spamming is illegal where I'm at, probably Europe as a whole?
I'd be curious how this plays out in court. Probably something like:
- If you use an AI tool to scrape leads and to generate the content but then still send out individual emails from your Mail provider, it's still a cold email.
- If you use an AI tool and also automate the email delivery, it should be considered spam.
...
2024: AI impersonating Bill Gates sends you SPAM
It’s sad that going forward I probably won’t be able to tell genuine interest from this kind of fake bullshit.
If they don't know my name, they don't even know where they got my email from, so probably spam, however intelligible it looks.
It's the same in the age of spam calls. If it's a mobile phone and the person behind didn't even bother to introduce themselves via SMS/WhatsApp, I don't pick up.
This will make it worse.
Solutions? At least some could involve key exchange. How about a bounty of some sort on spammers?
... shall we tell him?
Dug?
They admit (or actually brag) about it on their company blog "I used AI agents to send out nearly 1,000 personalized emails to developers with public blogs on GitHub."
Do you think they're bluffing?
> This sounds like the average email written by a human
that's the point
Guys, it’s a tool like any other.
Anyways. LLM is a program created by supercomputers to be deceptive.
Also, it took away the aspect of life where people around the world could cold-email each other if their hobbies aligned.
And in general, now the percentage of potential bad actors went from near 0 to near 100.
And for why? .. ..
a) doable
b) the right solution.
(And eventually start producing very weak chips that can run your business and accounting on a TUI.)
> Your right to swing your fist stops at my nose.
... what an incredibly odd thing to say.
But really, I've noticed that thought-ending cliches like this one are popping up as defensive reactions around LLMs more and more. This particular thought-ender displays the most common theme - it dismisses all skepticism as being driven by some amorphous "anti-AI" demographic, presumably allowing the author to dismiss any concerns and thereby preventing any critical thought from occurring.
Kind of feels like "nocoiner" and "have fun being poor", v2 ...
As TFA shows, this machine learning is almost indistinguishable from actual intelligence. It might not be sci-fi AI, but it certainly is artificial, and it is indistinguishable from intelligence. AI is a very apt description of what it is.