Yes, they could have accessed logs before, but there’s a difference between directed checking after incidents and active surveillance at scale.
It has had no impact on recruiters trying to win me back since then.
I work at a tech firm in India, and we are encouraged to create a skills.md based on the traits of our colleagues, with the intention of reducing key-personnel risk. A handful of engineers were let go as the result of a re-alignment, and their AI counterparts are actively maintaining their code.
I wonder if this is where they are going.
Like that "Scott is an asswipe who never agrees to any idea that isn't his" or what?
If keystrokes are captured, isn't this a double-edged sword? The company might be inadvertently collecting evidence against itself if there's an investigation and the investigators want to collect keystrokes.
Not that I support it -- but typically companies don't do this in spite of security concerns, they do it to address security concerns. But of course, what Meta is doing sounds like a different situation. It sounds like they want to make a model that replaces part of their workforce.
And when all of the above happens Meta will be absolved of any responsibility.
I don't understand how it's legal either. I guess we need laws against it yesterday.
Meta already has literally billions of people's personal profiles and browsing histories.
I don't think screenshots of their SWEs' IDEs are going to be useful for identifying internet users.
The cat is out of the bag, but that doesn't mean it's a non-issue.
EDIT: While we are here, let's do this for politicians as well :), publicly available, auditable 24-hour surveillance.
The legal environment is the only way to baseline behavior. In countries with strong workers' rights, you generally don't have to fight much to make use of them; it's the norm for management, too. Likewise, the US-style norm of having no expectations of your employer, and the "stay in your lane" takes rampant in this thread, are also symptoms of the environment and its norms.
And you expect Meta employees, of all people, to believe this?
I realize you can argue that whatever is done at work should have no expectation of privacy, and I get that, but as an employer myself I've always felt that schemes like keyboard and mouse tracking go a chasm too far. Your employees are human beings, not robots. In the older context of corporate productivity tracking there are far better metrics available - starting with, I don't know, maybe talking to your employees and asking them how things are going.
I wouldn't have a problem if it were opt-in, but if this were foisted upon me I would surely quit.
Then they'll deploy models trained on this, and begin capturing the keystrokes of employees using AIs that are good at using AIs to do work.
Repeat a few times and they'll start capturing the keystrokes of people mashing their heads into keyboards in despair and exclaiming, "Why can't these models do anything anymore!!"
It’s only once the business is having a cash crunch or will no longer need to hire competitive candidates that they start letting people go without severance.
These models already have the skills that humans were using them for, so either by training the models to use subagents or by simply inlining the work done by the AI, you have a much easier time training the model to perform tasks from a human distribution. The humans have done the work of making the human distribution look more like an AI distribution.
If it is as you say, then eventually the house of cards will crumble. Then we can finally go back to work and quit being inundated with needing to use AI for everything.
- the project was time-constrained, so there was hardly any time, and
- there were serious ethical questions which could never be addressed well within the allotted time for this project
So we ended up discarding the idea of collecting data from a representative group, even before we got to the point of asking "how do you handle that ethically". We ended up collecting data from one subject: the student in question, indeed. He handled the data from which he derived heuristics that simulated it. The collected data therefore never left the student's hands.
<sarcasm>Silly us, we should have just not bothered and collected it from anyone and anywhere. Apparently.</sarcasm>
In all seriousness, this callous and complete disregard for ethical questions offends me so very much.
We’ve been moving towards a more and more tyrannical company controlled society for a long time and now they’re straight up doing hacking tactics to train machines to take our jobs. Doesn’t get much more bleak than that.
The goal is to manufacture a lack of empathy along the lines of: "why should I treat this person better than I was treated".
But then, we're talking about humans, especially the violence-enjoying strata of humans here.
I hope this is widely hacked. If these employees are any good, someone will whip up a countermeasure that feeds absurdly wild and nonsensical data into Meta's fetid, gaping maw.
You can browse personal accounts from your phone.
I’m surprised this needs to be said out loud.
Last time I checked, Facebook is not a thing other than for watching AI-generated content, and Instagram is still a thing for watching mind-numbing content and getting distracted from other problems by doomscrolling.
It's the same as language translators and RLHF annotators doing work that contributes to AI training data. Is Facebook or Instagram solving a problem for humanity that's worth selling your soul for? Won't a job that can be automated with AI training data of clicks and typing, markdown files, and next-function prediction with no coherence be significantly worse than a job that requires creativity?
The presentation of the video and all the comments were on awesome cool ego-centric video understanding research that’s going to totally obsolesce human labor. I couldn’t get over how grim the video was. Here are some people in one of the least desirable positions in the world, and that’s not enough. Now they must labor without a shred of dignity, knowing they’re training their own replacements and likely not a thing they can do about it.
I’ve struggled to find enough freelance work to stay busy recently, but more than that I’m starting to feel a moral crisis. It’s getting harder and harder for me to feel like what we’re collectively doing isn’t absolutely fucked.
This is like going to work in a drug-lab where everyone is required to strip naked to ensure no "product" can be smuggled out. It's a zero trust environment at first blush, with the added terror of it being used to replace you with AI.
People working naked in a drug lab have more job security than meta employees and an equivalent level of respect and trust from their employer. However, they can't unionize because they have no legal protections. Their employer could literally point a gun at them if they complained. That isn't the case for Meta employees. Just sayin'.
The interesting design question isn't "can we collect this" — it's "what do we lose by not collecting it."
I run a small web tool. Early on I considered session recording, heatmaps, keystroke timing. Every one of those would have made the product slightly better to optimize. I didn't add any of them. Not because of regulation, but because I didn't want to be in the business of explaining to users what I was doing with their cursor path.
Meta is in a different position — these are employees, not customers, and the stated goal is AI training. But the logic is the same: once you've decided the data is useful, the collection feels justified. The question is who gets to decide "useful to whom."
The part that bothers me most isn't the mouse movements. It's that there's no symmetry: employees can't see what Meta does with their cursor data the same way Meta can see what the employee's cursor does.
Meanwhile, nobody seems focused on capturing CEOs' data for AI training.
Imagine in 300 years we are still ruled by zuck, ellison, bezos, musk, thiel, et al, just in AI model form, empowered by estates worth more than entire nations and legal protections designed to outlast the heat death of the universe. Assuming there is still a "we" living on earth. Charitable assumption, I guess.
Someone had to do it, distasteful though it may be. Could be quite hilarious what it learns in the process.
Dogtraining? Dogwalking? Dogfeeding?
1) With vibe coding being so effective, Meta employees have easy ways to poison the data set in <30 min of work.
2) What could they possibly want or create that would justify the bad press and employee disengagement from work devices? Better spam bot detection? They have to have some of the best already.
3) No really, Meta employees have the opportunity to really mess with the data in quite humorous ways without any pushback (see 0; a toy sketch follows below)
I couldn't imagine life without my unique keystrokes and mouse movements.
Some call it museumverse.
“The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.”
i've heard it described that evil is that which believes itself to be good without exception. i think i'm starting to agree...
As far as I understand, there is plenty of research in disciplines ranging from social studies through psychology to game theory and economics, as well as informal simulations, that strongly suggests that human interactions are positive for participants pretty much if and only if those interactions are repeated, which realistically only occurs if participants are already circumstantially close - same neighborhood, same job, family, friends, same school, etc.
One-off interactions are almost invariably toxic with at least one of the participants getting cheated, bullied, or otherwise harmed.
So the whole premise of connecting people unconditionally, including anonymously, automatically, and from opposite sides of the world is inherently broken and doomed to do a lot of damage.
So even Meta's self-proclaimed mission is damaging to society if followed. What, at that point, could possibly be expected from what they actually do, given the combination of basic facts: the primary purpose of any business is to make money, Meta has a notoriously evident disregard for ethics, they are an advertising business and entertainment provider, they are deep into enshittification and market saturation, and of course there are actual honest mistakes to boot.
I will say that I feel for the folks who work at Meta... I can't help but feel they have long since jumped the shark.
With a terminal application, all interaction is trainable signal (unlike, say, Cursor, which is an IDE and lets users freely explore, edit files, and move the mouse; the model sees none of that, so there is nothing to train on).
So Meta is doing the obvious: we want to train a computer-use model, so we need training data. Better to capture it from employees than to buy low-quality data.
This is clearly and obviously the shortest possible path to a total surveillance society.
They don’t add anything beneficial to society. They exist to sell ads.
Technofascism.
Fixed it.
kk1Gi// file.js<Esc>M/func<Enter>o let<Esc>``
Taking screenshots too.
If they continue to share their work through open releases despite the leadership change, I hope we get to benefit from their work.
Not quite optimistic about the result, as I wonder whether, on aggregate, we all consistently interact with computers in the most efficient way possible. Maybe it's to beat CAPTCHA or scraper detection through mimicry, perhaps.
Seems like a strange approach in general. I'd have assumed you'd just have it use accessibility features to get at things, if there is no other interface.
Also, why are the investors not suing the legs off of Zuck for the whole metaverse debacle? It is a scam and pure fraud. Also a dumb name; sue for that too. Should have just renamed it meeme.
Sure, you can do everything a human can, but it also seems VERY inefficient
As an alternative, maybe you could just do network in/out?
The computer UI is the way it is because that is optimal for humans. If your plan is to replace humans, why not just replace the whole stack, OS and all, with something these models already know how to use?
Terminated for under-performance? Subpoena the surveillance data. Get their data scientists in for a deposition to explain the algorithm and schema.
All it takes is one disgruntled employee with deep enough pockets and a large enough axe to grind to turn this training data into nuclear waste that poisons the company top to bottom, but I'm confident nothing will change until that happens, at which point the current crop of leaders will already have their moneybags and have fucked off safely into the sunset.
The signal is every time a human has to grab the wheel. That's a label for what the agent still misses.
They 'trust me'. Dumb f*ks.
Do the executives know better at this point, but they've so toasted the culture that no one can fight against it anymore?
There is also this effect:
- CEO says "the lights are a bit dim in here"
- that turns into "We need to change all of the lightbulbs in here immediately!"
(this is especially true in firms where the CEO cares a lot about being proactive).
Two great posts/stories about this:
1. This post about smart employees "reading their managers minds": https://yosefk.com/blog/people-can-read-their-managers-mind....
2. In Michael Crichton's book Disclosure there is a great line: "Why did you dress casually instead of wearing a suit? Is it b/c you wanted to do that or b/c the CEO did it and you wanted to show you were part of the team??"
What does this link tell you? https://www.thedailybeast.com/facebooks-sheryl-sandberg-told...
Hooking keystrokes, mouse, screenshots on a local machine is what every decent journaling or timesheet app already does, and nobody cares because the file stays on the user's machine. Meta isn't getting dragged because they figured out how to instrument work laptops. They're getting dragged because they're the ones holding the logs, and "just for training" is a promise that hasn't aged well anywhere it's been deployed the last ten years.
Annoying side effect — the genuinely useful version of this, local activity logs you own for your own records, gets lumped in with bossware every time this comes up. Most freelancers and consultants I know would pay for the former. Most of them would quit over the latter.
Half my workday is me browsing random tabs while an AI agent does the actual work. They're going to train a model on alt-tabbing and scrolling HN/Twitter/Reddit.
Don't get me wrong. This is not to applaud it; it is simply fascinating when looked at through a factual lens.
And maybe there will be more questions than answers. I bet this is going to be funny when there isn't a clear picture in the end. What is a high performer or a low performer anyway? There are many pieces missing. I, for example, do a lot of visuals in a notebook with a pencil. To this day I find Miro etc. distracting, too distracting for my creativity. Handwriting is different from typing: I am way faster typing, and I associate through everything until I lose track of the main thing. Not so with notes on paper. I utilize this fact and don't favor one over the other, but bloat is the result of doing everything virtually.
So what would my keystrokes look like, then? I don't know. Highly efficient, maybe, but with lots and lots of gaps without hitting the keys.
Low performers? I oversaw over 500 engineers, did over 400 interviews, built departments from the ground up, watched people do their work, and helped them via in-house developed tools to work better while having more fun.
So I often think in Gaussians, and "low performers" (a term I despise, but it is used for simplicity) aren't really doing nothing; in fact they work a lot. It is just the content or the method that is so bad.
My best devs put a lot of consideration into architecture and communication - I trained them. They made key decisions and helped teams get better.
The industrious low performers complained about them: that they rarely do "the work" at their PCs. Well, well.
So, would I feel comfortable? No. And don't do unto others - as the saying goes.
But if there won't be any consequences, just data which cannot be tied to a worker - or, if it can, it can only be used to benefit them - I would happily take part in such data gathering, because we all do personal optimization and I am curious what the data "says" vs. subjective feeling.
On the other hand, tracking might be inevitable - hear me out - if these people are working under NDAs etc. Leakage is monitored anyway, make no mistake. So it sounds like closing a gap.
Tough, very tough.
I tend to say no one gets ousted in corporate companies for their mistakes, but by their foes. So data is one thing; the one stabbing you, another.
Always thought Meta was a god-awfully run company, and this just brings home the cake.
A cynic would say this has nothing to do with AI since meta owns employee machines anyways and has always been collecting data. Perhaps voluntary attrition > layoffs thinking in action.
But I’m having a lot of trouble envisioning how my keystrokes could actually train something you could use while you’re typing. Latency between keystrokes and seeing things appear would kill my productivity way more than a tool could recapture. Heck, when VS Code fell in love with agent suggestions it took me a week to fix my editor so I could be productive again.
Really though it seems reasonable to me. They want data to train AI, and their employees are obviously a large source.
They could already track your every click. They have root on your work MacBook. Most employers do.
If you then think of crazy companies such as Palantir, something really has to be done about those entities. As a first step I suggest disbanding those companies, for many reasons, including wrong ethics.
Btw, do they at least pay them extra for this spying, or is it supposed to be for free? I mean, if they paid at least 30-50% on top of the salary, maybe I wouldn't mind doing it on a dedicated Meta computer.
Martin Niemöller's "First They Came" fits perfectly and it shows how hypocritical it is to approach the subject of ai training only when it applies to you. What a Wonderful World!
I...admire the diligence
Horseshit.
1. Employees are being asked to train AI to replace them.
2. Performance assessments will 100% be impacted. No question.
Thinking back on the OTT interview experience that Facebook helped pioneer: imagine making it through that, getting paid a massive sum of money BUT barely getting by on it because of the location, and then they drop this crap on you?
Big Brother is always watching.
More proof that they do not care about you at all. This is Meta's way of moving fast and destroying everything at all costs.
Optimizing ourselves to death.
Capitalism is asleep at the wheel with its foot stuck on the gas pedal.
I know you've long been hypnotized by libertarianism and the cult of the individual.
Maybe it's time you reconsider in light of the overwhelming evidence that the capitalist class is, in fact, not your friend.
The only known way for workers to assert their rights is collective action. Alone, you are weak and replaceable. Together, we are strong.
It's time for a proper tech worker's union, to give us some fangs to claw back our dignity with.
> The tool will run on a list of work-related apps and websites and will also take occasional snapshots of the content on employees’ screens for context, according to one memo, posted by a staff AI research scientist on Tuesday in a dedicated internal channel for the company's model-building Meta SuperIntelligence Labs team.
ALL YOUR DATA IS BELONG TO US
¯\_(ツ)_/¯
If they captured display output as well, it could be a very useful dataset for generalized computer use.
Now imagine a society where your individual daily actions are recorded, reviewed and helpfully advised upon.
Millions of people making millions of actions each day, all recorded, compared, and sifted for positive feedback and improvement overall.
Just how far ahead would such a society pull compared to one that stays at today's level? Compared to one that used totalitarian methods enabled by such surveillance?
The difference between Soviet and Western Europe was not the tech, it was the trust.
If we can build a society with trust, then this tech will turbocharge us.
If …