Organizationally, everyone should be prepared for and encourage that kind of response as well, such that employees are never scared to say it because they're worried about a snarky/angry/aggressive response.
This also applies to non-work related calls: someone from your credit card company is calling and asking for something? Call back on the number on the back of your card.
A few understand immediately and are good about it. Most have absolutely no idea why I would even be bothered about an unexpected caller asking me for personal information. A few are practically hostile about it.
None, to date, have worked for a company with a process in place for safely establishing the identity of the person they're calling. None. It's all lengthy on-hold queues, a different person with no context, or a process that can't be suspended and resumed, so the person answering the phone has no idea why I got a call in the first place.
(Yet I'll frequently get email full of information that wouldn't be given out over the phone, unencrypted, unsigned, and without any verification that the person reading it is really me.)
The organisational change required here is with the callers rather than the callees, and since it's completely about protecting the consumer rather than the vendor, it's a change that's unlikely to happen without regulation.
What's fun here is, the moment they ask you for anything, flip the script and start to try to establish a trust identity for the caller.
Tell them you need to verify them, and then ask how they propose you do that.
Choose your own adventure from there.
Last time I did that, the caller said "but you can just trust that I'm from <X>." So I replied that they, likewise, could just trust that I'm me, and you could practically hear the light bulb click on. They did their best to help from there but their inbound lines aren't staffed effectively so my patience ran out before I reached an operator.
However, I don't have strict requirements. When a simple callback to the support line on the card, bill, or invoice doesn't suffice--and more often than not it does, where any support agent can field the return call by pulling up the account notes--all I ask for at most is an extension or name that I can use when calling through a published number. I'll do all the leg work, and am actually a little more suspicious when given a specific number over the phone to then verify. Only in a few cases did I have to really dig deep into a website for a published number through which I could easily reach them. In most cases it suffices to call through a relatively well attested support number found in multiple pages or places[1].
I'm relatively confident that every American's Social Security number (not to mention DoB, home address, etc) exists in at least one black market database, so my only real practical concern is avoiding scammers who can't purchase the data at [black] market price, which means they're not very sophisticated. A callback to a published phone number for an otherwise trusted entity that I already do business with suffices, IMO. And if I'm not already doing business with them, or if they have no legitimate reason to know something, they're not getting anything, period.
[1] I may have even once used archive.org to verify I wasn't pulling the number off a recently hacked page, as it was particularly off the beaten path and a direct line to the department--two qualities that deserve heightened scrutiny by my estimation.
For example: whenever a caller requests sensitive information, they give you a temporary extension directing to them or a peer, and ask you to call the organization's public number and enter that extension. Maybe just plug the number into their app, if applicable, to generate a direct call.
Like other comments have mentioned, the onus should be on them. Also, they would benefit from the resultant reduction in fraud. Maybe a case study on fraud reduction savings could help speed the adoption process without having to invoke the FCC.
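A minimal sketch of what the extension scheme above might look like on the organization's side (everything here is hypothetical; a real version would live in the PBX/call-routing system):

    import secrets, time

    # Hypothetical in-memory table of temporary callback extensions.
    EXTENSIONS: dict[str, dict] = {}

    def issue_callback_extension(agent_id: str, ttl_seconds: int = 900) -> str:
        # Give the customer a short-lived extension that routes back to the
        # same agent (or their queue) via the organization's published number.
        ext = str(secrets.randbelow(9000) + 1000)   # e.g. a 4-digit extension
        EXTENSIONS[ext] = {"agent": agent_id,
                           "expires": time.time() + ttl_seconds}
        return ext

    def route_inbound(ext: str) -> str | None:
        # Called by the switchboard when the customer dials the extension.
        entry = EXTENSIONS.get(ext)
        if entry is None or time.time() > entry["expires"]:
            return None      # expired or unknown: fall back to the main queue
        return entry["agent"]

The point is that the only thing the customer has to trust is the published number, which the caller never controls.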
It works if I call a bank or insurance company or something like that. A robot voice will ask me to authenticate, and when I have done so and am transferred to an operator, they will see that I authenticated. So it works when I call them, but not the other way around. We need a new system.
The trouble is, calling the number on the back of your card requires actually taking out your card, dialing it, wading through a million menus, and waiting who-knows-how-long for someone to pick up, and hoping you're not reaching a number that'll make you go through fifteen transfers to get to the right agent. People have stuff to do, they don't want to wait around with one hand occupied waiting for a phone call to get picked up for fifteen minutes. When the alternative is just telling your information on the phone... it's only natural that people do it.
Of course it's horrible for security, I'm not saying anyone should just give information on the phone. But the reality is that people will do it anyway, because the cost of the alternative isn't necessarily negligible.
Note that it's very important not to let them give you an actual phone number to call on. This sounds obvious but I know someone who hung up but called back on a number given by the scammers, which was of course controlled by them and not the bank.
I like the "hang up, call back" approach because it takes individual judgment out of the equation: you're not trying to evaluate in real time whether the call is legit, or whether whatever you're being asked to share is actually sensitive. That's the vulnerable area in our brains that scammers exploit.
You're saying it's natural for people not to want to call back and wade through a million menus, and I agree.
But the conclusion from this is that companies should change their processes so that calling back is easy, precisely because otherwise people won't do it.
And the more people that do it despite the costs, the more normalized it'll be, and the more companies will be incentivized to make it easier.
If your customers are captive, this is all upside. And most customers will tolerate this. The ones that do churn somehow don't generate blame for the psychopaths who implement this hostile practice, those bastards cut support costs and get promoted out.
Normally we just write messages back and forth in the banking app, and if we talk it's an online meeting with video. Only for large business matters do I go to the physical site.
There's a number of situations, not just credit card ones, where it's impossible or remarkably difficult to get back to the person that had the context of why they were calling.
Your advice holds, of course, because it's better to not be phished. But sometimes it means losing that conversation.
Honestly, the worst experiences are usually with large companies that funnel all customers into massive phone centers - I've probably lost the better part of a week to Comcast over my lifetime.
[1] For at least one place, the people that proactively identify things that could be fraud and call you...they aren't the same people you call to report fraud on your own. Why? No idea.
I have had at least one situation where I spent a while trying to get back to a quite convincing/legitimate sounding caller this way, where, as I escalated through support people it became increasingly clear that the initial call had been a high quality scam, and not in fact a real person from the bank.
Companies, including banks, don't call you to protect _your_ interests, they call you to protect themselves.
It's probably a good idea to program your bank's fraud number into your phone. The odds that someone hacks your bank's Contact Us page are small but not zero.
The bedrock of both PGP and .ssh/known_hosts could be restated as, "get information before anyone knows you need it".
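That principle is easy to mechanize. A minimal trust-on-first-use sketch (the file path and helper are hypothetical):

    import hashlib, json, os

    PIN_FILE = os.path.expanduser("~/.known_numbers.json")  # hypothetical pin store

    def check_pin(name: str, value: str) -> bool:
        # TOFU pinning, like .ssh/known_hosts: record the value (a fraud
        # department's phone number, a server key fingerprint) before you
        # need it, then alarm if it ever changes.
        pins = {}
        if os.path.exists(PIN_FILE):
            with open(PIN_FILE) as f:
                pins = json.load(f)
        digest = hashlib.sha256(value.encode()).hexdigest()
        if name not in pins:
            pins[name] = digest            # first use: trust and record
            with open(PIN_FILE, "w") as f:
                json.dump(pins, f)
            return True
        return pins[name] == digest        # later: must match what we recorded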
Fraud departments contacting me about potentially fraudulent charges is always going to make me upset. Jury is still out on whether it will always trigger a rant, but the prognosis is not good.
There will need to be jail time for the idiots writing the government standards on these fraud departments before we get jail time for the idiots running these fraud departments before it gets better.
I'm not sure what grounds you issue arrest warrants on, but I appreciate the sentiment.
This recently happened to me, and bizarrely they wouldn’t tell me what’s actually going on with my account because of not being able to verify me. (They were also immediately asking for personal information on the outbound call, which apparently really was from them.)
A big one I'm aware of many others complaining about in the industry is local governments in the UK soliciting elector details via 'householdresponse.com/<councilname>' in a completely indistinguishable from phishing sort of way.
(They send you a single letter directing you to that address with 'security code part 1' and '2' in the same letter, along with inherently your postcode which is the only other identifier requested. It's an awful combination of security theatre and miseducation that scammy phishing techniques look legit.)
I received an email from my state’s RTA, saying they were adding 2-factor authentication to licences. Great! I assumed this might be an oauth type scenario, or maybe even just email.
Nope. The “second” factor is a different number printed on the licence. Surely this communication had to go through multiple departments, get vetted for accuracy. Yet no one picked up that this isn’t multi factor authentication.
Its only purpose is to make it easier for them to issue a new licence _after_ you’ve been defrauded out of all your money, because most states refuse to issue people with new licence numbers. It does nothing more than fix an incompetence in their system/process. Yet it was marketed as some kind of security breakthrough, as if it would add protection to your licence.
I had AWS of all places do this to me a year or two ago. The rep needed me to confirm some piece of information in order to talk to me about an ongoing issue with the account. If I recall correctly, the rep wanted a nonce that had been emailed to me.
"I'm terribly sorry but I won't do that. You called me."
Ultimately turned out to be legit, but I admit I was floored.
As part of that a genuine ask for a password would get the same response, and perhaps the button sends a nice message like "Looks like you have asked for a password. We get it, sometimes you need to get the job done, but please try to avoid this as it can make us insecure. Please read our security policy document here."
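A minimal sketch of what that check might look like (the pattern and the message wording are illustrative, not a robust secret detector):

    import re

    # Hypothetical chat-bot hook: flag messages that look like shared credentials.
    SUSPECT = re.compile(r"(?i)\b(password|passwd|otp|2fa|code)\b\s*[:=]?\s*\S+")

    def on_message(text: str) -> str | None:
        if SUSPECT.search(text):
            return ("Looks like you have asked for or shared a password. "
                    "We get it, sometimes you need to get the job done, but "
                    "please try to avoid this as it can make us insecure. "
                    "Please read our security policy document here: <link>")
        return None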
This is a policy I've implemented as well, both for myself and loved ones: don't provide any information to unverified incoming calls. Zero.
Sometimes I'll get some kind of sales call, which I may even be interested in. I'll say: proceed with the pitch, to which they'll reply "first we need to confirm your identity". Then I refuse: you called me. Why do you need me to provide private information to confirm my identity?
It really irritates me that some significant companies openly encourage customers to ignore this advice, teaching them bad practice. The most recent case I know of is PayPal calling me. It really was them (a new cc account; I thought I'd set up auto payment but hadn't, so I was a couple of days late with the first payment), but it so easily could have not been. The person on the other end seemed rather taken aback that I wouldn't discuss my account or confirm any details on a call I'd not started, and all but insisted that I couldn't hang up and call back. In the end I just said I was hanging up, and if I couldn't call back then that was a them problem, because at that point I had no way of telling if it was really the company or not. At that point she said she'd send a message that I could read via my account online, which did actually happen, so it wasn't a scammer. But to encourage customers to perform unsafe behaviour with personal and account details is highly irresponsible.
Especially OTP codes.
I can't understand how someone works at a tech company and is clueless to the point of sharing an auth code over the phone. My grandma, sure, but a Retool employee? C'mon, haven't we all read enough of these stories?
Security is a weak-link problem, not a strong-link one. You have to plan for the least security-minded people, the tired and stressed employee.
To the point of sharing an OTP code over the phone from a strange number? I'm sorry, no.
The millennials are right in not picking up the phone.
Also, I think the article spends a lot of effort trying to blame Google Authenticator and make it seem like they had the best possible defense and yet attackers managed to get through because of Google's error. Nope, not even close. They would have had hardware 2FA if they were really concerned about security. Come on guys, it's 2023 and hardware tokens are cheap. It's not even a consumer product where one can say that hardware tokens hinder usability. It's a finite set of employees, who need to do MFA certain times for certain services, mostly using one device. Just start using hardware keys.
(I wish we could blog about this one day... maybe in a few decades, hah. Learning more about the government's surveillance capabilities has been interesting.)
I agree with you on hardware 2FA tokens. We've since ordered them and will start mandating them. The purpose of this blog post is to communicate that what is traditionally considered 2FA isn't actually 2FA if you follow the default Google flow. We're certainly not making any claims that "we are the world's most secure company"; we are just making the claim that "what appears to be MFA isn't always MFA".
(I may have to delete this comment in a bit...)
Law enforcement is currently attempting to ascertain whether or not the actor is within the US. If it's within the US, I (personally) believe there's a good chance they'll take the case on and presumably with enough digging, will find the attacker. (The people involved seem to be... pretty good.)
But if they're outside US (which is actually reasonably high probability, given the brazenness of the attack, and the fact that they're leaving a lot of exhaust [e.g. IP address, phone number, browser fingerprints, etc.]), then my understanding is that law enforcement is far less interested, since it's unlikely that even an identification of the hacker would lead to any concrete results (e.g. if they were in North Korea). (FWIW, the attack was not conducted via Tor, which to me implies that the actor isn't too worried about law enforcement.)
To give you a sense, we are in an active dialogue with "professionals". This isn't a "report this to your local police station" kind of situation.
Does that mean they have audio of the call?
I really like TOTP. It gives me more flexibility to control keys on my end. And you can still use a Yubikey to secure your private TOTP key. But you can also choose to copy your private key to multiple hardware tokens without needing anyone's permission. Properly used, you can get most of the benefit of FIDO2 with a lot more flexibility.
I actually recently deployed TOTP, and everyone was quite happy with it. But knowing that Google is syncing private keys around by default, I no longer think we can trust it.
Since you might have you delete the reply anyway, can I get a candid answer on why hardware 2FA tokens weren't a part of the default workflow before the incident? Was it concerns about the cost, the recovery modes, or was it just the trust in the existing approach?
If you have any vendors without SSO (like GitHub, because it's an Enterprise feature), you're lucky if they support hardware tokens (cool, GitHub does) and even luckier if their "require 2FA" option (which GitHub has, per organization) allows you to require hardware keys (which GitHub does not).
Distributing hardware keys to employees is one thing. Mandating them is quite another.
Much, much cheaper than $21/user/month for GitHub Enterprise. I'm not sure what universe you live in where buying hardware keys is expensive compared to Enterprise licensing?
How is this Google's fault?
Which rock was this employee living under to not have understood you NEVER give an OTP code to anyone?
I've set up my phone to record all calls. The employee could have too.
I'm surprised Google encourages syncing the codes to the cloud... kind of defeats the purpose. I sync my TOTP between devices using an encrypted backup, even if someone got that file they could not use the codes.
FIDO2 would go a long way to help with this issue. There is no code to share over the phone. FIDO2 can also detect the domain making the request, and will not provide the correct code even if the page looks correct to a human.
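For what it's worth, the encrypted-backup approach mentioned above is easy to roll yourself. A minimal sketch using the Python cryptography package (the parameters and function names are illustrative):

    import base64, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def _key(password: str, salt: bytes) -> bytes:
        # Derive a Fernet key from a backup password.
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=600_000)
        return base64.urlsafe_b64encode(kdf.derive(password.encode()))

    def encrypt_seeds(seeds: bytes, password: str) -> bytes:
        salt = os.urandom(16)               # stored alongside the ciphertext
        return salt + Fernet(_key(password, salt)).encrypt(seeds)

    def decrypt_seeds(blob: bytes, password: str) -> bytes:
        salt, token = blob[:16], blob[16:]
        return Fernet(_key(password, salt)).decrypt(token)

Someone who steals the backup file still has to brute-force the password before the seeds are any use to them.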
Depends on what you think the purpose is. People talk about TOTP solving all sorts of problems, but in practice the only one it really solves for most setups is people choosing bad passwords or reusing passwords on other insecure sites. Pretty much every other threat model for it is wishful thinking.
While I also think the design decision is questionable, for the average person the gain in security from not constantly losing their phone probably outweighs the loss of security from it all being in a cloud account (for most people, their Google account is probably one of their best-secured accounts).
Ultimately, I think for the average user the attacker is mostly not in physical proximity (although there certainly are exceptions), and if you are being targeted explicitly then you are screwed if they are installing cameras and modifying your hardware.
Maybe the big exception would be a camera in a coffee shop looking for people (not live) logging into their bank accounts. I could imagine this being a helpful defense.
The user can still put in an insecure password but uploading all your 2FA tokens to your primary email unencrypted is basically willingly putting all your eggs in one basket.
The vector was this: Blizzard let you disable the authenticator on your account by asking for 3 consecutive TOTP outputs from your device. That would let you delete the authenticator from your account.
The implementation was to spread a keylogger as a virus, and when it detected a Blizzard login, it would grab the key as you typed it, and make sure Blizzard got the wrong value when you hit submit. Blizzard would say try again, and the logger would collect the next two values, log into your account, remove the authenticator and change your password.
By the time you typed in the 4th attempt to log in, you'd already be locked out of your account, and by the time you called support, they would already have laundered your stuff.
This was targeting 10 million people for their imaginary money and a fraction of their imaginary goods. On the one hand that's a lot of effort for a small payoff. On the other, maybe the fact that it was so ridiculous insulated them from FBI intervention. If they were doing this to banks they'd have Feds on them like white on rice. But it definitely is a proof of concept for something much more nefarious.
The rogue JavaScript or keylogger would just steal the TOTP code, prevent the form submission, and submit its own form to the malicious person's server.
Not to mention, if your threat model includes an attacker who has hacked the server and added JavaScript, why doesn't the attacker just take over the server directly?
If the attacker installed a keylogger, why don't they just install software to steal your session cookies?
This threat model doesn't make sense. It assumes a powerful attacker doing the hard attack and totally ignoring the trivially easy one.
> in practise the only one it really solves for most setups is people choosing bad passwords or reusing passwords on other insecure sites. Pretty much every other threat model for it is wishful thinking.
Why is no one talking about this?
Ultimately TOTP & SMS based 2FA is used because it solves the real business problem that websites face (the business problem being that when enough users get hacked, they blame the business, not themselves, so we just need to save most of them, not all). Yes there is some fear mongering to make people sign up for 2FA, but it is actually solving a big problem effectively. It doesn't matter that it's not helpful in more fanciful scenarios, since those scenarios are largely imaginary to begin with (for the average user).
I could not agree more with this sentiment! We need more of this kind of automated checking going on for users. I'm tired of seeing "just check for typos in the URL" or "make sure it's the real site!" advice given to the average user.
People are not able to do this even when they know how to protect themselves. Humans tire easily and are often fallible. We need more tooling like FIDO2 to automate away this problem for us. I hope the adoption of it will go smoothly in years to come.
That's kind of what I was trying to get at with my previous statement about humans being tired and fallible. The way we access and protect our digital assets feels incredibly un-human to me. It's wrapped up in complexity and difficulty that is forced upon the user (or kept away from, if you want to look at it that way).
As it is now, all of the solutions are only really available to someone who can afford it (by life circumstance, device availability, internet, etc) and those who can understand all the rules they have to play by to be safe. It's a very un-ideal world to live in.
When I brought up FIDO2, I was less saying "FIDO2 is the answer" and more saying, "we need someone to revolutionize the software authentication and security landscape because it is very very flawed".
Probably so when you upgrade/lose your phone you don't otherwise lose your MFA tokens. Yes, you're meant to note down some recovery MFA codes when you first set it up, but how many "normal people" do that?
I never did get around to doing all of them so I still have the old phone in a drawer for those rare times I need it.
Additionally, we have a program in place which periodically "baits" us with fake phishing emails, so we're constantly on the lookout for anything out of the ordinary.
I'm not sure what the punishment is for clicking on one of these links in a fake phishing email, but it's likely that you have to take the security training again, so there's a strong disincentive in place.
Company security should be based on the assumption that someone will click a phishing link and make that not a catastrophic event rather than trying to make employees worried to ever click on anything. And has been pointed out, that seems a likely result of that sort of testing. If I get put in a penalty box for clicking on fake links from HR or IT, I'm probably going to stop clicking on real ones as well, which doesn't seem like a desirable outcome.
What happened in the article — getting access to one person’s MFA one time — is not exactly a catastrophic event. It just happens, as with most security breaches, a bunch of things happened to line up together at one time to make intrusion possible. (And I skimmed the article but it sounded like the attacker didn’t get that much anyway, so it was not catastrophic.)
And things lining up rarely happens but it will happen enough times for there to be an article posted to Hacker News once in a while with someone saying that it’s possible to make it perfectly secure.
Meh, it's not that disruptive, maybe one email every couple of months.
> Company security should be based on the assumption that someone will click a phishing link and make that not a catastrophic event rather than trying to make employees worried to ever click on anything.
Agreed. I think both things are important: keeping employees on their toes, which reduces the possibility of a successful attack, as well as making it not catastrophic if a phishing attack succeeds.
After all if I can bypass 2FA with my email whether 2FA is backed up to the cloud doesn't matter from a security standpoint.
Certainly I would agree with the assertion that opting out for providers of codes would be nice. Even if it is an auto populated checkbox based on the QR code.
I mean it's a great reason to use U2F / Webauthn second factor that cannot be entered into a dodgy site
https://rakkhi.substack.com/p/how-to-make-phishing-impossibl...
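For the curious, the unphishable property comes from origin binding. A deliberately simplified sketch of the idea; real WebAuthn uses public-key signatures and the CTAP protocol, and the shared-secret HMAC here is only a stand-in:

    # The browser, not the user, supplies the page origin, and the origin
    # is baked into what the authenticator signs, so nothing typed or
    # clicked on a lookalike page can produce a valid assertion.
    import hashlib, hmac

    def sign_assertion(device_secret: bytes, challenge: bytes, origin: str) -> bytes:
        data = hashlib.sha256(origin.encode()).digest() + challenge
        return hmac.new(device_secret, data, "sha256").digest()

    def verify(device_secret: bytes, challenge: bytes, expected_origin: str,
               assertion: bytes) -> bool:
        # The relying party checks against *its own* origin, so an assertion
        # produced on "https://evil.example" never verifies for the real site.
        good = sign_assertion(device_secret, challenge, expected_origin)
        return hmac.compare_digest(good, assertion)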
That’s normal because that’s how the game is played. All the way up the chain to the org leader, there is no incentive to not do this.
I will tell you a truth: People who think they're smarter than everyone else are generally missing important context or information.
What do you use to accomplish this?
Not multifactor anymore, but also not vulnerable to catastrophic phone destruction or Google account banning. It is what it is.
No. If you think people at your company would fall for this, then IMO you have bad security training. The simple mantra of "Hang up, lookup, call back" (https://krebsonsecurity.com/2020/04/when-in-doubt-hang-up-lo...) would have prevented this.
Literally like 99% of social engineering attacks would be prevented this way. Seriously, make a little "hang up, look up, call back" jingle for your company. Test it frequently with phishing tests. It is possible in my opinion to make this an ingrained part of your corporate culture.
Agree that things like security keys should be in use (and given Retool's business I'm pretty shocked that they weren't), but there are other places that the "hang up, look up, call back" mantra is important, e.g. in other cases where finance people have been tricked into sending wires to fraudsters.
That's why I really like the "Hang up, look up, call back" mantra: it's so simple. It shouldn't be a part of "security training". If corporations care about security, it should be a mantra that corporate leaders begin all company-wide meetings with. It's basically teaching people to be suspicious of any inbound requests, because in this day and age those are difficult to authenticate.
In other words, skip all the rest of "security training". Only focus on "hang up, look up, call back". Essentially all the rest of security training (things like keeping machines up to date, etc.) should be handled by automated policies anyway. And while I agree TOTP is and should be on its way out, the "hang up, look up, call back" mantra is important for requests beyond just things like securing credentials.
You can have all the security training in the world, but every time IT or HR or whoever legitimately reaches out to an employee, especially when it's not based on something initiated by the employee, the company is training exactly the opposite behavior Krebs is suggesting. Hanging up and calling back will likely at minimum annoy the caller and inconvenience the employee. Is the company culture accepting of that, or even better are company policies and systems designed to avoid such a scenario? If a C-suite person calls you asking for some information and you hang up and call them back, are they going to congratulate you on how diligently you are following your security training?
You're not wrong that the Krebs advice would help prevent most phishing, but I'd argue it has to be an idea you design your company around, not just a matter of security training. Otherwise you're putting the burden on employees to compensate for an insecure company, often at their own cost.
(If you happen to be local-king, flip the trust direction, it ends up in the same place.)
If we look at the actual data, we have seen a reduction in employees who fall for phishing emails. Unfortunately we can't really tell if it's the training or if it's the company story about all those millions that got transferred out of the company when someone fell for a CEO phishing scam. I'm inclined to think it's the latter, considering how many people you can witness having the training videos run without sound (or anyone paying attention) when you walk around on the days of a new video.
The only way to really combat this isn’t with training and awareness it’s with better security tools. People are going to do stupid things when they are stressed out and it’s Thursday afternoon, so it’s better to make sure they at least need a MFA factor that can’t be hacked as easily as SMS, MFA spamming and so on.
"Hang up, look up, call back". That's it. Get rid of pretty much all other "security training", which is just a box ticking exercise for most people anyway.
I also agree with the comment about better security tools, but that's why I think "hang up, look up, call back" is still important, because it teaches people to be fundamentally suspicious of inbound requests even in ways where security tools wouldn't apply.
They should stop using OTPs. OTPs are obsolete. For the past decade, the industry has been migrating from OTPs to phishing-proof authenticators: U2F, then WebAuthn, and now Passkeys†. The entire motivation for these new 2FA schemes is that OTPs are susceptible to phishing, and it is practically impossible to prevent phishing attacks with real user populations, even (as Google discovered with internal studies) with ultra-technical user bases.
TOTP is dead. SMS is whatever "past dead" is. Whatever your system of record is for authentication (Okta, Google, what have you), it needs to require phishing-resistant authentication.
I'm not high-horsing this; until recently, it would have been complicated to do something other than TOTP with our service as well (though not internally). My only concern is the present tense in this post about OTPs, and the diagnosis of the problem this post reached. The problem here isn't software custody of secrets. It's authenticators that only authenticate one way, from the user to the service. That's the problem hardware keys fixed, and you can fix that same problem in software.
† (All three are closely related, and an investment you made in U2F in 2014 would still be paying off today.)
Less of an issue though once more non-platform vendors start supporting them (e.g. Bitwarden https://bitwarden.com/passwordless-passkeys/)
Knowing how dead simple TOTP is technically, it's blown my mind that more companies don't host their own totp authn server.
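It really is dead simple. The whole algorithm fits in a dozen lines of stdlib Python (a sketch for illustration, not hardened for production use):

    import base64, hmac, struct, time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        # RFC 6238 TOTP: HMAC-SHA1 over the big-endian time-step counter.
        s = secret_b32.strip().replace(" ", "").upper()
        key = base64.b32decode(s + "=" * (-len(s) % 8))
        counter = struct.pack(">Q", int(time.time()) // period)
        mac = hmac.new(key, counter, "sha1").digest()
        offset = mac[-1] & 0x0F                          # dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Server-side verification is the same computation plus a +/- one-step
    # window for clock skew and a check that each code is accepted only once.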
Push notifications are also, in my experience, a massive pain (both in terms of the user flow where you have to pull out your phone, and in terms of running infra that's wired up to send pushes to whatever device types your users have). Notably, now you need a plan for users that picked a weird smartphone (or don't have a smartphone).
The better option is to go for passwordless auth, which you could self-host with something like Authentik or Keycloak, and then it handles the full auth flow.
Wow that is quite sophisticated.
Are you close enough to members of your IT team to recognise their voices but not be close enough to them to make any sort of small talk that the attacker wouldn’t be able to respond to convincingly?
If you’re an attacker who can do a convincing french accent, pick an IT employee from LinkedIn with a french name. No need to do the hard work of tracking down source audio for a deepfake when voices are the least distinguishable part of our identity.
Every story about someone being conned over the phone now includes a line about deepfakes but these exact attacks have been happening for decades.
Is it plausible that if a good social engineer cold-called a bunch of employees, they'd eventually get one to reveal some info? Yes, it happens quite frequently.
So any suggestion that it was an inside job, or used deep fakes, or something like that would require additional evidence.
Kevin Mitnick's "The Art of Deception" covers this extensively. The first few calls to employees wouldn't be attempts to actually get the secret info, it'd be to get inside lingo so that future calls would sound like they were from the inside.
For example, the article says the caller was familiar with the floor plan of the office.
The first call might be something like "Hey, I'm a new employee. Where are the IT staff, are they on our floor?" - they might learn "What do you mean, everyone's on the 2nd floor, we don't have any other floors. IT are on the other side of the elevators from us."
They hang up, and now with their next call they can pretend to be someone from IT and say something about the floor plan to sound more convincing.
Retool needs to revise its basic security posture. There is no point in complicated technology if the warden just gives the key away.
Couldn't agree more. TBH I thought this post was an exercise in blame shifting, trying to blame Google.
> We use OTPs extensively at Retool: it’s how we authenticate into Google and Okta, how we authenticate into our internal VPN, and how we authenticate into our own internal instances of Retool. The fact that access to a Google account immediately gave access to all MFA tokens held within that account is the major reason why the attacker was able to get into our internal systems.
Google Workspace makes it very easy to set up "Advanced Protection" on accounts, in which case it requires using a hardware key as a second factor, instead of a phishable security code. Given Retool's business of hosting admin apps for lots of other companies, they should have known they'd be a prime target for something like this, and not requiring hardware keys is pretty inexcusable here.
This isn't immediately actionable for every company. I agree Retool should have hardware keys given their business, but at my company with 170 users we just haven't gotten around to figuring out the distribution and adoption of hardware keys internationally. We're also a Google Workspace customer. I think it's stupid for a company like Google, the company designing these widely used security apps for millions of users, to allow for cloud syncing without allowing administrators the ability to simply turn off the feature on a managed account. Google Workspace actually lacks a lot of granular security features, something I wish they did better.
What is a company like mine meant to do here to counter this problem?
edit: changed "viable" for "immediately actionable". It's easy for Google to change their apps. Not for every company to change their practices.
What is hard about mailing everyone a hardware key? I honestly don't see the problem. It's not like you need to track it or anything, people can even use their own hardware keys.
1. Mail everyone a hardware key, or tell them if they already have one of their own they can just use that.
2. Tell them to enroll at https://landing.google.com/advancedprotection/
> Google Workspace actually lacks a lot of granular security features, something I wish they did better.
Totally agree with that one. Last time I checked you couldn't enforce that all employees use Advanced Protection in a Google Workspace account. However, you can still get this info (enabled or disabled) as a column in the Workspace Admin console so you can report on people who don't have it enabled. I'm guessing there is also probably a way to alert if it is disabled.
The key to good security is layering. Attackers should need to break through multiple layers in order to get access to critical systems.
Compromising one employee's account should have granted them only limited access. The fact that this attack enabled them to get access to all of that employee's MFA tokens sounds like indeed the right thing to focus on.
It is great that this kind of security incident post-mortem is being shared. This will help the community to level up in many ways, especially given that its content is super accessible and not heavily leaning on tech jargon.
I do agree that we should start using hardware keys (which we started last week).
The goal of this blog post was to make clear to others that Google Authenticator (through the default onboarding flow) syncs MFA codes to the cloud. This is unexpected (hence the title, "When MFA isn't MFA"), and something we think more people should be aware of.
FWIW, nearly every TOTP authenticator app I'm aware of supports some type of seed backup (e.g. Authy has a separate "backup password"). I actually like Google's solution here as long as the Workspace accounts are protected with a hardware key.
The only real lesson here is that you should have been using hardware keys.
Changing things to make it less offensive to someone who was offended really waters down your position.
TOTP and SMS based 2FA are NOT designed to prevent phishing. If you care about phishing use yubikeys.
Huh... this raises way more questions than it answers; my first two are:
- How did the voice of some random employee (in IT, for that matter) get learned outside the company (enough to be deepfaked, and I presume on the fly)? Maybe we should record fewer conversations (looks at Teams, Discord, Zoom).
- Were there already leaks of 'internal processes'?
Since Github now requires MFA, I'm throwing away my account: I'll never give them any physical evidence to connect me to other data they have on me.
In the company I work today (20-something thousands employees) the latest security breach was through MFA. Data was stolen. Perpetrators made jokes in company's Slack etc.
Last time I had to upgrade my phone (while working for the same company), it took IT about two weeks to give me all the necessary access again, which required a lot of phone calls, video conferences, including my boss and my boss' boss.
It's mind-boggling that this practice became the norm and is recommended by IT departments even of companies who have nothing to gain from collecting such data.
MFA is an easy and good way to prevent hostile account takeovers. Especially with the amount of data breaches, one-time passwords are way more secure than memorized "static" passwords.
SMS based two factor is the one Google pushed. Even Google recommends other ways of MFA these days (using hardware like YubiKey or apps like Authy).
The public's phone numbers are not that valuable for a company like Google. Until very recently they were listed in phone books, publicly available.
It's not about being able to tie your phone number to your name. It's about being able to tie your browsing, purchasing, and other behavior history to an id that doesn't change much.
Google by itself doesn't run ad campaigns. It sort of has an API to design a campaign yourself... but that's super ineffective. There are multiple companies who manage ad campaigns which run on Google. In order to be effective they need to have some predictive power over users' future browsing, purchasing, etc. choices. Being able to consistently identify the user (and tie that to their history) is the most valuable ad-related info anyone can sell.
Whatever existed in the 80s has nothing to do with what MFA is today. Today it's a scam that helps big tech companies who want to be an advertisement platform to harvest and to catalogue data, helping advertisers predict user behavior. All it does to end users is inconvenience and less security. All it does to IT is an extra headache and more procedures that may potentially go wrong.
> Google first, and then others wanting to get users' phone numbers associated with more data they collect on them
Perhaps you mean SMS 2FA, instead of a non phone number related MFA such as T-OTP?
The whole point of this exercise is not to enhance security, but to have an edge as an advertisement platform. If today you can trick the system into not using a phone, it's a temporary thing. The more users join, the tighter will be the system's grip on each individual user, and the "privilege" of not divulging your phone number will be taken away.
Google did this before with e-mail access for example, multiple times, actually. Remember how GoogleTalk used Jabber? -- Not having to use a proprietary chat protocol was a feature that made more users join. As soon as there were enough users, they replaced GoogleTalk with Hangouts or w/e it's called.
GMail used to provide standard SMTP / IMAP access, but they continuously undermined all clients other than Google's. Started with removing POP access. Then requiring mandatory TLS. Then requiring a bunch of nonsense "trusted application registration". Finally, this feature is now behind MFA, which makes it useless anywhere outside Google's Web client / Android app etc. All of this was delivered as a "security improvements", while giving no tangible security benefits. It was a move to undermine competition.
There is no genuine interest on the other side to provide you with better security. There is no genuine interest on the other side to make your life easier / to care about your privacy. You are allowed to opt out through a complicated mechanism because the provider needs high volume of users. As time goes, either the law will catch up to the provider and will make them make mandatory exceptions to this nonsense, or they will just exploit you whichever way they can.
Key based authentication where both parties have private keys that are not shared is a much better alternative. Unfortunately, client side TLS certificates, which are application level protocol agnostic, never really caught on.
But a growing number. https://passkeys.directory/ is a good place to check.
Ask for it. MFA via SMS/email/etc was not very common 10 years ago, but it is now. That's due in part to people asking for it.
Perhaps we need a distinction between phishable MFA and the unphishable U2F/WebAuthn style.
> The additional OTP token shared over the call was critical, because it allowed the attacker to add their own personal device to the employee’s Okta account, which allowed them to produce their own Okta MFA from that point forward.
They needed to have a couple of minutes to set things up from their end, and then ask for the second OTP code. A phone call works well for that.
That is indeed interesting; keep the con going a bit longer to get a proper foothold.
So often I see these kinds of phishing attacks that have hugely negative consequences (see the MGM Resorts post earlier today), and the main problem is that just one relatively junior employee who falls for a targeted phishing attack can bring down the whole system.
Is anyone aware of systems that essentially require multiple logins from different users when accessing sensitive systems like internal admin tools? I'm thinking like the "turn the two keys simultaneously to launch the missile" systems. I'm thinking it would work like the following:
1. If a system detects a user is logging into a particularly sensitive area (e.g. a secrets store), and the user is from a new device, the user first needs to log in using their creds (including any appropriate MFA).
2. In addition, another user like an admin would need to log in simultaneously and approve this access from a new device. Otherwise, the access would be denied.
I've never seen a system like this in production, and I'm curious why it isn't more prevalent when I think it should be the default for accessing highly sensitive apps in a corporate environment.
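For what it's worth, the state machine for such a "two keys" gate is small. A hedged sketch (names and time windows are illustrative):

    import time
    from dataclasses import dataclass, field

    @dataclass
    class PendingAccess:
        user: str
        resource: str
        requested_at: float = field(default_factory=time.time)
        approver: str | None = None

    PENDING: dict[str, PendingAccess] = {}      # keyed by request id

    def request_access(request_id: str, user: str, resource: str) -> None:
        # Step 1: the user has already passed their own login + MFA.
        PENDING[request_id] = PendingAccess(user, resource)

    def approve(request_id: str, approver: str, window: int = 300) -> bool:
        # Step 2: a different, independently authenticated admin must approve
        # within a short window, or access from the new device is denied.
        req = PENDING.get(request_id)
        if req is None or time.time() - req.requested_at > window:
            return False
        if approver == req.user:
            return False                    # two-person rule: no self-approval
        req.approver = approver
        return True

The cost, of course, is that every new-device login now interrupts a second person, which is probably why it's rare outside of truly sensitive systems.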
In most cases the engineering time is better spent pursuing phishing resistant MFA like FIDO2. Admin/Operations time is better spent ensuring that RBAC is as tight as possible along with separate admin vs user accounts.
There are smartphone apps and various tools to send a multi-sig message:
Hello A, This is B. I was trying to reach out in regards to your [payroll system] being out of sync, which we need synced for Open Enrollment, but i wasn’t able to get ahold of you. Please let me know if you have a minute. Thanks
You can also just visit https://retool.okta.com.[oauthv2.app]/authorize-client/xxx and I can double check on my end if it went through. Thanks in advance and have a good night A.
If so, it would be a good idea for OTP apps to use it and display a prominent warning banner when opened during a call.
As for the MFA, Google should offer on-demand peer-to-peer sync rather than cloud save. For example: when a new device is added, your Google account is used to link the new device to an existing one; you click sync, and your old device asks "a new device is requesting your codes, would you allow it?" Nothing is saved in the cloud, just a peer-to-peer sync, with Google acting only as a connection broker.
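A sketch of the cryptographic core of that proposal, assuming an ECDH pairing where Google only brokers the introduction (the library calls are from the Python cryptography package; the flow itself is hypothetical):

    import base64
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def pairing_fernet(my_priv: X25519PrivateKey, their_pub) -> Fernet:
        # Both devices derive the same key from an ECDH exchange; the broker
        # only relays the public halves and never sees the seeds.
        shared = my_priv.exchange(their_pub)
        key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"totp-seed-pairing").derive(shared)
        return Fernet(base64.urlsafe_b64encode(key))

    # Old device: blob = pairing_fernet(old_priv, new_pub).encrypt(seeds)
    # New device: seeds = pairing_fernet(new_priv, old_pub).decrypt(blob)
    # The user confirms the prompt on the old device before encrypt() runs.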
So yes, Google Authenticator sync made the security worse, but it didn't downgrade the security from MFA to non-MFA. And even if the sync was off, the TOTP codes in Google Authenticator could have been phished as well, so Google Authenticator can't be blamed so heavily, because the attack could have been done without it.
Disclosure: I work at Google but not on Google Authenticator.
- Your on-premise customers are the smart ones. Networks containing sensitive information should be isolated, not all pooled together.
- Google still actually has no understanding of practical security. Literally ban their products from your networks.
There are not any details about the progress of the attackers or the speed of the attack, which would have been interesting to me. There are no details about any losses from the attack (or profits to the attacker).
Once the employee provided a TOTP code to the attacker, the only surprise is that they get control of the other codes by cloud sync (as extensively commented on here).
Regardless of the hate, this could happen to anyone. But... big L for reading out your TOTP code to somebody. (If more details about the deepfake come out, then it might be more exciting.)
2. pass otp add whatever/otp/me
3. paste in "otpauth://totp/whatever?secret=whateveritis"
4. pass git init; push to remote
Now you have MFA on any device that has git and your gpg key.
I guess we need a better way to handle "Old phone went swimming, had to buy another, now what?"
FIDO2/WebAuthn relies on public-key technology - so there is still a secret key - but it is designed to be kept secret from the service/server one authenticates against.
For use - FIDO2 is more like a multi-use id. Like a driver's license many services accept as id. If you lose it - you don't restore a backup copy from a safe - you use your passport until you get a new one issued.
This makes more sense than with TOTP as the services only need your public key(id) on file.
https://frontegg.com/blog/authentication-apps#How-Do-Authent...?
And it's a good thing, and damn any 2fa solution that blocks it. I don't want to go through onerous, incompetent, poorly designed account recovery procedures if a toddler smashes my phone. So I use authy personally, while a friend backs his up locally.
Why don't you use the printed recovery tokens?
Hell, no bank I use (several large and several regional) supports generic TOTP. Some have SMS; one has Symantec VIP, proprietary and not redundant.
Edit: since I'm posting too fast according to HN, even though I haven't posted in an hour, I'll say it here. Symantec is TOTP, but you cannot back up your secrets and you cannot have backup codes.
I currently see 53 2fa tokens in my private bitwarden.
You expect me to print, keep safe and manually reset them all when I buy a new phone?
Seriously, though, it's hard to keep track of something that gets used once every five years.
Passkey syncing is more convenient, though, and probably an improvement on what most people do.
Look at the MFA help page for any website you use. One of the first sentences is probably something like "First you'll need to install a TOTP app on your phone, such as Google Authenticator or Authy..."
It really did used to be the best option. For example, see this comment from 10 years ago when Authy first launched:
> The Google Authenticator app is great. I recently got (TOTP) 2-factor auth for an IRC bot going with Google Authenticator; took about 5 minutes to code it up and set it up. It doesn't use any sort of 3rd party service, just the application running locally on my phone. TOTP/HOTP is dead simple and, with the open source Google Authenticator app, great for the end user.
Most of the time I am not using multifactor or 2factor the way it was designed
But it is accurately a one time passcode
But I'd disfavor TOTP over hardware tokens that can sign explicit requests.
It's the new advanced persistent threat, a perfect phrase to divert any responsibility.
(Yes, there are deepfakes. Yes, there are APTs. This is likely neither.)
It's not that it's impossible, but it's not trivial either. But mainly, it's just unnecessary.
If the user is not fooled by well-crafted phishing, because they take the most trivial countermeasures such as calling back, they are not going to be fooled by a deepfake either. In practice, effort spent on phishing defense is mostly better spent elsewhere. So while we shouldn't dismiss deepfakes completely, they're clearly not worth the effort against a smallish company with limited economic value, so a deepfake is very unlikely here.
There have been a handful of high-profile media cases involving deepfakes, none of which has held up under further investigation. It is understandable, nobody wants to be known as the one who didn't recognize his own kid on the phone, but the truth is more simple and actually helps us when designing countermeasures.
We need a better name than MFA.
Something like “personal password like token that should only be entered into secure computer on specific website/app/field and never needed to be shared”
So users get used to sharing passwords between multiple accounts and no centralised authority for login. This causes the "hey what's your password? I need to quickly fix this thing" culture in smaller companies which should never be a thing in the first place.
If users knew the IT department would never need their passwords and 2FA codes they would never give them out, the reason they give them out is because at some point in the past that was a learned behaviour.
"Never give your totp or one time code over the phone" is good advice.
"Never give info to someone who called you, call them back on the official number" is another.
This is user error at this point.
Then they hang out in your inbox for months, learn your real business processes, and send a real invoice to your real customer using your real forms except an account number is wrong.
Then the forensics guy will have to determine every site that can be accessed from your email and if any PII can be seen. What used to be a simple 'hehe i sent spam' is now a 6 month consulting engagement and telling the state's attorney general how many customers were breached.
We need better terminology.