I'd love to see companies allow for opt-in additional security measures, like banks or telcos calling me and requiring a verbal password to confirm things; that level of security seems to only be available to VIPs.
There's no opt-out for it, and no enforcement of the permission requirement. Their support had me snail mail a letter to some PO box. I never got a response.
And now they're going to start outright selling their customer activity after forcibly un-opt-outing* everyone who had previously opted out in their privacy settings.
*un-opt-outing -- ??? I don't know what to call this. It's not 'opting-in' since nobody has a choice.. 'resetting user selection without notification or consent' seems too mild and wordy.
My last experience with them caused me to switch away permanently: I got SIM jacked, with real money stolen from me. It happened exactly like in this article[0].
Another incident happened where my online account was merged with someone else's in California (I'm in Texas). Our billing information was merged, with the other party paying for the whole account. I couldn't make changes online; only after sitting on hold and explaining what happened was I able to get the whole situation unfucked, but there's no telling what amount of my data still lives in that other account.
Come to think of it, my first experience with T-Mobile was as a Radio Shack employee, circa 2010. When a customer came to the store to pay their T-Mobile bill with cash, if I took too long to enter all the data into their awful online portal the money would sometimes go to a completely different person's account. Many hours were spent on the phone with the local and regional rep resolving multiple instances of this happening.
[0]: https://www.vice.com/en/article/3kx4ej/sim-jacking-mobile-ph...
I'd call it "forcing consent", all irony intended.
Regulators (particularly in Europe) soon put a stop to that to promote competition. While this was good, the majority of regulators failed to put in a consumer protection mechanism to stop identity theft through account stealing.
The article describes a more insidious attack: the mobile account is still active (hiding the existence of the attack from the user), but the message destination has been rerouted, making all the linked accounts that use SMS as their 2FA also vulnerable.
In other countries, the two channels are more closely coupled (but SIM swap and/or number porting attacks are still possible, depending on the provider's security protocols).
I suspect it's more due to peculiarities of the United States of America, such as a disinclination to regulate anything, trusting that somehow this time the most profitable course for corporations will also work out OK for its citizens even if it didn't on previous occasions.
This report lists a long chain of buck-passing companies that have exploited an obvious defect and then escaped any responsibility for the consequences. Notice how the only work they made the hacker do was legal paperwork to cover their backsides, no actual technical countermeasures. Because nobody at these companies cared if it was used this way, they only wanted to make sure if they got sued they would be able to blame somebody else and get away with it.
Number porting is trickier: it requires the victim's name and account number (or DOB in the case of a prepaid account), and the victim receives an SMS in advance informing them their number is being ported.
I would disagree. Obviously, there are better approaches, but consider basic password auth on desktop, which is easily exploitable en masse by botnets. If you add 2FA via SMS, an attacker would need to exploit both devices (or attack SS7, transfer the number, or pull some other trick) and match info from these devices. That can be done in a targeted attack, but it's much harder in en masse botnet attacks.
“Sorry you’re locked out forever, good luck lol”
Is not a response you can give to them.
Pick your poison. Or even better, implement both and let your users pick.
SMS 2FA is, at best, just adding a little hassle for the hacker. If it's not a targeted attack, there's a chance that the extra effort means they'll move on, but that won't stop any remotely determined hacker.
And the companies that know better should be fined and sanctioned, particularly the ones that demand SMS-based OTP so they can also add your phone number to their social graph.
The numerous emails I get when I log in from a new device serve me pretty well, all things considered
If you have multiple accounts, services, etc, then backing up your 2FA codes, or registering two devices/phones at the same time should be on your radar.
Feels like the industry needs to push for a dedicated, universal, probably physical, tool for 2FA.
This is not the case in my experience. Many apps that once used Authenticator-based TOTP now use app-based push alerts (Steam Authenticator, Blizzard Authenticator, Google->GMail App, etc.), but I haven't noticed a trend toward actual SMS.
Are there major orgs that switched to SMS 2FA and disabled authenticator apps? If so, I'd be interested in learning why, also.
This is also why they won't let you set up a good 2-factor authentication system (like a YubiKey): they'll force you to first set up SMS 2-factor. It's very important to remember to delete that SMS second factor after setting up your good second factor, or social engineers will use it to steal your account.
Simplicity: nearly everybody understands how texting works.
Weirdly it only works for a minority of services, I expect many use Twilio to send their auth texts and Twilio blocks sending these to their own numbers?
It's a really backwards and confusing system, I agree.
Unfortunately, big parts of the industry seem to be headed the other direction.
The threat model is beyond 2FA, imagine being able to impersonate anyone over text.
Social engineering gone to the next level. This isn't about just taking over accounts, it is about taking over a huge chunk of someone's social existence.
It is not stable in the least for millions of Americans, especially those who live in poverty (I'm not sure about the rest of the world). Phones are lost or stolen, phone numbers changed because of being harassed by debt collectors, ex-partners, current partners, etc. And if it isn't stable, it isn't convenient.
- First number.
- Moved to a different city for Uni. Switched number so that people didn't have to pay long-distance to call.
- Moved back.
- Moved to Europe for a job.
- Moved back.
I would never consider an identifier that is (loosely) tied to your location stable.
I had a miserable time trying to get into Backblaze recently, with even the ability it offered to switch sms providers failing.
The list of valid keys they give you on setup bailed me out eventually, but it took me a while to remember them.
Uh. You're supposed to memorise them? I printed them out and stuck them in a safe place.
It’s better to assume that until phone numbers can be locked and unlocked the way domains can, with a random authorization code only accessible by real offline 2FA (though not all domain providers require it), and with the option of completely encrypted end-to-end texting (RCS?), well, then SMS won’t really be all that secure.
The process of changing the routing is pretty simple. It's a matter of being a trusted actor and having the ability to submit changes in routing for SMS to a central provider that maintains and propagates this info.
Heck, even with "port lock" enabled on a Google Voice number, that is the barest of security against an attacker who has any kind of access better than "retail store employee." Working for a telco with access to our back-end port system, access several other people had, I could forcibly acquire a number by simply checking a box that said I had verified a written LOA even if the losing carrier responded with code 6P ("port-out protection enabled").
So, yes, you're likely sitting in a security-by-obscurity, or at least security-by-slightly-more-difficult-than-someone-else, situation.
This is false.
"Mobile" numbers - numbers that are classified as belonging to an actual mobile carrier - are indeed different than non-mobile numbers.
For instance, you cannot send SMS from a short-code to a non-mobile number. Which means, your twilio number (which is not a mobile number) cannot receive 2FA (or any other SMS) from the 5-digit "short code" numbers that gmail (and most banks, etc.) use for new account verification, etc.
Non mobile numbers are, in many ways, second class citizens in the mobile-operator ecosystem.
A useful strategy to help against this in any case is to use a different email address for every online service. Hackers generally can't initiate an account password reset if they don't know the account.
Also if you use a different phone number for account security than your public one then it's a lot harder for them to know what SMS to intercept. Security by obscurity sucks but in this world it may be your only practical choice.
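The per-service identifier strategy described above can be automated trivially with plus-addressing (a minimal sketch; the base address is a made-up placeholder, and note that plus-addressing is supported by Gmail and many providers but some sign-up forms reject the `+` character):

```python
def alias_for(service: str, base: str = "me@example.com") -> str:
    """Derive a unique per-service email alias via plus-addressing.

    A leaked alias reveals which service leaked it, and a hacker who
    only knows your base address can't guess the account email used
    elsewhere.
    """
    local, _, domain = base.partition("@")
    return f"{local}+{service}@{domain}"

# Example: a distinct login identifier per service.
print(alias_for("examplebank"))  # me+examplebank@example.com
```

For stronger separation (since stripping the `+suffix` is easy), a catch-all domain or a dedicated aliasing service gives truly unrelated addresses.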
You absolutely can port a Google Voice number. End-user subscriber numbers must be portable per FCC rules. Google, operating services provided by Bandwidth.com (mentioned in the article), does enable port-protection by default but this is easy to bypass by an operator who, like in the article, checks the box that says something like "I have a valid written LOA, complete the port as an exception." This has legitimate uses (some losing providers are very ruthless about not following the rules and letting customers move numbers) but unscrupulous or lazy operators will check the box and move on.
There's got to be a better system.
If they have a nice phone (modern iPhone or Android phone that is able to recognise who you are by fingerprint or facial recognition ought to be enough) that can do WebAuthn too, the actual recognition remains local to your device (so you're not giving some mysterious entity your face or fingerprint).
I'm assuming since they're "nontechnical" that you mean as a user, the user experience for WebAuthn is trivial, one touch. You do this to enroll the Yubikey, and then you do it whenever you need to prove who you are to the same site. It's entirely phishing proof, the credentials can't be stolen, you can keep one on your keyring or just leave it plugged into a personal PC all the time, it has excellent privacy properties, the biggest problem is too few sites do WebAuthn but Google and Facebook do, so that's a good start for non-technical people.
Which brings me to the other side, if your non-technical friends are wondering what their organisation should mandate, then again, WebAuthn, but this time I admit it's somewhat complicated. Somebody is going to need to at least research what product suits the userbase, and check boxes in the software they use, and at worst they need to do a bunch of software development. It's not crazy hard, but it's a bit trickier than yet another stupid password rule requirement. However unlike requiring passwords to contain at least two state birds and the name of an African country requiring WebAuthn will actually make you safer.
Most important accounts / banks / etc. now offer this option.
The only thing is, though, make sure to keep backups of the codes you use to initialize the authenticator app, because for some services there is no recovery if you lose your phone or don't have backups.
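Backing up the initialization code works because standard TOTP (RFC 6238) derives every one-time code deterministically from that shared seed plus the current time; anyone holding the seed can regenerate the codes on a new device. A minimal sketch using only the standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # low nibble picks the slice
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(base32_seed: str, for_time=None, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP keyed on the current 30-second time window.

    `base32_seed` is the secret your authenticator app stored at setup
    (the string behind the QR code) -- the thing worth backing up.
    """
    seed = base32_seed.upper()
    key = base64.b32decode(seed + "=" * (-len(seed) % 8))
    t = int((time.time() if for_time is None else for_time) // step)
    return hotp(key, t)
```

This is also why registering two devices at once works: both scan the same seed and independently produce identical codes.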
> switch the 2FA to an authenticator on-your-phone code generator, which someone cannot hack easily.
I remember looking a few months ago and they only offered SMS 2FA.
Thanks
Everybody gets phished. Much easier than sim swaps.
Basically, avoid using your carrier provided phone number for anything related to an account.
I wonder how high-profile politicians and celebrities deal with security issues like this? If this is really such an easy attack to pull off, what's stopping someone from shilling cryptocurrencies on celebrity social media accounts (again)?
Lucky's company has this product that can monitor for the attack, but it won't prevent it: https://okeymonitor.com/
I note, however, that this attack seems to only be possible on VOIP routable numbers, and it’s my experience that banks, etc, will not allow you to use VOIP routable numbers for 2FA.
That's definitely not the case for a naive implementation of SMS 2FA as would be done by likely any dev using Twilio, etc.
Also, don’t forget that NIST deprecated SMS 2FA over 5 years ago. Here’s their reasoning: https://www.nist.gov/blogs/cybersecurity-insights/questionsa...
I'm not sure what banks use, but I have had UK VOIP numbers flagged before when trying to register them for 2FA, so there's likely API providers for other countries too.
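The flagging described above typically works off a carrier-lookup service that classifies a number as mobile, landline, or VOIP. A minimal sketch of the decision step, assuming the caller has already fetched a lookup response shaped like the `{"carrier": {"type": ...}}` payload such services return (the exact field names here are an assumption, not a specific vendor's schema):

```python
def allow_for_sms_2fa(lookup_response: dict) -> bool:
    """Decide whether a number may be enrolled for SMS 2FA.

    Rejects anything not classified as a real mobile number, which is
    how sites end up refusing VOIP-routable numbers.
    """
    carrier = lookup_response.get("carrier") or {}
    return carrier.get("type") == "mobile"

# A VOIP number gets refused; a mobile one passes.
print(allow_for_sms_2fa({"carrier": {"type": "voip"}}))    # False
print(allow_for_sms_2fa({"carrier": {"type": "mobile"}}))  # True
```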
And it's not just about 2FA, most of humanity expects that if someone else texts them, those texts will go to their phone and only their phone unless they've given explicit verifiable consent.
I mean, in this case all the hacker did was fill out a form and say pretty please. I hope phone companies that allow this get sued.
This is a growing trend in consumer services, and it's a privacy nightmare.
Imagine if they demanded your SSN to sign up? A phone number is no different and no less sensitive a unique identifier; perhaps even more so these days.
There are widespread reports of delivery businesses selling their phone number databases (with associated credit card suffixes, delivery addresses, order history, etc.) to large advertising companies for data mining.
Providing your direct cell number to an app is basically like providing your home address and a bunch of other sensitive data. Don't do it, or make a burner gmail account to get a disposable Google Voice number for each account that you must have that demands a phone number. Then, that number isn't reused and an attacker that obtains your mobile number can't attack your login method for other apps.
Reusing phone numbers is about as bad as reusing passwords.
I have extremely bad news for you. US Social Security Numbers are not in fact unique, and the fact they're "sensitive" is a terrible joke because it's pretty easy to discover the SSN for an individual based on public information, especially older people because SSNs weren't even randomised at issuance until relatively recently.
Any system that depends on keeping public facts secret is horribly broken, yes that also includes "verifying" credit cards based on a bunch of digits that are written right on the card itself.
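The card-number point can be made concrete: the number even carries its own publicly-documented checksum (the Luhn algorithm), so anyone can validate or enumerate plausible numbers. A short sketch:

```python
def luhn_valid(number: str) -> bool:
    """Check a card number against the Luhn checksum.

    The checksum is a typo detector, not a secret -- which is why a
    string of digits printed on the card can never serve as proof of
    ownership.
    """
    digits = [int(ch) for ch in number if ch.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# The well-known Visa test number passes; a one-digit typo fails.
print(luhn_valid("4111111111111111"))  # True
print(luhn_valid("4111111111111112"))  # False
```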
The goal is for the service to have a unique identifier, and phone numbers happen to be a really good one to prevent spam also since it outsources verification of human entity to the phone companies.
And there are obvious trade-offs here, if we make number portability harder, it means you're somewhat hostage to your phone provider.
The parent comment addressed this point. This is not just about 2FA. SMS users expect their communication are private, except (debatably) by the courts with a warrant.
Not sure which is why I'm asking.
So, no, not much of an expectation of privacy - at least, there shouldn't have been.
> "Horsman added that, effective immediately, Sakari has added a security feature where a number will receive an automated call that requires the user to send a security code back to the company, to confirm they do have consent to transfer that number. As part of another test, Lucky225 did try to reroute texts for the same number with consent using a different service called *Beetexting*; the site already required a similar automated phone call to confirm the user's consent. This was in part "to avoid fraud," the automated verification call said when Motherboard received the call. Beetexting did not respond to a request for comment."
But it seems that the entire system is globally infested with security holes. Is this applicable worldwide, or just limited to one country?
Years ago I asked my carrier to not port or forward without me being physically present at a store. Maybe I should test them out to see if that’s still the case.
Regardless, I don’t use SMS MFA for anything important and even when I do, I have a 32 character password to go along with it.
Um, what?!
This is bad news because following the law isn't a top priority when trying to hack someone.
There are still grave vulnerabilities in mobile provider SMS (2FA or otherwise) due to how easy it is for a dedicated attacker to SIM swap, but this particular claim is completely misleading.
It's already too high up given it's a blatantly baseless accusation. I'm confused why you think it's more credible than the article when it provides zero evidence.
The attacker instead used the cell number of the author of the article, and supplied a fraudulent letter authorizing the re-routing of text messages through the bulk SMS service.
The attacker works for a service that purports to verify the routing and carrier settings for a given mobile phone number; I expect that their solution periodically checks the results and issues an alert if the results differ from a known valid value.
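That kind of monitoring reduces to polling a lookup and diffing it against a known-good baseline. A minimal sketch, with the lookup and alert left as caller-supplied callables since the actual APIs involved aren't described in the article:

```python
import time

def routing_drift(current: dict, baseline: dict) -> list:
    """Return the fields whose current values differ from the baseline."""
    return [k for k in baseline if current.get(k) != baseline[k]]

def monitor(fetch_routing, baseline: dict, alert, interval: int = 3600):
    """Poll the routing lookup and alert on any drift from the baseline.

    `fetch_routing` returns the current routing info as a dict;
    `alert` is called with the list of changed fields. Both are
    hypothetical hooks standing in for a real lookup API and notifier.
    """
    while True:
        changed = routing_drift(fetch_routing(), baseline)
        if changed:
            alert(changed)
        time.sleep(interval)
```

A rerouted SMS destination would then show up as, e.g., a changed `sms_route` field on the next poll, even though calls and data still work normally.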