I'd love to see companies offer opt-in additional security measures, like banks or telcos calling me and using a verbal password to confirm things. That level of security seems to only be available to VIPs.
Sakari (and the likes) will complain that government regulation is taking food off hard workers' tables, and that the government has no right to intervene in the free market!!
In parallel they will 'lobby' (or as we call it in Europe, 'bribe') the key politicians and ask them to either a) water down that bill to the point that it is rendered useless, or b) cancel it altogether, and the stock market will go up!!
There's no opt-out for it, and no enforcement of the permission requirement. Their support had me snail mail a letter to some PO box. I never got a response.
And now they're going to start outright selling their customers' activity data, after forcibly un-opt-outing* everyone who had opted out in their privacy settings previously.
*un-opt-outing -- ??? I don't know what to call this. It's not 'opting in', since nobody has a choice. 'Resetting user selection without notification or consent' seems too mild and wordy.
My last experience with them caused me to switch away permanently: I got SIM-jacked, with real money stolen from me. It happened exactly like in this article [0].
Another incident happened where my online account was merged with someone else's in California (I'm in Texas). Our billing information was merged, with the other party paying for the whole account. I couldn't make changes online; only after sitting on hold and explaining what happened was I able to get the whole situation unfucked, but there's no telling what amount of my data still lives in that other account.
Come to think of it, my first experience with T-Mobile was as a Radio Shack employee, circa 2010. When a customer came to the store to pay their T-Mobile bill with cash, if I took too long to enter all the data into their awful online portal the money would sometimes go to a completely different person's account. Many hours were spent on the phone with the local and regional rep resolving multiple instances of this happening.
[0]: https://www.vice.com/en/article/3kx4ej/sim-jacking-mobile-ph...
Some examples:
- Vice paid a bounty hunter $300 to track a phone number [1]
- Police have paid these services to avoid warrant requirements, and corrections facilities use aggregator services to track numbers that inmates have calls with [2][3]
Apparently carriers claim to have stopped after getting fined $200m last year [4].
It was typically done through aggregators, i.e. services that have similar access to multiple carriers and in turn expose a single endpoint to their own customers.
The aggregators pass on responsibility for obtaining consent to their end customers. Again, with no enforcement or ability for a target to opt out.
The only protection is an authentication requirement. But that just confirms you have a valid credential. Which you get either as an aggregator (to tmobile/other carrier directly), or as the client to an aggregator (to the aggregator's API to query multiple carriers).
Though even that authentication requirement has failed in the past, like when LocationSmart had a public demo page exploited. Inspection of the requests the page sent made it trivial to replay them with any phone number, skipping any consent checking. They just had to add "privacyConsent":"True" to the payload [5].
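For illustration, here's a hedged reconstruction of the kind of payload that replay involved. Only the `privacyConsent` field comes from the writeup [5]; the endpoint and other field names here are made up:

```python
import json

# Hypothetical reconstruction of the replayed demo-page request.
# Only "privacyConsent" is documented in [5]; the endpoint and the
# other field name are illustrative guesses.
DEMO_ENDPOINT = "https://example.invalid/locationsmart/demo/lookup"

def build_lookup_payload(phone_number: str) -> str:
    """Build the JSON body an attacker could replay for any number."""
    payload = {
        "phoneNumber": phone_number,   # any target number, never checked
        "privacyConsent": "True",      # the flag that skipped consent
    }
    return json.dumps(payload)
```

The point being: the consent "check" was a client-supplied field, so any replayed request could simply assert consent.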
But yeah, it sounds like that is less of a worry now.
Instead, T-Mobile is selling location data, plus basically whatever usage data they collect from your phone with their root-privileged app, to advertising networks.
Although their privacy page has this statement [6]:
> We do not use or share Customer Proprietary Network Information (“CPNI”) or precise location data for advertising unless you give us your express permission.
The 'express permission' here is deceptive: users are opted in by default, so it's hardly 'express'.
Further, they recently mass reset user preferences to clear the opt-out setting for users who previously opted out. Without consent.
So basically everyone is 'consenting' unless they very recently opted-out. Though I have little faith they won't change this from underneath their users again in the future. No doubt in the fine print of one of those 'annual privacy notices' or some such.
Still, if the wording and definition of 'express consent' is questionable above, they word it more explicitly in the more detailed privacy policy [7]:
> We and others may also use information about your usage, device, location, and demographics to serve you personalized ads, measure performance of those ads, and conduct analytics and reporting.
Their privacy page is deceptive about how anonymized their collection is [6]:
> When we share this information with third parties, it is not tied to your name or information that directly identifies you. Instead, we tie it to your mobile advertising identifier or another unique identifier.
Tying it to a mobile advertising ID, or any other unique identifier, is not de-identification. It is trivial to tie this to an email, or to a larger profile generated by an advertising network, and combine it with, say, your desktop web browser, or any account you log in to that is associated with your email.
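A toy illustration of that point: once two datasets share the "anonymous" identifier, joining them is a one-liner. All the data below is invented:

```python
# Two made-up datasets that both key on the same advertising ID.
carrier_data = {
    "ad-id-123": {"apps_used": ["banking", "dating"], "home_zip": "78701"},
}
ad_network = {
    "ad-id-123": {"email": "alice@example.com", "desktop_cookie": "c0ffee"},
}

# Join on the shared ID: the "de-identified" record is now tied to an email.
profiles = {
    ad_id: {**carrier_data[ad_id], **ad_network[ad_id]}
    for ad_id in carrier_data.keys() & ad_network.keys()
}
```

Any party holding both sides of such a join can re-identify the "anonymous" record instantly.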
It's despicable. But sorry, I'll stop ranting now.
[1] https://www.vice.com/en/article/nepxbz/i-gave-a-bounty-hunte...
[2] https://www.nytimes.com/2018/05/10/technology/cellphone-trac...
[3] https://www.zdnet.com/article/us-cell-carriers-selling-acces...
[4] https://www.nationalheraldindia.com/international/over-dolla...
[5] https://www.robertxiao.ca/hacking/locationsmart/
[6] https://www.t-mobile.com/privacy-center/our-practices/privac...
[7] https://www.t-mobile.com/privacy-center/education-and-resour...
I'd call it "forcing consent", all irony intended.
It is almost always better for the government to create "incentives" than "requirements" anyway. Instead of "requiring" a text before a transfer, it would be better to expose the companies that facilitate a transfer without the customer's authorization to large liabilities. That lets them design a prevention mechanism that is probably better.
I may.
Regulators (particularly in Europe) soon put a stop to that to promote competition. While this was good, the majority of regulators failed to put in a consumer protection mechanism to stop identity theft through account stealing.
The article describes a more insidious attack: the mobile account is still active (hiding the existence of the attack from the user), but the message destination has been rerouted, leaving all the linked accounts that use SMS for 2FA vulnerable as well.
In other countries, the two channels are more closely coupled (but SIM swap and/or number porting attacks are still possible, depending on the provider's security protocols).
I suspect it's more due to peculiarities of the United States of America, such as a disinclination to regulate anything, trusting that somehow this time the most profitable course for corporations will also work out OK for its citizens, even if it didn't on previous occasions.
This report lists a long chain of buck-passing companies that have exploited an obvious defect and then escaped any responsibility for the consequences. Notice how the only work they made the hacker do was legal paperwork to cover their backsides, no actual technical countermeasures. Because nobody at these companies cared if it was used this way, they only wanted to make sure if they got sued they would be able to blame somebody else and get away with it.
The regulation seeks to promote competition and consumer choice. An onerous verification process would undermine that goal. Security is not a consideration.
This is sort of the point with regulation. The regulator makes the rules it thinks are best according to the considerations it thinks are important at the time. If someone later shows up with different considerations, they can go to hell.
A disinclination to regulate anything is a good idea in a society that generally punishes bad behavior after the behavior has been perpetrated. I would have doubts for instance about government regulating the process for sending and receiving SMS - would you want every new software or protocol to have to go through some kind of bureaucratic review before it can be used?
Number porting is trickier: it requires the victim's name and account number (or DOB in the case of a prepaid account), and the victim receives an SMS in advance informing them their number is being ported.
I would disagree. Obviously there are better approaches, but consider basic password auth on desktop, which is easily exploitable en masse by botnets. If you add 2FA via SMS, an attacker needs to compromise both devices (or attack SS7, transfer the number, or use some other trick) and match the info from both. That can be done in a targeted attack, but it's much harder in mass botnet attacks.
“Sorry you’re locked out forever, good luck lol”
Is not a response you can give to them.
The vast majority of users don't bother with such complexities.
SMS is the easiest minimum entry barrier to 2FA. It is better than having just passwords.
This is new to me. Most websites that I have seen offer only one TOTP secret, but it can be scanned into any number of TOTP apps.
Luckily I was still signed in on a computer
These would probably be smaller businesses that earn their revenue directly from paying customers (and would lose if you give up and cancel your card/block their transactions)—I can’t imagine this ever working for ad-driven whales like Google or Facebook, or large corporations to whom you’re small fish and need them more than they need you.
Also, it’ll be interesting to see how 2FA reset options evolve in near future. A 24-hour slot to reset only 2FA, for example, looks like a valid attack vector. Also, I suspect deepfaking videocalls won’t be out of reach of a dedicated but average attacker for long.
Pick your poison. Or even better, implement both and let your users pick.
SMS 2FA is, at best, just adding a little hassle for the hacker. If it's not a targeted attack, there's a chance that the extra effort means they'll move on, but that won't stop any remotely determined hacker.
and the companies that know better should be fined and sanctioned, particular the ones that are demanding SMS based OTP so they can also add your phone number to their social graph
The problem is purely with how some companies are applying SMS as an auth factor. Where SMS is being used as a recovery factor, it should not allow immediate recovery. Instead the user should be notified via other channels (email, phone notifications) about the recovery attempt, be given the opportunity to reject it, and the recovery should only succeed if it is not denied after e.g. 3 days.
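A minimal sketch of that delayed-recovery idea. The class and the 3-day window are illustrative, not any real provider's API:

```python
import time
from dataclasses import dataclass

RECOVERY_DELAY = 3 * 24 * 3600  # 3-day veto window, as suggested above

@dataclass
class RecoveryRequest:
    user: str
    requested_at: float
    denied: bool = False

class DelayedRecovery:
    """SMS-initiated recovery that only completes after a veto window."""

    def __init__(self) -> None:
        self._pending: dict[str, RecoveryRequest] = {}

    def request(self, user: str, now: float = None) -> None:
        # In a real system: also notify via email / push with a deny link.
        self._pending[user] = RecoveryRequest(user, now if now is not None else time.time())

    def deny(self, user: str) -> None:
        req = self._pending.get(user)
        if req is not None:
            req.denied = True

    def try_complete(self, user: str, now: float = None) -> bool:
        req = self._pending.get(user)
        if req is None or req.denied:
            return False
        now = now if now is not None else time.time()
        return now - req.requested_at >= RECOVERY_DELAY
```

The key property: a stolen SMS channel alone can't finish recovery before the legitimate owner has had days to veto it through another channel.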
The numerous emails I get when I log in from a new device serve me pretty well, all things considered
If you have multiple accounts, services, etc, then backing up your 2FA codes, or registering two devices/phones at the same time should be on your radar.
Feels like the industry needs to push for a dedicated, universal, probably physical, tool for 2FA.
and how do you do estate planning? I'd like to give my family access to all of my private keys for everything when I pass.
Just like any other password or data, 2fa strings also need to be backed up, like in a password database (separate from the usual one).
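Those backed-up strings are just the TOTP seeds, and anything holding the seed can regenerate the codes. A minimal RFC 6238 sketch in stdlib Python (illustrative; use a vetted library in practice) shows why the base32 seed is the only thing you need to back up:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, t=None, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 TOTP code from a base32 seed (SHA-1, 30s step)."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((t if t is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)
```

Two apps (or a backup restored years later) fed the same seed will always produce identical codes, which is exactly why the seed belongs in a password database.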
This is not the case in my experience. Many apps that once used Authenticator-based TOTP now use app-based push alerts (Steam Authenticator, Blizzard Authenticator, Google->GMail App, etc.), but I haven't noticed a trend toward actual SMS.
Are there major orgs that switched to SMS 2FA and disabled authenticator apps? If so, I'd be interested in learning why, also.
This is also why they won't let you set up a good two-factor authentication method (like a YubiKey) without first setting up SMS as a second factor. It's very important to remember to delete that SMS second factor after setting up your good one, or social engineers will use it to steal your account.
Simplicity: nearly everybody understands how texting works.
Weirdly, it only works for a minority of services. I expect many use Twilio to send their auth texts, and Twilio blocks sending these to their own numbers?
It's a really backwards and confusing system, I agree.
Unfortunately, big parts of the industry seem to be headed the other direction.
Is there a stable inexpensive phone number service for folks that are outside the US a lot?
Something I hadn't considered. Thanks!
The threat model is beyond 2FA, imagine being able to impersonate anyone over text.
Social engineering gone to the next level. This isn't about just taking over accounts, it is about taking over a huge chunk of someone's social existence.
It is not stable in the least for millions of Americans, especially those who live in poverty (I'm not sure about the rest of the world). Phones are lost or stolen, phone numbers changed because of being harassed by debt collectors, ex-partners, current partners, etc. And if it isn't stable, it isn't convenient.
- First number.
- Moved to a different city for uni; switched numbers so that people didn't have to pay long distance to call.
- Moved back.
- Moved to Europe for a job.
- Moved back.
I would never consider an identifier that is (loosely) tied to your location stable.
I had a miserable time trying to get into Backblaze recently, with even the ability it offered to switch sms providers failing.
The list of valid keys they give you on setup bailed me out eventually, but it took me a while to remember them.
Uh. You're supposed to memorise them? I printed them out and stuck them in a safe place.
It doesn't get easier. It probably would if more of us did it, though.
I like telling companies who want my number: No, my phone is for people I know to call me, not corporations. Other times I tell them that I don't have a phone and ask them if they are refusing me service. Not saying I have a perfect record, sometimes it is convenient to have the mechanic call when the car is done, etc. But the more I see companies who don't need a dossier on me asking for my personal info the harder I want to push back. I seek out and appreciate organizations that don't do this. I joke that I might end up living in a commune one day!
I can see this becoming a bigger problem. I've been following the idea of covid-vaccination-tracking applications, and don't have a good feeling about any of that. I'm expecting that the vaccination campaign will work well enough for that idea to be a moot point, but also expecting that companies and governments will want to do the extra tracking anyways, because their incentives are not aligned with the general population for stuff like this.
It's better to assume SMS won't really be all that secure until phone numbers can be locked and unlocked the way domains can, with a random authorization code only accessible via real offline 2FA (though not all domain providers require it), and with the option of completely end-to-end encrypted texting (RCS?).
The process of changing the routing is pretty simple. It's a matter of being a trusted actor and having the ability to submit changes in routing for SMS to a central provider that maintains and propagates this info.
Heck, even with "port lock" enabled on a Google Voice number, that is the barest of security against an attacker who has any kind of access better than "retail store employee." Working for a telco with access to our back-end port system, access several other people had, I could forcibly acquire a number by simply checking a box that said I had verified a written LOA even if the losing carrier responded with code 6P ("port-out protection enabled").
So, yes, you're likely sitting in a security-by-obscurity, or at least security-by-slightly-more-difficult-than-someone-else, situation.
This is false.
"Mobile" numbers - numbers that are classified as belonging to an actual mobile carrier - are indeed different than non-mobile numbers.
For instance, you cannot send SMS from a short-code to a non-mobile number. Which means, your twilio number (which is not a mobile number) cannot receive 2FA (or any other SMS) from the 5-digit "short code" numbers that gmail (and most banks, etc.) use for new account verification, etc.
Non-mobile numbers are, in many ways, second-class citizens in the mobile-operator ecosystem.
Any number can be provisioned at an SMSC, even toll-free numbers these days. But mobile providers, and the associated short-code entities, are loath to peer with many VoIP carriers: partially for competitive reasons, partially because many short codes are premium billing numbers.
You’re right about non-mobile numbers being second class, but that’s largely because companies filter them out because “fraud,” which is also suspicious reasoning. I can get a hundred “mobile” numbers within a few minutes, rather inexpensively.
A useful strategy to help against this in any case is to use a different email address for every online service. Hackers generally can't initiate an account password reset if they don't know the account.
Also if you use a different phone number for account security than your public one then it's a lot harder for them to know what SMS to intercept. Security by obscurity sucks but in this world it may be your only practical choice.
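One sketch of the "different email address for every service" idea: derive a per-service alias with an HMAC-keyed tag and plus-addressing, so an attacker can't guess the alias from your public address. This only works if your mail provider supports plus-addressing and the service doesn't strip the `+tag`; the key and addresses here are made-up examples:

```python
import hashlib
import hmac

# Made-up example key; keep yours secret and backed up, since the
# aliases can't be re-derived without it.
ALIAS_KEY = b"keep-this-secret-and-backed-up"

def alias_for(service: str, mailbox: str = "me", domain: str = "example.com") -> str:
    """Derive a deterministic, hard-to-guess alias for one service."""
    tag = hmac.new(ALIAS_KEY, service.lower().encode(), hashlib.sha256).hexdigest()[:10]
    return f"{mailbox}+{tag}@{domain}"
```

Because the derivation is deterministic, you can always recompute which alias belongs to which service without keeping a list.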
You absolutely can port a Google Voice number. End-user subscriber numbers must be portable per FCC rules. Google, operating services provided by Bandwidth.com (mentioned in the article), does enable port-protection by default but this is easy to bypass by an operator who, like in the article, checks the box that says something like "I have a valid written LOA, complete the port as an exception." This has legitimate uses (some losing providers are very ruthless about not following the rules and letting customers move numbers) but unscrupulous or lazy operators will check the box and move on.
There's got to be a better system.
If they have a nice phone (modern iPhone or Android phone that is able to recognise who you are by fingerprint or facial recognition ought to be enough) that can do WebAuthn too, the actual recognition remains local to your device (so you're not giving some mysterious entity your face or fingerprint).
I'm assuming, since they're "nontechnical", that you mean as a user. The user experience for WebAuthn is trivial: one touch. You do this to enroll the Yubikey, and then you do it whenever you need to prove who you are to the same site. It's entirely phishing-proof, the credentials can't be stolen, you can keep one on your keyring or just leave it plugged into a personal PC all the time, and it has excellent privacy properties. The biggest problem is that too few sites do WebAuthn, but Google and Facebook do, so that's a good start for non-technical people.
Which brings me to the other side: if your non-technical friends are wondering what their organisation should mandate, then again, WebAuthn, but this time I admit it's somewhat complicated. Somebody is going to need to at least research what product suits the userbase and check boxes in the software they use, and at worst they need to do a bunch of software development. It's not crazy hard, but it's a bit trickier than yet another stupid password rule. However, unlike requiring passwords to contain at least two state birds and the name of an African country, requiring WebAuthn will actually make you safer.
I currently use Aegis and Bitwarden. AndOTP also allows you to export tokens.
Might not be ideal for backup however
Depends on your threat model, but unlike SIM swapping this may not be out of the reach of even a mildly technical angry ex.
Most important account / banks / etc services now offer this option.
The only thing is, though, make sure to keep backups of the codes you use to initialize the authenticator app, because for some services there is no recovery if you lose your phone or don't have backups.
> switch the 2FA to an authenticator on-your-phone code generator, which someone cannot hack easily.
I remember looking a few months ago and they only offered SMS 2FA.
Thanks
And you are right, I have not seen any of the banks I use convert to authenticator (BofA, Chase, etc).
I can only guess that they think it's too difficult for the average consumer to understand or implement. But the fact that they don't even offer as an option is unfortunate.
edit: actually I correct myself, seems like BofA may actually offer something like this: https://play.google.com/store/apps/details?id=com.bankofamer...
However, I can't tell/test because I don't use Android
Everybody gets phished. Much easier than sim swaps.
Basically, avoid using your carrier provided phone number for anything related to an account.
I wonder how high-profile politicians and celebrities deal with security issues like this? If this is really such an easy attack to pull off, what's stopping someone from shilling cryptocurrencies on celebrity social media accounts (again)?
Lucky's company has this product that can monitor for the attack, but it won't prevent it: https://okeymonitor.com/
I note, however, that this attack seems to only be possible on VOIP routable numbers, and it’s my experience that banks, etc, will not allow you to use VOIP routable numbers for 2FA.
That’s definitely not the case for a naive implementation of sms 2fa as would be done by likely any dev using Twilio, etc.
Also, don’t forget that NIST deprecated SMS 2FA over 5 years ago. Here’s their reasoning: https://www.nist.gov/blogs/cybersecurity-insights/questionsa...
I'm not sure what banks use, but I have had UK VoIP numbers flagged before when trying to register them for 2FA, so there are likely API providers for other countries.
And it's not just about 2FA, most of humanity expects that if someone else texts them, those texts will go to their phone and only their phone unless they've given explicit verifiable consent.
I mean, in this case all the hacker did was fill out a form and say pretty please. I hope phone companies that allow this get sued.
This is a growing trend in consumer services, and it's a privacy nightmare.
Imagine if they demanded your SSN to sign up? A phone number is no different, and no less sensitive a unique identifier; perhaps even more so these days.
There are widespread reports of delivery businesses selling their phone number databases (with associated credit card suffixes, delivery addresses, order history, et c) to large advertising companies for data mining.
Providing your direct cell number to an app is basically like providing your home address and a bunch of other sensitive data. Don't do it, or make a burner gmail account to get a disposable Google Voice number for each account that you must have that demands a phone number. Then, that number isn't reused and an attacker that obtains your mobile number can't attack your login method for other apps.
Reusing phone numbers is about as bad as reusing passwords.
I have extremely bad news for you. US Social Security Numbers are not in fact unique, and the fact they're "sensitive" is a terrible joke because it's pretty easy to discover the SSN for an individual based on public information, especially older people because SSNs weren't even randomised at issuance until relatively recently.
Any system that depends on keeping public facts secret is horribly broken, yes that also includes "verifying" credit cards based on a bunch of digits that are written right on the card itself.
We should all stop doing either.
The goal is for the service to have a unique identifier, and phone numbers happen to be a really good one to prevent spam also since it outsources verification of human entity to the phone companies.
That's not the reason phone numbers are used. They are used, because they are something you have in addition to something you know like an SSN or password. This is two factor authentication.
And there are obvious trade-offs here, if we make number portability harder, it means you're somewhat hostage to your phone provider.
The parent comment addressed this point. This is not just about 2FA. SMS users expect their communication are private, except (debatably) by the courts with a warrant.
Not sure which is why I'm asking.
Do you happen to have any links regarding this? Would love to read more.
So, no, not much of an expectation of privacy - at least, there shouldn't have been.
> "[H]orsman added that, effective immediately, Sakari has added a security feature where a number will receive an automated call that requires the user to send a security code back to the company, to confirm they do have consent to transfer that number. As part of another test, Lucky225 did try to reroute texts for the same number with consent using a different service called *Beetexting*; the site already required a similar automated phone call to confirm the user's consent. This was in part "to avoid fraud," the automated verification call said when Motherboard received the call. Beetexting did not respond to a request for comment."
But it seems that the entire system is globally infested with security holes. Is this applicable worldwide, or limited to one country?
Years ago I asked my carrier to not port or forward without me being physically present at a store. Maybe I should test them out to see if that’s still the case.
Regardless, I don’t use SMS MFA for anything important and even when I do, I have a 32 character password to go along with it.
Um, what?!
This is bad news because following the law isn't a top priority when trying to hack someone.
There are still grave vulnerabilities in mobile provider SMS (2FA or otherwise) due to how easy it is for a dedicated attacker to SIM swap, but this particular claim is completely misleading.
It's already too high up given it's a blatantly baseless accusation. I'm confused why you think it's more credible than the article when it provides zero evidence.
The attacker instead used the cell number of the author of the article, and supplied a fraudulent letter authorizing the re-routing of text messages through the bulk SMS service.
The attacker works for a service, which purports to verify the routing and carrier settings for a given mobile phone number; I expect that their solution periodically checks the results and issues an alert if the results differ from a known valid value.
The article goes into detail; it's worth the read to answer your questions.
From my experience, there is very little process or oversight being followed. I had my number ported over (with my knowledge) to T-Mobile by a third party, but T-Mobile never attempted to verify my consent. The store associate took this person at her word. My then-current phone suddenly no longer working caught me by surprise.
I can imagine if I signed up for a family plan, any store associate would be happy to move any number of phone numbers into my control.