I'm sure there's a few journalists out there that take cybersecurity seriously, but I'd wager the vast majority are pretty trivially monitored.
I think being responsive to their needs and building trust will go much further. Also, designing a one-size-fits-all model will just mean that your reporters will either ignore the guidance or find a way to work around it.
For instance, the most recent credible threat we have had against one of our reporters wasn't a state-level actor, but rather folks on the internet (trivially) finding their address and doxing/harassing them and their family. No amount of technology hygiene will change the fact that voter registrations are public records.
I don't envy your challenge. Security must make it more expensive to the attacker than it's worth. Even the housing reporter's data could be highly valuable; with inside knowledge, someone could make a killing on real estate. The value of the national security beat information is astronomical.
I don't understand why, with all the news about breaches, reporters still don't care.
Was she getting leaks from NSA staffers? No. But it does feel kinda silly to me that journalists, generally speaking, have insecure setups by default. But I get it, it's a hard industry to squeeze a living out of these days.
However, it adds enough friction (especially with remote work) that it's hard to get it right 100% of the time.
If you want to share really sensitive documents, one way to ensure proper handling is to use a service like SecureDrop [0], which, for example, only accepts submissions over Tor and requires the use of a secure viewing station [1] (an air-gapped machine live-booting Tails with a Coreboot ROM, and with the webcam/networking card physically removed) to decrypt and access leaks.
That being said, I don't think there's a perfect tech-only solution, because nothing stops folks from handling a file carelessly after they access it.
[0] https://securedrop.org/directory/center-public-integrity/
Sounds like she dodged a potential honeypot and surveillance attempt.
iOS is the least worst mobile option and it’s ridiculous to say Apple is lying about security if any exploits are found, ever.
If you look at e.g. how messaging works in iOS 14 [0] you’ll see that they do in fact work on making secure systems. But parsing and memory safety are hard. Like, really hard. The fact that NSO found exploits doesn’t mean Apple isn’t doing anything; Apple is clearly making it more and more difficult to find and abuse such exploits.
For the average person that isn’t being specifically targeted by sophisticated malware from companies funded by -governments-, iOS is pretty damn secure. Dealing with being attacked is a different threat model.
[0]: https://googleprojectzero.blogspot.com/2021/01/a-look-at-ime...
[1] https://www.theregister.com/2020/05/14/zerodium_ios_flaws/
This doesn't have to be the case. Start by avoiding C and C++. Use Java (on Android) to write parsers. It is very hard to take a buggy parser written in Java and escalate it to a memory corruption attack.
If you really can't use a language like Java, write your parser in safe Rust using slices over Vec<u8>. Then run a fuzzer over it. You'll find a few runtime panics, but you're vanishingly unlikely to encounter memory corruption.
Buffer overflows and memory corruption can be almost entirely avoided these days, at a price.
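To illustrate the point about safe Rust and slices (a minimal sketch, not production code; the record format and function name here are hypothetical): a tiny length-prefixed parser over `&[u8]`. A malformed input produces an `Err`, or at worst a panic a fuzzer would surface, never silent memory corruption.

```rust
// Hypothetical length-prefixed record: [len: u8][payload: len bytes][rest...].
// All access goes through slice bounds checks; `get` returns None instead of
// reading out of bounds, so a parsing bug can't become a heap overread.
fn parse_record(input: &[u8]) -> Result<(&[u8], &[u8]), &'static str> {
    // First byte is the payload length.
    let len = *input.get(0).ok_or("empty input")? as usize;
    // Bounds-checked slice of the payload; fails cleanly on truncated input.
    let payload = input.get(1..1 + len).ok_or("truncated payload")?;
    // Safe to index: the `get` above proved 1 + len <= input.len().
    let rest = &input[1 + len..];
    Ok((payload, rest))
}

fn main() {
    assert_eq!(
        parse_record(&[3, b'a', b'b', b'c']),
        Ok((&b"abc"[..], &b""[..]))
    );
    // A truncated record is an Err, not undefined behavior.
    assert!(parse_record(&[10, b'a']).is_err());
}
```

Pointing a fuzzer (e.g. cargo-fuzz) at a function like this will shake out panics such as arithmetic overflow or missed error paths, but not memory corruption, which is exactly the trade being described.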
Speaking of companies lying... You are holding your phone wrong, and your keyboard works fine.
Oh and your apps might have a backdoor, but it took getting sued by Epic for us to let anyone know that.
Apple lying is about as common as a politician lying.
Absolutely, but creating a platform that encourages or forces users to do the wrong thing is a regression from where we were ten years ago.
>iOS is the least worst mobile option
No. Devices running a FOSS operating system like the Pinephone are the least worst mobile option, people don't like it because it's not sexy and it's currently very inconvenient. The rest of the options are so bad that you're probably better off without a mobile phone at all.
RE: iMessage
You have everyone using exactly the same messaging client, so you have one piece of software to exploit and now you can attack everyone. The extreme lack of diversity makes these sorts of complex exploits much more profitable.
>iOS is pretty damn secure
Sure, if you don't do anything with it. But it encourages users to download unauditable closed apps and reassures them that doing so is totally safe, despite the fact that most of them are using third-party telemetry services run by data brokers.
I don't see how it's lying. If you are going to consider that iOS is not secure because they got owned by a couple 0 days, then by that definition there isn't a secure piece of software on the planet.
Based on supply and demand it would appear that iOS is less secure right?
It doesn’t mean it’s good enough but I’d be curious to hear your ideas for what could work as easily for the masses.
Attacks against the freedom of others, and against critics of government, are a much larger threat to ordinary people than being surveilled themselves.
I go one step further and leave the SIM card out, which means the SMS vulnerability path is closed too.
So in either case... turn off native messaging and use Signal or something if you are paranoid. You aren't really using the "phone" part anymore, so buy an iPod touch or something.
Also, iMessage is fully E2E if you disable iCloud Backup, which you can easily do in Settings.
If you are this paranoid, you shouldn't be carrying an electronic device.
>you are using SMS
doesn't fit with this from GP
>leave the SIM card out
I know this because I used to carry an iPad Mini in my pants pocket.
The most shocking experience to me in trying to evaluate the Mac ecosystem, when they released the M1 and I bought a MacBook Air, was being in meetings where I'm using Bluetooth headphones: I take the headphones off, put them back on, and music.app automatically opens and comes to the foreground of my desktop. There is no supported way of disabling this user-hostile anti-feature. I looked on Google and StackOverflow, and all of the suggestions for how to disable it, dating back to 2014 or whenever, no longer work. Apparently the likely answer is: turn off System Integrity Protection, reboot, rename or remove the file containing the application launcher, turn SIP back on, and hope that doesn't break anything else and that Apple doesn't revert your changes on the next system update.
That did not seem worth it. The fact that Apple Music can and has been used as an attack vector makes it even worse that it is so tightly integrated with the audio subsystem of the hardware as to take over your device thanks to movements you are making in the physical real world even when you may not be touching the device at all.
I just can't understand what the thought process was in making this a default behavior, let alone one that cannot be disabled.
I think your bluetooth headphones are sending a play command to your device when it's connected. I'm sure it's annoying, but I think your macbook is doing the right thing here.
Yep this is what's happening. I have a car bluetooth addon that I purchased that does the same thing -- it sends Play commands to the phone on-connect repeatedly until something starts playing.
By default the phone will open Apple Music but if I already had been playing music on the Spotify app, it'll just start playing that instead.
I do not get the bluetooth-automatically-starts Apple Music behavior.
I haven't tried but I just checked the iMessages preferences and you can disable being contacted via your phone number or email addresses, with check boxes for each. As Macs don't have phone numbers I think this would work? I do use apple messages (which is why I didn't try disabling it), but use WhatsApp and signal more than I use the default.
I have no idea how good the mac's security might be, just pointing out my experience.
I agree that Apple could do better with eliminating their bundled apps, but I use third party calendar, address book, reminders, photo, etc with no issues. And I hear quite a few people are willing to use chrome (ugh) as their default browser and safari doesn't get in the way.
Unfortunately it's not built in, but I think it's your headphones doing something nonstandard, because my Sony XM4s and AirPods don't trigger this behavior when I put them in.
I guess it's worth looking into to see if there is some outside of the OS way to force the OS to route requests to the application I already have opened and foregrounded that plays sound, but I would expect that to be the default behavior. What is a "play" request to music.app even supposed to do when I have never intentionally opened the app and don't have a playlist set up? It doesn't actually play anything since there is nothing to play. It just opens the app and takes over my screen.
>If you remove the headphones or put them back on, this will pause or resume playback. If you're not wearing the headphones, make sure there's nothing else around the sensor because it may activate and resume playback.[1]
[1]https://www.sony.com/electronics/support/articles/00229324
With Apple Configurator you can disable Music and Messages. It’s not the most user-friendly method, but it is possible.
Only allowing their own app to be associated with the default audio player is anti-competitive, at the very least.
From: https://www.theguardian.com/news/2021/jul/18/huge-data-leak-...
Yeah, okay.
With Windows Server I used to aim for a balance in the attack footprint: if Microsoft provided the OS, the component services the server exists to provide (db, web server, etc.) should be third-party software, to try to minimize one type of escalation vulnerability, while possibly opening up another, hopefully less severe, set of holes.
A good way to disable iMessage and iTunes, though, is to simply not have an Apple ID. (This prevents the install of applications via the App Store, however.) You can of course set up the device with no Apple ID and then only add the Apple ID to the App Store (and not iTunes or iMessage/FaceTime/iCloud). This is what I do.
I think that one should probably buy an Apple device (at least they control everything, rather than the cobbled-together Android clones) and disable basically everything except exactly what is needed. At least that reduces the surface area. And keep personal stuff on a separate phone.
[1] https://www.apple.com/newsroom/2021/01/data-privacy-day-at-a...
[2] https://www.inc.com/jason-aten/apples-privacy-update-is-turn...
No non-open-source "smart" phone is going to be secure enough. If you never store your data on your phone, you are safe from these hacks. Now you just have to protect against physical attacks :)
If you really care about security, maybe it's better to get a really dumb 4G phone and share its connection with a small-form-factor Linux tablet (but not one running Android).
Of course, inconvenient as hell, but much more secure, especially since you are not running the iOS/Android monoculture, so for anyone to target you it would require custom work.
Anyone who thinks they're up to the task (bulletproof?) should contact them.
I think "can't" here runs up against "choose not to". So far as we can tell opsec tends to be a pain in the ass in ways that are fundamental, not a problem with tools. Apple, like any other consumer focused company, doesn't lose focus of this.
And you don't need Apple's resources to make something more secure, but a much more secure phone would have much worse UX. Some examples for a much more secure phone, where you don't need Apple's budget:
- Runs a barebones Linux with minimal packages. An SMS app is an SMS app, not something that makes HTTP requests.
- App store is very heavily vetted.
- Forced updates; you can't dismiss update notifications.
- Minimal attack surface: no smart connection features or accessories.
- Forced Updates? The FBI takes over the update server, forcibly sends out an update that sends all messages to the FBI immediately, and there's no way to stop it. That suggestion is idiotic. Or even better, install Pegasus on all the phones, have them be quietly reporting back to home for a few weeks, with journalists having no way to prevent updating.
- You forgot Hardware Root of Trust and Secure Enclave, like on an iPhone. Otherwise, the FBI can install a tool which just guesses PINs over and over while resetting the PIN attempts counter. It is not possible to build this protection in software only. You need chip-level hardware, and only iPhones in Fall 2020 and later have the Enclave set up to block repeated PIN attempts even if Apple-signed code is loaded. No other phone is safe from their own manufacturer like that.
(1) The ability to detect espionage from China and Russia (2) The inability to access journalists' phones
If you want an intel agency to be able to thwart Chinese intelligence activities, you can't also publicly state you won't be looking closely into members of a profession who act a lot like spies.
This is called "incidental collection" and it's a touchy subject for sure.
But this subject is different than the DoJ directly surveilling journalists who leak, which is a problem, and governments surveilling their own citizens directly, not incidentally.
We can and should hold our government(s) to a standard of effective fire-walling of acceptable intelligence gathering and holding them accountable when they go beyond to surveil citizens directly, or indirectly through spying agreements.
We can make sure that the people who surveil Chinese or Russian "diplomats" are totally different than the people who execute search warrants against our citizens, and expect there to be zero crossover there.
Yes, that happens all of the time but one difference here with Tucker is he was deliberately "unmasked." Normally when an American is caught up in foreign surveillance, their identity is blocked out or masked, "incidental collection" as you said. Someone purposefully unmasked it. And someone purposefully leaked it. The same thing was done to General Flynn.
https://en.wikipedia.org/wiki/Unmasking_by_U.S._intelligence...
How could an analyst understand the conversation without knowing both parties?
My understanding is that some 10,000 legal unmaskings occur per year, and Gen Flynn's and Tucker's unmaskings were both routine, legal, and integral to analyzing the intelligence.
When one considers the litany of crimes the disgraced lunatic Flynn committed, it's no wonder he got caught up in collection and that his identity was important to understanding the collection!