Questions for HN:
- What problems do you see with the approach?
- Any better approaches?

If you're saying you want this to be an optional thing, perhaps even enabled by default to protect grandma from doing something careless, then I would support that. But it needs to be optional. The mindset that a company can dictate every potential use case to end users is foolishness: it doesn't slow down hackers, it just irritates normal users who need those features. Right now, for example, apps can already flag whether screenshots are allowed. In normal situations (banking apps, secure messaging apps, one-time-view photo apps) this is quite useful, but it shouldn't be an enforced requirement that you then have to hack your phone to override. It should be a default that's enabled and that you can disable. If you want the sending party to know when you've disabled that feature, that's fine as well; it's simply more information, and the sender can then decide what they want to do. The idea of completely removing control, as if you know better, needs to stop.
Things will not get better, with technology or anything else, if we cannot require people to be responsible for their actions. Sensible defaults are great: a sensible default to have this on is fine, and a system that notifies the other parties in a communication whether it's on or off is fine. Removal of control from the user is wrong.
Hardware/OS choice is the first step. Some platforms, like iOS, are already more sandboxed than, say, Android. I personally use iOS and feel comfortable trusting Apple’s approach, even though zero-day exploits remain a possibility. For my server needs, I use Linux and appreciate full control and root access—but that’s a separate use case.
App choice is the second step. Secure messengers like Signal prioritize privacy as a core feature, while many other messaging apps don’t. If a few high-profile apps enforced “No Screen Sharing,” the people who genuinely need or want to share their screen could always switch to a different app. So in practice, this feature wouldn’t prevent screen sharing entirely; it would just block it in contexts where security is paramount.
All of which is to say: optionality still exists—you can choose a less-restrictive OS or a different communication app. But for those who opt into a more locked-down environment, having a secure messenger that outright prevents screen sharing can make all the difference in avoiding accidental leaks or social engineering attacks.
When you begin to view your users as the enemy, instituting manipulation and control over how they use your product, you set yourself up for a hostile relationship with your user base. When your company is big enough and you've locked in your market, that works for a while.
That said, I’d like to explore how we might achieve security by design without sacrificing user experience. First, let’s agree on one core principle: if a user decides to share their screen, the OS should treat that choice uniformly across all apps—meaning it must always share the entire screen.
Given that, let's think about other ideas to address the risk scenario: a user might unwittingly share their screen with an adversary and then start a top-secret chat, accidentally leaking sensitive information. Ideally, users handling top-secret data would be exceptionally cautious, but in practice, mistakes happen.
Here's an alternative approach: a "Secret Chat Room" feature that would rely on OS checks explicitly authorized by the user. Think of it as akin to a physical secret meeting room with soundproof walls and a Faraday cage: a place where sensitive conversations are truly isolated. When a user enters such a room, they'd see a prompt like:
You are now entering a Secret Chat Room.
This room is designed to ensure that no eavesdropping (such as keystroke logging, microphone tapping, or unauthorized screen sharing) is occurring.
To proceed, please authorize the OS to perform an integrity check. You’ll be allowed in only if this check is successful.
To preserve privacy and avoid penalizing users with poor security practices, the OS would return only one bit of information:
- 1: The user authorized the check AND it succeeded.
- 0: Either the user did not authorize the check OR the check failed.

This binary signal prevents the app from knowing whether a failure was due to a deliberate user choice or a technical issue, thus providing plausible deniability.

What do you think about this approach? I'd love to hear your thoughts on refining it further to balance robust security with a seamless user experience.
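To make the one-bit signal concrete, here is a minimal sketch of the gating logic. The function names are hypothetical (no real OS exposes this API today); the point is that authorization and check outcome are collapsed into a single bit before the app ever sees them, so the app cannot tell a declined check from a failed one.

```python
def secret_room_signal(user_authorized: bool, check_passed: bool) -> int:
    """Collapse (authorization, integrity result) into one bit.

    Returns 1 only if the user authorized the check AND it succeeded.
    Returns 0 in every other case, so the app cannot distinguish
    "user declined" from "integrity check failed" -- preserving
    plausible deniability for the user.
    """
    return 1 if (user_authorized and check_passed) else 0


def can_enter_secret_room(user_authorized: bool, check_passed: bool) -> bool:
    """The app only ever branches on the single combined bit."""
    return secret_room_signal(user_authorized, check_passed) == 1
```

Note that all three "failure" combinations (declined, failed, or both) map to the same 0, which is exactly what gives the user cover.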