Turns out that a substring of that product ID matched the client company's phone number and their security theatre intercepting proxy was replacing all occurrences of "sensitive" strings sent to the internet with asterisks.
The irony is, of course, that as the people running the site, I didn't know (and would never have wanted to know) the user's phone number until this incident.
How I loathe security theatre.
The right thing would be to add a lookup function to first verify the phone number is in use and then call the number to ask for permission to use it; followed by a webhook to send a confirmation back to the database to cache that info because this needs to be efficient!
/s
And now they can exfiltrate all the sensitive phone numbers -- just send clients (you) long strings of numbers and see what gets replaced.
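The trick is easy to model. In this toy sketch the proxy function is a stand-in for the DLP middlebox, and `SECRET` stands in for whatever string (e.g. a phone number) it considers sensitive -- both are made up for illustration:

```python
# Toy model of the exfiltration trick described above. The "proxy" is a
# stand-in for the DLP middlebox; SECRET is whatever it considers sensitive.

SECRET = "555123"

def dlp_proxy(payload: str) -> str:
    """Mask any occurrence of the sensitive string, like the middlebox does."""
    return payload.replace(SECRET, "*" * len(SECRET))

def find_masked(candidates):
    """Serve each candidate through the proxy; censored ones reveal the secret."""
    return [c for c in candidates if "*" in dlp_proxy(c)]

# Probe a range of candidate numbers and see which one the proxy censors.
leaked = find_masked(f"{n:06d}" for n in range(555000, 555200))
```

The masking itself becomes the oracle: the attacker never sees the secret directly, only which probes came back censored.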
A public Internet-facing channel is rightfully scanned and screened for these kinds of patterns to prevent unauthorized data loss.
This is an end user making a request to an online shop, and the POST request to "add product 123456 to the basket" gets changed to "add product 12***6 to the basket" by a security* proxy between the end user and the web site.
This isn't specific to the site we run. This would have happened on any site they were posting to.
1) They install a root certificate on all machines and use that to MITM all TLS connections using a firewall appliance. They turn this MITM on one day without notifying any developer. Overnight, all our builds (run on-prem) fail because npm install, pip install etc fail and we spent a long time trying to figure it out. They are still failing to this day and I have to get off the VPN every time I need to run these simple commands. IT absolutely doesn't give a flying **** about developers.
2) They ban all non-Chrome browsers from being installed. As in, if you install such a browser and try to launch it, the system will say "browser X is banned. Contact IT." They would have banned Safari too had it not been part of the OS. Furthermore, they also disabled private browsing in Chrome (probably the ability to do this is why they allow Chrome). I think they're preventing people from hiding their internet browsing.
Firewall as bossware.
Firefox being banned is because it uses its own certificate store, so Firefox users would see a browser warning every time they visit any https site, notifying them that their traffic is being MITM'd. Chrome and Chrome reskins like MS Edge use the OS store, which Windows-centric organizations can easily add the trusted MITM CA to (centrally, using MS tooling). For the Macs it probably wouldn't matter, since the third-party management tools could probably push out either.
FYI You can instruct FF to use system trust store: https://support.mozilla.org/en-US/kb/setting-certificate-aut...
They also used push notifications on the desktops to know when people were active or what they were doing, and had keyloggers installed/active. Once caught a manager's personal laptop on the network running mitm software. A friendly coworker in IT confirmed all of this with me in private.
Tried warning a couple coworkers, but got brushed off. People don't seem to care nor believe even though they're being manipulated.
That place was a nightmare to say the least
Our IT now blocks outbound SSH entirely. You know, the secure way to access VM's in, say, our cloud? Sigh. I'm sure there's a "jump" server somewhere that I'd have to log into, `sudo` to another account, THEN SSH to my target box. Whatever. I just avoid the VPN.
I used to use `cntlm` to tunnel requests through our firewall for things like Ruby's bundler, as it required NTLM authentication. Now they've also gone the additional mile, and installed a certificate (Cisco Umbrella) in all of our computers, and require its signature to pass the firewall. Unfortunately, it took me a long time to sort this out: why `cntlm` no longer worked, and why none of the usual suggestions on SO fixed it. I finally figured out that RubyInstaller for Windows included a nice facility to deal with this. You just place additional certs in a directory, run a Ruby script, and it will bundle the whole stack into a single .pem, which it will reference for all network-related commands. Thankfully, bundler's error messages were telling me the specific certs I needed, and I could download them from Cisco's web site.
Just about a month ago, my company started requiring that cert for ALL traffic, not just HTTP(S). Like for, say, Postgres connections on port 5432. I finally realized that I could reference that same SSL bundle in my Postgres client connections, and get through.
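For what it's worth, the bundling step that the RubyInstaller facility performs is conceptually tiny: PEM is plain text, so a CA bundle is literally just certificates concatenated. A hypothetical Python equivalent (file names and layout are made up):

```python
from pathlib import Path

def build_ca_bundle(cert_dir: str, out_file: str) -> int:
    """Concatenate every .pem file in cert_dir into a single bundle.

    PEM is a plain-text format, so a CA bundle is just the individual
    certificates appended one after another.
    """
    certs = sorted(Path(cert_dir).glob("*.pem"))
    bundle = "\n".join(p.read_text().strip() for p in certs) + "\n"
    Path(out_file).write_text(bundle)
    return len(certs)
```

You can then point individual tools at the result, e.g. `REQUESTS_CA_BUNDLE` for Python's requests, or `PGSSLROOTCERT` for libpq-based Postgres clients -- though, as noted elsewhere in this thread, each tool has its own way of being told about it.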
I've spent about 8 years here now, and it's been a cat-and-mouse game the whole time. I'm always wondering what's coming next.
They are not paid to. Their performance is judged against how close they get to zero compliance issues, not how close they get to zero times developers were unhappy!
> I think they're preventing people from hiding their internet browsing.
Without delving into "do you have the right to privacy even on a company machine", who would be daft enough to do something they want to hide from the company on a company machine, or on the company network at all? Though there are valid, useful uses of pron^H^H^H^Hincognito mode, so it seems silly to ban it.
Maybe “you shouldn't be doing that anyway” is a key part of why they don't care to spend effort resolving the problem.
I've worked in various places, from big finance, to public sector, to privately owned. It was only the big finance institution (the biggest) that actually seemed to care about supply-chain attacks. Everything was locked down super well and you could *not* use a random library without it getting vetted by a central team. In fact, we were even locked to specific versions of programming languages.
People see this as annoying and in the way of developers, but it really is the only way to secure your development "supply chain". When people cry about this I always ask: do you really want the entire financial industry grinding to a halt because someone took down left-pad?
I’m in charge of the IT goons somewhere. We aspire to provide a better level of service and maintain local repos of things you’re allowed to use. Stuff like Node isn’t allowed near anything important though.
I would be careful. An agency doing stuff like that is probably running an EDR that will detect and report on what you’re doing. If it catches what you’re up to, you’ll be jammed up.
If you don't have a budget for the vetted repositories, it means you don't have a budget for the project within the security requirements. You shouldn't be circumventing the security requirements, you should escalate the issue.
PS: of course I'm not talking about other things like MITM certs, that only reduces security.
There are portable apps for other browsers, Firefox for example. FF has its own certificate store, which defeats IT's injected root.
about:config -> security.enterprise_roots.enabled and it uses the system store.
Overall Firefox is a very good browser to configure for different machines. The out of the box Chrome or Edge are just a bit more forgiving to MITM attacks. So the user doesn't notice perhaps? Aside from that they are horrible browsers with horrible priorities.
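For managed fleets there's also Mozilla's enterprise policy mechanism for rolling that setting out centrally; assuming the documented policy name, a `policies.json` placed in Firefox's distribution directory has the same effect as the about:config flag:

```json
{
  "policies": {
    "Certificates": {
      "ImportEnterpriseRoots": true
    }
  }
}
```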
“It’s more secure” they said.
The “solution”? Disable certificate checking of course! What could go wrong?
Same place that ran a vulnerable instance of Nexus (the artifact repository) for all internal npm and Maven packages for a whole year before patching. It was publicly accessible. And it had a banner on the homepage that said “this version is vulnerable (severity 10/10 anonymous RCE), update NOW”. Anyone who went to https://nexus.1337company.com would see it.
That company did software for the government. I’m sure I wasn’t the only one who noticed the vulnerability and that some packages got tainted. But we’ll never know because no audit was ever performed and there were no backups of that server anyway.
Like I said, absolute joke of a workplace.
Or npm and pip use their own certificate stacks and refuse the firewall's cert, which is ... good I guess.
Corporate middleboxes come in all shades of stupid.
Combined with the fact that chrome is the only allowed browser, I suspect it is the other way around. Chrome uses its own certificate stack, and I would guess IT only added the MITM certificate to the chrome trusted CA list, not the system one.
You should use the command "pip install -i http://artifactory.mycompany.local pandas" and get the URL for Artifactory from the admins.
> Overnight, all our builds (run on-prem) fail because npm install, pip install etc fail and we spent a long time trying to figure it out. They are still failing to this day and I have to get off the VPN every time I need to run these simple commands. IT absolutely doesn't give a flying ** about developers.
Add their cert to the system store? Won't help inside containers though... without much fuckery.
Security team should have provided you with golden image with hardened config, latest patches installed, and corporate certs installed in certificate store.
If they didn't, they ain't doing correct DevSecOps/SecDevOps, or whatever the fancy term is for integrating security within the development team.
It is a big red flag that any developer can pull whatever image for container running in production, possibly with unpatched vulnerabilities and loose config and ports open, and running with root privileges, etc.
Usually stuff has to be vetted and checked prior to being deployed in production environment
That's the fun part: every technology and tool has its own bespoke way of handling certificates, and it often isn't as simple as adding a certificate to the system store.
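Indeed. A non-exhaustive map of where some common tools look, so a wrapper can at least try pointing them all at the same bundle -- the variable names below are the commonly documented ones, but verify each against the tool's own docs:

```python
import os

def export_corp_ca(bundle_path: str) -> dict:
    """Point several common tools at one corporate CA bundle via env vars.

    Each tool reads a different variable -- there is no single switch.
    """
    env = {
        "SSL_CERT_FILE": bundle_path,        # OpenSSL-based tools, Ruby
        "REQUESTS_CA_BUNDLE": bundle_path,   # Python requests
        "PIP_CERT": bundle_path,             # pip
        "NODE_EXTRA_CA_CERTS": bundle_path,  # Node.js / npm scripts
        "GIT_SSL_CAINFO": bundle_path,       # git over https
        "CURL_CA_BUNDLE": bundle_path,       # curl
    }
    os.environ.update(env)
    return env
```

Even this only covers tools that respect environment variables; Java keystores, containers, and anything with pinning still need their own treatment.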
What classifies this as an "overzealous" act of network configuration? There may be a subjectively legitimate reason the user's network was configured this way.
"I had no idea I was ever going to get anything different."
There's an entire list of HTTP status codes. That was your clue that you would get something different. You made a decision to not have handling for them all. Not implementing handling for 418 is understandable, but forbidden and service unavailable responses are common enough.
Worked at a large FI.
Our corporate firewall used to block any website or payload that contained the word "hack". At one point, the security team decided to roll out a change that blocked all verbs except GET and POST without telling anyone. I could go on.
What you tend to see is the web firewall is administered by someone who has only one duty (manage this firewall) and very narrow set of skills (certification in this appliance). They probably have a very shallow understanding of the http protocol.
How else are you going to stop employees from downloading and playing NetHack at work?
I understand that some companies want to block certain websites. However, if you're in such a restricted network, I wouldn't expect a website like "Thankbox" to work at all.
An overzealous filter like this prevents normal POST requests (logging in to websites, etc.), lets through random websites (gift card website) and allows all manner of data exfiltration and other nasty stuff. The goal is laudable, the implementation is laughable.
No, just no.
In a world where many websites use GraphQL (POST requests with bodies) or gRPC, that's a complete garbage decision.
This kind of brain-dead admin decision is exactly what brings on protocol abuse: people will just use GET queries with a ton of parameters and violate the semantics, just to avoid stupid middlebox problems. Same goes for TLS, which is now used everywhere (even inside VPNs) just to bypass the crappiness of corporate firewalls and stupid managerial decisions.
Always check status codes. Don't assume that backend (even if it is your own server) behaves as you think it should - complain when the response is not what you would have expected.
This is why I hate those error responses that encode the error message into JSON and return status 200. Gee, thanks - your backend is so special that it is an honor to write custom error handling for it. /s
Glad OP solved it in the end, but I would suggest reacting to all 4xx and 5xx statuses. It's a standard, if you get 418 you know what "your" backend is saying.
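Exactly -- and the baseline handling is small enough to write once. A sketch of the minimal bucketing using only the Python stdlib (helper names are mine):

```python
from http import HTTPStatus

def classify(status: int) -> str:
    """Coarse bucket for an HTTP status code, per the RFC 9110 classes."""
    if 200 <= status < 300:
        return "success"
    if 300 <= status < 400:
        return "redirect"
    if 400 <= status < 500:
        return "client_error"   # 403 Forbidden lands here
    if 500 <= status < 600:
        return "server_error"   # 503 Service Unavailable lands here
    return "unknown"

def describe(status: int) -> str:
    """Human-readable phrase, falling back gracefully for oddballs."""
    try:
        return f"{status} {HTTPStatus(status).phrase}"
    except ValueError:
        return f"{status} (unrecognized status)"
```

Handling the class (4xx vs 5xx) rather than every individual code is usually enough: anything non-2xx should at minimum surface an error to the user.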
Dash (or its Windows equivalent, name escapes me) can be used to view and search these dumps (as well as dumps from GitHub, language docs, etc) offline: https://kapeli.com/dash
The problem with security people is that they think security is the most important thing.
A good security admin will work within the bounds of compliance to make the business work. And any good blocks will be apparent to the user. Trust me, security doesn't enjoy pissing people off, we just accept that it happens sometimes.
Decided the idea was not worth it to fight with infosec guys.
Turns out that a one-checkbox-tick fix was all it took to make that go away. The woman in charge of the web proxies panicked, thinking that this change had "broken something", reverted the change, and then refused to change it back.
Fun times.
The A stands for availability and if you don't make things available, you're failing at your own job.
Also, the link to "request an exception" led to a 404, and the IT team responsible for the blocking didn't respond to email.
Oddly enough, I can get on imgur
For example I wonder if javadocs shows you how to convert an InputStream to a string, as per this question: https://stackoverflow.com/questions/309424/how-do-i-read-con...
CC: Management
With such a strategy for learning everyone would end up in IT security.
Regards,
hyperman1
Good thing I never put any of my money in the company accounts...
It makes simple things really hard - like a link checker, package dependencies, remote servers, or integrations with Google.
We can't even run test scenarios on the machine because we're also locked _out_ of the server. Instead, we rely on their IT department to run test scripts that we send them via email.
We were debugging an elastic server connection for 2 weeks that was working perfectly fine in their "QA". It's a horrible existence.
Even then, if you were using certificate pinning, it wouldn't work as the HTTP proxy would serve a "are you sure you want to continue" HTML page, which is of course not expected.
SSH is out of the question.
It's amazing what "simple" things break: kubectl, gcloud, go get.
So frustrating. Countless development hours lost to bypasses.
If you're running Linux, there's a utility called "tsocks" which wraps any other command and redirects all of its network connections through a SOCKS proxy defined in /etc/tsocks.conf, e.g.:
tsocks pip install somepackage
One downside is that since it relies on some linker magic, it doesn't work for static binaries. But for most common usage, it served me just fine.

Reminds me of a friend who started a government job; they went six months before they were fully onboarded and able to work.
I wish more front-end devs recognized that they're building HTTP clients whenever they make HTTP requests. There's a whole specification written about how to do that well so one doesn't have to learn things like this the hard way. Specs may look old and esoteric, but following them bakes hard-earned wisdom into your apps for free.
I prefer to let the users actually see such errors, although that seems to be an anti-pattern today.
Usually any message receiver should first check the status code and only proceed if it is 2xx and handle errors in any other case.
But such edge-case errors (a 403 usually isn't an edge case) getting swallowed still happens on the most prominent, thoroughly tested sites. I've seen similar on Amazon and Microsoft pages, for example: the errors showed in the console but were never displayed to the user.
Will it fail silently? If so, is your code prepared for null to be returned instead of parsed JSON objects? You'll have to check for that.
Using a library isn't going to save you from handling all the error states and unexpected response bodies. It'll just change the documentation you're reading and the name of the abstractions you're dealing with (e.g. status codes -> exceptions)
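Right -- e.g. helpers in the style of `res.json()` just move the failure around; you still have to decide what a non-JSON body means. A hypothetical Python-flavored helper illustrating the same decision:

```python
import json

def safe_json(body):
    """Parse a response body as JSON, returning None for anything else.

    A corporate proxy's block page is HTML, not JSON. A naive
    json.loads() here raises, and a swallowed exception becomes the
    "user clicks, nothing happens" bug from the article.
    """
    try:
        return json.loads(body)
    except (json.JSONDecodeError, TypeError):
        return None
```

The caller still has to handle the `None` case; the library only changed where the question gets asked.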
I think you did a great job cementing the "why"—usually this topic is very hypothetical. I also liked how you tied it to real end users. After all, that's who the internet is for!! [1]
My intention wasn't to criticize your post. I hoped my comment would help one or two readers recognize the underlying problem space a little sooner, which might help them learn a more broadly applicable lesson when the time comes.
And they wonder why I prefer to work from home?
Yeah, I can't believe how stupidly locked down some of these networks are.
I once had an employer say they needed a "whitelist" of websites we wanted to visit instead of a "blacklist" of ones we shouldn't. That was an interesting day...
We run a SaaS, and someone wrote an email saying that our server was down and asking when we'd expect it to be back up. Not having had a notification, I double-checked from a couple of geographic locations that our application was indeed up and responding.
After a bit of investigation, it turns out that they have to whitelist every unique address with their corporate IT. And had only whitelisted our primary client-app URL (talks to a couple of different API endpoints), hence the strange error message.
It's been a long time since I've worked somewhere with whitelisting.
I no longer need it right this minute, but people do keep asking me why I still have it.
---
Not sure if this still works on modern corporate networks. These days I tether to a mobile phone with unlimited internet; which is all-around easier to work with.
If you don't have other relevant allow rules, your sshd traffic would just be dropped, regardless of port.
If the firewall administrator does things poorly, they will create an allow rule for port 443 and your sshd traffic on port 443 would be allowed (no inspection of traffic to determine if it is SSL or SSH).
BTW this is inspection, not decryption. Two very different things.
The business of developing algorithms to effectively detect various applications must be very interesting. You can see all the different "applications" here: https://applipedia.paloaltonetworks.com/
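At its simplest, telling, say, SSH from TLS on port 443 only takes the first few bytes of the stream. A toy sketch of the idea (real appliances go much deeper than this):

```python
def sniff_protocol(first_bytes: bytes) -> str:
    """Guess the protocol from the opening bytes of a TCP stream.

    - SSH peers start with an ASCII banner "SSH-2.0-..." (RFC 4253).
    - A TLS connection starts with a handshake record: content type
      0x16, followed by the record-layer version 0x03 0xXX (RFC 8446).
    """
    if first_bytes.startswith(b"SSH-"):
        return "ssh"
    if len(first_bytes) >= 3 and first_bytes[0] == 0x16 and first_bytes[1] == 0x03:
        return "tls"
    return "unknown"
```

This is also why sshd-on-443 works against port-based rules but not against application-aware ones: the banner gives it away in the first packet.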
1. Disconnect from VPN and run `npm install` until it failed
2. Connect to VPN "Profile 1" and run the command again until it fails
3. Connect to VPN "Profile 2" and run the command again until it fails.
4. Disconnect from VPN and run the command another time to finish installing all dependencies.
5. Reconnect to VPN to actually run the app.
-> ᛯ ssh -T -p 443 git@ssh.github.com
Warning: Permanently added '[ssh.github.com]:443' (ED25519) to the list of known hosts.
Hi XANi! You've successfully authenticated, but GitHub does not provide shell access.

Cannot judge without knowing how he displays errors. But a question to the HN public: is opening unknown HTML under my domain within another window safe? Or is there any possibility to strip down any "permissions" to cookies, requests, resources, etc. for that dedicated page?
If he put it in a sandboxed iframe, it will have the same kinds of access as the main page, because it comes from the same domain. Everything is already as messed up as it can be, and there isn't anything the frontend can do to improve it.
What's the safest way to handle this? Open it in an iframe?
But still, I'd go with safer practices. Even in the slightly unlikely case someone manages to hack a 3rd party (Stripe) and send your users arbitrary HTML for some period of time... :)
I always hear people cry and moan about this, but having worked on that side of the fence, I would like you to know that I know of instances where people have been downloading illegal material (involving children) and running Tor. That's not to mention the 75% of staff who willingly give details during phishing campaigns.
Saying that, I find 60%+ of cyber businesses to be a waste of time at best, and at worst just frauds. Core firewalls with L7 capabilities from vendors such as Palo Alto and Check Point are legitimate security devices, especially suited for enterprise networks.
I do think it's pretty pointless running those in the cloud though, unless you have admin VMs on vnets for your production resources. But that way lies madness anyway.
Take for example the scenario in question here. Is it really legitimate to allow GET requests to a domain but block all POST requests? That sounds questionable at best. How many sites is it safe to view pages, download files, etc from, but POSTing to them is dangerous? There may be a few, but it is not particularly common. Far more common is sites where any request could be harmful. (Malware, sites spoofing other sites, etc).
I get fully blocking a domain. That can be reasonably sensible, especially for domains on a known blocklist of porn, malware, etc.
I can get inspecting content and blocking if there is clear evidence of maliciousness (but this must be done carefully, since false positives can cause a lot of headache!), but for other content-matching scenarios, you may well be better off generating an alert to be reviewed manually, rather than blocking things.
There have been cases where these systems incorrectly blocked business-critical functionality, causing a company to completely shut down, losing huge sums of money while figuring out what was breaking things before getting it sorted.
Yes, blocking phishing mails can be impossible with some hosted providers' spam filtering. But, here the solution should be to push back on e.g., Microsoft to fix their dumpster fire spam filtering, or switch the organization to a different product that works.
I don't think IT should be pretending at being police. It isn't their job. And, any infrastructure that can be used to catch "criminals" can be used to abuse employees.
Also, there is absolutely nothing wrong with using Tor. I've used it often, at work, to test things as if from off-site.
I believe the role of IT is to respectfully facilitate users to safely get their work done. This involves a balance of security measures that do not invade the users' privacy, pushing back against management when appropriate to protect the users from managerial overreach, and sometimes just allowing something that could be dangerous because the alternative is worse e.g., MiM provides limited protection from exfiltration, but also enables horrible abuse by management and should be pushed back against.
This is hopefully a trend that is disappearing with a wave of modern transparent proxy solutions, but in general companies tend to set up proxies that get automatically authorized by your workstation. It may have some issues with less known browsers and your console tools will not be able to use that at all.
So when you build something locally and want to download a .deb, or a PyPI package to have modern Python tools, you are out of luck: you have to download it manually using a browser, or not at all.
This is where such proxy comes into play.
Although I still think that breaking it up is a very bad idea in general and it is appalling that this became common practice. Especially because there are exceptions where it fails and you train users to just disregard TLS errors.
Even worse, the IT security industry shamelessly uses the data to spy on employees. For that alone it deserves its bad reputation. Still, there is no real solution to shield data from the most careless users.
Ideally a subnet belonging to one of your competitors? I thought that nowadays only very ignorant people follow links or open attachments in spam emails. Certainly all the spam I've seen for a few years has been as plain as the nose on your face: only an ignorant person would mistake it for ham.
> Website blocked
> Not allowed to browse Shareware Download category
> You tried to visit: https://www.valcanbuild.tech/handling-corporate-firewalls/
The irony.
There are lots of reasons why the request would fail and returning a 403 or 503 from a corporate firewall is just one of them. What happens if the user's wifi is flaky and the HTTP request is canceled? What happens if the connection is slow and the request times out? What if, heaven forbid, the destination server is down or unreachable temporarily?
As a web developer, never let a user's action lead to nothing happening. Always give feedback. Whenever sending background HTTP requests, always provide a visible error message to the user when you encounter unexpected results or HTTP/network errors.
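Agreed. The decision table is small enough to write down once; a sketch (the wording and categories are mine, not a standard):

```python
def user_feedback(status=None, network_error=False):
    """Map a request outcome to the message the user should see.

    Returns None only on success; every failure path produces feedback,
    so a user action never silently leads to nothing.
    """
    if network_error or status is None:
        return "Couldn't reach the server. Check your connection and try again."
    if 200 <= status < 300:
        return None
    if status in (401, 403):
        return (f"The request was refused (HTTP {status}). "
                "A corporate firewall or proxy may be blocking it.")
    if 400 <= status < 500:
        return f"The request was rejected (HTTP {status})."
    return f"The server had a problem (HTTP {status}). Please try again later."
```

The exact copy matters less than the invariant: the only path that shows the user nothing is the one where the request actually succeeded.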