The narrative seems to be: L0pht testifies, world ignores them, chaos ensues. But Mudge's testimony coincides almost perfectly with a software security renaissance. The reality is more like: L0pht testifies, world ignores them, a gigantic sea change in security leads to 9-figure investment in securing Windows, the near-eradication of SQL injection from popular applications, universal deployment of TLS in financial applications, chaos ensues anyways.
What, exactly, would be different if people had "listened" to the L0pht? Would we have S-BGP? DNSSEC?
The simple fact is: in 1998, when this happened, nobody knew how to fix any of the problems. If we had known, we'd have been doing that. There were still servers in 1998 that used deslogin.
I'm very happy that a bunch of people I like got to put their handles on nameplates and get recorded testifying to dummies in Congress. I do not, however, think it was an event with much meaning.
Later.
I think it's literally the opposite of the gist of this story. Everything is much, much better than it was in 1998. We have made surprising progress, and addressed security problems with an improbable seriousness:
1. Most new software is no longer shipped in C/C++.
2. The devastating bug class introduced with new languages (SQLI) was, for public-facing software, ratcheted back from "universally prevalent" to "rare" within a decade.
3. 3 billion Internet users all run software that downloads unsigned code in a complex, full-featured language with a dizzying variety of local C library bindings, right off web pages, executes it locally, and it's a news story when Pinkie Pie wins Pwn2Own with a working reliable Chrome clientside.
4. Anyone who wants strong crypto can have forward-secret elliptic-curve DH AEAD transports with a config file tweak on their servers.
5. Microsoft went from MSDOS levels of security to "you can live like an investment banker if you can reliably produce a couple Windows exploits a year" levels of security, again inside a decade.
6. Despite its emergence as an entirely new category of computing platform, with its own new feature set, the most popular mobile OS has, it appears, zero effective malware outbreaks.
7. Remember Sendmail? Remember BIND? Probably only if you're a security nerd. The last working SSH vulnerability was how many years ago?
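The SQLI ratchet in point 2 mostly came down to one idiom change: parameterized queries instead of string concatenation. A minimal sketch using Python's stdlib sqlite3 (the table and data are hypothetical, purely for illustration):

```python
import sqlite3

# Hypothetical users table, in-memory for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

attacker_input = "' OR '1'='1"

# Vulnerable: attacker input is spliced into the SQL text itself,
# so the injected OR clause matches every row.
vulnerable = db.execute(
    "SELECT secret FROM users WHERE name = '" + attacker_input + "'"
).fetchall()

# Parameterized: the driver passes the input as data, never as SQL,
# so the same payload matches nothing.
safe = db.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(vulnerable)  # leaks every secret: [('hunter2',)]
print(safe)        # []
```

Frameworks baked that second idiom in as the default, which is how "universally prevalent" became "rare" within a decade.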
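And point 4's "config file tweak" is by now a one-liner even at the library level. A sketch with Python's stdlib ssl module, restricting a server context to forward-secret ECDHE key exchange with AES-GCM AEAD suites; the particular OpenSSL cipher string is my illustrative choice, not the only reasonable one, and no certificate is loaded here since this only shows negotiation policy:

```python
import ssl

# Server-side TLS context; modern defaults already prefer ECDHE.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

# Restrict TLS <= 1.2 cipher negotiation to forward-secret ECDHE
# key exchange with AES-GCM AEAD suites. (TLS 1.3 suites, which
# OpenSSL manages separately, are forward-secret AEAD anyway.)
ctx.set_ciphers("ECDHE+AESGCM")

for cipher in ctx.get_ciphers():
    print(cipher["name"])
```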
As usual: everything's amazing and nobody cares.
There is a great chart from an analysis of the history of software bugs showing the rise and decline of SQLi/XSS bugs between 2005 and 2010:
Memory-corruption bugs remain relatively consistent (despite the efforts of PaX and others to change that).
Source: https://www.isg.rhul.ac.uk/sullivan/pubs/tr/technicalreport-...
I am sure you know this too well, but let me remind a few people here who were not even born when some of this happened: the early years were mostly built on trust. As an example, I remember running around even in 1994 with a 10BASE2 network tester to find where the network broke. 10BASE2 was a perfect example of a more trusting age: many machines shared a single cable, and eavesdropping required zero effort. Every machine got all the frames, and there was no encryption. Then came Fast Ethernet, and with it 100BASE-TX; slowly, switches replaced active hubs and this went away. But it often required rewiring buildings, which took a long time. I do not have hard data, and it's hard to define the start and end points, but I would say it took at least five years, if not a whole decade, to really sunset 10BASE2.
Now, of course, doing the same on a world level would've been a daunting task. However, growth in the number of websites had slowed at this point, at least compared to the years prior and later; see the data here: http://www.internetlivestats.com/total-number-of-websites/ . Compared to the meteoric growth of 1997 (334%) and 2000 (438%), the years 1998 (116%) and 1999 (32%) were slow and peaceful.
Here, let me sum it up this way: I think it's possible that the L0pht testimony predates SQL injection.
So "tear it down and rebuild it securely" was already many years too late by 1998.
There are a few who know how to build software that isn't Swiss cheese. Just picking two examples that I know of: for one, nobody reads the whole volume set (as noted in Coders at Work), and for the other, nobody wants to use it because it isn't under active development. The idea, even today, that a chunk of important internet software can actually be finished seems to be met with cognitive dissonance.
And there are organizations that know how to build very good software. But in today's fast-moving businesses, who wants to be in a CMM 5 organization? Doesn't sound like much fun to me, and probably not to you either.
My estimate of the last year that all of this could have been fixed is at least 30 years earlier than your estimate. Or earlier still. I worked in an organization whose core software, running today, was first written about 50 years ago.
The people this message should have been directed at were non-technical executives and managers.
What language are the applications in? What makes them more secure than C/C++?
Modern C++ can be written in a secure way; however, this requires more discipline and knowledge than languages such as Java or C#, which are much more forgiving of programmer errors.
As for what software is written in, that depends. For Linux, I guess that's C and C++. For Mac it's mostly Objective-C, C and C++. For Windows, probably a mix of C#, C and C++. On iOS it's Objective-C and C or C++, but new apps will maybe move to Swift. On Android it's mostly Java, with some C and C++. Web apps don't use C or C++; however, they're not as hot as they used to be, mobile is the cool thing now.
As one can see, there's actually a lot of C and C++ in production or still being written.
Are the folks in Congress actually stupid? Or do they practice a different profession than you? Namely: the structure and interpretation of laws and policies.
How much do you know about, say... the field of nursing?
Say what you like about programmers, but most of them don't actually have any job-related responsibilities in the field of nursing, breaking the analogy.
Not all. But some of them are, for lack of a better term, really fucking stupid.
The two big problems in computer security used to be Microsoft and C. Amit Yoran said that publicly when he was Homeland Security's head of computer security. That made him unpopular, and he resigned in 2004. Yoran was then replaced by a Cisco lobbyist who kept his mouth shut. (Yoran did OK; he's now the CEO of RSA.)
[1] http://www.cultdeadcow.com/ [2] http://www.cultdeadcow.com/cDc_files/cDc-351/
Microsoft, to their credit, responded admirably to the events: they invested a spectacular amount of money shoring up the nuts-and-bolts quality of their software, training their entire development team (one of the largest in the world) on secure coding standards, hiring researchers to revise their libraries and deprecate unsafe interfaces, and adopting hardened C/C++ runtimes.
If this wasn't exaggeration, we should study the fortunate circumstances by which this calamity has been avoided for 17 years.
The hunting and taxidermy of corrupted BGP advertisements is basically what got the NANOG crowd out of bed every morning; it's a pretty big chunk of the job. I always felt like the alarmism over BGP was a bit tone-deaf. Certainly, nothing Peiter said came as any surprise to anyone who'd ever managed default-free peering.
Imagine someone using all of the advanced techniques from today (DNS cache poisoning, DNS/NTP amplification attacks, BGP hacking, SQL injection, and so on and so on) and taking them back to 17 years ago, when the world was naive and insecure.
Also, we've had calamities; we just got over them. Code Red and then Nimda caused huge disruptions, and so did Sasser and SQL Slammer. And we've gotten used to a world where people will use DDoS to try to take down sites or services for a variety of reasons, ranging from profit to spite to boredom.
No, the internet never completely fell over and was unusable for days or weeks at a time, but a lot of people have been affected and it's just sort of become background noise in our lives the way tuberculosis and smallpox used to be.
So I wouldn't say that it's been entirely avoided.
Or, alternatively, you could go after some smaller internet companies, demand extortion money, buy a nice car and treat your friends to drinks.
Leaving aside the truth of their claims at the time (because it's irrelevant), your comment makes the fatal error of assuming conditions haven't changed at all in 17 years.
But it is certainly not especially hard for _governments_ to take down the net in their own country, and in many cases to reduce the degree of interconnectedness with other countries so far as to effectively take down large chunks of the Internet. The problem is that we do not truly have a network; instead we have a tree structure connected to a very small number of fat pipes. As originally envisaged, the internet would be resilient in the face of the failure of one route because there would be many alternative routes, but that is not what we have today.
This is a much bigger threat than the cracking of individual machines.
"Due to the sensitivity of the work done at the L0pht they'll be using their hacker names of Mudge, Weld, Brian Oblivion, Kingpin, Space Rogue, Tan, and Stefan."
Isn't that like saying "Many accidents happen to models of cars first built during that era"? Just because they debuted then doesn't mean they are substantially, or even remotely, the same thing. How many complete rewrites of Internet Explorer have we had since then?
How many complete rewrites of Internet Explorer have we had since then?
None? There's an ongoing one that was announced in January, with a preview released in March. https://en.wikipedia.org/wiki/Microsoft_Edge
https://twitter.com/dildog/status/612795030345007104
"It feels like 1996 again."
The first problem is with incentives over time. (The same thing happened with global warming, with overfishing, with deforestation, with cyber privacy rights, etc.) The problem is that the immediate incentives do not align with the long-term incentives. If the country that cuts down the most forest or burns the most oil is the one that wins the global race for power projection, then no country will want to do in the short term what it must in the long term.
Alas, today the short-term incentives in software and hardware development are mostly the same. The security community has long preached built-in security as a crucial and fundamental engineering design goal. Yet today, as for decades before, software is not competitive if it has security built in. It raises the costs of development, it slows production, and building security awareness into every developer would require years of additional professional experience or schooling: building in security is a competitive disadvantage.
The second problem is that everyone's threat model is different:
- Consumers want their computers to run quickly and do not want their information or identity stolen. They want to have convenient and reliable control over the privacy of their online interactions - from the public and from law enforcement.
- Industry does not want to spend more time and treasure creating fewer visible features. Their existential threat model is losing their business by being too slow at production. Corporations are also scared dumb of having a Sony-style or Target-style breach.
- Government wants to be able to peek into all communications of everyone including its citizens. It wants to be able to hack into other countries - both their industrial and their government sectors - and those of private foreign citizens. It does not want the same to be true in reverse.
It's also true that the types of systems used by the military are different from those used in industry, which are further different from those used by consumers. Where do you allocate investment in security? Consumer internet browsers? Virtualization for enterprises? Network intrusion detection for corporate LANs? Access control for government systems? Which do you prioritize? (Granted, it's true that some technologies, such as web browsers, are shared between these classes.)
What's happening right now is that the discussion about threat model is being negotiated (though not in those conscious terms). Governments make their case about national security - how they need backdoors - and how they would like computer security to work. Security professionals - many of them private citizens - have separate threat models and can't agree with government. Individual citizens want privacy - and can't agree with government or industry. Industry wants to get customer and competitor data but also doesn't want to leak their own.
To the degree that the threat models are compatible, some level of real investment can be made (today there do happen to be large scale efforts to mitigate cyber security risks - particularly threat intelligence sharing programs).
Yet fundamental contradictions in threat models will keep the direction of security in limbo, and worse, if some threat model 'wins', it will come at the expense of the others. Governments' goals, even in so-labeled 'free' countries, are misaligned with their citizens' threat models. Government goals are themselves internally contradictory, as governments would like computer networks to be both secure and insecure (giving birth to phraseology such as "NOBUS").
Today not only are we not able to secure the internet and computer systems, we still don't really know what a secure internet would mean.
beard: hacker credibility +1
nickname/handle: hacker credibility +1
glasses: hacker credibility +1
suit: hacker credibility -1