"Once software becomes pervasive in devices that surround us, that are online, and that can kill us, the software industry will have to come of age. As security becomes ever more about safety rather than just privacy, we will have sharper policy debates about surveillance, competition, and consumer protection. The notion that software engineers are not responsible for things that go wrong will be put to rest for good, and we will have to work out how to develop and maintain code that will go on working dependably for decades in environments that change and evolve."
So, to look at the words:
The notion that software engineers are not
responsible for things that go wrong will be
put to rest for good.
I'd have to say that this sort of high-minded Platonic concept needs some revision:
The notion that *some* software engineers *cannot*
be held responsible (in part or in whole)
for *some* things that go wrong will be
put to rest in *some* situations.
There needs to be a degree of responsibility ascribed to some classes of systems development. Meanwhile, there is very obviously a line to be drawn between the programmer who programs their VCR clock to time a recording, the programmer who programmed the VCR as a consumer-grade product intended for purchase by unlicensed individuals, the TV network that broadcast the television show at the time the individual programmed their VCR to record 60 minutes of broadcast on a given channel, and the programmer who locked me out of the firmware on my smart phone.
I've had the idea for a while that most of us practice software development rather than software engineering. I have a degree in computer engineering, but I consider myself a software developer now rather than a software engineer. The reason: I don't practice engineering in the legal and professional sense.
In engineering school we learn about engineering as a formal process and about professional responsibility. Both of these things are largely absent in most shops now. I get that not all projects need to be professionally engineered, with all the costs and timelines that entails; I think this is why agile came along. Sometimes it's good enough to hack something together and demo it until a manager says it's time to release.
But there are many other projects which are extremely important to society and should follow more traditional engineering practices. There shall be external and internal engineers who must formally approve any product before release. There shall be specific and testable formal requirements. There shall be a formal design and documentation for engineers to review and for people to develop from. And so on.
"... first-of-its-kind method for testing and scoring the security of software — a method inspired partly by Underwriters Laboratories, that century-old entity responsible for the familiar circled UL seal that tells you your toaster and hair dryer have been tested for safety and won’t burst into flames. Called the Cyber Independent Testing Lab, the Zatkos’ operation won’t tell you if your software is literally incendiary, but it will give you a way to comparison-shop browsers, applications, and antivirus products according to how hardened they are against attack. It may also push software makers to improve their code to avoid a low score and remain competitive."
On your latter point, I agree that there needs to be a degree of responsibility for some software development, but it's a complex area and I can't really apportion 'blame'. What blame should I get for writing a better webserver that some dictator uses to serve up his orders and have people killed? Compare that to the programmer of a phone's firmware, and to the person who teaches a robot how to determine if someone is killed by a weapon. It's too easy to look at that last person and say, without thinking, that they are the bad one.
That sounds like a really hard problem to solve. Not only would you need to have an audit trail for the history of changes to the code, you'd also have to figure out a way for third parties to audit the code and make sure they're looking at the same data that is being released by the project.
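Just on the verification half of that: here's a minimal sketch of what a third-party check could look like, assuming the project publishes a SHA-256 digest for each release and that builds are reproducible (both are assumptions, and the script and its arguments are illustrative, not any project's real tooling):

    import hashlib
    import sys

    def sha256_of(path, chunk_size=1 << 20):
        # Stream the file so large release artifacts don't have to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        # Usage: python verify_release.py <artifact> <published-digest>
        artifact, published = sys.argv[1], sys.argv[2]
        actual = sha256_of(artifact)
        if actual == published.lower():
            print("OK: artifact matches the published digest")
        else:
            print("MISMATCH: got " + actual)
            sys.exit(1)

The hashing isn't the hard part, of course; the hard part is making the build reproducible enough that an auditor's rebuild from the audited source produces a byte-identical artifact in the first place.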
Additionally, reacting to arbitrary changes in the environment probably requires more resources than even a multinational corporation can provide. So you'd need a way for third parties to be able to make their own changes to the software, somehow add them to the codebase without creating too much administrative overhead for the core team, and audit those changes in an automated fashion so that they don't create thousands of new exploits.
And how would you even manage the sum total of all these Frankenstein versions of the original software, with all their changes? How would all these geographically disparate groups of programmers even communicate?
We obviously need funding for a lot more CS professors to come up with solutions to these issues.
"As we build more complex artifacts, which last longer and are more safety critical, the long-term maintenance cost may become the limiting factor. Two things follow. First, software sustainability will be a big research challenge for computer scientists. Second, it will also be a major business opportunity for firms who can cut the cost. On the technical side, at present it is hard to patch even five-year-old software. The toolchain usually will not compile on a modern platform, leaving options such as keeping the original development environment of computers and test rigs, but not connecting it to the Internet. Could we develop on virtual platforms that would support multiple versions?"
ELC 2016: Approaches to Ultra-Long Software Maintenance: https://elinux.org/images/f/fb/Approaches_to_Ultra-Long_Soft...
ELC 2017: Long-Term Maintenance, or How to (Mis-)Manage Embedded Systems for 10+ Years: https://www.linux.com/news/event/ELCE/2017/long-term-embedde...
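On the "keep the original development environment ... but not connecting it to the Internet" option from the quoted passage: a minimal sketch of one way to do that today, assuming QEMU is installed and that you've archived a disk image of the old build environment (the image filename is hypothetical):

    import subprocess

    # Boot an archived disk image of the original build environment under
    # QEMU with no network device, so the old toolchain stays usable but
    # is air-gapped by construction.
    subprocess.run([
        "qemu-system-x86_64",
        "-m", "1024",                      # modest RAM; old toolchains need little
        "-hda", "archived-build-env.img",  # preserved image of the original environment
        "-snapshot",                       # discard writes, keeping the archive pristine
        "-nic", "none",                    # no NIC at all: "not connecting it to the Internet"
    ], check=True)

Because the guest has no network hardware, nothing inside it can phone home or be attacked remotely; patches go in and builds come out via disk images or shared folders you attach deliberately.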
Yes, but we still have to solve it… or, at the very least, make a good effort to improve the situation.
Biology and chemistry already have systems around them to help limit any damage from, for example, smallpox research or mishandling chlorine trifluoride, and they manage this without needing everyone to wear safety goggles when mixing yeast and water.
It's good for non-technical folks to watch, but there's nothing really new in it since the 15-minute documentary 'Humans Need Not Apply' [0].
[0] https://www.youtube.com/watch?v=7Pq-S557XQU (2014)
Edit: added link to humans need not apply
The film is likely a bit long to trigger much by way of discussion here, though that's not always bad.
Word is that free play / download is this weekend only. Grab a copy via yt-download if you can't watch immediately.
Who is misrepresenting the expert consensus? Or are they both misrepresenting the fact that there is a consensus?
"- that documentary was pretty unhelpful
- Terminator images are usually inappropriate
- AlphaGo Zero (if not other Alpha•s) was pretty cool"
(https://twitter.com/Miles_Brundage/status/983063456424308736)
I don't have any survey results to point to, but my impression from following AI researchers from industry/academia is:
* Modern methods are many leaps of understanding behind anything resembling AGI, so any concern about research groups developing a sentient computer program behind closed doors with no warning is probably misplaced.
* AI/ML causing large-scale unemployment will be a serious issue eventually, but it's difficult to make a strong case that it's happening right now.
* The ability to monitor and manipulate individuals using ML/AI is dangerous, doesn't depend on particularly advanced technology, and is already being used by corporations and governments right now. It's a lot easier to get the public worried about Terminator-style robots than about (what appear to be) simple advances in advertising or law enforcement.
* There's a strong incentive for those selling "AI technology" to oversell its ability to generalize and improve automatically. To quote Elon (of all people): "It's a mistake to think that technology automatically improves. It does not automatically improve. It only improves if a lot of people work hard to make it better." This applies to "deep neural networks" as much as anything else.
There remains the possibility that there is a difference between "the set of top experts in the field included in this documentary" and "a survey or statement from the larger share of practitioners within the field". This ... runs into a few additional problems.
Documentaries can be (and with some frequency are) selective and nonrepresentational. That is, a documentary's goal is not to Reveal The Statistical Truth, but To Tell a Specific Story. Documentaries are driven by narrative, not by random sampling and statistical analysis.
That said, stats can lie and crafted stories can be quite useful.
A large-scale sampling of opinion is also ... largely just that. A large-scale sampling of opinion. Even if that's expert opinion. It is not the same as arriving at a truth (unless the truth you're seeking to arrive at is "what is the typical or general opinion held on some matter by some population?").
The views and concerns of top practitioners within a field are often highly significant. They may still be inaccurate (or be inaccurately portrayed). But these are the people who've worked with a thing for a long time, who've seen what does and doesn't work, what is and isn't hyped. And you'll often find exceptionally strong critics of various fields or technologies among them.
Leading atomic scientists in both the U.S. and the USSR came to oppose nuclear weapons: J. Robert Oppenheimer and Andrei Sakharov. The Father of the Nuclear Navy, USN Admiral Hyman Rickover, came to oppose nuclear power. There are numerous technologists who are now questioning the goal of universal connectivity (myself included, though my qualifications are expressly not offered as any basis for credence). And in the field of AI, there are numerous significant, long-term, and leading researchers raising profound questions of advisability and risk.
Or, you know, you could go with the Good Humour guy, Pinker.
Also, I tend to agree with Mark Cuban[1] about the importance of a philosophy degree in the near future. There will be so many issues to assess that such a degree would bring much value to society.
[1] https://www.cnbc.com/2018/02/20/mark-cuban-philosophy-degree...
I'm not saying we shouldn't try to make friendly AI (one of Musk's initiatives); rather, I'm just saying I don't see how it's remotely possible to regulate this.
This tech is almost inevitable. We are creator beings. We want to make something that matches our intelligence, and we always have. We've made pantheons of gods to match and overcome us; we've made these autonomous beings in the stories we tell and in the dolls we played with when we were kids. The problem is that we don't know what machine intelligence may do when we let it out into the world. If they can learn the tasks we want them to learn, can they then learn how to learn a task we didn't assign them? What kind of task would that be? How would we control it?
There may be no controlling it. But if we started focusing on this problem right now, we might be able to figure it out. Instead, most everyone in AI is working to pump out as many "smart" things as possible: trying to develop better learning algorithms, to get AI to become as human as possible, to make machines that behave like us but better, without paying attention to the future costs.
Maybe, for the first time in human history, we can learn without first making the mistake.