https://www.ieee.org/about/corporate/governance/p7-8.html
We, the members of the IEEE, in recognition of the importance of our technologies in affecting the quality of life throughout the world, and in accepting a personal obligation to our profession, its members, and the communities we serve, do hereby commit ourselves to the highest ethical and professional conduct and agree:
1) to hold paramount the safety, health, and welfare of the public, to strive to comply with ethical design and sustainable development practices, and to disclose promptly factors that might endanger the public or the environment;
2) to avoid real or perceived conflicts of interest whenever possible, and to disclose them to affected parties when they do exist;
3) to be honest and realistic in stating claims or estimates based on available data;
4) to reject bribery in all its forms;
5) to improve the understanding by individuals and society of the capabilities and societal implications of conventional and emerging technologies, including intelligent systems;
6) to maintain and improve our technical competence and to undertake technological tasks for others only if qualified by training or experience, or after full disclosure of pertinent limitations;
7) to seek, accept, and offer honest criticism of technical work, to acknowledge and correct errors, and to credit properly the contributions of others;
8) to treat fairly all persons and to not engage in acts of discrimination based on race, religion, gender, disability, age, national origin, sexual orientation, gender identity, or gender expression;
9) to avoid injuring others, their property, reputation, or employment by false or malicious action;
10) to assist colleagues and co-workers in their professional development and to support them in following this code of ethics.
1) Access to computers - and anything which might teach you something about the way the world really works - should be unlimited and total. Always yield to the Hands-On Imperative!
2) All information should be free.
3) Mistrust authority - promote decentralization.
4) Hackers should be judged by their acting, not bogus criteria such as degrees, age, race, or position.
5) You can create art and beauty on a computer.
6) Computers can change your life for the better.
7) Don't litter other people's data.
8) Make public data available, protect private data.
Originally written by Steven Levy in "Hackers: Heroes of the Computer Revolution" and slightly modified by the CCC afterwards.
There must be a line somewhere between authority and expertise, and likewise between information and intellectual property.
It's more that I don't think I understand, even though I've probably read [most of] the same books.
Where does a journalistic piece of writing fall? Certainly, and often, the results of an investigative piece would be considered quite valuable information. Does that mean it should be free? If so, how should that work be funded in the future? Most efforts outside of selling the final product have failed disastrously. Most people want it for free, free as in rainfall. I suppose some software may fall under a similar question.
On authority: who gets to speak on a subject? Decentralizing "facts" has put us in a position where empirical evidence is actively opposed on baseless grounds, and where anyone can compound nonsensical ramblings and dress them up as if they were of equal value to something hypothesized, experimented on, iterated, and proven within the confines of our scientific process.
Those are two more practical and immediate questions, but a sibling comment also raised a salient point.
I like a lot of the points they make, but I also think some of them are naive, since they predate the normalization of the internet and computers.
Curious to hear from others on this one.
The whole image of amoral, selfish programming implies that we either lost sight of or failed to publicize the strong ethical stances which have been around for decades.
Like the codes governing medical and legal professionals, it is concerned with professional ethics. Many people would be surprised to find their personal ethics are not present.
On the blogs of notable programmers, best practices for security sit next to demands for closed-shop union membership. In comments sections (like this one), avoiding malicious dark patterns comes alongside demands that no programmer work for the US military. In the news, respecting user data is paired with enormously controversial positions on censorship and encryption.
This is far from the only reason codes of tech ethics have yet to gather major attention, but I think it would suffice even if the others disappeared. Professional codes like the ACM's are valuable, but attempts to popularize and enforce them are derailed by attempts to add in specific personal ethics.
Many people expect ethical codes to explicitly include their personal ethical framework in clear detail, so that they can use it to hold others to account. A reference to what someone personally considers a contribution to society and humanity might not rise to this standard.
User data is an asset, because you use it to make money, and it is a liability, because it can be stolen and misused. Companies currently get all of the benefit but very little of the risk. If user data had to be insured then there would be a financial incentive to only keep what's needed, and to treat it more carefully.
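The asset-versus-liability framing above can be sketched as a toy expected-cost model. All numbers, function names, and the premium formula here are illustrative assumptions for the argument's sake, not real actuarial practice:

```python
# Toy model: expected breach liability for a company holding user records.
# Assumptions (illustrative only): annual breach probability `p_breach`,
# a per-record cleanup/penalty cost, and an insurance premium proportional
# to expected loss plus a loading factor for the insurer's overhead/profit.
def expected_liability(records: int, p_breach: float, cost_per_record: float) -> float:
    return records * p_breach * cost_per_record

def annual_premium(records: int, p_breach: float, cost_per_record: float,
                   loading: float = 1.5) -> float:
    return loading * expected_liability(records, p_breach, cost_per_record)

# The premium grows linearly with records held, so deleting data the
# business doesn't need directly cuts the bill, which is exactly the
# financial incentive the comment describes.
print(annual_premium(1_000_000, 0.05, 2.0))  # 150000.0
print(annual_premium(100_000, 0.05, 2.0))    # 15000.0
```

The point of the sketch is only the shape of the incentive: once holding data carries a recurring, per-record price, "keep everything forever" stops being free.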
Interesting thought. How would you imagine this working in practice? Insurance against what? In the event of a loss, who would make a payment to whom?
There are "cybersecurity" insurance policies available to companies now but they really only cover the cost of mailing notification letters to impacted people and sometimes the cost of credit monitoring for a year. They're way overpriced and usually not at all worth it. I suspect that isn't really what you had in mind though?
Codes of professional ethics seem to offer some defense against total indifference to malpractice and incompetence, as with civil engineering restrictions. People holding out for them to do more, especially around conscious decisions like political work, seem to seriously overestimate what they've achieved in the past.
Facebook is just trying to shift the blame to its employees. Gross.
And if I work for the Department of Defense making missile guidance software, then it's OK to kill people, I suppose...
There is a distinction between personal ethics and professional ethics. In my opinion, that position crosses the line and lands squarely in the realm of personal ethics.
War is an ever-present possibility - do you want your neighbor's kids (or yours, if they join the service) dying or being maimed because you didn't help with what you know?
1000 years ago, if you were a blacksmith and the Vikings were coming, would you make swords or ornamental railings?
People who think "it's unethical to make weapons" always assume that someone else is making the weapons, which makes them feel good about themselves (not sure why) - they're willing to sacrifice themselves and their neighbor's children for their ethics.
I'm not. I make weapons, and I think the 'ethical' stance on defense is really just virtue-signalling.
/Edited. ;)
So professional ethics can run counter to what a polite member of society would normally wish to do.
I think this is an oversimplification. A lot of doctors deal with non-life threatening diseases and ailments, and some even deal with purely cosmetic stuff (e.g. plastic surgeons). Similarly, a lot of lawyers deal with issues that don't directly carry a risk of a jail sentence, e.g. workers compensation.
The number of doctors or prospective doctors who deny the medical code of ethics is very nearly zero. Almost all debate around it centers on subtle questions like defining 'harm' in self-determination cases (e.g. assisted suicide). The number of lawyers who would reject the legal code of ethics is perhaps larger, but the code is accordingly narrower. Lawyers are, for instance, free (and in fact obligated) to support "not guilty" pleas by clients they know are guilty. The boldest parts of the code (e.g. that lawyers cannot actively lie on behalf of clients, or mingle personal business with professional work) are also the parts which are regularly broken.
Striving to avert something you consider immoral is a sensible decision, even if other people dispute that ethic. But doing so with a professional code of ethics is not only a doomed task, it's one which dooms the rest of the code in the process.
When a situation has escalated to the point that someone is launching missiles, it has usually reached the point where destroying the target is more important to them than the risk of collateral damage. If the guidance software is not good enough to give a small margin of error, they will launch several missiles to ensure that one is likely to hit the target, which pretty much guarantees a lot of damage throughout the margin of error of the guidance software.
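The "launch several to ensure a hit" reasoning above can be sketched numerically. Assuming each missile hits independently with probability p (a simplification of real targeting):

```python
# Probability that at least one of n independent launches hits,
# given each hits with probability p: 1 - (1 - p)**n.
def p_at_least_one_hit(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# A 60%-accurate missile fired three times gives a near-certain hit,
# which is why salvos spread damage across the whole margin of error.
print(p_at_least_one_hit(0.6, 1))            # 0.6
print(round(p_at_least_one_hit(0.6, 3), 3))  # 0.936
```

This is the comment's point in miniature: poor guidance doesn't prevent the strike, it just multiplies the number of warheads landing somewhere in the error ellipse.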
Because of that, the ethical codes have teeth, as the medical board or bar can strip you of your license and thus the ability to practice for ethical violations.
There is no equivalent to the medical board or bar for programmers. Anybody who has the inclination can program. Because of that, any ethical codes will be just words. In fact, companies may actually pay more for people willing to violate these codes.
While I think ethical codes are good, I don’t think they are worth having a licensing authority. I am happy that anyone with the ability is allowed to program and that I don’t need a state license.
They did try, though. They went just far enough that it's almost possible to suggest that one couldn't adhere to the current code and still work for Google, Facebook, or China in any engineering capacity. But I still don't really expect them to start penalizing members, as the ACM seems to see itself a lot more as an academic/research group than as a professional org.
I don't subscribe to "mental injury" as a quantifiable or philosophical harm. There is no way to protect against interpretation or experience; that necessitates increasingly complex and ineffective measures to adapt to subjective and possibly aberrant mental states. This will always devolve into a discussion of normality, which is immoral at the core.
> 2.6 Perform work only in areas of competence.
This, perhaps, can be salvaged. I would not agree to it as-is: how anyone measures competence is too open a question, given the state of software development.
https://www.ieee.org/about/corporate/governance/p7-8.html
These seem like good starting points to me.
The simplest difference between professions with clear ethical codes (doctors, lawyers, civil engineers) and ones without is that professions with strong codes provide outcomes. A doctor treats a specific patient, a lawyer litigates a case, a civil engineer builds a specific bridge. Even indirect work like medical research has specific recipients and a fairly clear course for future use. Programmers, like chemists or machinists, frequently create tools.
Alfred Nobel famously created his prize in atonement for the destruction caused by his invention, but he invented and marketed dynamite for use in mining and construction, where it provided real benefit to humanity. Edward Teller spent the second half of his life championing civilian uses for nuclear power and atomic bombs, having seen the world reshaped by fear of a weapon he hoped would be demonstrated to prevent future wars.
The beauty of professional codes in medicine and law is that they achieve moral good without taking heavily-disputed moral stances; if doctors do no harm and refuse instructions to do so, harm will generally be prevented. A chemist studying nitrates or a programmer designing GPS guidance has no such guarantees; the same work is very likely to create both good and bad in uncertain amounts, depending on where it's put to use.
I think of something like legal or medical confidentiality, which can involve clients doing pretty horrible things while the "ethical" course is not to reveal that. If one takes this same thing to a Facebook scenario, does the developer who works for Facebook then have an obligation to protect the confidentiality of Facebook even if it's doing awful things? Would it be any different from a lawyer working for a horrible client, trying to use the law to do things that are a net negative for society, where ethics would put their obligation toward their direct client rather than society at large?
If one then wants to write a piece about ethics, one should start with examples of fields with ethical codes that have a structure closer to the relationship of users/Facebook/developers.
This is very similar to the structure I operated under as a corporate attorney at a bank but also as the Chief Information Security Officer. I owed a duty to the bank, as my "client," and also to the bank's clients, whose information I was charged with protecting. You are indeed correct that it's a difficult and uncomfortable situation to manage. I always tried to be a "zealous advocate" for the bank in all matters except those related to privacy, and I'll be the first to admit that much of my zealous advocacy was indeed a net negative for society. I did ultimately leave because of a disagreement over how to respond to (or, as they preferred, ignore) an ongoing breach of customer accounts.
All of that being said, I don't think it's an impossible ask with a bit of help from the government. I also served as the AML Officer and in that capacity I had the absolute final say on all matters related to money laundering thanks to the PATRIOT Act. The only way to override my decision would be a board vote. I never had any "disagreements" about how to handle an AML situation because my decisions were final while my decisions as CISO were merely a recommendation that management could (and did) ignore in order to save time, money, and bad PR.
Curious to find out more about this, and how you felt about it while you were doing it, if you're willing to talk about it.
I don't want ethics exams and accreditation requirements to constrain software development to people with traditional software backgrounds. The cost of a traditional college education in the United States is easily five or six figures, and socioeconomic forces mean that black people face systemic barriers to getting access to that education (versus whites, Indians, Asians, etc.). I want there to be more YCs, more Startup Schools, more Lambda Schools, and I want them open to immigrants and non-US-citizens, too.
I also don't want ethics exams to constrain supply. A lot of software withers on the vine because there's no "business case" for it, but if there were a way to divorce the need to put dinner on the table from how much people (or advertisers, or...) are willing to pay for software, then there's a lot of software yet to be written that could make the world better (say, enabling the creation of more art and music, or something).
Here are the main points:
1. GENERAL MORAL IMPERATIVES.
As an ACM member I will
1.1 Contribute to society and human well-being.
1.2 Avoid harm to others.
1.3 Be honest and trustworthy.
1.4 Be fair and take action not to discriminate.
1.5 Honor property rights including copyrights and patents.
1.6 Give proper credit for intellectual property.
1.7 Respect the privacy of others.
1.8 Honor confidentiality.
2. MORE SPECIFIC PROFESSIONAL RESPONSIBILITIES.
As an ACM computing professional I will
2.1 Strive to achieve the highest quality, effectiveness and dignity in both the process and products of professional work.
2.2 Acquire and maintain professional competence.
2.3 Know and respect existing laws pertaining to professional work.
2.4 Accept and provide appropriate professional review.
2.5 Give comprehensive and thorough evaluations of computer systems and their impacts, including analysis of possible risks.
2.6 Honor contracts, agreements, and assigned responsibilities.
2.7 Improve public understanding of computing and its consequences.
2.8 Access computing and communication resources only when authorized to do so.
3. ORGANIZATIONAL LEADERSHIP IMPERATIVES.
As an ACM member and an organizational leader, I will
3.1 Articulate social responsibilities of members of an organizational unit and encourage full acceptance of those responsibilities.
3.2 Manage personnel and resources to design and build information systems that enhance the quality of working life.
3.3 Acknowledge and support proper and authorized uses of an organization’s computing and communication resources.
3.4 Ensure that users and those who will be affected by a system have their needs clearly articulated during the assessment and design of requirements; later the system must be validated to meet requirements.
3.5 Articulate and support policies that protect the dignity of users and others affected by a computing system.
3.6 Create opportunities for members of the organization to learn the principles and limitations of computer systems.
4. COMPLIANCE WITH THE CODE.
As an ACM member I will
4.1 Uphold and promote the principles of this Code.
4.2 Treat violations of this code as inconsistent with membership in the ACM.

Among other things, section 1.5 was always a disaster, because copyright and patent are not natural rights but special privileged statuses granted to (in theory) promote the accrual of public benefit, and there's no reason practitioners should be required to subscribe to copyright- or patent-maximalist perspectives.
Section 2.8 was too broad because it didn't take into account security researchers, stuff like insulin pump hacking, etc. (which also ties back to 1.5 being too heavy-handed).
Not that the 2018 version is tons better, but it tries more.
> 1.2 Avoid harm to others
So no smart weapons which minimize collateral damage?
> 1.5 Honor property rights including copyrights and patents.
So no Sci-Hub? What about bad copyrights?
> 1.8 Honor confidentiality.
What about whistleblowing?
> So no smart weapons which minimize collateral damage?
This section is concerned with indirect and unintentional harm. Weapons tend to be a problem for every ethical system. Here, my reading would be that weapons should work as intended.
> So no Sci-Hub? What about bad copyrights?
What is a bad copyright is not a question for the ACM code of ethics. You'll find a more detailed examination of intellectual property in the full code of ethics.
> What about whistleblowing?
An excellent question! I'll just quote from the discussion directly:
> Computing professionals are often entrusted with confidential information such as trade secrets, client data, nonpublic business strategies, financial information, research data, pre-publication scholarly articles, and patent applications. Computing professionals should protect confidentiality except in cases where it is evidence of the violation of law, of organizational regulations, or of the Code. In these cases, the nature or contents of that information should not be disclosed except to appropriate authorities. A computing professional should consider thoughtfully whether such disclosures are consistent with the Code.
The ACM are a major academic publisher.
Would they call for the same for journalists too? Arguably, the media has an equal need for ethical oversight, but we generally prefer the media to be free of partisan interference.
I believe journalists and technologists are better off without state or federal licensing to enforce ethical codes.
It's a fool's errand; at most it serves the purpose of 'feeling good about yourself' rather than having any societal impact.
What's really happening here is that Facebook wants to push for blaming individuals, because that lets it keep doing whatever unethical things it wants: it knows that individual responsibility is toothless in the face of a system as massive as itself.
Tech is fun. Let's keep it that way.
That's actually a good demonstration of why we do need a code of ethics.
As far as a code for all of tech, who would write such a thing? And how would it be enforced? And who would be obligated to follow it? Just people who write code? Or what about designers? And if designers have to follow the code, that would mean they can be "kicked out" for non-compliance, which means they have to be "allowed in." So how does that work? Professional licensing for designers now too? A four-year degree along with a test on the Bauhaus and Human Interface Guidelines?
As far as licensing, that would make tech less fun. What would be the licensing requirements? Would it be like a coding interview with ridiculous whiteboard algorithm examples? Would you have to pass a test on front-end frameworks? How about a test on Kubernetes? Would you have to have a degree in "tech?" Steve Wozniak wouldn't have been eligible to work on the Apple I. How about the Wright Brothers? They worked in a bike shop; how were they qualified to invent airplanes with engines? A proper authority ought to have shut them down promptly, because clearly they weren't qualified to be working on flying machines! The Bicycle Mechanics Code of Professional Ethics expressly prohibits working outside the scope of your training and licensure, right? Howard Hughes wasn't a professional engineer, yet somehow he helped design the fastest airplane of his time.
You want ethics? Then act ethically. But the last thing we need is a gatekeeper organization deciding who is worthy or not to invent the future.
I much prefer laws like GDPR where there is a set of rules everyone has to follow.
Edit: to be clear, that doesn't mean we shouldn't have one!
Think about it. Imagine Facebook not doing anything evil. Never selling your data without your permission. Never spying on you. Putting lots of restrictions on advertisements to make sure they are all ok. Banning all the bad users. Reporting analytics accurately. Etc.
How would Facebook make any money? The only ethical sources of income would be to sell non-evil ads (if you believe there even is such a thing) or to charge users for accounts, the way Netflix does. Non-evil ads are not very effective, and with accurate analytics, they will not fetch a high price.
What other ethical business model is there? If operating ethically, they would be lucky to cover their operating costs. Giving any kind of ROI to investors would be impossible. The same applies to every Silicon Valley operation. Google, Twitter, etc. If they operate ethically and morally, they can't make enough money.
Of course, if the power were mine, I would force them to behave ethically anyway. I don't care if they all shut down. Gonna eat me some rich people.
If a product can't be profitable without abusing its power, it shouldn't exist.