NihilismAbsurdism.Blogspot.com

"The Absurd" refers to the conflict between the human tendency to seek inherent meaning in life and the human inability to find any.

Nihilism (from the Latin nihil, nothing) is the philosophical doctrine suggesting the negation of one or more putatively meaningful aspects of life.

Friday, August 26, 2011

Computer and Information Ethics

In most countries of the world, the “information revolution” has altered many aspects of life significantly: commerce, employment, medicine, security, transportation, entertainment, and so on. Consequently, information and communication technology (ICT) has affected — in both good ways and bad ways — community life, family life, human relationships, education, careers, freedom, and democracy (to name just a few examples). “Computer and information ethics”, in the broadest sense of this phrase, can be understood as that branch of applied ethics which studies and analyzes such social and ethical impacts of ICT. The present essay concerns this broad new field of applied ethics.

The more specific term “computer ethics” has been used to refer to applications by professional philosophers of traditional Western theories like utilitarianism, Kantianism, or virtue ethics, to ethical cases that significantly involve computers and computer networks. “Computer ethics” also has been used to refer to a kind of professional ethics in which computer professionals apply codes of ethics and standards of good practice within their profession. In addition, other more specific names, like “cyberethics” and “Internet ethics”, have been used to refer to aspects of computer ethics associated with the Internet.

During the past several decades, the robust and rapidly growing field of computer and information ethics has generated new university courses, research professorships, research centers, conferences, workshops, professional organizations, curriculum materials, books and journals.


1. Some Historical Milestones

1.1 The Foundation of Computer and Information Ethics

In the mid 1940s, innovative developments in science and philosophy led to the creation of a new branch of ethics that would later be called “computer ethics” or “information ethics”. The founder of this new philosophical field was the American scholar Norbert Wiener, a professor of mathematics and engineering at MIT. During the Second World War, together with colleagues in America and Great Britain, Wiener helped to develop electronic computers and other new and powerful information technologies. While engaged in this war effort, Wiener and colleagues created a new branch of applied science that Wiener named “cybernetics” (from the Greek word for the pilot of a ship). Even while the War was raging, Wiener foresaw enormous social and ethical implications of cybernetics combined with electronic computers. He predicted that, after the War, the world would undergo “a second industrial revolution” — an “automatic age” with “enormous potential for good and for evil” that would generate a staggering number of new ethical challenges and opportunities.

When the War ended, Wiener wrote the book Cybernetics (1948) in which he described his new branch of applied science and identified some social and ethical implications of electronic computers. Two years later he published The Human Use of Human Beings (1950), a book in which he explored a number of ethical issues that computer and information technology would likely generate. The issues that he identified in those two books, plus his later book God and Golem, Inc. (1963), included topics that are still important today: computers and security, computers and unemployment, responsibilities of computer professionals, computers for persons with disabilities, computers and religion, information networks and globalization, virtual communities, teleworking, merging of human bodies with machines, robot ethics, artificial intelligence, and a number of other subjects. (See Bynum 2000, 2004, 2005, 2006.)

Although he coined the name “cybernetics” for his new science, Wiener apparently did not see himself as also creating a new branch of ethics. As a result, he did not coin a name like “computer ethics” or “information ethics”. These terms came into use decades later. (See the discussion below.) In spite of this, Wiener's three relevant books (1948, 1950, 1963) do lay down a powerful foundation, and do use an effective methodology, for today's field of computer and information ethics. His thinking, however, was far ahead of other scholars; and, at the time, many people considered him to be an eccentric scientist who was engaging in flights of fantasy about ethics. Apparently, no one — not even Wiener himself — recognized the profound importance of his ethics achievements; and nearly two decades would pass before some of the social and ethical impacts of information technology, which Wiener had predicted in the late 1940s, would become obvious to other scholars and to the general public.

In The Human Use of Human Beings, Wiener explored some likely effects of information technology upon key human values like life, health, happiness, abilities, knowledge, freedom, security, and opportunities. The metaphysical ideas and analytical methods that he employed were so powerful and wide-ranging that they could be used effectively for identifying, analyzing and resolving social and ethical problems associated with all kinds of information technology, including, for example, computers and computer networks; radio, television and telephones; news media and journalism; even books and libraries. Because of the breadth of Wiener's concerns and the applicability of his ideas and methods to every kind of information technology, the term “information ethics” is an apt name for the new field of ethics that he founded. As a result, the term “computer ethics”, as it is typically used today, names only a subfield of Wiener's much broader concerns.[1]

In laying down a foundation for information ethics, Wiener developed a cybernetic view of human nature and society, which led him to an ethically suggestive account of the purpose of a human life. Based upon this, he adopted “great principles of justice” that he believed all societies ought to follow. These powerful ethical concepts enabled Wiener to analyze information ethics issues of all kinds.

A cybernetic view of human nature

Wiener's cybernetic understanding of human nature stressed the physical structure of the human body and the remarkable potential for learning and creativity that human physiology makes possible. While explaining human intellectual potential, he regularly compared the human body to the physiology of less intelligent creatures like insects:

Cybernetics takes the view that the structure of the machine or of the organism is an index of the performance that may be expected from it. The fact that the mechanical rigidity of the insect is such as to limit its intelligence while the mechanical fluidity of the human being provides for his almost indefinite intellectual expansion is highly relevant to the point of view of this book. … man's advantage over the rest of nature is that he has the physiological and hence the intellectual equipment to adapt himself to radical changes in his environment. The human species is strong only insofar as it takes advantage of the innate, adaptive, learning faculties that its physiological structure makes possible. (Wiener 1954, pp. 57-58, italics in the original)

Given the physiology of human beings, it is possible for them to take in a wide diversity of information from the external world, access information about conditions and events within their own bodies, and process all that information in ways that constitute reasoning, calculating, wondering, deliberating, deciding and many other intellectual activities. Wiener concluded that the purpose of a human life is to flourish as the kind of information-processing organisms that humans naturally are:

I wish to show that the human individual, capable of vast learning and study, which may occupy almost half of his life, is physically equipped, as the ant is not, for this capacity. Variety and possibility are inherent in the human sensorium — and are indeed the key to man's most noble flights — because variety and possibility belong to the very structure of the human organism. (Wiener 1954, pp. 51-52)

Underlying metaphysics

Wiener's account of human nature presupposed a metaphysical view of the universe that considers the world and all the entities within it, including humans, to be combinations of matter-energy and information. Everything in the world is a mixture of both of these, and thinking, according to Wiener, is actually a kind of information processing. Consequently, the brain

does not secrete thought “as the liver does bile”, as the earlier materialists claimed, nor does it put it out in the form of energy, as the muscle puts out its activity. Information is information, not matter or energy. No materialism which does not admit this can survive at the present day. (Wiener 1948, p. 155)

According to Wiener's metaphysical view, everything in the universe comes into existence, persists, and then disappears because of the continuous mixing and mingling of information and matter-energy. Living organisms, including human beings, are actually patterns of information that persist through an ongoing exchange of matter-energy. Thus, he says of human beings,

We are but whirlpools in a river of ever-flowing water. We are not stuff that abides, but patterns that perpetuate themselves. (Wiener 1954, p. 96)

The individuality of the body is that of a flame…of a form rather than of a bit of substance. (Wiener 1954, p. 102)

Using the language of today's “information age” we would say that, according to Wiener, human beings are “information objects”; and their intellectual capacities, as well as their personal identities, are dependent upon persisting patterns of information and information processing within the body, rather than on specific bits of matter-energy.

Justice and human flourishing

According to Wiener, for human beings to flourish they must be free to engage in creative and flexible actions and thereby maximize their full potential as intelligent, decision-making beings in charge of their own lives. This is the purpose of a human life. Because people have various levels of talent and possibility, however, one person's achievements will be different from those of others. It is possible, though, to lead a good human life — to flourish — in an indefinitely large number of ways; for example, as a diplomat, scientist, teacher, nurse, doctor, soldier, housewife, midwife, musician, artist, tradesman, artisan, and so on.

This understanding of the purpose of a human life led Wiener to adopt what he called “great principles of justice” upon which society should be built. He believed that adherence to those principles by a society would maximize a person's ability to flourish through variety and flexibility of human action. Although Wiener stated his “great principles”, he did not assign names to them. For purposes of easy reference, let us call them “The Principle of Freedom”, “The Principle of Equality” and “The Principle of Benevolence”. Using Wiener's own words yields the following list of “great principles” (1954, pp. 105-106):

The Principle of Freedom

Justice requires “the liberty of each human being to develop in his freedom the full measure of the human possibilities embodied in him.”

The Principle of Equality

Justice requires “the equality by which what is just for A and B remains just when the positions of A and B are interchanged.”

The Principle of Benevolence

Justice requires “a good will between man and man that knows no limits short of those of humanity itself.”

Given Wiener's cybernetic account of human nature and society, it follows that people are fundamentally social beings, and that they can reach their full potential only when they are part of a community of similar beings. Society, therefore, is essential to a good human life. Despotic societies, however, actually stifle human freedom; and indeed they violate all three of the “great principles of justice”. For this reason, Wiener explicitly adopted a fourth principle of justice to assure that the first three would not be violated. Let us call this additional principle “The Principle of Minimum Infringement of Freedom”:

The Principle of Minimum Infringement of Freedom

What compulsion the very existence of the community and the state may demand must be exercised in such a way as to produce no unnecessary infringement of freedom. (1954, p. 106)

A refutation of ethical relativism

If one grants Wiener's account of a good society and of human nature, it follows that a wide diversity of cultures — with different customs, languages, religions, values and practices — could provide a context in which humans can flourish. Sometimes ethical relativists use the existence of different cultures as proof that there is not — and could not be — an underlying ethical foundation for societies all around the globe. In response to such relativism, Wiener could argue that, given his understanding of human nature and the purpose of a human life, we can embrace and welcome a rich variety of cultures and practices while still advocating adherence to “the great principles of justice”. Those principles offer a cross-cultural foundation for ethics, even though they leave room for immense cultural diversity. The one restriction that Wiener would require in any society is that it must provide a context where humans can realize their full potential as sophisticated information-processing agents, making decisions and choices, and thereby taking responsibility for their own lives. Wiener believed that this is possible only where significant freedom, equality and human compassion prevail.

Methodology in information ethics

Because Wiener did not think of himself as creating a new branch of ethics, he did not provide metaphilosophical comments about what he was doing while analyzing an information ethics issue or case. Instead, he plunged directly into his analyses. Consequently, if we want to know about Wiener's method of analysis, we need to observe what he does, rather than look for any metaphilosophical commentary upon his own procedures.

When observing Wiener's way of analyzing information ethics issues and trying to resolve them, we find — for example, in The Human Use of Human Beings — that he tries to assimilate new cases by applying already existing, ethically acceptable laws, rules, and practices. In any given society, there is a network of existing practices, laws, rules and principles that govern human behavior within that society. These “policies” — to borrow a helpful word from Moor (1985) — constitute a “received policy cluster” (see Bynum and Schubert 1997); and in a reasonably just society, they can serve as a good starting point for developing an answer to any information ethics question. Wiener's methodology is to combine the “received policy cluster” of one's society with his account of human nature, plus his “great principles of justice”, plus critical skills in clarifying vague or ambiguous language. In this way, he achieved a very effective method for analyzing information ethics issues. Borrowing from Moor's later, and very apt, description of computer ethics methodology (Moor 1985), we can describe Wiener's methodology as follows:

  1. Identify an ethical question or case regarding the integration of information technology into society. Typically this focuses upon technology-generated possibilities that could affect (or are already affecting) life, health, security, happiness, freedom, knowledge, opportunities, or other key human values.
  2. Clarify any ambiguous or vague ideas or principles that may apply to the case or the issue in question.
  3. If possible, apply already existing, ethically acceptable principles, laws, rules, and practices (the “received policy cluster”) that govern human behavior in the given society.
  4. If ethically acceptable precedents, traditions and policies are insufficient to settle the question or deal with the case, use the purpose of a human life plus the great principles of justice to find a solution that fits as well as possible into the ethical traditions of the given society.

In an essentially just society — that is, in a society where the “received policy cluster” is reasonably just — this method of analyzing and resolving information ethics issues will likely result in ethically good solutions that can be assimilated into the society.

Note that this way of doing information ethics does not require the expertise of a trained philosopher (although such expertise might prove to be helpful in many situations). Any adult who functions successfully in a reasonably just society is likely to be familiar with the existing customs, practices, rules and laws that govern a person's behavior in that society and enable one to tell whether a proposed action or policy would be accepted as ethical. So those who must cope with the introduction of new information technology — whether they are computer professionals, business people, workers, teachers, parents, public-policy makers, or others — can and should engage in information ethics by helping to integrate new information technology into society in an ethically acceptable way. Information ethics, understood in this very broad sense, is too important to be left only to information professionals or to philosophers. Wiener's information ethics interests, ideas and methods were very broad, covering not only topics in the specific field of “computer ethics”, as we would call it today, but also issues in related areas that, today, are called “agent ethics”, “Internet ethics”, and “nanotechnology ethics”. The purview of Wiener's ideas and methods is even broad enough to encompass subfields like journalism ethics, library ethics, and the ethics of bioengineering.

Even in the late 1940s, Wiener made it clear that, on his view, the integration into society of the newly invented computing and information technology would lead to the remaking of society — to “the second industrial revolution” — “the automatic age”. It would affect every walk of life, and would be a multi-faceted, on-going process requiring decades of effort. In Wiener's own words, the new information technology had placed human beings “in the presence of another social potentiality of unheard-of importance for good and for evil.” (1948, p. 27) However, because he did not think of himself as creating a new branch of ethics, Wiener did not coin names, such as “computer ethics” or “information ethics”, to describe what he was doing. These terms — beginning with “computer ethics” — came into common use years later, starting in the mid 1970s with the work of Walter Maner.

Today, the “information age” that Wiener predicted half a century ago has come into existence; and the metaphysical and scientific foundation for information ethics that he laid down continues to provide insight and effective guidance for understanding and resolving ethical challenges engendered by information technologies of all kinds.

1.2 Defining Computer Ethics

In 1976, nearly three decades after the publication of Wiener's book Cybernetics, Walter Maner noticed that the ethical questions and problems considered in his Medical Ethics course at Old Dominion University often became more complicated or significantly altered when computers got involved. Sometimes the addition of computers, it seemed to Maner, actually generated wholly new ethics problems that would not have existed if computers had not been invented. He concluded that there should be a new branch of applied ethics similar to already existing fields like medical ethics and business ethics; and he decided to name the proposed new field “computer ethics”. (At that time, Maner did not know about the computer ethics works of Norbert Wiener.) He defined the proposed new field as one that studies ethical problems “aggravated, transformed or created by computer technology”. He developed an experimental computer ethics course designed primarily for students in university-level computer science programs. His course was a success, and students at his university wanted him to teach it regularly. He complied with their wishes and also created, in 1978, a “starter kit” on teaching computer ethics, which he prepared for dissemination to attendees of workshops that he ran and speeches that he gave at philosophy conferences and computing science conferences in America. In 1980, Helvetia Press and the National Information and Resource Center on Teaching Philosophy published Maner's computer ethics “starter kit” as a monograph (Maner 1980). It contained curriculum materials and pedagogical advice for university teachers. It also included a rationale for offering such a course in a university, suggested course descriptions for university catalogs, a list of course objectives, teaching tips, and discussions of topics like privacy and confidentiality, computer crime, computer decisions, technological dependence and professional codes of ethics. During the early 1980s, Maner's Starter Kit was widely disseminated by Helvetia Press to colleges and universities in America and elsewhere. Meanwhile Maner continued to conduct workshops and teach courses in computer ethics. As a result, a number of scholars, especially philosophers and computer scientists, were introduced to computer ethics because of Maner's trailblazing efforts.

The “uniqueness debate”

While Maner was developing his new computer ethics course in the mid-to-late 1970s, a colleague of his in the Philosophy Department at Old Dominion University, Deborah Johnson, became interested in his proposed new field. She was especially interested in Maner's view that computers generate wholly new ethical problems, for she did not believe that this was true. As a result, Maner and Johnson began discussing ethics cases that allegedly involved new problems brought about by computers. In these discussions, Johnson granted that computers did indeed transform old ethics problems in interesting and important ways — that is, “give them a new twist” — but she did not agree that computers generated ethically unique problems that had never been seen before. The resulting Maner-Johnson discussion initiated a fruitful series of comments and publications on the nature and uniqueness of computer ethics — a series of scholarly exchanges that started with Maner and Johnson and later spread to other scholars. The following passage, from Maner's ETHICOMP95 keynote address, drew a number of other people into the discussion:

I have tried to show that there are issues and problems that are unique to computer ethics. For all of these issues, there was an essential involvement of computing technology. Except for this technology, these issues would not have arisen, or would not have arisen in their highly altered form. The failure to find satisfactory non-computer analogies testifies to the uniqueness of these issues. The lack of an adequate analogy, in turn, has interesting moral consequences. Normally, when we confront unfamiliar ethical problems, we use analogies to build conceptual bridges to similar situations we have encountered in the past. Then we try to transfer moral intuitions across the bridge, from the analog case to our current situation. Lack of an effective analogy forces us to discover new moral values, formulate new moral principles, develop new policies, and find new ways to think about the issues presented to us. (Maner 1996, p. 152)

Over the decade that followed this provocative passage, the extended “uniqueness debate” led to a number of useful contributions to computer and information ethics. (For some example publications, see Johnson 1985, 1994, 1999, 2001; Maner 1980, 1996, 1999; Gorniak-Kocikowska 1996; Tavani 2002, 2005; Himma 2003; Floridi and Sanders 2004; Mather 2005; and Bynum 2006, 2007.)

An agenda-setting textbook

By the early 1980s, Johnson had joined the staff of Rensselaer Polytechnic Institute and had secured a grant to prepare a set of teaching materials — pedagogical modules concerning computer ethics — that turned out to be very successful. She incorporated them into a textbook, Computer Ethics, which was published in 1985 (Johnson 1985). On page 1, she noted that computers “pose new versions of standard moral problems and moral dilemmas, exacerbating the old problems, and forcing us to apply ordinary moral norms in uncharted realms.” She did not grant Maner's claim, however, that computers create wholly new ethical problems. Instead, she described computer ethics issues as old ethical problems that are “given a new twist” by computer technology.

Johnson's book Computer Ethics was the first major textbook in the field, and it quickly became the primary text used in computer ethics courses offered at universities in English-speaking countries. For more than a decade, her textbook set the computer ethics research agenda on topics such as ownership of software and intellectual property, computing and privacy, responsibility of computer professionals, and fair distribution of technology and human power. In later editions (1994, 2001), Johnson added new ethical topics like “hacking” into people's computers without their permission, computer technology for persons with disabilities, and the Internet's impact upon democracy.

Also in later editions of Computer Ethics, Johnson continued the “uniqueness-debate” discussion, noting for example that new information technologies provide new ways to “instrument” human actions. Because of this, she agreed with Maner that new specific ethics questions had been generated by computer technology — for example, “Should ownership of software be protected by law?” or “Do huge databases of personal information threaten privacy?” — but she argued that such questions are merely “new species of old moral issues”, such as protection of human privacy or ownership of intellectual property. They are not, she insisted, wholly new ethics problems requiring additions to traditional ethical theories, as Maner had claimed (Maner 1996).

1.3 An Influential Computer Ethics Theory

The year 1985 was a “watershed year” in the history of computer ethics, not only because of the appearance of Johnson's agenda-setting textbook, but also because James Moor's classic paper, “What Is Computer Ethics?” was published in a special computer-ethics issue of the journal Metaphilosophy.[2] There Moor provided an account of the nature of computer ethics that was broader and more ambitious than the definitions of Maner or Johnson. He went beyond descriptions and examples of computer ethics problems by offering an explanation of why computing technology raises so many ethical questions compared to other kinds of technology. Moor's explanation of the revolutionary power of computer technology was that computers are “logically malleable”:

Computers are logically malleable in that they can be shaped and molded to do any activity that can be characterized in terms of inputs, outputs and connecting logical operations … . Because logic applies everywhere, the potential applications of computer technology appear limitless. The computer is the nearest thing we have to a universal tool. Indeed, the limits of computers are largely the limits of our own creativity. (Moor, 1985, 269)

The logical malleability of computer technology, said Moor, makes it possible for people to do a vast number of things that they were not able to do before. Since no one could do them before, the question never arose as to whether one ought to do them. In addition, because they could not be done before, no laws or standards of good practice or specific ethical rules were established to govern them. Moor called such situations “policy vacuums”, and some of them might generate “conceptual muddles”:

A typical problem in computer ethics arises because there is a policy vacuum about how computer technology should be used. Computers provide us with new capabilities and these in turn give us new choices for action. Often, either no policies for conduct in these situations exist or existing policies seem inadequate. A central task of computer ethics is to determine what we should do in such cases, that is, formulate policies to guide our actions … . One difficulty is that along with a policy vacuum there is often a conceptual vacuum. Although a problem in computer ethics may seem clear initially, a little reflection reveals a conceptual muddle. What is needed in such cases is an analysis that provides a coherent conceptual framework within which to formulate a policy for action. (Moor, 1985, 266)

In the late 1980s, Moor's “policy vacuum” explanation of the need for computer ethics and his account of the revolutionary “logical malleability” of computer technology quickly became very influential among a growing number of computer ethics scholars. He introduced additional ideas in the 1990s, including the important notion of core human values: According to Moor, some human values — such as life, health, happiness, security, resources, opportunities, and knowledge — are so important to the continued survival of any community that essentially all communities do value them. Indeed, if a community did not value the “core values”, it soon would cease to exist. Moor used “core values” to examine computer ethics topics like privacy and security (Moor, 1997), and to add an account of justice, which he called “just consequentialism” (Moor, 1999), a theory that combines “core values” and consequentialism with Bernard Gert's deontological notion of “moral impartiality” using “the blindfold of justice” (Gert, 1998).

Moor's approach to computer ethics is a practical theory that provides a broad perspective on the nature of the “information revolution”. By using the notions of “logical malleability”, “policy vacuums”, “conceptual muddles”, “core values” and “just consequentialism”, he provides the following problem-solving method:

  1. Identify a policy vacuum generated by computing technology.
  2. Eliminate any conceptual muddles.
  3. Use the core values and the ethical resources of just consequentialism to revise existing — but inadequate — policies, or else to create new policies that justly eliminate the vacuum and resolve the original ethical issue.

The third step is accomplished by combining deontology and consequentialism — which traditionally have been considered incompatible rival ethics theories — to achieve the following practical results:

If the blindfold of justice is applied to [suggested] computing policies, some policies will be regarded as unjust by all rational, impartial people, some policies will be regarded as just by all rational, impartial people, and some will be in dispute. This approach is good enough to provide just constraints on consequentialism. We first require that all computing policies pass the impartiality test. Clearly, our computing policies should not be among those that every rational, impartial person would regard as unjust. Then we can further select policies by looking at their beneficial consequences. We are not ethically required to select policies with the best possible outcomes, but we can assess the merits of the various policies using consequentialist considerations and we may select very good ones from those that are just. (Moor, 1999, 68)

1.4 Computing and Human Values

Beginning with the computer ethics works of Norbert Wiener (1948, 1950, 1963), a common thread has run through much of the history of computer ethics; namely, concern for protecting and advancing central human values, such as life, health, security, happiness, freedom, knowledge, resources, power and opportunity. Thus, most of the specific issues that Wiener dealt with are cases of defending or advancing such values. For example, by working to prevent massive unemployment caused by robotic factories, Wiener tried to preserve security, resources and opportunities for factory workers. Similarly, by arguing against the use of decision-making war-game machines, Wiener tried to diminish threats to security and peace.

This “human-values approach” to computer ethics has been very fruitful. It has served, for example, as an organizing theme for major computer-ethics conferences, such as the 1991 National Conference on Computing and Values at Southern Connecticut State University (see the section below on “exponential growth”), which was devoted to the impacts of computing upon security, property, privacy, knowledge, freedom and opportunities.[3] In the late 1990s, a similar approach to computer ethics, called “value-sensitive computer design”, emerged based upon the insight that potential computer-ethics problems can be avoided, while new technology is under development, by anticipating possible harm to human values and designing new technology from the very beginning in ways that prevent such harm. (See, for example, Friedman and Nissenbaum, 1996; Friedman, 1997; Brey, 2000; Introna and Nissenbaum, 2000; Introna, 2005a; Flanagan, et al., 2007.)

1.5 Professional Ethics and Computer Ethics

In the early 1990s, a different emphasis within computer ethics was advocated by Donald Gotterbarn. He believed that computer ethics should be seen as a professional ethics devoted to the development and advancement of standards of good practice and codes of conduct for computing professionals. Thus, in 1991, in the article “Computer Ethics: Responsibility Regained”, Gotterbarn said:

There is little attention paid to the domain of professional ethics — the values that guide the day-to-day activities of computing professionals in their role as professionals. By computing professional I mean anyone involved in the design and development of computer artifacts. … The ethical decisions made during the development of these artifacts have a direct relationship to many of the issues discussed under the broader concept of computer ethics. (Gotterbarn, 1991)

Throughout the 1990s, with this aspect of computer ethics in mind, Gotterbarn worked with other professional-ethics advocates (for example, Keith Miller, Dianne Martin, Chuck Huff and Simon Rogerson) in a variety of projects to advance professional responsibility among computer practitioners. Even before 1991, Gotterbarn had been part of a committee of the ACM (Association for Computing Machinery) to create the third version of that organization's “Code of Ethics and Professional Conduct” (adopted by the ACM in 1992, see Anderson, et al., 1993). Later, Gotterbarn and colleagues in the ACM and the Computer Society of the IEEE (Institute of Electrical and Electronics Engineers) developed licensing standards for software engineers. In addition, Gotterbarn headed a joint taskforce of the IEEE and ACM to create the “Software Engineering Code of Ethics and Professional Practice” (adopted by those organizations in 1999; see Gotterbarn, Miller and Rogerson, 1997).

In the late 1990s, Gotterbarn created the Software Engineering Ethics Research Institute (SEERI) at East Tennessee State University (see http://seeri.etsu.edu/); and in the early 2000s, together with Simon Rogerson, he developed a computer program called SoDIS (Software Development Impact Statements) to assist individuals, companies and organizations in the preparation of ethical “stakeholder analyses” for determining likely ethical impacts of software development projects (Gotterbarn and Rogerson, 2005). These and many other projects focused attention upon professional responsibility and advanced the professionalization and ethical maturation of computing practitioners. (See the bibliography below for works by R. Anderson, D. Gotterbarn, C. Huff, C. D. Martin, K. Miller, and S. Rogerson.)

1.6 Uniqueness and Global Information Ethics

In 1995, in her ETHICOMP95 presentation “The Computer Revolution and the Problem of Global Ethics”, Krystyna Górniak-Kocikowska made a startling prediction (see Górniak, 1996). She argued that computer ethics eventually will evolve into a global ethic applicable in every culture on earth. According to this “Górniak hypothesis”, regional ethical theories like Europe's Benthamite and Kantian systems, as well as the diverse ethical systems embedded in other cultures of the world, all derive from “local” histories and customs and are unlikely to be applicable world-wide. Computer and information ethics, on the other hand, Górniak argued, has the potential to provide a global ethic suitable for the Information Age:

  • a new ethical theory is likely to emerge from computer ethics in response to the computer revolution. The newly emerging field of information ethics, therefore, is much more important than even its founders and advocates believe. (p. 177)
  • The very nature of the Computer Revolution indicates that the ethic of the future will have a global character. It will be global in a spatial sense, since it will encompass the entire globe. It will also be global in the sense that it will address the totality of human actions and relations. (p. 179)
  • Computers do not know borders. Computer networks … have a truly global character. Hence, when we are talking about computer ethics, we are talking about the emerging global ethic. (p. 186)
  • the rules of computer ethics, no matter how well thought through, will be ineffective unless respected by the vast majority of or maybe even all computer users. … In other words, computer ethics will become universal, it will be a global ethic. (p. 187)

The provocative “Górniak hypothesis” was a significant contribution to the ongoing “uniqueness debate”, and it reinforced Maner's claim — which he made at the same ETHICOMP95 conference in his keynote address — that information technology “forces us to discover new moral values, formulate new moral principles, develop new policies, and find new ways to think about the issues presented to us.” (Maner 1996, p. 152) Górniak did not speculate about the globally relevant concepts and principles that would evolve from information ethics. She merely predicted that such a theory would emerge over time because of the global nature of the Internet and the resulting ethics conversation among all the cultures of the world.

1.7 Information Ethics

Some important recent developments, which began after 1995, seem to be confirming Górniak's hypothesis — in particular, the information ethics theory of Luciano Floridi (see, for example, Floridi, 1999 and Floridi, 2005a) and the “Flourishing Ethics” theory that combines ideas from Aristotle, Wiener, Moor and Floridi (see Section 1.8 below, and also Bynum, 2006).

In developing his information ethics theory (henceforth FIE), Floridi argued that the purview of computer ethics — indeed of ethics in general — should be widened to include much more than simply human beings, their actions, intentions and characters. He offered FIE as another “macroethics” (his term) which is similar to utilitarianism, deontologism, contractualism, and virtue ethics, because it is intended to be applicable to all ethical situations. On the other hand, FIE is different from these more traditional Western theories because it is not intended to replace them, but rather to supplement them with further ethical considerations that go beyond the traditional theories, and that can be overridden, sometimes, by traditional ethical considerations. (Floridi, 2006)

The name ‘information ethics’ is appropriate to Floridi's theory, because it treats everything that exists as “informational” objects or processes:

[All] entities will be described as clusters of data, that is, as informational objects. More precisely, [any existing entity] will be a discrete, self-contained, encapsulated package containing

  1. the appropriate data structures, which constitute the nature of the entity in question, that is, the state of the object, its unique identity and its attributes; and
  2. a collection of operations, functions, or procedures, which are activated by various interactions or stimuli (that is, messages received from other objects or changes within itself) and correspondingly define how the object behaves or reacts to them.

At this level of abstraction, informational systems as such, rather than just living systems in general, are raised to the role of agents and patients of any action, with environmental processes, changes and interactions equally described informationally. (Floridi 2006, 9-10)
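Floridi's description of an entity as a discrete, encapsulated package of data structures plus operations activated by messages closely parallels the object-oriented model familiar from programming, where an object bundles state with the methods that respond to messages from other objects. Purely as an illustration of that parallel — not as part of Floridi's own formalism, and with hypothetical names — a minimal sketch in Python might look like this:

    # Illustrative sketch only: an entity modeled, at Floridi's level of
    # abstraction, as data structures plus message-activated operations.
    # All names here are hypothetical.

    class InformationalObject:
        def __init__(self, identity, attributes):
            # (1) Data structures: the state of the object, its unique
            #     identity and its attributes.
            self.identity = identity
            self.attributes = dict(attributes)

        # (2) Operations activated by messages received from other
        #     objects (or by changes within the object itself).
        def receive(self, message, payload=None):
            if message == "query":
                return self.attributes.get(payload)
            if message == "update":
                key, value = payload
                self.attributes[key] = value
                return value
            return None

    # At this level of abstraction a person and a document are described
    # in the same way: as informational objects exchanging messages.
    person = InformationalObject("alice", {"role": "author"})
    essay = InformationalObject("essay-1", {"title": "On the Infosphere"})
    person.receive("update", ("affiliation", "a university"))
    print(essay.receive("query", "title"))

On this reading, harming an entity would amount to corrupting or erasing parts of its characteristic data structures, which is what Floridi's notion of “entropy”, discussed next, is meant to capture.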

Since everything that exists, according to FIE, is an informational object or process, he calls the totality of all that exists — the universe considered as a whole — “the infosphere”. Objects and processes in the infosphere can be significantly damaged or destroyed by altering their characteristic data structures. Such damage or destruction Floridi calls “entropy”, and it results in partial “impoverishment of the infosphere”. Entropy in this sense is an evil that should be avoided or minimized, and Floridi offers four “fundamental principles”:

  1. Entropy ought not to be caused in the infosphere (null law).
  2. Entropy ought to be prevented in the infosphere.
  3. Entropy ought to be removed from the infosphere.
  4. The flourishing of informational entities as well as the whole infosphere ought to be promoted by preserving, cultivating and enriching their properties.

FIE is based upon the idea that everything in the infosphere has at least a minimum worth that should be ethically respected, even if that worth can be overridden by other considerations:

FIE suggests that there is something even more elemental than life, namely being — that is, the existence and flourishing of all entities and their global environment — and something more fundamental than suffering, namely entropy … . FIE holds that being/information has an intrinsic worthiness. It substantiates this position by recognizing that any informational entity has a Spinozian right to persist in its own status, and a Constructionist right to flourish, i.e., to improve and enrich its existence and essence. (Floridi 2006, p. 11)

By construing every existing entity in the universe as “informational”, with at least a minimal moral worth, FIE can supplement traditional ethical theories and go beyond them by shifting the focus of one's ethical attention away from the actions, characters, and values of human agents toward the “evil” (harm, dissolution, destruction) — “entropy” — suffered by objects and processes in the infosphere. With this approach, every existing entity — humans, other animals, plants, organizations, even non-living artifacts, electronic objects in cyberspace, pieces of intellectual property — can be interpreted as potential agents that affect other entities, and as potential patients that are affected by other entities. In this way, Floridi treats FIE as a “patient-based” non-anthropocentric ethical theory to be used in addition to the traditional “agent-based” anthropocentric ethical theories like utilitarianism, deontologism and virtue theory.

FIE, with its emphasis on “preserving and enhancing the infosphere”, enables Floridi to provide, among other things, an insightful and practical ethical theory of robot behavior and the behavior of other “artificial agents” like softbots and cyborgs. (See, for example, Floridi and Sanders, 2004.) FIE is an important component of a more ambitious project covering the entire new field of the Philosophy of Information.

1.8 Exponential Growth

The paragraphs above describe key contributions to “the history of ideas” in information and computer ethics, but the history of a discipline includes much more. The birth and development of a new academic field require cooperation among a “critical mass” of scholars, plus the creation of university courses, research centers, conferences, and academic journals. In this regard, the year 1985 was pivotal for information and computer ethics. The publication of Johnson's textbook, Computer Ethics, plus a special issue of the journal Metaphilosophy (October 1985) — including especially Moor's article “What Is Computer Ethics?” — provided excellent curriculum materials and a conceptual foundation for the field. In addition, Maner's earlier trailblazing efforts, and those of other people who had been inspired by Maner, had generated a “ready-made audience” of enthusiastic computer science and philosophy scholars. The stage was set for exponential growth.

In the United States, rapid growth occurred in information and computer ethics beginning in the mid-1980s. In 1987 the Research Center on Computing & Society (RCCS) was founded at Southern Connecticut State University. Shortly thereafter, the Director (the present author) joined with Walter Maner to organize “the National Conference on Computing and Values” (NCCV), an NSF-funded conference to bring together computer scientists, philosophers, public policy makers, lawyers, journalists, sociologists, psychologists, business people, and others. The goal was to examine and push forward some of the major sub-areas of information and computer ethics; namely, computer security, computers and privacy, ownership of intellectual property, computing for persons with disabilities, and the teaching of computer ethics. More than a dozen scholars from several different disciplines joined with Bynum and Maner to plan NCCV, which occurred in August 1991 at Southern Connecticut State University. Four hundred people from thirty-two American states and seven other countries attended; and the conference generated a wealth of new computer ethics materials — monographs, video programs and an extensive bibliography — that were disseminated to hundreds of colleges and universities during the following two years.

In that same decade, professional ethics advocates, such as Donald Gotterbarn, Keith Miller and Dianne Martin — and professional organizations, such as Computer Professionals for Social Responsibility (www.cpsr.org), the Electronic Frontier Foundation (www.eff.org), and the Special Interest Group on Computing and Society (SIGCAS) of the ACM — spearheaded projects focused upon professional responsibility for computer practitioners. Information and computer ethics became a required component of undergraduate computer science programs that were nationally accredited by the Computer Sciences Accreditation Board. In addition, the annual “Computers, Freedom and Privacy” conferences began in 1991 (see www.cfp.org), and the ACM adopted a new version of its Code of Ethics and Professional Conduct in 1992.

In 1995, rapid growth of information and computer ethics spread to Europe when the present author joined with Simon Rogerson of De Montfort University in Leicester, England to create the Centre for Computing and Social Responsibility (www.ccsr.cse.dmu.ac.uk) and to organize the first computer ethics conference in Europe, ETHICOMP95. That conference included attendees from fourteen different countries, mostly in Europe, and it became a key factor in generating a “critical mass” of computer ethics scholars in Europe. After 1995, every 18 months, another ETHICOMP conference was held in a different European country, including Spain (1996), the Netherlands (1998), Italy (1999), Poland (2001), Portugal (2002), Greece (2004) and Sweden (2005). In addition, in 1999, with assistance from Bynum and Rogerson, the Australian scholars John Weckert and Christopher Simpson created the Australian Institute of Computer Ethics (aice.net.au) and organized AICEC99 (Melbourne, Australia), which was the first international computer ethics conference south of the equator. In 2007 Rogerson and Bynum also headed ETHICOMP2007 in Tokyo, Japan and an ETHICOMP “Working Conference” in Kunming, China to help spread interest in information ethics to Asia.

A central figure in the rapid growth of information and computer ethics in Europe was Simon Rogerson. In addition to creating the Centre for Computing and Social Responsibility at De Montfort University and co-heading the influential ETHICOMP conferences, he also (1) added computer ethics to De Montfort University's curriculum, (2) created a graduate program with advanced computer ethics degrees, including the PhD, and (3) co-founded and co-edited (with Ben Fairweather) two computer ethics journals — The Journal of Information, Communication and Ethics in Society in 2003 (see the link in the Other Internet Resources section), and the electronic journal The ETHICOMP Journal in 2004 (see Other Internet Resources). Rogerson also served on the Information Technology Committee of the British Parliament, and participated in several computer ethics projects with agencies of the European Union.

Other important computer ethics developments in Europe in the late 1990s and early 2000s included, for example, (1) Luciano Floridi's creation of the Information Ethics Research Group at Oxford University in the mid 1990s; (2) Jeroen van den Hoven's founding, in 1997, of the CEPE (Computer Ethics: Philosophical Enquiry) series of computer ethics conferences, which occur alternately in Europe and America; (3) van den Hoven's creation of the journal Ethics and Information Technology in 1999; (4) Rafael Capurro's creation of the International Center for Information Ethics (icie.zkm.de) in 1999; (5) Capurro's creation of the journal International Review of Information Ethics in 2004; and (6) Bernd Carsten Stahl's creation of The International Journal of Technology and Human Interaction in 2005.

In summary, since 1985 computer ethics developments have proliferated exponentially with new conferences and conference series, new organizations, new research centers, new journals, textbooks, web sites, university courses, university degree programs, and distinguished professorships. Additional “sub-fields” and topics in information and computer ethics continually emerge as information technology itself grows and proliferates. Recent new topics include on-line ethics, “agent” ethics (robots, softbots), cyborg ethics (part human, part machine), the “open source movement”, electronic government, global information ethics, information technology and genetics, computing for developing countries, computing and terrorism, ethics and nanotechnology, to name only a few examples. (For specific publications and examples, see the list of selected resources below.)

Compared to many other scholarly disciplines, the field of computer ethics is very young. It has existed only since the late 1940s when Norbert Wiener created it. During the first three decades, it grew very little because Wiener's insights were far ahead of everyone else's. In the past 25 years, however, information and computer ethics has grown exponentially in the industrialized world, and the rest of the world has begun to take notice.

2. Example Topics in Computer Ethics

No matter which re-definition of computer ethics one chooses, the best way to understand the nature of the field is through some representative examples of the issues and problems that have attracted research and scholarship. Consider, for example, the following topics:

(See also the wide range of topics included in the recent anthology [Spinello and Tavani, 2001].)

2.1 Computers in the Workplace

Because computers are “universal tools” that can, in principle, perform almost any task, they obviously pose a threat to jobs. Although they occasionally need repair, computers don't require sleep, they don't get tired, they don't go home ill or take time off for rest and relaxation. At the same time, computers are often far more efficient than humans in performing many tasks. Therefore, economic incentives to replace humans with computerized devices are very high. Indeed, in the industrialized world many workers already have been replaced by computerized devices — bank tellers, auto workers, telephone operators, typists, graphic artists, security guards, assembly-line workers, and on and on. In addition, even professionals like medical doctors, lawyers, teachers, accountants and psychologists are finding that computers can perform many of their traditional professional duties quite effectively.

The employment outlook, however, is not all bad. Consider, for example, the fact that the computer industry already has generated a wide variety of new jobs: hardware engineers, software engineers, systems analysts, webmasters, information technology teachers, computer sales clerks, and so on. Thus it appears that, in the short run, computer-generated unemployment will be an important social problem; but in the long run, information technology will create many more jobs than it eliminates.

Even when a job is not eliminated by computers, it can be radically altered. For example, airline pilots still sit at the controls of commercial airplanes; but during much of a flight the pilot simply watches as a computer flies the plane. Similarly, those who prepare food in restaurants or make products in factories may still have jobs; but often they simply push buttons and watch as computerized devices actually perform the needed tasks. In this way, it is possible for computers to cause “de-skilling” of workers, turning them into passive observers and button pushers. Again, however, the picture is not all bad because computers also have generated new jobs which require new sophisticated skills to perform — for example, “computer assisted drafting” and “keyhole” surgery.

Another workplace issue concerns health and safety. As Forester and Morrison point out [Forester and Morrison, 140-72, Chapter 8], when information technology is introduced into a workplace, it is important to consider likely impacts upon health and job satisfaction of workers who will use it. It is possible, for example, that such workers will feel stressed trying to keep up with high-speed computerized devices — or they may be injured by repeating the same physical movement over and over — or their health may be threatened by radiation emanating from computer monitors. These are just a few of the social and ethical issues that arise when information technology is introduced into the workplace.

2.2 Computer Crime

In this era of computer “viruses” and international spying by “hackers” who are thousands of miles away, it is clear that computer security is a topic of concern in the field of Computer Ethics. The problem is not so much the physical security of the hardware (protecting it from theft, fire, flood, etc.), but rather “logical security”, which Spafford, Heaphy and Ferbrache [Spafford, et al, 1989] divide into five aspects:

  1. Privacy and confidentiality
  2. Integrity — assuring that data and programs are not modified without proper authority
  3. Unimpaired service
  4. Consistency — ensuring that the data and behavior we see today will be the same tomorrow
  5. Controlling access to resources

Malicious kinds of software, or “programmed threats”, provide a significant challenge to computer security. These include “viruses”, which cannot run on their own, but rather are inserted into other computer programs; “worms” which can move from machine to machine across networks, and may have parts of themselves running on different machines; “Trojan horses” which appear to be one sort of program, but actually are doing damage behind the scenes; “logic bombs” which check for particular conditions and then execute when those conditions arise; and “bacteria” or “rabbits” which multiply rapidly and fill up the computer's memory.

Computer crimes, such as embezzlement or planting of logic bombs, are normally committed by trusted personnel who have permission to use the computer system. Computer security, therefore, must also be concerned with the actions of trusted computer users.

Another major risk to computer security is the so-called “hacker” who breaks into someone's computer system without permission. Some hackers intentionally steal data or commit vandalism, while others merely “explore” the system to see how it works and what files it contains. These “explorers” often claim to be benevolent defenders of freedom and fighters against rip-offs by major corporations or spying by government agents. These self-appointed vigilantes of cyberspace say they do no harm, and claim to be helpful to society by exposing security risks. However, every act of hacking is harmful, because any known successful penetration of a computer system requires the owner to thoroughly check for damaged or lost data and programs. Even if the hacker did indeed make no changes, the computer's owner must run through a costly and time-consuming investigation of the compromised system [Spafford, 1992].

2.3 Privacy and Anonymity

One of the earliest computer ethics topics to arouse public interest was privacy. For example, in the mid-1960s the American government already had created large databases of information about private citizens (census data, tax records, military service records, welfare records, and so on). In the US Congress, bills were introduced to assign a personal identification number to every citizen and then gather all the government's data about each citizen under the corresponding ID number. A public outcry about “big-brother government” caused Congress to scrap this plan and led the US President to appoint committees to recommend privacy legislation. In the early 1970s, major computer privacy laws were passed in the USA. Ever since then, computer-threatened privacy has remained a topic of public concern. The ease and efficiency with which computers and computer networks can be used to gather, store, search, compare, retrieve and share personal information make computer technology especially threatening to anyone who wishes to keep various kinds of “sensitive” information (e.g., medical records) out of the public domain or out of the hands of those who are perceived as potential threats. During the past decade, commercialization and rapid growth of the internet; the rise of the world-wide-web; increasing “user-friendliness” and processing power of computers; and decreasing costs of computer technology have led to new privacy issues, such as data-mining, data matching, recording of “click trails” on the web, and so on [see Tavani, 1999].
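
Part of what makes data matching so threatening is how little machinery it requires. The following minimal sketch, added for illustration with entirely fictitious records (it is not part of the original article), joins two data sets collected for different purposes on a shared identifier, producing a combined profile that neither data holder had on its own.

    # Minimal sketch of "data matching", assuming Python 3 only.
    # All identifiers and records are fictitious.
    medical = {
        "id-1001": {"condition": "asthma"},
        "id-1002": {"condition": "diabetes"},
    }
    purchases = {
        "id-1001": {"recent_purchase": "inhaler refill"},
        "id-1003": {"recent_purchase": "running shoes"},
    }

    def match(records_a, records_b):
        """Merge the records that share an identifier across both data sets."""
        shared = records_a.keys() & records_b.keys()
        return {k: {**records_a[k], **records_b[k]} for k in shared}

    print(match(medical, purchases))
    # {'id-1001': {'condition': 'asthma', 'recent_purchase': 'inhaler refill'}}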

The variety of privacy-related issues generated by computer technology has led philosophers and other thinkers to re-examine the concept of privacy itself. Since the mid-1960s, for example, a number of scholars have elaborated a theory of privacy defined as “control over personal information” (see, for example, [Westin, 1967], [Miller, 1971], [Fried, 1984] and [Elgesem, 1996]). On the other hand, philosophers Moor and Tavani have argued that control of personal information is insufficient to establish or protect privacy, and “the concept of privacy itself is best defined in terms of restricted access, not control” [Tavani and Moor, 2001] (see also [Moor, 1997]). In addition, Nissenbaum has argued that there is even a sense of privacy in public spaces, or circumstances “other than the intimate.” An adequate definition of privacy, therefore, must take account of “privacy in public” [Nissenbaum, 1998]. As computer technology rapidly advances — creating ever new possibilities for compiling, storing, accessing and analyzing information — philosophical debates about the meaning of “privacy” will likely continue (see also [Introna, 1997]).

Questions of anonymity on the internet are sometimes discussed in the same context with questions of privacy and the internet, because anonymity can provide many of the same benefits as privacy. For example, if someone is using the internet to obtain medical or psychological counseling, or to discuss sensitive topics (for example, AIDS, abortion, gay rights, venereal disease, political dissent), anonymity can afford protection similar to that of privacy. Similarly, both anonymity and privacy on the internet can be helpful in preserving human values such as security, mental health, self-fulfillment and peace of mind. Unfortunately, privacy and anonymity also can be exploited to facilitate unwanted and undesirable computer-aided activities in cyberspace, such as money laundering, drug trading, terrorism, or preying upon the vulnerable (see [Marx, 2001] and [Nissenbaum, 1999]).

2.4 Intellectual Property

One of the more controversial areas of computer ethics concerns the intellectual property rights connected with software ownership. Some people, like Richard Stallman, who started the Free Software Foundation, believe that software ownership should not be allowed at all. He claims that all information should be free, and all programs should be available for copying, studying and modifying by anyone who wishes to do so [Stallman, 1993]. Others argue that software companies or programmers would not invest weeks and months of work and significant funds in the development of software if they could not get the investment back in the form of license fees or sales [Johnson, 1992]. Today's software industry is a multibillion-dollar part of the economy; and software companies claim to lose billions of dollars per year through illegal copying (“software piracy”). Many people think that software should be ownable, but “casual copying” of personally owned programs for one's friends should also be permitted (see [Nissenbaum, 1995]). The software industry claims that millions of dollars in sales are lost because of such copying. Ownership is a complex matter, since there are several different aspects of software that can be owned and three different types of ownership: copyrights, trade secrets, and patents. One can own the following aspects of a program (the sketch after this list illustrates the distinction between the first and third of them):

  1. The “source code” which is written by the programmer(s) in a high-level computer language like Java or C++.
  2. The “object code”, which is a machine-language translation of the source code.
  3. The “algorithm”, which is the sequence of machine commands that the source code and object code represent.
  4. The “look and feel” of a program, which is the way the program appears on the screen and interfaces with users.
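
To see why the first and third items are distinct kinds of property, consider the following minimal sketch, added for illustration (it is not part of the original article). The two functions are different pieces of source code, each of which could be protected by copyright as a distinct expression, yet both embody the same underlying algorithm, Euclid's method for the greatest common divisor, which is the sort of thing an algorithm patent would claim.

    # Minimal sketch, assuming Python 3: two different source-code expressions
    # of one and the same algorithm (Euclid's greatest-common-divisor method).
    def gcd_iterative(a, b):
        # One expression of the algorithm: a loop.
        while b:
            a, b = b, a % b
        return a

    def gcd_recursive(a, b):
        # A different expression of the same algorithm: recursion.
        return a if b == 0 else gcd_recursive(b, a % b)

    # Different source code, identical algorithmic behaviour.
    assert gcd_iterative(48, 18) == gcd_recursive(48, 18) == 6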

A very controversial issue today is owning a patent on a computer algorithm. A patent provides an exclusive monopoly on the use of the patented item, so the owner of an algorithm can deny others use of the mathematical formulas that are part of the algorithm. Mathematicians and scientists are outraged, claiming that algorithm patents effectively remove parts of mathematics from the public domain, and thereby threaten to cripple science. In addition, running a preliminary “patent search” to make sure that your “new” program does not violate anyone's software patent is a costly and time-consuming process. As a result, only very large companies with big budgets can afford to run such a search. This effectively eliminates many small software companies, stifling competition and decreasing the variety of programs available to society [The League for Programming Freedom, 1992].

2.5 Professional Responsibility

Computer professionals have specialized knowledge and often have positions with authority and respect in the community. For this reason, they are able to have a significant impact upon the world, including many of the things that people value. Along with such power to change the world comes the duty to exercise that power responsibly [Gotterbarn, 2001]. Computer professionals find themselves in a variety of professional relationships with other people [Johnson, 1994], including:

  • employer – employee
  • client – professional
  • professional – professional
  • society – professional

These relationships involve a diversity of interests, and sometimes these interests can come into conflict with each other. Responsible computer professionals, therefore, will be aware of possible conflicts of interest and try to avoid them.

Professional organizations in the USA, like the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE), have established codes of ethics, curriculum guidelines and accreditation requirements to help computer professionals understand and manage ethical responsibilities. For example, in 1991 a Joint Curriculum Task Force of the ACM and IEEE adopted a set of guidelines (“Computing Curricula 1991”) for college programs in computer science. The guidelines say that a significant component of computer ethics (in the broad sense) should be included in undergraduate education in computer science [Turner, 1991].

In addition, both the ACM and IEEE have adopted Codes of Ethics for their members. The most recent ACM Code (1992), for example, includes “general moral imperatives”, such as “avoid harm to others” and “be honest and trustworthy”. And also included are “more specific professional responsibilities” like “acquire and maintain professional competence” and “know and respect existing laws pertaining to professional work.” The IEEE Code of Ethics (1990) includes such principles as “avoid real or perceived conflicts of interest whenever possible” and “be honest and realistic in stating claims or estimates based on available data.”

The Accreditation Board for Engineering and Technology (ABET) has long required an ethics component in the computer engineering curriculum. And in 1991, the Computer Sciences Accreditation Commission/Computer Sciences Accreditation Board (CSAC/CSAB) also adopted the requirement that a significant component of computer ethics be included in any computer science degree-granting program that is nationally accredited [Conry, 1992].

It is clear that professional organizations in computer science recognize and insist upon standards of professional responsibility for their members.

2.6 Globalization

Computer ethics today is rapidly evolving into a broader and even more important field, which might reasonably be called “global information ethics”. Global networks like the Internet and especially the world-wide-web are connecting people all over the earth. As Krystyna Gorniak-Kocikowska perceptively notes in her paper, “The Computer Revolution and the Problem of Global Ethics” [Gorniak-Kocikowska, 1996], for the first time in history, efforts to develop mutually agreed standards of conduct, and efforts to advance and defend human values, are being made in a truly global context. So, for the first time in the history of the earth, ethics and values will be debated and transformed in a context that is not limited to a particular geographic region, or constrained by a specific religion or culture. This may very well be one of the most important social developments in history. Consider just a few of the global issues:

Global Laws

If computer users in the United States, for example, wish to protect their freedom of speech on the internet, whose laws apply? Nearly two hundred countries are already interconnected by the internet, so the United States Constitution (with its First Amendment protection for freedom of speech) is just a “local law” on the internet — it does not apply to the rest of the world. How can issues like freedom of speech, control of “pornography”, protection of intellectual property, invasions of privacy, and many others be governed by law when so many countries are involved? If a citizen in a European country, for example, has internet dealings with someone in a far-away land, and the government of that land considers those dealings to be illegal, can the European be tried by the courts in the far-away country?

Global Cyberbusiness

The world is very close to having technology that can provide electronic privacy and security on the internet sufficient to safely conduct international business transactions. Once this technology is in place, there will be a rapid expansion of global “cyberbusiness”. Nations with a technological infrastructure already in place will enjoy rapid economic growth, while the rest of the world lags behind. What will be the political and economic fallout from rapid growth of global cyberbusiness? Will accepted business practices in one part of the world be perceived as “cheating” or “fraud” in other parts of the world? Will a few wealthy nations widen the already big gap between rich and poor? Will political and even military confrontations emerge?

Global Education

If inexpensive access to the global information net is provided to rich and poor alike — to poverty-stricken people in ghettos, to poor nations in the “third world”, etc. — for the first time in history, nearly everyone on earth will have access to daily news from a free press; to texts, documents and art works from great libraries and museums of the world; to political, religious and social practices of peoples everywhere. What will be the impact of this sudden and profound “global education” upon political dictatorships, isolated communities, coherent cultures, religious practices, etc.? As great universities of the world begin to offer degrees and knowledge modules via the internet, will “lesser” universities be damaged or even forced out of business?

Information Rich and Information Poor

The gap between rich and poor nations, and even between rich and poor citizens in industrialized countries, is already disturbingly wide. As educational opportunities, business and employment opportunities, medical services and many other necessities of life move more and more into cyberspace, will gaps between the rich and the poor become even worse?

Bibliography

  • Adam, A. (2000), “Gender and Computer Ethics,” Computers and Society, 30(4): 17-24.
  • Adam, A. and J. Ofori-Amanfo (2000), “Does Gender Matter in Computer Ethics?” Ethics and Information Technology, 2(1): 37-47.
  • Anderson, R., D. Johnson, D. Gotterbarn and J. Perrolle (1993), “Using the New ACM Code of Ethics in Decision Making,” Communications of the ACM, 36: 98-107.
  • Begg, M.M. (2005), “Muslim Parents Guide: Making Responsible Use of Information and Communication Technologies at Home,” Centre for Computing and Social Responsibility, De Montfort University, Leicester, UK.
  • Bohman, James (2008), “The Transformation of the Public Sphere: Political Authority, Communicative Freedom, and Internet Publics,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 66-92.
  • Brennan, G. and P. Pettit (2008), “Esteem, Identifiability, and the Internet,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 175-94.
  • Brey, P. (2001), “Disclosive Computer Ethics,” in R. Spinello and H. Tavani (eds.), Readings in CyberEthics, Sudbury, MA: Jones and Bartlett.
  • Brey, P. (2006), “Evaluating the Social and Cultural Implications of the Internet,” Computers and Society, 36(3): 41-44.
  • Bynum, T. (1982), “A Discipline in its Infancy,” The Dallas Morning News, January 12, 1982, D/1, D/6.
  • Bynum, T. (1999), “The Development of Computer Ethics as a Philosophical Field of Study,” The Australian Journal of Professional and Applied Ethics, 1(1): 1-29.
  • Bynum, T. (2000), “The Foundation of Computer Ethics,” Computers and Society, 30(2): 6-13.
  • Bynum, T. (2004), “Ethical Challenges to Citizens of the ‘Automatic Age’: Norbert Wiener on the Information Society,” Journal of Information, Communication and Ethics in Society, 2(2): 65-74.
  • Bynum, T. (2005), “Norbert Wiener's Vision: the Impact of the ‘Automatic Age’ on our Moral Lives,” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives, Albany, NY: SUNY Press, 11-25.
  • Bynum, T. (2006), “Flourishing Ethics,” Ethics and Information Technology, 8(4): 157-173.
  • Bynum, T. (2007), “Norbert Wiener and the Rise of Information Ethics,” in J. van den Hoven and J. Weckert (eds.), Moral Philosophy and Information Technology, Cambridge: Cambridge University Press.
  • Bynum, T. (2008), “Norbert Wiener and the Rise of Information Ethics,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 8-25.
  • Bynum, T. and P. Schubert (1997), “How to do Computer Ethics — A Case Study: The Electronic Mall Bodensee,” in J. van den Hoven (ed.), Computer Ethics—Philosophical Enquiry, Rotterdam: Erasmus University Press, 85-95.
  • Capurro, R. (2007a), “Information Ethics for and from Africa,” International Review of Information Ethics, 2007: 3-13.
  • Capurro, R. (2007b), “Intercultural Information Ethics,” in R. Capurro, J. Frühbauer and T. Hausmanninger (eds.), Localizing the Internet: Ethical Issues in Intercultural Perspective, (ICIE Series, Volume 4), Munich: Fink, 2007: 21-38.
  • Capurro, R. (2006), “Towards an Ontological Foundation for Information Ethics,” Ethics and Information Technology, 8(4): 175-186.
  • Capurro, R. (2004), “The German Debate on the Information Society,” The Journal of Information, Communication and Ethics in Society, 2 (Supplement): 17-18.
  • Cavalier, R. (ed.) (2005), The Impact of the Internet on Our Moral Lives, Albany, NY: SUNY Press.
  • Cocking, D. (2008), “Plural Selves and Relational Identity: Intimacy and Privacy Online,” In J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 123-41.
  • Conry, S. (1992), “Interview on Computer Science Accreditation,” in T. Bynum and J. Fodor (creators), Computer Ethics in the Computer Science Curriculum (a video program), Kingston, NY: Educational Media Resources, Inc.
  • Edgar, S. (1997), Morality and Machines: Perspectives on Computer Ethics, Sudbury, MA: Jones and Bartlett.
  • Elgesem, D. (1995), “Data Privacy and Legal Argumentation,” Communication and Cognition, 28(1): 91-114.
  • Elgesem, D. (1996), “Privacy, Respect for Persons, and Risk,” in C. Ess (ed.), Philosophical Perspectives on Computer-Mediated Communication, Albany: SUNY Press, 45-66.
  • Elgesem, D. (2002), “What is Special about the Ethical Problems in Internet Research?” Ethics and Information Technology, 4(3): 195-203.
  • Elgesem, D. (2008), “Information Technology Research Ethics,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 354-75.
  • Ess, C. (1996), “The Political Computer: Democracy, CMC, and Habermas,” in C. Ess (ed.), Philosophical Perspectives on Computer-Mediated Communication, Albany: SUNY Press, 197-230.
  • Ess, C. (ed.) (2001a), Culture, Technology, Communication: Towards an Intercultural Global Village, Albany: SUNY Press.
  • Ess, C. (2001b), “What's Culture got to do with it? Cultural Collisions in the Electronic Global Village,” in C. Ess (ed.), Culture, Technology, Communication: Towards an Intercultural Global Village, Albany: SUNY Press, 1-50.
  • Ess, C. (2004), “Computer-Mediated Communication and Human-Computer Interaction,” in L. Floridi (ed.), The Blackwell Guide to the Philosophy of Computing and Information, Oxford: Blackwell, 76-91.
  • Ess, C. (2005), “Moral Imperatives for Life in an Intercultural Global Village,” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives, Albany: SUNY Press, 161-193.
  • Ess, C. (2008), “Culture and Global Networks: Hope for a Global Ethics?” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 195-225.
  • Fairweather, B. (1998), “No PAPA: Why Incomplete Codes of Ethics are Worse than None at all,” in G. Collste (ed.), Ethics and Information Technology, New Delhi: New Academic Publishers.
  • Flanagan, M., D. Howe, and H. Nissenbaum (2008), “Embodying Value in Technology: Theory and Practice,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 322-53.
  • Floridi, L. (1999), “Information Ethics: On the Theoretical Foundations of Computer Ethics”, Ethics and Information Technology, 1(1): 37-56.
  • Floridi, L. (ed.) (2004), The Blackwell Guide to the Philosophy of Computing and Information, Oxford: Blackwell.
  • Floridi, L. (2005b), “Internet Ethics: The Constructionist Values of Homo Poieticus,” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives, Albany: SUNY Press, 195-214.
  • Floridi, L. (2006a), “Information Ethics: Its Nature and Scope,” Computers and Society, 36(3): 21-36.
  • Floridi, L. (2006b), “Information Technologies and the Tragedy of the Good Will,” Ethics and Information Technology, 8(4): 253-262.
  • Floridi, L. (2008), “Information Ethics: Its Nature and Scope,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 40-65.
  • Floridi, L. and J. Sanders (2004), “The Foundationalist Debate in Computer Ethics,” in R. Spinello and H. Tavani (eds.), Readings in CyberEthics, 2nd edition, Sudbury, MA: Jones and Bartlett, 81-95.
  • Fodor, J. and T. Bynum (1992), What Is Computer Ethics? (a video program), Kingston, NY: Educational Media Resources, Inc.
  • Forester, T. and P. Morrison (1990), Computer Ethics: Cautionary Tales and Ethical Dilemmas in Computing, Cambridge, MA: MIT Press.
  • Fried, C. (1984), “Privacy,” in F. Schoeman (ed.), Philosophical Dimensions of Privacy, Cambridge: Cambridge University Press.
  • Friedman, B. (ed.) (1997), Human Values and the Design of Computer Technology, Cambridge: Cambridge University Press.
  • Friedman, B. and H. Nissenbaum (1996), “Bias in Computer Systems,” ACM Transactions on Information Systems, 14(3): 330-347.
  • Gert, B. (1998), Morality: Its Nature and Justification, Oxford: Oxford University Press.
  • Gert, B. (1999), “Common Morality and Computing,” Ethics and Information Technology, 1(1): 57-64.
  • Goldman, A. (2008), “The Social Epistemology of Blogging,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 111-22.
  • Gordon, W. (2008), “Moral Philosophy, Information Technology, and Copyright: The Grokster Case,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 270-300.
  • Gorniak-Kocikowska, K. (1996), “The Computer Revolution and the Problem of Global Ethics,” in T. Bynum and S. Rogerson (eds.), Global Information Ethics, Guildford, UK: Opragen Publications, 177-90.
  • Gorniak-Kocikowska, K. (2005) “From Computer Ethics to the Ethics of Global ICT Society,” in T. Bynum, G. Collste, and S. Rogerson (eds.), Proceedings of ETHICOMP2005 (CD-ROM), Center for Computing and Social Responsibility, Linköpings University.
  • Gorniak-Kocikowska, K. (2007), “ICT, Globalization and the Pursuit of Happiness: The Problem of Change,” in Proceedings of ETHICOMP2007, Tokyo: Meiji University Press.
  • Gotterbarn, D. (1991), “Computer Ethics: Responsibility Regained,” National Forum: The Phi Beta Kappa Journal, 71: 26-31.
  • Gotterbarn, D. (2001), “Informatics and Professional Responsibility,” Science and Engineering Ethics, 7(2): 221-30.
  • Gotterbarn, D. (2002) “Reducing Software Failures: Addressing the Ethical Risks of the Software Development Life Cycle,” Australian Journal of Information Systems, 9(2): 155-65.
  • Gotterbarn, D., K. Miller, and S. Rogerson (1997), “Software Engineering Code of Ethics,” Communications of the ACM, 40(11): 110-118.
  • Gotterbarn, D. and K. Miller (2004), “Computer Ethics in the Undergraduate Curriculum: Case Studies and the Joint Software Engineer's Code,” Journal of Computing Sciences in Colleges, 20(2): 156-167.
  • Gotterbarn, D. and S. Rogerson (2005), “Responsible Risk Analysis for Software Development: Creating the Software Development Impact Statement,” Communications of the Association for Information Systems, 15(40): 730-50.
  • Grodzinsky, F. (1997), “Computer Access for Students with Disabilities,” SIGCSE Bulletin, 29(1): 292-295.
  • Grodzinsky, F. (1999), “The Practitioner from Within: Revisiting the Virtues,” Computers and Society, 29(2): 9-15.
  • Grodzinsky, F., K. Miller and M. Wolfe (2003), “Ethical Issues in Open Source Software,” Journal of Information, Communication and Ethics in Society, 1(4): 193-205.
  • Grodzinsky, F. and H. Tavani (2002), “Ethical Reflections on Cyberstalking,” Computers and Society, 32(1): 22-32.
  • Grodzinsky, F. and H. Tavani (2004), “Verizon vs. the RIAA: Implications for Privacy and Democracy,” in J. Herkert (ed.), Proceedings of ISTAS 2004: The International Symposium on Technology and Society, Los Alamitos, CA: IEEE Computer Society Press.
  • Himma, K. (2003), “The Relationship Between the Uniqueness of Computer Ethics and its Independence as a Discipline in Applied Ethics,” Ethics and Information Technology, 5(4): 225-237.
  • Himma, K. (2004), “The Moral Significance of the Interest in Information: Reflections on a Fundamental Right to Information,” Journal of Information, Communication, and Ethics in Society, 2(4): 191-202.
  • Himma, K. (2007), “Artificial Agency, Consciousness, and the Criteria for Moral Agency: What Properties Must an Artificial Agent Have to be a Moral Agent?” in Proceedings of ETHICOMP2007, Tokyo: Meiji University Press.
  • Himma, K. (2004), “There's Something about Mary: The Moral Value of Things qua Information Objects”, Ethics and Information Technology, 6(3): 145-159.
  • Himma, K. (2006), “Hacking as Politically Motivated Civil Disobedience: Is Hacktivism Morally Justified?” in K. Himma (ed.), Readings in Internet Security: Hacking, Counterhacking, and Society, Sudbury, MA: Jones and Bartlett.
  • Huff, C., J. Fleming, and J. Cooper (1991), “The Social Basis of Gender Differences in Human-Computer Interaction,” in C. Martin (ed.), In Search of Gender-free Paradigms for Computer Science Education, Eugene, OR: ISTE Research Monographs, 19-32.
  • Huff, C. and T. Finholt (eds.) (1994), Social Issues in Computing: Putting Computers in Their Place, New York: McGraw-Hill.
  • Huff, C. and D. Martin (1995), “Computing Consequences: A Framework for Teaching Ethical Computing,” Communications of the ACM, 38(12): 75-84.
  • Huff, C. (2002), “Gender, Software Design, and Occupational Equity,” SIGCSE Bulletin: Inroads, 34: 112-115.
  • Huff, C. (2004), “Unintentional Power in the Design of Computing Systems.” in T. Bynum and S. Rogerson (eds.), Computer Ethics and Professional Responsibility, Oxford: Blackwell.
  • Huff, C., D. Johnson, and K. Miller (2003), “Virtual Harms and Real Responsibility,” Technology and Society Magazine (IEEE), 22(2): 12-19.
  • Introna, L. (1997), “Privacy and the Computer: Why We Need Privacy in the Information Society,” Metaphilosophy, 28(3): 259-275.
  • Introna, L. (2002), “On the (Im)Possibility of Ethics in a Mediated World,” Information and Organization, 12(2): 71-84.
  • Introna, L. (2005a), “Disclosive Ethics and Information Technology: Disclosing Facial Recognition Systems,” Ethics and Information Technology, 7(2): 75-86.
  • Introna, L. (2005b), “Phenomenological Approaches to Ethics and Information Technology,” The Stanford Encyclopedia of Philosophy (Fall 2005 Edition), Edward N. Zalta (ed.).
  • Introna, L. and H. Nissenbaum (2000), “Shaping the Web: Why the Politics of Search Engines Matters,” The Information Society, 16(3): 1-17.
  • Introna, L. and N. Pouloudi (2001), “Privacy in the Information Age: Stakeholders, Interests and Values.” in J. Sheth (ed.), Internet Marketing, Fort Worth, TX: Harcourt College Publishers, 373-388.
  • Johnson, D. (1985), Computer Ethics, First Edition, Englewood Cliffs, NJ: Prentice-Hall; Second Edition, Englewood Cliffs, NJ: Prentice-Hall, 1994; Third Edition Upper Saddle River, NJ: Prentice-Hall, 2001.
  • Johnson, D. (1997a), “Ethics Online,” Communications of the ACM, 40(1): 60-65.
  • Johnson, D. (1997b), “Is the Global Information Infrastructure a Democratic Technology?” Computers and Society, 27(4): 20-26.
  • Johnson, D. (2004), “Computer Ethics,” in L. Floridi (ed.), The Blackwell Guide to the Philosophy of Computing and Information, Oxford: Blackwell, 65-75.
  • Johnson, D. and H. Nissenbaum (eds.) (1995), Computing, Ethics & Social Values, Englewood Cliffs, NJ: Prentice Hall.
  • Johnson, D. and T. Powers (2008), “Computers as Surrogate Agents,” in J. van den Hoven and J. Weckert, (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 251-69.
  • Kocikowski, A. (1996), “Geography and Computer Ethics: An Eastern European Perspective,” in T. Bynum and S. Rogerson (eds.), Science and Engineering Ethics (Special Issue: Global Information Ethics), 2(2): 201-10.
  • Maner, W. (1980), Starter Kit in Computer Ethics, Hyde Park, NY: Helvetia Press and the National Information and Resource Center for Teaching Philosophy.
  • Maner, W. (1996), “Unique Ethical Problems in Information Technology,” in T. Bynum and S. Rogerson (eds.), Science and Engineering Ethics (Special Issue: Global Information Ethics), 2(2): 137-154.
  • Martin, C. and D. Martin (1990), “Professional Codes of Conduct and Computer Ethics Education,” Social Science Computer Review, 8(1): 96-108.
  • Martin, C., C. Huff, D. Gotterbarn, K. Miller, et al. (1996), “A Framework for Implementing and Teaching the Social and Ethical Impact of Computing,” Education and Information Technologies, 1(2): 101-122.
  • Martin, C., C. Huff, D. Gotterbarn, and K. Miller (1996), “Implementing a Tenth Strand in the Computer Science Curriculum” (Second Report of the Impact CS Steering Committee), Communications of the ACM, 39(12): 75-84.
  • Marx, G. (2001), “Identity and Anonymity: Some Conceptual Distinctions and Issues for Research,” in J. Caplan and J. Torpey (eds.), Documenting Individual Identity, Princeton: Princeton University Press.
  • Mather, K. (2005), “The Theoretical Foundation of Computer Ethics: Stewardship of the Information Environment,” in Contemporary Issues in Governance (Proceedings of GovNet Annual Conference, Melbourne, Australia, 28-30 November, 2005), Melbourne: Monash University.
  • Matthews, S. (2008), “Identity and Information Technology.” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 142-60.
  • Miller, A. (1971), The Assault on Privacy: Computers, Data Banks, and Dossiers, Ann Arbor: University of Michigan Press.
  • Miller, K. (2005), “Web standards: Why So Many Stray from the Narrow Path,” Science and Engineering Ethics, 11(3): 477-479.
  • Miller, K. and D. Larson (2005a), “Agile Methods and Computer Ethics: Raising the Level of Discourse about Technological Choices,” IEEE Technology and Society, 24(4): 36-43.
  • Miller, K. and D. Larson (2005b), “Angels and Artifacts: Moral Agents in the Age of Computers and Networks,” Journal of Information, Communication & Ethics in Society, 3(3): 151-157.
  • Miller, S. (2008), “Collective Responsibility and Information and Communication Technology,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 226-50.
  • Moor, J. (1979), “Are there Decisions Computers Should Never Make?” Nature and System, 1: 217-29.
  • Moor, J. (1985) “What Is Computer Ethics?” Metaphilosophy, 16(4): 266-75.
  • Moor, J. (1996), “Reason, Relativity and Responsibility in Computer Ethics,” Computers and Society, 28(1) (1998): 14-21; originally a keynote address at ETHICOMP96 in Madrid, Spain, 1996.
  • Moor, J. (1997), “Towards a Theory of Privacy in the Information Age,” Computers and Society, 27(3): 27-32.
  • Moor, J. (1999), “Just Consequentialism and Computing,” Ethics and Information Technology, 1(1): 65-69.
  • Moor, J. (2001), “The Future of Computer Ethics: You Ain't Seen Nothin' Yet,” Ethics and Information Technology, 3(2): 89-91.
  • Moor, J. (2005), “Should We Let Computers Get under Our Skin?” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives, Albany: SUNY Press, 121-138.
  • Moor, J. (2006), “The Nature, Importance, and Difficulty of Machine Ethics,” IEEE Intelligent Systems, 21(4): 18-21.
  • Moor, J. (2008) “Why We Need Better Ethics for Emerging Technologies,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 26-39.
  • Nissenbaum, H. (1995), “Should I Copy My Neighbor's Software?” in D. Johnson and H. Nissenbaum (eds), Computers, Ethics, and Social Responsibility, Englewood Cliffs, NJ: Prentice Hall.
  • Nissenbaum, H. (1997), “Can We Protect Privacy in Public?” in Proceedings of Computer Ethics—Philosophical Enquiry 97 (CEPE97), Rotterdam: Erasmus University Press, 191-204; reprinted Nissenbaum 1998a.
  • Nissenbaum, H. (1998a), “Protecting Privacy in an Information Age: The Problem of Privacy in Public,” Law and Philosophy, 17: 559-596.
  • Nissenbaum, H. (1998b), “Values in the Design of Computer Systems,” Computers and Society, 1998: 38-39.
  • Nissenbaum, H. (1999), “The Meaning of Anonymity in an Information Age,” The Information Society, 15: 141-144.
  • Nissenbaum, H. (2005a), “Hackers and the Contested Ontology of Cyberspace,” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives, Albany: SUNY Press, 139-160.
  • Nissenbaum, H. (2005b), “Where Computer Security Meets National Security,” Ethics and Information Technology, 7(2): 61-73.
  • Parker, D. (1968), “Rules of Ethics in Information Processing,” Communications of the ACM, 11: 198-201.
  • Parker, D. (1979), Ethical Conflicts in Computer Science and Technology. Arlington, VA: AFIPS Press.
  • Parker, D., S. Swope and B. Baker (1990), Ethical Conflicts in Information & Computer Science, Technology & Business, Wellesley, MA: QED Information Sciences.
  • Pecorino, P. and W. Maner (1985), “A Proposal for a Course on Computer Ethics,” Metaphilosophy, 16(4): 327-337.
  • Perrolle, J. (1987), Computers and Social Change: Information, Property, and Power, Belmont, CA: Wadsworth.
  • Pettit, P. (2008), “Trust, Reliance, and the Internet,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 161-74.
  • Rogerson, S. (1996), “The Ethics of Computing: The First and Second Generations,” The UK Business Ethics Network News, 6: 1-4.
  • Rogerson, S. (1998), “Computer and Information Ethics,” in R. Chadwick (ed.), Encyclopedia of Applied Ethics, San Diego, CA: Academic Press, 563-570.
  • Rogerson, S. (2004), “The Ethics of Software Development Project Management,” in T. Bynum and S. Rogerson (eds.), Computer Ethics and Professional Responsibility, Oxford: Blackwell, 119-128.
  • Rogerson, S. and T. Bynum (1995), “Cyberspace: The Ethical Frontier,” The Times Higher Education Supplement (The London Times), No. 1179, June, 9, 1995, iv.
  • Rogerson, S., B. Fairweather, and M. Prior (2002), “The Ethical Attitudes of Information Systems Professionals: Outcomes of an Initial Survey,” Telematics and Informatics, 19: 21-36.
  • Rogerson, S. and D. Gotterbarn (1998), “The Ethics of Software Project Management,” in G. Collste (ed.), Ethics and Information Technology, New Delhi: New Academic Publishers, 137-154.
  • Sojka, J. (1996), “Business Ethics and Computer Ethics: The View from Poland,” in T. Bynum and S. Rogerson (eds.), Global Information Ethics, Guildford, UK: Opragen Publications, 191-200.
  • Spafford, E., K. Heaphy, and D. Ferbrache (eds.) (1989), Computer Viruses: Dealing with Electronic Vandalism and Programmed Threats, Arlington, VA: ADAPSO (now ITAA).
  • Spafford, E. (1992), “Are Computer Hacker Break-Ins Ethical?” Journal of Systems and Software, 17: 41-47.
  • Spinello, R. (1997), Case Studies in Information and Computer Ethics, Upper Saddle River, NJ: Prentice-Hall.
  • Spinello, R. (2000), CyberEthics: Morality and Law in Cyberspace, Sudbury, MA: Jones and Bartlett.
  • Spinello, R. and H. Tavani (2001a), “The Internet, Ethical Values, and Conceptual Frameworks: An Introduction to Cyberethics,” Computers and Society, 31(2): 5-7.
  • Spinello, R. and H. Tavani (eds.) (2001b), Readings in CyberEthics, Sudbury, MA: Jones and Bartlett; Second Edition, 2004.
  • Spinello, R. and H. Tavani (eds.) (2005), Intellectual Property Rights in a Networked World: Theory and Practice, Hershey, PA: Idea Group/Information Science Publishing.
  • Stahl, B. (2004a), “Information, Ethics and Computers: The Problem of Autonomous Moral Agents,” Minds and Machines, 14: 67-83.
  • Stahl, B. (2004b), Responsible Management of Information Systems, Hershey, PA: Idea Group/Information Science Publishing.
  • Stahl, B. (2005), “The Ethical Problem of Framing E-Government in Terms of E-Commerce,” Electronic Journal of E-Government, 3(2): 77-86.
  • Stahl, B. (2006), “Responsible Computers? A Case for Ascribing Quasi-responsibility to Computers Independent of Personhood or Agency,” Ethics and Information Technology, 8(4): 205-213.
  • Sunstein, C. (2008), “Democracy and the Internet,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 93-110.
  • Tavani, H. (ed.) (1996), Computing, Ethics, and Social Responsibility: A Bibliography, Palo Alto, CA: Computer Professionals for Social Responsibility Press.
  • Tavani, H. (1999a), “Privacy and the Internet,” Proceedings of the Fourth Annual Ethics and Technology Conference, Chestnut Hill, MA: Boston College Press, 114-25.
  • Tavani, H. (1999b), “Privacy On-Line,” Computers and Society, 29(4): 11-19.
  • Tavani, H. (2002), “The Uniqueness Debate in Computer Ethics: What Exactly is at Issue and Why Does it Matter?” Ethics and Information Technology, 4(1): 37-54.
  • Tavani, H. (2004), Ethics and Technology: Ethical Issues in an Age of Information and Communication Technology, Hoboken, NJ: John Wiley and Sons; Second Edition, 2007.
  • Tavani, H. (2005), “The Impact of the Internet on our Moral Condition: Do We Need a New Framework of Ethics?” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives, Albany: SUNY Press, 215-237.
  • Tavani, H. (2006), Ethics, Computing, and Genomics, Sudbury, MA: Jones and Bartlett.
  • Tavani, H. and J. Moor (2001), “Privacy Protection, Control of Information, and Privacy-Enhancing Technologies,” Computers and Society, 31(1): 6-11.
  • Turkle, S. (1984), The Second Self: Computers and the Human Spirit, New York: Simon & Schuster.
  • Turner, A.J. (1991), “Summary of the ACM/IEEE-CS Joint Curriculum Task Force Report: Computing Curricula, 1991,” Communications of the ACM, 34(6): 69-84.
  • Turner, E. (2006), “Teaching Gender-Inclusive Computer Ethics,” in E. Trauth (ed.), Encyclopedia of Gender and Information Technology: Exploring the Contributions, Challenges, Issues and Experiences of Women in Information Technology, Hershey, PA: Idea Group/Information Science Publishing, 1142-1147.
  • van den Hoven, J. (1997a), “Computer Ethics and Moral Methodology,” Metaphilosophy, 28(3): 234-48.
  • van den Hoven, J. (1997b), “Privacy and the Varieties of Informational Wrongdoing,” Computers and Society, 27(3): 33-37.
  • van den Hoven, J. (1998), “Ethics, Social Epistemics, Electronic Communication and Scientific Research,” European Review, 7(3): 341-349.
  • van den Hoven, J. (2008a), “Information Technology, Privacy, and the Protection of Personal Data,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 301-321.
  • van den Hoven, J. and E. Rooksby (2008), “Distributive Justice and the Value of Information: A (Broadly) Rawlsian Approach,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 376-96.
  • van den Hoven, J. and J. Weckert (2008), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press.
  • Volkman, R. (2003), “Privacy as Life, Liberty, Property,” Ethics and Information Technology, 5(4): 199-210.
  • Volkman, R. (2005), “Dynamic Traditions: Why Globalization Does Not Mean Homogenization,” in Proceedings of ETHICOMP2005 (CD-ROM), Center for Computing and Social Responsibility, Linköpings University.
  • Volkman, R. (2007), “The Good Computer Professional Does Not Cheat at Cards,” in Proceedings of ETHICOMP2007, Tokyo: Meiji University Press.
  • Weckert, J. (2002), “Lilliputian Computer Ethics,” Metaphilosophy, 33(3): 366-375.
  • Weckert, J. (2005), “Trust in Cyberspace,” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives, Albany: SUNY Press, 95-117.
  • Weckert, J. and D. Adeney (1997), Computer and Information Ethics, Westport, CT: Greenwood Press.
  • Weizenbaum, J. (1976), Computer Power and Human Reason: From Judgment to Calculation, San Francisco, CA: Freeman.
  • Westin, A. (1967), Privacy and Freedom, New York: Atheneum.
  • Wiener, N. (1948), Cybernetics: or Control and Communication in the Animal and the Machine, New York: Technology Press/John Wiley & Sons.
  • Wiener, N. (1950), The Human Use of Human Beings: Cybernetics and Society, Boston: Houghton Mifflin; Second Edition Revised, New York, NY: Doubleday Anchor 1954.
  • Wiener, N. (1964), God & Golem, Inc.: A Comment on Certain Points Where Cybernetics Impinges on Religion, Cambridge, MA: MIT Press.
