NihilismAbsurdism.Blogspot.com

"The Absurd" refers to the conflict between the human tendency to seek inherent meaning in life and the human inability to find any.

Nihilism (from the Latin nihil, "nothing") is the philosophical doctrine suggesting the negation of one or more putatively meaningful aspects of life.

Saturday, August 27, 2011

Berkeley

1. Life and philosophical works

Berkeley was born in 1685 near Kilkenny, Ireland. After several years of schooling at Kilkenny College, he entered Trinity College, in Dublin, at age 15. He was made a fellow of Trinity College in 1707 (three years after graduating) and was ordained in the Anglican Church shortly thereafter. At Trinity, where the curriculum was notably modern, Berkeley encountered the new science and philosophy of the late seventeenth century, which was characterized by hostility towards Aristotelianism. His philosophical notebooks (sometimes styled the Philosophical Commentaries), which he began in 1707, provide rich documentation of his early philosophical evolution, enabling the reader to track the emergence of his immaterialist philosophy from a critical response to Descartes, Locke, Malebranche, Newton, Hobbes, and others.

Berkeley's first important published work, An Essay Towards a New Theory of Vision (1709), was an influential contribution to the psychology of vision and also developed doctrines relevant to his idealist project. In his mid-twenties, he published his most enduring works, the Treatise concerning the Principles of Human Knowledge (1710) and the Three Dialogues between Hylas and Philonous (1713), whose central doctrines we will examine below.

In 1720, while completing a four-year tour of Europe as tutor to a young man, Berkeley composed De Motu, a tract on the philosophical foundations of mechanics which developed his views on philosophy of science and articulated an instrumentalist approach to Newtonian dynamics. After his continental tour, Berkeley returned to Ireland and resumed his position at Trinity until 1724, when he was appointed Dean of Derry. At this time, Berkeley began developing his scheme for founding a college in Bermuda. He was convinced that Europe was in spiritual decay and that the New World offered hope for a new golden age. Having secured a charter and promises of funding from the British Parliament, Berkeley set sail for America in 1728, with his new bride, Anne Forster. They spent three years in Newport, Rhode Island awaiting the promised money, but Berkeley's political support had collapsed and they were forced to abandon the project and return to Britain in 1731. While in America, Berkeley composed Alciphron, a work of Christian apologetics directed against the "free-thinkers" whom he took to be enemies of established Anglicanism. Alciphron is also a significant philosophical work and a crucial source of Berkeley's views on language.

Shortly after returning to London, Berkeley composed the Theory of Vision, Vindicated and Explained, a defense of his earlier work on vision, and the Analyst, an acute and influential critique of the foundations of Newton's calculus. In 1734 he was made Bishop of Cloyne, and thus he returned to Ireland. It was here that Berkeley wrote his last, strangest, and best-selling (in his own lifetime) philosophical work. Siris (1744) has a three-fold aim: to establish the virtues of tar-water (a liquid prepared by letting pine tar stand in water) as a medical panacea, to provide scientific background supporting the efficacy of tar-water, and to lead the mind of the reader, via gradual steps, toward contemplation of God. Berkeley died in 1753, shortly after moving to Oxford to supervise the education of his son George, one of the three out of seven of his children to survive childhood.

2. Berkeley's critique of materialism in the Principles and Dialogues

In his two great works of metaphysics, Berkeley defends idealism by attacking the materialist alternative. What exactly is the doctrine that he's attacking? Readers should first note that “materialism” is here used to mean “the doctrine that material things exist”. This is in contrast with another use, more standard in contemporary discussions, according to which materialism is the doctrine that only material things exist. Berkeley contends that no material things exist, not just that some immaterial things exist. Thus, he attacks Cartesian and Lockean dualism, not just the considerably less popular (in Berkeley's time) view, held by Hobbes, that only material things exist. But what exactly is a material thing? Interestingly, part of Berkeley's attack on matter is to argue that this question cannot be satisfactorily answered by the materialists, that they cannot characterize their supposed material things. However, an answer that captures what exactly it is that Berkeley rejects is that material things are mind-independent things or substances. And a mind-independent thing is something whose existence is not dependent on thinking/perceiving things, and thus would exist whether or not any thinking things (minds) existed. Berkeley holds that there are no such mind-independent things, that, in the famous phrase, esse est percipi (aut percipere) — to be is to be perceived (or to perceive).

Berkeley charges that materialism promotes skepticism and atheism: skepticism because materialism implies that our senses mislead us as to the natures of these material things, which moreover need not exist at all, and atheism because a material world could be expected to run without the assistance of God. This double charge provides Berkeley's motivation for questioning materialism (one which he thinks should motivate others as well), though not, of course, a philosophical argument against materialism. Fortunately, the Principles and Dialogues overflow with such arguments. Below, we will examine some of the main elements of Berkeley's argumentative campaign against matter.

2.1 The attack on representationalist materialism

2.1.1 The core argument

The starting point of Berkeley's attack on the materialism of his contemporaries is a very short argument presented in Principles 4:

It is indeed an opinion strangely prevailing amongst men, that houses, mountains, rivers, and in a word all sensible objects have an existence natural or real, distinct from their being perceived by the understanding. But with how great an assurance and acquiescence soever this principle may be entertained in the world; yet whoever shall find in his heart to call it in question, may, if I mistake not, perceive it to involve a manifest contradiction. For what are the forementioned objects but the things we perceive by sense, and what do we perceive besides our own ideas or sensations; and is it not plainly repugnant that any one of these or any combination of them should exist unperceived?

Berkeley presents here the following argument (see Winkler 1989, 138):

(1) We perceive ordinary objects (houses, mountains, etc.).

(2) We perceive only ideas.

Therefore,

(3) Ordinary objects are ideas.

The argument is valid, and premise (1) looks hard to deny. What about premise (2)? Berkeley believes that this premise is accepted by all the modern philosophers. In the Principles, Berkeley is operating within the idea-theoretic tradition of the seventeenth and eighteenth centuries. In particular, Berkeley believes that some version of this premise is accepted by his main targets, the influential philosophers Descartes and Locke.
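The validity noted here is purely formal; as an illustration, the argument form of Winkler's reconstruction can be checked in the Lean proof assistant. The predicate names below (Ordinary, Perceived, Idea) are illustrative labels, not Berkeley's own vocabulary, and the premises are rendered as universally quantified conditionals:

```lean
-- A sketch of the Principles 4 argument form (Winkler's reconstruction).
-- `Ordinary`, `Perceived`, and `Idea` are hypothetical predicates over
-- some type of objects.
example {Object : Type} (Ordinary Perceived Idea : Object → Prop)
    -- (1) Every ordinary object is something we perceive.
    (h1 : ∀ x, Ordinary x → Perceived x)
    -- (2) Everything we perceive is an idea.
    (h2 : ∀ x, Perceived x → Idea x) :
    -- (3) Therefore every ordinary object is an idea.
    ∀ x, Ordinary x → Idea x :=
  fun x hx => h2 x (h1 x hx)
```

The proof is a one-line composition of the two premises, which makes vivid why the materialist response discussed below must attack a premise (by splitting "perceive" into mediate and immediate senses) rather than the inference itself.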

However, Berkeley recognizes that these philosophers have an obvious response available to this argument. This response blocks Berkeley's inference to (3) by distinguishing two sorts of perception, mediate and immediate. Thus, premises (1) and (2) are replaced by the claims that (1′) we mediately perceive ordinary objects, while (2′) we immediately perceive only ideas. From these claims, of course, no idealist conclusion follows. The response reflects a representationalist theory of perception, according to which we indirectly (mediately) perceive material things, by directly (immediately) perceiving ideas, which are mind-dependent items. The ideas represent external material objects, and thereby allow us to perceive them.

Whether Descartes, Malebranche, and Locke were representationalists of this kind is a matter of some controversy (see e.g. Yolton 1984, Chappell 1994). However, Berkeley surely had good grounds for understanding his predecessors in this way: it reflects the most obvious interpretation of Locke's account of perception, and Descartes' whole procedure in the Meditations tends to suggest this sort of view, given the meditator's situation as someone contemplating her own ideas, trying to determine whether something external corresponds to them.

2.1.2 The likeness principle

Berkeley devotes the succeeding sections of the Principles to undermining the representationalist response to his initial argument. In effect, he poses the question: What allows an idea to represent a material object? He assumes, again with good grounds, that the representationalist answer is going to involve resemblance:

But say you, though the ideas themselves do not exist without the mind, yet there may be things like them whereof they are copies or resemblances, which things exist without the mind, in an unthinking substance. I answer, an idea can be like nothing but an idea; a colour or figure can be like nothing but another colour or figure. (PHK 8)

Berkeley argues that this supposed resemblance is nonsensical; an idea can only be like another idea.

But why? The closest Berkeley ever comes to directly addressing this question is in his early philosophical notebooks, where he observes that “Two things cannot be said to be alike or unlike till they have been compar'd” (PC 377). Thus, because the mind can compare nothing but its own ideas, which by hypothesis are the only things immediately perceivable, the representationalist cannot assert a likeness between an idea and a non-ideal mind-independent material object. (For further discussion, see Winkler 1989, 145-9.)

If Berkeley's Likeness Principle, the thesis that an idea can only be like another idea, is granted, representationalist materialism is in serious trouble. For how are material objects now to be characterized? If material objects are supposed to be extended, solid, or colored, Berkeley will counter that these sensory qualities pertain to ideas, to that which is immediately perceived, and that the materialist cannot assert that material objects are like ideas in these ways. Many passages in the Principles and Dialogues drive home this point, arguing that matter is, if not an incoherent notion, at best a completely empty one.

2.1.3 Anti-abstractionism

One way in which Berkeley's anti-abstractionism comes into play is in reinforcing this point. Berkeley argues in the “Introduction” to the Principles[1] that we cannot form general ideas in the way that Locke often seems to suggest—by stripping particularizing qualities from an idea of a particular, creating a new, intrinsically general, abstract idea.[2] Berkeley then claims that notions the materialist might invoke in a last-ditch attempt to characterize matter, e.g. being or mere extension, are objectionably abstract and unavailable.[3]

2.1.4 What does materialism explain?

Berkeley is aware that the materialist has one important card left to play: Don't we need material objects in order to explain our ideas? And indeed, this seems intuitively gripping: Surely the best explanation of the fact that I have a chair idea every time I enter my office and that my colleague has a chair idea when she enters my office is that a single enduring material object causes all these various ideas. Again, however, Berkeley replies by effectively exploiting the weaknesses of his opponents' theories:

…though we give the materialists their external bodies, they by their own confession are never the nearer knowing how our ideas are produced: since they own themselves unable to comprehend in what manner body can act upon spirit, or how it is possible it should imprint any idea in the mind. Hence it is evident the production of ideas or sensations in our minds, can be no reason why we should suppose matter or corporeal substances, since that is acknowledged to remain equally inexplicable with, or without this supposition. (PHK 19)

Firstly, Berkeley contends, a representationalist must admit that we could have our ideas without there being any external objects causing them (PHK 18). (This is one way in which Berkeley sees materialism as leading to skepticism.) More devastatingly, however, he must admit that the existence of matter does not help to explain the occurrence of our ideas. After all, Locke himself diagnosed the difficulty:

Body as far as we can conceive being able only to strike and affect body; and Motion, according to the utmost reach of our Ideas, being able to produce nothing but Motion, so that when we allow it to produce pleasure or pain, or the Idea of a Colour, or Sound, we are fain to quit our Reason, go beyond our Ideas, and attribute it wholly to the good Pleasure of our Maker. (Locke 1975, 541; Essay 4.3.6)

And, when Descartes was pressed by Elizabeth as to how mind and body interact,[4] she rightly regarded his answers as unsatisfactory. The basic problem here is set by dualism: how can one substance causally affect another substance of a fundamentally different kind? In its Cartesian form, the difficulty is particularly severe: how can an extended thing, which affects other extended things only by mechanical impact, affect a mind, which is non-extended and non-spatial?

Berkeley's point is thus well taken. It is worth noting that, in addition to undermining the materialist's attempted inference to the best explanation, Berkeley's point also challenges any attempt to explain representation and mediate perception in terms of causation. That is, the materialist might try to claim that ideas represent material objects, not by resemblance, but in virtue of being caused by the objects. (Though neither Descartes nor Locke spells out such an account, there are grounds in each for attributing such an account to them. For Descartes see Wilson 1999, 73-76; for Locke see Chappell 1994, 53.) However, PHK 19 implies that the materialists are not in a position to render this account of representation philosophically satisfactory.

2.2 Contra direct realist materialism

As emphasized above, Berkeley's campaign against matter, as he presents it in the Principles, is directed against materialist representationalism and presupposes representationalism. In particular, Berkeley presupposes that all anyone ever directly or immediately perceives are ideas. As contemporary philosophers, we might wonder whether Berkeley has anything to say to a materialist who denies this representationalist premise and asserts instead that we ordinarily directly/immediately perceive material objects themselves. The answer is ‘yes’.

2.2.1 The master argument?

However, one place where one might naturally look for such an argument is not, in fact, as promising as might initially appear. In both the Principles (22-3) and the Dialogues (200), Berkeley gives a version of what has come to be called “The Master Argument”[5] because of the apparent strength with which he endorses it:

… I am content to put the whole upon this issue; if you can but conceive it possible for one extended moveable substance, or in general, for any one idea or any thing like an idea, to exist otherwise than in a mind perceiving it, I shall readily give up the cause…. But say you, surely there is nothing easier than to imagine trees, for instance, in a park, or books existing in a closet, and no body by to perceive them. I answer, you may so, there is no difficulty in it: but what is all this, I beseech you, more than framing in your mind certain ideas which you call books and trees, and at the same time omitting to frame the idea of any one that may perceive them? But do not you your self perceive or think of them all the while? This therefore is nothing to the purpose: it only shows you have the power of imagining or forming ideas in your mind; but it doth not shew that you can conceive it possible, the objects of your thought may exist without the mind: to make out this, it is necessary that you conceive them existing unconceived or unthought of, which is a manifest repugnancy. When we do our utmost to conceive the existence of external bodies, we are all the while only contemplating our own ideas. But the mind taking no notice of itself, is deluded to think it can and doth conceive bodies existing unthought of or without the mind; though at the same time they are apprehended by or exist in it self. (PHK 22-23)

The argument seems intended to establish that we cannot actually conceive of mind-independent objects, that is, objects existing unperceived and unthought of. Why not? Simply because in order to conceive of any such things, we must ourselves be conceiving, i.e., thinking, of them. However, as Pitcher (1977, 113) nicely observes, such an argument seems to conflate the representation (what we conceive with) and the represented (what we conceive of—the content of our thought). Once we make this distinction, we realize that although we must have some conception or representation in order to conceive of something, and that representation is in some sense thought of, it does not follow (contra Berkeley) that what we conceive of must be a thought-of object. That is, when we imagine a tree standing alone in a forest, we (arguably) conceive of an unthought-of object, though of course we must employ a thought in order to accomplish this feat.[6] Thus (as many commentators have observed), this argument fails.

A more charitable reading of the argument (see Winkler 1989, 184-7; Lennon 1988) makes Berkeley's point that we cannot represent unconceivedness, because we have never and could never experience it.[7] Because we cannot represent unconceivedness, we cannot conceive of mind-independent objects. While this is a rather more promising argument, it clearly presupposes representationalism, just as Berkeley's earlier Principles arguments did.[8] (This, however, is not necessarily a defect of the interpretation, since the Principles, as we saw above, is aimed against representationalism, and in the Dialogues the Master Argument crops up only after Hylas has been converted to representationalism (see below).)[9]

2.2.2 The First Dialogue and relativity arguments

Thus, if we seek a challenge to direct realist materialism, we must turn to the Three Dialogues, where the character Hylas (the would-be materialist) begins from a sort of naïve realism, according to which we perceive material objects themselves, directly. Against this position, Philonous (lover of spirit—Berkeley's spokesperson) attempts to argue that the sensible qualities—the qualities immediately perceived by sense—must be ideal, rather than belonging to material objects. (The following analysis of these first dialogue arguments is indebted to Margaret Wilson's account in “Berkeley on the Mind-Dependence of Colors,” Wilson 1999, 229-242.[10])

Philonous begins his first argument by contending that sensible qualities such as heat are not distinct from pleasure or pain. Pleasure and pain, Philonous argues, are allowed by all to be merely in the mind; therefore the same must be true for the sensible qualities. The most serious difficulties with this argument are (1) whether we should grant the “no distinction” premise in the case of the particular sensory qualities invoked by Berkeley (why not suppose that I can distinguish between the heat and the pain?) and (2) if we do, whether we should generalize to all sensory qualities as Berkeley would have us do.

Secondly, Philonous invokes relativity arguments to suggest that because sensory qualities are relative to the perceiver, e.g. what is hot to one hand may be cold to the other and what is sweet to one person may be bitter to another, they cannot belong to mind-independent material objects, for such objects could not bear contradictory qualities.

As Berkeley is well aware, one may reply to this sort of argument by claiming that only one of the incompatible qualities is truly a quality of the object and that the other apparent qualities result from misperception. But how then, Berkeley asks, are these “true” qualities to be identified and distinguished from the “false” ones (3D 184)? By noting the differences between animal perception and human perception, Berkeley suggests that it would be arbitrary anthropomorphism to claim that humans have special access to the true qualities of objects. Further, Berkeley uses the example of microscopes to undermine the prima facie plausible thought that the true visual qualities of objects are revealed by close examination. Thus, Berkeley provides a strong challenge to any direct realist attempt to specify standard conditions under which the true (mind-independent) qualities of objects are (directly) perceived by sense.

Under this pressure from Philonous, Hylas retreats (perhaps a bit quickly) from naïve realism to a more “philosophical” position. He first tries to make use of the primary/secondary quality distinction associated with mechanism and, again, locatable in the thought of Descartes and Locke. Thus, Hylas allows that color, taste, etc. may be mind-dependent (secondary) qualities, but contends that figure, solidity, motion and rest (the primary qualities) exist in mind-independent material bodies. The mechanist picture behind this proposal is that bodies are composed of particles with size, shape, motion/rest, and perhaps solidity, and that our sensory ideas arise from the action of such particles on our sense organs and, ultimately, on our minds. Berkeley opposes this sort of mechanism throughout his writings, believing that it engenders skepticism by dictating that bodies are utterly unlike our sensory experience of them. Here Philonous has a two-pronged reply: (1) The same sorts of relativity arguments that were made against secondary qualities can be made against primary ones. (2) We cannot abstract the primary qualities (e.g. shape) from secondary ones (e.g. color), and thus we cannot conceive of mechanist material bodies which are extended but not (in themselves) colored.[11]

When, after some further struggles, Hylas finally capitulates to Philonous' view that all of existence is mind-dependent, he does so unhappily and with great reluctance. Philonous needs to convince him (as Berkeley needed to convince his readers in both books) that a commonsensical philosophy could be built on an immaterialist foundation, that no one but a skeptic or atheist would ever miss matter. As a matter of historical fact, Berkeley persuaded few of his contemporaries, who for the most part regarded him as a purveyor of skeptical paradoxes (Bracken 1965). Nevertheless, we can and should appreciate the way in which Berkeley articulated a positive idealist philosophical system, which, if not in perfect accord with common sense, is in many respects superior to its competitors.

3. Berkeley's positive program: idealism and common sense

3.1 The basics of Berkeley's ontology

3.1.1 The status of ordinary objects

The basics of Berkeley's metaphysics are apparent from the first section of the main body of the Principles:

It is evident to any one who takes a survey of the objects of human knowledge, that they are either ideas actually imprinted on the senses, or else such as are perceived by attending to the passions and operations of the mind, or lastly ideas formed by help of memory and imagination, either compounding, dividing, or barely representing those originally perceived in the aforesaid ways. By sight I have the ideas of light and colours with their several degrees and variations. By touch I perceive, for example, hard and soft, heat and cold, motion and resistance, and of all these more and less either as to quantity or degree. Smelling furnishes me with odours; the palate with tastes, and hearing conveys sounds to the mind in all their variety of tone and composition. And as several of these are observed to accompany each other, they come to be marked by one name, and so to be reputed as one thing. Thus, for example, a certain colour, taste, smell, figure and consistence having been observed to go together, are accounted one distinct thing, signified by the name apple. Other collections of ideas constitute a stone, a tree, a book, and the like sensible things; which, as they are pleasing or disagreeable, excite the passions of love, hatred, joy, grief, and so forth.

As this passage illustrates, Berkeley does not deny the existence of ordinary objects such as stones, trees, books, and apples. On the contrary, as was indicated above, he holds that only an immaterialist account of such objects can avoid skepticism about their existence and nature. What such objects turn out to be, on his account, are bundles or collections of ideas. An apple is a combination of visual ideas (including the sensible qualities of color and visual shape), tangible ideas, ideas of taste, smell, etc.[12] The question of what does the combining is a philosophically interesting one which Berkeley does not address in detail. He does make clear that there are two sides to the process of bundling ideas into objects: (1) co-occurrence, an objective fact about what sorts of ideas tend to accompany each other in our experience, and (2) something we do when we decide to single out a set of co-occurring ideas and refer to it with a certain name (NTV 109).

Thus, although there is no material world for Berkeley, there is a physical world, a world of ordinary objects. This world is mind-dependent, for it is composed of ideas, whose existence consists in being perceived. For ideas, and so for the physical world, esse est percipi.

3.1.2 Spirits as active substances

Berkeley's ontology is not exhausted by the ideal, however. In addition to perceived things (ideas), he posits perceivers, i.e., minds or spirits, as he often terms them. Spirits, he emphasizes, are totally different in kind from ideas, for they are active where ideas are passive. This suggests that Berkeley has replaced one kind of dualism, of mind and matter, with another kind of dualism, of mind and idea. There is something to this point, given Berkeley's refusal to elaborate upon the relation between active minds and passive ideas. At Principles 49, he famously dismisses quibbling about how ideas inhere in the mind (are minds colored and extended when such sensible qualities “exist in” them?) with the declaration that “those qualities are in the mind only as they are perceived by it, that is, not by way of mode or attribute, but only by way of idea”. Berkeley's dualism, however, is a dualism within the realm of the mind-dependent.

3.1.3 God's existence

The last major item in Berkeley's ontology is God, himself a spirit, but an infinite one. Berkeley believes that once he has established idealism, he has a novel and convincing argument for God's existence as the cause of our sensory ideas. He argues by elimination: What could cause my sensory ideas? Candidate causes, supposing that Berkeley has already established that matter doesn't exist, are (1) other ideas, (2) myself, or (3) some other spirit. Berkeley eliminates the first option with the following argument (PHK 25):

(1) Ideas are manifestly passive—no power or activity is perceived in them.

(2) But because of the mind-dependent status of ideas, they cannot have any characteristics which they are not perceived to have.

Therefore,

(3) Ideas are passive, that is, they possess no causal power.

It should be noted that premise (2) is rather strong; Phillip Cummins (1990) identifies it as Berkeley's “manifest qualities thesis” and argues that it commits Berkeley to the view that ideas are radically and completely dependent on perceivers in the way that sensations of pleasure and pain are typically taken to be.[13]
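Like the Principles 4 argument, this elimination argument has a simple formal shape, which can be sketched in Lean. The relation names below (Has, PerceivedToHave) and the stand-in quality causalPower are illustrative assumptions, not Berkeley's terminology:

```lean
-- A sketch of the PHK 25 argument form. `Has` and `PerceivedToHave`
-- are hypothetical relations between ideas and qualities; `causalPower`
-- stands in for the quality at issue.
example {Idea Quality : Type}
    (Has PerceivedToHave : Idea → Quality → Prop)
    (causalPower : Quality)
    -- (1) No power or activity is perceived in any idea.
    (h1 : ∀ i, ¬ PerceivedToHave i causalPower)
    -- (2) Ideas have no characteristics they are not perceived to have
    --     (Cummins' "manifest qualities thesis").
    (h2 : ∀ i q, Has i q → PerceivedToHave i q) :
    -- (3) Therefore no idea has causal power.
    ∀ i, ¬ Has i causalPower :=
  fun i h => h1 i (h2 i causalPower h)
```

Writing it out this way shows where the philosophical weight falls: the inference is trivially valid, so everything turns on whether premise (2), the manifest qualities thesis, should be granted.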

The second option is eliminated with the observation that although I clearly can cause some ideas at will (e.g. ideas of imagination), sensory ideas are involuntary; they present themselves whether I wish to perceive them or not and I cannot control their content. The hidden assumption here is that any causing the mind does must be done by willing and such willing must be accessible to consciousness. Berkeley is hardly alone in presupposing this model of the mental; Descartes, for example, makes a similar set of assumptions.

This leaves us, then, with the third option: my sensory ideas must be caused by some other spirit. Berkeley thinks that when we consider the stunning complexity and systematicity of our sensory ideas, we must conclude that the spirit in question is wise and benevolent beyond measure, that, in short, he is God.

3.2 Replies to objections

With the basic ingredients of Berkeley's ontology in place, we can begin to consider how his system works by seeing how he responds to a number of intuitively compelling objections to it. Berkeley himself sees very well how necessary this is: Much of the Principles is structured as a series of objections and replies, and in the Three Dialogues, once Philonous has rendered Hylas a reluctant convert to idealism, he devotes the rest of the book to convincing him that this is a philosophy which coheres well with common sense, at least better than materialism ever did.

3.2.1 Real things vs. imaginary ones

Perhaps the most obvious objection to idealism is that it makes real things no different from imaginary ones—both seem fleeting figments of our own minds, rather than the solid objects of the materialists. Berkeley replies that the distinction between real things and chimeras retains its full force on his view. One way of making the distinction is suggested by his argument for the existence of God, examined above: Ideas which depend on our own finite human wills are not (constituents of) real things. Not being voluntary is thus a necessary condition for being a real thing, but it is clearly not sufficient, since hallucinations and dreams do not depend on our wills, but are nevertheless not real. Berkeley notes that the ideas that constitute real things exhibit a steadiness, vivacity, and distinctness that chimerical ideas do not. The most crucial feature that he points to, however, is order. The ideas imprinted by the author of nature as part of rerum natura occur in regular patterns, according to the laws of nature (“the set rules or established methods, wherein the mind we depend on excites in us the ideas of sense, are called the Laws of Nature” PHK 30). They are thus regular and coherent, that is, they constitute a coherent real world.

3.2.2 Hidden structures and internal mechanisms

The related notions of regularity and of the laws of nature are central to the workability of Berkeley's idealism. They allow him to respond to the following objection, put forward in PHK 60:

…it will be demanded to what purpose serves that curious organization of plants, and the admirable mechanism in the parts of animals; might not vegetables grow, and shoot forth leaves and blossoms, and animals perform all their motions, as well without as with all that variety of internal parts so elegantly contrived and put together, which being ideas have nothing powerful or operative in them, nor have any necessary connexion with the effects ascribed to them? […] And how comes it to pass, that whenever there is any fault in the going of a watch, there is some corresponding disorder to be found in the movements, which being mended by a skilful hand, all is right again? The like may be said of all the clockwork of Nature, great part whereof is so wonderfully fine and subtle, as scarce to be discerned by the best microscope. In short, it will be asked, how upon our principles any tolerable account can be given, or any final cause assigned of an innumerable multitude of bodies and machines framed with the most exquisite art, which in the common philosophy have very apposite uses assigned them, and serve to explain abundance of phenomena.

Berkeley's answer, for which he is indebted to Malebranche,[14] is that, although God could make a watch run (that is, produce in us ideas of a watch running) without the watch having any internal mechanism (that is, without it being the case that, were we to open the watch, we would have ideas of an internal mechanism), he cannot do so if he is to act in accordance with the laws of nature, which he has established for our benefit, to make the world regular and predictable. Thus, whenever we have ideas of a working watch, we will find that if we open it,[15] we will see (have ideas of) an appropriate internal mechanism. Likewise, when we have ideas of a living tulip, we will find that if we pull it apart, we will observe the usual internal structure of such plants, with the same transport tissues, reproductive parts, etc.

3.2.3 Scientific explanation

Implicit in the answer above is Berkeley's insightful account of scientific explanation and the aims of science. A bit of background is needed here to see why this issue posed a special challenge for Berkeley. One traditional understanding of science, derived from Aristotle, held that it aims at identifying the causes of things. Modern natural philosophers such as Descartes narrowed science's domain to efficient causes and thus held that science should reveal the efficient causes of natural things, processes, and events.[16] Berkeley considers this as the source of an objection at Principles 51:

Seventhly, it will upon this be demanded whether it does not seem absurd to take away natural causes, and ascribe every thing to the immediate operation of spirits? We must no longer say upon these principles that fire heats, or water cools, but that a spirit heats, and so forth. Would not a man be deservedly laughed at, who should talk after this manner? I answer, he would so; in such things we ought to think with the learned, and speak with the vulgar.

On Berkeley's account, the true cause of any phenomenon is a spirit, and most often it is the same spirit, namely, God.

But surely, one might object, it is a step backwards to abandon our scientific theories and simply note that God causes what happens in the physical world! Berkeley's first response here, that we should think with the learned but speak with the vulgar, advises us to continue to say that fire heats, that the heart pumps blood, etc. What makes this advice legitimate is that he can reconstrue such talk as being about regularities in our ideas. In Berkeley's view, the point of scientific inquiry is to reveal such regularities:

If therefore we consider the difference there is betwixt natural philosophers and other men, with regard to their knowledge of the phenomena, we shall find it consists, not in an exacter knowledge of the efficient cause that produces them, for that can be no other than the will of a spirit, but only in a greater largeness of comprehension, whereby analogies, harmonies, and agreements are discovered in the works of Nature, and the particular effects explained, that is, reduced to general rules, see Sect. 62, which rules grounded on the analogy, and uniformness observed in the production of natural effects, are most agreeable, and sought after by the mind; for that they extend our prospect beyond what is present, and near to us, and enable us to make very probable conjectures, touching things that may have happened at very great distances of time and place, as well as to predict things to come…. (PHK 105)

Natural philosophers thus consider signs, rather than causes (PHK 108), but their results are just as useful as they would be under a materialist system. Moreover, the regularities they discover provide the sort of explanation proper to science, by rendering the particular events they subsume unsurprising (PHK 104). The sort of explanation proper to science, then, is not causal explanation, but reduction to regularity.[17]

3.2.4 Unperceived objects—Principles vs. Dialogues

Regularity provides a foundation for one of Berkeley's responses to the objection summarized in the famous limerick:

There was a young man who said God,

must find it exceedingly odd

when he finds that the tree

continues to be

when no one's about in the Quad.[18]

The worry, of course, is that if to be is to be perceived (for non-spirits), then there are no trees in the Quad at 3 a.m. when no one is there to perceive them and there is no furniture in my office when I leave and close the door. Interestingly, in the Principles Berkeley seems relatively unperturbed by this natural objection to idealism. He claims that there is no problem for

…anyone that shall attend to what is meant by the term exist when applied to sensible things. The table I write on, I say, exists, that is, I see and feel it; and if I were out of my study I should say it existed, meaning thereby that if I was in my study I might perceive it, or that some other spirit actually does perceive it. (PHK 3)

So, when I say that my desk still exists after I leave my office, perhaps I just mean that I would perceive it if I were in my office, or, more broadly, that a finite mind would perceive the desk were it in the appropriate circumstances (in my office, with the lights on, with eyes open, etc.). This is to provide a sort of counterfactual analysis of the continued existence of unperceived objects. The truth of the counterfactuals in question is anchored in regularity: because God follows set patterns in the way he causes ideas, I would have a desk idea if I were in the office.

Unfortunately, this analysis has counterintuitive consequences when coupled with the esse est percipi doctrine (McCracken 1979, 286). If to be is, as Berkeley insists, to be perceived, then the unperceived desk does not exist, despite the fact that it would be perceived and thus would exist if someone opened the office door. Consequently, on this view the desk would not endure uninterrupted but would pop in and out of existence, though it would do so quite predictably. One way to respond to this worry would be to dismiss it—what does it matter if the desk ceases to exist when unperceived, as long as it exists whenever we need it? Berkeley shows signs of this sort of attitude in Principles 45-46, where he tries to argue that his materialist opponents and scholastic predecessors are in much the same boat.[19] This “who cares?” response to the problem of continued existence is fair enough as far as it goes, but it surely does conflict with common sense, so if Berkeley were to take this route he would have to moderate his claims about his system's ability to accommodate everything desired by the person on the street.

Another strategy, however, is suggested by Berkeley's reference in PHK 3 and 48 to “some other spirit,” a strategy summarized in a further limerick:

Dear Sir, your astonishment's odd

I'm always about in the Quad

And that's why the tree

continues to be

Since observed by, yours faithfully, God

If the other spirit in question is God, an omnipresent being, then perhaps his perception can be used to guarantee a completely continuous existence to every physical object. In the Three Dialogues, Berkeley very clearly invokes God in this context. Interestingly, whereas in the Principles, as we have seen above, he argued that God must exist in order to cause our ideas of sense, in the Dialogues (212, 214-5) he argues that our ideas must exist in God when not perceived by us.[20] If our ideas exist in God, then they presumably exist continuously. Indeed, they must exist continuously, since standard Christian doctrine dictates that God is unchanging.

Although this solves one problem for Berkeley, it creates several more. The first is that Berkeley's other commitments, religious and philosophical, dictate that God cannot literally have our ideas. Our ideas are sensory ideas and God is a being who “can suffer nothing, nor be affected with any painful sensation, or indeed any sensation at all” (3D 206). Nor can our sensory ideas be copies of God's nonsensory ones (McCracken 1979):

How can that which is sensible be like that which is insensible? Can a real thing in itself invisible be like a colour; or a real thing which is not audible, be like a sound? (3D 206)

A second problem is that God's ideas are eternal, whereas physical objects typically have finite duration. And, even worse, God has ideas of all possible objects (Pitcher 1977, 171-2), not just the ones which we would commonsensically wish to say exist.

A solution (proposed by McCracken) to these related problems is to tie the continued existence of ordinary objects to God's will, rather than to his understanding. McCracken's suggestion is that unperceived objects continue to exist as God's decrees. Such an account in terms of divine decrees or volitions looks promising: The tree continues to exist when unperceived just in case God has an appropriate volition or intention to cause a tree-idea in finite perceivers under the right circumstances. Furthermore, this solution has important textual support: In the Three Dialogues, Hylas challenges Philonous to account for the creation, given that, on his view, all existence is mind-dependent and everything must exist eternally in the mind of God. Philonous responds as follows:

May we not understand it [the creation] to have been entirely in respect of finite spirits; so that things, with regard to us, may properly be said to begin their existence, or be created, when God decreed they should become perceptible to intelligent creatures, in that order and manner which he then established, and we now call the laws of Nature? You may call this a relative, or hypothetical existence if you please. (3D 253)

Here Berkeley ties the actual existence of created physical beings to God's decrees, that is, to his will.

As with the counterfactual analysis of continued existence, however, this account also fails under pressure from the esse est percipi principle:

Hylas. Yes, Philonous, I grant the existence of a sensible thing consists in being perceivable, but not in being actually perceived.

Philonous. And what is perceivable but an idea? And can an idea exist without being actually perceived? These are points long since agreed between us. (3D 234)

Thus, if the only grounds of continued existence are volitions in God's mind, rather than perceived items (ideas), then ordinary objects do not exist continuously, but rather pop in and out of existence in a lawful fashion.

Fortunately, Kenneth Winkler has put forward an interpretation which goes a great distance towards resolving this difficulty. In effect, he proposes that we amend the “volitional” interpretation of the existence of objects with the hypothesis that Berkeley held “the denial of blind agency” (Winkler 1989, 207-224). This principle, which can be found in many authors of the period (including Locke), dictates that any volition must have an idea behind it, that is, must have a cognitive component that gives content to the volition, which would otherwise be empty or “blind”. While the principle is never explicitly invoked or argued for by Berkeley, in a number of passages he does note the interdependence of will and understanding. Winkler plausibly suggests that Berkeley may have found this principle so obvious as to need no arguing. With it in place, we have a guarantee that anything willed by God, e.g. that finite perceivers in appropriate circumstances should have elm tree ideas, also has a divine idea associated with it. Furthermore, we have a neat explanation of Berkeley's above-noted leap in the Dialogues from the claim that God must cause our ideas to the claim that our ideas must exist in God.

Of course, it remains true that God cannot have ideas that are, strictly speaking, the same as ours. This problem is closely related to another that confronts Berkeley: Can two people ever perceive the same thing? Common sense demands that two students can perceive the same tree, but Berkeley's metaphysics seems to dictate that they never truly perceive the same thing, since they each have their own numerically distinct ideas. One way to dissolve this difficulty is to recall that objects are bundles of ideas. Although two people cannot perceive/have the numerically same idea, they can perceive the same object, assuming that perceiving a component of the bundle suffices for perception of the bundle.[21] Another proposal (Baxter 1991) is to invoke Berkeley's doctrine that “same” has both a philosophical and a vulgar sense (3D 247) in order to declare that my tree-idea and your tree-idea are strictly distinct but loosely (vulgarly) the same. Either account might be applied in order to show either that God and I may perceive the same object, or that God and I may perceive, loosely speaking, the same thing.

From this discussion we may draw a criterion for the actual existence of ordinary objects, one which summarizes Berkeley's considered views:

An X exists at time t if and only if God has an idea that corresponds to a volition that if a finite mind at t is in appropriate circumstances (e.g. in a particular place, looking in the right direction, or looking through a microscope), then it will have an idea that we would be disposed to call a perception of an X.

This captures the idea that existence depends on God's perceptions, but only on the perceptions which correspond to or are included in his volitions about what we should perceive. It also captures the fact that the bundling of ideas into objects is done by us.[22]

3.2.5 The possibility of error

A further worry about Berkeley's system arises from the idea-bundle account of objects.[23] If there is no mind-independent object against which to measure my ideas, but rather my ideas help to constitute the object, then how can my ideas ever fail—how is error possible? Here is another way to raise the worry that I have in mind: We saw above that Berkeley's arguments against commonsense realism in the first Dialogue attempt to undermine (1) the claim that heat, odor, and taste are distinguishable from pleasure/pain and (2) the claim that objects have one true color, one true shape, one true taste, etc. If we then consider what this implies about Berkeleyian objects, we must conclude that Berkeley's cherry is red, purple, gray, tart, sweet, small, large, pleasant, and painful! It seems that Berkeley's desire to refute the mechanist representationalism which dictates that objects are utterly unlike our experience of them has led him to push beyond common sense to the view that objects are exactly like our experience of them.[24] There is no denying that Berkeley is out of sync with common sense here. He does, however, have an account of error, as he shows us in the Dialogues:

Hylas. What say you to this? Since, according to you, men judge of the reality of things by their senses, how can a man be mistaken in thinking the moon a plain lucid surface, about a foot in diameter; or a square tower, seen at a distance, round; or an oar, with one end in the water, crooked?

Philonous. He is not mistaken with regard to the ideas he actually perceives; but in the inferences he makes from his present perceptions. Thus in the case of the oar, what he immediately perceives by sight is certainly crooked; and so far he is in the right. But if he thence conclude, that upon taking the oar out of the water he shall perceive the same crookedness; or that it would affect his touch, as crooked things are wont to do: in that he is mistaken. (3D 238)

Extrapolating from this, we may say that my gray idea of the cherry, formed in dim light, is not in itself wrong and forms a part of the bundle-object just as much as your red idea, formed in daylight. However, if I judge that the cherry would look gray in bright light, I'm in error. Furthermore, following Berkeley's directive to speak with the vulgar, I ought not to say (in ordinary circumstances) that “the cherry is gray,” since that will be taken to imply that the cherry would look gray to humans in daylight.

3.2.6 Spirits and causation

We have spent some time examining the difficulties Berkeley faces in the “idea/ordinary object” half of his ontology. Arguably, however, less tractable difficulties confront him in the realm of spirits. Early on, Berkeley attempts to forestall materialist skeptics who object that we have no idea of spirit by arguing for this position himself:

A spirit is one simple, undivided, active being: as it perceives ideas, it is called the understanding, and as it produces or otherwise operates about them, it is called the will. Hence there can be no idea formed of a soul or spirit: for all ideas whatever, being passive and inert, vide Sect. 25, they cannot represent unto us, by way of image or likeness, that which acts. A little attention will make it plain to any one, that to have an idea which shall be like that active principle of motion and change of ideas, is absolutely impossible. Such is the nature of spirit or that which acts, that it cannot be of it self perceived, but only by the effects which it produceth. (PHK 27)

Surely the materialist will be tempted to complain, however, that Berkeley's unperceivable spiritual substances, lurking behind the scenes and supporting that which we can perceive, sound a lot like the material substances which he so emphatically rejects.

Two very different responses are available to Berkeley on this issue, each of which he seems to have made at a different point in his philosophical development. One response would be to reject spiritual substance just as he rejected material substance. Spirits, then, might be understood in a Humean way, as bundles of ideas and volitions. Fascinatingly, something like this view is considered by Berkeley in his early philosophical notebooks (see PC 577ff). Why he abandons it is an interesting and difficult question;[25] it seems that one worry he has is how the understanding and the will are to be integrated and rendered one thing.

The second response would be to explain why spiritual substances are better posits than material ones. To this end, Berkeley emphasizes that we have a notion of spirit, which is just to say that we know what the word means. This purportedly contrasts with “matter,” which Berkeley thinks has no determinate content. Of course, the real question is: How does the term “spirit” come by any content, given that we have no idea of it? In the Principles, Berkeley declares only that we know spirit through our own case and that the content we assign to “spirit” is derived from the content each of us assigns to “I” (PHK 139-140). In the Dialogues, however, Berkeley shows a better appreciation of the force of the problem that confronts him:

[Hylas.] You say your own soul supplies you with some sort of an idea or image of God. But at the same time you acknowledge you have, properly speaking, no idea of your own soul. You even affirm that spirits are a sort of beings altogether different from ideas. Consequently that no idea can be like a spirit. We have therefore no idea of any spirit. You admit nevertheless that there is spiritual substance, although you have no idea of it; while you deny there can be such a thing as material substance, because you have no notion or idea of it. Is this fair dealing? To act consistently, you must either admit matter or reject spirit. (3D 232)

To the main point of Hylas' attack, Philonous replies that each of us has, in our own case, an immediate intuition of ourselves, that is, we know our own minds through reflection (3D 231-233). Berkeley's considered position, that we gain access to ourselves as thinking things through conscious awareness, is surely an intuitive one. Nevertheless, it is disappointing that he never gave an explicit response to the Humean challenge he entertained in his notebooks:

+ Mind is a congeries of Perceptions. Take away Perceptions & you take away the Mind put the Perceptions & you put the mind. (PC 580)

A closely related problem which confronts Berkeley is how to make sense of the causal powers that he ascribes to spirits. Here again, the notebooks suggest a surprisingly Humean view:

+ The simple idea call'd Power seems obscure or rather none at all. but onely the relation ‘twixt cause & Effect. Wn I ask whether A can move B. if A be an intelligent thing. I mean no more than whether the volition of A that B move be attended with the motion of B, if A be be senseless whether the impulse of A against B be follow'd by ye motion of B. 461[26]

S What means Cause as distinguish'd from Occasion? nothing but a Being wch wills wn the Effect follows the volition. Those things that happen from without we are not the Cause of therefore there is some other Cause of them i.e., there is a being that wills these perceptions in us. 499

S There is a difference betwixt Power & Volition. There may be volition without Power. But there can be no Power without Volition. Power implyeth volition & at the same time a Connotation of the Effects following the Volition. 699

461 suggests the Humean view that a cause is whatever is (regularly)[27] followed by an effect. 499 and 699 revise this doctrine by requiring that a cause not only (regularly) precede an effect but also be a volition. Berkeley's talk of occasion here reveals the immediate influence of Malebranche. Malebranche held that the only true cause is God and that apparent finite causes are only “occasional causes,” which is to say that they provide occasions for God to act on his general volitional policies. Occasional “causes” thus regularly precede their “effects” but are not truly responsible for producing them. In these notebook entries, however, Berkeley seems to be suggesting that all there is to causality is this regular consequence, with the first item being a volition. Such an account, unlike Malebranche's, would make my will and God's will causes in exactly the same thin sense.

Some commentators, most notably Winkler, suppose that Berkeley retains this view of causality in the published works. The main difficulty with this interpretation is that Berkeley more than once purports to inspect our idea of body, and the sensory qualities included therein, and to conclude from that inspection that bodies are passive (DM 22, PHK 25). This procedure would make little sense if bodies, according to Berkeley, fail to be causes by definition, simply because they are not minds with wills.[28] What is needed is an explanation of what Berkeley means by activity, which he clearly equates with causal power. Winkler (1989, 130-1) supplies such an account, according to which activity means direction towards an end. But this is to identify efficient causation with final causation, a controversial move at best which Berkeley would be making without comment or argument.

The alternative would be to suppose, as De Motu 33 suggests, that Berkeley holds that we gain a notion of activity, along with a notion of spirit as substance, through reflective awareness/internal consciousness:

[W]e feel it [mind] as a faculty of altering both our own state and that of other things, and that is properly called vital, and puts a wide distinction between soul and bodies. (DM 33)

On this interpretation, Berkeley would again have abandoned the radical Humean position entertained in his notebooks, as he clearly did on the question of the nature of spirit. One can only speculate as to whether his reasons would have been primarily philosophical, theological, or practical. Berkeley's writings, however, are not generally characterized by deference to authority, quite the contrary,[29] as he himself proclaims:

… one thing, I know, I am not guilty of. I do not pin my faith on the sleeve of any great man. I act not out of prejudice & prepossession. I do not adhere to any opinion because it is an old one, a receiv'd one, a fashionable one, or one that I have spent much time in the study and cultivation of. (PC 465)

Bibliography

Berkeley's Works

The standard edition of Berkeley's works is:

  • Berkeley, G. (1948-1957). The Works of George Berkeley, Bishop of Cloyne. A.A. Luce and T.E. Jessop (eds.). London, Thomas Nelson and Sons. 9 vols.

I use the following abbreviations for Berkeley's works:

PC "Philosophical Commentaries" Works 1:9-104
NTV An Essay Towards a New Theory of Vision Works 1:171-239
PHK Of the Principles of Human Knowledge: Part 1 Works 2:41-113
3D Three Dialogues between Hylas and Philonous Works 2:163-263
DM De Motu, or The Principle and Nature of Motion and the Cause of the Communication of Motions, trans. A.A. Luce Works 4:31-52

References to these works are by section numbers (or entry numbers, for PC), except for 3D, where they are by page number.

Other useful editions include:

  • Berkeley, G. (1944). Philosophical commentaries, generally called the Commonplace book [of] George Berkeley, bishop of Cloyne. A.A. Luce (ed.). London, Thomas Nelson and Sons.
  • Berkeley, G. (1975). Philosophical Works; Including the Works on Vision. M. Ayers (ed.). London, Dent.
  • Berkeley, G. (1987). George Berkeley's Manuscript Introduction. B. Belfrage (ed.). Oxford, Doxa.
  • Berkeley, G. (1992). "De Motu” and "The Analyst": A Modern Edition with Introductions and Commentary. D. Jesseph (trans. and ed.). Dordrecht: Kluwer Academic Publishers.

Bibliographical studies

  • Jessop, T. E. (1973). A bibliography of George Berkeley, by T.E. Jessop. With inventory of Berkeley's manuscript remains, by A.A. Luce. The Hague, M. Nijhoff.
  • Turbayne, C., Ed. (1982). Berkeley: Critical and Interpretive Essays. Minneapolis, University of Minnesota Press. [Contains a bibliography of George Berkeley 1963-1979.]

References cited

  • Atherton, M. (1987). Berkeley's Anti-Abstractionism. In Essays on the Philosophy of George Berkeley. E. Sosa (ed.). Dordrecht, D. Reidel: 85-102.
  • Atherton, M. (1990). Berkeley's Revolution in Vision. Ithaca, Cornell University Press.
  • Atherton, M., Ed. (1994). Women Philosophers of the Early Modern Period. Indianapolis, Hackett.
  • Atherton, M. (1995). Berkeley Without God. In Berkeley's Metaphysics: Structural, Interpretive, and Critical Essays. R. G. Muehlmann (ed.). University Park, Pennsylvania State University Press: 231-248.
  • Bennett, J. (1971). Locke, Berkeley, Hume: Central Themes. Oxford, Clarendon Press.
  • Bolton, M. B. (1987). Berkeley's Objection to Abstract Ideas and Unconceived Objects. In Essays on the Philosophy of George Berkeley. E. Sosa (ed.). Dordrecht, D. Reidel.
  • Bracken, H. M. (1965). The Early Reception of Berkeley's Immaterialism 1710-1733. The Hague, Martinus Nijhoff.
  • Campbell, J. (2002). Berkeley's Puzzle. In Conceivability and Possibility. T. S. Gendler and J. Hawthorne (eds.). Oxford, Oxford University Press: 127-143.
  • Chappell, V. (1994). Locke's theory of ideas. In The Cambridge Companion to Locke. V. Chappell (ed.). Cambridge, Cambridge University Press: 26-55.
  • Cummins, P. (1990). "Berkeley's Manifest Qualities Thesis." Journal of the History of Philosophy 28: 385-401.
  • Downing, L. (forthcoming). Berkeley's Natural Philosophy and Philosophy of Science. In The Cambridge Companion to Berkeley. K. P. Winkler (ed.). Cambridge, Cambridge University Press.
  • Fleming, N. (1985). "The Tree in the Quad." American Philosophical Quarterly 22: 22-36.
  • Gallois, A. (1974). "Berkeley's Master Argument." The Philosophical Review 83: 55-69.
  • Jesseph, D. (1993). Berkeley's Philosophy of Mathematics. Chicago, University of Chicago Press.
  • Lennon, T. M. (1988). "Berkeley and the Ineffable." Synthese 75: 231-250.
  • Locke, J. (1975). An essay concerning human understanding. Oxford, Clarendon Press.
  • Luce, A. A. (1963). The Dialectic of Immaterialism. London, Hodder & Stoughton.
  • Malebranche, N. (1980). The Search After Truth. Columbus, The Ohio State University Press.
  • McCracken, C. (1979). "What Does Berkeley's God See in the Quad?" Archiv für Geschichte der Philosophie 61: 280-92.
  • McCracken, C. J. (1995). Godless Immaterialism: On Atherton's Berkeley. In Berkeley's Metaphysics: Structural, Interpretive, and Critical Essays. R. G. Muehlmann (ed.). University Park, Pennsylvania State University Press: 249-260.
  • McKim, R. (1997-8). "Abstraction and Immaterialism: Recent Interpretations." Berkeley Newsletter 15: 1-13.
  • Muehlmann, R. G. (1992). Berkeley's Ontology. Indianapolis, Hackett.
  • Nadler, S. (1998). Doctrines of Explanation in Late Scholasticism and in the Mechanical Philosophy. In The Cambridge History of Seventeenth-Century Philosophy. D. Garber and M. Ayers (eds.). Cambridge, Cambridge University Press. 1: 513-552.
  • Pappas, G. S. (2000). Berkeley's Thought. Ithaca, Cornell University Press.
  • Pitcher, G. (1977). Berkeley. London, Routledge.
  • Saidel, E. (1993). "Making Sense of Berkeley's Challenge." History of Philosophy Quarterly 10(4): 325-339.
  • Tipton, I. C. (1974). Berkeley: The Philosophy of Immaterialism. London, Methuen & Co Ltd.
  • Wilson, M. D. (1999). Ideas and mechanism: essays on early modern philosophy. Princeton, Princeton University Press.
  • Winkler, K. P. (1989). Berkeley: An Interpretation. Oxford, Clarendon Press.
  • Yolton, J. W. (1984). Perceptual Acquaintance from Descartes to Reid. Minneapolis, University of Minnesota Press.

Additional Selected Secondary Literature

  • Berman, D. (1994). George Berkeley: Idealism and the Man. Oxford, Clarendon Press.
  • Creery, W. E., Ed. (1991). George Berkeley: Critical Assessments. London, Routledge. 3 vols.
  • Fogelin, R. J. (2001). Berkeley and the Principles of Human Knowledge. London, Routledge.
  • Foster, J. and H. Robinson, Eds. (1985). Essays on Berkeley: A Tercentennial Celebration. Oxford, Clarendon Press.
  • Stoneham, T. (2002). Berkeley's World. Oxford, Oxford University Press.
  • Urmson, J. O. (1982). Berkeley. Oxford, Oxford University Press.

Other Internet Resources

  • International Berkeley Society
  • George Berkeley, maintained by David R. Wilkins, School of Mathematics, Trinity College, Dublin (especially useful on the Analyst controversy, but good general information also).
  • Images of Berkeley, maintained by David Hilbert, Philosophy, University of Illinois at Chicago (images of Berkeley, Berkeley's poems, and a short biography)

Related Entries

Descartes, René | Hume, David | idealism | Locke, John | Malebranche, Nicolas

Friday, August 26, 2011

Computer and Information Ethics

In most countries of the world, the “information revolution” has altered many aspects of life significantly: commerce, employment, medicine, security, transportation, entertainment, and so on. Consequently, information and communication technology (ICT) has affected — in both good ways and bad ways — community life, family life, human relationships, education, careers, freedom, and democracy (to name just a few examples). “Computer and information ethics”, in the broadest sense of this phrase, can be understood as that branch of applied ethics which studies and analyzes such social and ethical impacts of ICT. The present essay concerns this broad new field of applied ethics.

The more specific term “computer ethics” has been used to refer to applications by professional philosophers of traditional Western theories like utilitarianism, Kantianism, or virtue ethics, to ethical cases that significantly involve computers and computer networks. “Computer ethics” also has been used to refer to a kind of professional ethics in which computer professionals apply codes of ethics and standards of good practice within their profession. In addition, other more specific names, like “cyberethics” and “Internet ethics”, have been used to refer to aspects of computer ethics associated with the Internet.

During the past several decades, the robust and rapidly growing field of computer and information ethics has generated new university courses, research professorships, research centers, conferences, workshops, professional organizations, curriculum materials, books and journals.


1. Some Historical Milestones

1.1 The Foundation of Computer and Information Ethics

In the mid 1940s, innovative developments in science and philosophy led to the creation of a new branch of ethics that would later be called “computer ethics” or “information ethics”. The founder of this new philosophical field was the American scholar Norbert Wiener, a professor of mathematics and engineering at MIT. During the Second World War, together with colleagues in America and Great Britain, Wiener helped to develop electronic computers and other new and powerful information technologies. While engaged in this war effort, Wiener and colleagues created a new branch of applied science that Wiener named “cybernetics” (from the Greek word for the pilot of a ship). Even while the War was raging, Wiener foresaw enormous social and ethical implications of cybernetics combined with electronic computers. He predicted that, after the War, the world would undergo “a second industrial revolution” — an “automatic age” with “enormous potential for good and for evil” that would generate a staggering number of new ethical challenges and opportunities.

When the War ended, Wiener wrote the book Cybernetics (1948) in which he described his new branch of applied science and identified some social and ethical implications of electronic computers. Two years later he published The Human Use of Human Beings (1950), a book in which he explored a number of ethical issues that computer and information technology would likely generate. The issues that he identified in those two books, plus his later book God and Golem, Inc. (1963), included topics that are still important today: computers and security, computers and unemployment, responsibilities of computer professionals, computers for persons with disabilities, computers and religion, information networks and globalization, virtual communities, teleworking, merging of human bodies with machines, robot ethics, artificial intelligence, and a number of other subjects. (See Bynum 2000, 2004, 2005, 2006.)

Although he coined the name “cybernetics” for his new science, Wiener apparently did not see himself as also creating a new branch of ethics. As a result, he did not coin a name like “computer ethics” or “information ethics”. These terms came into use decades later. (See the discussion below.) In spite of this, Wiener's three relevant books (1948, 1950, 1963) do lay down a powerful foundation, and do use an effective methodology, for today's field of computer and information ethics. His thinking, however, was far ahead of other scholars; and, at the time, many people considered him to be an eccentric scientist who was engaging in flights of fantasy about ethics. Apparently, no one — not even Wiener himself — recognized the profound importance of his ethics achievements; and nearly two decades would pass before some of the social and ethical impacts of information technology, which Wiener had predicted in the late 1940s, would become obvious to other scholars and to the general public.

In The Human Use of Human Beings, Wiener explored some likely effects of information technology upon key human values like life, health, happiness, abilities, knowledge, freedom, security, and opportunities. The metaphysical ideas and analytical methods that he employed were so powerful and wide-ranging that they could be used effectively for identifying, analyzing and resolving social and ethical problems associated with all kinds of information technology, including, for example, computers and computer networks; radio, television and telephones; news media and journalism; even books and libraries. Because of the breadth of Wiener's concerns and the applicability of his ideas and methods to every kind of information technology, the term “information ethics” is an apt name for the new field of ethics that he founded. As a result, the term “computer ethics”, as it is typically used today, names only a subfield of Wiener's much broader concerns.[1]

In laying down a foundation for information ethics, Wiener developed a cybernetic view of human nature and society, which led him to an ethically suggestive account of the purpose of a human life. Based upon this, he adopted “great principles of justice” that he believed all societies ought to follow. These powerful ethical concepts enabled Wiener to analyze information ethics issues of all kinds.

A cybernetic view of human nature

Wiener's cybernetic understanding of human nature stressed the physical structure of the human body and the remarkable potential for learning and creativity that human physiology makes possible. While explaining human intellectual potential, he regularly compared the human body to the physiology of less intelligent creatures like insects:

Cybernetics takes the view that the structure of the machine or of the organism is an index of the performance that may be expected from it. The fact that the mechanical rigidity of the insect is such as to limit its intelligence while the mechanical fluidity of the human being provides for his almost indefinite intellectual expansion is highly relevant to the point of view of this book. … man's advantage over the rest of nature is that he has the physiological and hence the intellectual equipment to adapt himself to radical changes in his environment. The human species is strong only insofar as it takes advantage of the innate, adaptive, learning faculties that its physiological structure makes possible. (Wiener 1954, pp. 57-58, italics in the original)

Given the physiology of human beings, it is possible for them to take in a wide diversity of information from the external world, access information about conditions and events within their own bodies, and process all that information in ways that constitute reasoning, calculating, wondering, deliberating, deciding and many other intellectual activities. Wiener concluded that the purpose of a human life is to flourish as the kind of information-processing organisms that humans naturally are:

I wish to show that the human individual, capable of vast learning and study, which may occupy almost half of his life, is physically equipped, as the ant is not, for this capacity. Variety and possibility are inherent in the human sensorium — and are indeed the key to man's most noble flights — because variety and possibility belong to the very structure of the human organism. (Wiener 1954, pp. 51-52)

Underlying metaphysics

Wiener's account of human nature presupposed a metaphysical view of the universe that considers the world and all the entities within it, including humans, to be combinations of matter-energy and information. Everything in the world is a mixture of both of these, and thinking, according to Wiener, is actually a kind of information processing. Consequently, the brain

does not secrete thought “as the liver does bile”, as the earlier materialists claimed, nor does it put it out in the form of energy, as the muscle puts out its activity. Information is information, not matter or energy. No materialism which does not admit this can survive at the present day. (Wiener 1948, p. 155)

According to Wiener's metaphysical view, everything in the universe comes into existence, persists, and then disappears because of the continuous mixing and mingling of information and matter-energy. Living organisms, including human beings, are actually patterns of information that persist through an ongoing exchange of matter-energy. Thus, he says of human beings,

We are but whirlpools in a river of ever-flowing water. We are not stuff that abides, but patterns that perpetuate themselves. (Wiener 1954, p. 96)

The individuality of the body is that of a flame…of a form rather than of a bit of substance. (Wiener 1954, p. 102)

Using the language of today's “information age” we would say that, according to Wiener, human beings are “information objects”; and their intellectual capacities, as well as their personal identities, are dependent upon persisting patterns of information and information processing within the body, rather than on specific bits of matter-energy.

Justice and human flourishing

According to Wiener, for human beings to flourish they must be free to engage in creative and flexible actions and thereby maximize their full potential as intelligent, decision-making beings in charge of their own lives. This is the purpose of a human life. Because people have various levels of talent and possibility, however, one person's achievements will be different from those of others. It is possible, though, to lead a good human life — to flourish — in an indefinitely large number of ways; for example, as a diplomat, scientist, teacher, nurse, doctor, soldier, housewife, midwife, musician, artist, tradesman, artisan, and so on.

This understanding of the purpose of a human life led Wiener to adopt what he called “great principles of justice” upon which society should be built. He believed that adherence to those principles by a society would maximize a person's ability to flourish through variety and flexibility of human action. Although Wiener stated his “great principles”, he did not assign names to them. For purposes of easy reference, let us call them “The Principle of Freedom”, “The Principle of Equality” and “The Principle of Benevolence”. Using Wiener's own words yields the following list of “great principles” (1954, pp. 105-106):

The Principle of Freedom

Justice requires “the liberty of each human being to develop in his freedom the full measure of the human possibilities embodied in him.”

The Principle of Equality

Justice requires “the equality by which what is just for A and B remains just when the positions of A and B are interchanged.”

The Principle of Benevolence

Justice requires “a good will between man and man that knows no limits short of those of humanity itself.”

Given Wiener's cybernetic account of human nature and society, it follows that people are fundamentally social beings, and that they can reach their full potential only when they are part of a community of similar beings. Society, therefore, is essential to a good human life. Despotic societies, however, actually stifle human freedom; and indeed they violate all three of the “great principles of justice”. For this reason, Wiener explicitly adopted a fourth principle of justice to assure that the first three would not be violated. Let us call this additional principle “The Principle of Minimum Infringement of Freedom”:

The Principle of Minimum Infringement of Freedom

What compulsion the very existence of the community and the state may demand must be exercised in such a way as to produce no unnecessary infringement of freedom. (1954, p. 106)

A refutation of ethical relativism

If one grants Wiener's account of a good society and of human nature, it follows that a wide diversity of cultures — with different customs, languages, religions, values and practices — could provide a context in which humans can flourish. Sometimes ethical relativists use the existence of different cultures as proof that there is not — and could not be — an underlying ethical foundation for societies all around the globe. In response to such relativism, Wiener could argue that, given his understanding of human nature and the purpose of a human life, we can embrace and welcome a rich variety of cultures and practices while still advocating adherence to “the great principles of justice”. Those principles offer a cross-cultural foundation for ethics, even though they leave room for immense cultural diversity. The one restriction that Wiener would require in any society is that it must provide a context where humans can realize their full potential as sophisticated information-processing agents, making decisions and choices, and thereby taking responsibility for their own lives. Wiener believed that this is possible only where significant freedom, equality and human compassion prevail.

Methodology in information ethics

Because Wiener did not think of himself as creating a new branch of ethics, he did not provide metaphilosophical comments about what he was doing while analyzing an information ethics issue or case. Instead, he plunged directly into his analyses. Consequently, if we want to know about Wiener's method of analysis, we need to observe what he does, rather than look for any metaphilosophical commentary upon his own procedures.

When observing Wiener's way of analyzing information ethics issues and trying to resolve them, we find — for example, in The Human Use of Human Beings — that he tries to assimilate new cases by applying already existing, ethically acceptable laws, rules, and practices. In any given society, there is a network of existing practices, laws, rules and principles that govern human behavior within that society. These “policies” — to borrow a helpful word from Moor (1985) — constitute a “received policy cluster” (see Bynum and Schubert 1997); and in a reasonably just society, they can serve as a good starting point for developing an answer to any information ethics question. Wiener's methodology is to combine the “received policy cluster” of one's society with his account of human nature, plus his “great principles of justice”, plus critical skills in clarifying vague or ambiguous language. In this way, he achieved a very effective method for analyzing information ethics issues. Borrowing from Moor's later, and very apt, description of computer ethics methodology (Moor 1985), we can describe Wiener's methodology as follows:

  1. Identify an ethical question or case regarding the integration of information technology into society. Typically this focuses upon technology-generated possibilities that could affect (or are already affecting) life, health, security, happiness, freedom, knowledge, opportunities, or other key human values.
  2. Clarify any ambiguous or vague ideas or principles that may apply to the case or the issue in question.
  3. If possible, apply already existing, ethically acceptable principles, laws, rules, and practices (the “received policy cluster”) that govern human behavior in the given society.
  4. If ethically acceptable precedents, traditions and policies are insufficient to settle the question or deal with the case, use the purpose of a human life plus the great principles of justice to find a solution that fits as well as possible into the ethical traditions of the given society.

In an essentially just society — that is, in a society where the “received policy cluster” is reasonably just — this method of analyzing and resolving information ethics issues will likely result in ethically good solutions that can be assimilated into the society.

Note that this way of doing information ethics does not require the expertise of a trained philosopher (although such expertise might prove to be helpful in many situations). Any adult who functions successfully in a reasonably just society is likely to be familiar with the existing customs, practices, rules and laws that govern a person's behavior in that society and enable one to tell whether a proposed action or policy would be accepted as ethical. So those who must cope with the introduction of new information technology — whether they are computer professionals, business people, workers, teachers, parents, public-policy makers, or others — can and should engage in information ethics by helping to integrate new information technology into society in an ethically acceptable way. Information ethics, understood in this very broad sense, is too important to be left only to information professionals or to philosophers.

Wiener's information ethics interests, ideas and methods were very broad, covering not only topics in the specific field of “computer ethics”, as we would call it today, but also issues in related areas that, today, are called “agent ethics”, “Internet ethics”, and “nanotechnology ethics”. The purview of Wiener's ideas and methods is even broad enough to encompass subfields like journalism ethics, library ethics, and the ethics of bioengineering.

Even in the late 1940s, Wiener made it clear that, on his view, the integration into society of the newly invented computing and information technology would lead to the remaking of society — to “the second industrial revolution” — “the automatic age”. It would affect every walk of life, and would be a multi-faceted, on-going process requiring decades of effort. In Wiener's own words, the new information technology had placed human beings “in the presence of another social potentiality of unheard-of importance for good and for evil.” (1948, p. 27) However, because he did not think of himself as creating a new branch of ethics, Wiener did not coin names, such as “computer ethics” or “information ethics”, to describe what he was doing. These terms — beginning with “computer ethics” — came into common use years later, starting in the mid 1970s with the work of Walter Maner.

Today, the “information age” that Wiener predicted half a century ago has come into existence; and the metaphysical and scientific foundation for information ethics that he laid down continues to provide insight and effective guidance for understanding and resolving ethical challenges engendered by information technologies of all kinds.

1.2 Defining Computer Ethics

In 1976, nearly three decades after the publication of Wiener's book Cybernetics, Walter Maner noticed that the ethical questions and problems considered in his Medical Ethics course at Old Dominion University often became more complicated or significantly altered when computers got involved. Sometimes the addition of computers, it seemed to Maner, actually generated wholly new ethics problems that would not have existed if computers had not been invented. He concluded that there should be a new branch of applied ethics similar to already existing fields like medical ethics and business ethics; and he decided to name the proposed new field “computer ethics”. (At that time, Maner did not know about the computer ethics works of Norbert Wiener.) He defined the proposed new field as one that studies ethical problems “aggravated, transformed or created by computer technology”.

Maner developed an experimental computer ethics course designed primarily for students in university-level computer science programs. His course was a success, and students at his university wanted him to teach it regularly. He complied with their wishes and also created, in 1978, a “starter kit” on teaching computer ethics, which he prepared for dissemination to attendees of workshops that he ran and speeches that he gave at philosophy conferences and computing science conferences in America. In 1980, Helvetia Press and the National Information and Resource Center on Teaching Philosophy published Maner's computer ethics “starter kit” as a monograph (Maner 1980). It contained curriculum materials and pedagogical advice for university teachers, along with a rationale for offering such a course in a university, suggested course descriptions for university catalogs, a list of course objectives, teaching tips, and discussions of topics like privacy and confidentiality, computer crime, computer decisions, technological dependence and professional codes of ethics.

During the early 1980s, Maner's Starter Kit was widely disseminated by Helvetia Press to colleges and universities in America and elsewhere. Meanwhile Maner continued to conduct workshops and teach courses in computer ethics. As a result, a number of scholars, especially philosophers and computer scientists, were introduced to computer ethics because of Maner's trailblazing efforts.

The “uniqueness debate”

While Maner was developing his new computer ethics course in the mid-to-late 1970s, a colleague of his in the Philosophy Department at Old Dominion University, Deborah Johnson, became interested in his proposed new field. She was especially interested in Maner's view that computers generate wholly new ethical problems, for she did not believe that this was true. As a result, Maner and Johnson began discussing ethics cases that allegedly involved new problems brought about by computers. In these discussions, Johnson granted that computers did indeed transform old ethics problems in interesting and important ways — that is, “give them a new twist” — but she did not agree that computers generated ethically unique problems that had never been seen before. The resulting Maner-Johnson discussion initiated a fruitful series of comments and publications on the nature and uniqueness of computer ethics — a series of scholarly exchanges that started with Maner and Johnson and later spread to other scholars. The following passage, from Maner's ETHICOMP95 keynote address, drew a number of other people into the discussion:

I have tried to show that there are issues and problems that are unique to computer ethics. For all of these issues, there was an essential involvement of computing technology. Except for this technology, these issues would not have arisen, or would not have arisen in their highly altered form. The failure to find satisfactory non-computer analogies testifies to the uniqueness of these issues. The lack of an adequate analogy, in turn, has interesting moral consequences. Normally, when we confront unfamiliar ethical problems, we use analogies to build conceptual bridges to similar situations we have encountered in the past. Then we try to transfer moral intuitions across the bridge, from the analog case to our current situation. Lack of an effective analogy forces us to discover new moral values, formulate new moral principles, develop new policies, and find new ways to think about the issues presented to us. (Maner 1996, p. 152)

Over the decade that followed this provocative passage, the extended “uniqueness debate” led to a number of useful contributions to computer and information ethics. (For some example publications, see Johnson 1985, 1994, 1999, 2001; Maner 1980, 1996, 1999; Gorniak-Kocikowska 1996; Tavani 2002, 2005; Himma 2003; Floridi and Sanders 2004; Mather 2005; and Bynum 2006, 2007.)

An agenda-setting textbook

By the early 1980s, Johnson had joined the staff of Rensselaer Polytechnic Institute and had secured a grant to prepare a set of teaching materials — pedagogical modules concerning computer ethics — that turned out to be very successful. She incorporated them into a textbook, Computer Ethics, which was published in 1985 (Johnson 1985). On page 1, she noted that computers “pose new versions of standard moral problems and moral dilemmas, exacerbating the old problems, and forcing us to apply ordinary moral norms in uncharted realms.” She did not grant Maner's claim, however, that computers create wholly new ethical problems. Instead, she described computer ethics issues as old ethical problems that are “given a new twist” by computer technology.

Johnson's book Computer Ethics was the first major textbook in the field, and it quickly became the primary text used in computer ethics courses offered at universities in English-speaking countries. For more than a decade, her textbook set the computer ethics research agenda on topics such as ownership of software and intellectual property, computing and privacy, responsibility of computer professionals, and fair distribution of technology and human power. In later editions (1994, 2001), Johnson added new ethical topics like “hacking” into people's computers without their permission, computer technology for persons with disabilities, and the Internet's impact upon democracy.

Also in later editions of Computer Ethics, Johnson continued the “uniqueness-debate” discussion, noting for example that new information technologies provide new ways to “instrument” human actions. Because of this, she agreed with Maner that new specific ethics questions had been generated by computer technology — for example, “Should ownership of software be protected by law?” or “Do huge databases of personal information threaten privacy?” — but she argued that such questions are merely “new species of old moral issues”, such as protection of human privacy or ownership of intellectual property. They are not, she insisted, wholly new ethics problems requiring additions to traditional ethical theories, as Maner had claimed (Maner 1996).

1.3 An Influential Computer Ethics Theory

The year 1985 was a “watershed year” in the history of computer ethics, not only because of the appearance of Johnson's agenda-setting textbook, but also because James Moor's classic paper, “What Is Computer Ethics?” was published in a special computer-ethics issue of the journal Metaphilosophy.[2] There Moor provided an account of the nature of computer ethics that was broader and more ambitious than the definitions of Maner or Johnson. He went beyond descriptions and examples of computer ethics problems by offering an explanation of why computing technology raises so many ethical questions compared to other kinds of technology. Moor's explanation of the revolutionary power of computer technology was that computers are “logically malleable”:

Computers are logically malleable in that they can be shaped and molded to do any activity that can be characterized in terms of inputs, outputs and connecting logical operations … . Because logic applies everywhere, the potential applications of computer technology appear limitless. The computer is the nearest thing we have to a universal tool. Indeed, the limits of computers are largely the limits of our own creativity. (Moor, 1985, 269)

The logical malleability of computer technology, said Moor, makes it possible for people to do a vast number of things that they were not able to do before. Since no one could do them before, the question never arose as to whether one ought to do them. In addition, because they could not be done before, no laws or standards of good practice or specific ethical rules were established to govern them. Moor called such situations “policy vacuums”, and some of them might generate “conceptual muddles”:

A typical problem in computer ethics arises because there is a policy vacuum about how computer technology should be used. Computers provide us with new capabilities and these in turn give us new choices for action. Often, either no policies for conduct in these situations exist or existing policies seem inadequate. A central task of computer ethics is to determine what we should do in such cases, that is, formulate policies to guide our actions … . One difficulty is that along with a policy vacuum there is often a conceptual vacuum. Although a problem in computer ethics may seem clear initially, a little reflection reveals a conceptual muddle. What is needed in such cases is an analysis that provides a coherent conceptual framework within which to formulate a policy for action. (Moor, 1985, 266)

In the late 1980s, Moor's “policy vacuum” explanation of the need for computer ethics and his account of the revolutionary “logical malleability” of computer technology quickly became very influential among a growing number of computer ethics scholars. He introduced further ideas in the 1990s, including the important notion of core human values: according to Moor, some human values — such as life, health, happiness, security, resources, opportunities, and knowledge — are so important to the continued survival of any community that essentially all communities do value them. Indeed, if a community did not value the “core values”, it soon would cease to exist. Moor used “core values” to examine computer ethics topics like privacy and security (Moor 1997), and to add an account of justice, which he called “just consequentialism” (Moor, 1999), a theory that combines “core values” and consequentialism with Bernard Gert's deontological notion of “moral impartiality” using “the blindfold of justice” (Gert, 1998).

Moor's approach to computer ethics is a practical theory that provides a broad perspective on the nature of the “information revolution”. By using the notions of “logical malleability”, “policy vacuums”, “conceptual muddles”, “core values” and “just consequentialism”, he provides the following problem-solving method:

  1. Identify a policy vacuum generated by computing technology.
  2. Eliminate any conceptual muddles.
  3. Use the core values and the ethical resources of just consequentialism to revise existing — but inadequate — policies, or else to create new policies that justly eliminate the vacuum and resolve the original ethical issue.

The third step is accomplished by combining deontology and consequentialism — which traditionally have been considered incompatible rival ethics theories — to achieve the following practical results:

If the blindfold of justice is applied to [suggested] computing policies, some policies will be regarded as unjust by all rational, impartial people, some policies will be regarded as just by all rational, impartial people, and some will be in dispute. This approach is good enough to provide just constraints on consequentialism. We first require that all computing policies pass the impartiality test. Clearly, our computing policies should not be among those that every rational, impartial person would regard as unjust. Then we can further select policies by looking at their beneficial consequences. We are not ethically required to select policies with the best possible outcomes, but we can assess the merits of the various policies using consequentialist considerations and we may select very good ones from those that are just. (Moor, 1999, 68)

1.4 Computing and Human Values

Beginning with the computer ethics works of Norbert Wiener (1948, 1950, 1963), a common thread has run through much of the history of computer ethics; namely, concern for protecting and advancing central human values, such as life, health, security, happiness, freedom, knowledge, resources, power and opportunity. Thus, most of the specific issues that Wiener dealt with are cases of defending or advancing such values. For example, by working to prevent massive unemployment caused by robotic factories, Wiener tried to preserve security, resources and opportunities for factory workers. Similarly, by arguing against the use of decision-making war-game machines, Wiener tried to diminish threats to security and peace.

This “human-values approach” to computer ethics has been very fruitful. It has served, for example, as an organizing theme for major computer-ethics conferences, such as the 1991 National Conference on Computing and Values at Southern Connecticut State University (see the section below on “exponential growth”), which was devoted to the impacts of computing upon security, property, privacy, knowledge, freedom and opportunities.[3] In the late 1990s, a similar approach to computer ethics, called “value-sensitive computer design”, emerged based upon the insight that potential computer-ethics problems can be avoided, while new technology is under development, by anticipating possible harm to human values and designing new technology from the very beginning in ways that prevent such harm. (See, for example, Friedman and Nissenbaum, 1996; Friedman, 1997; Brey, 2000; Introna and Nissenbaum, 2000; Introna, 2005a; Flanagan, et al., 2007.)

1.5 Professional Ethics and Computer Ethics

In the early 1990s, a different emphasis within computer ethics was advocated by Donald Gotterbarn. He believed that computer ethics should be seen as a professional ethics devoted to the development and advancement of standards of good practice and codes of conduct for computing professionals. Thus, in 1991, in the article “Computer Ethics: Responsibility Regained”, Gotterbarn said:

There is little attention paid to the domain of professional ethics — the values that guide the day-to-day activities of computing professionals in their role as professionals. By computing professional I mean anyone involved in the design and development of computer artifacts. … The ethical decisions made during the development of these artifacts have a direct relationship to many of the issues discussed under the broader concept of computer ethics. (Gotterbarn, 1991)

Throughout the 1990s, with this aspect of computer ethics in mind, Gotterbarn worked with other professional-ethics advocates (for example, Keith Miller, Dianne Martin, Chuck Huff and Simon Rogerson) in a variety of projects to advance professional responsibility among computer practitioners. Even before 1991, Gotterbarn had been part of a committee of the ACM (Association for Computing Machinery) to create the third version of that organization's “Code of Ethics and Professional Conduct” (adopted by the ACM in 1992, see Anderson, et al., 1993). Later, Gotterbarn and colleagues in the ACM and the Computer Society of the IEEE (Institute of Electrical and Electronic Engineers) developed licensing standards for software engineers. In addition, Gotterbarn headed a joint taskforce of the IEEE and ACM to create the “Software Engineering Code of Ethics and Professional Practice” (adopted by those organizations in 1999; see Gotterbarn, Miller and Rogerson, 1997).

In the late 1990s, Gotterbarn created the Software Engineering Ethics Research Institute (SEERI) at East Tennessee State University (see http://seeri.etsu.edu/); and in the early 2000s, together with Simon Rogerson, he developed a computer program called SoDIS (Software Development Impact Statements) to assist individuals, companies and organizations in the preparation of ethical “stakeholder analyses” for determining likely ethical impacts of software development projects (Gotterbarn and Rogerson, 2005). These and many other projects focused attention upon professional responsibility and advanced the professionalization and ethical maturation of computing practitioners. (See the bibliography below for works by R. Anderson, D. Gotterbarn, C. Huff, C. D. Martin, K. Miller, and S. Rogerson.)

1.6 Uniqueness and Global Information Ethics

In 1995, in her ETHICOMP95 presentation “The Computer Revolution and the Problem of Global Ethics”, Krystyna Górniak-Kocikowska made a startling prediction (see Górniak, 1996). She argued that computer ethics eventually will evolve into a global ethic applicable in every culture on earth. According to this “Górniak hypothesis”, regional ethical theories like Europe's Benthamite and Kantian systems, as well as the diverse ethical systems embedded in other cultures of the world, all derive from “local” histories and customs and are unlikely to be applicable world-wide. Computer and information ethics, on the other hand, Górniak argued, has the potential to provide a global ethic suitable for the Information Age:

  • a new ethical theory is likely to emerge from computer ethics in response to the computer revolution. The newly emerging field of information ethics, therefore, is much more important than even its founders and advocates believe. (p. 177)
  • The very nature of the Computer Revolution indicates that the ethic of the future will have a global character. It will be global in a spatial sense, since it will encompass the entire globe. It will also be global in the sense that it will address the totality of human actions and relations. (p. 179)
  • Computers do not know borders. Computer networks … have a truly global character. Hence, when we are talking about computer ethics, we are talking about the emerging global ethic. (p. 186)
  • the rules of computer ethics, no matter how well thought through, will be ineffective unless respected by the vast majority of or maybe even all computer users. … In other words, computer ethics will become universal, it will be a global ethic. (p. 187)

The provocative “Górniak hypothesis” was a significant contribution to the ongoing “uniqueness debate”, and it reinforced Maner's claim — which he made at the same ETHICOMP95 conference in his keynote address — that information technology “forces us to discover new moral values, formulate new moral principles, develop new policies, and find new ways to think about the issues presented to us.” (Maner 1996, p. 152) Górniak did not speculate about the globally relevant concepts and principles that would evolve from information ethics. She merely predicted that such a theory would emerge over time because of the global nature of the Internet and the resulting ethics conversation among all the cultures of the world.

1.7 Information Ethics

Some important recent developments, which began after 1995, seem to be confirming Górniak's hypothesis — in particular, the information ethics theory of Luciano Floridi (see, for example, Floridi, 1999 and Floridi, 2005a) and the “Flourishing Ethics” theory that combines ideas from Aristotle, Wiener, Moor and Floridi (see Section 1.8 below, and also Bynum, 2006).

In developing his information ethics theory (henceforth FIE), Floridi argued that the purview of computer ethics — indeed of ethics in general — should be widened to include much more than simply human beings, their actions, intentions and characters. He offered FIE as another “macroethics” (his term) which is similar to utilitarianism, deontologism, contractualism, and virtue ethics, because it is intended to be applicable to all ethical situations. On the other hand, FIE is different from these more traditional Western theories because it is not intended to replace them, but rather to supplement them with further ethical considerations that go beyond the traditional theories, and that can be overridden, sometimes, by traditional ethical considerations. (Floridi, 2006)

The name ‘information ethics’ is appropriate to Floridi's theory, because it treats everything that exists as “informational” objects or processes:

[All] entities will be described as clusters of data, that is, as informational objects. More precisely, [any existing entity] will be a discrete, self-contained, encapsulated package containing

  1. the appropriate data structures, which constitute the nature of the entity in question, that is, the state of the object, its unique identity and its attributes; and
  2. a collection of operations, functions, or procedures, which are activated by various interactions or stimuli (that is, messages received from other objects or changes within itself) and correspondingly define how the object behaves or reacts to them.

At this level of abstraction, informational systems as such, rather than just living systems in general, are raised to the role of agents and patients of any action, with environmental processes, changes and interactions equally described informationally. (Floridi 2006, 9-10)
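Floridi's characterization of an entity as a self-contained package of data structures plus operations reads almost exactly like the object-oriented model of computation. The following sketch is illustrative only, with hypothetical names, and is not Floridi's own formalism:

```python
# Illustrative sketch (hypothetical names): an entity, Floridi-style, as a
# self-contained package of (1) data structures and (2) operations.

class InformationalObject:
    def __init__(self, identity, attributes):
        # (1) data structures: the state, unique identity and attributes
        self.identity = identity
        self.attributes = dict(attributes)

    def receive(self, message):
        # (2) operations activated by messages from other objects
        if message == "damage":
            # destruction of characteristic data structures -- roughly what
            # Floridi counts as "entropy" in the infosphere
            self.attributes.clear()
        return self.attributes

tree = InformationalObject("oak-001", {"species": "oak", "height_m": 12})
tree.receive("damage")
assert tree.attributes == {}
```

On this analogy, "entropy" is simply the loss or corruption of an object's characteristic state, which is why Floridi can describe damage to any entity, living or not, in informational terms.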

Since everything that exists, according to FIE, is an informational object or process, he calls the totality of all that exists — the universe considered as a whole — “the infosphere”. Objects and processes in the infosphere can be significantly damaged or destroyed by altering their characteristic data structures. Such damage or destruction Floridi calls “entropy”, and it results in partial “impoverishment of the infosphere”. Entropy in this sense is an evil that should be avoided or minimized, and Floridi offers four “fundamental principles”:

  1. Entropy ought not to be caused in the infosphere (null law).
  2. Entropy ought to be prevented in the infosphere.
  3. Entropy ought to be removed from the infosphere.
  4. The flourishing of informational entities as well as the whole infosphere ought to be promoted by preserving, cultivating and enriching their properties.

FIE is based upon the idea that everything in the infosphere has at least a minimum worth that should be ethically respected, even if that worth can be overridden by other considerations:

FIE suggests that there is something even more elemental than life, namely being — that is, the existence and flourishing of all entities and their global environment — and something more fundamental than suffering, namely entropy … . FIE holds that being/information has an intrinsic worthiness. It substantiates this position by recognizing that any informational entity has a Spinozian right to persist in its own status, and a Constructionist right to flourish, i.e., to improve and enrich its existence and essence. (Floridi 2006, p. 11)

By construing every existing entity in the universe as “informational”, with at least a minimal moral worth, FIE can supplement traditional ethical theories and go beyond them by shifting the focus of one's ethical attention away from the actions, characters, and values of human agents toward the “evil” (harm, dissolution, destruction) — “entropy” — suffered by objects and processes in the infosphere. With this approach, every existing entity — humans, other animals, plants, organizations, even non-living artifacts, electronic objects in cyberspace, pieces of intellectual property — can be interpreted as potential agents that affect other entities, and as potential patients that are affected by other entities. In this way, Floridi treats FIE as a “patient-based” non-anthropocentric ethical theory to be used in addition to the traditional “agent-based” anthropocentric ethical theories like utilitarianism, deontologism and virtue theory.

FIE, with its emphasis on “preserving and enhancing the infosphere”, enables Floridi to provide, among other things, an insightful and practical ethical theory of robot behavior and the behavior of other “artificial agents” like softbots and cyborgs. (See, for example, Floridi and Sanders, 2004.) FIE is an important component of a more ambitious project covering the entire new field of the Philosophy of Information.

1.8 Exponential Growth

The paragraphs above describe key contributions to “the history of ideas” in information and computer ethics, but the history of a discipline includes much more. The birth and development of a new academic field require cooperation among a “critical mass” of scholars, plus the creation of university courses, research centers, conferences, and academic journals. In this regard, the year 1985 was pivotal for information and computer ethics. The publication of Johnson's textbook, Computer Ethics, plus a special issue of the journal Metaphilosophy (October 1985) — including especially Moor's article “What Is Computer Ethics?” — provided excellent curriculum materials and a conceptual foundation for the field. In addition, Maner's earlier trailblazing efforts, and those of other people who had been inspired by Maner, had generated a “ready-made audience” of enthusiastic computer science and philosophy scholars. The stage was set for exponential growth.

In the United States, rapid growth occurred in information and computer ethics beginning in the mid-1980s. In 1987 the Research Center on Computing & Society (RCCS) was founded at Southern Connecticut State University. Shortly thereafter, the Director (the present author) joined with Walter Maner to organize “the National Conference on Computing and Values” (NCCV), an NSF-funded conference to bring together computer scientists, philosophers, public policy makers, lawyers, journalists, sociologists, psychologists, business people, and others. The goal was to examine and push forward some of the major sub-areas of information and computer ethics; namely, computer security, computers and privacy, ownership of intellectual property, computing for persons with disabilities, and the teaching of computer ethics. More than a dozen scholars from several different disciplines joined with Bynum and Maner to plan NCCV, which occurred in August 1991 at Southern Connecticut State University. Four hundred people from thirty-two American states and seven other countries attended; and the conference generated a wealth of new computer ethics materials — monographs, video programs and an extensive bibliography — that were disseminated to hundreds of colleges and universities during the following two years.

In that same decade, professional ethics advocates, such as Donald Gotterbarn, Keith Miller and Dianne Martin — and professional organizations, such as Computer Professionals for Social Responsibility (www.cpsr.org), the Electronic Frontier Foundation (www.eff.org), and the Special Interest Group on Computing and Society (SIGCAS) of the ACM — spearheaded projects focused upon professional responsibility for computer practitioners. Information and computer ethics became a required component of undergraduate computer science programs that were nationally accredited by the Computer Sciences Accreditation Board. In addition, the annual “Computers, Freedom and Privacy” conferences began in 1991 (see www.cfp.org), and the ACM adopted a new version of its Code of Ethics and Professional Conduct in 1992.

In 1995, rapid growth of information and computer ethics spread to Europe when the present author joined with Simon Rogerson of De Montfort University in Leicester, England to create the Centre for Computing and Social Responsibility (www.ccsr.cse.dmu.ac.uk) and to organize the first computer ethics conference in Europe, ETHICOMP95. That conference included attendees from fourteen different countries, mostly in Europe, and it became a key factor in generating a “critical mass” of computer ethics scholars in Europe. After 1995, every 18 months, another ETHICOMP conference was held in a different European country, including Spain (1996), the Netherlands (1998), Italy (1999), Poland (2001), Portugal (2002), Greece (2004) and Sweden (2005). In addition, in 1999, with assistance from Bynum and Rogerson, the Australian scholars John Weckert and Christopher Simpson created the Australian Institute of Computer Ethics (aice.net.au) and organized AICEC99 (Melbourne, Australia), which was the first international computer ethics conference south of the equator. In 2007 Rogerson and Bynum also headed ETHICOMP2007 in Tokyo, Japan and an ETHICOMP “Working Conference” in Kunming, China to help spread interest in information ethics to Asia.

A central figure in the rapid growth of information and computer ethics in Europe was Simon Rogerson. In addition to creating the Centre for Computing and Social Responsibility at De Montfort University and co-heading the influential ETHICOMP conferences, he also (1) added computer ethics to De Montfort University's curriculum, (2) created a graduate program with advanced computer ethics degrees, including the PhD, and (3) co-founded and co-edited (with Ben Fairweather) two computer ethics journals — The Journal of Information, Communication and Ethics in Society in 2003 (see the link in the Other Internet Resources section), and the electronic journal The ETHICOMP Journal in 2004 (see Other Internet Resources). Rogerson also served on the Information Technology Committee of the British Parliament, and participated in several computer ethics projects with agencies of the European Union.

Other important computer ethics developments in Europe in the late 1990s and early 2000s included, for example, (1) Luciano Floridi's creation of the Information Ethics Research Group at Oxford University in the mid 1990s; (2) Jeroen van den Hoven's founding, in 1997, of the CEPE (Computer Ethics: Philosophical Enquiry) series of computer ethics conferences, which occur alternately in Europe and America; (3) van den Hoven's creation of the journal Ethics and Information Technology in 1999; (4) Rafael Capurro's creation of the International Center for Information Ethics (icie.zkm.de) in 1999; (5) Capurro's creation of the journal International Review of Information Ethics in 2004; and (6) Bernd Carsten Stahl's creation of The International Journal of Technology and Human Interaction in 2005.

In summary, since 1985 computer ethics developments have proliferated exponentially with new conferences and conference series, new organizations, new research centers, new journals, textbooks, web sites, university courses, university degree programs, and distinguished professorships. Additional “sub-fields” and topics in information and computer ethics continually emerge as information technology itself grows and proliferates. Recent new topics include on-line ethics, “agent” ethics (robots, softbots), cyborg ethics (part human, part machine), the “open source movement”, electronic government, global information ethics, information technology and genetics, computing for developing countries, computing and terrorism, ethics and nanotechnology, to name only a few examples. (For specific publications and examples, see the list of selected resources below.)

Compared to many other scholarly disciplines, the field of computer ethics is very young. It has existed only since the late 1940s when Norbert Wiener created it. During the first three decades, it grew very little because Wiener's insights were far ahead of everyone else's. In the past 25 years, however, information and computer ethics has grown exponentially in the industrialized world, and the rest of the world has begun to take notice.

2. Example Topics in Computer Ethics

No matter which re-definition of computer ethics one chooses, the best way to understand the nature of the field is through some representative examples of the issues and problems that have attracted research and scholarship. Consider, for example, the following topics:

(See also the wide range of topics included in the recent anthology [Spinello and Tavani, 2001].)

2.1 Computers in the Workplace

As “universal tools” that can, in principle, perform almost any task, computers obviously pose a threat to jobs. Although they occasionally need repair, computers don't require sleep, they don't get tired, they don't go home ill or take time off for rest and relaxation. At the same time, computers are often far more efficient than humans in performing many tasks. Therefore, economic incentives to replace humans with computerized devices are very high. Indeed, in the industrialized world many workers already have been replaced by computerized devices — bank tellers, auto workers, telephone operators, typists, graphic artists, security guards, assembly-line workers, and on and on. In addition, even professionals like medical doctors, lawyers, teachers, accountants and psychologists are finding that computers can perform many of their traditional professional duties quite effectively.

The employment outlook, however, is not all bad. Consider, for example, the fact that the computer industry already has generated a wide variety of new jobs: hardware engineers, software engineers, systems analysts, webmasters, information technology teachers, computer sales clerks, and so on. Thus it appears that, in the short run, computer-generated unemployment will be an important social problem; but in the long run, information technology will create many more jobs than it eliminates.

Even when a job is not eliminated by computers, it can be radically altered. For example, airline pilots still sit at the controls of commercial airplanes; but during much of a flight the pilot simply watches as a computer flies the plane. Similarly, those who prepare food in restaurants or make products in factories may still have jobs; but often they simply push buttons and watch as computerized devices actually perform the needed tasks. In this way, it is possible for computers to cause “de-skilling” of workers, turning them into passive observers and button pushers. Again, however, the picture is not all bad because computers also have generated new jobs which require new sophisticated skills to perform — for example, “computer assisted drafting” and “keyhole” surgery.

Another workplace issue concerns health and safety. As Forester and Morrison point out [Forester and Morrison, 140-72, Chapter 8], when information technology is introduced into a workplace, it is important to consider likely impacts upon health and job satisfaction of workers who will use it. It is possible, for example, that such workers will feel stressed trying to keep up with high-speed computerized devices — or they may be injured by repeating the same physical movement over and over — or their health may be threatened by radiation emanating from computer monitors. These are just a few of the social and ethical issues that arise when information technology is introduced into the workplace.

2.2 Computer Crime

In this era of computer “viruses” and international spying by “hackers” who are thousands of miles away, it is clear that computer security is a topic of concern in the field of Computer Ethics. The problem is not so much the physical security of the hardware (protecting it from theft, fire, flood, etc.), but rather “logical security”, which Spafford, Heaphy and Ferbrache [Spafford, et al, 1989] divide into five aspects:

  1. Privacy and confidentiality
  2. Integrity — assuring that data and programs are not modified without proper authority
  3. Unimpaired service
  4. Consistency — ensuring that the data and behavior we see today will be the same tomorrow
  5. Controlling access to resources
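Of these five aspects, "integrity" in particular has a standard technical expression: comparing cryptographic digests of data before and after, so that unauthorized modification is detectable. A minimal sketch, with invented data, not drawn from Spafford et al.:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # A SHA-256 digest of the data; any modification changes the digest.
    return hashlib.sha256(data).hexdigest()

original = b"balance=1000"
baseline = fingerprint(original)  # recorded when the data is known-good

# Later, an integrity check: recompute the digest and compare.
assert fingerprint(original) == baseline          # unmodified data passes
assert fingerprint(b"balance=9000") != baseline   # tampering is detected
```

A digest mismatch shows only that the data changed, not who changed it or why; access control and audit logs (aspects 1 and 5) address those questions.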

Malicious kinds of software, or “programmed threats”, provide a significant challenge to computer security. These include “viruses”, which cannot run on their own, but rather are inserted into other computer programs; “worms” which can move from machine to machine across networks, and may have parts of themselves running on different machines; “Trojan horses” which appear to be one sort of program, but actually are doing damage behind the scenes; “logic bombs” which check for particular conditions and then execute when those conditions arise; and “bacteria” or “rabbits” which multiply rapidly and fill up the computer's memory.

Computer crimes, such as embezzlement or planting of logic bombs, are normally committed by trusted personnel who have permission to use the computer system. Computer security, therefore, must also be concerned with the actions of trusted computer users.

Another major risk to computer security is the so-called “hacker” who breaks into someone's computer system without permission. Some hackers intentionally steal data or commit vandalism, while others merely “explore” the system to see how it works and what files it contains. These “explorers” often claim to be benevolent defenders of freedom and fighters against rip-offs by major corporations or spying by government agents. These self-appointed vigilantes of cyberspace say they do no harm, and claim to be helpful to society by exposing security risks. However, every act of hacking is harmful, because any known successful penetration of a computer system requires the owner to thoroughly check for damaged or lost data and programs. Even if the hacker did indeed make no changes, the computer's owner must run through a costly and time-consuming investigation of the compromised system [Spafford, 1992].

2.3 Privacy and Anonymity

One of the earliest computer ethics topics to arouse public interest was privacy. For example, in the mid-1960s the American government already had created large databases of information about private citizens (census data, tax records, military service records, welfare records, and so on). In the US Congress, bills were introduced to assign a personal identification number to every citizen and then gather all the government's data about each citizen under the corresponding ID number. A public outcry about “big-brother government” caused Congress to scrap this plan and led the US President to appoint committees to recommend privacy legislation. In the early 1970s, major computer privacy laws were passed in the USA. Ever since then, computer-threatened privacy has remained a topic of public concern. The ease and efficiency with which computers and computer networks can be used to gather, store, search, compare, retrieve and share personal information make computer technology especially threatening to anyone who wishes to keep various kinds of “sensitive” information (e.g., medical records) out of the public domain or out of the hands of those who are perceived as potential threats. During the past decade, commercialization and rapid growth of the internet; the rise of the world-wide-web; increasing “user-friendliness” and processing power of computers; and decreasing costs of computer technology have led to new privacy issues, such as data-mining, data matching, recording of “click trails” on the web, and so on [see Tavani, 1999].
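The "data matching" mentioned above is, at bottom, a join on a shared identifier across separately collected databases. A hypothetical sketch (the identifiers and records are invented) shows why it is seen as threatening: the combined profile exists in neither database on its own.

```python
# Hypothetical records, invented for illustration: two databases collected
# separately, for different purposes, keyed by the same identifier.
medical = {"ID-123": {"condition": "diabetes"}}
purchases = {"ID-123": {"last_purchase": "glucose monitor"}}

# "Data matching": join records sharing an identifier into one profile.
profiles = {}
for key in medical.keys() & purchases.keys():
    profiles[key] = {**medical[key], **purchases[key]}

assert profiles["ID-123"] == {"condition": "diabetes",
                              "last_purchase": "glucose monitor"}
```

Neither dataset alone reveals much, but the matched profile links a medical condition to consumer behavior, which is exactly the kind of aggregation that drove the 1960s "big-brother" concerns about a single national ID number.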

The variety of privacy-related issues generated by computer technology has led philosophers and other thinkers to re-examine the concept of privacy itself. Since the mid-1960s, for example, a number of scholars have elaborated a theory of privacy defined as “control over personal information” (see, for example, [Westin, 1967], [Miller, 1971], [Fried, 1984] and [Elgesem, 1996]). On the other hand, philosophers Moor and Tavani have argued that control of personal information is insufficient to establish or protect privacy, and “the concept of privacy itself is best defined in terms of restricted access, not control” [Tavani and Moor, 2001] (see also [Moor, 1997]). In addition, Nissenbaum has argued that there is even a sense of privacy in public spaces, or circumstances “other than the intimate.” An adequate definition of privacy, therefore, must take account of “privacy in public” [Nissenbaum, 1998]. As computer technology rapidly advances — creating ever new possibilities for compiling, storing, accessing and analyzing information — philosophical debates about the meaning of “privacy” will likely continue (see also [Introna, 1997]).

Questions of anonymity on the internet are sometimes discussed in the same context with questions of privacy and the internet, because anonymity can provide many of the same benefits as privacy. For example, if someone is using the internet to obtain medical or psychological counseling, or to discuss sensitive topics (for example, AIDS, abortion, gay rights, venereal disease, political dissent), anonymity can afford protection similar to that of privacy. Similarly, both anonymity and privacy on the internet can be helpful in preserving human values such as security, mental health, self-fulfillment and peace of mind. Unfortunately, privacy and anonymity also can be exploited to facilitate unwanted and undesirable computer-aided activities in cyberspace, such as money laundering, drug trading, terrorism, or preying upon the vulnerable (see [Marx, 2001] and [Nissenbaum, 1999]).

2.4 Intellectual Property

One of the more controversial areas of computer ethics concerns the intellectual property rights connected with software ownership. Some people, like Richard Stallman who started the Free Software Foundation, believe that software ownership should not be allowed at all. He claims that all information should be free, and all programs should be available for copying, studying and modifying by anyone who wishes to do so [Stallman, 1993]. Others argue that software companies or programmers would not invest weeks and months of work and significant funds in the development of software if they could not get the investment back in the form of license fees or sales [Johnson, 1992]. Today's software industry is a multibillion dollar part of the economy; and software companies claim to lose billions of dollars per year through illegal copying (“software piracy”). Many people think that software should be ownable, but “casual copying” of personally owned programs for one's friends should also be permitted (see [Nissenbaum, 1995]). The software industry claims that millions of dollars in sales are lost because of such copying. Ownership is a complex matter, since there are several different aspects of software that can be owned and three different types of ownership: copyrights, trade secrets, and patents. One can own the following aspects of a program:

  1. The “source code” which is written by the programmer(s) in a high-level computer language like Java or C++.
  2. The “object code”, which is a machine-language translation of the source code.
  3. The “algorithm”, which is the sequence of machine commands that the source code and object code represent.
  4. The “look and feel” of a program, which is the way the program appears on the screen and interfaces with users.
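The first three of these aspects can be hard to keep apart in the abstract. The following sketch (illustrative only, using Python's standard `dis` module) shows one algorithm, summing the integers 1 through n, as human-readable source code and as the compiled bytecode that plays the role of object code:

```python
import dis

def sum_to_n(n):
    # Source code: the human-readable expression of the algorithm
    # (iterate from 1 to n, accumulating a running total).
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

# The compiled bytecode is a rough analogue of "object code": a
# machine-oriented translation of the same source.
instructions = list(dis.Bytecode(sum_to_n))

assert sum_to_n(10) == 55
assert len(instructions) > 0
```

The algorithm itself (iterate and accumulate) is distinct from both forms: the same procedure could be written in C or Java, or compiled to different bytecode, and each of these aspects can, in principle, be owned separately.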

A very controversial issue today is owning a patent on a computer algorithm. A patent provides an exclusive monopoly on the use of the patented item, so the owner of an algorithm can deny others use of the mathematical formulas that are part of the algorithm. Mathematicians and scientists are outraged, claiming that algorithm patents effectively remove parts of mathematics from the public domain, and thereby threaten to cripple science. In addition, running a preliminary “patent search” to make sure that your “new” program does not violate anyone's software patent is a costly and time-consuming process. As a result, only very large companies with big budgets can afford to run such a search. This effectively eliminates many small software companies, stifling competition and decreasing the variety of programs available to the society [The League for Programming Freedom, 1992].

2.5 Professional Responsibility

Computer professionals have specialized knowledge and often have positions with authority and respect in the community. For this reason, they are able to have a significant impact upon the world, including many of the things that people value. Along with such power to change the world comes the duty to exercise that power responsibly [Gotterbarn, 2001]. Computer professionals find themselves in a variety of professional relationships with other people [Johnson, 1994], including:

  • employer-employee
  • client-professional
  • professional-professional
  • society-professional

These relationships involve a diversity of interests, and sometimes these interests can come into conflict with each other. Responsible computer professionals, therefore, will be aware of possible conflicts of interest and try to avoid them.

Professional organizations in the USA, like the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronic Engineers (IEEE), have established codes of ethics, curriculum guidelines and accreditation requirements to help computer professionals understand and manage ethical responsibilities. For example, in 1991 a Joint Curriculum Task Force of the ACM and IEEE adopted a set of guidelines (“Curriculum 1991”) for college programs in computer science. The guidelines say that a significant component of computer ethics (in the broad sense) should be included in undergraduate education in computer science [Turner, 1991].

In addition, both the ACM and IEEE have adopted Codes of Ethics for their members. The most recent ACM Code (1992), for example, includes “general moral imperatives”, such as “avoid harm to others” and “be honest and trustworthy”. And also included are “more specific professional responsibilities” like “acquire and maintain professional competence” and “know and respect existing laws pertaining to professional work.” The IEEE Code of Ethics (1990) includes such principles as “avoid real or perceived conflicts of interest whenever possible” and “be honest and realistic in stating claims or estimates based on available data.”

The Accreditation Board for Engineering Technologies (ABET) has long required an ethics component in the computer engineering curriculum. And in 1991, the Computer Sciences Accreditation Commission/Computer Sciences Accreditation Board (CSAC/CSAB) also adopted the requirement that a significant component of computer ethics be included in any computer sciences degree granting program that is nationally accredited [Conry, 1992].

It is clear that professional organizations in computer science recognize and insist upon standards of professional responsibility for their members.

2.6 Globalization

Computer ethics today is rapidly evolving into a broader and even more important field, which might reasonably be called “global information ethics”. Global networks like the Internet and especially the world-wide-web are connecting people all over the earth. As Krystyna Gorniak-Kocikowska perceptively notes in her paper, “The Computer Revolution and the Problem of Global Ethics” [Gorniak-Kocikowska, 1996], for the first time in history, efforts to develop mutually agreed standards of conduct, and efforts to advance and defend human values, are being made in a truly global context. So, for the first time in the history of the earth, ethics and values will be debated and transformed in a context that is not limited to a particular geographic region, or constrained by a specific religion or culture. This may very well be one of the most important social developments in history. Consider just a few of the global issues:

Global Laws

If computer users in the United States, for example, wish to protect their freedom of speech on the internet, whose laws apply? Nearly two hundred countries are already interconnected by the internet, so the United States Constitution (with its First Amendment protection for freedom of speech) is just a “local law” on the internet — it does not apply to the rest of the world. How can issues like freedom of speech, control of “pornography”, protection of intellectual property, invasions of privacy, and many others be governed by law when so many countries are involved? If a citizen in a European country, for example, has internet dealings with someone in a far-away land, and the government of that land considers those dealings to be illegal, can the European be tried by the courts in the far-away country?

Global Cyberbusiness

The world is very close to having technology that can provide electronic privacy and security on the internet sufficient to safely conduct international business transactions. Once this technology is in place, there will be a rapid expansion of global “cyberbusiness”. Nations with a technological infrastructure already in place will enjoy rapid economic growth, while the rest of the world lags behind. What will be the political and economic fallout from rapid growth of global cyberbusiness? Will accepted business practices in one part of the world be perceived as “cheating” or “fraud” in other parts of the world? Will a few wealthy nations widen the already big gap between rich and poor? Will political and even military confrontations emerge?

Global Education

If inexpensive access to the global information net is provided to rich and poor alike — to poverty-stricken people in ghettos, to poor nations in the “third world”, etc. — for the first time in history, nearly everyone on earth will have access to daily news from a free press; to texts, documents and art works from great libraries and museums of the world; to political, religious and social practices of peoples everywhere. What will be the impact of this sudden and profound “global education” upon political dictatorships, isolated communities, coherent cultures, religious practices, etc.? As great universities of the world begin to offer degrees and knowledge modules via the internet, will “lesser” universities be damaged or even forced out of business?

Information Rich and Information Poor

The gap between rich and poor nations, and even between rich and poor citizens in industrialized countries, is already disturbingly wide. As educational opportunities, business and employment opportunities, medical services and many other necessities of life move more and more into cyberspace, will gaps between the rich and the poor become even worse?

Bibliography

  • Adam, A. (2000), “Gender and Computer Ethics,” Computers and Society, 30(4): 17-24.
  • Adam, A. and J. Ofori-Amanfo (2000), “Does Gender Matter in Computer Ethics?” Ethics and Information Technology, 2(1): 37-47.
  • Anderson, R., D. Johnson, D. Gotterbarn and J. Perrolle (1993), “Using the New ACM Code of Ethics in Decision Making,” Communications of the ACM, 36: 98-107.
  • Begg, M.M. (2005), “Muslim Parents Guide: Making Responsible Use of Information and Communication Technologies at Home,” Centre for Computing and Social Responsibility, De Montfort University, Leicester, UK.
  • Bohman, James (2008), “The Transformation of the Public Sphere: Political Authority, Communicative Freedom, and Internet Publics,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 66-92.
  • Brennan, G. and P. Pettit (2008), “Esteem, Identifiability, and the Internet,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 175-94.
  • Brey, P. (2001), “Disclosive Computer Ethics,” in R. Spinello and H. Tavani (eds.), Readings in CyberEthics, Sudbury, MA: Jones and Bartlett.
  • Brey, P. (2006), “Evaluating the Social and Cultural Implications of the Internet,” Computers and Society, 36(3): 41-44.
  • Bynum, T. (1982), “A Discipline in its Infancy,” The Dallas Morning News, January 12, 1982, D/1, D/6.
  • Bynum, T. (1999), “The Development of Computer Ethics as a Philosophical Field of Study,” The Australian Journal of Professional and Applied Ethics, 1(1): 1-29.
  • Bynum, T. (2000), “The Foundation of Computer Ethics,” Computers and Society, 30(2): 6-13.
  • Bynum, T. (2004), “Ethical Challenges to Citizens of the ‘Automatic Age’: Norbert Wiener on the Information Society,” Journal of Information, Communication and Ethics in Society, 2(2): 65-74.
  • Bynum, T. (2005), “Norbert Wiener's Vision: the Impact of the ‘Automatic Age’ on our Moral Lives,” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives, Albany, NY: SUNY Press, 11-25.
  • Bynum, T. (2006), “Flourishing Ethics,” Ethics and Information Technology, 8(4): 157-173.
  • Bynum, T. (2007), “Norbert Wiener and the Rise of Information Ethics,” in J. van den Hoven and J. Weckert (eds.), Moral Philosophy and Information Technology, Cambridge: Cambridge University Press.
  • Bynum, T. (2008), “Norbert Wiener and the Rise of Information Ethics,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 8-25.
  • Bynum, T. and P. Schubert (1997), “How to do Computer Ethics — A Case Study: The Electronic Mall Bodensee,” in J. van den Hoven (ed.), Computer Ethics—Philosophical Enquiry, Rotterdam: Erasmus University Press, 85-95.
  • Capurro, R. (2007a), “Information Ethics for and from Africa,” International Review of Information Ethics, 2007: 3-13.
  • Capurro, R. (2007b), “Intercultural Information Ethics,” in R. Capurro, J. Frühbauer and T. Hausmanninger (eds.), Localizing the Internet: Ethical Issues in Intercultural Perspective, (ICIE Series, Volume 4), Munich: Fink, 2007: 21-38.
  • Capurro, R. (2006), “Towards an Ontological Foundation for Information Ethics,” Ethics and Information Technology, 8(4): 157-186.
  • Capurro, R. (2004), “The German Debate on the Information Society,” The Journal of Information, Communication and Ethics in Society, 2 (Supplement): 17-18.
  • Cavalier, R. (ed.) (2005), The Impact of the Internet on Our Moral Lives, Albany, NY: SUNY Press.
  • Cocking, D. (2008), “Plural Selves and Relational Identity: Intimacy and Privacy Online,” In J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 123-41.
  • Conry, S. (1992), “Interview on Computer Science Accreditation,” in T. Bynum and J. Fodor (creators), Computer Ethics in the Computer Science Curriculum (a video program), Kingston, NY: Educational Media Resources, Inc.
  • Edgar, S. (1997), Morality and Machines: Perspectives on Computer Ethics, Sudbury, MA: Jones and Bartlett.
  • Elgesem, D. (1995), “Data Privacy and Legal Argumentation,” Communication and Cognition, 28(1): 91-114.
  • Elgesem, D. (1996), “Privacy, Respect for Persons, and Risk,” in C. Ess (ed.), Philosophical Perspectives on Computer-Mediated Communication, Albany: SUNY Press, 45-66.
  • Elgesem, D. (2002), “What is Special about the Ethical Problems in Internet Research?” Ethics and Information Technology, 4(3): 195-203.
  • Elgesem, D. (2008), “Information Technology Research Ethics,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 354-75.
  • Ess, C. (1996), “The Political Computer: Democracy, CMC, and Habermas,” in C. Ess (ed.), Philosophical Perspectives on Computer-Mediated Communication, Albany: SUNY Press, 197-230.
  • Ess, C. (ed.) (2001a), Culture, Technology, Communication: Towards an Intercultural Global Village, Albany: SUNY Press.
  • Ess, C. (2001b), “What's Culture got to do with it? Cultural Collisions in the Electronic Global Village,” in C. Ess (ed.), Culture, Technology, Communication: Towards an Intercultural Global Village, Albany: SUNY Press, 1-50.
  • Ess, C. (2004), “Computer-Mediated Communication and Human-Computer Interaction,” in L. Floridi (ed.), The Blackwell Guide to the Philosophy of Computing and Information, Oxford: Blackwell, 76-91.
  • Ess, C. (2005), “Moral Imperatives for Life in an Intercultural Global Village, ” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives, Albany: SUNY Press, 161-193.
  • Ess, C. (2008), “Culture and Global Networks: Hope for a Global Ethics?” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 195-225.
  • Fairweather, B. (1998), “No PAPA: Why Incomplete Codes of Ethics are Worse than None at all,” in G. Collste (ed.), Ethics and Information Technology, New Delhi: New Academic Publishers.
  • Flanagan, M., D. Howe, and H. Nissenbaum (2008), “Embodying Value in Technology: Theory and Practice,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 322-53.
  • Floridi, L. (1999), “Information Ethics: On the Theoretical Foundations of Computer Ethics”, Ethics and Information Technology, 1(1): 37-56.
  • Floridi, L. (ed.) (2004), The Blackwell Guide to the Philosophy of Computing and Information, Oxford: Blackwell.
  • Floridi, L. (2005b), “Internet Ethics: The Constructionist Values of Homo Poieticus,” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives, Albany: SUNY Press, 195-214.
  • Floridi, L. (2006a), “Information Ethics: Its Nature and Scope,” Computers and Society, 36(3): 21-36.
  • Floridi, L. (2006b), “Information Technologies and the Tragedy of the Good Will,” Ethics and Information Technology, 8(4): 253-262.
  • Floridi, L. (2008), “Information Ethics: Its Nature and Scope,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 40-65.
  • Floridi, L. and J. Sanders (2004), “The Foundationalist Debate in Computer Ethics,” in R. Spinello and H. Tavani (eds.), Readings in CyberEthics, 2nd edition, Sudbury, MA: Jones and Bartlett, 81-95.
  • Fodor, J. and T. Bynum (1992), What Is Computer Ethics? (a video program), Kingston, NY: Educational Media Resources, Inc.
  • Forester, T. and P. Morrison (1990), Computer Ethics: Cautionary Tales and Ethical Dilemmas in Computing, Cambridge, MA: MIT Press.
  • Fried, C. (1984), “Privacy,” in F. Schoeman (ed.), Philosophical Dimensions of Privacy, Cambridge: Cambridge University Press.
  • Friedman, B. (ed.) (1997), Human Values and the Design of Computer Technology, Cambridge: Cambridge University Press.
  • Friedman, B. and H. Nissenbaum (1996), “Bias in Computer Systems,” ACM Transactions on Information Systems, 14(3): 330-347.
  • Gert, B. (1998), Morality: Its Nature and Justification, Oxford: Oxford University Press.
  • Gert, B. (1999), “Common Morality and Computing,” Ethics and Information Technology, 1(1): 57-64.
  • Goldman, A. (2008), “The Social Epistemology of Blogging,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 111-22.
  • Gordon, W. (2008), “Moral Philosophy, Information Technology, and Copyright: The Grokster Case,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 270-300.
  • Gorniak-Kocikowska, K. (1996), “The Computer Revolution and the Problem of Global Ethics,” in T. Bynum and S. Rogerson (eds.), Global Information Ethics, Guildford, UK: Opragen Publications, 177-90.
  • Gorniak-Kocikowska, K. (2005) “From Computer Ethics to the Ethics of Global ICT Society,” in T. Bynum, G. Collste, and S. Rogerson (eds.), Proceedings of ETHICOMP2005 (CD-ROM), Center for Computing and Social Responsibility, Linköpings University.
  • Gorniak-Kocikowska, K. (2007), “ICT, Globalization and the Pursuit of Happiness: The Problem of Change,” in Proceedings of ETHICOMP2007, Tokyo: Meiji University Press.
  • Gotterbarn, D. (1991), “Computer Ethics: Responsibility Regained,” National Forum: The Phi Beta Kappa Journal, 71: 26-31.
  • Gotterbarn, D. (2001), “Informatics and Professional Responsibility,” Science and Engineering Ethics, 7(2): 221-30.
  • Gotterbarn, D. (2002) “Reducing Software Failures: Addressing the Ethical Risks of the Software Development Life Cycle,” Australian Journal of Information Systems, 9(2): 155-65.
  • Gotterbarn, D., K. Miller, and S. Rogerson (1997), “Software Engineering Code of Ethics,” Communications of the ACM, 40(11): 110-118.
  • Gotterbarn, D. and K. Miller (2004), “Computer Ethics in the Undergraduate Curriculum: Case Studies and the Joint Software Engineer's Code,” Journal of Computing Sciences in Colleges, 20(2): 156-167.
  • Gotterbarn, D. and S. Rogerson (2005), “Responsible Risk Analysis for Software Development: Creating the Software Development Impact Statement,” Communications of the Association for Information Systems, 15(40): 730-50.
  • Grodzinsky, F. (1997), “Computer Access for Students with Disabilities,” SIGSCE Bulletin, 29(1): 292-295; [Available online].
  • Grodzinsky, F. (1999), “The Practitioner from Within: Revisiting the Virtues,” Computers and Society, 29(2): 9-15.
  • Grodzinsky, F., K. Miller and M. Wolfe (2003), “Ethical Issues in Open Source Software,” Journal of Information, Communication and Ethics in Society, 1(4): 193-205.
  • Grodzinsky, F. and H. Tavani (2002), “Ethical Reflections on Cyberstalking,” Computers and Society, 32(1): 22-32.
  • Grodzinsky, F. and H. Tavani (2004), “Verizon vs. the RIAA: Implications for Privacy and Democracy,” in J. Herkert (ed.), Proceedings of ISTAS 2004: The International Symposium on Technology and Society, Los Alamitos, CA: IEEE Computer Society Press.
  • Himma, K. (2003), “The Relationship Between the Uniqueness of Computer Ethics and its Independence as a Discipline in Applied Ethics,” Ethics and Information Technology, 5(4): 225-237.
  • Himma, K. (2004), “The Moral Significance of the Interest in Information: Reflections on a Fundamental Right to Information,” Journal of Information, Communication, and Ethics in Society, 2(4): 191-202.
  • Himma, K. (2007), “Artificial Agency, Consciousness, and the Criteria for Moral Agency: What Properties Must an Artificial Agent Have to be a Moral Agent?” in Proceedings of ETHICOMP2007, Tokyo: Meiji University Press.
  • Himma, K. (2004), “There's Something about Mary: The Moral Value of Things qua Information Objects”, Ethics and Information Technology, 6(3): 145-159.
  • Himma, K. (2006), “Hacking as Politically Motivated Civil Disobedience: Is Hacktivism Morally Justified?” in K. Himma (ed.), Readings in Internet Security: Hacking, Counterhacking, and Society, Sudbury, MA: Jones and Bartlett.
  • Huff, C., J. Fleming, and J. Cooper (1991), “The Social Basis of Gender Differences in Human-computer Interaction,” in C. Martin (ed.), In Search of Gender-free Paradigms for Computer Science Education, Eugene, OR: ISTE Research Monographs, 19-32.
  • Huff, C. and T. Finholt (eds.) (1994), Social Issues in Computing: Putting Computers in Their Place, New York: McGraw-Hill.
  • Huff, C. and D. Martin (1995), “Computing Consequences: A Framework for Teaching Ethical Computing,” Communications of the ACM, 38(12): 75-84.
  • Huff, C. (2002), “Gender, Software Design, and Occupational Equity,” SIGCSE Bulletin: Inroads, 34: 112-115.
  • Huff, C. (2004), “Unintentional Power in the Design of Computing Systems.” in T. Bynum and S. Rogerson (eds.), Computer Ethics and Professional Responsibility, Oxford: Blackwell.
  • Huff, C., D. Johnson, and K. Miller (2003), “Virtual Harms and Real Responsibility,” Technology and Society Magazine (IEEE), 22(2): 12-19.
  • Introna, L. (1997), “Privacy and the Computer: Why We Need Privacy in the Information Society,” Metaphilosophy, 28(3): 259-275.
  • Introna, L. (2002), “On the (Im)Possibility of Ethics in a Mediated World,” Information and Organization, 12(2): 71-84.
  • Introna, L. (2005a), “Disclosive Ethics and Information Technology: Disclosing Facial Recognition Systems,” Ethics and Information Technology, 7(2): 75-86.
  • Introna, L. (2005b), “Phenomenological Approaches to Ethics and Information Technology,” The Stanford Encyclopedia of Philosophy (Fall 2005 Edition), Edward N. Zalta (ed.).
  • Introna, L. and H. Nissenbaum (2000), “Shaping the Web: Why the Politics of Search Engines Matters,” The Information Society, 16(3): 1-17.
  • Introna, L. and N. Pouloudi (2001), “Privacy in the Information Age: Stakeholders, Interests and Values.” in J. Sheth (ed.), Internet Marketing, Fort Worth, TX: Harcourt College Publishers, 373-388.
  • Johnson, D. (1985), Computer Ethics, First Edition, Englewood Cliffs, NJ: Prentice-Hall; Second Edition, Englewood Cliffs, NJ: Prentice-Hall, 1994; Third Edition Upper Saddle River, NJ: Prentice-Hall, 2001.
  • Johnson, D. (1997a), “Ethics Online,” Communications of the ACM, 40(1): 60-65.
  • Johnson, D. (1997b), “Is the Global Information Infrastructure a Democratic Technology?” Computers and Society, 27(4): 20-26.
  • Johnson, D. (2004), “Computer Ethics,” in L. Floridi (ed.), The Blackwell Guide to the Philosophy of Computing and Information, Oxford: Blackwell, 65-75.
  • Johnson, D. and H. Nissenbaum (eds.) (1995), Computing, Ethics & Social Values, Englewood Cliffs, NJ: Prentice Hall.
  • Johnson, D. and T. Powers (2008), “Computers as Surrogate Agents,” in J. van den Hoven and J. Weckert, (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 251-69.
  • Kocikowski, A. (1996), “Geography and Computer Ethics: An Eastern European Perspective,” in T. Bynum and S. Rogerson (eds.), Science and Engineering Ethics (Special Issue: Global Information Ethics), 2(2): 201-10.
  • Maner, W. (1980), Starter Kit in Computer Ethics, Hyde Park, NY: Helvetia Press and the National Information and Resource Center for Teaching Philosophy.
  • Maner, W. (1996), “Unique Ethical Problems in Information Technology,” in T. Bynum and S. Rogerson (eds.), Science and Engineering Ethics (Special Issue: Global Information Ethics), 2(2): 137-154.
  • Martin, C. and D. Martin (1990), “Professional Codes of Conduct and Computer Ethics Education,” Social Science Computer Review, 8(1): 96-108.
  • Martin, C., C. Huff, D. Gotterbarn, K. Miller, et al. (1996), “A Framework for Implementing and Teaching the Social and Ethical Impact of Computing,” Education and Information Technologies, 1(2): 101-122.
  • Martin, C., C. Huff, D. Gotterbarn, and K. Miller (1996), “Implementing a Tenth Strand in the Computer Science Curriculum” (Second Report of the Impact CS Steering Committee), Communications of the ACM, 39(12): 75-84.
  • Marx, G. (2001), “Identity and Anonymity: Some Conceptual Distinctions and Issues for Research,” in J. Caplan and J. Torpey (eds.), Documenting Individual Identity, Princeton: Princeton University Press.
  • Mather, K. (2005), “The Theoretical Foundation of Computer Ethics: Stewardship of the Information Environment,” in Contemporary Issues in Governance (Proceedings of GovNet Annual Conference, Melbourne, Australia, 28-30 November, 2005), Melbourne: Monash University.
  • Matthews, S. (2008), “Identity and Information Technology.” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 142-60.
  • Miller, A. (1971), The Assault on Privacy: Computers, Data Banks, and Dossiers, Ann Arbor: University of Michigan Press.
  • Miller, K. (2005), “Web standards: Why So Many Stray from the Narrow Path,” Science and Engineering Ethics, 11(3): 477-479.
  • Miller, K. and D. Larson (2005a), “Agile Methods and Computer Ethics: Raising the Level of Discourse about Technological Choices,” IEEE Technology and Society, 24(4): 36-43.
  • Miller, K. and D. Larson (2005b), “Angels and Artifacts: Moral Agents in the Age of Computers and Networks,” Journal of Information, Communication & Ethics in Society, 3(3): 151-157.
  • Miller, S. (2008), “Collective Responsibility and Information and Communication Technology,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 226-50.
  • Moor, J. (1979), “Are there Decisions Computers Should Never Make?” Nature and System, 1: 217-29.
  • Moor, J. (1985) “What Is Computer Ethics?” Metaphilosophy, 16(4): 266-75.
  • Moor, J. (1996), “Reason, Relativity and Responsibility in Computer Ethics,” in Computers and Society, 28(1) (1998): 14-21; originally a keynote address at ETHICOMP96 in Madrid, Spain, 1996.
  • Moor, J. (1997), “Towards a Theory of Privacy in the Information Age,” Computers and Society, 27(3): 27-32.
  • Moor, J. (1999), “Just Consequentialism and Computing,” Ethics and Information Technology, 1(1): 65-69.
  • Moor, J. (2001), “The Future of Computer Ethics: You Ain't Seen Nothin' Yet,” Ethics and Information Technology, 3(2): 89-91.
  • Moor, J. (2005), “Should We Let Computers Get under Our Skin?” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives, Albany: SUNY Press, 121-138.
  • Moor, J. (2006), “The Nature, Importance, and Difficulty of Machine Ethics,” IEEE Intelligent Systems, 21(4): 18-21.
  • Moor, J. (2008) “Why We Need Better Ethics for Emerging Technologies,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 26-39.
  • Nissenbaum, H. (1995), “Should I Copy My Neighbor's Software?” in D. Johnson and H. Nissenbaum (eds), Computers, Ethics, and Social Responsibility, Englewood Cliffs, NJ: Prentice Hall.
  • Nissenbaum, H. (1997), “Can We Protect Privacy in Public?” in Proceedings of Computer Ethics—Philosophical Enquiry 97 (CEPE97), Rotterdam: Erasmus University Press, 191-204; reprinted Nissenbaum 1998a.
  • Nissenbaum, H. (1998a), “Protecting Privacy in an Information Age: The Problem of Privacy in Public,” Law and Philosophy, 17: 559-596.
  • Nissenbaum, H. (1998b), “Values in the Design of Computer Systems,” Computers in Society, 1998: 38-39.
  • Nissenbaum, H. (1999), “The Meaning of Anonymity in an Information Age,” The Information Society, 15: 141-144.
  • Nissenbaum, H. (2005a), “Hackers and the Contested Ontology of Cyberspace,” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives, Albany: SUNY Press, 139-160.
  • Nissenbaum, H. (2005b), “Where Computer Security Meets National Security,” Ethics and Information Technology, 7(2): 61-73.
  • Parker, D. (1968), “Rules of Ethics in Information Processing,” Communications of the ACM, 11: 198-201.
  • Parker, D. (1979), Ethical Conflicts in Computer Science and Technology. Arlington, VA: AFIPS Press.
  • Parker, D., S. Swope and B. Baker (1990), Ethical Conflicts in Information & Computer Science, Technology & Business, Wellesley, MA: QED Information Sciences.
  • Pecorino, P. and W. Maner (1985), “A Proposal for a Course on Computer Ethics,” Metaphilosophy, 16(4): 327-337.
  • Perrolle, J. (1987), Computers and Social Change: Information, Property, and Power, Belmont, CA: Wadsworth.
  • Pettit, P. (2008), “Trust, Reliance, and the Internet,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 161-74.
  • Rogerson, S. (1996), “The Ethics of Computing: The First and Second Generations,” The UK Business Ethics Network News, 6: 1-4.
  • Rogerson, S. (1998), “Computer and Information Ethics,” in R. Chadwick (ed.), Encyclopedia of Applied Ethics, San Diego, CA: Academic Press, 563-570.
  • Rogerson, S. (2004), “The Ethics of Software Development Project Management,” in T. Bynum and S. Rogerson (eds.), Computer Ethics and Professional Responsibility, Oxford: Blackwell, 119-128.
  • Rogerson, S. and T. Bynum (1995), “Cyberspace: The Ethical Frontier,” The Times Higher Education Supplement (The London Times), No. 1179, June 9, 1995, iv.
  • Rogerson, S., B. Fairweather, and M. Prior (2002), “The Ethical Attitudes of Information Systems Professionals: Outcomes of an Initial Survey,” Telematics and Informatics, 19: 21-36.
  • Rogerson, S. and D. Gotterbarn (1998), “The Ethics of Software Project Management,” in G. Collste (ed.), Ethics and Information Technology, New Delhi: New Academic Publishers, 137-154.
  • Sojka, J. (1996), “Business Ethics and Computer Ethics: The View from Poland,” in T. Bynum and S. Rogerson (eds.), Global Information Ethics, Guildford, UK: Opragen Publications, 191-200.
  • Spafford, E., K. Heaphy, and D. Ferbrache (eds.) (1989), Computer Viruses: Dealing with Electronic Vandalism and Programmed Threats, Arlington, VA: ADAPSO (now ITAA).
  • Spafford, E. (1992), “Are Computer Hacker Break-Ins Ethical?” Journal of Systems and Software, 17: 41-47.
  • Spinello, R. (1997), Case Studies in Information and Computer Ethics, Upper Saddle River, NJ: Prentice-Hall.
  • Spinello, R. (2000), CyberEthics: Morality and Law in Cyberspace, Sudbury, MA: Jones and Bartlett.
  • Spinello, R. and H. Tavani (2001a), “The Internet, Ethical Values, and Conceptual Frameworks: An Introduction to Cyberethics,” Computers and Society, 31(2): 5-7.
  • Spinello, R. and H. Tavani (eds.) (2001b), Readings in CyberEthics, Sudbury, MA: Jones and Bartlett; Second Edition, 2004.
  • Spinello, R. and H. Tavani (eds.) (2005), Intellectual Property Rights in a Networked World: Theory and Practice, Hershey, PA: Idea Group/Information Science Publishing.
  • Stahl, B. (2004a), “Information, Ethics and Computers: The Problem of Autonomous Moral Agents,” Minds and Machines, 14: 67-83.
  • Stahl, B. (2004b), Responsible Management of Information Systems, Hershey, PA: Idea Group/Information Science Publishing.
  • Stahl, B. (2005), “The Ethical Problem of Framing E-Government in Terms of E-Commerce,” Electronic Journal of E-Government, 3(2): 77-86.
  • Stahl, B. (2006), “Responsible Computers? A Case for Ascribing Quasi-responsibility to Computers Independent of Personhood or Agency,” Ethics and Information Technology, 8(4): 205-213.
  • Sunstein, C. (2008), “Democracy and the Internet,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 93-110.
  • Tavani, H. (ed.) (1996), Computing, Ethics, and Social Responsibility: A Bibliography, Palo Alto, CA: Computer Professionals for Social Responsibility Press.
  • Tavani, H. (1999a), “Privacy and the Internet,” Proceedings of the Fourth Annual Ethics and Technology Conference, Chestnut Hill, MA: Boston College Press, 114-25.
  • Tavani, H. (1999b), “Privacy On-Line,” Computers and Society, 29(4): 11-19.
  • Tavani, H. (2002), “The Uniqueness Debate in Computer Ethics: What Exactly is at Issue and Why Does it Matter?” Ethics and Information Technology, 4(1): 37-54.
  • Tavani, H. (2004), Ethics and Technology: Ethical Issues in an Age of Information and Communication Technology, Hoboken, NJ: John Wiley and Sons; Second Edition, 2007.
  • Tavani, H. (2005), “The Impact of the Internet on our Moral Condition: Do We Need a New Framework of Ethics?” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives, Albany: SUNY Press, 215-237.
  • Tavani, H. (2006), Ethics, Computing, and Genomics, Sudbury, MA: Jones and Bartlett.
  • Tavani, H. and J. Moor (2001), “Privacy Protection, Control of Information, and Privacy-Enhancing Technologies,” Computers and Society, 31(1): 6-11.
  • Turkle, S. (1984), The Second Self: Computers and the Human Spirit, New York: Simon & Schuster.
  • Turner, A.J. (1991), “Summary of the ACM/IEEE-CS Joint Curriculum Task Force Report: Computing Curricula, 1991,” Communications of the ACM, 34(6): 69-84.
  • Turner, E. (2006), “Teaching Gender-Inclusive Computer Ethics, ” in I. Trauth (ed.), Encyclopedia of Gender and Information Technology: Exploring the Contributions, Challenges, Issues and Experiences of Women in Information Technology, Hershey, PA: Idea Group/Information Science Publishing, 1142-1147.
  • van den Hoven, J. (1997a), “Computer Ethics and Moral Methodology,” Metaphilosophy, 28(3): 234-48.
  • van den Hoven, J. (1997b), “Privacy and the Varieties of Informational Wrongdoing,” Computers and Society, 27(3): 33-37.
  • van den Hoven, J. (1998), “Ethics, Social Epistemics, Electronic Communication and Scientific Research,” European Review, 7(3): 341-349.
  • van den Hoven, J. (2008a), “Information Technology, Privacy, and the Protection of Personal Data,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 301-321.
  • van den Hoven, J. and E. Rooksby (2008), “Distributive Justice and the Value of Information: A (Broadly) Rawlsian Approach,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 376-96.
  • van den Hoven, J. and J. Weckert (2008), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press.
  • Volkman, R. (2003), “Privacy as Life, Liberty, Property,” Ethics and Information Technology, 5(4): 199-210.
  • Volkman, R. (2005), “Dynamic Traditions: Why Globalization Does Not Mean Homogenization,” in Proceedings of ETHICOMP2005 (CD-ROM), Center for Computing and Social Responsibility, Linköpings University.
  • Volkman, R. (2007), “The Good Computer Professional Does Not Cheat at Cards,” in Proceedings of ETHICOMP2007, Tokyo: Meiji University Press.
  • Weckert, J. (2002), “Lilliputian Computer Ethics,” Metaphilosophy, 33(3): 366-375.
  • Weckert, J. (2005), “Trust in Cyberspace,” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives, Albany: SUNY Press, 95-117.
  • Weckert, J. and D. Adeney (1997), Computer and Information Ethics, Westport, CT: Greenwood Press.
  • Weizenbaum, J. (1976), Computer Power and Human Reason: From Judgment to Calculation, San Francisco, CA: Freeman.
  • Westin, A. (1967), Privacy and Freedom, New York: Atheneum.
  • Wiener, N. (1948), Cybernetics: or Control and Communication in the Animal and the Machine, New York: Technology Press/John Wiley & Sons.
  • Wiener, N. (1950), The Human Use of Human Beings: Cybernetics and Society, Boston: Houghton Mifflin; Second Edition Revised, New York, NY: Doubleday Anchor 1954.
  • Wiener, N. (1964), God & Golem, Inc.: A Comment on Certain Points Where Cybernetics Impinges on Religion, Cambridge, MA: MIT Press.
