Moral Rights, Moral Responsibility and the Contemporary Failure of Moral Knowledge


This talk was given at the first annual Human Rights conference hosted by the IPFW Institute for Human Rights at Purdue University, December 10, 2004.

THESIS: The prospects for human rights observance in real life rise and fall with the prospects of moral knowledge, at present quite dim.


Human rights are in desperate straits around the world. They are widely proclaimed, but brutally violated on a mind-numbing scale. The basic outlook which I wish to represent in this talk is that moral rights depend, for their effective implementation, upon a certain condition in human community. If the community is not one of a high level of moral substance (that is, not predominantly one of morally good people, both in official positions and throughout the population), then moral rights will, at best, degenerate into mere legal rights; and even then they will be continually subject to failure in the context of need, because the individuals involved in such contexts do not act to support them. Those legal rights—where they exist—will also be, at most, honored in the letter, and not in the spirit of human dignity, as Kant and those of similar moral outlook would understand human dignity.

When this is the case, those who have legal rights (blacks, women, prisoners of war, homosexuals) may be able to bring governmental processes and forces to bear to secure themselves in certain (obviously important) respects, and that is no small thing. But even that is not a given, and in any case they will not achieve the type of acceptance and endorsement that persons of genuinely good moral will and character extend to others in a moral community. This will be even more true of people outside of ethnic and national groups, and especially when hostilities prevail between such groups.

Professor Clark Butler has written:

In large impersonal societies, individuals steeped in duty consciousness often lack a sufficient knowledge of others and their claims to guarantee protection of their rights even when they would wish to do so. However conscientious individuals are, they are often unconscious of the secondary consequences of actions. Even continuous duty consciousness is thus compatible with periodic justified eruptions of rights consciousness. Yet a significant difference exists between the rights consciousness of individuals who must arouse a non-existent sense of duty and that of individuals who can call on a pre-established sense of duty in others.1

This is a very penetrating observation about the unfortunate human condition. The lack of “a pre-established sense of duty in others” does indeed make “periodic justified eruptions of rights consciousness” inevitable. But I would add that more than such a sense of duty in others is required for a proper functioning of rights in human society. Conscious dutifulness to rights is never enough, and not just for the reasons Professor Butler points out. Rather, such a dutifulness can succeed only as a part of a moral character of pro-active concern for human goods. Beyond such a sense of duty lies the sense of moral identity that each person carries as a marginal presence in all their acts and activities. That is, the sense of what makes me a good person, a person worthy of approval, inclusion and support from normal human beings around me. This sense of moral worth contains a presumption of the reality of moral worth, and a presumption of shared knowledge of that reality. When the sense of moral reality and knowledge is lacking or mistaken—e.g., takes there to be no such thing as moral reality, or takes moral worth to consist in ethnic identity, or in success at pursuing one’s own interest above all, etc.—then the sense of moral identity of the individual (and the group) will lead to the denial or suppression of the human goods which it is the primary function of morality to protect and advance.

Among human goods, of course, rights themselves stand very high. In fact, they are, if you wish, a kind of meta-good, for their point is always to assure the accessibility of other goods. Their point is never just themselves, never just having rights, but a kind of life in which respect and active support for human dignity and well-being is paramount.

Now, what I have called “the sense of moral identity,” which each person carries in all their acts and activities, rests upon a presumption of a shared knowledge of life and of what makes one morally acceptable or praiseworthy or not. However fragmentary or misguided the presumed knowledge may be, it is, I think, impossible for a normal human being (I leave out of account sociopathic and extremely traumatized individuals) to conduct their life except upon the assumption that there is shared or sharable knowledge of who is a morally good person and who is not—and, by extension, of what is right and wrong, of what is morally obligatory or praiseworthy or not, and so forth. Thus, the normal human being accepts the necessity and the possibility of moral guidance and of learning about such matters, and the possibility of being wrong with regard to them. That is, of holding false views regarding them.

*

Throughout the history of ethical theorizing in the Western world, well up into the 20th Century, every important thinker has agreed with that. What most strikingly characterizes 20th Century ethical theorizing is the emergence of Non-Cognitivism as a serious contender in the field of moral understanding. Far from being a passing phase, as is often assumed currently, Non-Cognitivism (now usually in the guise of one “Constructionism” or another) has entered the life-blood of Western culture. As a result, there is now no recognized, systematic body of moral teaching that can be presented as moral knowledge by the institutions of Western society: chiefly, by the universities, and only slightly less so by the churches or religious institutions—and certainly not by law and government. This fact is the result of what I refer to as “the disappearance of moral knowledge in the 20th Century.” If one wishes to see the process through which this came about from the viewpoint of the universities, Julie Reuben’s book, The Making of the Modern University, gives the institutional history.2 It was only during the mid- and late 20th Century that the University became the center of cultural authority and set the societal standard of what counts as knowledge and what does not. Currently, by the standard it sets, moral understanding and judgment do not count as knowledge. This is simply the case, though very few people seem to recognize it.

But the university in the 20th Century was in this respect informed and controlled by long-range developments in ethical thought—not by these alone, of course, but essentially by them. Those developments laid the foundation for the emergence and continuing dominance of Non-Cognitivism in our academic culture: indeed, of a Non-Cognitivist culture generally. I want to briefly survey those developments to show how we got where, I take it, we stand today. I am not going to try to convince you here that there has been no recovery from Non-Cognitivism. But I believe that a thesis to that effect can be sustained by a careful examination of the work of writers from Hare to Rawls, Williams, MacIntyre and Gibbard.

For purposes of this discussion I shall use the work of G. E. Moore as a dividing line. Although there is an increasing interest today in the immediate predecessors of Moore, such as T. H. Green and F. H. Bradley, it is still true, as it has been for many decades, that discussions of the history of ethics, proceeding backward, stop at Moore, and only resume with more distant figures such as Mill and Kant. This, I think, is because there really was a profound transformation that occurred with Moore, but it was one which had little to do with his famous Intuitionism or the other usual topics of ethical theory in the 20th Century. Rather, it had to do with what is to be regarded as the primary subject matter of ethical theorizing.

*

In the 1880’s and 1890’s, in the United States and Great Britain at least, a broad consensus about the moral conduct of life prevailed, and was regarded as a systematic body of knowledge. It was a consensus that was thought to be rationally grounded in moral theorizing of the sort commonly done in the universities at that time. This consensus was incorporated in a number of widely used textbooks in ethics, prominent among which were John Dewey’s Outlines of a Critical Theory of Ethics3 (and, later on, Dewey and Tufts’ Ethics4), J. H. Muirhead’s The Elements of Ethics5, and J. S. Mackenzie’s A Manual of Ethics6, to mention only three of several textbooks that went through repeated revisions and editions in widespread use.

The main source, by far, for this consensus was the personality and lectures of T. H. Green, forcefully expressed in his short teaching career at Oxford and in his posthumously published Prolegomena to Ethics. I shall refer to this body of university teaching simply as “the pre-Moore synthesis,” because, on the theoretical side, it was primarily Moore’s work that resulted in that consensus evaporating, with nothing explicitly replacing it in the academic (and later the cultural) context.

*

Looking back at the pre-Moore synthesis in ethical theorizing, the first point that stands out is what it took to be the central subject matter of ethical inquiry. The favorite term for that subject matter among these writers was “conduct,” by which voluntary action, or action with an end in view, was meant. (Sometimes—and especially later on in this period—conduct was approached by way of the moral judgment. On this approach, one first identified and examined the characteristically moral judgments, and then moved on to an examination of what those judgments are about—which was found to be primarily conduct, or action with an end in view. Then the analysis was turned upon conduct to see what it is and how it divides into “good” and “bad” conduct, and what that means. In other cases one might speak, not of the judgment, but of the “idea” of obligation, etc.)

As for conduct itself, it was regarded as a type of complex and ‘organic’ whole.7 John Dewey, for example, said: “Conduct implies more than something taking place; it implies purpose, motive, intention; that the agent knows what he is about, that he has something which he is aiming at.” (Outlines, p. 242) And, on this broad understanding, conduct is not separable from character. Conduct arises out of the whole person. “Character and conduct are, morally, the same thing, looked at first inwardly and then outwardly.” (p. 246) Thus, “To say that a man’s conduct is good, unless it is the manifestation of a good character, is to pass a judgment that is self-contradictory.” (p. 246)

This view of ethical reality was widely assumed among pre-Moore teachers and writers. They were, generally, people who believed life to be an organic whole, where the components of conduct were not atomistic units, but thoroughly inter-penetrated one another, making the “meaning” or nature of each component dependent upon that of all the others. So the motive and intention, feelings or sentiments, consequences and personal character that go into an action which is conduct are not things that can be separately considered in ethical analysis. Considered together, however, they allow us to understand and know—indeed, to teach—what human beings ought to be and to do.

Nevertheless, it is the will that stands out in this literature as primary for moral goodness or badness. Mackenzie remarks that “the good will ... supremely good and ... the ultimate object approved by the moral judgment.” (Manual, p. 129) But, of course, “A good will cannot be there without good action,” he says, “and there can be no good action without a good will.” (p. 129)

T. H. Green had earlier held that the distinction between the good and bad will “must lie at the root of every system of ethics.” On his view, “The statement that the distinction between good and bad will must lie at the basis of any system of ethics, and the further statement that this distinction itself must depend on the nature of the objects willed, would in some sense or other be accepted by all recognized ‘schools’ of moralists, but they would be accepted in very different senses.”8 The good will is certainly thought of by these writers as a settled, coherent body of dispositions to act in ways that promote the goods influenced by the action. As James Seth, another luminary in the pre-Moore consensus, remarked, “Conduct, therefore, points to character, or settled habit of will. But will is here no mere faculty, it is a man’s ‘proper self’. The will is the self in action; and in order to act, the self must also feel and know.”9

*

The second point that stands out in the pre-Moore synthesis is that it assumed the substance of the moral life, centered on conduct, will and character, to be an object (subject) of knowledge. (Here, let us say that one has knowledge of a certain subject matter if he is capable of—or, in the occurrent sense of “know,” if he actually is—representing that subject matter as it is, on an appropriate basis of thought and experience.) Thus, all of the authors concerned, without exception, speak of “the Science of Ethics” as the field of inquiry in which they are engaged, and on the basis of which they naturally give fairly specific directions concerning what people ought to do and to be. That is a language and a practice which you can hardly imagine anyone in the field of ethical theory using today. But they used it quite confidently—even without a thought. This followed from what they took the subject matter of ethical theorizing to be, plus the assumption that that subject matter is open to examination by observation, abstraction and theorization. It is the failure of this assumption about the accessibility of will, character, etc. to knowledge that, more than any other single thing, accounts for the current situation with regard to moral knowledge and authority, described above as “the disappearance of moral knowledge.”

*

The third point about the pre-Moore synthesis that must be noted here is that normative, first level moral judgments were regarded as a natural part of moral theory. That is, given the appropriate inquiry into and understanding of the good person or character, and of the good or right action (“conduct”), it was thought that normative judgments of specific application to persons and actions were not only appropriate, but were required as a natural part of the work of the ethical theorist. Ethical theorists thought it to be a natural part of their work to say, to teach, that certain lines of action were right or wrong, and that certain (types of) people were of good or bad—even “evil”—character. They thought that “moral guidance” through instruction and personal influence was a proper part of their work, for which they were responsible, and that it should be expressed “in class,” when appropriate and appropriately. The division between what later came to be known as “meta-ethics” and practical or normative ethics, as that distinction comes into play post-Moore, would have been something inconceivable to them. Contrary to Professors of ethics nowadays, they all would have thought that they had moral knowledge that their students did not have, and had a ‘moral authority’ based thereon.

The effect of this was that they expected their teaching to strongly affect the actions of their students, and by many reports it did. R. G. Collingwood said, in his Autobiography, that “The School of Green sent out into public life a stream of ex-pupils who carried with them the conviction that philosophy and particularly the philosophy they had learned at Oxford was an important thing and that their vocation was to put it into practice.... Through this effect on the minds of its pupils, the philosophy of Green’s school might be found, from about 1880 to about 1910, penetrating and fertilizing every part of the national life.”10

In America, much of the moral drive behind the “Progressive Movement,” from the 1890’s on to the 1930’s and later, came from the teachings of John Dewey (and like-minded university and professional people) about moral reality, moral knowledge, and the moral life. This was the last time there existed in America a generally shared understanding of moral worth that could serve as the basis of a public program of legal and social reform. (Note how far the work of John Rawls, for example, falls short of any such real effect.)

Dewey at mid-career had this to say about moral worth: “We have reached the conclusion that disposition as manifest in endeavor is the seat of moral worth, and that this worth itself consists in a readiness to regard the general happiness—even against contrary promptings of personal comfort and gain.” (Ethics, p. 364) The words are Dewey’s, but he would have been first to tell you that they fairly accurately express the outcome of a remarkably rich period of ethical reflection, running from T. H. Green to Dewey’s middle years. They mark the end of that period, however, and the influence of G. E. Moore and “the analysis of ethical concepts” was to change the subject matter of ethical theory away from the moral life itself, and would institute the period of ethical nihilism—“Non-Cognitivism” or, at least, agnosticism—that continues up to today.

In After Virtue Alasdair MacIntyre, who has long been deeply concerned with the state of affairs I call the disappearance of moral knowledge, perceptively comments: “We have not yet fully understood the claims of any moral philosophy until we have spelled out what its social embodiment would be.... Since Moore the dominant narrow conception of moral philosophy has ensured that the moral philosophers could ignore this task.”11 If that is true, we have not yet fully understood the claims of the post-Moore moral philosophers.

*

Now the pre-Moore attitude toward the relevance of moral theory and teaching to responsible moral instruction and guidance, and to the formation of character and society, was the received view from Socrates through the pre-Moore thinkers. It is hard to find any serious exceptions. I know of none. I doubt anyone will seriously question this with respect to Classical and Medieval thinkers. But the assumed connection between moral theory and moral guidance is strong and vital right up through the pre-Moore period. David Hume, in the mid-1700’s, remarks that “The end of all moral speculations is to teach us our duty; and, by proper representations of the deformity of vice and the beauty of virtue, beget correspondent habits, and engage us to avoid the one, and embrace the other.... What is honourable, what is fair, what is becoming, what is noble, what is generous, takes possession of the heart, and animates us to embrace it and maintain it.”12 For all the professed admiration of Hume currently, who today would follow him in this? One wants to keep in mind, however, that it was precisely such a conviction about moral reality and life that animated earlier discussions of rights.

Henry Sidgwick, toward the end of the 1800’s, said: “The moralist has a practical aim: We desire knowledge of right conduct in order to act on it.”13

An older contemporary of Sidgwick, Matthew Arnold, in the opening paragraph of his essay “Marcus Aurelius,” in Essays in Criticism, Vol. I, expressed the view that was the common cultural outlook at the time: “The object of systems of morality is to take possession of human life, to save it from being abandoned to passion or allowed to drift at hazard, to give it happiness by establishing it in the practice of virtue; and this object they seek to attain by presenting to human life fixed principles of action, fixed rules of conduct. In its uninspired as well as in its inspired moments, in its days of languor or gloom as well as in its days of sunshine and energy, human life has thus always a clue to follow, and may always be making way toward its goal.”14

*

The obvious if not pressing question is: What happened? In particular, was it actually discovered that there is no possible body of knowledge about moral distinctions and relations upon the basis of which one person might give moral instruction or guidance to another, and moral institutions of right and law be maintained? I cannot believe it was. Of course that whole group of mid-20th Century theorists known as Non-Cognitivists (“Emotivists”) claimed to discover just that. They had a powerful impact upon ethical theory as professionally practiced, and one from which it has not yet recovered to any significant degree. But I suspect that they and the situation they created are more a symptom of deeper-lying causes than a primary cause in their own right.

Certainly they (the Non-Cognitivists) did not discover there was no moral knowledge. Even if there is none, they didn’t discover it. Rather, they were engaged in a project (now long recognized as failing) of redefining knowledge, and redefining knowledge in such a way that moral distinctions could not be “known” in their new sense. A thin triumph at best, from a rational point of view. But they claimed to have discovered that knowledge was not what it had long been taken to be, and that, among other astonishing results, there could, in the nature of the case, be no knowledge of the domain which pre-Moore ethical theory had taken as its subject matter. What had passed as moral knowledge (for them, now, “moral language”—a not insignificant change of subject) would have to be re-interpreted as something else altogether. In the shadow of the “Linguistic Turn” in philosophy, such a re-interpretation is exactly what the Non-Cognitivists (Ayer, Stevenson, etc.—and later R. M. Hare and the “multifunctionalists”) offered. It is important to notice that that effort at re-interpretation has continued unabated up to the present, still with nothing in the way of an established or promising result on the horizon. But this failure has not led people to question the fundamental change—the turn to “concepts” and the “logic of moral discourse”—which was instituted at Moore. Rather, they just work all the harder in the direction that took its rise from Moore. Surely something deep is driving them.

*

To understand what actually happened to bring about the shift from a pre- to a post-Moore understanding of moral knowledge and of the practice of moral theory and guidance, one must look, more broadly, to the universities of the late 1800’s and early 1900’s. The attempt by the Non-Cognitivists to redefine knowledge was part of a much larger social process that can be aptly called “The Secularization of the Academy.” This process marked a shift that certainly was historically necessary, but it also was one that had many inessential and unforeseen consequences.

A part of what was involved comes out in a statement by Professor John Lyons, made in 1998, on how he understands his role as a teacher in the university to exclude moral instruction: “I do not claim to be morally superior to my students, to have a source of moral knowledge that they do not have, or to convince them of my authority as a teacher of ethics.”15 Now this statement raises a number of questions. Why would one think that to give moral guidance is to presume one is morally superior? And why think that to have moral knowledge would require that one have a “special source” that others (who don’t have the knowledge) do not have, making one something special—and then, perhaps, morally superior? And why think giving moral guidance involves trying to get people to believe and act on my authority?

A part of the irony here is that Lyons, a Professor of French, is clearly teaching that it would be morally odious for him or others to do such things as he mentions. There is no doubt that he is prepared to say and to teach this in class, and that it is part of the moral guidance he was given by his teachers and cohorts in his socialization as an academic. He is giving moral guidance to one and all in this very statement in which he is explaining why he does not give moral guidance to students. No doubt the things which Lyons here morally reproaches have been done in the past, and in ways deserving of his reproach. Inappropriate and even immoral moralizing by teachers has been done and is now being done (as Lyons acknowledges, p. 155); and no doubt there is a special danger of this occurring around social institutions, such as universities. But to avoid these dangers it is not necessary (Is it even possible?) to deny the existence or possession of moral knowledge, or to deny that it is possible or morally permissible—or even morally required—to pass such knowledge on in appropriate ways when that is suited to the academic situation. Clearly, in making his remarks Lyons presupposes moral knowledge (He knows, no doubt, that it is morally wrong to claim to be morally superior to students, etc.), and that it is right to pass this knowledge on. And I venture he would feel free, or even obliged, to make his statements here quoted in the classroom, expecting his students to believe them. But what he is doing is all a part of what was involved in the secularization of the academy. The professor had to get out of the business of moral guidance, which had been so closely involved with religion and religious authority. That will be easy if there is no moral knowledge.

*

Now secularization, with its essential as well as inessential accompaniments, went hand-in-hand with the professionalization of the academic areas. This might be viewed as the positive side of the divorce from religious institutions. The maintenance of standards in a social enterprise such as the university requires appropriate social organizations. Such maintenance is one mark of a profession, and, in the past, it has been necessary for the purposes of guaranteeing the expertise of the individual practitioner and the responsibility of the profession to society at large.

But professionalization requires careful identification of a subject matter so that its boundaries may be respected. Philosophy, and especially Ethical Theory, had long been concerned with the understanding and guidance of life as a whole. But Philosophy after 1900 resolutely turns away from that, as one part of secularization, and increasingly does so as its professionalization develops. This required the identification of a different and unique subject matter for Philosophy. That subject matter turned out to be ‘concepts’, and Philosophy dutifully turned out to be ‘logic’. A new subject matter and a new method are then in hand—if we can only find out what they are. Verbally at least, “Logic, Language and Meaning” are the center of focus in what was promised to be a “Revolution in Philosophy.”

Now it should be noted that, in fairly close correspondence with all this, Psychology was trying to become scientific. (Actually, becoming scientific was high on the agenda for Philosophy as well, and was the main reason it ‘became’ logic.) In Psychology one must forget about the “soul.” (See Edward Reed’s marvelous book, From Soul to Mind.16) Becoming scientific meant experimental psychology: laboratories and only what could be studied in them. Then Behaviorism (Watson), or Deep Theory (Freud and others), and most recently brain theory mixed in with computers. What must be noted here for our concerns is that none of these directions of Psychology dealt with, or allowed one to deal with, the traditional subject matter of ethical theory, though many efforts were made to include that subject matter: “conduct,” will and character.17

But it needs to be said once again with emphasis that, in all of these developments in Philosophy and Psychology, and in the fields of professionalized learning in general, no one discovered that we cannot know, in the ways routinely practiced by pre-Moore ethical theorists, the nature of rational deliberation and choice, of “conduct,” will and character, and of the primary moral distinctions embedded therein. But, regardless of that, choice, will and character disappear from the field of acceptable knowledge—and especially as they were thought to be known by observation (of oneself and others), conceptualization or abstraction, and theoretical organization—the practice of the pre-Moore consensus.

*

What is the effect of all this on the status of rights and right claims in guiding human behavior, collectively and individually?

Rights claims were always the most resilient segment of moral discourse in the face of Non-Cognitivism. Even in the heyday of Emotivism, many never surrendered the view that such claims stand in logical relations to other statements. They simply could not accept the view that rights claims were inherently non-rational. “I have a right to X” was thought of as logically entailing “You have an obligation not to interfere.” And as logical relations were slowly pried loose from truth, in the progression of ethical theorizing in the mid 1900’s, rights claims became even more acceptably “cognitive.” Overall, however, the reason why rights talk survived the Emotivist onslaught, to the extent it did, was not because of some insight into the objective, truth-bearing status of such claims, but because the social and political situation would not tolerate the idea that opposition to the draft, racial segregation and economic deprivation were simply matters of taste or feeling. In these matters the objective reality of right and wrong, justice and injustice, good and evil, and the assurance of knowledge thereof, were just undeniable to most citizens, including academics. Rights and justice were too vital to life to dismiss to the realm of the Non-Cognitive.

Unfortunately, however, that did not dispel the cloud over moral reality and knowledge which was cast by their exclusion from the domains of science and by the associated Non-Cognitivist offensive, and which could not but affect the force of claims to moral rights. Legal rights are, of course, another matter—though with problems of their own—except, of course, insofar as they are thought to depend upon a moral foundation. Legal rights are the result of political processes and are sanctioned by government action. They may be either moral or immoral. As important as they are, the moral quality of the society in which they exist is what concerns most people.

The legalities of the treatment of the prisoners in Guantanamo18 may be endlessly discussed, and no doubt will be. But the two sides are really concerned about whether or not the government of the United States should be permitted to treat those prisoners in ways which are regarded by many as immoral. Classifying them as “unlawful combatants” to get around provisions of the Geneva Convention is a typical maneuver to permit treating people in ways not morally acceptable. One side argues legalities to prevent what they regard as immoral—not just illegal—treatment. The other side argues legalities to permit treatment that they themselves would recognize as immoral under most circumstances. Here, as in many other scenes of contemporary life, the moral has no effective standing, and is replaced with the political and the legal, which then fail to address the deeper issue of “is it right?”

*

But if there is no moral reality, or no knowledge of it, then the legal and the political are as far as one can go. What more is there to be concerned about? Persons who would respond to “moral” issues beyond that would be foolish, “unrealistic.” They would be worrying themselves, perhaps risking their careers or even their lives for nothing, or at least for something which no one has knowledge of—perhaps for no more than a personal quirk on their part. That is pretty much where the “knowledge” now acceptable as such to the University leaves us. And this explains why sporadic efforts to teach “professional ethics” have no significant impact upon professional behavior and life. They can find no cognitive foundation for the formation of moral character and for becoming a morally responsible person in all the connections of life. And since the University is the arbiter of what counts as knowledge, it rules out any such foundation from other sources, and leaves only ethnic identity (cultural relativism) or non-rational personal commitments to go on. These do not provide a satisfactory basis upon which to confront the widespread abuses of human rights that characterize our contemporary world.

*

I have spoken repeatedly of the reality of moral goodness and of knowledge of moral goodness. Now I would like to briefly state my view of them, and point out how that view positions human rights in the broader context of morally acceptable human existence. Here I cannot argue for my view, but only state it and offer a few essential clarifications.

The morally good person, I would say, is a person who is effectively intent upon advancing the various goods of human life with which they are effectively in contact, in a manner that respects their relative degrees of importance and the extent to which the actions of the person in question can actually promote the existence and maintenance of those goods. Thus, moral goodness (as well as badness) is a matter of the organization of human dispositions and will into a system called “character.”

“Character” refers to the settled dispositions to act in certain interrelated ways, given relevant circumstances. Character is expressed in what one does without thinking, as well as in what one does after acting without thinking. The actions which come from character will usually persist when the individual is unobserved, as well as when the consequences of the action are not what the agent would prefer. A person of good moral character is one who, from the deeper and more pervasive dimensions of the self, is intent upon advancing the various goods of human life with which they are effectively in contact (etc.).

The person who is morally bad or evil is one who is intent upon the destruction of the various goods of human life with which they are effectively in contact, or who is indifferent to the existence and maintenance of those goods.

This orientation of the will toward promotion of human goods is the fundamental moral distinction: the one which is of primary human interest, and from which all the others, moving toward the periphery of the moral life and ethical theory, can be clarified. For example: the moral value of acts (positive and negative); the nature of moral obligation and responsibility; virtues and vices; the nature and limitations of rights, punishment, rewards, justice and related issues; the morality of laws and institutions; and what is to be made of moral progress and moral education.

A comprehensive and coherent theory of these matters can, I suggest, be developed only if we start from the distinction between the good and bad will or person—which, admittedly, almost no one is currently prepared to discuss. That is one of the outcomes of ethical theorizing through the 20th Century. It is directly opposite to the consensus of the late decades of the 19th Century, for which, as we have noted, the fundamental subject of ethical theorizing was the will and its character. (See Green, Bradley, Sidgwick, Dewey.)

I believe that the orientation of the will provides the fundamental moral distinction because it is what ordinary human beings, not confused or misled by theories of various kinds, naturally and constantly employ in the ordinary contexts of life, both with reference to themselves (a touchstone for moral theory) and with reference to others (where it is employed with much less clarity and assurance). And I also believe that this is the fundamental moral distinction because it seems to me the one most consistently present at the heart of the tradition of moral thought that runs from Socrates to Sidgwick—all of the twists and turns of that tradition notwithstanding.

Just consider the role of “the good” in Plato, Aristotle and Augustine, for example, stripped, if possible, of all the intellectual campaigns and skirmishes surrounding it. Consider Aquinas’ statement that “this is the first precept of law, that good is to be done and promoted, and evil is to be avoided. All other precepts of the natural law are based upon this; so that all the things which the practical reason naturally apprehends as man’s good belong to the precepts of the natural law under the form of things to be done or avoided.”19 Or consider how Sidgwick arrives at his “maxim of Benevolence”—“that each one is morally bound to regard the good of any other individual as much as his own, except in so far as he judges it to be less, when impartially viewed, or less certainly knowable or attainable by him.”20 Sidgwick of course tried hard to incorporate his intuitions of justice and of prudence into this crowning maxim, but with little obvious success.

A few further clarifications must be made:

1. I have spoken of the goods of human life in the plural, and have spoken of goods with which we are in effective contact, i.e. can do something about. The good will is manifested in its active caring for particular goods that we can do something about, not primarily in dreaming of “the greatest happiness of the greatest number” or even of one’s own ‘happiness’ or of “duty for duty’s sake.” Generally speaking, thinking in high-level abstractions will always defeat moral will in practice. As Bradley and others before him clearly saw, “my station and its duties” is nearly, but not quite, the whole moral scene, and it can never be simply bypassed on the way to “larger” and presumably more important things.

One of the major miscues of ethical theory since the sixties has been, in my opinion, its almost total absorption in social and political issues. This has happened for reasons already indicated, and of course these issues do also concern vital human goods. They are important, and we should always do what we can for them. But moral theory simply will not coherently and comprehensively come together from their point of view. They do not essentially involve the center of moral reality, the will and character.

2. Among human goods—things that are good for human beings and enable them to flourish—are human beings and certain relationships to them, and, especially, good human beings. That is, human beings that fit the above description. One’s own well-being is a human good, to one’s self and to others, as is what Kant called the moral “perfection” of oneself. Of course non-toxic water and food, a clean and safe environment, opportunities to learn and to work, stable family and community relations, and so forth, all fall on the list of particular human goods. (Most of the stuff for sale in our society probably does not.) Rights are primary human goods, and therefore the good person, on my view, will be deeply committed to their recognition and full deployment.

There seems to me to be no necessity of having a complete list of human goods, or a tight definition of what something must be like to be on the list. Marginal issues, “Lifeboat” cases, and the finer points of conceptual distinction are interesting exercises and have a point for philosophical training; but it is not empirically confirmable, to say the least, that the chances of having a good will or being a good person improve with philosophical training in ethical theory as that has been recently understood. It is necessary for the purposes of being a good or bad person that one have a good general understanding of proximate human goods and of how they are affected by action. And that is also what we need for the understanding of the good will and the goodness of the individual. We do not have to know what the person would do in a lifeboat situation to know whether or not they have good will, though what they do in such situations may throw light on who they are, or on how good (or bad) they are. The appropriate response to actions in extreme situations may not be a moral judgment at all, but one of pity or admiration, of the tragic sense of life, or of amazement at what humans are capable of, etc. etc.

3. The will to advance the goods of human life with which one comes into contact is inseparable from the will to find out how to do it and do it appropriately. If one truly wills the end one wills the means, and coming to understand the goods which we affect, and their conditions and interconnections, is inseparable from the objectives of the good person and the good will. Thus, knowledge, understanding and rationality are themselves human goods, to be appropriately pursued for their own sakes, but also because they are absolutely necessary for moral self-realization as here described. Formal rationality, defined without reference to particular ends or values, is fundamental to the good will, but is not sufficient for it.

4. Clearly, knowledge of moral distinctions depends upon knowledge of the human self, the subject of those distinctions. What Elizabeth Anscombe said decades ago about the need to quit doing moral theory until we have an adequate “moral psychology”21 seems very sensible in the light of how knowledge is now understood in the institutions of knowledge. Of course we can’t stop theorizing. We have to continue thinking about moral distinctions, because we have to act, and have to find out how to act. But we can never regain the self (will, character) as a subject of knowledge so long as we insist on forcing the self into a scientistic (“naturalistic”) mold. Moral knowledge disappears with authentic self-knowledge, which disappears with the ascendancy of “naturalism” (scientism). Moral character is not a matter of the physical body at any level of refinement, or of its “natural” relations to world and society. As long as the physical realm is regarded as the only subject of knowledge, there will be no moral knowledge and no cognitive foundation of the moral life. This is exactly where we stand today in Western culture and in the University system that presides over it on its epistemic side.

*

Moral rights have as their primary role resistance against the attitudes and actions of people and arrangements of evil intent. But in order for them to be effective in that role they must be urged and supported by multitudes of people of good will: people of established benevolence, wisdom, prudence, courage and temperance. Such people can only support their lives upon their experience of the reality of moral distinctions and values and upon a clear knowledge of their reality and nature. Upon that foundation, when widely shared, moral and then legal rights can frame societies and governments that are not merely just, as defined by rights, but are contexts of human flourishing. Pull that foundation away, and justice and rights themselves will not flourish—though we must have them and must always struggle to do the best we can by them. The point is not that we should wait for people to be highly developed or morally perfect to push for the upholding and expansion of rights. We should always do what we can to that end. It is an essential part of individual and corporate moral enculturation and progress. But what we can accomplish thereby depends upon the moral character of multitudes of people nourished and directed by knowledge of the reality and nature of moral values and distinctions. Ironically, the very institutions of knowledge today are turned against that upon which a high level of moral goodness in individuals and society depends.
