michael thinks

Gauthier1, 2002/03/29:19:32


SOME NOTES ON THE FIRST CHAPTER OF DAVID GAUTHIER'S MORALS BY AGREEMENT ("OVERVIEW OF A THEORY") — OXFORD UNIVERSITY PRESS 1986 (REPRINTED 1992); NUMBERING FOLLOWS HIS; MY SPECIFIC POINTS ORDERED BY LOWER-CASE LETTERS FOLLOWING HIS NUMBERS; IN THE FUTURE, INSTEAD OF EXTENDED QUOTES WITH COMMENTS, I WILL OFFER A PRECIS OF SOME MATERIAL, WITH COMMENTS INTERSPERSED AND/OR FOLLOWING; BUT HOPEFULLY ONE CAN GET A GOOD IDEA OF WHAT GAUTHIER IS UP TO FROM THE QUOTES AND MY COMMENTS THIS TIME AROUND; THIS IS A BIT ROUGHER THAN I WOULD LIKE AS WELL — I'LL TRY TO REFINE MY STRATEGY AND MY IDEAS MORE NEXT TIME.

1 (p. 1 para. 1) MORALITY. "Were duty no more than interest, morals would be superfluous."

1(a) EGOISM AND MORALITY: SOME TERMINOLOGICAL AND SUBSTANTIVE DISPUTES. DG states something here which is common knowledge in academic philosophy, something people do not take issue with anymore, something treated as part of the accumulated knowledge which philosophy has produced. I think he is right. Moral considerations play a distinctive *formal* role in our individual normative and practical economy: within a context of particular ends, they limit the *ways* in which these ends may be pursued. This feature is what is referred to when philosophers say moral considerations ‘override' or constitute ‘side constraints' on the pursuit of nonmoral ends; or that moral considerations are or give rise to ‘duties'. Where he goes wrong is in his particular characterization of this distinction: moral constraints as constraining *all* interest (including all egoistic interest).

(Another necessary condition for something's being a moral consideration is that it concern the well-being of others and/or preserving valuable features of a community one is a part of; this, again, is a common-knowledge starting point in the philosophical community. And something I am inclined to agree with. A real and important distinction is gotten at — the failure to value others or one's community properly is an important and widespread failure, and success in doing so is, or is a part of, an important stage in one's development into a happy, successful human being. The further, and final, commonly-accepted necessary condition for a consideration being a moral consideration is that it be *impartial*, making no distinction between the interests and well-being of the agent and the interests and well-being of others. DG accepts this distinction; I am anxious to read further to see what he is getting at. I disagree with this substantive condition, though, again, we need to distinguish the features picked out by the distinction from the characterization. The defenders of moral impartiality may be getting at something real; or they may just be confused; on this one, I tend to think there is much confusion and error and that the mischaracterization takes its adherents so far away from the truth that it is often not worth following all the antics required to build up a theory that even sort of squares with reality. I could be wrong, but that is my considered prejudice. In any case, let's leave this aside for now. I mention these things just to fill out the picture and note that, at a fundamental level, I think there is enough to the first two commonly-accepted necessary features of morality, and enough evidence that they would be justified by a true egoist theory, that I accept them. I realize that others, including most Objectivists, will use the term ‘moral' differently.
They may think we are fixing on something else in the world, observationally, when we use the term ‘moral'; but I think this is clearly wrong and that empirical studies of linguistic usage could prove this. More likely, such folks are fixing on another feature of the world entirely (e.g. those things that are most important to human happiness). This is fine. If you are doing this, call what everyone else calls moral considerations "smoral" considerations. My point is that the distinction DG and most of the rest of the philosophical community are getting at is: (a) the natural kind we are getting at when we use the terms ‘moral' and ‘immoral' and (b) something real and important.)

How might this feature of deliberation and action be explained if egoism is correct? First, we need to note that following one's interests — meaning acting on one's subjective preferences — is not necessarily acting egoistically. Egoism concerns the proper content of one's preferences, not simply the causal and normative role that one's own preferences — as opposed to another person's — play in one's deliberation and action. Second, there is a plausible egoist story to tell about what ‘side-constraints' are and why they are the way they are. That story concerns a role that principles, including principles about how we ought to treat others, can come to play in our psychology. If we come to accept principles — e.g. the trader principle or the principle of the noninitiation of force or a principle of benevolence — not simply as strategic rules to maximize utility (i.e. best achieve our own happiness), but as ways of thinking about, acting in, and responding to the world *which are constitutive of who we are and what makes us good*, then these principles come to have a very immediate and personal value, dependent on (our grasp of) their strategic value but over and above it. Two skeptical questions that such a view has to address: "Why can't particular values and projects alone suffice to define a definite and worthwhile identity?" and "Why can't one gain a definite and worthy identity by thinking of oneself simply as an excellent strategic maximizer?" But my purpose here — and I realize the tail is starting to wag the dog — is simply to point out that DG's starting point, and the received view of what moral considerations are, can and probably should be considered on target from the standpoint of developing an egoist theory of morality. A legitimate distinction in reality has been fixed upon; it is just that it may well be misdescribed.
We need not suppose that DG has stepped into a Kantian never-never land of free-floating duties in his third sentence and will likely spend the rest of the book trying to swim against the current back to something resembling reality.

1 (p. 1 para. 2 to p. 2 para 2) DESIRE, REASON, AND THE RATIONALITY OF ADOPTING MORAL CONSTRAINTS. "If moral appeals are entitled to some practical effect, some influence on our behavior, it is not because they whisper invitingly to our desires, but because they convince our intellect. Suppose we should find, as Hume himself believes, that reason is impotent in the sphere of action apart from its role in deciding matters of fact. Or suppose we should find that reason is no more than the handmaiden of interest, so that in overriding advantage a moral appeal must also contradict reason. In either case, we should conclude that the moral enterprise, as traditionally conceived, is impossible... But are moral duties rationally grounded?... [Yes.]... rational constraints on the pursuit of interest have themselves a foundation in the interest they constrain. Duty overrides advantage, but the acceptance of duty is truly advantageous. We shall find this seeming paradox embedded in the very structure of interaction. As we come to understand this structure, we shall recognize the need for restraining each other's pursuit of her own utility... Our enquiry will lead us to the rational basis for a morality, not of absolute standards, but of agreed constraints."

1(b) SORTING THE ISSUES AND "THE MOVE." Let's sort out the issues here. First, the relation between theoretical reason and value (or normativity). One could hold that theoretical reason, with the help of experience, allows us to discover what is valuable, what we have a practical reason to do. Or one could hold, with Hume, that theoretical reason only helps us discover the causal facts relevant to achieving ends set by interest (preferences, desires). Thus interest, and interest alone, gives rise to value and hence practical reasons. Despite what he says, DG seems to adhere to Hume's "handmaiden" view. It is just that one of the plain old naturalistic facts about what works in achieving our interests, and that theoretical reason discovers for us, is that, sometimes, our interests are best achieved by *not* being motivated by them and *not* aiming at them. This is "the move": some version of it, perhaps first articulated, if only implicitly, by Mill, has allowed philosophers to hold the view that an attitude or motivation not directed toward the values postulated by their theory may nevertheless be demanded by the theory. Pretty nifty way to get your theory out of a tight spot! Though DG is not really talking about egoism, but merely the pursuit of subjective interest (despite his ambiguous talk of "advantage"), one could similarly say: there are egoistic reasons to adopt nonegoistic motivations. Most philosophers think this statement is true.

There are some good arguments that an agent who actually did this would suffer from profound psychological incoherence (Michael Stocker wrote an interesting article on this, "The Schizophrenia of Modern Ethical Theories" — I don't have the full reference handy). And, I believe, when we look to moral motivations, we see people in some sense aiming at preserving their own sense of self and their self-esteem. Though I do not have this on my mind when I could lie to exploit someone and don't — this is the evidence cited by those who do not believe the egoistic motivation is there — the distinctively moral aspect of my attitude and action lies in the sensitivity of my attitude and action to my identity and self-esteem. I know that, if I lie for the purpose of exploiting someone, I have taken a step down the road of destroying my definite sense of myself and of my own value (and the distinctive rottenness I feel if I go ahead and exploitatively lie reflects this). The problem, and the need for fancy philosophical footwork, goes away when we analyze duty in terms of character-based egoistic interest that plays the functional role of overriding or constraining first-order interests in most circumstances. So the door is certainly open to argue that morality is rational relative to the goal of one's own happiness.

1(c) MORALITY AND AGREEMENT. What of DG's sketch of why we adopt moral constraints? Apparently, he finds reasons for the adoption of moral constraints not in the results of individual action, but in the results of collective action, in cases in which we must all do the same thing in order for results good for each of us to happen — this is my guess, but I could be wrong. I'll go with it until I read the relevant material later in the book. In any case, this is an intriguing possibility. Such considerations could be an additional element in an egoist view of morality: social, as distinct from personal, grounds for adopting moral constraints. Something to note: it seems to me that it is rationality — or rationality conditional on others being rational — not really agreement per se, that does the normative work here. Of course, if there is no better way to achieve the coordinated rationality, then agreement is relevant as the necessary means.

2.1 "We shall develop a theory of morals as a part of a theory of rational choice."

2.1 (a) RATIONALITY VERSUS SELF-INTEREST. Since for DG there are no restrictions on what interests one could or should have, the view must amount to this: whatever interests you have, they will be frustrated if you do not adopt interest-constraining interests. Such a view would have to be qualified in order to have a hope of being true. For starters, if I have a very strong interest in not having my interests constrained in any way, then this view will be false. I suspect that there are many such potential interests (for example, a strong interest in mayhem). So the view DG wants to defend would have to be either conditional ("Unless you have certain kinds of interests... blah blah") or reject a strict subjectivism with regard to interest, ruling out certain interests as inherently bad or irrational at the outset. I'm quite sure he would take the first option.

On an egoist view, one does not necessarily face such problems in developing a theory of rationality, for something can be said about what sorts of interests one should have — or in any case, there are restrictions with some normative import that do not derive *simply* from subjective interest. And my guess is that there are good reasons to have a theory of rational action, not just a theory of self-interested action. Beyond certain strictures at the beginning to get the theory going (and which have independent reasons to support them), there is no reason why a theory of economic rationality or a game theory needs to say anything about what interests agents could or should have; within a more robustly normative context, we may sometimes treat interests as if they were subjective.

(Worth noting: Objectivists often use ‘rational' in the context of practical reason to mean ‘rational pursuit of self-interest' — as distinguished from following one's self-regarding whims. They never mean anything like ‘choosing appropriate means to your ends (whatever those ends happen to be)'. Similarly, DG and many others use ‘advantage' in a strictly formal sense, to mean ‘achieving one's aims' or ‘satisfying one's interests'. In this sense, if all of one's aims are self-sacrificial, and one achieves them, things have turned out to one's ‘advantage'. I will attempt to distinguish the pursuit of interests from the pursuit of self-interest, stipulatively, for the purposes of argument. It may be that, on a true and complete view, language would rightly reflect a closer relationship between interest and self-interest — as, for example, if most of our particular interests were egoistic and the only practical means of rationally ordering interests was by reference to one's overall well-being. Then ‘rational' and ‘self-interested' would be coextensive or nearly so.)

So, if we are using the subjective conception of rationality, it is not clear to me that the question "Is morality rational?" is very interesting. Very plausibly, we have left out some of the information necessary to get us to morality. The more interesting question is: "Is morality rational relative to the end of one's own well-being?" Of course, one answer to this question is: "Yes, having nonegoistic interests promotes one's well-being." So, with a theory of egoism in mind, we might want to ask other sorts of questions as well: "What is a moral consideration? Is it a kind of egoistic consideration?"

2.1 (p.3 para.4 ff.) DIFFERENT PARADIGMS OF RATIONAL CHOICE."...the core of classical and neoclassical economic theory, which examines rational behavior in those situations in which the actor knows with certainty the outcome of each of his possible actions... The economist formulates a simple, maximizing conception of practical rationality, which we shall examine in Chapter II... Bayesian decision theory relaxes this assumption [certainty of outcome], examining situations with choices involving uncertainty. The decision theorist is led to extend the economist's account of reason, while preserving its fundamental identification of rationality with maximization... Both economics and decision theory are limited in their analysis of interaction, since both consider outcomes only in relation to the choices of a single actor, treating the choices of others as aspects of that actor's circumstances. The theory of games overcomes this limitation, analyzing outcomes in relation to sets of choices, one for each of the persons involved in bringing about the outcome. It considers the choices of an actor who decides on the basis of expectations about the choices of others, themselves deciding on the basis of expectations about his choice. Since situations involving a single actor may be treated as limiting cases of interaction, game theory aims at an account of rational behavior in its full generality."

2.1 (b) AGAINST THE PARADIGM OF MAXIMIZING RATIONALITY. This is all well and good insofar as, for explanatory or justificatory purposes, we can abstract from the content of the preferences people have (effectively treating preferences as subjective); and insofar as we do or should think in a maximizing and/or strategic way, weighing costs and benefits and trying to do our best to get the most out of our interactions with other people. I understand that game theory has some explanatory value in international relations, and I'm sure it does in other contexts as well, precisely because people are often rational in this way.

However, it is not clear that we do or should think this way all the time. And it is not clear to me that thinking this way exhausts what counts as "rational." For we treat certain values — such as moral values and other projects, people, and ways of life that are essential to our identity and our self-esteem — as constraints, as things for the most part outside our cost-benefit calculus which constrain that calculus. Not that there is no rational way to give up or trade off such goods; we sometimes do (we might weigh the moral good of seeing that justice is done when one's neighbor is an ass against the very practical, nonmoral good of maintaining civil relations with them because at least they don't presently hate you and want to do you harm); but we do not regard such goods as things which are generally up for trade (let alone bought and sold in a market, as commodities). I think DG sets out to prove precisely this point. But I suspect that this manner of practical deliberation is more independent of strategic maximization than DG thinks. For *having* a definite identity may be more important than being a good strategic maximizer, so that we need only a rough-and-ready calculation of costs and benefits before we make our identity-commitments (maybe this is how it goes with children as they form their identities) — and then off we go, constrained maximizers. If this is so, then strategic maximization is not necessarily the rational context for the development and justification of "constraints" on cost-benefit calculation. And it may be that maximizing rationality does explanatory and justificatory work only when a backdrop of constraining value is assumed (e.g. for economic rationality in the context of a market, the widespread acceptance and enforcement of principles of individual rights against force and fraud).
Obviously, I have more just stated this view than defended it; but I believe the distinction is readily observable, explanatorily and normatively important, and can be accounted for by reference to our need, as reflective and self-aware beings, for an identity which is both definite and good as a means to self-esteem (and perhaps as a means to other things as well — being a coherent, functioning agent may require this kind of practical or normative identity as well).

(I fully realize that even such "constraining" goods are valued more and less than other goods; and that, therefore, they can be ordered against other goods and given a "trade value" relative to other goods — even given a money "price" which is not explanatorily impotent (this, I think, is what is behind the thinking of folks like Posner and Becker, who conceptualize all human valuing and interaction as economic valuing and trading). I'm just skeptical about this similarity being sufficient to lump together all deliberation about value under the heading "strategic maximization.")

2.1 (p. 4 para. 3 ff.) CONTRA RAWLS AND HARSANYI. "Rawls argues that the principles of justice are the objects of rational choice — the choice that any person would make, were he called upon to select the basic principles of his society from behind a ‘veil of ignorance' concealing any knowledge of his own identity. The principles so chosen are not directly related to the making of any individual choices. Derivatively, acceptance of them must have implications for individual behavior, but Rawls never claims that these include rational constraints on individual choices. They may be, in Rawls' terminology, reasonable constraints, but what is reasonable is itself a morally substantive matter beyond the bounds of rational choice... Rawls' idea, that principles of justice are the objects of rational choice, is indeed one that we shall incorporate into our own theory, although we shall represent the choice as a bargain, or agreement, among persons who need not be unaware of their identities. But this parallel between our theory and Rawls's must not obscure the basic difference; we claim to generate morality as a set of rational principles of choice. We are committed to showing why an individual, reasoning from nonmoral premises, would accept the constraints of morality on his choices... Rawls supposes that persons would choose the well-known two principles of justice, whereas Harsanyi supposes that persons would choose principles of average rule-utilitarianism. But Harsanyi's argument is in some respects closer to our own; he is concerned with principles for moral choice, and with the rational way of arriving at such principles. However, Harsanyi's principles are strictly hypothetical; they govern rational choice from an impartial standpoint or given impartial preferences, and so they are principles only for someone who wants to choose morally or impartially.
But Harsanyi does not claim, as we do, that there are situations in which an individual must choose morally in order to choose rationally... Our theory must generate, strictly as rational principles for choice... constraints on the pursuit of individual interest or advantage that, being impartial, satisfy the traditional understanding of morality."

2.3 (p. 7, para 1) "MAXIMIZING" (I.E. PARTIAL) VERSUS "UNIVERSALISTIC" (I.E. IMPARTIAL) CONCEPTIONS OF RATIONALITY. "...consider rational action where the interests of others are involved. Proponents of the *maximizing* conception of rationality... insist that... the rational person still seeks the greatest satisfaction of her own interests. On the other hand, proponents of what we shall call the *universalistic* conception of rationality insist that what makes it rational to satisfy an interest does not depend on whose interest it is..."

3.1 (p. 9, para. 2) MORALS BY AGREEMENT: THE CORE POSITION. "Moral principles are introduced as the objects of fully voluntary *ex ante* agreement among rational persons. Such agreement is hypothetical, in supposing a pre-moral context for the adoption of moral rules and practices... Morality emerges quite simply from the application of the maximizing conception of rationality to certain structures of interaction. Agreed mutual constraint is the rational response to these structures... A genuinely problematic element in the contractarian theory is... the step from hypothetical agreement to actual moral constraint. Why need [one] accept, *ex post* in his actual situation, these principles [that he would have agreed to] as constraining his choices?"

3.1 (a) QUESTIONS, CRITICISM. Still seems like this is a garden-variety problem of collective action. The good, which is good for all of us, will not exist unless we all, or most of us, act in manner K. Since there is an incentive to free-ride, and too much free-riding will destroy the good at stake, the best solution may be for each of us to adopt a policy of not thinking as a strategic maximizer with regard to that good (and, presumably, to create a system of incentives that encourages others to do the same). Happily, we need to adopt principles as part of our identity, and here is an opportunity; also happily, we are wired to be quite sensitive to the approval and disapproval of others, so the basic tools for achieving general social compliance are in place. But where, exactly, does agreement come in here? Since the agreement is hypothetical, its relevance to the actual world would seem to be that it simply highlights the fact that it furthers everyone's interests to do what is necessary to achieve general compliance. But this gets us nowhere. Terror as well as morality could best achieve compliance. *Actual agreement*, especially to the extent most of the parties have already integrated principles of promise-keeping into their character, is certainly one way to achieve the compliance; but this is clearly not what DG means, and if it assumes a moral commitment to keeping promises, then it cannot get him where he wants to go. Most fundamentally, though: why even posit hypothetical agreement? Why aren't we faced simply with the question of what the best strategy is for achieving collective goods that require coordinated action? Maybe we are, and this is where game theory is supposed to come in: each of us, in rationally choosing in anticipation of what others will choose, will eventually, rationally, come to choose adopting self-constraining moral principles. Interestingly enough, game theory is pretty well-developed and there is no reason to question its results.
So maybe game theory shows us that there are good strategic-maximization-type reasons to be moral. But I doubt these are the most important reasons.

3.3 (p. 14, para 1 ff.) A MAXIMIN RELATIVE BENEFIT ACCOUNT OF BASIC FAIRNESS. "Where mutual benefit requires individual constraint, this reconciliation [of individual interest with mutual benefit] is achieved through rational agreement. As we have noted, a necessary condition of such an agreement is that its outcome be mutually advantageous; our task is to provide a sufficient condition. This problem is addressed in a part of the theory of games, the theory of rational bargaining... [W]e introduce a measure of each person's stake in the bargain — the difference between the least he might accept in place of no agreement, and the most he might receive in place of being excluded by others from agreement. And we shall argue that the equal rationality of the bargainers leads to the requirement that the greatest concession, measured as a proportion of the conceder's stake, be as small as possible [minimax relative concession]. So we formulate an equivalent principle of maximin relative benefit, which we claim captures the ideas of fairness and impartiality in the bargaining situation, and so serves as a basis for justice."
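A quick way to see what minimax relative concession amounts to is to compute it for a toy case. The sketch below is my own illustration, not Gauthier's: the surplus of 10, the function names, and the candidate splits are all invented for the example. Each bargainer's "stake" is the gap between the most he might receive and his no-agreement fallback, and his concession at a given outcome is measured as a proportion of that stake.

```python
# Toy illustration of minimax relative concession.
# Numbers and function names are mine, not Gauthier's.

def relative_concession(ideal, fallback, outcome):
    """A bargainer's concession, measured as a proportion of his stake
    (the gap between the most he might receive and his no-agreement point)."""
    stake = ideal - fallback
    return (ideal - outcome) / stake

def minimax_relative_concession(bargainers, candidates):
    """Pick the candidate outcome whose greatest relative concession
    (taken over all bargainers) is smallest."""
    def greatest_concession(outcome):
        return max(relative_concession(ideal, fallback, share)
                   for (ideal, fallback), share in zip(bargainers, outcome))
    return min(candidates, key=greatest_concession)

# Two parties split a cooperative surplus of 10. Each gets 0 without
# agreement (fallback) and could get the whole 10 if the other were
# excluded (ideal), so each has a stake of 10.
bargainers = [(10.0, 0.0), (10.0, 0.0)]
candidates = [(x, 10.0 - x) for x in range(11)]
best = minimax_relative_concession(bargainers, candidates)
```

With equal stakes and fallbacks the principle selects the even split, since any other division pushes one party's relative concession above one half; unequal stakes or fallbacks would shift the division accordingly.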

3.3 (a) FROM MUTUAL ADVANTAGE TO FAIRNESS IN AGREEMENTS. How does the "equal rationality" of the parties lead to the requirement that the greatest concession (among all the parties) be as small as possible? At first glance, this account of fairness is every bit as much a rabbit-out-of-the-hat act as Rawls'. But at least Rawls acknowledges it. How is it that accepting a principle like this furthers the interest of each agent? Is it even meant to? How are alternative principles ruled out? Could it really be that *what is actually beneficial to one* (such as the friendship, community, and knowledge one gains from others and the very high costs of an interaction spiraling into violence) — as opposed to whatever one happens to have an interest in or prefer — is irrelevant? I cannot help but suspect a lurking Kantian commitment to impartiality. Compare with a clearly egoistic account of an "equal shares" principle of justice. If principles of just desert, prior agreement, and initial acquisition fail to determine who should get what, there are sound violence-avoidance-type reasons for each person to adopt an equal-shares principle; and there are clear egoistic reasons to adopt the principle as a constraint, not simply a maximizing strategy; and there are clear egoistic reasons to encourage others to do the same, punishing the strategic-maximizing free-riders. Or compare to the trader principle and a sketch of a possible justification for it: if it is either the trader principle or a principle of exploitation, and the latter principle likely leads to the all-time-greatest social scourge of spiraling conflict and violence, then the trader principle wins. The evidence for this, perhaps, is available to a child in the context of her family and its and her social interactions.
And one has every reason to accept such a principle as part of one's identity, not simply a strategic rule; and one has every reason to provide incentives for others to do the same (and not free-ride). Of course, these sketches are quite incomplete, but in both examples, it is pretty clear how a rational, egoistic justification would go. But these are just initial questions and objections; perhaps DG answers them adequately later on.

Final note. Intuitively, in order for agreement to do any normative work — and it is still unclear to me how DG's hypothetical agreement does any normative work — it needs to be a fair agreement. There are all sorts of mutually advantageous arrangements which we might agree on but which are unfair. I think this point is what drives DG's account here. There must, it seems, be something at work other than mutual advantage, if agreements are to be fair and have the normative force that comes with being fair. I don't think there is any way out of this dilemma for DG. For he wants to say that agreement is a basic or fundamental consideration of fairness or justice (interestingly, for DG, considerations of fairness or justice are prior to morality; they exist in the form of considerations or strategies, but not constraints on strategic maximization, prior to the adoption of such constraints) and this just ain't so. Compare: principles of proportional reciprocity, desert, property, and initial acquisition of property forming a background against which the fairness of an agreement can be gauged. This picture makes sense, but it also could not be the basis of a theory justifying morality on rational grounds.

3.3 p. 14 para.4 ff. SOLUTION TO COMPLIANCE PROBLEM. "... in so far as the social arrangements constrain our actual *ex post* choices, the question of compliance demands attention. Let it be ever so rational to agree to practices that ensure maximin relative benefit; yet is it not also rational to ignore these practices should it serve one's interest to do so?... The weakness of traditional contract theory has been its inability to show the rationality of compliance... [C]onstrained maximizers, interacting one with another, enjoy opportunities for cooperation which others lack... [U]nder plausible conditions, the net advantage that constrained maximizers reap from cooperation exceeds the exploitative benefits that others may expect [to reap from them, at their expense]. From this we conclude that it is rational to be disposed to constrain maximizing behavior by internalizing moral principles to govern one's choices."

3.3 (b) STRATEGIC PRETEND-CONSTRAINED MAXIMIZATION. Maybe. But wouldn't "pretend" constrained maximizers do even better, since they could — presumably — get all the benefits of being constrained maximizers but also exploit the situation when no one is looking? The realities of human psychology may make it practically impossible to engage in such a strategy, but DG does not seem to be arguing this. I don't see how DG addresses the essential problem of there seeming to be, in theory at least, a rational interest in free-riding.
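The shape of the worry, and of one possible reply, can be put in numbers. If dispositions are even partly detectable, pretending stops paying once the detection probability is high enough. The payoffs below are my own invented illustration (not Gauthier's figures): cooperation pays each party 2, an undetected exploitation pays the pretender 3, and a detected pretender is refused cooperation and gets 1.

```python
# Toy payoffs (mine, not Gauthier's) for a pretender among genuine
# constrained maximizers, where dispositions are partly detectable.
COOPERATE = 2.0   # each party's payoff from genuine cooperation
EXPLOIT   = 3.0   # the pretender's payoff from an undetected exploitation
EXCLUDED  = 1.0   # the pretender's payoff when detected and refused cooperation

def pretender_expected_payoff(p_detect):
    """Expected payoff of a pretend constrained maximizer, given the
    probability that genuine constrained maximizers see through him."""
    return p_detect * EXCLUDED + (1.0 - p_detect) * EXPLOIT

def pretending_pays(p_detect):
    """Does pretending beat genuine constrained maximization?"""
    return pretender_expected_payoff(p_detect) > COOPERATE
```

With these numbers the break-even detection probability is one half: below it pretending pays, above it it doesn't. So the force of the free-rider worry turns on an empirical question about how transparent our dispositions are, which is the sort of psychological premise I do not yet see DG supplying.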

3.3 p. 15, para.3 ff. THE INITIAL BARGAINING POSITION: THE ANTIEXPLOITATION PROVISO. "...if what some bring to the table includes fruits of prior interaction forced on their fellows, then this initial acceptability [of what each bargainer brings to the table] will be lacking. If you seize the products of my labor and then say ‘Let's make a deal', I may be compelled to accept, but I will not voluntarily comply. We are therefore led to constrain the initial bargaining position through a proviso that prohibits bettering one's position through interaction worsening the position of another. No one should be worse off in the initial bargaining position than she would be in the nonsocial context of non-interaction. The proviso thus constrains the base from which each person's stake in the agreement, and so the relative concession and benefit, are measured. We shall show that this induces a structure of personal and property rights, which are basic to rationality and morally acceptable social arrangements."

3.3 (c) AGREEMENT AND PRIOR CONSIDERATIONS OF FAIRNESS OR JUSTICE. OK, we are still at hypothetical agreement. Again, the appeal seems to be to what counts as a fair agreement. And such initial conditions are important here. But DG seems clearly to be depending on more basic social principles — principles of nonexploitation and personal and property rights — in order to defend a grounding of specifically *moral* principles in hypothetical, fair agreement. I suppose these principles are justified simply by reference to mutual advantage (again, remember, advantage formally construed: whatever interests we happen to have are furthered). I guess the idea is that these principles are still strategic, not moral, that the resources of game theory are necessary for constrained maximization to be rationally justified. If this is right — I am not sure — then my disagreement, again, is that any game-theoretic reasons for adopting constraints on strategic maximization are secondary: we have much better and more immediate reasons for not being *simply* strategic maximizers, reasons grounded in the requirements of identity and self-esteem.

3.3 p. 16 para 3 ff. THE "ARCHIMEDEAN POINT" OF MORAL LEVERAGE. "... [we add a fifth point] — the Archimedean point, from which the individual can move the moral world. To confer this moral power, the Archimedean point must be one of assured impartiality — the position sought by John Rawls behind the ‘veil of ignorance'... Archimedean choice is properly conceived not as a limiting case of individual decision under uncertainty, but rather as a limiting case of bargaining [contra Rawls]."

3.3 (d) SIMPLY CHOOSING THE MORAL POINT OF VIEW. Yikes! What is this? I think the idea is this: there is a "moral point of view" which is in some sense impartial — everyone's interests matter equally. DG wants to show that this stance is not a general stance, but rather one, grounded in strategic maximization and appropriate only in specific bargaining situations, which constrains strategic maximization with impartial concern for the interests of all parties involved. But the stance itself is simply taken; its impartial content is determined simply by choice. The *taking* of the impartial stance may be rationally grounded, but the stance itself is not — i.e. the content of the stance is not determined by grasping that it (again, as opposed to *having* the stance, attitude, or motivation) is the best means to satisfying our interests. Look at it this way: the *content* of a nonegoistic motivation could not be determined on the basis of egoistic considerations (sacrifice cannot possibly be a means to benefit), but, conceivably, the *having* of nonegoistic motivation could be (*being sacrificially motivated toward* or *sacrificially aiming at* ends other than those of benefit might happen to result in benefit).
