Thursday 31 January 2013

Ethical Prisoners

The prisoners’ dilemma is often described, approximately, as follows:

Two people have been arrested for a serious crime and have since been separated such that they cannot communicate.  The prosecutor puts an ultimatum to each of them individually:

You may choose to either confess or remain silent.

If you both choose to remain silent, rest assured that we have enough evidence to convict you both of a crime, albeit one which is less serious and attracts a lesser sentence than that for which you have been arrested.

If you both confess, you will both be charged with the crime for which you have been arrested, but I will reward you both with a reduction in your sentences in recognition of your confessions.

If you confess while the other remains silent, I will set you free.  I will use your evidence to ensure that the other will serve the maximum sentence possible.

This scenario clearly has ethical implications, but which ethical principle has precedence is not so clear.  There are also a number of uncertainties which I have glossed over which make determining precedence extremely difficult.  Are the two people long-term friends or strangers?  Did they actually commit the crime?  Is the prosecutor reliable; that is, will she keep her word?

What is the extent of the difference between the punishments?  Is the death penalty involved?  Are they part of an organisation which will punish a confessor on release?  What are the ethical stances of the prisoners: is lying worse than abandoning a colleague, or vice versa?

It is possible that neither prisoner will be aware whether the other has been given this ultimatum yet, or indeed ever will be.  From that perspective, it can be argued that the single prisoner may only be playing a game with the prosecutor, not the other prisoner.  Otherwise, in reality, the prisoner is playing two different games simultaneously – one with the other prisoner and one with the prosecutor.

In the standard treatment of the prisoners’ dilemma the following is assumed:

·         Both prisoners act rationally, based on nothing more than the information provided. 

·         Both prisoners consider only the best possible (short-term) outcome for themselves, without regard for ethical constraints or fear of retribution after release.

·         Both prisoners may consider the other as little more than an abstraction, so no consideration as to the welfare of the other is necessary.  Both are aware that the other was also taken into custody.

·         The crime in question is sufficiently mundane that no lasting stigma is attached to admitting to it (at least from the perspective of the prisoners).

·         The rules of the game are fair and thus the prosecutor is not lying, provides both prisoners with the same dilemma and will keep her word.

Furthermore, the scale of punishments is such that the punishment accorded to a betrayed prisoner, one who remains silent while the other confesses, is worse than any punishment meted out to both together.  We’ll call this the “Severe Punishment” and arbitrarily set it to 20 years in prison.  The punishment which would be shared by both prisoners is significantly lighter if both stay silent than if both confess.  We’ll call the former the “Minimum Punishment” (5 years in prison) and the latter the “Medium Punishment” (10 years in prison).  The reward for unilateral confession, namely freedom and immunity from prosecution (“No Punishment”), is preferable to all other outcomes.  (Variations on these scales of punishment are also studied by game theorists, but we shall not delve quite so deeply.)

The problem facing each prisoner is that the reward or punishment to be accorded is not entirely of his own making.  Consider the decision-making process of just one prisoner, whom we’ll call Larry for convenience.  Larry has two choices: to stay silent or to confess.  Larry knows that his partner in crime (Wally) has the same two choices, but he doesn’t know for sure what Wally will do, and he knows that Wally doesn’t know what he, Larry, will do.

If Larry chooses to stay silent, two outcomes are possible: Wally stays silent too, and they both receive the Minimum Punishment of 5 years; or Wally confesses and is released, while Larry receives the Severe Punishment of 20 years.

If Larry chooses to confess, two outcomes are possible: Wally stays silent and receives the Severe Punishment of 20 years, while Larry is released; or Wally also confesses, and both receive the Medium Punishment of 10 years.
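These four outcomes can be captured in a small lookup table of sentences, indexed by the pair of choices.  Here is a minimal sketch in Python (the table name and layout are my own illustration, not part of the standard treatment):

```python
# Sentences in years for each pair of choices, as defined above.
# Key: (Larry's choice, Wally's choice) -> (Larry's sentence, Wally's sentence)
SENTENCES = {
    ("silent", "silent"):   (5, 5),    # both silent: Minimum Punishment
    ("silent", "confess"):  (20, 0),   # Larry betrayed: Severe Punishment
    ("confess", "silent"):  (0, 20),   # Wally betrayed: No Punishment for Larry
    ("confess", "confess"): (10, 10),  # both confess: Medium Punishment
}
```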

What should Larry do?

Because Wally and Larry’s decisions are effectively simultaneous, Larry can consider his predicament from two perspectives: if Larry makes a certain decision, what will be the potential consequences of Wally’s decision?  And if Wally makes a certain decision, what options are open to Larry?

The first consideration can be illustrated thus:

Either

Larry decides to confess:

·         Wally decides to confess: 10 years in jail for both.

·         Wally decides to remain silent: Larry goes free; Wally spends 20 years in jail.

or

Larry decides to remain silent:

·         Wally decides to confess: Larry spends 20 years in jail; Wally goes free.

·         Wally decides to remain silent: 5 years in jail for both.

The second consideration can be illustrated thus:

Either

Wally confesses:

·         Larry decides to confess: 10 years in jail for both.

·         Larry decides to remain silent: Wally goes free; Larry spends 20 years in jail.

or

Wally remains silent:

·         Larry decides to confess: Larry goes free; Wally spends 20 years in jail.

·         Larry decides to remain silent: 5 years in jail for both.

The distinction between the two modes of decision making is that the first is predictive (Larry makes a decision and hopes that Wally makes a particular decision) whereas the latter is reactive (Wally makes, or is assumed to have made, a decision, and Larry must ensure that his own decision obtains the best result).

Looking at the predictive consideration:

·         If Larry decides to confess, he stands to gain freedom at the expense of Wally or to spend 10 years in jail, along with Wally.  Clearly in this instance, Larry will hope that Wally remains silent.

·         If Larry decides to stay silent, he might spend only 5 years in jail together with Wally.  However, if Wally was going to stay silent anyway, then Larry has made a suboptimal decision; confessing would have given him his freedom.  Additionally, by remaining silent, Larry has exposed himself to the risk of spending 20 years in jail.

In terms of prediction, it appears that Larry’s best option is to confess.

Looking at the reactive consideration:

·         If Wally confesses, then Larry can either confess and only spend 10 years in jail along with Wally, or stay silent and allow Wally to walk free while he himself spends 20 years in jail. 

·         If Wally stays silent on the other hand, then Larry can either confess and be released or stay silent and spend 5 years in jail along with Wally.

In terms of reaction, the benefits accruing from confession, and the irrationality of staying silent, are even starker.
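Put another way, the reactive consideration is a dominance check: whichever choice Wally makes, Larry serves less time by confessing, so confession strictly dominates silence.  A sketch of that check, reusing the hypothetical SENTENCES table from earlier:

```python
# For each choice Wally might make, compare Larry's sentence if he
# stays silent against his sentence if he confesses.
for wally in ("silent", "confess"):
    silent_years = SENTENCES[("silent", wally)][0]
    confess_years = SENTENCES[("confess", wally)][0]
    print(f"Wally {wally}: Larry serves {silent_years} years silent, "
          f"{confess_years} years confessing")

# Wally silent: Larry serves 5 years silent, 0 years confessing
# Wally confess: Larry serves 20 years silent, 10 years confessing
```

In both cases, confessing gives Larry the shorter sentence.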

The “dilemma” revolves around the fact that if the two are going to spend time in jail together (which seems the likely outcome given that both are rational agents and both will rationally decide to confess) then it is rational that they collectively strive to spend as little time in jail as possible – which they could accomplish by remaining silent.  Therefore, it’s rational to confess and it’s rational to remain silent.

It is tempting to consider this scenario in terms of ethics, both from the perspective of allowing ethics to sway the decision making of the prisoners and also by analysing the scenario to see if a moral structure could be derived from the hypothetical dilemma.

If ethics were allowed to sway the decision making of the prisoners, it would be necessary to decide which standard ethical principle has precedence: “do not lie” or “do not abandon/betray your colleagues”?  To determine this we need to know more about the prisoners.  For instance, we don’t know whether the prisoners actually committed the crime of which they are accused.  While we stated that the game is ‘fair’, it was not made clear whether the prisoners were guilty or not.

If the prisoners are career criminals, then it is probable that they will value unity over honesty.  If the prisoners are innocent (and more, do not know the other prisoner at all), it is probable that unity will not figure highly.  This does not bode well for a concept of universal morality, at least as derived from the prisoners’ dilemma, because the morality to be applied is clearly dependent on the situation.

What we do see, however, is that the morality of unity does have a benefit in this situation.  It is generally considered moral to extend and honour trust.  In this instance, bilateral morality will benefit both prisoners.  Many natural situations are analogous to the prisoners’ dilemma and the morality of unity does seem to come into play in these situations. 

An example is when two ancient warriors first meet and choose neither to shield themselves nor to raise their weapons against the other.  By extending trust and honouring that (apparent) trust, each of the warriors risks death if the other plans betrayal.  What they avoid is the need to fight immediately, and the risk of being wounded in an unnecessary battle, while simultaneously standing to gain by potentially making an ally.

In the prisoners’ dilemma we can see that the morality of unity can reap the benefits of co-operation in a situation in which choosing not to co-operate would result in a worse outcome for both.  Is this sufficient basis for a universal morality?

Let us briefly consider a couple of issues.

First, who are the prisoners playing against?  Is Larry playing against Wally, or against the prosecutor (potentially together with Wally)?  When the scenario is framed, it is stated that the prosecutor is ‘fair’, but it is plain that she may benefit from the situation, or potentially fail to benefit, depending on the outcome.  If she obtains two confessions, that could be considered a win, possibly her best outcome.  However, she may consider one major conviction with a full sentence served to be preferable; we can reason that this is quite likely, otherwise she wouldn’t make such a generous offer.  Her worst outcome is to have both prisoners remain silent, thus obtaining only two nominal convictions.

If Larry and Wally confess and remain silent with equal probability (randomly, rather than rationally), the prosecutor has a 75% chance of getting a favourable outcome – either two confessions or one major conviction.  If Larry and Wally are independent rational actors playing against each other, then both will almost certainly choose to confess, a favourable outcome for the prosecutor.  It is only when they choose to act ethically (applying the morality of unity) that the prosecutor will lose.
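The 75% figure follows from enumerating the four equally likely outcomes; only the case in which both prisoners stay silent is unfavourable to the prosecutor.  A quick check (again, my own illustration):

```python
from itertools import product

# Each prisoner independently flips a fair coin, giving four equally
# likely outcomes. The prosecutor is happy with two confessions, or
# with exactly one (a major conviction with the full sentence served).
outcomes = list(product(("silent", "confess"), repeat=2))
favourable = [o for o in outcomes if "confess" in o]
print(len(favourable) / len(outcomes))  # 0.75
```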

Therefore, exactly who it is that Larry plays against should be a vital part of his considerations.  Larry can only win against the prosecutor by staying silent and hoping that Wally does so too.  If Larry chooses to play against Wally though, then he should rationally choose to confess.  If Wally confesses then a draw results between the prisoners and the prosecutor secures a minor win.  If Wally chooses to play against the “wrong” person (that is against the prosecutor, and not Larry), then he will stay silent and Larry will win by walking free while Wally spends the next twenty years in jail – and the prosecutor secures a major win.

Second, we should consider other variants of the prisoners’ dilemma.  As the dilemma has been framed, it is a single event and effectively synchronous.  Most real-world situations are neither isolated events nor entirely synchronous.  There are variations of the prisoners’ dilemma within the discipline of game theory in which this is taken into account.

If the prisoners’ dilemma is asynchronous, the decision-making processes of the prisoners will match Larry’s hypothetical considerations above: the prisoner who makes the first move uses the predictive consideration while the other uses the reactive.  Both prisoners will therefore rationally choose to confess, unless the first works on the assumption that they are playing against the prosecutor and not each other.  The second prisoner then has the opportunity to betray the other, or to co-operate by staying silent in order to beat the prosecutor.

If the specific form of the prisoners’ dilemma is a series of dilemmas, then the prisoners can look at the problem in a number of different ways.  If both prisoners have a tacit agreement that they are playing against the prosecutor, then they will consistently choose to stay silent.  However, if at least one prisoner decides that he is playing against the other prisoner then other strategies arise.

Let us say that one prisoner is a superiority seeker, who wants no more than to prevail over a player who plays the game under the same conditions as himself.  To prevail in a recurring prisoners’ dilemma, a superiority seeker must win at least one more round than the other prisoner.  This can be achieved either by confessing in the first and all subsequent rounds and hoping that his opponent remains silent at least once, or by staying silent in the first round, thereby potentially setting up a trust relationship which can later be abused for profit.

The first option seems clearly the more rational, as the superiority seeker stands at worst to come out equal if the other prisoner follows the same strategy, and will win otherwise.  The second option is risky with no clear benefit because, by acting to set up a situation of trust, all the superiority seeker does is make it possible for his opponent to be the betrayer rather than the betrayed.  The strategy could also fail in the very first round, if the opponent chooses to confess from the outset.
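The two options can be compared in a toy simulation of the repeated game.  The strategies below, and the rule that a round is ‘won’ by serving a strictly shorter sentence than the opponent, are my own framing; the code reuses the hypothetical SENTENCES table from earlier:

```python
def play(strategy_a, strategy_b, rounds=10):
    """Count the rounds each player 'wins' (a strictly shorter sentence)."""
    wins_a = wins_b = 0
    history_a, history_b = [], []
    for _ in range(rounds):
        a = strategy_a(history_b)  # each strategy sees the opponent's past moves
        b = strategy_b(history_a)
        years_a, years_b = SENTENCES[(a, b)]
        if years_a < years_b:
            wins_a += 1
        elif years_b < years_a:
            wins_b += 1
        history_a.append(a)
        history_b.append(b)
    return wins_a, wins_b

def always_confess(opponent_moves):
    # Option one: confess in the first and all subsequent rounds.
    return "confess"

def trust_then_mirror(opponent_moves):
    # Option two: extend trust first, then copy the opponent's last move.
    if not opponent_moves or opponent_moves[-1] == "silent":
        return "silent"
    return "confess"

print(play(always_confess, always_confess))    # (0, 0): every round drawn
print(play(always_confess, trust_then_mirror)) # (1, 0): one betrayal, then mutual confession
```

Under these rules the always-confess option never loses a round, which matches the reasoning above: at worst it draws against an opponent playing the same strategy, and it wins outright against a prisoner who extends trust.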

It therefore seems to make little difference whether a prisoners’ dilemma is a singular event or recurring, or synchronous or asynchronous.  Who a player considers to be the opponent seems to remain the single most important determining factor.

 ----------------------------------------------

This article is one of a series.  It was preceded by The Moral Animal and will be followed by Ethical Farmers.

Saturday 26 January 2013

Some Musings on the Moral Animals of reddit

There have been some interesting responses to The Moral Animal, although unfortunately not in the comments section of the blog.  You can find them at r/philosophy and r/atheism.  I chose those two areas of reddit.com because ethics is a branch of philosophy and because I am curious to know what, in general, atheists think about the moral agency of animals.  Of course, the volume of responses I got doesn’t really give me any indication of the general view, but it was interesting to see people who identify themselves as atheists defending some sort of top-down version of morality.

When I use the term “top-down version of morality”, I am referring to statements like:

“basic morals are inherent”

“lower species are … morally superior”

“there is a substantial difference between morality and trained behaviour”

These all seem to imply that there is a morality which we can identify in ourselves, and possibly in other animals, but which is not a human invention.  Now, I might be misreading this, since it is possible that these people simply mean to imply that humans invent this idea of morality, define it in some meaningful way, and then look to see if we and other animals demonstrate it (an example being monogamy, a state that ducks are accused of maintaining).

There were a couple of responses which implied that the morality of animals is (or perhaps should be) defined in terms of their interactions with humans.  For example, the dog who mauls a toddler to death would have done something that is morally wrong.  Dogs who rescue children are morally righteous.  An orang-utan who rapes a girl is morally wrong.  (I’m editing this in Word and the grammar checker tells me that I shouldn’t use “who” when talking about a dog; anthropocentric bias seems to be built into at least Microsoft’s version of English!)

Other responses framed the question of morality more in terms of intraspecies interaction, so it’s not all bad.  For example: house cats rarely kill each other (although they certainly fight and male lions kill their rival’s cubs), other apes practice reciprocal altruism and dolphins commit a form of gang-rape.

Perhaps the major thread in the responses was my own fault.  The headline at r/atheism spoke only of animals being moral, rather than being moral agents.  The r/philosophy headline was better phrased, but even there, there was a heavy focus on “what is moral?” at the expense of the question addressed in The Moral Animal, “can animals be moral agents?”

However, much as it was my own stupid fault, in one comment thread my interlocutor seemed unable or unwilling to take on board my repeated statements that I was only considering the absolute minimum requirements for being a moral agent, rather than whether animals are moral.  The final comment from this chap, who has the handle okayifimust, was so full of assumptions that I could not do it justice in that comment thread.  So, I’d like to address it here.  I’ll reproduce a little section of the thread, first me (as wotpolitan), then okayifimust (for the earlier discussion and anything following, see r/atheism):

wotpolitan

Absolute minimum requirement for moral agency. I've said it a few times.

I think animals have it. I think there is evidence to support the idea that they have it.

Furthermore, given the vagueness associated with the terms "right" and "wrong" (for which a definition is required to use the wiki definition of moral agency in any sensible way), I think that animals can be moral agents. I define for my dogs what right and wrong is, and they know in the specific context of that interaction what right and wrong is. And they act "morally" in that context. Now, like humans, they will act "immorally" if they think they can get away with it. Unlike humans, however, they aren't that good at working out what they can get away with. That means they appear less moral than humans, but I think that's probably an anthropocentric illusion. I may be wrong on that, but it's another discussion.

okayifimust

| Absolute minimum requirement for moral agency. I've said it a few times.

Agreed. But since it's not a sufficient condition for moral agency - what is the point?

| Furthermore, given the vagueness associated with the terms "right" and "wrong" (for which a definition is required to use the wiki definition of moral agency in any sensible way), I think that animals can be moral agents.

Yes, we have to define "right" and "wrong" somehow. But in order for X to be a moral agent, all we have to do is agree that X is somehow aware of the concept; we don't have to agree with it, even.

| I think that animals can be moral agents. I define for my dogs what right and wrong is, and they know in the specific context of that interaction what right and wrong is.

But then a dog who doesn't shit on your carpet is equally "moral" as a dog who mails toddlers to death, just as long as that is what you train them to be. The dogs lack any consideration of morality, they just fear punishment.

| And they act "morally" in that context. Now, like humans, they will act "immorally" if they think they can get away with it.

Look, if you keep putting every word in quotes that we absolutely need to be very, very clear about, we might as well stop discussing it.

And, no, humans will not always try to do things that they can get away with. Humans can be moral agents, they can decide that an action is "wrong" as opposed to "likely to result in punishment if i am found out".

Okay, responding as I must:

I’m not, at this point, talking about moral agency per se. Earlier in the discussion I had highlighted a passage from The Moral Animal:

The absolute minimum requirements of moral agency … are an ability to understand and predict the consequences of action/inaction (comprehension) together with an ability to make decisions and act on them (volition).  Note that at this point I am neither assuming nor proposing any definitions of right/wrong, or good/bad, or moral/immoral.  An actor meeting the absolute minimum requirements of moral agency merely is able to choose between at least two predictable consequences of action/inaction.

The redditor okayifimust was surprised that I wanted to “claim ‘moral agency’ without considering the distinction between moral/immoral”.  I directed okayifimust’s attention to a response to another redditor, NukeThePope, in which I had written (clarifying edits in brackets):

Like I say (elsewhere), I am only talking in (The Moral Animal) about moral agency, by which I mean the ability to make a decision and act upon it while understanding the consequences of potential choices. Perhaps it would help to think about it in these terms - before we've decided what is right and wrong, even before we've decided on the basis (bases, perhaps) upon which we would make the decisions regarding right and wrong, we can consider moral agency. In other words, presuming that there is a morality with which we (and animals) may interact, what must we have to be able to interact with that morality?

Taking the viewpoint of some theists for a moment, say there is a god who either by its existence or by divine decree establishes a range of things that are right and good, and a range of things that are wrong and bad. Say further that the only creatures capable of interacting with this divinely inspired morality are humans. Therefore, in such a world, there is something different between animals and humans. What is that something?

Some theists, although not all, put that down to a soul, some element of likeness to their god. These people are saying, in a way, that the soul is the tool which determines for us what is moral. (I may be horribly simplifying a brilliantly exquisite point of theology here, but it is not my intention to attack theists on that point.)

Now, what I am saying is that I agree that to be a moral agent, we … must be able to work out that something is in some way wrong - that can be because it is against the rules, or something else, like an inbuilt moral detector, tells us it is wrong. Furthermore, we … must have volition - which theists along with many atheists would put down to free will.

I'm not saying anything about empathy, or needs and desires of others, I am just saying that my dogs know that if they take certain actions, then there will be certain outcomes, and they do have the volition necessary to choose one action over others. I'm not saying that they [are] thinking morally, pondering on the action that will cause minimum suffering to those around them, but I am saying that they have the skill set necessary to act morally, if they had the ability and inclination to consider the potential suffering of those around them.

I really am just talking about the absolute minimum requirements for moral agency here.

I’d like to expand on this a little.  Imagine a situation in which a moral question must be answered.  It could be anything, but I’m not particularly interested in a situation which has no clearly correct resolution (so that eliminates dilemmas such as the trolley problem).  Let’s use the finding of a large sum of money instead.  You come across a briefcase in which there is $1,000,000 along with a letter which indicates that the money is legitimately owned by a Mr Smith of 1 Smith Street, Smithton (and that it is not associated with any criminal activity).  That address is just around the corner, you are not busy and it’s on your intended route (you’re even walking, so it’s no bother whatsoever to drop it at Mr Smith’s house).  However, it’s late at night, no-one is around, no-one knows that you are in this area, and $1,000,000 would make a significant improvement to your standard of living.  No-one would ever know if you just continued on your way and took the money home.  What is the correct course of action for you?

Now, the morally correct answer is pretty clear: you should take the money to Mr Smith’s house, returning it to the rightful owner.  But let’s look at you as a moral agent for a moment.  Say that you walked right past the briefcase without noticing it was there.  Would you in that case have had a moral obligation to return it?

I would argue that you would not; the obligation had not been triggered.

Say instead that you saw the briefcase, you opened it, and found that it contained scraps of meaningless paper, all about the same size and in nice bundles, along with another meaningless paper with various squiggles on it.  Would you then have a moral obligation to return it to Mr Smith, noting that in this situation you would not know that it belonged to Mr Smith (because the letter explaining the money and detailing his address was unreadable to you)?

Again I would argue that you would not.

In order to have a moral obligation, you must be cognitively aware that an obligation exists.  You must have comprehension.

On the other hand, suppose that you do become aware of an obligation, but when you try to lift up the briefcase, you find that it is incredibly heavy, or stapled immovably to the pavement.  Do you have an obligation to drop the briefcase at Mr Smith’s address?

I don’t think so, since you can’t lift the briefcase.  You might have the will, but not the ability to act.

Alternatively, perhaps just as you realise what you should do, you are knocked unconscious and carried home.  In this situation, it is surely ridiculous to expect that you have a moral obligation to return Mr Smith’s money while you are being carried home in a stupor.  You don’t have volition.

Even as a human, I argue, in order to be a moral agent as an absolute minimum you must have comprehension and volition.  Taking that a little further, it occurs to me that animals also have comprehension and volition, which means that they satisfy the absolute minimum requirements for being a moral agent.

Yes, we have to define "right" and "wrong" somehow. But in order for X to be a moral agent, all we have to do is agree that X is somehow aware of the concept; we don't have to agree with it, even.

Hm.  I might not really understand where okayifimust is going with this one.  I think he means that for X to be a moral agent, then X must have some sort of concept of morality, even if we don’t agree with that morality.  For example, X might be moral if X kills every second red-haired person she meets, because that’s in her moral code (“Yea verily, I say unto thee, thine brothers and sisters with hair of red that are without souls are an abomination to me, let the first walk freely but smite every second that thee doth meet so that I may glory their suffering, for I am a vengeful and capricious God.  So sayeth the Lord.”  [Leviticus 17:12].)  We don’t have to agree with her morality, but if she follows her convictions, we could say she is acting morally.

Personally, I don’t agree with that.  Perhaps there are people out there who consider that people who follow their own convictions are moral no matter how depraved their actions are in terms of what the rest of us think, but I do doubt it.  Anyway, I’ve talked about moral cowardice elsewhere.

That all said, can such people be moral agents?  Well, I think they satisfy the absolute minimum requirements for being moral agents, sure.  And I think they can be moral agents, but I still think that their morality needs some work.  And they could be retrained to exhibit a more widely accepted morality – which is to say, I am not convinced that there is a substantial difference between morality and trained behaviour.

But then a dog who doesn't shit on your carpet is equally "moral" as a dog who mails toddlers to death, just as long as that is what you train them to be. The dogs lack any consideration of morality, they just fear punishment.

This is just a total misunderstanding of the point.  If a dog comprehends what is right and what is wrong (in terms dictated by a human owner or the pack), and if that dog is free to act (has volition), then it may be a moral agent in that it may choose to do what it knows to be right or it may choose to do what it knows to be wrong.  If an owner successfully teaches a dog that mauling toddlers to death is good, then the dog is just doing the right thing, in the same way as righteous humans in history have performed many acts that we today see as abhorrent.

I agree that we fear punishment, along with dogs.  However, certainly for humans, that punishment does not need to be external.  I think that punishment is also internalised in other social animals, albeit to a lesser extent.  Certain domestic animals exhibit guilt; perhaps that is fear of punishment or fear of disapproval, but then again humans are also conditioned by similar considerations.

Look, if you keep putting every word in quotes that we absolutely need to be very, very clear about, we might as well stop discussing it.

And, no, humans will not always try to do things that they can get away with. Humans can be moral agents, they can decide that an action is "wrong" as opposed to "likely to result in punishment if i am found out".

Well, okayifimust, I put quotation marks around the words “morally” and “immorally” because I was using an unusual definition of the terms.  Returning to ducks, who are accused of being monogamous, I’ve seen a female duck being mounted by a drake who was not her partner (that partner was right alongside her, looking as distressed as I’ve ever seen a duck look).  The assertion that life-long monogamy is morally good is somewhat arbitrary, particularly when applied to ducks.  They tend to be monogamous, but whether that is morally good or simply is, well, that is another question.

Humans do act “immorally” by being unfaithful or only serially monogamous.  You might want to argue that this is morally bad, but please argue that case; don’t just assume that your assumption is correct and that everyone agrees with you.  I’m not saying you are necessarily wrong, I’m just saying that we have perhaps arbitrarily defined life-long monogamy as moral despite the fact that the vast majority of people don’t actually practise it.

And yes, I agree, humans don’t always try to do what they know to be wrong if they think they can get away with it.  I do get a little deeper into that in the series of articles of which The Moral Animal is only the first.  Hopefully things will become clearer to okayifimust once the relevant articles have been published.

Friday 25 January 2013

The Moral Animal

This article is the first in a series.  All will be linked by the tag “Morality as Playing Games”.

There are some "preludes" to the series, Saving the Dog, Being Bad and The Problem with Sam, which may be of interest, but aren't essential for understanding what follows.

------------------------------                                                        

It is virtually, if not in fact literally, impossible to consider ethics without considering "moral agency", the quality embodied by a moral agent.  We can think of a moral agent as one who can make moral decisions or determinations and who can subsequently act upon those decisions or determinations.  In other words, we can think of a moral agent as one we could justifiably blame (or praise) for what they do.  Before arriving at a definition of the minimum requirements of moral agency, let us consider the practical application of moral agency considerations.  More specifically let us consider "blame".

At one end of the scale we have inanimate objects, for example a rock, which we cannot justifiably blame for anything.  While we might curse the rock on which we stubbed our toe, we are well aware that the toe was stubbed by us, not by the rock.  Further up the scale, there are humans, whom we do blame (or alternatively praise) for their actions.  We do not, however, attribute blame to all humans equally.  If a child hurts a cat, for example, most of us would intervene and chastise the child, but we would understand that the child is not fully responsible for his actions.  If an adult hurts a cat, so long as the adult does not have any impairment which would otherwise lead to diminished responsibility, we hold that person to be fully responsible for their actions.  We will attribute levels of blame which correspond with the level of responsibility we accord them.

But what is this “diminished responsibility”?  In the case of the child, we might presume that he either doesn’t understand that his actions hurt the cat, or that he doesn’t understand that hurting an animal will lead to chastisement.  In some cases, it might be because the child is not able to fully control his actions; for example, a toddler might not intend to hurt a cat he is attempting to carry, but might be unable to manage a comfortable arrangement for the object of his attentions.  Once we have chastised the child, however, if he hurts the cat again in the same way, we will accord him higher levels of responsibility and attribute higher levels of blame, until such time as he reaches adulthood, when we no longer assume any lack of knowledge or understanding about the consequences of how he handles an animal.

The absolute minimum requirements of moral agency, it would seem, are an ability to understand and predict the consequences of action/inaction (comprehension) together with an ability to make decisions and act on them (volition).  Note that at this point I am neither assuming nor proposing any definitions of right/wrong, or good/bad, or moral/immoral.  An actor meeting the absolute minimum requirements of moral agency merely is able to choose between at least two predictable consequences of action/inaction.

This definition should not be immediately controversial; it should make some intuitive sense.  A rock comprehends nothing and is incapable of volition.  A rock, we could therefore argue, has no capacity for moral agency.  We could posit a Being with limitless comprehension and completely unconstrained volition, which indeed some people do.  This Being, we could say, has maximal moral agency.  Now, while I have not addressed what constitutes wrong/right and so on, we can safely assume that a Being with limitless comprehension will know what is right and wrong by virtue of that limitless comprehension or, alternatively, will be able to develop concepts of right and wrong from first principles.  For our purposes, however, we can say that no matter on what basis such a Being would make decisions, this moral “super-agent” would be able to make those decisions with a full comprehension of the circumstances and be able to act without constraints, so the consequences of any action or inaction would be selected deliberately.

As moral agents, humans can be found somewhere in the middle, unable to claim the ignorance and helplessness of a rock, but also lacking the knowledge, predictive power and freedom of volition that a moral super-agent would have.  This too should make intuitive sense, in part because we make similar sorts of observations when attributing responsibility to others for their actions and when explaining levels of responsibility for our own actions, including but not restricted to “moral responsibility”.  When I explain why I ran a red light, I might point to the factors which were outside of my control: I could not brake in time, the cars behind were too close.  The description of such constraints would be intended to convey that I am less culpable for what occurred than I would have been if I had simply decided to drive through the intersection.  This is a contributor to attribution bias, the cognitive bias by which others tend to be considered more responsible for their actions than we are for our own.  If I were to describe a similar event in which someone else ran a red light, I would tend to assume that the other driver was less constrained than I would have had to be to do the same thing.

The same applies to comprehension.  Again, while explaining the running of the red light, I might explain that I didn't actually know the exact details of the traffic regulations.  The rules are a little vague when it comes to weighing the risks associated with passing through an intersection when it had just turned red and the risks associated with braking suddenly with cars behind, especially when I didn't know for sure that the driver behind me was paying attention.  If I have less than total comprehension of the possible consequences, my culpability is reduced.  Attribution bias, by the way, rarely permits me to assume a similar level of confusion on the part of others who run red lights.

Limits on comprehension and volition allow us to give considerable latitude to young children.  We know that young children do not fully understand the consequences of their actions.  We afford similar latitude to our pets.  This again should not be controversial, although some (such as Helene Guldberg in Just Another Ape?) do question whether we should ascribe any moral agency to animals at all.  When most of us consider our own pets, however, we do at least appear to ascribe a level of moral agency to them.  We attempt to train them to know what is permitted and what is not.

In my own case, I have taught my dogs to sit while I fill their food bowls and not to start eating until I give them express permission (which is particularly useful on a rainy day, if I am to avoid being covered in mud).  In terms of the minimum requirements of moral agency, the dogs certainly appear to have some understanding of the consequences of their actions, if it is only that I get angry and they don’t get fed if they fail to obey the rules, and they exercise some measure of volition by sitting and waiting patiently until given the command to eat.  Behaviourists might argue that the dogs are merely responding to stimuli with no cognitive processing, but the behaviour of one of my dogs indicates that this is not the case.  If I give the command to eat to this particular dog, and only to him, he will exhibit confusion and won’t eat until both dogs have been given the command.

A dog is essentially a domesticated wolf.  My overly polite dog, who won’t eat until the other has also been given the command to eat, is the submissive of the pair.  It could be argued that this is an example of canine morality: he knows the rule about not eating before a dog that is further up the hierarchy, and he complies with it.  It is true enough that this canine morality is neither as rich nor as complex as human morality, but it seems clear that non-humans are capable of at least some level of moral agency.

-----------------------------------------

This article is one of a series.  It was preceded by The Problem with Sam - The Final Prelude and is followed by Ethical Prisoners.