Friday, 10 July 2020

Answers to the last post's questions about correction in language teaching

NB
  1. In this and the previous post, a mistake or error is language that would be considered unsuccessful or inappropriate usage in the relevant social context. Even by this criterion, we often disagree about whether something is an error, and even more often about how serious an error it is. These notions are tied up with our understanding of the context in which the error was made.
  2. The teaching contexts I have in mind involve adults only.
Should minor errors ever be corrected? (I mean things like wrong prepositions, word order in questions, missing articles, etc.) If so, why?
The thought underlying this question is: do minor errors matter? Should people care about them? The answer is that it depends on what you want to use the language for, and on your problem situation more generally. Some language learners want to be able to speak accurately as well as clearly. Others don’t. Both preferences are legitimate. For teachers, the context is obviously relevant here: if you are teaching exam preparation, it is more likely that learners will want to know about how the accuracy of their language could affect their performance.


So to answer the original question: only if the learner seems interested in being corrected on those things, and is developmentally ready for them. Observe how they react and adjust your corrections as needed. At higher levels you can simply ask: “Would you like some feedback?” and gauge the effusiveness of their reaction. If you still aren't sure, the safest thing is to use interactional recasts: find a way to echo their remark, incorporating the correction, while maintaining the flow of conversation.* People who are interested in being corrected tend to pick up on this, while those who aren't tend to ignore it – so it is both a way of correcting students and gauging their interest.


Another rule of thumb is: don't correct slips. These are mistakes that the learner makes in one instance (due to tiredness, echoing another learner, the complexity of his or her surrounding talk, etc.), but which you know they wouldn't normally make. All correction does in this case is draw attention to their lack of control in the L2 – likely not something they need you to point out.


How much context is needed for correction of an error to be meaningful or beneficial?
I was thinking of delayed correction here. There are many types of small error for which context isn't as important. Misunderstanding is always possible, though, so ideally the delay is short, and ideally learners can recall the original context from a short prompt – a phrase or a sentence.


Is it ever harmful to correct language learners' mistakes?
Yes, if it humiliates, annoys, frustrates, discourages, bores or distracts them. Pre-course needs analyses with a question about correction should be common practice for this sort of reason, although that is no replacement for teacher sensitivity.


In addition, some types of correction can be problematic when the learner's intended meaning is not clear. For example:


In this exchange, student J immediately takes up the teacher's recast – an apparently successful instance of uptake following correction. But it's possible that he/she was originally trying to say 'more than' or 'less than'. If there is any doubt about what the person means, it is better to seek clarification than to elicit a well-formed utterance that he/she may never have wanted to say in the first place.**


How often do learners even know what you're correcting them on? Is it worth checking? And if so, do you need to check every time, or just in specific cases?  
Not only do they often not know, they are often outright mistaken about it. You might correct a grammatical error and they will think it was a lexical error, or vice versa. They may even mistake a lexical correction for a (rather unfair) attempt to correct their foreign accent, as happened to me in one case. There is a view that learners are more likely to have accurate perceptions of phonological and lexical correction, and less likely to have accurate perceptions of grammatical correction. This seems plausible given that the former usually involve only individual words or syllables, and so might be easier to process. Teachers can also supply both the 'correct' and the 'incorrect' pronunciation to make the distinction clearer.


It is worth checking understanding during delayed correction, and it could be worth checking or explaining the nature of the error on the spot, provided that doing so is not too disruptive.


Why do learners repeat mistakes after they’ve been corrected?
It is difficult to change one’s L2 speaking and writing habits. It takes time for new language to be internalised. People might also feel shame around reading, recalling or listening back to their mistakes, making it harder to criticise and change them. That is why I emphasise that we should only be correcting learners to the extent that they show interest in being corrected.


Does correcting learner errors actually improve the accuracy of their language production?
Even if you do everything right, improvement is not guaranteed. Learners have to pay attention to the corrections, perhaps understand them explicitly, and consciously work to incorporate them into their speech and writing.


If it doesn’t, what other reasons could there be to correct errors?
Other reasons could be: it helps people build up their confidence in the L2 if they know that you will point out errors as they occur; it is useful for other learners who are listening; it shows learners that you're paying attention to their language use and taking an interest in their progress (although there are other ways to do this, such as by simply pointing out where they have made progress).


Criticism and response
One could argue that the idea of 'just do what students want' is a cop-out, for two reasons. Firstly, teachers should come with their own principles and views about what types of language knowledge and classroom activities students should value. This could include true ideas about the potential benefits of attending to corrections of one's mistakes, as well as the broader aspects of the language that they relate to. Secondly, teachers' views should be informed by research about the effects of feedback, not by what students say they want.

On the first point: teachers should try to persuade students of those views and principles in the course of their interaction, while taking into account individual differences. This, in itself, is an important principle. On the second point: the affective and motivational aspects discussed here override other considerations (such as the relative effectiveness of different types of feedback, or peer vs teacher feedback). Discussions of 'the effects of feedback' are meaningless unless they take into account the learner's own experience and views about the interaction. So, no, it is not a cop-out.
 

* See this handout describing interactional recasts and other types of feedback.


** More discussion of this and other examples in this paper (which is also the source of the extract above).

Tuesday, 14 April 2020

Mistakes and correction in language learning

Suppose you're trying to teach someone a new language. At some point they get good enough to have short exchanges with you. They're not a complete beginner anymore, but they still can't speak with the same ease, precision or complexity that you can. Almost inevitably, they make mistakes. These could be mistakes in grammar, choice of words, or pronunciation. They don't actively want to make mistakes, and in many cases they want you to point out their errors. So do you just stop them at every error and correct them?

Unless they insist on it, that is unlikely to be right. I do think one of the best ways to help a person acquire correct grammar is simply to point out their errors and explain them in terms that they can understand. But excessive interruption to correct errors may do more harm than good. While it might preserve some notion of grammatical purity in the classroom, it hinders other valuable processes that are going on in the learner's mind while he's involved in production of the target language. He's thinking about how to start the sentence, which verb forms to use, how to connect the different ideas, how to pronounce the less familiar words, and so on. He may also be thinking about what he's trying to say – and it's hard to pay conscious attention to form if you haven't yet worked out the content.

So it's not obvious when to correct language learners on their errors. Here are some questions to start with:
  • Should minor errors ever be corrected? (I mean things like wrong prepositions, word order in questions, missing articles, etc.) If so, why?
  • How much context is needed for correction of an error to be meaningful or beneficial?
  • Is it ever harmful to correct language learners' mistakes?
  • How often do learners even know what you're correcting them on? Is it worth checking? And if so, do you need to check every time, or just in specific cases? 
  • Why do learners repeat mistakes after they’ve been corrected on them?
  • Does correcting learner errors actually improve the accuracy of their language production?
  • If it doesn’t, what other reasons could there be to correct errors?
On the first point, a key argument is that errors only matter insofar as they affect communication – which would mean there's no point in correcting what I've called 'minor' errors. If the goal is to become an effective communicator, there are lots of things which are more important than grammatical accuracy per se (1). Effective communication depends on clear pronunciation, appropriate word choice, the ability to connect ideas logically, knowledge of the culture's social conventions, and more. Correcting the small stuff doesn't necessarily help people become effective communicators. Even if accuracy were important, there is a view (2) that grammatical correction doesn't even lead to improvements in it. This could be connected with general criticism of traditional approaches to teaching grammar – that is, of treating grammar as a core component of a language course that can be presented without much, if any, reference to meaning.

Still, I keep correcting students. So I’ve been asking myself the final question listed above. Why do I bother correcting them? I think it’s important to have a good answer to this question, because it is easy to fall into the mindset of: “I taught you this last week, so you ought to remember it now” — and then to despair when students repeat their mistakes. Answer (eventually) to follow.

Notes

1. Mistakes and Correction by Julian Edge is a good introduction to this topic.
2. E.g. Truscott (1996) – although I think this was only about correction of students' written work.
3. Another problem in this area is the following unpleasant paradox that I have observed in my own teaching practice and experienced as a learner:
If an error is small, it's easy to correct it without causing confusion or taking up too much time. If an error is serious, it's harder to correct by the same criteria – especially if the teacher himself isn't sure what the learner is trying to say. But: if an error is small, it doesn't benefit the learner much to be told about it, for the reasons discussed in the post above. If an error is serious, it benefits the learner a lot to be told about it. So in practice, the more you stand to benefit from an error being brought to your attention, the less likely it is that this will happen.
(I am assuming that learners are developmentally ready for and interested to hear the corrections in question - otherwise they don't benefit either way.)

Tuesday, 10 February 2015

Immigration and the collapse of society: a reply to Michael Huemer

Michael Huemer’s article ‘Is There a Right to Immigrate?’ tries to show that restricting immigration violates the prima facie right to be free from harmful coercion, a right which is not overridden by the points usually raised in defence of restriction. The approach of using a thought experiment based on certain widespread intuitions (rather than taking a philosophical theory or ideological orientation to be true and deriving policies from it) is very sensible, and makes the whole article more relevant and persuasive.

I think the article succeeds as an argument for economic immigration, and it may even justify an open-borders policy for the US. The problem is in the claim that its arguments "apply equally well to other countries." On the contrary, most of its responses to objections to unlimited immigration become naive when applied to other developed countries, such as Britain and elsewhere in western Europe. A related problem is that the article ostensibly aims to defend immigration as such, yet its central thought experiment refers strictly to two parties who want to trade with each other (starving Marvin and an unspecified shop owner) but are prevented from doing so by a third party. What hasn't been addressed is the large number of people who would come purely to receive certain benefits offered in the developed world. For example, many of those countries have socialised health care, whose legitimacy depends on the ability and willingness to treat everyone, rich or poor, without charging them. A flood of critically ill immigrants would destroy that institution (or, more likely, the government under which open borders were introduced would be voted out, and restrictions would be reintroduced).

The argument for preventing that kind of immigration is not the same as the argument that, "because of a policy one has voluntarily adopted, if one did not coerce one’s victim in this way, one would instead confer a benefit on the person that one does not wish to confer." It is not the same because it does not infringe the freedom of association to stop people from entering a country purely in order, say, to gather outside its hospitals, creating an imperative to treat them (by that institution's ethical/social standards) and thereby destroying an institution set up to provide (relatively) high quality health care to anyone who needs it.

(If one has some absolutist conception of rights, one might say it impinges on the prospective immigrants' freedom of movement. But the argument in the article is set up around not assuming any such view, so that doesn't matter.)

This sort of concern is not confined to the existence of government welfare; that is just a factor which would attract non-economic immigrants. The general problem is this. In a country like Britain – which I will use as a counterexample from here on – institutions work only by virtue of knowledge shared among their members. It is a special kind of society, in which people have learnt to deal with each other and manage conflicts without violence. Because Britain does not currently have sufficient mechanisms (be they policies or informal traditions) for getting immigrants to assimilate, unrestricted immigration would threaten the existence of the social norms which allow this kind of society to exist. For example: today in Britain, it is outside the realm of ordinary experience to come home and find someone ransacking the house, who, when confronted, pleads that he needs money to save his child's life. That is the case partly because society is arranged in such a way that no one ever gets into such a desperate situation (an arrangement which would no longer work if swamped). But it is also because the vast majority of people respect the law, and would find legal, peaceful ways to manage desperate situations. Law enforcement in Britain relies on that fact. It could not cope with vast numbers of people who were not law-abiding in the British sense, and many of whom preferred prison to their tyrannical or war-torn countries of origin.

Huemer dismisses the 'cultural change' objection with a thought experiment about a country becoming Buddhist from the inside. But the objection has force when it is acknowledged that change might be imposed on a society from the outside, and that it involves a threat of violence, in the sense that members of the foreign culture do not recognise the non-violent institutions which make the society of their new place of residence what it is. Again, this is not an issue for Huemer's example country, partly because the US is so large (it would take a far greater number of immigrants to have any of the above effects), and, more importantly, because it has an ethos which strongly encourages immigrants to 'become American'. The same has become far less true of Britain over the last few decades, where that sort of attitude would nowadays be considered churlish.

Huemer addresses the fear of general societal collapse in section 3.5, where he disputes that any such thing would happen, apparently because foreigners, no matter where they come from or what their circumstances, would rather be with their families and stay in their own country (to which they are proud to belong) than move to a richer one. This is just factually false. To point out that Americans rarely move between states is to make a false comparison. Every American state has a tolerable standard of living. People are not dying of hunger in America. So of course their incentive to move is comparatively weak. The central example of the paper is based on the fact that many people who want to immigrate will otherwise die. That – and also much lesser plight than that – overrides any preference to stay with family or to stay in one's hometown.

This is especially true of an area of concern already raised: immigrants who come expressly to receive health care. They will die if they do not come. So the author's dismissal of the claim that a country with open borders would be overloaded with immigrants (on the basis that they would rather stay with their families) is too hasty.

It is strange that Huemer attaches significance to the fact that only 13 million people have made some effort toward moving to the US, and then admits that many others presumably make no effort because they are aware of the draconian restrictions. He does not acknowledge that the vast majority of people who would rather live in the US are preemptively put off from making any such effort, which renders the 13 million figure meaningless. He does concede that the 'immigrant flood' worry means it might be better to open borders gradually, adding an extra million to the cap each year. Again, this may work for the US. It would not work for most developed countries.

Wednesday, 30 July 2014

Four myths about antisemitism

#1 It's a form of racism
The word 'antisemitism' might never have entered the world’s vocabulary. It might have remained a piece of pseudo-scientific jargon, exclusive to the thought of eugenicists and other writers on race in the second half of the 19th century, whose central theories have long since been refuted and abandoned. During the same period, the situation of European Jews seemed to be improving. The Age of Enlightenment had generated universalistic ideas about human beings, which led to the expectation that institutions treat people equally, regardless of religion or social status. In the spirit of social integration and the Enlightenment values of rationalism, humanism and universalism, successive European states emancipated their Jews. Britain opened membership of Parliament to Jews in 1858. In the 1860s and 1870s, swaths of Europe granted various forms of emancipation to their Jewish minorities. By 1919, Spain was the only European country in which Jews did not formally have full civil equality. Thus it has been pointed out1 that at the time, it was reasonable to expect these modernising processes to continue, since the parochial, theological basis of antisemitism had fallen out of favour, especially in northern Europe.
An antisemitic drawing on a 13th-century English tax roll

But the trend did not continue, and a second generation of eugenicists made this word famous. Yet even Nazi racism did not treat the Jews as merely another 'inferior race'. It was only the Jews, not Gypsies, not Slavs, who were blamed for the loss of the First World War, for the economic crises, for Britain's opposition to German expansionism. The intensity of the Nazi obsession with Jews, and their invocation of traditional antisemitic canards, cannot be explained by their racism.

And for most of the history of antisemitism, it was not a question of race at all. Until the 19th century, the low status of the Jews was chiefly justified by their social separation and adherence to a degraded religion. Today this is still widely described, with the originally racial term, as antisemitic. And indeed, a Christian antisemite in the Middle Ages commits the same basic immorality as a modern, racist antisemite; medieval accusations of ritual slaughter carried out by Jews are no different from blood libel in the 21st century. Pagan blood libel from the Hellenistic period, too, is described as antisemitic, although its equivalence to Christian and modern era antisemitism is more controversial – which brings us to the second popular myth.

#2 It's Christianity’s fault
The historian Jacob Katz argued that modern antisemitism (beginning in the 19th century) is a continuation of the historical rejection of Judaism by Christianity, dating from Late Antiquity.2 Modern antisemites used the same arguments, stereotypes and generalising attacks on the Jewish character that Christians once used. The Jews' socioeconomic separation was rooted in the Christian era. And even in the modern era, people considered Christianity the 'superior' religion in 'historical perspective'. Jews gained more prominent positions in society after emancipation, and were seen by antisemites as former 'pariahs', now encroaching on the Gentile population.3 Indeed, what many modern antisemites wanted was simply to reinstate the position of Jews in pre-emancipatory times.4

Accordingly, Katz argues that modern antisemitism does not resemble its ancient equivalents closely enough for it to be the same thing. Christianity added new, specifically Christian accusations to ancient Jew-hating ideas. The old charges were combined with deicide and religious guilt. Katz thinks this means that antisemitism is a legacy of the theological conflict with Christians, rather than of earlier times.5 There was something exceptional in the linking of antisemitism with the tenets of a new religion – and as this religion spread throughout the world, so did hostility towards Jews.

As an argument for why antisemitism became so widespread, Katz's account is plausible – although it could equally be the case that Christianity was a successful religion precisely because it seemed to contain ideas that already appealed to people, such as antisemitism. In any case, none of this shows that Christianity is the cause of antisemitism. In particular, it doesn't account for antisemitism being included in the Gospels (e.g. John 8:44, Matthew 27:24-5). If we want to explain why an idea is present in one generation, it’s not enough just to say that it was passed on from the previous generation. Not all ideas are retained over time. Some are kept and some are discarded. To explain the cause of antisemitism, one needs to explain the enduring appeal of the antisemitic mindset.

#3 It's multi-causal
Many people have tried to explain its appeal in terms of discrete historical episodes, so that there is no one explanation for the existence of antisemitism, and instead the Jewish people have just been profoundly unlucky. Pogroms in Russia were spurred on by economic hardship; the propagation of the Christian faith brought with it a tradition of enmity toward Jews; Jews in medieval Europe were forced into the moneylending and tax collecting professions, prompting resentment from borrowers and scorn from Catholics. Explanations like these are often cited for particular cases of persecution or discrimination. It is of course true that each of these phenomena came about in the way it did by virtue of a particular set of circumstances. But as a general explanation, it raises the question: Why the Jews?

More importantly, these explanations in terms of direct, local causes fail to explain the long-term persistence of distinctive patterns of persecution and of the antisemitic canards used to justify them. One of these is blood libel: false accusations of atrocities committed by Jews, almost always involving blood. The most prominent examples come from the Middle Ages, when Jews were widely accused of poisoning wells, spreading the plague, desecrating the Host, and killing defenceless Christians in order to use their blood to cure ailments or for Jewish rituals, or simply out of spite.6 But the first known instances of blood libel predate this by over a thousand years. Apion, a writer from the time of the Roman Empire, claimed that Jews had an annual tradition of kidnapping a Greek foreigner, fattening him up, and offering him as a sacrifice. The Greek historian Damocritus is also thought to have claimed that Jews practised ritual slaughter of foreigners.7 The blood accusation was successfully revived by the Nazis, and today it still persists in the Arab world and closer to home.

Another example is the dual loyalty accusation. It was not only made during the Dreyfus Affair in France, and not only in the Stalin-era Soviet Union. In the second century C.E., the poet Juvenal claimed that the Jews were "accustomed to despise Roman laws".8 Similarly, Apion accused the Jews of hating Greeks. Further back, in the Bible itself, Haman advised king Ahasuerus:

There is a certain people scattered abroad and dispersed among the people in all the provinces of thy kingdom; and their laws are diverse from all people; neither keep they the king's laws: therefore it is not for the king's profit to suffer them. (Esther 3:8)

Note that despite its distance in time, this accusation follows with uncanny accuracy the same pattern as those made during the Dreyfus Affair and through to the Nazi movement: Claim that Jews are disloyal, and thereby legitimise their degradation, abuse or murder.

More recently, anti-Zionists (many of whom are themselves Jews) have used it against Jews in the Zionist movement. It also cropped up during the Iraq War, when its supporters in the US government were said to be acting only in Israel's interests – a claim which was of course nonsense. Another prominent example is the book The Israel Lobby by John Mearsheimer and Stephen Walt, in which it is claimed that there is a coalition of people and organisations, most of them Jewish and all of them Zionists, who are actively working to influence U.S. foreign policy so that it serves the interests of Israel rather than those of the United States.

Blood libel and the dual loyalty accusation are quintessentially antisemitic, in the following respect. Consumption of blood is strictly forbidden under kashrut (kosher laws). This makes the accusation that (for instance) Jews killed Christians in order to use their blood to make matzah for Passover wildly unrealistic. The idea is utterly repugnant to any Jew. Used as an accusation to justify their persecution, it is designed to hurt. Similarly, the dual loyalty canard arose, along with the growth of political antisemitism, at a time when Jews were making every effort to assimilate. So this accusation was not just wildly untrue: it negated the very essence of contemporary Jewish culture and spurned the Jews' effort to participate in mainstream society (which itself was partly an attempt to cure antisemitism).

#4 It's 'hatred of the other'
Antisemitism has been explained as just one example of tension between in-groups and out-groups. Since diasporic Jews have always lived as minorities, they have always been the victims of this pattern of social psychology, being a convenient scapegoat for society's problems. Criticising this view, Samuel Ettinger pointed out that this sort of explanation wrongly emphasises the "existence of a real difference between Jews and their surroundings."9 If Jews were hated because they were different, it follows that antisemitism should have decreased when most Jews in 19th-century western Europe abandoned strict religious observance and began not only to assimilate and identify nationally with the countries in which they found themselves, but to devote their lives to excelling in the local culture – Heinrich Heine and Felix Mendelssohn are obvious examples. This week, a story came out about Hessy Taft, a German Jew who was chosen, possibly by Goebbels himself, as the ideal Aryan baby for the cover of a Nazi magazine. The spectre of the Jewish stereotype existed entirely in the minds of the antisemites. But that did not matter. And so antisemitism did not decrease in the 20th century – it massively increased. Hitler feared Jews all the more because of their ability to 'camouflage' themselves in the host society; it simply wasn't important, from a social perspective, whether they fitted in or not.

It is also not accurate to describe antisemitism as a form of 'hatred'. Hatred is an emotional reaction. But one can believe conspiracy theories or other accusations which justify hurting Jews without personally hating them. Richard Wagner had no problem befriending or working with Jews, yet he still believed that Jews were by nature incapable of producing good art, and he was fixated on this theory to the point of writing a book about it. It was clearly an antisemitic theory, since it justified excluding Jews from the art profession, and it denigrated the already prominent contribution of Jews to German art and culture.

If none of these four things is the unifying cause of antisemitism, what is? To begin with, antisemitism is an ancient psychological disorder, a kind of wrong thinking about morality, which compels people to legitimise the hurting of Jews for being Jews. I do not know the underlying explanation.




1. E.g. Jacob Katz, From Prejudice to Destruction: Antisemitism, 1700–1933, p. 7.
2. Ibid., 319.
3. Ibid., 320.
4. Ibid., 321.
5. Ibid., 323.
6. Efron et al. (2009), The Jews: A History, p. 152.
7. Menahem Stern, Greek and Latin Authors on Jews and Judaism, p. 530.
8. Jacob R. Marcus (1946), "Antisemitism in the Hellenistic-Roman World," in Koppel S. Pinson (ed.), Essays on Antisemitism, p. 76.
9. Samuel Ettinger (1976), "The Origins of Modern Anti-Semitism," in Yisrael Gutman and Livia Rothkirchen (eds.), The Catastrophe of European Jewry, p. 18.

Friday, 3 January 2014

Who should rule? A bogus question and a bogus answer

Recently I read a paper which dealt with various problems involved in measuring and preserving representation of the electorate under proportional and plurality systems. A recommended read, if only because it gives a fresh take on some old problems, and at several points the authors point out interesting possibilities for further research. It is perhaps the only research piece I have read which is willing to entertain the idea that proportionality is not among the primary criteria of soundness for an electoral system. I reject the proportionality criterion entirely.

However, instead of proportionality, the authors posed "representation of the median voter's preferences" as their criterion, supposing it to be the goal of plurality systems. By their description of the majoritarian vision of politics, the point of elections is to allow 'citizens' to make a clear decision about who governs 'them'. Working on this assumption, they tried to test the Downsian theory of plurality systems, which says that the two main parties will converge toward the preferences of the median voter.

But this supposed expression of preferences isn't the point of elections at all. For a start, any study in this field should acknowledge that there is no rational, self-consistent way to express the preferences of a diverse group – as Arrow's theorem showed. Therefore there can be no ideal in this matter, because every possible form of representation will contain some paradox or other. But Arrow's theorem isn't the only trouble with this sort of approach. There is no foolproof way to choose a ruler – not only in democracies, but in every kind of system – because all rulers, being people, are fallible. The reader may object: "Yes, there is no perfect way, but surely there are better and worse ways to choose a ruler."
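
To make that concrete, here is a minimal sketch (my own illustration in Python, not something from the paper under discussion) of the closely related Condorcet paradox: three voters with perfectly sensible individual rankings whose combined majority 'preference' turns out to be a cycle, so that no single ranking can represent the group.

    from itertools import combinations

    # Three voters, each ranking options A, B and C from most to least preferred.
    ballots = [
        ["A", "B", "C"],
        ["B", "C", "A"],
        ["C", "A", "B"],
    ]

    def prefer_count(x, y):
        """Number of voters who rank x above y."""
        return sum(b.index(x) < b.index(y) for b in ballots)

    for x, y in combinations("ABC", 2):
        print(f"{x} vs {y}: {prefer_count(x, y)} to {prefer_count(y, x)}")

    # A beats B 2-1 and B beats C 2-1, yet C beats A 2-1: the majority
    # 'preference' is a cycle, so there is no consistent group ranking.

Arrow's theorem is far more general than this toy case, but the cycle is the simplest illustration of why 'the preferences of a diverse group' may have no consistent expression at all.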

No. Accept for the sake of argument that any ruler, or method of choosing one, is prone to error. Then, if that method is taken to be the 'right' one, and is established dogmatically and without any redeeming institutional features, its inherent errors – whatever they happen to be – become entrenched. For this reason, Popper suggested in The Open Society and Its Enemies that the question we ask about politics ought not to be "Who should rule?" but rather "How can we limit the damage rulers do?" In other words, it doesn't matter how the ruler is chosen per se – there is nothing *inherently* better about rule of the many versus rule of the few – what matters is that the system contains mechanisms for its own improvement. Such improvement is greatly helped by a procedure for carrying out changes of power without violence. That is the real virtue of democracy.

Moving on from this basic objection, the next question is: how exactly do we measure the "median voter's preferences"? The authors go with the left-right scale, using it to compare the policy positions of citizens with the policy positions of the parties that are supposed to represent them. They make an ostensibly reasonable defence of this tool: it is true that 'left' and 'right' are good summaries of the political views of most people in democracies. Nevertheless, I don't think that makes it viable for this study. It merely compares how right- or left-wing the policy positions of the voters are with how right- or left-wing those of their representatives are, taking no notice of the relative importance of different policies to the voters (which could easily cause a left-leaning person to vote for a more right-wing party, for example).
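
To illustrate that parenthetical point with a toy example (my own hypothetical numbers, not data from the study): a voter whose positions average out as left of centre can still be closer to a right-wing party once the relative importance of each issue to that voter is taken into account.

    # Hypothetical positions on a -1 (left) to +1 (right) scale, plus salience
    # weights showing how much each issue matters to this particular voter.
    voter_position = {"economy": -0.6, "welfare": -0.5, "environment": -0.4, "immigration": +0.8}
    voter_salience = {"economy": 0.1, "welfare": 0.1, "environment": 0.1, "immigration": 0.7}

    party_position = {
        "LeftParty":  {"economy": -0.7, "welfare": -0.6, "environment": -0.5, "immigration": -0.5},
        "RightParty": {"economy": +0.5, "welfare": +0.4, "environment": +0.3, "immigration": +0.7},
    }

    def weighted_distance(party):
        """Salience-weighted distance between the voter and a party's positions."""
        return sum(voter_salience[issue] * abs(voter_position[issue] - party_position[party][issue])
                   for issue in voter_position)

    print(f"voter's average position: {sum(voter_position.values()) / len(voter_position):+.2f}")
    for party in party_position:
        print(f"{party}: weighted distance {weighted_distance(party):.2f}")

    # The voter's average position is left of centre (about -0.17), yet the
    # salience-weighted distance to RightParty (0.34) is smaller than to
    # LeftParty (0.94) - so a left-right average alone mispredicts the vote.

On a plain left-right comparison this voter 'matches' the left party; weighted by what the voter actually cares about, the right party is the closer fit.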

An even worse difficulty, not acknowledged as such by the authors, is that the polarisation of left and right varies wildly between countries. The same units are used to measure them, yet the difference between, say, 1 and 2 on the left-right scale might be considerably greater for France than for the United States. This is not merely a lack of 'rigour'; the same units are being used to represent wildly different values (which themselves may not be measurable in the first place). It effectively makes the study meaningless.

Tuesday, 30 July 2013

Methodological musings, cultural clashes, and something in the air at Mises University 2013

When I arrived in Auburn, Alabama for Mises University 2013 I was not expecting much more than the usual mixed bag of conference attendees, standard libertarian spiels and a bit of exacting academic work thrown in. My roommate was a pleasant, quietly self-assured girl in a libertarian T-shirt. In the hour before the evening meal, all over the rooms of the Mises Institute there were knots of libertarians having animated discussions about economics. This was promising enough. 

I had come to the Institute after having read about Hayek's view of the method of the social sciences. He recognised that they cannot proceed by deriving theories from observations – that we can only interpret social phenomena in the light of pre-existing theories. Part of this clearly comes from the old debate between positivists and 'praxeologists'. But the Austrian rejection of inductivism was also endorsed in The Poverty of Historicism by Karl Popper, who argued that the natural sciences, too, do not make progress via induction, but rather by creative conjecture and refutation as attempts to solve particular problems.

So this was interesting. There was a school of thought, the Austrian School of economics, which not only had libertarianism as its conclusion, but a critical-rationalist-influenced methodology as its foundation. From my perspective this was a convergence of two pillars of rational thought. It didn't come as a surprise, then, that the opening speaker argued that it is the only school of economics which takes seriously the dignity and freedom of economic actors – of people. He also assured us that the work of Austrian economists is descriptive; prescriptive, political arguments are strictly separate.

The lectures that followed were a fascinating introduction to the science. There is a doubt about whether economics really is a 'science' if it doesn't make testable predictions. But, as I discovered over the course of the week, the central ideas of Austrian economics come from assuming the truth of certain axioms, taking them seriously, and thereby using them to interpret economic phenomena. As Steve Horwitz put it, with this approach, "[r]endering human action intelligible means telling better stories about what happened and why." It has a degree of sophistication which arguably brings it closer to the status of a real 'science' – or at least more deserving of the epistemological prestige of that title – than more empirics-focused mainstream economics, where naive interpretations of data are widespread. (For example, statistical significance has been prized as a mark of scientific status, even if it's not clear that this has any bearing on the economic significance of a given dataset.)
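
To illustrate the point in that parenthesis, here is a toy sketch of my own (Python, with invented numbers; nothing presented at the course): given a large enough sample, an economically negligible difference between two groups will still come out as 'statistically significant'.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 1_000_000

    # Two simulated groups whose true means differ by 0.02 standard deviations -
    # a difference too small to matter for almost any economic purpose.
    group_a = rng.normal(loc=0.00, scale=1.0, size=n)
    group_b = rng.normal(loc=0.02, scale=1.0, size=n)

    result = stats.ttest_ind(group_a, group_b)
    print(f"observed difference in means: {group_b.mean() - group_a.mean():.4f}")
    print(f"p-value: {result.pvalue:.2e}")

    # The p-value is tiny, so the difference is 'statistically significant',
    # but the effect itself is economically trivial. Significance testing
    # alone says nothing about whether an effect matters.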

The course exceeded my expectations in all kinds of small ways. The atmosphere was like that of an actual university, because there were readings to do, lectures to attend and academics to question. But there was something more. The Institute was welcoming, comfortable, and from morning to evening there was an interesting discussion to be overheard in every room and hallway. Mingling in the social hours, I would repeatedly come face to face with an ordinary-looking student, brace myself for small talk or fruitless debate, and then quickly discover that they were in fact very well read, thoughtful and knowledgeable. This happened with practically everyone. Fortunately there were also places to be alone, places to play chess, and places to smoke and drink. The speakers were obviously passionate about their research, but the difference from my normal university life was that they were able to communicate their passion and generate an atmosphere of excitement.

Despite the first speaker's assurances, the political side of the course cast a surreal tinge on my experience of it. Judge Napolitano spoke on the growth of the Commerce Clause and how it was leading to the erosion of freedoms enshrined in the Constitution. He ended by declaring that a certain proportion of us would die in prisons, and some might die in public squares, while fighting for our principles. This got a standing ovation. But it unnerved me. For a start, I wasn't sure what had just happened. While I could imagine some of the more militant attendees being arrested for civil disobedience, it's not at all plausible that this would spiral into a situation where libertarian activists get life sentences (let alone are shot in the street). This kind of unjustified alarmism, I thought, implies that the main evil to society is the state itself, rather than the overwhelmingly statist milieu. Yet the latter is the only thing that enables the former to exist. Criticism of things like excessive government surveillance is not restricted to libertarians – it's mainstream. What counts as 'excessive' is also subject to public debate. A government that doesn't respond to its conclusions will not survive an election.*

Discussing the speech with other students, I said it was a bad idea to cause such alarm that people are ready to defend their lives. Someone asked: "Why?" I wondered why I had said it. Why shouldn't people be ready to defend their lives? Isn't this the eternal vigilance which that Founding Father was talking about? 

I didn't think so. Holding politicians to account is one thing, but if the attitude is one of preparing for revolution (if the story of another attendee is correct, Napolitano did prophesy a revolution), time and attention are directed away from education, away from improvement, and toward antagonism. A fellow European diplomatically suggested that my perturbation was down to a cultural difference between British and American libertarians. And since coming home, it has occurred to me that it may not be necessary to take these alarmist-sounding claims literally. They are true in the sense that they are expressions of a devotion to freedom. I cannot express myself in that way, nor understand expressions of that kind, because my idea of liberty is one of ancient freedoms that have grown up alongside a government and a monarchy which themselves were subject to the rule of law that secured those freedoms. This is simply a difference between the American and the British mindset – and one may not be objectively better than the other.

I went in as a conservative, libertarian to the extent of finding plausible the arguments in favour of private law. I have left more skeptical of my own cautious traditionalism. Watch this space for further arguments – for what, I don't yet know. But I do know that every week of my life should have the intellectual intensity of that week at the Mises Institute. I'm glad to have had the chance to attend.



* I'm aware that this is just a naïve statement of the argument and doesn't address public choice theory, etc. It is not meant here as a rebuttal so much as a description of what I was thinking at the time.

Friday, 21 June 2013

Ontological argument for the existence of morality – Thoughts from a lecture on Philippa Foot

I went to a lecture at the LSE recently on the life and work of Philippa Foot. It was a very nice event, describing the life of a thoughtful daughter from an aristocratic family and how its events – including those surrounding the Second World War – led her to the philosophical problems with which she was chiefly concerned.

The main topics under discussion were moral relativism and the question, "Why be moral?" Professor Sarah Broadie gave an outline of how Foot's answer developed, in which she eventually arrived at the notion that there are certain universal and basic human needs, whose fulfillment leads to flourishing and long-term survival. This view of morality as fulfillment of intrinsic needs or goals, or needs which arise as part of the logic of the situation, still seems easy to pick apart. One need only find examples of actions that are intuitively morally impermissible and yet cohere with a long-term survival strategy – for instance, a medical procedure that causes extreme and unnecessary pain, but also wipes the patient's memory of it completely, without fail, after the operation has finished.

Perhaps those are just so many details to be ironed out. I had a more skeptical concern. Even if there are these intrinsic needs, what is good about fulfilling them? Why is it good to survive and flourish in the first place? The speakers' answer was essentially that Foot did not address this sort of question in her work; her interest was in finding "where the trouble is" – where our moral intuitions conflict and why. (Question and discussion here from 68:00.)

One way to respond to the question could be this. Ethics is an intellectual pursuit, a field of knowledge, whose theories can be better or worse (as per various criteria: some normative theories impose a very specific set, and some criteria, such as non-contradiction, we all agree on). Since morality is about what we should do, by definition we should act in accordance with the best available theory.

I think this is unlikely to satisfy the skeptic. He could concede that one moral theory is better than a rival theory that is riddled with contradictions. But this says no more about the objectivity of ethics than about the existence of fairies at the bottom of the garden – about which there could equally well be better and worse theories. It is a sort of ontological argument for the existence of morality.

Is skepticism self-defeating?
The conjecture our skeptic disputes is that it is good to survive and flourish. If he simply says, "That's rubbish," and ends the discussion, that is of course his prerogative. But if he wants to continue to argue about it, the view that it is not good to survive and flourish quickly becomes untenable, for the following reason. Taking this view seriously, we would stop trying to survive and flourish. We might die out as a species, or at any rate there would be fewer of us, and we would be less productive. Yet criticism and improvement of the theory that it is not good depends upon people being around and disposed to work on it. So the skeptic, taking his view seriously, destroys the means by which his own opinion might be proved wrong. Inherently, this view is incompatible with truth-seeking. The question that then arises is: "Why be truth-seeking?"

It seems to me that certain assumptions are made by the nature of the topic and the conversation itself. Firstly, one cannot get away with saying "I take no position on whether or not it's good to survive, flourish or be truth-seeking," because it is either good or not, and one's actions necessarily assume and express a position on it. Regardless of what the skeptic says or is aware of believing, not striving to survive, etc., implies that he does not think it is good. Secondly, by taking up a view and advocating it as true, the skeptic takes for granted that he and others should hold true theories – that we should be truth-seeking. Yet we've determined that his answer to the moral question is not truth-seeking. For this reason, any claim that it is not good to survive and flourish is ultimately self-contradictory.

Unlike the 'ontological' argument, this is not a completely a priori proof of objective morality, although I suspect it has some related problem. Perhaps it tries to be an argument without being an explanation. But I can't find any specific fault with it.