Saturday, March 30, 2013
Some of you may recall that in May 2012, I wrote several posts about an exciting new opportunity for me at Bennett College, the oldest Historically Black Women's College in America. I am sorry to report that things did not go as I had hoped, and I shall not be working at Bennett after the end of this semester. It is a long story, and not at this time ready to be told.
Which leaves me, once again, wondering what I am going to do with myself. The opportunity offered by The Society for Philosophy and Culture in Canada to make my out-of-print books available on Amazon.com as e-books has given me an idea. I think I am going to explore the possibility of assembling several volumes of my published and unpublished papers. There are not so many published papers as you might suppose -- maybe thirty published academic papers, a goodly number of reviews and replies to critics, and then a number of political writings, letters to the editor, and the like. In addition, of course, there are the tutorials, mini-tutorials, and appreciations that I wrote for this blog and have archived on box.net. And finally there are the unpublished essays and lectures that languish in a file drawer in my office.
The principal problem in putting this project together is the fact that most of these items exist only in printed form or typescript, not in electronic form. That means a good deal of scanning and reading into an optical character recognition program or, as a last resort, retyping. It seems to me that this should keep me out of trouble for some time to come.
Thursday, March 28, 2013
DREAMS OF A BEST-SELLING AUTHOR
Sales of my three e-books [Understanding Marx, Understanding Rawls, and The Poverty of Liberalism] are booming. At latest report, a total of fourteen had been downloaded at $9.99 a pop [I have donated my royalties to the organization that is posting the books on Amazon.] I am waiting for a call from Dancing With the Stars.
Kant's Theory of Mental Activity, always a hot item, will be available soon, followed by Moneybags Must Be So Lucky. The thing about e-books, I think, is that they are immortal. It is probably more trouble for Amazon to remove a title simply because it has not sold in a decade than it is to leave it there to languish. Now if only someone could crack the problem of digitizing self-consciousness, I could be truly immortal.
Wednesday, March 27, 2013
REPLY TO A COMMENT
Jacob T. Levy [who is, by the way, a distinguished member of the McGill faculty] observes, apropos my little riff on Rambo's knife, that while die-hard Randians may cultivate the fantasy of the lone heroic producer, free market proponents and libertarians [and others as well] warmly embrace the role of the division of labor in modern economies. He is quite right, of course. Indeed, they insist upon it.
The point of my imaginary course syllabus tracing all the preconditions and filiations of Rambo's knife was to emphasize the extent to which we are all deeply embedded in and dependent upon the collective and anonymous products of prior labor and invention. It is simply out of the question to untangle these dependencies so as to establish which of them are the result of rational bargains freely entered into. Hence it is impossible to argue plausibly that present day individual holdings of private property are justified morally because they have arisen out of free and equal exchanges in the marketplace.
Every one of us comes into the world endowed with a material and cultural inheritance that we have not earned and can never justify. There are no "takers" and "makers" in our society. All of the takers are makers, and all of the makers are takers. And quite often those who start out with, or end up with, the most stuff have worked considerably less industriously than those who start out and end up with the least.
It is this fact that constitutes the real justification for Marx's Critique of the Gotha Program slogan: "From each according to his ability; to each according to his need."
Tuesday, March 26, 2013
RAMBO'S KNIFE
I am so stressed out about really important things over which I have no control, like income inequality and global warming and the Supreme Court's forthcoming decisions on Prop 8 and DOMA, that I have decided to deal with the stress by a time-tested technique -- denial. Today I shall blog about something that has absolutely no connection to any of these important issues [save in the most etiolated and indirect fashion], but which has intrigued me for a long time, namely the extent to which even the simplest things we have or do are so deeply integrated into the history of the human species that it would be impossible fully to explicate even one of them without evoking an entire culture.
I shall choose as my example the humongous knife that John Rambo carries with him in First Blood, the original [and far and away the best] of the Rambo movies. For the handful of you who are unfamiliar with it, Rambo's knife is a really scary looking object with a wide blade, a razor-sharp edge, serrations on the opposite edge, and even a little receptacle in the handle with a screw top from which at one point he extracts a needle and thread to sew up a deep cut he has sustained [without anaesthesia, needless to say.]
I have often thought it would be fun, albeit very, very hard, to teach an entire college course devoted simply to tracing out every single bit of technical information and every single social relationship or cultural practice that is presupposed by the existence of that knife. The more you think about it, the more you realize how deeply embedded that knife is in our material culture [as anthropologists call it] and in our social relations of production [in Marx's lingo.]
Let us start just with the stuff of which the knife is made. The blade is steel, we may suppose. Steel is made from iron combined with carbon and other elements. A quick scan of the Wikipedia article on steel gives us some sense not only of the wide range of techniques used for making steel of different sorts but also of the history of the discovery of those techniques, going back as much as four thousand years.
So to begin with, we need iron ore, which must be smelted to extract the iron. This in turn presupposes that we know everything that is required to find usable iron ore [how many of us, sent out into the world without directions, would have the foggiest idea where to look for iron ore, or even how to recognize it when we found it?] Unless we plan to scrabble the ore out of the earth with our fingers and toes, we will need to know about shovels -- what they are, how to make them, how to get the wood and other materials from which they are made, how to make something in which to carry the iron ore, how to make fire, how hot a fire we need for smelting, what sort of container one uses to smelt iron ore, and so on and on.
Pretty soon, it will become obvious that no single person is going to be able to carry out every step required to produce steel, to produce the materials from which steel is made, to produce the tools and equipment used in turning iron ore into steel, to make the equipment that is required to make the equipment and tools used, and so forth. In short, we are going to have to rely on some sort of social structure involving the division of labor and exchange of products.
And all of this, which could be extended almost indefinitely, is just what is required to produce John Rambo's knife.
The point of this exercise, of course, would be to establish definitively that no human being makes anything or does anything alone. Even the simplest implements require for their production an elaborate network of functional differentiations, dependencies, and reciprocities that implicates an entire civilization. No individualist fantasy -- not the John Galt nonsense of Ayn Rand, not the quintessentially American myth of the Mountain Man, not the equally absurd Romantic myth of the lonely creative artist -- can stand against the manifest impossibility of explaining even the existence of something as simple as a knife without the collaboration of all humanity.
It would be a lot of work to develop such a course, drawing as it would on so many different disciplines and bodies of specialized knowledge, but it would be a wonderful educational instrument.
Monday, March 25, 2013
ONE MORE
UMass Press has given permission for us to do an e-book of MONEYBAGS MUST BE SO LUCKY, so that one will be available in due course.
Sunday, March 24, 2013
WHY DO WE READ THE GREAT PHILOSOPHERS?
Immanuel Kant's Metaphysische Anfangsgründe der Naturwissenschaft, or Metaphysical Foundations of Natural Science, was published in 1786, which is to say between the first [1781] and second [1787] editions of the First Critique. I have always considered it very much a minor or secondary work by Kant, and after reading it once never went back to it. When I offered this dismissive evaluation recently, my old friend Charles Parsons, perhaps the leading expert on Kant's philosophy of mathematics, called my attention to the fact that the distinguished Kant scholar Michael Friedman had written a monumental work on the Metaphysische Anfangsgründe, and hence that I might want to rethink my opinion of it.
This got me musing about how and why we read the works of the great philosophers. Long, long ago, when I wrote my first book, on the Transcendental Analytic of the First Critique, I observed that the distinctive mark of the truly great philosophers, it seemed to me, is that they were able to see more deeply than they could say, and refused to relinquish their grasp on that deeper insight merely to achieve surface consistency. It was therefore always worthwhile to wrestle with them, struggling to liberate the deeper insights. Since it is inevitably a matter of judgment what is deep and what is not, what is worthwhile and what is not, we keep returning to those great texts, generation after generation.
I mean, think about it. Who is far and away the greatest commentator on the works of Plato who has ever lived? The answer is obvious: Aristotle. Not only is Aristotle the most brilliant philosopher who ever wrote about Plato, he actually studied with the man for twenty years! And yet, this fact has not stopped two thousand five hundred years of philosophers from puzzling over Plato's Dialogues, poking at them, prodding them, reinterpreting them, translating them into every imaginable language. No one would ever say to a Plato scholar who has just brought out a new book on one of the Dialogues, "Why do you bother? Aristotle already has told us what to think about that."
As for Friedman's decision to focus on what I and at least some others have thought of as a minor part of the Kant corpus, we need only remind ourselves that in the nineteenth century, there were many serious thinkers who considered the Third Critique more important than the First! In the eighteenth century in England, Cicero was taken seriously as a thinker, a judgment that I have always considered bizarre and absurd, even though it was apparently shared by David Hume, who was, for my money, the greatest philosopher ever to write in English [his only competitor being Thomas Hobbes.]
There is a view that has gained some traction with young philosophers today that Philosophy is now a science, and need no more concern itself with its history than physicists need waste time reading Einstein's early papers. I do not share that view, needless to say, but it too has its history, and crops up every few centuries.
All of which leads me to hope that after I have passed on, there will continue to be a few readers who are able to find something of value in my first book, Kant's Theory of Mental Activity.
[By the way, when I spellchecked this post in Blogger before hitting "publish," it highlighted the word "philosophers," which I had mistyped as "philosopehrs." It suggested "flyspeck" as a correction. Do you think it was trying to tell me something?]
Saturday, March 23, 2013
GUEST POST BY MATKO SORIĆ
My rather jejune remarks about Yugoslavian Marxists provoked Matko Sorić, who actually knows something about the subject, to write the following short essay, which I am pleased to reproduce, unaltered, as a guest post. Here is Matko Sorić's brief self-description:
"Matko Soric is a PhD student at the University of Zagreb, Croatia. He wrote a book on postmodernism (The Concepts of Postmodernist Philosophy), two scientific articles (Semantic Holism and the Deconstruction of Referentiality: Derrida in an Analytical Context; Reflexivity in the Sociology of Pierre Bourdieu: Beyond Sociological Dichotomies) and a dozen of book reviews. His main areas of interest include classical German idealism and western Marxism. Currently, he is writing a PhD thesis on Milan Kangrga. An essay on Gajo Petrovi? (Gajo Petrovi?: Critical Essay) is to published this year."
MODERNIST HUMANISM OF MILAN KANGRGA AND GAJO PETROVIĆ
by Matko Sorić
Gajo Petrović and Milan Kangrga were two crucial theoretical and logistical pillars of the Yugoslav magazine Praxis and the Korčula Summer School. Milan Kangrga is usually regarded as a Hegelian Marxist with a special interest in ethics, while Gajo Petrović is often looked upon as a Heideggerian Marxist with a special interest in analytical philosophy. There is some truth to that, but there is also a certain paradox surrounding the two. Kangrga cannot be called an ethicist, moralist, or some sort of Marxist preacher of precise normative demands, in spite of his life-long interest in ethics. Drawing upon a wide range of classical Marxist themes in a uniquely unorthodox way, Petrović developed an original philosophical position which has much more in common with classical German idealism than with Heidegger. In this text, I will try to summarize and sketch out a couple of important and internationally still unappreciated ideas from their philosophical legacy.
Some believe they were authentic dissidents who made an important and original contribution to so-called open or western Marxism, similar to Rosa Luxemburg, Herbert Marcuse, Miroslav Krleža, Milovan Đilas, and in part George Orwell, Raymond Williams, Georg Lukács, Raya Dunayevskaya, and Karl Korsch. Others claim they were unofficial theoretical facilitators in the service of an anti-Stalinist faction of the Yugoslav bureaucratic elite, with the task of vindicating and justifying the existing socialist regime. In both cases, their work remains conceptually unexplored.
Kangrga and Petrović started their academic careers as young assistants in the wake of the Second World War, at the Department of Philosophy at the University of Zagreb. Petrović wrote his dissertation on Georgi Plekhanov in 1956, while Kangrga finished his dissertation on Marx and ethics in 1961. The crucial moment in the genesis of humanist Marxism and the Praxis magazine was a conference held in the Slovenian town of Bled in 1960, where the two major factions of Yugoslav Marxists collided. Their main point of divergence was the so-called theory of reflection. According to the reflection theory, developed by the early Lenin in Materialism and Empirio-criticism and reiterated by Todor Pavlov in his Theory of Reflection, human consciousness is nothing but a necessary effect of the surrounding matter, without any causal autonomy. Our thoughts and knowledge are a direct consequence of crude mechanical determinism. To a contemporary reader acquainted with analytical philosophy, reflection theory might best be represented as a rudimentary Soviet version of eliminative materialism, or at least reductive materialism. On the other hand, a group formed around Kangrga and Petrović, inspired by Hegel and the early Marx, argued that human consciousness has a certain degree of autonomy. This group would later be known as the editorial board of the Praxis magazine. In the terminology of contemporary philosophy of mind, they could be considered emergentists.
The theory of reflection is a version of naïve realism or direct referentialism that explains human knowledge as a pure reflection or mimesis of the external world. Besides this epistemological aspect, there is a much more important political aspect. If our mental states are necessary, and our actions are based on our mental states, then our actions are necessary, whatever they may be. According to the theory of reflection, human freedom does not exist, and the course of history is inevitable. Kangrga and Petrović did not believe this to be the case. They discarded the theory of reflection as a sort of metaphysics, a term which has a somewhat special meaning for them.
For both Kangrga and Petrović, metaphysics is a name for any sort of perennial theory, be it of religious, philosophical, scientific, political, or economic origin, that denies radical change in human beings and their culture through time. What they call metaphysics resonates with the position Nietzsche dismissed as Platonism, Heidegger as the metaphysics of presence, and Derrida as logocentrism. According to Kangrga and Petrović, reality evolves, and so does human history, which means we should never stop being engaged in the transformation of social institutions. It should be pointed out that with the term metaphysics they do not designate only idealism: reflection theory is nothing but materialistic metaphysics, a model of the universe in which nothing essentially new can come into existence. In the discourse of Marxist humanism, metaphysics is another name for ontological, political, and historical determinism.
My basic claim in this text is that we should rename Kangrga and Petrović's position and term it “modernist humanism”. Why? They were both devoted to the fundamental metaphysical claim that Being is a process, not a state. Everything that exists is in constant and unstoppable flux and development, especially human beings, with the important difference that while everything else changes uncontrollably, humans can consciously create their own destiny. This abstract notion of omnipresent change can also be found in Heraclitus, Hegel, Nietzsche, Heidegger, Whitehead, and James. It is important for Kangrga and Petrović because, on the one hand, it gave them a theoretical tool with which to disregard every pre-modern social formation they encountered in everyday life, and on the other hand, it allowed them to idealise the future state of human nature.
Just like contemporary postmodernists, Kangrga and Petrović saw human nature as a social or historical construct. There is of course a big difference: unlike postmodernism or poststructuralism, which breaks down the Hegelian axis of historical progress, they implicitly believed there to be only one universal criterion of human development. Nonetheless, for them, human nature remains a historical product. A fundamental distinction they sustained throughout their entire careers was the opposition between metaphysics and historical thinking (povijesno mišljenje, geschichtliches Denken) in the case of Kangrga, and between metaphysics and the thought of revolution (mišljenje revolucije, das Denken der Revolution) in Petrović. Metaphysics always advocates some universal, transtemporal, unchangeable, and indestructible human essence, while Kangrga and Petrović see human beings as products of their own age and culture. This dispute remains present in the contemporary nature-nurture debate, as can be seen in the naturalist evolutionary psychology of Steven Pinker and in all sorts of culturalisms typical of the humanities and social sciences.
The theory advocated by Kangrga and Petrović is a version of culturalism, if by culturalism we mean the theory that inextricably links one's existence with a wider cultural context. In the culturalist paradigm, an individual's life cannot be explained solely by its intrinsic, individual properties: a proper explanation must include heteronomous historical circumstances and fluctuating social surroundings. For Kangrga and Petrović, these surroundings are History (die Geschichte), the material context produced and reproduced by humans. If humans create history, and history defines future generations of humans, it is plausible to say that humans create themselves. Just as Feuerbach claimed that God did not create men, but vice-versa, Kangrga and Petrović claim that history did not create men, but vice-versa. But since history is in constant change and development, so is human nature. That is why Kangrga states, in his speculative manner: “…Man is not what he already is, he is what he is not, but can and should be, in order to be.” (Kangrga, 1989b: 229). In their justified attempt to escape from the hard determinism of reflection theory, they ended up in an over-idealised humanism, purged of the most important elements of a Marxist social model, elements that point out the strict financial limits and material conditions of human self-creation. That is why I think we should label their theory “modernist humanism” rather than “Marxist humanism”.
There have been a couple of attempts to reconsider the relationship of Marxism and ethics. I will mention only The Ethical Dimensions of Marxist Thought by Cornel West, The Ethical Thought of Young Marx by Marek Fritzhand, Marxism and Ethics by Paul Blackledge, and Marxism and Ethics by Philip Kain. As Fritzhand states in his valuable study, there are three possible solutions: Marxism and ethics are mutually exclusive; Marxism and ethics are not mutually exclusive, and therefore a Marxist ethics should be created; and, finally, Marxism already possesses normative demands, so there is no need to articulate a specific Marxist ethics. Kangrga is among those who believed that Marxism and ethics are incompatible.
According to Kangrga's interpretation, Hegel regarded ethics as contradictory. Simply put, every ethical claim is based upon a state of affairs that should be changed and replaced by a better situation. If this better state of affairs ever occurs, the ethical claim destroys itself. So an ethical claim can exist only thanks to a morally corrupt state of affairs. The essence of ethics is the gap between ought and is, Kangrga claims: “Hegel's entire analysis of the moral consciousness tries to show that moral consciousness as practice cannot and should not, in order not to contradict its essential identity, realize what it is destined for.” (Kangrga, 1989a: 64). For Kangrga, ethics should be straightforwardly assimilated into the realm of history. Instead of a philosophical search for proper ethical values, we should turn to his version of historicism, that is, historical thinking, which explains moral values in the light of their social function.
A crucial term for both Kangrga and Petrović is practice (Croatian praksa; German Praxis). In my opinion, the best way to understand practice is through a culturalist perspective. Simply put, practice is the process of creating culture (or history) that redefines the material context of living for future generations. When Kangrga states that “…practice is creativity.” (Kangrga, 1989c: 80), he is pointing out a culturalist maxim about the arbitrary nature of social institutions. The realm of culture and history is not deterministically encoded in the structure of the universe, but freely created by man, and therefore apt for transformation. Along those lines, Gajo Petrović defines practice as “…universal, free, creative and self-creative being.” (1986: 192). For Kangrga, the possibility of practice is rooted in spontaneity; for Petrović, it is rooted in revolution as an underlying principle of the universe.
Petrović calls his own theory the thought of revolution (mišljenje revolucije, das Denken der Revolution). How come? Revolution is not only a political but an ontological concept as well. According to Petrović's process-ontology, the essence of reality is radical change. This sort of change is not a mere realization in time of a pre-existing necessity, but a moment when a completely new, unexpected, and spontaneous state of affairs comes into being. Revolution as radical change is not predictable, not even from God’s point of view. Petrović's revolution might be compared to the notion of the event in Alain Badiou or Slavoj Žižek, a radical rupture with the material or psychological past, and it is probably motivated by the Heideggerian notion of Ereignis, a legacy on which both Badiou and Žižek draw.
Kangrga and Petrović have much in common, but there are also many differences. One is the apparent difference in style. Petrović is under the influence of analytical philosophy, so he writes in a clear, plain, direct, and unambiguous manner. Kangrga, on the other hand, is under the influence of German idealism, especially Hegel, so his style is much more laden with jargon. Despite that, I believe that Kangrga had a strong intellectual influence on Petrović's thought of revolution. This influence should be explored, not just in the context of the Praxis group, but also in the wider context of left anti-communism and western Hegelian and Heideggerian Marxism.
CONFESSIONAL MUSINGS
When I was young, a theory was put forward in aesthetics [not a branch of philosophy about which I ever knew much, by the way] according to which certain sensory presentations make an objective demand on the observer [I cannot now recall the precise term that was used for this phenomenon.] For example, it was said, if a subject was presented with a line drawing of almost, but not quite, a complete circle, she would feel an objective demand to complete the circle by adding the missing segment. A variety of examples were offered of this phenomenon, on which was then erected a theory of the objective status of judgments of beauty. You get the idea, I trust.
I never gave much thought to the idea -- as I say, aesthetic theory was not my thing. But I have come to realize that I experience something very like this in my own life. I am, I must confess, an obsessive crossword puzzle solver. I don't simply mean that I enjoy doing crossword puzzles. I mean that when I come across a crossword puzzle, I am psychologically incapable of passing it by without doing it. I do crossword puzzles in ink, of course, and am embarrassingly vain about my ability to complete them.
For example, all domestic airlines put a copy of their corporate magazine in the pocket of each seat, containing articles about great places to eat at one or another of the airline's hub cities and maps of their principal airports, among other things. Usually, at the back of the magazine are a few pages of puzzles, including a crossword puzzle. Now, these puzzles are really easy, and rather boring to do, but if I get a seat with a magazine in which the puzzle has not been attempted by a previous passenger [which happens usually only near the beginning of the month], I am incapable of stopping myself from doing it, preferably before the plane actually takes off. I quite literally feel a burdensome obligation to do the puzzle, an obligation I would prefer not to be saddled with. I find this behavior pathetic, but I could no more stop myself than I could stop breathing simply because the air in the cabin is stale and probably filled with flu germs.
Needless to say, I do the NY TIMES crossword puzzle every day. Those of you who are really familiar with the TIMES puzzle will know that Will Shortz, now the editor of the puzzle but in the past the creator as well, arranges things so that the puzzles progress in difficulty as the week goes on. The Monday puzzle is so easy that it takes me no more than five minutes to fill it in. It is not really fun, and often I find myself irritated by its intrusion into an otherwise relaxed Monday morning in the Carolina Cafe with my lemon poppyseed muffin and decaf coffee. But I am utterly incapable of simply ignoring it. It imposes on me an objective demand. Not until Wednesday is the puzzle any sort of challenge at all. On Thursday, Shortz offers a puzzle with a gimmick in it, and that really is fun. The Friday and Saturday puzzles are genuinely challenging, and there are even weeks -- few and far between, I am happy to say -- when I have failed to finish one of them. [This morning, for example, I was halfway through my muffin before I solved the first clue, and I had to take a break to do the two KenKen puzzles before returning to the crossword, but I did, I am happy to say, complete it finally.] The Sunday puzzle, by the way, is not really very hard. It is just enormous, so it takes a half hour or more to do. But there have been some memorable Sunday puzzles. My favorite is one with the title "I Surrender" -- [The Sunday puzzles all have titles which are hints to the solution of certain long across words.] Each of the long across clues in this one was the same: "back down" [i.e., "I surrender."] In each case, the solution was a word or phrase which meant, roughly, "to back down" or "to surrender," and the answer had to be entered first backwards and then down. That one was, I thought, really brilliant.
What does all of this mean, other than that I am something of a dork? I seriously doubt that it implies an objective theory of aesthetic value, but it may very well indicate something about the screwed up hard-wiring of the neurons in my brain. Maybe I should alter my will and leave my skull to science.
Friday, March 22, 2013
SAD NEWS OF THE PASSING OF A GREAT MAN
I have just read online of the death at 82 of the great Nigerian novelist, Chinua Achebe. Achebe virtually created the modern African novel with his first work, Things Fall Apart, and though he was never awarded the Nobel prize [a terrible failing on the part of the Nobel Literature Committee], he was widely recognized as one of the great novelists of the twentieth century. I had a glancing personal connection with Achebe, which it is perhaps worth mentioning, with the understanding that it is, on my part, an attempt to grab on to a little piece of immortality.
In 1974-75, four years after I joined the faculty of the University of Massachusetts, then Chancellor Randolph Bromery [who has, himself, recently passed away] decided to inaugurate an annual series of lectures called Chancellor's Lectures, as a way of showcasing the distinguished members of the UMass faculty. That first year, three of us gave Chancellor's lectures -- Achebe, the great mathematician Marshall Stone, and myself. Achebe gave an elegant and extremely controversial lecture attacking Joseph Conrad's Heart of Darkness.
Seventeen years later, when I joined the Afro-American Studies Department, I discovered that my colleague, Michael Thelwell, was a very close friend of Achebe, and had in fact named his son "Chinua" after the great novelist. In my first year in the department, I sat in on Mike's lectures on Achebe's works and read most of his novels.
Achebe was badly injured in a car crash, and spent many years of his life paralyzed from the waist down. He left UMass to go to Bard College, where Leon Botstein, the president [and an old friend of mine] had a cottage specially constructed for Achebe. Some years after Achebe went to Bard, Mike took me down to Annandale-on-Hudson to see Achebe, and I had the great privilege of spending an afternoon with him.
Ever since my disastrous tea with Bertrand Russell in 1954, I have shied away from meeting famous people, but I am very fortunate to have had the opportunity, even briefly, to spend a bit of time with Achebe.
Thursday, March 21, 2013
ANNIVERSARIES
It occurred to me this morning that this is the sixtieth anniversary of my graduation from Harvard. Inasmuch as I did not attend my fifth, tenth, fifteenth, twenty-fifth, thirty-fifth, or fiftieth reunions, I think consistency requires that I also not attend my sixtieth. Technically, since I was a member of the class of '54, my sixtieth is not until next year, but who's counting? My commencement year was a time of transitions. James Bryant Conant was stepping down as President of Harvard to become High Commissioner for Germany in the post-war occupation [I actually had a brief interview with him in Berlin while I was wandering about Europe on a traveling fellowship.] The in-coming president, Nathan Marsh Pusey, was a member of the twenty-fifth reunion class, a very big deal.
When you get to my age, you spend a certain amount of time keeping track of whom you have outlived. I am sorry to say that I have outlived the two best-known members of my class, Ted Kennedy and John Updike [neither of whom I knew, by the way.]
I can vividly recall the Commencement procession that June day, with the aged fiftieth reunion class of '03 preceded by the handful of superannuated relics who had managed to survive to their sixtieth. I was quite sure that none of those stooped old men could possibly understand how the world looked to me, but contemplating this year's forthcoming Commencement ceremonies from a somewhat different vantage point, I am sublimely confident that I quite well understand the world into which the class of 2013 is being launched.
Perhaps if I make it to my seventieth I will grace the proceedings with my presence.
SHAMELESS BRAGGING
I was idly wandering around the web, reading the Wikipedia entries on my two sons, and I came across a fact I did not know. There is an annual Intercollegiate Chess Match between Yale and Harvard, and the trophy is named the Wolff Cup after Patrick, who is the only grandmaster to have played for both teams, first as a Freshman and Sophomore at Yale and then as a Junior and Senior at Harvard. I mean, how cool is that!
Wednesday, March 20, 2013
A PUZZLE
The oddness of Internet handles makes it difficult to be sure, but it is my informal impression that the readership of this blog is very heavily tilted toward men. Is that true, and if so, does anyone have an idea why? For obvious reasons the readership tends to be drawn from the Academy, but these days there is something approaching gender equality among academics, or perhaps even a tilt toward women. It may just be that a larger proportion of male commentators are willing to identify themselves by their real names. I think I am right in saying that none of the overseas commentators who have chosen to identify themselves are women.
The generational spread, on the other hand, seems to be quite broad. Obviously a blogger approaching his eightieth birthday is likely to attract some equally superannuated readers, but there certainly appear to be a goodly number of young readers, keeping in mind that like most people my age, I have a rather elastic notion of what counts as young! I mean, I have reached the point at which my students are retiring. Pretty soon, my students' students will be on Medicare.
Just wondering.
A REPLY TO A QUESTION
T. Gent has posted a long and interesting question about Rawls, the answer to which is going to take me a bit more than can fit comfortably into a comment, so I will take a few moments to reply in a post. Here is the central part of his [? her?] question -- I urge you to read the whole comment, which is attached to the guest post by my son, Patrick:
"I was reading Rawls and an old post of yours came to my mind, where you said that once his project of providing a theorem in game theory failed, Rawls's attachment to his two principles was only comparable to faith in the Biblical word. My question is: doesn't the Difference Principle have a strong 'intuitive force'? I don't mean it in the sense in which the first principle might have intuitive force. The latter could simply be due to the 'success' of liberalism in the last few hundred years. What I mean is that if you think inequality is bad, it's prima facie a brilliant solution to allow inequality only if it benefits those that are worse off."
First let me explain the reference to the Bible. If you read the several different texts in which Rawls developed his theory [first "Justice as Fairness," then "The Difference Principle," finally A THEORY OF JUSTICE], you find something really weird about the way in which he refers to his Two Principles [the so-called Difference Principle is the second.] He first states the two principles in "Justice as Fairness" as the solution to a bargaining game -- it is a theorem in bargaining theory, he says, that these two principles would be unanimously chosen by parties engaged in negotiating with one another about the fundamental binding rules to guide their social interactions. Notice that Rawls invented these principles -- neither of them, and especially not the Difference Principle, had ever been stated in anything like that form in the philosophical literature before.
Then Rawls realized that he was wrong -- the two principles as he had stated them are not the solution to the bargaining game he sketches. [If you are interested in why, you can look at my 1966 Journal of Philosophy article, "A Refutation of Professor Rawls' Theorem on Justice." I think, by the way, that he realized his original argument wouldn't work before he saw my article.]
Now, you would think that the natural thing for Rawls to do at this point would be to revise the Difference Principle, that being the principal locus of the difficulty. But he does not do that! Instead, he keeps identically the same wording of the principle, and says, in effect, "Now, you might think that the natural interpretation of this principle is -- [and then he gives the interpretation of the original article.] But that cannot be so, because [and he then offers the objections that I, and we may suppose he, saw.] So the correct interpretation of these words must be [and then he offers what is actually a new Difference Principle.]"
He talks as though he did not invent the Difference Principle in the first place, but is merely tasked with finding an appropriate interpretation of a set of words handed to us from on high. This is exactly the mode of textual interpretation adopted by biblical commentators. Since the Bible is the Revealed Word of God, we cannot go about re-writing it. But since our natural reason tells us that the obvious interpretation of some Biblical passages makes them out to be utter nonsense, we must, as faithful believers, find some interpretation of the texts that is acceptable to reason while not denying the Word of God itself. Rawls really does talk about his own theory this way all the time, and it is, if I may say so, a little creepy. It is perhaps not surprising to learn that his very first publication, as a Princeton undergraduate, was a review for the Princeton Literary Journal of a multi-volume translation of the works of the Church Fathers.
Now let me turn to the heart of T. Gent's question, which concerns the intuitive appeal of the Difference Principle as it makes its appearance in A THEORY OF JUSTICE. I think the Difference Principle does have a good deal of intuitive appeal. I also think that if we take it really seriously, it pretty clearly implies some form of egalitarian socialism, for all that Rawls does not appear to have thought so himself. But we need to keep very clearly before us just exactly what Rawls conceived himself to be doing. Rawls comes on the scene at a time when Anglo-American ethical theory was locked in what Kant would have called an Antinomy between Intuitionism and Utilitarianism, the first descending from Kant and the second from Bentham. Each school had devastating objections to the theses of the other school while having no plausible defense against the attacks from its opponent. Rawls had the really brilliant idea of moving past this impossible stand-off by resurrecting the old tradition of social contract theory and wedding it to the brand-new field of Game Theory and Bargaining Theory. The two principles were put forward as the solution to a bargaining game in which self-interested agents were conceived as being willing to take one single step beyond pure self-interest by agreeing to bind themselves to principles unanimously endorsed on the basis of rational self-interest. I think, although I have absolutely no evidence for this, that Rawls saw himself as offering a theorem as powerful in its way as the astonishingly powerful Impossibility Theorem that Kenneth Arrow had proved in his doctoral dissertation, for which he later received the Nobel prize in Economics. Rawls was very definitely not simply suggesting that his principles had "intuitive appeal," because in the context in which he was writing, that would simply have him on one side of the Intuitionism/Utilitarianism divide.
Now, having said all of that, how plausible is the Difference Principle simply as a rule for deciding who gets what? That is a very large question, so I will just offer a very quick response. The Difference Principle is, I think, not at all plausible as a rule that would appeal to rationally self-interested agents -- economic agents, as that phrase is usually interpreted in Economics and Political Theory. However, if a society embraces the Credo that I have several times posted on this blog, then something like the Difference Principle might well be attractive to the members of such a society.
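For readers who find a toy formalization helpful, here is a minimal sketch in Python [mine, not Rawls's own apparatus] of the standard maximin reading of the Difference Principle: among the feasible distributions, permit the one under which the worst-off member fares best. The income profiles are invented purely for illustration.

def maximin_choice(distributions):
    # Pick the distribution whose worst-off member fares best.
    return max(distributions, key=lambda d: min(d))

# Three hypothetical income profiles, listed worst-off first.
flat = [30, 30, 30]         # strict equality
skewed = [25, 50, 100]      # inequality that hurts the worst-off
productive = [40, 60, 120]  # inequality that benefits the worst-off

print(maximin_choice([flat, skewed, productive]))
# Prints [40, 60, 120]: this inequality is permitted, because the least
# advantaged do better under it than under strict equality.

On this reading, inequality is tolerated exactly when flattening it would leave the least advantaged worse off, which is just the intuitive force T. Gent describes.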
Of course, rather than struggle through A THEORY OF JUSTICE, which is a really boring book, the members of that society might simply inscribe on their banners the slogan From All According to Their Abilities, To All According to Their Needs.
About Nozick, by the way. I liked Bob, and when he published ANARCHY, STATE, AND UTOPIA, he had not yet become the darling of the right wing. So I guess I have always cut him some slack, even though I eviscerated the book in my 1978 Arizona Law Review article.
"I was reading Rawls and an old post of yours came to my mind, where you said that once his project of providing a theorem in game theory failed, Rawls's attachment to his two principles was only comparable to faith in the Biblical word. My question is: doesn't the Difference Principle have a strong 'intuitive force'? I don't mean it in the sense in which the first principle might have intuitive force. The latter could simply be due to the 'success' of liberalism in the last few hundred years. What I mean is that if you think inequality is bad, it's prima facie a brilliant solution to allow inequality only if it benefits those that are worse off."
First let me explain the reference to the Bible. If you read the several different texts in which Rawls developed his theory [first "Justice as Fairness," then "The Difference Principle," finally A THEORY OF JUSTICE], you find something really weird about the way in which he refers to his Two Principles [the so-called Difference Principle is the second.] He first states the two principles in "Justice as Fairness" as the solution to a bargaining game -- it is a theorem in bargaining theory, he says, that these two principles would be unanimously chosen by parties engaged in negotiating with one another about the fundamental binding rules to guide their social interactions. Notice that Rawls invented these principles -- neither of them, and especially the Difference Principle, had ever been stated in anything like that form in philosophical literature before.
Then Rawls realized that he was wrong -- the two principles as he had stated them are not the solution to the bargaining game he sketches. [If you are interested in why, you can look at my 1966 Journal of Philosophy article, "A Refutation of Professor Rawls' Theorem on Justice." I think, by the way, that he realized his original argument wouldn't work before he saw my article.]
Now, you would think that the natural thing for Rawls to do at this point would be to revise the Difference Principle, that being the principal locus of the difficulty. But he does not do that! Instead, he keeps identically the same wording of the principle, and says, in effect, "Now, you might think that the natural interpretation of this principle is -- [and then he gives the interpretation of the original article.] But that cannot be so, because [and he then offers the objections that I, and we may suppose he, saw.] So the correct interpretation of these words must be [and then he offers what is actually a new Difference Principle.]"
He talks as though he did not invent the Difference Principle in the first place, but is merely tasked with finding an appropriate interpretation of a set of words handed to us from on high. This is exactly the mode of textual interpretation adopted by biblical commentators. Since the Bible is the Revealed Word of God, we cannot go about re-writing it. But since our natural reason tells us that the obvious interpretation of some Biblical passages makes them out to be utter nonsense, we must, as faithful believers, find some interpretation of the texts that is acceptable to reason while not denying the Word of God itself. Rawls really does talk about his own theory this way all the time, and it is, if I may say so, a little creepy. It is perhaps not surprising to learn that his very first publication, as a Princeton undergraduate, was a review for the Princeton Literary Journal of a multi-volume translation of the works of the Church Fathers.
Now let me turn to the heart of T. Gent's question, which concerns the intuitive appeal of the Difference Principle as it makes its appearance in A THEORY OF JUSTICE. I think the Difference Principle does have a good deal of intuitive appeal. I also think that if we take it really seriously, it pretty clearly implies some form of egalitarian socialism, for all that Rawls does not appear to have thought so himself. But we need to keep very clearly before us just exactly what Rawls conceived himself to be doing. Rawls comes on the scene at a time when Anglo-American ethical theory was locked in what Kant would have called an Antinomy between Intuitionism and Utilitarianism, the first descending from Kant and the second from Bentham. Each school had devastating objections to the theses of the other school while having no plausible defense against the attacks from its opponent. Rawls had the really brilliant idea of moving past this impossible stand-off by resurrecting the old tradition of social contract theory and wedding it to the brand-new field of Game Theory and Bargaining Theory. The two principles were put forward as the solution to a bargaining game in which self-interested agents were conceived as being willing to take one single step beyond pure self-interest by agreeing to bind themselves to principles unanimously endorsed on the basis of rational self-interest. I think, although I have absolutely no evidence for this, that Rawls saw himself as offering a theorem as powerful in its way as the astonishingly powerful Impossibility Theorem that Kenneth Arrow had proved in his doctoral dissertation, for which he later received the Nobel prize in Economics. Rawls was very definitely not simply suggesting that his principles had "intuitive appeal," because in the context in which he was writing, that would simply have him on one side of the Intuitionism/Utilitarianism divide.
Now, having said all of that, how plausible is the Difference Principle simply as a rule for deciding who gets what? That is a very large question, so I will just offer a very quick response. The Difference Principle is, I think, not at all plausible as a rule that would appeal to rationally self-interested agents -- economic agents, as that phrase is usually interpreted in Economics and Political Theory. However, if a society embraces the Credo that I have several times posted on this blog, then something like the Difference Principle might well be attractive to the members of such a society.
Of course, rather than struggle through A THEORY OF JUSTICE, which is a really boring book, the members of that society might simply inscribe on their banners the slogan: From All According To Their Abilities, To All According To Their Needs.
About Nozick, by the way. I liked Bob, and when he published ANARCHY, STATE, AND UTOPIA, he had not yet become the darling of the right wing. So I guess I have always cut him some slack, even though I eviscerated the book in my 1978 Arizona Law Review article.
Tuesday, March 19, 2013
ONCE AGAIN, THE OLD PHILOSOPHER STANDS IN AWE OF GOOGLE
I am old enough to recall a time when the gold standard for the accumulated knowledge of humanity was the Encyclopedia Britannica. When Susie and I were married in the summer of 1987, one of the many things she brought to our new household was a complete set of the Britannica. For twenty-one years, it sat on the built-in shelves of our family room, flanking the television set, and from time to time I would pull a volume down to consult it on some bit of arcana.
In 2008, when I retired and we sold the house in order to move to Chapel Hill, we decided that the Britannica would have to go, so I took the many volumes, along with some other books, to the Amherst Town Dump, where there was a shed set aside for unwanted books. But the overlord of the shed would not accept them. He said there was no demand for them. I was reduced to driving about town with the entire set in the trunk of my car, surreptitiously dumping a volume at a time in public trash cans.
This morning, a question occurred to me. How much money, I wondered, is paid to the actress who plays Flo, the Progressive Insurance lady, in the humorous ads that have proliferated on television? This is a fact so obscure and unimportant that it could never have made it into one of the magisterial articles commissioned for the Britannica, or even for one of the lesser encyclopedias that competed with it, albeit never successfully.
So I went to my computer and asked Google. Before I had finished typing in the question, three versions of it popped up as Google suggestions, a sure sign that I was by no means the first person to whom the question had occurred. It turns out that Stephanie Courtney, the professional actress and comedian who plays Flo, is paid $500,000 a year for the gig. I have to say that I think she is worth it. After the Geico gecko, she is my favorite pitchperson, and the gecko, of course, being animated, doesn't earn a nickel.
How on earth can I ever explain to my grandson and granddaughter that there was a time when one did not have every conceivable fact at one's fingertips [or thumb tips if one is texting]?
GUEST POST BY PATRICK WOLFF
What It Is Like to Be a Bat
There is a confusion in the philosophy of mind concerning the possibility of offering a scientific explanation of the nature of consciousness. As this confusion seems best embodied in Thomas Nagel's famous essay, "What Is It Like to Be a Bat?" these remarks are framed as a direct response to that question.
The confusion resides in the distinction between subjective and objective facts. An objective fact is one that can be completely described by language. A subjective fact is one that has an experiential aspect to it. Of course, it can be described by language (since language can be deployed to describe anything); however, it cannot be completely described by language because there is "something that it is like" within this fact, and therefore a complete representation requires the experience as well. (There are, no doubt, tricky philosophy-of-language issues that are raised by the above, but this simple definition will serve us for this discussion.)
Qualia are subjective facts. (Whether all subjective facts are qualia need not concern us here; the key point is that all qualia are subjective facts.) Hence, qualia cannot be completely described by language. It is easy to grasp this intuitively when we think about our everyday experience. For example, when someone describes some subjective experience, we understand that in order to grasp more fully what is being described, one must imagine oneself in that person's experience. The description facilitates the effort, but the effort is necessary nonetheless. That we are (sometimes) successful in making this active effort is explained by the essential similarity we all share as human beings.
Now comes the confusion. It has become respectable to argue that the subjective nature of qualia suggests that a purely physicalist account of consciousness may not (or should not, or cannot) be possible. I believe this is clearly wrong. It is true that we do not at present have a physicalist account of consciousness, and it may also be true that such an account is far off. (We will almost surely not have one in my lifetime, for example.) It is even possible that there will turn out to be some reason why a physicalist account may not (or should not, or cannot) be possible. But we have no reason at present to believe that a physicalist account would not be possible. Instead, we have every reason to believe that such an account should be possible, at least in principle, because we are all physical beings in the world and consciousness is a physical phenomenon.
To further clear up this confusion, let's describe how we would determine what it is like to be a bat.
It is surely undeniable that bats have qualia. But bats, unlike people, cannot describe their qualia. And furthermore, we are sufficiently dissimilar from a bat (e.g., we have no sense that is analogous to a bat's sonar) that even if somehow we were presented with a description of a bat's qualia, it seems unlikely that we could imagine ourselves into a bat's experience. A more fundamental approach is necessary.
Before describing this fundamental approach, it may be useful to elaborate upon the normal process of imagining what another person's qualia are like. Let's take a simple example, like someone explaining what a meal you did not have tasted like. She might describe the tastes and compare them to other foods that you have recently eaten. You might close your eyes to eliminate competing physical sensations and then direct yourself to imagine the food. Somehow, your conscious effort might allow you to reconstruct and imagine (albeit faintly and inadequately) the relevant gastronomic qualia. At the time of this writing, we do not have anything like a complete physical description of how this process happens. But does anyone seriously doubt that it is at root a physical process? The sounds of the words hit the eardrum and are converted into signals that are processed (somehow) by the brain; the conscious faculty decides (somehow) to imagine the physical sensations conveyed by the signals; the brain (somehow) simulates the physical sensations and (somehow) conjoins those sensations to its understanding of the words, and so on. Even though we cannot now provide any sort of adequate physical description of the entire process, do we have any reason to doubt that it is entirely a physical process? And furthermore, do we have any reason to believe that such a description is inherently impossible? I believe the answer to both of these questions is no.
If I am right that the last two questions are correctly answered in the negative, then it follows that we could in principle replicate scientifically what normal human empathy approximates interpersonally all the time. Furthermore, this replication could be applied far beyond the limits of human empathy, both in terms of scope (applying to bats as well as humans) and representativeness (going far beyond what our normal imagination is capable of). Of course, there might be some scientific reason why such laboratory replication turns out to be inherently impossible. You never know what you will learn until you try: nobody living before the twentieth century would have been likely to guess the inherent physical limitations of the speed of light or the uncertainty principle, for example. Perhaps some physical limitation applies to the replication of consciousness.
But the possibility of a physical limitation on our ability to replicate qualia is completely different from the philosophical claim that "every subjective phenomenon is essentially connected with a single point of view, and it seems inevitable that an objective, physical theory will abandon that point of view." [Nagel, http://organizations.utep.edu/portals/1475/nagel_bat.pdf, page 2] I contend that this seeming inevitability stems simply from a lack of philosophical imagination.
So now let's describe the fundamental approach that would result in a physical theory that would meet our needs.
The first step is to develop a sufficient understanding of a bat's brain (and associated neurological system) to create whatever qualia are desired. Let us suppose that we want to know what it is like for a bat to perceive the wall of a cave using sonar. Then we must understand the bat's brain in sufficient detail to know precisely what physical processes are associated with "perceiving this wall under these circumstances using sonar." In doing so, we must learn precisely where the conscious states that are associated with such perception are located, and we must identify every single relevant feature of the brain that is associated with such conscious states.
Of course, we are nowhere close to being able to do anything of the sort. It would require a body of knowledge and an empirically verified theory of bat brain physiology that is completely beyond us today. But I do not know of any reason why this would be inherently impossible; it just means that our science is many, many years away from such understanding. And anyway, this is the easy part! For once we have a complete physical description of the bat qualia in question, verified (however it would be verified) by experiment and theory, we must move to the second step. The qualia must be replicated in a human brain in such a way that they can be experienced by a person.
The hypothesis here is that bat brain physiology and human brain physiology are sufficiently similar that specific bat qualia can be recreated in a human brain. Of course, this may not be true; but again, we have no reason to assert with any philosophical justification that it cannot be done. The key would be to identify whatever bat stimuli are associated with specific conscious states and then reproduce those stimuli under conditions that are both theoretically justified and empirically verifiable. Under such conditions, we would have at least some reason to believe that whatever qualia would then be perceived by our human subject would represent how the qualia are perceived by the bat. Our human subject would know what it is like to be a bat.
The question at hand was whether it is possible to provide a justified physicalist account of conscious mental states. Our answer is as follows: "In practice: not today. In theory: we have every reason to believe so and no obvious reason to think not, although we won't know for sure until we try. Of course, since subjective facts cannot be fully described objectively, our objective, physical theory will require a subject to experience the qualia, and only that subject will have the experience. But that is just the nature of subjective facts."
What is it like to be a bat? We don't know today, but we have every reason to believe that it is in theory knowable. More importantly, until we learn far more than we know today, we are wasting our time raising philosophical objections to the possibility of such knowledge.
LEAD-IN TO A GUEST POST
One of the humbling lessons of modern science is that it is often risky to argue a priori that certain things are impossible. Like as not, some scientist will pop up to tell you that it is not only possible, it is actual. My favorite example is the notion of "contrast dependent terms" that was popular some while ago. Philosophers argued that certain pairs of terms, such as left/right, up/down, and in/out are contrast dependent, so that it is logically impossible for someone to understand one of them without also understanding the other. Then along came the great neurologist and author Oliver Sacks, who reported in one of his books a case of a woman whose brain injury had left her quite capable of understanding and using the concept "left" but completely unable to grasp the idea of "right." Told, for example, that something she was looking for was to her right, she would turn all the way around to her left until it came into view.
Another lovely series of examples comes from the defenders of Intelligent Design, who think they are offering a brilliant and irrefutable objection to the theory of evolution when they ask, rhetorically, what possible survival value there could be to each of the minute mutational changes leading to the formation of a fully functional eye. Surely, they say, the mutation that elongates a nerve and positions it at the edge of an organism's outer surface cannot have any value that will cause the process of natural selection to privilege it. Only a purposeful God, aware that this is the first small step on the way to the eye, can anticipate the end product and thus carry the evolution of the eye to its completion.
So then some brilliant evolutionary biologists do some really classy research and sure enough, they come up with a demonstration that that intermediate stage does, all by itself, confer a differential survival advantage on its organism. The Intelligent Designers go back to the drawing boards and reconstruct their objection. "Maybe so," they reply, "but for that step in the evolution of the eye to take place, there must have been this or that change in the expression of some gene, or in the production of some amino acid, and there is no evolutionary advantage conferred by that mutation." So the scientists go back to their labs, do some more brilliant research, and sure enough come up with a differential survival advantage attached to just that apparently useless mutation. And so on and on. The Creationists never pause even for a moment to experience awe at the brilliance of the scientific research. They just hunker down and back up and dig in their heels at the very edge of whatever point science has managed to advance to.
All of which is merely an introduction to today's guest post, a little essay written by my son, Patrick Gideon Wolff. Before Patrick was the Managing Director of a Hedge Fund located in San Francisco; before he was the husband of Diana Schneider and the father of my two grandchildren, Samuel Emerson Wolff and Athena Emily Wolff; back when he was one of the most famous chess grandmasters in the world, Patrick returned to college to complete his undergraduate degree, doing Philosophy at Harvard with the likes of Jack Rawls, Bob Nozick, and Christine Korsgaard. It appears to have stuck.
Yesterday, Patrick took time off from managing his fund's millions to send me a little essay he wrote in response to Thomas Nagel's famous journal article, "What is it like to be a Bat?" He agreed to allow me to post it here. I think you will be able to see quite easily the connection to my introductory remarks above. By the way, since I am in an avuncular mood, I should note that Tom Nagel was my student back in 1960, way before he became, as it were, Thomas Nagel. He took my course at Harvard on Kant's Critique of Pure Reason. [And yes, he did brilliantly.]
WEIRD
All of a sudden today, the number of visitors to this blog has doubled. That usually means that Brian Leiter mentioned me on his blog, but I checked, and that was not the explanation. Does anyone know who generated the traffic?
Monday, March 18, 2013
AND NOW CYPRUS
No sooner had I stumbled into a discussion on this blog of political tendencies in Europe, a subject about which -- I hope it was clear -- I am woefully ignorant, than a financial crisis erupted in Cyprus that threatens once again the survival of the Eurozone. [There is a long story in the NY TIMES today that starts on the front page and continues on page B6, if you are interested.] With all its flaws, which seem to be manifold, the effort to create and sustain a pan-European economic union and something like at least a partial framework for political cooperation is one for which I have very powerful positive sentiments. The sixty-eight years between the end of World War II and the present day constitute the longest sustained period of peace in the Franco-German heartland of Europe since the eighteenth century or even, depending on how you think about it, since the Middle Ages. I have not the slightest idea how this continuing crisis of the euro is going to play out. But I really hope that a way is found to stabilize the Eurozone financially, and that economic and political structures are established that make it impossible for Europe once again to descend into the hell of fascism and war.
REALITY CHECK
As I was checking Amazon.com for the newly available digital versions of several of my books [I was right, Chris -- you can "borrow" THE POVERTY OF LIBERALISM on Kindle, and it has the Mill essay in it], I noticed that several of my books are available used for -- wait for it -- $0.01, which is to say, for a penny. One of the less-often mentioned virtues of a market economy is its salutary effect on swelled heads.
Saturday, March 16, 2013
THE BANALITY OF EVIL
The title of my post today is taken, of course, from Hannah Arendt's famous 1963 book on the trial of Adolf Eichmann, Hitler's architect of the Holocaust. In my Memoir, I tell the story of a brief encounter with Arendt in the 60's, during my time as a Columbia University philosophy professor. I gave a lecture on John Stuart Mill at a session of a faculty seminar series at Columbia, and Arendt, whom I knew casually, attended. My lecture was taken from an essay I had published as my contribution to a little volume called A Critique of Pure Tolerance, authored by Herbert Marcuse, Barrington Moore, Jr., and myself, in which I beat up on old J. S. pretty bad. At the end of the lecture, Arendt came up to say hello. She was pretty clearly not too thrilled with my talk, but she asked, politely, what I was working on. I replied that I was hard at work on a book on Kant's ethics. When I said this she brightened visibly, smiled, and said, "Ah, yes. It is so much better to spend time with Kant!"
I thought of her remark this morning as I was wondering what I might say today on my blog. During the past week and more, while I have been writing my unsatisfactory three-part unfinished essay on the concept of money, interrupted by the posting of my little paper on The Color Purple, America's political clown show has continued, complete with the bizarrerie of the annual meeting of the Conservative Political Action Conference, known familiarly as CPAC. I have feelings about what has been happening politically in America, but it would be too much to say that I have thoughts, which would imply that America's politics have a formal structure adequate to support rational discourse, and about that I have serious doubts. [Compare the well-known passage in the Parmenides (130C) where Plato has the young Socrates question whether "hair or dirt or mud or any other trivial and undignified objects" have Forms.]
Still and all, we live in this world, and it behooves us to engage with it as it is, not -- pace the Utopian Socialists -- as we wish it were. So I shall try to find something to say that is "useful or agreeable to myself or others," to quote David Hume's description of those things about which we experience a sentiment of approbation.
Well, in the course of complaining about the abysmal state of contemporary American politics, I have managed to allude to Hannah Arendt, Herbert Marcuse, Barrington Moore, Jr., John Stuart Mill, Immanuel Kant, Alice Walker, Plato, Socrates, Parmenides, and David Hume. Not a bad day's work.
Friday, March 15, 2013
EUROPE'S FUTURE
Magpie writes a comment [see comments to Addendum] about a troubling piece by Yanis Varoufakis concerning the direction Europe is currently taking. The burden of the piece is that between the two world wars, Europe descended into the hell of fascism, at least in part because of the effects of the depression, and there is reason to fear that Europe will again take that path. Varoufakis has assembled a collection of statements that sound very much like what is now being said by supposedly sensible people in Europe, but that all turn out to have actually been said by Nazis or Italian Fascists in the 30's and early 40's. The effect is very chilling.
I am not knowledgeable enough to make any sort of reasoned guess about the degree of the danger of another descent into European fascism. America's dark side is different from Europe's [not better or worse, just different], as we are seeing at this time. I have some small confidence in my ability to read the American scene, but no confidence at all in my ability to read the European scene. The one European country whose politics I have some familiarity with is France. I am encouraged by the fact that France recently elected a progressive socialist government, at the same time that I am deeply troubled by the growth there of "fascism with a human face" in the person of Marine Le Pen.
It is so hard to have any impact on the direction of the American economy and society -- I cannot even imagine what I and others of like mind could do to affect what is happening in Europe. However, this is a matter of the first importance, and I welcome comments from those with more knowledge than I.
Thursday, March 14, 2013
BUT I REPEAT MYSELF
I was searching various file folders on my computer [never mind why], and stumbled on the five-part discussion of the Humanities that I posted just two years ago, in March 2011. When I re-read the fifth Part, I discovered that I had included in it one of the greatest passages on education in all of American literature, the conversation between Huck and Nigger Jim from Huckleberry Finn on the French language. It is so wonderful that I have decided to post it again. Perhaps I will keep doing this every two years as an homage to Mark Twain. Enjoy:
Having ventured into depth psychology and other treacherous realms in search of a defense of the Humanities, I shall now return to the quotidian struggle for jobs and paychecks. Today, I wish to talk for a bit about what is happening to Humanities departments in universities. My comments will be anecdotal, and restricted by and large to this country, simply because of the limitations of my knowledge and experience. I invite my readers from other countries to tell us what is happening there.
The assault on the Humanities is almost entirely budgetary. Wealthy schools [particularly, in America, well-endowed private colleges and universities] are content to leave their Humanities departments in place, and even to underwrite their expansion and multiplication. But the budget crises that periodically afflict public institutions seem almost always to take the heaviest toll on the Humanities. The experimental sciences have for many decades now relied on government and corporate funding for most of their research, and a combination of capitalist self-interest and national defense anxiety has sufficed to keep their money pouring in.
Many of the readers of this blog will understand quite fully how all of this works, but for those of you who do not hold faculty positions at tertiary institutions, permit me a few words of explanation. A grant proposal emanating from a university-based research scientist routinely includes money for research assistants, which is to say doctoral students, who will form part of the team working in the "Principal Investigator's" laboratory. Science these days is virtually always carried on by teams, in sharp contrast to the research of Humanist scholars. [Compare the publications of the two groups. The science papers always have multiple authors, with the grant-getter's name appearing first. Only rarely do humanists publish jointly.] The grant proposal also routinely includes money for phones, travel, "research materials," and other expenses that Humanists rely on their Deans to provide.
In addition -- and this is profoundly important in the finances of a university -- funders such as the National Science Foundation and the National Institutes of Health permit grant applicants to include a very large overhead allowance -- a standard percentage of the dollar amount of the grant application -- ostensibly to compensate the home institution for the expenses incurred by hosting the research team. At most universities, this overhead, which can be as much as 40% added onto the total grant, is then divided up, by a standard formula, among the Principal Investigator [PI], the home department, the Dean of the Science Faculty, and the Provost or central office. The money going to the PI and to the home department funds graduate students, travel, phones, equipment, and all the other amenities of academic life.
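[To make the arithmetic concrete -- the numbers that follow are invented for illustration, and the division formula varies from campus to campus -- suppose a proposal budgets direct costs of $D = \$500{,}000$ at an overhead rate of $r = 40\%$. Then

$$\text{overhead} = rD = 0.40 \times \$500{,}000 = \$200{,}000, \qquad \text{total award} = (1+r)D = \$700{,}000.$$

If the standard formula at our hypothetical campus returned 10% of the overhead to the PI, 20% to the home department, 30% to the Dean, and 40% to the central administration, the PI and the home department between them would pocket an extra \$60,000, over and above everything already budgeted as direct costs.]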
In return for this largesse, as I have already noted, the grant applicants must search the existing databases of funders for money available to underwrite the research they wish to carry out, while creatively shaping their research proposals to fit the announced priorities of the funders. If there is money to fund a search for a vaccine for AIDS, but little or no money to fund a study of previously undiscovered flora and fauna in the Amazon rainforest, then the challenge is to persuade funders that potential breakthroughs in AIDS vaccine development lie waiting in the canopy of the Amazon jungles. A mathematician interested in the topology of connected tree-structures will shape her proposal so that it appears to promise a solution to traffic jams in big cities. And so forth.
Research scientists will tell you that they spend a great deal of their time writing grant proposals, and departments in the sciences weigh a candidate's success in securing grants very heavily when making tenure and promotion decisions.
By and large, humanists know nothing of this world of external funding, and many of them resist as a matter of principle shaping their research to fit the funding priorities of foundations, corporations, and government agencies. There is much less money available for humanistic research, and virtually none for teaching in the humanities. Over time, a class structure has evolved in the American academic world. Science doctoral students are routinely fully funded; doctoral students in the Humanities scrounge for funding, making do with partial teaching assistantships, back-breaking assignments in Freshman Composition, and jobs in fast food emporia. Science departments have travel budgets, research budgets, conference budgets, and multiple phone lines. Humanities Departments pay by the sheet for Xeroxing.
During the Golden Age of American higher education -- the 60's, 70's, and 80's of the last century, which is to say a time coterminous with my own career -- the number and size of tertiary institutions expanded rapidly. First in response to the demand from returning World War II GI's funded by the GI Bill, then as a national security response to the Cold War and the Soviet Union's early successes in space exploration, money poured into higher education. State Colleges were jumped up to campuses of the State University, and Community Colleges were promoted to State College branches, all needing Humanities Departments to justify their new status. The available jobs so far exceeded the supply of scholars holding doctorates in the Humanities that graduate students were being offered full-time positions even before having passed their qualifying exams. Thanks to the multiplication of campuses and money from the National Defense Education Act, some of which inevitably trickled down into university library budgets, publishers found that they could at least break even on virtually any academic title they published. A scholar in the Humanities willing and able to crank out manuscripts could get contracts and advances simply on the basis of an idea and a one-page rationale. "Bliss was it in that dawn to be alive/But to be young was very heaven."
Well, Thermidor comes to all revolutions, and pretty soon the money started to dry up. At first expansion stopped. Then travel money and research assistance disappeared. Funding for graduate students dwindled, and Deans desperate to avoid firing faculty removed professorial phone lines. These cheese parings served for a while, but as we entered the new millennium, serious cuts replaced these trimmings. Poorly paid part-time faculty began to replace tenure-track faculty, and when that was not enough, universities required by law to declare "financial exigency" before contemplating the firing of tenured faculty ventured into that previously forbidden territory. Doctoral programs were summarily terminated as "too expensive," and teaching loads were raised.
One of the most bizarre of the many budget-cutting moves has been the merging into one of previously distinct departments of language and literature. Apparently, the corporate managers who have found soft berths for themselves as university chancellors look at the array of language departments in the Humanities faculties -- Germanic Languages and Literature, Classical Studies, Slavic Languages and Literatures, Spanish and Portuguese, and all the rest -- and decide that since they aren't English, they all belong together. This maneuver always reminds me of one of my favorite passages in the novels of Mark Twain, the famous argument between Huck and Jim about whether the Duke and the Dauphin really speak something called French. Here it is, verbatim, from Chapter 14 of THE ADVENTURES OF HUCKLEBERRY FINN. I hope you will not mind my quoting the entire passage. Huck is narrating, of course:
"I told
about Louis Sixteenth that got his head cut off in France long time ago; and
about his little boy the dolphin, that would a been a king, but they took and
shut him up in jail, and some say he died there.
"Po' little
chap."
"But some
says he got out and got away, and come to America."
"Dat's
good! But he'll be pooty lonesome—dey ain' no kings here, is dey, Huck?"
"No."
"Den he
cain't git no situation. What he gwyne to do?"
"Well, I
don't know. Some of them gets on the police, and some of them learns people how
to talk French."
"Why, Huck,
doan' de French people talk de same way we does?"
"NO, Jim;
you couldn't understand a word they said—not a single word."
"Well, now,
I be ding-busted! How do dat come?"
"I don't
know; but it's so. I got some of their jabber out of a book. S'pose a man was to
come to you and say Polly-voo-franzy—what would you think?"
"I wouldn'
think nuff'n; I'd take en bust him over de head—dat is, if he warn't white. I
wouldn't 'low no nigger to call me dat."
"Shucks, it
ain't calling you anything. It's only saying, do you know how to talk French?"
"Well, den,
why couldn't he SAY it?"
"Why, he IS
a-saying it. That's a Frenchman's WAY of saying it."
"Well, it's
a blame ridicklous way, en I doan' want to hear no mo' 'bout it. Dey ain' no
sense in it."
"Looky
here, Jim; does a cat talk like we do?"
"No, a cat
don't."
"Well, does
a cow?"
"No, a cow
don't, nuther."
"Does a cat
talk like a cow, or a cow talk like a cat?"
"No, dey
don't."
"It's
natural and right for 'em to talk different from each other, ain't it?"
"Course."
"And ain't
it natural and right for a cat and a cow to talk different from US?"
"Why, mos'
sholy it is."
"Well,
then, why ain't it natural and right for a FRENCHMAN to talk different from us?
You answer me that."
"Is a cat a
man, Huck?"
"No."
"Well, den,
dey ain't no sense in a cat talkin' like a man. Is a cow a man?—er is a cow a
cat?"
"No, she
ain't either of them."
"Well, den,
she ain't got no business to talk like either one er the yuther of 'em. Is a
Frenchman a man?"
"Yes."
"WELL, den!
Dad blame it, why doan' he TALK like a man? You answer me DAT!"
I see it
warn't no use wasting words—you can't learn a nigger to argue. So I
quit."