Readers of this blog will have noticed that it takes almost nothing at all to get me started writing. Back in early October, "Luke's Mother" asked briefly how one ought to study society, and that idle question provoked a twelve-part answer from me that might easily have become a monograph. Now, a reader newly come to this blog has sent me an email wondering whether I might say something "on the subject of the ever increasing cost of education past high school and how it impacts the various levels of American society." Since this is another subject to which I have given a good deal of thought over the past half century, I will be happy to oblige. This will not run on to many segments, I shouldn't think, but it may require more than a single blog post.
I need to talk about some other matters before addressing this question directly, but it might be useful to put down just a few facts at the outset, to create a context for what follows. I spent some time this morning surfing the web with the aid of Google, and I was not able to come up with what I really wanted, which was a table showing year-by-year increases over the past half century, in nominal dollars and in dollars adjusted for inflation, for both public and private tertiary institutions. I am sure such tables exist, and if anyone reading this can point me to one, I would be most grateful.
I did, however, come up with the following information, which can serve as a substitute for a fuller and more detailed data set. Between 1978 and 2008, the cost of living rose roughly 330% [including some hairy years in the late 70's and early 80's when annual inflation was running near or above ten percent]. Over the same thirty years, tuition and fees at four year colleges rose about 980%. By way of comparison, during that three-decade stretch, medical costs, which of course have been soaring, rose a bit less than 600%. Thus, although medical costs rose somewhat less than twice as fast as inflation, tuition and fees at four year colleges actually rose three times as fast as inflation. Clearly, there is something here that needs explaining, even before we come to talk about the impact of this rapid rise on students, families, and the society as a whole.
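As a quick back-of-the-envelope check of those ratios, here is a small Python sketch using only the rounded figures from the paragraph above (no other data are assumed):

```python
# Approximate cumulative percentage increases, 1978-2008, as cited above.
cpi_rise = 330      # percent rise in the cost of living
tuition_rise = 980  # percent rise in tuition and fees at four year colleges
medical_rise = 600  # percent rise in medical costs

# How many times as fast as inflation did each category grow?
tuition_vs_cpi = tuition_rise / cpi_rise
medical_vs_cpi = medical_rise / cpi_rise

print(f"Tuition rose about {tuition_vs_cpi:.1f}x as fast as inflation")
print(f"Medical costs rose about {medical_vs_cpi:.1f}x as fast as inflation")
```

On these figures the tuition ratio comes out to roughly three, and the medical ratio to a bit under two, which is just what the paragraph above claims.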
But first, let me talk about the widely held belief that higher education for all is the key to moderating the increasingly severe income inequality that shames and blights American society. Everyone seems to have embraced this notion, from politicians, newspaper columnists, and bloggers to Nobel laureate economists. And yet it is rather obviously false. What is going on here is a pandemic commission of what we in Philosophy call "the fallacy of composition." The Fallacy of Composition is the mistake of going from the premise that something is true of each of the members of a group to the conclusion that it is true of all the members of the group. A simple example should make this all clear. When an audience leaves a concert hall at the end of the performance, it is true of each person in the audience that he or she can be the first person to leave the hall. But assuming that the exit is a normal doorway, it is not therefore true that all of the members of the audience can be the first person to leave the hall, not even if we allow for ties. Indeed, if the entire audience tries to be first through the door [which happens, typically, only after someone has yelled "fire"], the likely result is that there will be a jam at the door and no one will leave at all.
Familiar statistics, many times repeated, show that individuals with tertiary degrees are less likely to be unemployed and also enjoy, over their working lives, markedly higher incomes. From these facts, everyone draws the conclusion that if we can only dramatically increase the average number of years of education successfully completed in the workforce as a whole, unemployment will drop and average lifetime earnings will increase. This, as I say, is an example of the Fallacy of Composition.
To see why the conclusion does not follow from the premise, we need simply imagine a company that has, let us suppose, one thousand employees. Assume that ten of these are senior management, fifty are middle management, eight hundred are production or service workers who actually make the product or provide the service that the company sells, and the remaining one hundred forty have low end jobs as cleaners, janitors, loading dock workers and such. Let us also make three further assumptions that correspond pretty closely, I would think, to reality:
First, we will assume that wages, salaries, benefits, and fringe perks improve as we move up the ladder from the one hundred forty low end workers, through the eight hundred production or service workers, into middle management and then to upper management. We need not put numbers to these levels of employment, but I am going to suppose that the differences between those at the top and those in the middle are dramatic -- sufficient to support totally different lifestyles -- while the differences between the middle group and those at the bottom are certainly not trivial.
Second, we will assume that level of educational attainment correlates pretty closely with level of employment, at least in this sense: All of the top and middle managers have college degrees, and at least some of the top managers have MBAs. Keeping in mind that only 25% of adult Americans have college degrees, we will also assume that having or not having a college degree correlates in some measure with where in that great middle group of eight hundred one sits, although length of employment may also be an important factor here as well.
Finally -- and this is in fact the most important and often overlooked element of the entire situation -- I am going to assume that the wages, salaries, fringe benefits, and perks attach to the jobs, not to the people. What I mean is this. The company has a fixed number of jobs of various sorts that it must fill to carry out its business plan. When a top manager retires or leaves, the company either promotes someone into that job or recruits someone for it from the outside. The job carries a salary and associated benefits, which are offered to whomever they select to fill the job. If a middle manager is promoted, she gets a raise, and perhaps an office of her own. Should the company be restructured, so that a top manager is demoted to middle management level, he does not carry his top management salary with him. That went with the job, and he must accept a lower salary and fewer perks if he wants to stay at the company.
Now, let us suppose that a bright young woman in one of the service departments reads a Paul Krugman column about the importance of educational attainments, and by dint of great effort manages to earn a college degree. Will she improve her chances of promotion? You bet. If her company does not have any slots open higher up on the food chain, she will start reading the Help Wanted pages of the local newspaper and, with her diploma in hand, will stand a better chance of snagging a job that is a step up.
So, what happens if EVERYONE in the company reads the same column and goes to night school, earning college degrees? Does the company respond to this extraordinary upgrading of the educational credentials of its employees by totally eliminating the lowest level jobs and reclassifying all one thousand as managers, with appropriate salary raises and perks? Alas, no. Who will actually make the products or deliver the services if every employee in the company is a manager? And who will clean the offices of the newly promoted managers and haul away the trash they generate in those new offices? Indeed, where are they going to find the space for so many managerial offices?
We all know what actually happens. It has been happening in America for more than a century. The boss redefines the criteria for the various levels of employment. It now takes a Ph.D. as well as an MBA to be a senior manager. MBAs bag middle management jobs. Graduates of good private colleges get production or service jobs, and holders of diplomas from state colleges get to work on the loading dock. High school graduates need not apply.
But, you say, surely these thousand college graduates will fan out across America and find better jobs somewhere? True, true. Unless, heaven forbid, everybody in America takes to reading Paul Krugman, and they all earn college degrees. Then, the Fallacy of Composition kicks in with a vengeance. Only in Lake Wobegon are all the children above average.
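The thousand-employee example can be set down as a toy model. The headcounts are the ones given above; the salaries are invented purely for illustration. What the model makes concrete is the third assumption: pay attaches to the job slots, not to the people, so upgrading everyone's credentials leaves the payroll, and its distribution, untouched.

```python
# A minimal sketch of the argument above. Headcounts come from the post;
# the salary figures are hypothetical, chosen only for illustration.

# The company's fixed job structure: (number of slots, salary per slot)
jobs = {
    "senior management": (10, 300_000),
    "middle management": (50, 120_000),
    "production/service": (800, 50_000),
    "low-end": (140, 25_000),
}

def total_payroll(jobs):
    """Pay attaches to slots, so payroll depends only on the job structure."""
    return sum(n_slots * salary for n_slots, salary in jobs.values())

payroll_before = total_payroll(jobs)

# Now suppose every employee earns a college degree. The slot counts and
# the salaries attached to them are unchanged; only the credentials
# demanded for each slot ratchet upward.
payroll_after = total_payroll(jobs)

# No one's pay rises merely because everyone studied.
assert payroll_before == payroll_after
print(f"Payroll before and after universal degrees: "
      f"{payroll_before:,} == {payroll_after:,}")
```

Any one employee can change which slot she occupies, which is why the inference seems plausible for each individual; but nothing in the model lets all one thousand occupy the better slots at once.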
Well, this is enough to get us started. I shall continue tomorrow or the next day.
28 comments:
The best source for official education data is here:
http://nces.ed.gov/programs/digest/
There is also the cost of applications to grad school, which I'm finding at the moment to be insurmountable. As a recent graduate, I'm having to forgo applying to numerous schools I had intended to, merely because I cannot afford the application fees. Worst of all, if a school rejects you, it still retains the application fee. Pfui.
I will get to costs a bit later, but that really is just awful. They will take any way to squeeze a little bit more out of the anxious students who are desperate to launch a career -- and then justify it by complaining about how much it costs them to process the applications. On occasion, in the Afro-American Studies doctoral program I ran at UMass, we would want to admit a student who had not paid the fee, and the Graduate School would not allow us to do so until the fee was paid. Talk about Plato's Academy!!!
Yeah, although I want to attend numerous schools in the New York and Northeastern US region, the application fees are on average $100 more than the ones in the South (where I live now). Ergo, as much as I loathe the South, I'm rather stuck sending out applications here, and not there.
Great post. I am totally with you on Krugman committing the fallacy of composition.
However, the fallacy of composition is not exactly this:
"going from the premise that something is true of each of the members of a group to the conclusion that it is true of all the members of the group"
Because that is a valid inference. "True of each" and "true of all" do not differ in their truth-conditional contribution. For example, "Each philosopher went fishing" and "All philosophers went fishing" are logically equivalent. Rather, the fallacy involves a scope confusion. (I would have thought that Quine taught you this 60 years ago, but then, maybe he didn't; Quine himself was confused about scopes at that time, as in his example of bicyclists who necessarily have two legs and mathematicians who are necessarily rational :)
The confusion involves these:
(1) For each x in G, it is possible that P(x)
(2) It is possible that, for each x in G, P(x)
The fallacy of composition is just the fallacy of going from a premise of the form (1) to a premise of the form (2). This is not a valid move.
As the example you use immediately after your description of the fallacy makes clear, the action is in the modal expression "can" and its scope, not in the extensional "true of" and "each". Replace "can" with the extensional "is", and you'll find that the example you give becomes a valid argument.
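[Editorially, the two readings at issue in (1) and (2) can be displayed side by side in standard modal notation, with the diamond for "possibly":]

```latex
% (1) Each audience member can be the first one out:
%     the possibility operator inside the scope of the quantifier
\forall x\, \Diamond\, P(x)

% (2) Possibly, every audience member is the first one out:
%     the possibility operator outside the scope of the quantifier
\Diamond\, \forall x\, P(x)

% On this diagnosis, the fallacy of composition is the invalid step
\forall x\, \Diamond P(x) \;\not\Rightarrow\; \Diamond\, \forall x\, P(x)
```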
There is a difference between 'true of each' and 'true of all', a distinction Abelard made between generality in 'sensu diviso' and in 'sensu composito'. Classical logic doesn't respect the difference, but natural language does (and so much the worse for classical logic). If you go to a dance party attended only by old hands, it's true in sensu diviso that all the people there can dance, because it is true of each individual. But it isn't true in sensu composito that everybody at the party can dance, because it isn't a truth that whoever might be at the party would be a good dancer, because a newbie might wander in. Another way to say this is to say that in this case, it's a truth de re of all the people at the party that they can dance, but not a truth de dicto of the party-goers (very frequently you can map the sensu diviso/sensu composito distinction onto the de re/de dicto distinction like this, but they're not the same distinction). One form of the fallacy of composition is confusing in sensu diviso truths of all the members of a set for in sensu composito truths.
I'd also hesitate long and hard before I call Quine confused about anything.
Marinus, in his papers on quantified modal logic, Quine committed what Ruth Marcus accurately described as "logical howlers", such as the one I cited. Philosophy has moved on; you're not going to find any (prominent) contemporary philosophers defending Quine's attack on quantified modal logic.
Of course Quine was confused about lots of things. He was merely human. Philosophy moves on and confusions get corrected.
What you say about 'sensu diviso' and 'sensu composito' simply restates my point in different words. Note that all of the action is in the modal "might". You made my point again with another example.
If you read philosophical logic published since the 1960s (e.g., Saul Kripke, Kit Fine, David Kaplan, Stephen Neale, all of whom vindicate Marcus's assessment and represent the current consensus), you'll find that we've come a long way from wondering whether a mathematician who is a bicyclist is necessarily rational but not necessarily bipedal, or necessarily bipedal but not necessarily rational. This stuff is covered in undergraduate logic courses these days. I learned about these confusions of Quine's as an undergraduate, and I have taught the same material to undergraduates as a graduate student. There's just no serious question about who was right at this point anymore.
I do read philosophical logic published since the 60s. I also think you should be less eager to lecture people. My addendum was mostly against the tone of your response: I found 'I would have thought that Quine taught you this 60 years ago' to be a very immoderate thing to say.
Yes, my point about the distinction in types of generality comes to the same conclusion as yours about the scope of modal operators, because you, I, and Prof Wolff all believe there is something like the fallacy of composition. My point, though, was that there is a difference between 'true for each' and 'true for all', and a difference which suffices to explain the fallacy of composition. And while our points come to the same conclusion, there is no general way to map the sensu composito/sensu diviso distinction onto an observation about modal scope, just as the de re/de dicto distinction isn't a modal distinction. But instead of us having a pissing match, let's keep our discussion to the topic at hand, and hopefully not in a preachy tone.
"The company has a fixed number of jobs of various sorts that it must fill to carry out its business plan."
This is itself an unhelpful working assumption. Suppose that higher education does actually boost potential labor productivity. (This point is controversial, and the economics literature contains an entertaining contrast between "human capital" and "screening" models.)
But if one takes enhanced potential productivity as an initial working hypothesis, then your company could use an increased number of skilled employees with greater amounts of capital (machinery, equipment, IT), and use less unskilled labour. And this could be aggregated up to the level of the economy as a whole. An example would be southern Germany or Switzerland, where factory employees tend to be more like technicians or even engineers than stereotypical English or American assembly line workers. Average labor productivity can (and does) rise over time around Lake Constance, without running into the Lake Wobegon paradox of everyone being above average.
One of the useful points which economists specializing in education are making in the current debate over the changing policy on UK university fees is to ask whether the enhanced income and job security which a UK degree has conveyed in the past survives a significant rise in the proportion of school graduates going on to university. (This proportion has risen substantially over the past decade.) So it is an active research topic rather than an unexamined assumption.
First of all, I thank those of you who corrected and clarified my remarks about the fallacy of composition. I am content if my point was right, even though my exposition and explanation of it left something to be desired. I fully understand the tendency to look disparagingly at the giants who went before us. Philosophy does not thrive on deference. Quine was possibly the smartest person I ever knew [although maybe Noam Chomsky deserves that title, or indeed Amartya Sen], but of course he had blind spots and made mistakes. Had he not, what would there be for the rest of us to do? I was there, as they say, when the mature and famous Quine met the undergraduate Kripke. Modal logic in those days was the sort of thing done by C. I. Lewis, with whom Quine had studied and for whom he had the same genial contempt as you folks have for Quine. I think he never was willing to acknowledge that Kripke was onto something.
Wallyverr, I agree completely, and planned to address that argument today. As you will see, I am not too impressed with it, at least in the context of my present discussion, for reasons I will try to explain later today and perhaps tomorrow. This is one of the problems with posting daily on a blog rather than writing in the privacy of my study and publishing all at once.
I've tried to post this twice, but blogger.com keeps refusing to accept it, perhaps because it's too long. So let me try doing it in pieces. To begin with my reply to Marinus:
--
Marinus, oh for crying out loud. Lecturing people on logic is my job. It's unsurprising, then, that I'm eager to correct the kinds of confusions that I lecture on outside of the classroom. It's what Bob is doing with this post, in which he accuses Nobel laureate economists of a certain logical confusion. It's what I do in my initial comment, in which I correct a (very common) logical confusion in Bob's accusation. If he can dish it out, he can take it.
To soften the tone of my initial comment, I gratuitously included a "smiley" at the end of the sentence about Quine, to make it absolutely crystal clear that that part was meant to be humorous. But I guess on this World Wide Web you can always count on someone to miss the cue even if it comes in the form of a symbol ("emoticon") universally recognized to mean "This is not meant in a hostile way"!
I know that Bob is above getting offended by friendly jabs from graduate students who learned logic half a century after he did, and in any case you have no business getting offended on his behalf. He can defend himself quite ably.
[cont'd]
But as for the substance of the matter, I think it's quite serious and important. Talk of "the fallacy of composition" is all over economics texts. Some Nobel laureates (e.g., Krugman) think they understand it, and yet they're often guilty of it -- and this has consequences for the rest of us, who might read their NYT columns and believe what they say. Some philosophers who criticize the Nobel laureates on blogs incorrectly describe the fallacy (this post). It's important to get clear on what the fallacy is, and this is an area in which logic has something to contribute to policy debates.
And as for the logical/philosophical issues, I don't think you would have said the things you did in your posts if you had actually taken in the lessons of the papers and books I was alluding to (Kripke, "Naming and Necessity", "Speaker's Reference and Semantic Reference"; Kaplan, "Opacity"; Fine, "Quine on Quantifying In"; Neale, Descriptions [MIT Press, 1990 or so]). You repeat your mistaken claim that there is a truth-conditional difference between "true for each" and "true for all". There isn't, at least not in English. In English supplemented by medieval jargon, you can get "readings" of these locutions which conform to the ones you claim to find in English, but that ain't English. And note that -- as I already pointed out -- to spell out these readings in English (as you did) you need the modal auxiliaries "might"/"can" which introduce the scope ambiguities I was talking about in my initial comment. So, again, I don't see how the introduction of medieval vocabulary does anything at all to clarify the issue. As far as I can tell, it just enables you to make my point in a much more complicated way.
[still cont'd]
A biographical note that is not entirely irrelevant to the discussion at hand: as an undergraduate, I learned logic from Terry Parsons and Calvin Normore, who you will know are two leading (perhaps even the leading) scholars of medieval logic. So I've heard of Abelard. Both Terry and Calvin agree 100% with what I said about Quine's confusion about the interaction of generality and modality. In fact, I learned it from them. Terry is the author of a classic paper -- one of the first -- pointing out Quine's fallacies in this area (it's called "Essentialism and Quantified Modal Logic", highly recommended).
Though you may want to make this into a "pissing match", this isn't one. There are logical facts that are at issue.
And finally, Bob, your "point was right, even though [your] exposition and explanation of it left something to be desired" -- that is just how I intended my first comment to be understood.
This comment is completely off topic (off the topic of the original post), but because I live and breathe this stuff, I cannot but respond to Bob's comment about Quine, C.I. Lewis, and Kripke...
Bob is exactly right that at the time when "the mature and famous Quine met the undergraduate Kripke ... [m]odal logic ... was the sort of thing done by C. I. Lewis, with whom Quine had studied and for whom he had the same genial contempt as [I] have for Quine".
This genial contempt was quite justified, because modal logic at the time was not logic in good standing. Lewis had developed various axiomatic systems of propositional modal logic, and there was not much agreement about what those systems were about, or about which one was correct even if the subject matter (e.g., logical necessity, provability) were fixed.
[cont'd]
An axiomatization of a type of logic is generally thought to derive its legitimacy from a semantics. A "semantics" in this technical sense (pioneered by Gödel and Tarski) is a theory of models. A model is a kind of set-theoretic structure that provides an interpretation for the formal language. In the simplest case (for the lower predicate calculus) this would be a domain of individuals together with a function that assigns each nonlogical symbol of the formal language an "interpretation", viz., a denotation or extension which is a member or a subset or an n-tuple drawn from the domain. One defines "truth in a model" so that one can prove that one's axiomatic system preserves truth in models. That is what it is for the system to be sound or for its axioms/rules to be valid. If a system is not sound relative to the intended class of models, then it is inconsistent and uninteresting.
In the 1940s Quine voiced justifiable concerns about the interpretability of axiomatic systems of modal logic, especially ones that included both quantifiers and modal operators. Their consistency was yet to be proved, and Quine conjectured that their consistency could not be proved -- that they were inconsistent and uninteresting. Some of the informal arguments he used at this juncture were fallacious (such as the bicyclist/mathematician argument I cited), but no matter, the concern was understandable.
In the 1950s, a brilliant young philosopher named Saul Kripke -- then still a high school student -- solved the problem. He provided a perfectly respectable semantics for quantified modal logic (QML) and used it to prove the soundness of the strongest system (S5) of QML. He also proved its completeness, which is a much more difficult task (a system is complete iff every theorem of it is true in every model). This turned out to be a rather simple -- but no less brilliant for that -- application of the Gödel/Henkin completeness proof for the lower predicate calculus to QML, in which the modal operators were, in effect, treated as quantifiers. (And it is, by the way, a far more important logical achievement than anything Quine ever published.)
At this point Quine should have accepted that his challenge had been met -- that Kripke had proved that there was nothing incoherent about QML. But instead he became increasingly irrational and vindictive in his writings on the topic, finding new (fallacious) reasons to condemn QML. The "logical howlers" (Ruth Marcus) reappeared. Scopes were mangled. QML allegedly entailed "invidious essentialist" claims. This was quickly refuted by T. Parsons, who proved that QML carried no essentialist commitments. Oh, but no matter: modal logicians are the kind of people who should be sympathetic to essentialism anyway, was Quine's response, and therefore QML is bunk. And the Kripke semantics for QML invokes "possible worlds" and other creatures of darkness. Not so: it only requires maximally consistent sets of sentences (sometimes called "worlds"), and sets and sentences are entities in good ontological standing even by Quine's nominalistic standards. And if that didn't stick, he went back to his old charge that modal contexts are "referentially opaque", so quantifying into them somehow doesn't make sense (even though it makes sense in natural language and Kripke had shown that it makes formal sense too). The doctrine of "referential opacity" was thoroughly debunked by Kaplan, Fine, and, much later, Neale in his excellent book on descriptions. Just to give you a flavor of how desperate these attacks were, I'll note that the "referential opacity" argument depends on treating definite descriptions as singular terms, whereas in his other writings in the same years Quine was explicitly committed to Russell's theory of descriptions, according to which definite descriptions were not singular terms.
Clearly this requires some kind of psychological explanation. It seems that Quine simply could not accept that he had been wrong about something he had invested so heavily in. Few philosophers can.
Let me say, first of all, that I really appreciate anonymousphilosopher's multi-part comment in which the modal logic situation is laid out. Thank you so much for it [and just to be clear, it is not at all something I could have written, even if I had tried.] Let me add one rather poignant story about Quine, who really was a person of some weight and honesty, for all his faults. After Quine published WORD AND OBJECT, the undergraduate Kripke made an appointment to talk to Quine about it. Saul, who had the social skills of a rock, managed not to show up for the appointment. Quine, who respected intelligence in whatever form, agreed to a second appointment, which Kripke managed to keep. Marshall Cohen walked by Kirkland House [I think] as the meeting ended, and swears he heard Quine muttering to himself, "Maybe I am all wrong. Maybe I have been wasting my time."
Muttering something to oneself is one thing -- admitting it in public is another.
It takes a lot of courage and honesty [especially in Philosophy -- where theories are seldom disproved per se] to admit that one's whole theory is wrong -- like Russell did several times, and like Wittgenstein and Ayer did at least once -- especially if this theory became accepted as the consensus in the philosophical world and the author thereof received accolades and laurels for it...
Bob, I'm glad someone got something out of it! I suspect there are many professional philosophers reading this blog who know the story at least as well as I do.
I read your Kripke story in your memoir months ago, and it is one of the (to me) most memorable stories in it. I always suspected that Quine privately recognized, or at least suspected, that he was wrong on modal logic. He was a smart guy, obviously. He couldn't have failed to see the force of Kripke's arguments (and Kaplan's, and Fine's, etc.) when almost the entire rest of the philosophical community did.
One correction, though: I misdescribed completeness. A completeness theorem says that if a formula is valid (in the sense of true in every model), then it is a theorem (i.e., provable in the system whose completeness is at issue).
Also, in my first comment I had two occurrences of "premise" -- the second one should have been (in reference to (2)) "conclusion".
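[For readers keeping score, the two corrected directions can be set down side by side, with the turnstile for provability in the system and the double turnstile for validity, i.e. truth in every model:]

```latex
% Soundness: every theorem is valid (true in every model)
\vdash \varphi \;\Longrightarrow\; \models \varphi

% Completeness: every valid formula is a theorem
\models \varphi \;\Longrightarrow\; \vdash \varphi
```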
I never defended Quine on modal logic, and am unsure what made you think it worthwhile to type out a (quite comprehensive!) lesson in logic and the history of QML. It was a hell of a thing to wake up to. Perhaps you confused me for one of your students rather than one of your peers.
I think you're mistaken about the in sensu diviso / in sensu composito distinction -- I can't see why you blithely dismiss the claim that this distinction can be made use of in natural language, though (the relationship between natural language and formal logic being what it is) of course it doesn't map on to every use of 'true of each' and 'true of all'; the distinction also exists independently of QML, because there are other ways to state it, like set-theoretically -- but this gets into increasingly esoteric terrain, I don't really see what would hinge on it, and we needn't discuss it. A biographical note of my own -- I first saw the in sensu diviso / in sensu composito distinction in work by David Lewis (it plays a small but important part in 'Convention'), so it's hardly only of interest to the medievals. The rest of what you said is of course true and deeply familiar to both of us. I told you that you were rude, not that you were bad at logic.
Hell hath no fury like a logician scorned. Let's all cool it, please, and return to the question at hand, which is why the cost of higher education in the US has soared at three times the rate of inflation in the last three decades. Stay tuned for tomorrow's episode [I grew up on Saturday matinee fifteen minute segments of The Perils of Pauline.]
Well, I'll just second Marinus here, that one can make out the relevant distinction between "each" and "all" in natural language without appeal to some apparatus of formal logic. Just because natural language is ambiguous doesn't mean that it is not amenable to interpretation that construes the relevant sense. Indeed, its ambiguity is part of what makes for its flexibility in covering such a large number of cases of application, something it is doubtful that any formalization could quite achieve.
Marinus, my multi-part comment on QML was in response to Bob, not to you. Anyway, we have moved on.
I have no opinion one way or the other about Anonymous Philosophy ABD's moral fiber, but his primer on the development of modal logic was fantastic. And coupled with Professor Bob's story - also a favorite from the memoirs (that and the one about Kripke and the Peas) - from when he was actually there! And all as a footnote to good social science!!
I guess I don't have the genial contempt of a philosopher, but this is a remarkable blog.
A belated – and a somewhat beside-the-point – comment about the scopes issue above [Prof. Wolff knows I was busy last month doing something important – albeit evil by the reckoning of some (suffice it to say that I am a reservist of the IDF..) – so I could not post this on time]:
As some know, David Stove (yea – that guy) classified a large class of arguments in Philosophy under the common heading: "The GEM". Much of Stove's writing is devoted to debunking the GEM while tracing its influence through the ages and analyzing its appeal.
The GEM is the sort of argument from the premise:
A) "X is only experienced/thought of/.. under conditions b, c, d.."
to the conclusion:
B) "Therefore, X itself is not experienced/thought of/.. , but only X as it is experienced/thought of/.. under conditions b, c, d.."
Now, Stove gives a bunch of examples and counterexamples for this sort of reasoning [compare: one cannot chew an oyster but with one's teeth; therefore, one cannot chew the oyster itself, but only the oyster as it is chewed with the teeth], but I wanted to consider an example that Stove does not give [although he does discuss a very similar case], which I find interesting [and a bit confusing too]. This example directly involves a confusion of scopes.
Consider the following argument:
(1) It is impossible to conceive of anything without this thing being conceived of.
(2) So even if a thing, call it x, could exist without being conceived of, one could not have conceived of this latter fact without conceiving of x – thereby making this "fact" a falsehood instead.
(3) Therefore, the existence of all things is conditional upon their being conceived of.
This reasoning is a bit confusing [and not just confused] – isn't it?
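For what it is worth, here is one way to symbolize premises (1) and (3) – my notation, offered only as a sketch, not Stove's own – with Conc(x) for "x is conceived of" and E(x) for "x exists":

```latex
% (1) is a tautology: necessarily, whatever is conceived of is conceived of.
(1)\quad \Box\,\forall x\,\bigl(\mathit{Conc}(x)\rightarrow \mathit{Conc}(x)\bigr)

% (3) is a substantive metaphysical claim: whatever exists is conceived of.
(3)\quad \forall x\,\bigl(E(x)\rightarrow \mathit{Conc}(x)\bigr)
```

Written this way, it is plain at a glance that the first is empty and the second is not, so whatever work the argument does must be done in step (2).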
Now, it is obvious that (1) is a tautology. It is equally obvious that (3) is not a tautology [and to most it is obvious that (3) is incorrect] – whatever meaning one ascribes to "conditional upon". So (3) cannot follow from (1). The problem must be looked for in (2).
Stage (2) seems like a reductio ad absurdum of the negation of (3), meant to prove (3) – in and of itself, a standard logical procedure.
Somebody with logical training may notice right away the self-reference contained therein. But this cannot be the whole story.
Relying upon the prohibition against mixing an object-language with its metalanguage to refute the above argument is dangerous: Tarski himself acknowledged that natural languages are capable of meaningful self-reference, and some have charged that the procedure of defining a hierarchy of languages is no more than an ad hoc device for avoiding the Liar's Paradox – solving a philosophical problem by fiat.
At any rate, I shall bypass the problem of the self-reference contained in (2) and focus instead on the confusion of scopes therein.
[to be continued..]
[continuation]
Suppose someone wakes up in a bad mood one morning and thinks: "Everything is simply sh*t!". Now, we may ask: did this guy just entertain the thought that the number pi is sh..? Well – in a sense. It may well be that this guy has never studied Math and never heard of pi, and even if he had, he did not necessarily "have pi in mind" when he was thinking of "everything". Nonetheless, it does follow from his grumpy judgment that the number pi is sh*t.
So here the distinction between "Joe thinks that for each x, x is sh*t" and "For each x, Joe thinks that x is sh*t" comes in handy.
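In the notation of epistemic logic, the two readings come apart like this – my symbolization, again only a sketch, with B_Joe for "Joe thinks that" and S(x) for "x is sh*t":

```latex
% de dicto: Joe assents to one general proposition
\text{(de dicto)}\quad B_{\mathrm{Joe}}\bigl[\forall x\, S(x)\bigr]

% de re: of each thing, pi included, Joe entertains a thought about it
\text{(de re)}\quad \forall x\, B_{\mathrm{Joe}}\bigl[S(x)\bigr]
```

The first can be true of a man who has never heard of pi; the second would require a thought about pi in particular.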
Now, let us return to (2). If one, call her Jane, thinks that there is something, out there, that exists without her, Jane, thinking about it – is Jane contradicting herself? Or, at least, is she entertaining a thought that is evidently wrong? – After all, if she is thinking of the existence of that thing out there, she is, in a sense, thinking of this thing out there – so this thing that she thinks of, if it exists, does not exist without her, Jane, thinking of it.
But here we must distinguish between (i) "Jane thinks that [there is an x, such that (Jane does not think of x)]" and (ii) "There is an x, such that [Jane thinks that (Jane does not think of x)]".
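Put formally – again only a sketch, with B_Jane for "Jane thinks that" and T(Jane, x) for "Jane thinks of x" – the two versions differ only in the placement of the quantifier:

```latex
% (i): Jane entertains a general existential thought -- harmless
\text{(i)}\quad B_{\mathrm{Jane}}\bigl[\exists x\,\lnot T(\mathrm{Jane},x)\bigr]

% (ii): of some particular x, Jane thinks she is not thinking of it
\text{(ii)}\quad \exists x\, B_{\mathrm{Jane}}\bigl[\lnot T(\mathrm{Jane},x)\bigr]
```
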
Version (ii) describes an impossible situation – either logically impossible or merely technically impossible – this makes little difference [it seems to me that a problem akin to Moore's Paradox is involved here – but no matter].
Version (i), on the other hand, is totally "kosher" – no absurdity detected there – and therefore no reductio ad absurdum goes through. While version (ii) is indeed absurd, it is not what is meant when one speaks of thinking about the fact that some things exist independently of one's thinking of them. So all (2) amounts to in the end is the triviality that one cannot think of the existence of something without thinking of that something. From this, (3) obviously does not follow.
Now, I do not know how many [if any] bothered to read all of the above, but, awkward as my analysis may be, it is of relevance to contemporary Western intellectual life:
The confusion above is an instance of a confusion of scope – and such confusions are to be found everywhere in public discourse and in private reasoning. This particular confusion lends weight to relativisms of all sorts – positions which seem to be popular in Humanities departments [although less so in Philosophy departments] and which seem to rely heavily upon this sort of reasoning.
All this goes to show how important it is that kids in school, for example, learn logic [I doubt that many without any logical training could read through the above..]