Friday, June 15, 2012


I have often remarked that I do not consider myself a scholar.  It goes without saying that I also do not consider myself a research scientist.  But my lack of research credentials was brought home to me rather abruptly by Dania Francis, the young advanced doctoral student whom I recruited, at the suggestion of the head of the Spencer Foundation, to assist me with the formulation of what is apparently called a “research protocol” for my new project at Bennett.

I had decided to select sixty of the incoming one hundred eighty-three Bennett Freshwomen for the Pilot Program I shall be running in AY 2012-2013.  For various reasons, I wanted a representative sample of students, as determined both by their high school GPA’s [that is “grade point average” for my foreign readers] and by their status as either in-state North Carolina students or out-of-state students.

The Excel spreadsheet of the incoming students provided to me by the “Office of Enrollment Management” is organized in descending order by GPA, so, since I was selecting a third of the class for my Pilot Program, I thought the cool way to choose them would be simply to go down the list, tagging every fourth name.  This produced a representative selection of GPA’s and, as I anticipated, close to a representative sample by geographic origin.  Pretty good, yes?  I then divided the sixty students thus selected into ten groups by simply opening a new field in the spreadsheet called “Group,” and then going down the list of sixty students [still arranged, of course, by GPA] and marking the fields numerically, 1 through 10, in order, thus producing ten groups, each of which had a range of GPA’s.  I checked, and they also had a scattering of in-state and out-of-state students.  Along about now, I was feeling pretty good about myself, thinking that I could do this without the help of a graduate student!
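The procedure described above — walking down a GPA-sorted list, tagging names at a fixed interval, then numbering the chosen students 1 through 10 in rotation — is what statisticians call systematic sampling. A minimal sketch in Python, with an invented roster (the actual spreadsheet is not reproduced here; note also that a step of 3, not 4, is what tags roughly a third of 183 names, so the sketch uses 3):

```python
def systematic_sample(roster, step):
    """Tag every `step`-th name on an ordered roster (systematic sampling)."""
    return roster[::step]

def assign_groups_cyclically(students, n_groups):
    """Number students 1..n_groups in rotation, so each group spans the GPA range."""
    return {s: (i % n_groups) + 1 for i, s in enumerate(students)}

# Invented roster: 183 students, assumed already sorted by descending GPA.
roster = [f"student_{i:03d}" for i in range(183)]
selected = systematic_sample(roster, 3)           # every third name, about a third of the class
groups = assign_groups_cyclically(selected, 10)   # groups 1..10, each with a range of GPA's
```

Because the roster is sorted before tagging, both the selection and the groups inherit a spread of GPA's automatically — which is exactly why the method looks representative at first glance.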

With mock modesty, I explained all of this to Dania, expecting her to exclaim, “But that is exactly the way it should have been done!”  Fat chance.  She looked at what I had done and said quietly, “I think you should choose the students randomly.”  “But,” I replied, “My way has produced a selection that almost perfectly mirrors the entire class.”  “Yes,” she replied, “but there may be hidden variables.  I think the Spencer Foundation would prefer a random selection.”

Well, all of this was in aid of getting money from the Spencer, and my ego is not so large as to stand in the way of a grant, so I said, “How would you do that?”  “Very simple,” she said [rather like explaining rain to a child], “we just assign them numbers 1 through 183, and then generate a random number sequence that selects sixty of them.  Then we do the same thing to group them in ten groups of six each.”  [I hope I have that right.]  She drove back to Boston from Amherst, and had it all done by the next day.
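The procedure she describes amounts to simple random sampling followed by random assignment. A sketch of how it might be done in Python (the student IDs and the seed are invented; nothing here is from her actual spreadsheet work):

```python
import random

rng = random.Random(2012)  # arbitrary seed, fixed so the draw is reproducible

ids = list(range(1, 184))        # students numbered 1 through 183
selected = rng.sample(ids, 60)   # simple random sample of sixty, no repeats

# Shuffle the sixty and deal them out into ten groups of six each.
rng.shuffle(selected)
groups = [selected[i::10] for i in range(10)]
```

The key difference from the every-fourth-name method is that every possible set of sixty students is equally likely to be drawn, so the selection cannot correlate with anything about the ordering of the list.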

I reported all of this to Mike McPherson, President of the Spencer.  Here is his reply:  “Sounds great. It's a good sign that she immediately recognized the need to use random assignment, since that option is available.”

Pretty clearly, I have a lot to learn.


Amato said...

Haha. I can relate to the experience. I've been at Spencer for almost a year now, and there’s no doubt that it is a steep learning curve to transition from philosophy to empirical research. Although, to be fair, it hasn't been so much of a transition (most of my work here has been on the less quantitative end of things) as just picking up enough bits and pieces to understand what people around me are talking about.

It's nice to hear you've been in contact with Mike. I'm actually Mike's research assistant, and when you first wrote about this project I thought about recommending that you get in touch with him--looks like you beat me to the punch. He's a great guy, a total educator at heart, and has involved me in a number of projects that are quite philosophical in nature. You might find it useful to check out his book "Crossing the Finish Line," which is on graduation rates at public universities.

All the best with the project. I look forward to updates as it progresses.

JR said...

Are there no "hidden variables" to her method? I still fail to see the superiority of her method over yours. And, if hers is superior, she certainly failed to explain adequately to you why it is.
Does using randomness simply avoid anyone's challenging that there might be something "sinister" to your method?

GTChristie said...

If the numbering from 1 to 183 was done on a list still sorted by GPA, you probably came out with much the same averages and means in the student profiles (family income, family education level, individual GPAs, public vs private education history, etc) with just a different list of names. I would suspect approx 1/3 of the names were the same on both lists.

I agree the "hidden variables" argument sounds specious. Perhaps by comparing the results of the two selection methods, one could figure out whether any significant variables differ between the two. That might be interesting in itself and perhaps the young one could support her argument accordingly. That would be an empirical argument at least.

The pursuit of randomness in selection was mostly a shrewd application of grantsmanship, however. The method probably will not change the outcomes of later analysis of student achievement very much, I suspect.

Utopian Yuri said...

random selection eliminates the possibility that there are hidden variables. picking every fourth name almost certainly does not implicate any hidden variable, but like any non-random selection, it leaves open the possibility that someone could come up with a very creative hypothesis for why it's not truly representative. one does not want to waste one's time testing such creative hypotheses. random selection is superior because it forecloses the possibility that one would have to do so.

Kevin said...

To respond to some of the earlier comments: actually, random assignment *does* remove hidden variables, or what social scientists call "omitted variable bias." If you have a sufficiently large N, random selection eventually evens out all of the possible correlational effects of omitted variables, and in very large N, statistically you've basically guaranteed that you've removed any bias. At the least, random assignment in large N guarantees that the selection procedure itself is not correlated to any omitted variable.

Basically, there *are* hidden variables in her method, but a random selection, given an adequate N, renders them irrelevant. It's why large-N analyses using random assignment in social science used to be, and largely still are, the gold standard with respect to methods for selecting cases for experimental treatment.
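Kevin's large-N claim can be illustrated with a toy simulation: give each member of a large hypothetical population an unobserved trait, randomly assign half to treatment, and compare the group means on that trait. All the numbers here are invented; only the balancing behavior is the point.

```python
import random
import statistics

rng = random.Random(0)

N = 10_000  # a large-N toy population, not the 183 Bennett students
hidden = [rng.gauss(0, 1) for _ in range(N)]  # an unobserved ("omitted") trait

# Random assignment: shuffle the indices, then split the population in half.
idx = list(range(N))
rng.shuffle(idx)
treated = [hidden[i] for i in idx[: N // 2]]
control = [hidden[i] for i in idx[N // 2 :]]

# With random assignment, this gap shrinks toward zero as N grows,
# even though the trait was never measured or controlled for.
gap = abs(statistics.mean(treated) - statistics.mean(control))
```

JR's rejoinder below also shows up in this sketch: rerun it with N = 183 and the gap is noticeably larger on any single draw, which is why small randomized samples still leave room for chance imbalance.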

JR said...

To Utopian Yuri --
A count of one hundred eighty-three Bennett women is hardly "a sufficiently large N" to make your case for that number's being free of hidden variables. Perhaps one hundred eighty-three million might work. Or not.

Chris said...

Oh you definitely need random selection, and for very straightforward reasons. At the end of your study, you want to be able to make a causal inference about your intervention, preferably that it played a causal role in an increase in graduation rates for the students who received it.

If it were possible to conduct an experiment in this case, you would set things up so that all or as many other factors as possible that might influence graduation rate were either removed or kept the same between the experimental group, the group that receives the intervention, and the control group, the group that doesn’t receive the intervention. In so controlling the other factors, you can get a look at the unique impact of your intervention on the target variable, and therefore make causal inferences about the intervention and any changes in the target variable you observe.

Unfortunately, you can’t conduct a carefully controlled laboratory experiment, so you’re going to have to use statistical techniques that in a sense do the controlling of other factors for you. In order for those techniques to work, you have to do your best to make sure that the variables that you can’t control statistically (and since you’re doing social science, there are of course going to be plenty) don’t vary systematically with your research groups (that is, your experimental and control groups, or the group that receives the intervention and the group that doesn’t). If variables you can’t control statistically, that is, variables that you can’t measure and then place into any statistical analysis, vary systematically with your groups, then you will be unable to infer that your intervention causes any difference in the target variable between the two groups, because it may have been the difference in that other variable between the two groups that caused the difference in the target variable.

Now, the way you were selecting incoming students to be in the experimental group seems fine at first, because it seems to control for the obvious variables that might influence whether someone graduates, but it is possible, in fact likely, that there are variables that you won’t be able to measure (and therefore control statistically in your analysis) that influence, to some extent, who ends up where on the list you were using, and also influence your target variable. Because you don’t know what these variables are, you obviously can’t also use them in selecting your experimental group participants, so it is possible that they will vary systematically with the experimental and control group. The best way to avoid this, then, is to select your participants randomly, because it is unlikely (increasingly unlikely as your sample size goes up) that any variable will vary systematically with randomly selected groups. A random selection is not ideal, because it is possible that those unmeasured variables will still vary with the groups, particularly since you are restricted to a relatively small sample, but it’s the best method you have available since you can’t conduct a series of carefully controlled experiments.

Chris said...

Sorry to comment twice, but I had apparently exceeded the character limit:

If I understand your study correctly, you’re calling it a pilot study because you intend this as a way to determine which particular interventions have promise, and once this study is done, you will conduct a more thorough study of those promising interventions (if not, then you’re not conducting a pilot study). It looks like you are interested in, or perhaps worried about, the influence of some variables on the outcome of your study, and you want to make sure that you take those variables into account in the selection of your samples. That is possible through a method called stratified random sampling (the Wikipedia entry on stratified sampling is pretty informative). I’m not sure you really need this, or that you have the numbers available to do it, but it sounds like your consultant knows what she’s doing and can determine whether it’s appropriate and feasible. If nothing else, you could bring it up with her and she can either say, “Yeah, we can look into that” or, “Nope, that would be a really bad idea.”
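Stratified random sampling, which Chris mentions, just means sampling randomly within each subgroup you care about — here, say, in-state versus out-of-state — so the sample matches the class on that variable by construction rather than by luck. A rough sketch with made-up proportions (the actual in-state/out-of-state split of the Bennett class is not given in the post):

```python
import random

rng = random.Random(1)

# Invented roster: 120 in-state and 63 out-of-state students (proportions are made up).
roster = [("student_%03d" % i, "in" if i < 120 else "out") for i in range(183)]

def stratified_sample(roster, total, rng):
    """Draw a random sample from each stratum in proportion to its share of the roster."""
    strata = {}
    for name, origin in roster:
        strata.setdefault(origin, []).append(name)
    sample = []
    for origin, names in strata.items():
        # Proportional allocation; rounding may need a small adjustment in general.
        k = round(total * len(names) / len(roster))
        sample.extend(rng.sample(names, k))
    return sample

sample = stratified_sample(roster, 60, rng)
```

With these invented proportions the sample contains 39 in-state and 21 out-of-state students, mirroring the 120/63 split of the roster, while the choice of individuals within each stratum remains fully random.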

Robert Paul Wolff said...

Thank you one and all for the comments. Chris, I will follow up on your suggestions. You may in fact be right that I am not conducting a "pilot study" as that term is customarily used. If the results are good, I plan to expand the program until it incorporates all incoming students.

Chris Beyle said...

I only mention the distinction because it will influence some of the methodological considerations, both in the research design and the statistical analysis.