Daniel Dennett is one of the leading philosophers of our generation. Here he offers seven tools for thinking.
One that I thought especially good advice for young scholars was first offered by Anatol Rapoport: how to compose a successful critical commentary:
1. Attempt to re-express your target’s position so clearly, vividly and fairly that your target says: “Thanks, I wish I’d thought of putting it that way.”
2. List any points of agreement (especially if they are not matters of general or widespread agreement).
3. Mention anything you have learned from your target.
4. Only then are you permitted to say so much as a word of rebuttal or criticism.
One immediate effect of following these rules is that your targets will be a receptive audience for your criticism: you have already shown that you understand their positions as well as they do, and have demonstrated good judgment (you agree with them on some important matters and have even been persuaded by something they said). Following Rapoport’s rules is always, for me, something of a struggle…
This is an interesting blog post by Christian Sandvig (Illinois iSchool) (via Marianne Ryan) about social science research methods. The title is hooey, but two deeper points are good ones I think — not new, but often overlooked, and he provides some references to more complete discussions (suitable for, perhaps, use in 840, ahem).
His two main points as I see them: methods courses often focus too much on procedure, when they should focus first and foremost on research design and the nature of evidence. And, statistical significance is not substantive significance.
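The second point can be illustrated with a toy calculation (a minimal sketch using only the standard library; the effect size and sample sizes are made up for illustration): with a large enough sample, even a substantively negligible effect becomes "statistically significant."

```python
import math

def two_sided_p(effect_size, n):
    """Two-sided p-value for a one-sample z-test of a standardized
    mean difference `effect_size` (in SD units) with sample size n."""
    z = effect_size * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))

# A substantively tiny effect: one hundredth of a standard deviation.
d = 0.01

print(two_sided_p(d, n=100))        # small sample: far from significant
print(two_sided_p(d, n=1_000_000))  # huge sample: p is vanishingly small
```

The effect is identical in both calls; only the sample size changes. The p-value tells you whether an effect is detectable, not whether it matters.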
Here’s a short, reasonably good report on a study of faculty opinions about what makes an excellent, good or unacceptable dissertation, from a book (Developing Quality Dissertations in the Social Sciences, B. E. Lovitts and E. L. Wert, Stylus Publishing, 2009).
From time to time I find pages in my areas of professional knowledge that seriously need improvement. On my long to-do list, editing Wikipedia never seems to make it to the top. But I might as well start a list in case I am looking for something to do in the future, or better yet, to suggest as an exercise for graduate students in my area.
Today I noticed:
- Incentive compatibility: For example, the article says that there are different “types” of IC (dominant strategy, Bayes-Nash). These aren’t different types of IC. IC is a constraint (or sometimes a desideratum), and one can impose it on mechanism design problems that are solved under different rationality assumptions. (This isn’t a very good statement either!) Also, Bayes-Nash is defined incorrectly (the definition given is for Nash equilibrium more generally).
- Strategyproof: This one is really dreadful. The concept is defined incorrectly at least once (and the mere fact that it is defined more than once in a single entry is not good): the claim is made that “strategyproof” is equivalent to incentive compatibility + individual rationality. NOT. Also, the rather absurd claim is made that the concept is “most natural to the theory of payment schemes for network routing”. I can’t even fathom what metric one might use to measure whether a concept is more or less “natural” in various settings, but in any case, it seems absurd on its face to privilege network routing applications over all other applications for which dominant strategy constructs (such as strategyproofness) are useful. I actually looked this one up because I heard someone use the concept incorrectly in a research presentation, and that reminded me that a careful definition for strategyproofness is rarely stated, though it is used quite often.
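For reference, here is one standard way to state the two constraints mentioned above (notation is mine, not from the Wikipedia entries: a direct mechanism $f$ maps reported type profiles to outcomes, and $u_i$ is agent $i$'s utility). Strategyproofness is the dominant-strategy version:

```latex
% Dominant-strategy incentive compatibility (strategyproofness):
% truthful reporting is optimal whatever the others report.
u_i\bigl(f(\theta_i, \theta_{-i}), \theta_i\bigr)
  \;\ge\; u_i\bigl(f(\theta_i', \theta_{-i}), \theta_i\bigr)
  \qquad \forall i,\; \theta_i,\; \theta_i',\; \theta_{-i}

% Bayes-Nash incentive compatibility: truthful reporting is optimal
% in expectation over the others' types, assuming they report truthfully.
\mathbb{E}_{\theta_{-i}}\!\left[u_i\bigl(f(\theta_i, \theta_{-i}), \theta_i\bigr)\right]
  \;\ge\; \mathbb{E}_{\theta_{-i}}\!\left[u_i\bigl(f(\theta_i', \theta_{-i}), \theta_i\bigr)\right]
  \qquad \forall i,\; \theta_i,\; \theta_i'
```

The difference is the quantifier: the dominant-strategy version must hold for every realization of the other agents' types, the Bayes-Nash version only in expectation over them. Neither says anything about individual rationality, which is a separate constraint.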
If you happen to pick up on one of these and do some editing, be sure to note it here!
The debate between “qualitative” and…. non-qualitative (it’s not all “quantitative”!) research has been going on for many, many years. Qualitative research includes ethnography and various methods based on interview and detailed field observation, often of a relatively small number of cases. Typically, qualitative research eschews the more traditional approach to scientific research, described by King, Keohane and Verba (Designing Social Inquiry (1994)):
start out with clear, theoretically anchored hypotheses, pick a sample that will let you test those ideas, and use a pre-specified method of systematic analysis to see if they are right.
Quals claim their work is underappreciated and underfunded; non-quals criticize qualitative work as “unrigorous, unreplicable, unfalsifiable” (John Comaroff, in Michèle Lamont and Patricia White, Workshop on Interdisciplinary Standards for Systematic Qualitative Research (Washington: National Science Foundation, 2009), available at http://www.nsf.gov/sbe/ses/soc/ISSQR_workshop_rpt.pdf, p. 37.)
Howard S. Becker, one of the leading qualitative sociologists, recently wrote an essay elucidating this debate, and offering some criteria for good qualitative research. He bases it on a review of two NSF reports (one released in March 2009) on the use of qualitative methods. (This is the same Becker known to many of us as the author of Writing for Social Scientists.)
I enjoyed reading this, as someone who has long struggled to understand what criteria are useful for judging whether qualitative research is “good” or not. What constitutes a contribution to knowledge? While Becker’s criteria, unavoidably, are a bit, well, qualitative, he offers specific characteristics to look for, and I find his list convincing, at least as a set of necessary conditions, if not sufficient.
My main beef comes down to this: Qualitative scholars often describe their work as “exploratory”, and sometimes say that its purpose is to generate “grounded theory”. I’m all for creative insights and hypothesizing. But how much of a contribution to knowledge is it — especially if the hypotheses can’t even stand alone as rigorously true logical deductions (which may be surprising and enlightening on their own) — if no one ever follows up the exploratory hypothesis generation to actually test, with reliable methods, whether those hypotheses are supported by sufficient, and sufficiently controlled, evidence to change our priors?
My colleague, Brian Noble, pointed me to Olin Shivers’s “Dissertation Advice”.
Shivers makes one main point, but makes it well: a thesis is an idea, and a dissertation is a document that supports your thesis. This clarifies a lot of thinking about the task, to wit,
You will know what things are essential, and what things are distractions or detours. You will know when to stop writing: when you have demonstrated your thesis. If your thesis committee makes unreasonable demands of you, you will be able to tell them: “(a) My thesis, as stated, is a solid advancement of the field, and (b) I have supported my thesis. This is all I need to do to graduate; your requests are above and beyond this threshold. Cancel them and give me my degree.”
A side benefit is that it provides an unassailable defense to an entire class of attacks on your work. For example, should someone attack your work by pointing out that it does not scale, you simply reply, “You may be correct, but right or wrong, your point is irrelevant. My thesis is that ‘crossbreeding gerbils with hamsters provides an order of magnitude speedup over standard treadmill technology.’ I clearly demonstrate factors of 12-17 in my dissertation; I make no claims beyond an order of magnitude.” This is one of the benefits of focus.
In between he writes pithily about good writing.
You might also enjoy Shivers’s advice on the night before your thesis defense.
I have been chronically sleep-deprived since college. Perhaps as a consequence, I have become interested in sleep research over the years (and I have been diligent about trying to teach my kids good sleep hygiene!).
Not a lot is known about the role of sleep for cognitive activities, but much more is known than a couple of decades ago. What does this have to do with scholarship? Many research studies indicate that long-term memory formation, learning, complex skill performance, and creativity are strongly affected by sleep patterns.
A good place to start learning about sleep research is Stanford Professor William Dement’s The Promise of Sleep. He explains the basic physiology of the sleep cycle and summarizes the state of sleep research (as of about 2000), with interesting results on memory, reaction time, learning, etc.
A lengthy article in today’s New York Times reports on research by Dement, recent work by Prof. Matthew Walker at Berkeley, and others, on the role of sleep in learning and memory. For example, there is a large body of evidence now that the period of deep sleep that occurs relatively early during a normal night of sleep is crucial for encoding and strengthening declarative memory (like memorized facts).
Stage 2 sleep, on the other hand, which mostly occurs during the second half of the night, seems critical for mastering motor tasks (like playing the piano).
A story on LiveScience.com reports on other research by Walker showing that emotional responses to negative stimuli dramatically intensify in the sleep-deprived.
Po Bronson wrote another lengthy journalistic article summarizing research on sleep and learning in New York Magazine (2007).
One piece of suggestive evidence that I find particularly compelling (because of my passion for playing the piano): In his famous studies on deliberate practice and expertise acquisition, K. Ericsson and co-authors reported that the best violinists got measurably more sleep than good violinists and teachers, and also took more naps (1993).