Daniel Dennett’s seven tools for thinking

Daniel Dennett is one of the leading philosophers of our generation. Here he offers seven tools for thinking.
One that struck me as especially good advice for young scholars was first offered by Anatol Rapoport: how to compose a successful critical commentary:

1. Attempt to re-express your target’s position so clearly, vividly and fairly that your target says: “Thanks, I wish I’d thought of putting it that way.”
2. List any points of agreement (especially if they are not matters of general or widespread agreement).
3. Mention anything you have learned from your target.
4. Only then are you permitted to say so much as a word of rebuttal or criticism.
One immediate effect of following these rules is that your targets will be a receptive audience for your criticism: you have already shown that you understand their positions as well as they do, and have demonstrated good judgment (you agree with them on some important matters and have even been persuaded by something they said). Following Rapoport’s rules is always, for me, something of a struggle…

Thinking critically about social science method

This is an interesting blog post by Christian Sandvig (Illinois iSchool) (via Marianne Ryan) about social science research methods. The title is hooey, but two deeper points are good ones, I think: not new, but often overlooked, and he provides references to more complete discussions (suitable, perhaps, for use in 840, ahem).
His two main points, as I see them: methods courses often focus too much on procedure when they should focus first and foremost on research design and the nature of evidence; and statistical significance is not substantive significance.
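To make the second point concrete, here is a minimal sketch (mine, not Sandvig’s; the numbers are invented for illustration): with a large enough sample, a substantively negligible effect sails past conventional significance thresholds.

```python
# A hypothetical illustration that statistical significance is not substantive
# significance: a trivial true effect becomes "significant" at huge sample sizes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1_000_000

# Two groups whose true means differ by a trivial 0.005 standard deviations.
a = rng.normal(loc=0.000, scale=1.0, size=n)
b = rng.normal(loc=0.005, scale=1.0, size=n)

t_stat, p_value = stats.ttest_ind(a, b)

# Cohen's d: a standardized effect size (difference in means / pooled SD).
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
cohens_d = (b.mean() - a.mean()) / pooled_sd

print(f"p-value:   {p_value:.2g}")   # tiny: "statistically significant"
print(f"Cohen's d: {cohens_d:.4f}")  # about 0.005: substantively negligible
```

The p-value answers only “could this difference be chance?”; whether a difference of five one-thousandths of a standard deviation matters is a question of research design, not of the test.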

Content to contribute: Wikipedia

From time to time I find pages in my areas of professional knowledge that seriously need improvement. On my long to-do list, editing Wikipedia never seems to make it to the top. But I might as well start a list in case I am looking for something to do in the future, or better yet, to suggest as an exercise for graduate students in my area.
Today I noticed:

  • Incentive compatibility: For example, the article says that there are different “types” of IC (dominant strategy, Bayes-Nash). These aren’t different types of IC. IC is a constraint (or sometimes a desideratum) that one can impose on problems we then solve under different rationality assumptions. (That isn’t a very good statement either!) Also, Bayes-Nash is defined incorrectly (the definition given is for Nash equilibrium more generally).
  • Strategyproof: This one is really dreadful. The concept is defined incorrectly at least once (and the mere fact that it is defined more than once in a single entry is not good): the claim is made that “strategyproof” is equivalent to incentive compatibility + individual rationality. NOT. Also, the rather absurd claim is made that the concept is “most natural to the theory of payment schemes for network routing”. I can’t even fathom what metric one might use to measure whether a concept is more or less “natural” in various settings, but in any case, it seems absurd on its face to privilege network routing applications over all other applications for which dominant strategy constructs (such as strategyproofness) are useful. I actually looked this one up because I heard someone use the concept incorrectly in a research presentation, which reminded me that a careful definition of strategyproofness is rarely stated, though the concept is used quite often (I sketch one just after this list).
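Since I am complaining that careful definitions are rarely stated, here is one standard formulation, sketched in my own notation (illustrative, not canonical). A direct mechanism with outcome rule f is strategyproof (dominant-strategy incentive compatible) if truthful reporting is optimal for every agent no matter what the others report; Bayes-Nash incentive compatibility weakens this to optimality in expectation over the others’ types.

```latex
% Strategyproofness (dominant-strategy IC): for every agent i, every true type
% \theta_i, every misreport \theta_i', and every profile \theta_{-i} of others' reports:
u_i\bigl(f(\theta_i, \theta_{-i}), \theta_i\bigr) \;\ge\; u_i\bigl(f(\theta_i', \theta_{-i}), \theta_i\bigr)

% Bayes-Nash IC: the same inequality need only hold in expectation over \theta_{-i},
% drawn from the common prior, with the other agents reporting truthfully:
\mathbb{E}_{\theta_{-i}}\bigl[u_i\bigl(f(\theta_i, \theta_{-i}), \theta_i\bigr)\bigr]
  \;\ge\;
\mathbb{E}_{\theta_{-i}}\bigl[u_i\bigl(f(\theta_i', \theta_{-i}), \theta_i\bigr)\bigr]
```

Stating it this way makes both complaints plain: the constraint (truth-telling is optimal) is the same in both cases, and only the solution concept changes; and individual rationality (agents weakly prefer participating to opting out) is a separate constraint altogether, so equating strategyproofness with IC plus IR is simply wrong.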

If you happen to pick up on one of these and do some editing, be sure to note it here!

What makes good qualitative research?

The debate between “qualitative” and… non-qualitative (it’s not all “quantitative”!) research has been going on for many, many years. Qualitative research includes ethnography and various methods based on interviews and detailed field observation, often of a relatively small number of cases. Typically, qualitative research eschews the more traditional approach to scientific research, described by King, Keohane and Verba (Designing Social Inquiry (1994)):

start out with clear, theoretically anchored hypotheses, pick a sample that will let you test those ideas, and use a pre-specified method of systematic analysis to see if they are right.

Quals claim their work is underappreciated and underfunded; non-quals criticize qualitative work as “unrigorous, unreplicable, unfalsifiable” (John Comaroff, in Michèle Lamont and Patricia White, Workshop on Interdisciplinary Standards for Systematic Qualitative Research (Washington: National Science Foundation, 2009), available at http://www.nsf.gov/sbe/ses/soc/ISSQR_workshop_rpt.pdf, p. 37.)
Howard S. Becker, one of the leading qualitative sociologists, recently wrote an essay elucidating this debate and offering some criteria for good qualitative research. He bases it on a review of two NSF reports (one released in March 2009) on the use of qualitative methods. (This is the same Becker known to many of us as the author of Writing for Social Scientists.)

I enjoyed reading this, as someone who has long struggled to understand what criteria are useful for judging whether qualitative research is “good” or not. What constitutes a contribution to knowledge? While Becker’s criteria, unavoidably, are a bit, well, qualitative, he offers specific characteristics to look for, and I find his list convincing, at least as a set of necessary conditions, if not sufficient.
My main beef comes down to this: qualitative scholars often describe their work as “exploratory”, and sometimes say that its purpose is to generate “grounded theory”. I’m all for creative insights and hypothesizing. But how much of a contribution to knowledge is it, especially if the hypotheses can’t even stand alone as rigorously true logical deductions (which may be surprising and enlightening on their own), if no one ever follows up the exploratory hypothesis generation to actually test, with reliable methods, whether those hypotheses are supported by sufficient, and sufficiently controlled, evidence to change our priors?
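To be concrete about what I mean by changing our priors (the Bayesian gloss below is mine, not Becker’s):

```latex
% A hypothesis H with prior credence P(H) is updated on evidence E by Bayes' rule:
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
% Exploratory work proposes the H; only follow-up testing with reliable methods
% supplies evidence E strong enough to move P(H | E) appreciably away from P(H).
```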

Olin Shivers’s Dissertation Advice

My colleague Brian Noble pointed me to Olin Shivers’s “Dissertation Advice”.

Shivers makes one main point, but makes it well: a thesis is an idea, and a dissertation is a document that supports your thesis. This clarifies a lot of thinking about the task, to wit,

You will know what things are essential, and what things are distractions or detours. You will know when to stop writing: when you have demonstrated your thesis. If your thesis committee makes unreasonable demands of you, you will be able to tell them: “(a) My thesis, as stated, is a solid advancement of the field, and (b) I have supported my thesis. This is all I need to do to graduate; your requests are above and beyond this threshold. Cancel them and give me my degree.”

and

A side benefit is that it provides an unassailable defense to an entire class of attacks on your work. For example, should someone attack your work by pointing out that it does not scale, you simply reply, “You may be correct, but right or wrong, your point is irrelevant. My thesis is that ‘crossbreeding gerbils with hamsters provides an order of magnitude speedup over standard treadmill technology.’ I clearly demonstrate factors of 12-17 in my dissertation; I make no claims beyond an order of magnitude.” This is one of the benefits of focus.

In between he writes pithily about good writing.
You might also enjoy Shivers’s advice on the night before your thesis defense.

Sleep and scholarship

I have been chronically sleep-deprived since college. Perhaps as a consequence, I have become interested in sleep research over the years (and I have been diligent about trying to teach my kids good sleep hygiene!).
Not a lot is known about the role of sleep in cognitive activity, but much more is known than a couple of decades ago. What does this have to do with scholarship? Many research studies indicate that long-term memory formation, learning, complex skill performance, and creativity are strongly affected by sleep patterns.
A good place to start learning about sleep research is Stanford Professor William Dement’s The Promise of Sleep. He explains the basic physiology of the sleep cycle and summarizes the state of sleep research (as of about 2000), with interesting results on memory, reaction time, learning, etc.
A lengthy article in today’s New York Times reports on research by Dement, recent work by Prof. Matthew Walker at Berkeley, and others, on the role of sleep in learning and memory. For example, there is a large body of evidence now that the period of deep sleep that occurs relatively early during a normal night of sleep is crucial for encoding and strengthening declarative memory (like memorized facts).
Stage 2 sleep, on the other hand, which mostly occurs during the second half of the night, seems critical for mastering motor tasks (like playing the piano).
A story on LiveScience.com reports on other research by Walker showing that emotional responses to negative stimuli dramatically intensify in the sleep-deprived.
Po Bronson wrote another lengthy journalistic article summarizing research on sleep and learning in New York Magazine (2007).
One piece of suggestive evidence that I find particularly compelling (because of my passion for playing the piano): in his famous studies of deliberate practice and expertise acquisition, K. Anders Ericsson and co-authors (1993) reported that the best violinists got measurably more sleep than the good violinists and the teachers, and also took more naps.

Drago Radev’s skill list for Ph.D. students

My colleague Drago Radev (with help from his former student, now graduated, Jahna Otterbacher) has compiled a list of skills Ph.D. students should develop before they complete their degree (some are specific to natural language processing or computational linguistics). As with many things Drago does, this rather takes my breath away, and I don’t think I score well enough, by his standard, on many of them (despite being 21 years past my Ph.D.!).
The list is long, so…


Should scholars rely on Wikipedia?

As soon as Wikipedia achieved critical mass, students began citing to it, and professionals and other writers have followed suit. Should research scholars rely on Wikipedia?
Neil Waters, a professor in the Department of History at Middlebury College, thinks that Wikipedia is a good place to get ideas, to get an initial introduction to a topic, or to get leads on references to pursue. He thinks students and scholars should not rely on it, however (that is, in scholarly currency, should not cite to it as a reliable source). He has published a short, cogent essay presenting his argument in the Communications of the ACM.
I agree with Waters. Indeed, Wikipedia agrees with Waters. This is not an attack on Wikipedia: it is a long-standing and general principle of scholarly research that one should not rely on (or cite to) tertiary sources, which include all encyclopedias (even the venerable Britannica). The problems posed by Wikipedia are of special concern, particularly for less popular topics, but the principle is general.
One of Wikipedia’s principles is “no original research”, and all fact assertions are supposed to be documented by citations to primary or secondary sources. The latter guideline is followed only partially, but it is one of the quite useful features of Wikipedia for scholars: get an introduction to a topic, and then start following the references to more reliable source material.

Should the digital revolution lower standards for truth?

Should students or scholars cite to Wikipedia as a reliable source? I admit that I have cited Wikipedia once or twice, though only to provide an informal definition and examples of a recent concept (for example, I recently pointed to it for emerging variants on spam such as spim, splog, spit, etc.).
The Middlebury College History Department has ruled that its students may not cite Wikipedia in research papers or exams (via the NY Times). The ruling was prompted in part by six students who recently made the same factual error on a Japanese history exam after relying on Wikipedia to study.
My inclination is to agree. Rapidly decreasing costs of communication and computation gave us networked information resources, which provide much faster and cheaper access to vast quantities of information. A somewhat unexpected consequence has been that many people confuse accessibility with reliability, and quote willy-nilly because “it’s on the Internet”. If more information is more readily available, wouldn’t we expect people to become more selective in picking sources? Certainly, that is what I think we teachers and scholars should promote: a higher, not a lower, standard.
The leaders of the Wikipedia project apparently do not disagree. Founder Jimmy Wales is quoted in the NYT article as saying that students shouldn’t rely on any encyclopedia as a citation for research. The following statement appears (at the moment!) on the meta-page Wikipedia:About:

While the overall [quality] trend is generally upward, it is important to use Wikipedia carefully if it is intended to be used as a research source, since individual articles will, by their nature, vary in standard and maturity.

Interestingly, one of the three core principles for Wikipedia content is that it be verifiable.

“Verifiable” in this context means that any reader should be able to check that material added to Wikipedia has already been published by a reliable source.

While this principle, if scrupulously and professionally followed, would ensure that we could rely on Wikipedia as a reliable source, I think the main point is different: every statement in Wikipedia, if correct, can be found in a more reliable source elsewhere. Careful students and scholars can search out those more reliable sources.
Indeed, many people I know (including me) advocate using Wikipedia primarily in this way: as an introduction or convenient overview of a topic, identifying facts or ideas that the scholar then verifies elsewhere, in more reliable sources.