Wednesday, May 20, 2009

Faked journals -- Tip of the iceberg?

Through some investigative reporting, a magazine covering the scientific community has provoked a publisher of scientific journals into admitting it created six "fake" journals between 2000 and 2005 that were essentially secret ads for pharmaceutical companies.

One fake journal reprinted real articles from other sources, virtually all of them favorable to the drug-manufacturing giant Merck, according to The Scientist, the magazine that broke the story.

Merck appears to have secretly sponsored the creation of that fake journal, The Australasian Journal of Bone and Joint Medicine, which not only did not have the same controls and editorial oversight as real journals, but also did not in any way disclose its connection to the manufacturer. The superficial impression was that it was a real, independent journal.

Elsevier, the publisher of the fake journals, is a well-known firm that also publishes many legitimate journals, and many of those carry ads from manufacturers. But in those legitimate cases, the ads themselves reveal any possible financial entanglements, and the editors of the journal are generally assumed to be independent, relying on peer review rather than corporate input to determine which articles to publish. The six fake journals, on the other hand, appear to have been secretly sponsored by companies that determined their contents, even if some of the articles were real and had been previously published elsewhere by real journals.

Elsevier does not appear to have publicly identified which companies sponsored the other five journals; we know about Merck's involvement in the sixth only because The Scientist had already identified that relationship in an earlier news story. Elsevier has, however, said that all of the fake journals were published out of its Australian office over a period of five years by people no longer with the company, and it maintains that this sort of practice does not reflect its usual priorities.

Nevertheless, reader comments appearing below the article in The Scientist reveal that the discovery has touched off or renewed some serious concerns: Have we in fact identified all of the fake journals? Many small journals exist for short periods of time. Others held by Elsevier -- or even by other companies -- might be similar to the Australasian Journal of Bone and Joint Medicine. And what of the "legitimate" journals? How neutral are they, really?

Writes one reader, TS Raman:
For all practical purposes, a journal that merely looks like it is "peer-reviewed" is no different from one that has real peer review, but of a very poor quality. I say this because there are scores of journals published by professional or scientific societies, and in-house journals published by institutions, laboratories, etc., all of which are virtually "captive journals". There is a plethora of such journals in India. They are all nominally peer-reviewed, but the review is often a complete farce. The reviewers may not have expertise in the field of study to which the paper pertains: for instance, a person whose [nominal] speciality is nuclear physics may review a paper on biochemistry.
Both are legitimate concerns, but they are neither new nor unique to the scientific community: Readers have always needed, and will always need, to be able to discern the difference between suspect and trustworthy material. It can be done.

In one 1SC class I am teaching, a team of students gave a presentation on a journal dealing with climate change. A student in the audience asked whether the journal gave much attention to skeptical arguments (that is, arguments that maybe global warming is a natural phenomenon, not the result of pollution). Even as the student asked his question, it was clear he had reached the same conclusion I had, just from listening to the presentation: No, the journal was strictly in the believer camp. If a scientist or lab had data undermining the consensus on global warming, that would not be a good journal to submit to. The journal was real, in the sense that it had real scientists as editors and authors, and used peer review, but it was pretty easy to tell what its biases were, just from a description of its history, its mission, and a list of articles it had recently published.

This is often the case.

We can all tell when we've flipped the channel to an infomercial, too, without necessarily being able to spell out exactly how we know that's what we're seeing.

Similarly, when one is looking at a Web site or a "news" publication or a white paper that is really advertorial (that is, composed of articles that look real but are basically ads for a product), it seldom takes investigative reporting to smell out the advertising. If you read very often, you start to notice little things like ...
  • The presence of trademark symbols (like the R in a circle, or the letters TM superscripted), which only the original company ever includes. No, Merck probably didn't include these in the faked journal, but you'll see this all the time in press releases and "white papers" that are really marketing gimmicks. Why? It all boils down to Kleenex. I'll explain: When we say we are Xeroxing something, or need a box of Kleenex, or rode a Jet Ski, or got into a Jacuzzi, or drank a Coke, we might not in fact be using devices made by those companies. There are personal watercraft that Jet Ski didn't make. There are photocopiers not made by Xerox. There are hot tubs not made by Jacuzzi. All of those are brand names that have become "generic." Having a name become generic seems like a good thing, but there's a nasty consequence: When you go to court later to try to get someone to stop using your name to sell their product, the court considers whether you have been consistently enforcing your trademark. If you have made it clear that the term can only be used by you, then you can win that case. If you have allowed your term to become generic, then you no longer really "own" the term, and you might lose. For this reason, legal departments of big companies spend a lot of their time putting trademark symbols (R and TM) next to their names on all of their materials, and sending nasty letters to anyone who uses the terms generically. All of this means that when the marketing department cooks up a really convincing fake document that says XYZ Corporation's new BackShaver is amazing, the legal guys are going to fight to put a little TM next to that word, and when they win that fight, they tell the rest of us that the article was written by the guys who make BackShaver. No one else cares about putting the TM there, and no one else is required to do it.
  • The constant repetition of the company's name or the product name, often starting in the first sentence. Marketing people learn that repetition builds name recognition, and that name recognition builds sales. So they make sure they use those names far more often than a person would in natural conversation or real writing. I bet the Merck journal did do this. If it made any changes to the articles or titles of articles, in fact, my bet would be that they inserted company and product names.
  • The absence of snark, sass, and complexity. Real, professional writers are always trying to sell you on their objectivity; marketing departments are always trying to sell you on their product. These lead to important differences in the copy. For instance, real writers often take little shots at everything and everyone, just to let you know that they're independent. These shots might be as simple as little disclaimers or qualifiers ("The new BackShaver is great at removing back hair, but I really hated it when I had to clean the device") or harmless little jabs at the Marketing Department itself ("I wish, though, that they'd given the BackShaver a better name -- who wants to stand in line at a store with a box that says 'BackShaver' on it?") Marketing Departments generally cannot stand these sorts of comments. If a piece never teases, and never has a complicated opinion involving some sort of negative, it's probably an ad in disguise.
  • Dwelling on details of success that neutral writers care little about. Marketing people know that customers will buy things they think are popular, so they rarely can resist pumping even their faux materials full of user statistics ("90 million copies sold") and testimonials ("'I use it all the time,' says Suzy, a freshman at UCLA"). They also try to build up connections between the current product and previously successful products made by the same company, leading to weird paragraphs in which they say that the 2007 version was great, but the 2009 version is better. A real writer might also talk about popularity, but doesn't necessarily see popularity as a good thing (and often throws in a plug for an underdog -- the software reviewer will talk about Microsoft's popularity, but mention he uses Linux himself, for instance). When describing a new edition or upgrade that improves on the 2007 version, he'll say the obvious: that it fixes problems that the 2007 version had. Marketing people won't ever describe it that way: As far as they're concerned, the old 2007 version was also perfect, just not as super perfect as the 2009 version.
  • Presence of contact information. If it has information for how to contact someone in the company, it's probably an ad, even if it doesn't look like one. (Exception: If it says "Call Bob Smith in the legal department at 111-555-1213 to complain about this critical safety issue!" then it's obviously not an ad.)
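Several of the signals above are mechanical enough to screen for with a script. Here is a minimal Python sketch of that idea -- the function name, sample text, and the signals chosen (trademark marks, brand-name repetition) are my own illustration, not anything drawn from the Elsevier case itself, and high counts are only a prompt for closer reading, not a verdict:

```python
import re
from collections import Counter

# Matches the marks only the trademark owner bothers with:
# the registered symbol, the TM symbol, or their plain-text stand-ins.
TRADEMARK_MARKS = re.compile(r"[\u00AE\u2122]|\(R\)|\(TM\)")

def advertorial_signals(text, brand_names):
    """Return crude advertorial indicators for a piece of text.

    Counts trademark symbols and occurrences of each candidate brand
    name (case-insensitive). These are the "little things" a wary
    reader notices; a heuristic first pass, nothing more.
    """
    words = re.findall(r"[A-Za-z]+", text.lower())
    word_counts = Counter(words)
    return {
        "trademark_symbols": len(TRADEMARK_MARKS.findall(text)),
        "brand_mentions": {b: word_counts[b.lower()] for b in brand_names},
        "total_words": len(words),
    }

# A hypothetical advertorial snippet, in the spirit of the examples above.
sample = ("The BackShaver(TM) is amazing. Reviewers agree the "
          "BackShaver removes hair fast. Get your BackShaver today!")
print(advertorial_signals(sample, ["BackShaver"]))
```

Three brand mentions in three sentences, plus a trademark mark, is exactly the density no independent writer would produce; a neutral review might mention the product once and then call it "the device."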
A few closing thoughts on this issue:
  1. Don't assume that because Merck and Elsevier's Australian branch did this, they're "getting away with it" -- or that this means you should be unethical too, in order to get ahead. Generally, the corrupt people you hear about who seem to be getting away with things aren't. You hear about what they did, but you don't see the negative consequences. (See my earlier post on this subject.)
  2. A very enterprising student could do some cool detective work on this sort of thing and then report the results in a paper: Through textual analysis, study a handful of other small, short-run journals published by Elsevier with the word "Australasian" in the title (quite a few exist -- see the reader comments for the article I linked above), and look for signs of shadiness. Have the articles previously appeared elsewhere? (Web of Science will tell you.) Does one company's name or product name keep coming up, even though that company doesn't appear as a sponsor of the journal? Or what about the other five journals? We know their names (see the article), but not who sponsored them. Can you figure out which companies were involved in them by looking at the articles they published? (The enterprising student wouldn't have to cover all of this -- even covering one journal could be interesting.) Why is this a worthwhile project for a student? Among other things, discovering a fake journal while you're an undergraduate would probably lead to real publication for you (or, at least, some news stories about your discovery), and give you something to put on your CV.
  3. Be on the lookout for fakery. The Internet has made this easy, because Web sites are cheap. Faking an actual book or physical journal isn't usually cost-effective, due to printing costs. But Web sites are easy to fake. (In fact, if you plan to do what I describe in #2 above, you might narrow your search to journals that don't have print versions.)

Thursday, May 14, 2009

Churning

"But I worked sooo hard on this paper!" a student says to me, upon seeing her grade.

I believe her. But I also think she's wrong. That sounds like a contradiction, but it isn't, really. I used to tell students that there's a difference between work and what I call "churning" -- an activity that feels like work, is very unpleasant, but doesn't really do anything. When you should be working on a paper, but you keep sorting your notes, checking your email, alphabetizing your snacks, and complaining to your friend about how this paper is killing you, you're churning, not working on the paper. It's a natural inclination; I do it, too. And it feels like work, because it takes up a lot of time, and isn't fun. But it doesn't accomplish anything, other than using up your time.

Cal Newport, an MIT graduate student, calls churning "pseudowork," but we're pretty much talking about the same thing. His column (see previous link) is well worth reading, since it explains -- much better than I ever have -- why straight A students tend to spend less time studying and writing than other students.

If you've ever heard the expression "work smart, not hard," that's what he's talking about. His points are spot-on, and I strongly suspect that a lot of students would enjoy their academic lives more if they were exposed to his advice, which he sets up to be pretty easy to follow. His basic mission is to show students how to simplify their lives, have more time for enjoyment, and do better, all at the same time. It is possible. And it isn't difficult.

I used to have a whole speech about churning that I'd give students in my 1A classes, but now I'll probably just have them read Newport's post. It's pretty good.

- GS

Sunday, May 10, 2009

"Margaritaville"

Following my recent lectures about popularization and accommodation, Cody Lewis (a 1SC student) emailed me about an episode of South Park, titled "Margaritaville," in which the South Park team takes a shot at explaining the recent economic meltdown, albeit crudely (pardon the double-entendre). Mr. Lewis pointed out that it was a fairly good popularization (and I, having also seen it, concurred), despite being dressed up as fiction.

Over the course of a few emails, we discussed the state of journalism today, with Jon Stewart and the creators of South Park somehow seeming -- despite the humor -- to be more like investigative journalists than our journalists are. Since I used to be a journalist, and a newspaper editor, I have a lot to say on this topic, and after telling a true story to Mr. Lewis that I thought captured the current journalistic mindset pretty well, I decided to share it here on my blog.

Here it is:

Many years back, I was a newspaper editor for a business journal. I had assigned one of my reporters a pretty standard business story -- a local company had filed for bankruptcy, and it was a big enough deal to warrant an article. He turned in his first draft, and it didn't have a lot of information in it: He said the company wouldn't return calls, so he was stuck with what he could find in the court filing itself. It took a lot of work on my part to get him to do any real digging -- to call people other than company representatives, for instance.

At one point, I asked him this: "How many creditors does the company have?"

Him (sighing): "I don't know. I'll go try to find out."

Later, he returned: "The court filing doesn't say, and the company won't call me back, so I don't know."

Me: "But you have the court filing, right?"

Him: "Yes."

Me: "And it has a list of the creditors, right?"

Him: "Yes, it does, but it doesn't say how many there are."

Me: "No, of course it doesn't. But it has a list."

Him: "Yeah, so?"

Long, long pause, as I waited for him to figure it out. He didn't. So ...

Me: "Count them."

Him (shocked): "You want me to count them?"

That's what I dealt with, pretty much every day, in a newsroom. When I read or watch the news, I still see that basic attitude, that same overreliance on pre-packaged information. Many (not all) reporters seem inclined to lean back in the child-seat and wait to be spoon-fed pre-digested, infantile matter. It didn't use to be like that, but it is now.

And that's why Jon Stewart can show them up. He knows how to count, and isn't afraid to use his fingers to do it.

- GS

Sunday, May 3, 2009

Getting Away With It

I've written this blog entry as a kind of "open letter" to good, honest students who get frustrated because they think cheaters, scammers, and BS artists are doing half the work and getting the same grades. Throughout this entire posting, I'm going to assume that anyone reading it is an honest, hard-working, ethical, concerned student, and that he or she won't take offense at anything written here about dishonest students, who, I hope, are not reading. (It's usually a safe assumption, but I thought I ought to state it outright.)

Anyway ... Let's start with a story.

Several years ago, three of the best students I had at the time teamed up to write a paper together. The paper did well, and they were happy. But then a couple of weeks later, they were reading papers that other students had posted online, and they stumbled across one that had ripped off roughly three of their paragraphs.

Ticked and bemused, they came to me.

"Mr. Scott, we really hate to rat on another student, but this is really bothering us. Chuck* plagiarized our earlier paper, and we thought you should know about it."

(* No, his name's not Chuck. All the specifics here are tweaked, to protect his identity -- an issue that actually ties into the point I hope to make with this blog entry.)

"Really?" I asked. "Show me."

They did. They even went so far as to print out the two versions and highlight all of the similarities in carefully documented notes.

I called Chuck in for a conference, showed him the two papers, got him to admit that he'd copied their paper, had him fill out Student Judicial Affairs paperwork, and told him he was getting an F in the class. He asked whether he should keep attending.

I said, "Well, even if you do, you'll have an F in the class, so there wouldn't be a lot of point in it."

Usually, when I say that, students get the hint and take the rest of the term off (at least, from me).

Chuck, however, stuck around. He kept showing up to class, and participating, apparently hoping to change my mind by showing me that he wasn't so easily discouraged. After he'd done this for a while, one of the students who'd turned him in came to see me, looking simultaneously sheepish and rather annoyed.

"I know you're the teacher, and that what's going on with Chuck is none of our business at this point, but I just wanted you to know that we think it's really unfair that he might still pass this class, given that he was ripping off work from other students. We're doing our own work, and working hard, and for all we know, he's just stealing stuff from other students now, instead of from us. I guess we were just wondering why you're giving him a second chance?"

Well, I wasn't, of course. I'd already failed him. But I couldn't say so -- student privacy rules prohibit me from telling you that Bob got an F on a test, or that Susan got an A, or that Chuck cheated and is failing the course. (Interesting note: If your parents call me and ask me how you're doing, I can't tell them. The same laws kick in there. Naturally, there are weird exceptions, and there are situations in which there's no helping it -- when someone on a team paper contributes plagiarized material, I often have to talk to the whole team about it, which means there's some privacy leakage along the way.)

At any rate, what this meant was that I couldn't tell Concerned Good Student that she was wrong, that Chuck had failed the class, and that he was still showing up for reasons totally unfathomable to me.

Instead, I had to say, "Well, thank you for letting me know about that. I assure you I'm taking the matter very seriously."

She snorted, like what I'd just said was PR-ese for "I don't care at all; stop bothering me." And I don't really blame her. But I couldn't tell her. Those are the rules.

Here's another rule that applies to this situation: Unless Chuck threatened to knife someone (or did something similar), I couldn't tell him to stop showing up. He'd paid his tuition. If he wanted to put in all of that work for an already promised, guaranteed F, that was his business.

As far as the three Concerned Students know, Chuck passed the class with flying colors. He didn't, of course. But as far as they're concerned, he "got away with it."

This story is not at all unusual. I see this same basic narrative repeated several times a quarter.

Speculation by good students that such-and-such bad student is "getting away with it" is common. And it's almost always wildly wrong.

I've had students come to me all worked up because they're sure the team flake who never showed up to class and never contributed to the team project is going to unfairly get the same grade as the rest of the team. If I'm aware of the behavior, he doesn't. But I can't tell a student that Bob got an F on the project, unless that student is Bob. So I say, "Thank you for the information. I'll take it into consideration." And that's pretty much all I can say.

A similar story, this time about BS: I had a student -- perhaps the best writer I've had in a class all year -- write in a blog last term that, after reading material by some fellow students, she had decided many of her fellow students were BS artists who throw together long strings of big words in an attempt to impress, even though their sentences say little, or are vague, or are empty. She concluded that most of them would probably get A's, and she'd get a B, even though she was pretty sure she wrote better than they did. She figured she'd keep writing simply, even if it meant a lower grade, out of principle.

I knew which students she was talking about, and they weren't getting A's. In some cases, they were far from it -- and for many of them, long strings of BS were the primary reason they were struggling. She, on the other hand, was a fabulous writer (still is), and should have known it by then, since that was her second class with me.

I'm not sure where these "getting away with it" narratives come from, but they're persistent -- and they're so often wrong that I finally decided to address the matter here. These narratives are wrong for several reasons, but I'll focus on three:

1. With very rare exceptions, we're not blind, stupid, or inexperienced.

Most of us can see the obvious. Most of us have been teaching for a while. All of us, before we taught, were students: We sat in those uncomfortable chairs; tried to figure out how to arrange things on those fold-out "desktops" so that our arms and notes and other gear could all fit; watched people pass notes in front of us; felt the person behind us kick our chair repeatedly; and watched the clock a lot if the lecturer tended to drone. Outside of class, we liked to think we were super-scholars, capable of acing classes without always showing up, doing the readings, or following instructions closely -- believing this sort of thing gave us more time for dating and playing networked videogames in the dorm hallways. We all had at least one buddy who liked to brag that the 6-page paper he just turned in was "total BS, with nothing comprehensible at all." We are full people, and have histories that are much like yours, but longer. Those histories include awareness of the sorts of things that other students do. Also, from the front of the room, the view of the scene is a lot better than you'd think, and we see a lot more than we comment on aloud.

2. "They" are bad at gambling.

Really. Maybe they're okay in Vegas, but they're lousy gamblers in a classroom. Dishonest students have a staggeringly high likelihood of getting caught, partly because most types of cheating are easy to catch, and partly because cheaters invariably (due to random chance) make some sort of dumb error after a while. And the payoff is limited: That paper from an online essay site probably doesn't match the assignment quite right, so it's doomed at the start to receive a low (and possibly non-passing) grade; all of the screwy citation habits that go into trying to "cover up" plagiarism stand out in a paper and can cost the author points even if the plagiarism itself isn't detected.

(Think about it this way: If you don't cite your sources, you can get nailed for not citing sources. If you cite the sources you plagiarized from, the grader will notice the crime if he or she looks them up. If you cite other sources as a smokescreen, you get nailed for fabrication -- for citing a source that really didn't say what you say it said. That's a kind of academic dishonesty, too, and every bit as serious as plagiarism. If you make up sources -- as I've had a few students do -- those are the easiest to catch of them all. The whole citation thing is designed to make verification of your research possible. Any attempt to mess with that verification makes it unverifiable -- and the paper gets a lower grade because of its unverifiability. This is a very hard game to win, if one treats it like a game, which is why I say these are bad gamblers.)

What about BS? We all know BS, pretty much, when we read it. (A quick definition: BS occurs when an author is so unsure of his knowledge, understanding, or ideas, that he fills his page with verbal fog, with sentences and phrases that mean nothing to him, but which he hopes will fool others into believing an idea was present.) Teachers tend to feel insulted when they see BS. Because they want to be fair to the occasional student who has ideas but is actually unclear, they try to give some benefit of the doubt, but they still grade the paper down for poor articulation, poor grammar, poor word choice, poor style, or any number of other features that tend to go hand-in-hand with BS. That is, the BS tends to punish itself, and most teachers will simply let it do so.

3. Long-Term Ramifications

Finally, there is something like a tortoise-and-hare effect involved here. The good, honest, struggling student, who scrapes by with an honest and bloody C, might have great reasons for resenting the flakey BS-er who managed a B- simply because he did a couple of assignments well, when he cared. But over the long haul, I'd bet my money on the C student doing better.

Why? Well, that brings me to another story.

Several years ago, while I was working on my Ph.D., I enrolled in a series of undergraduate statistics classes: three quarters' worth. The first class had about 350 students in it; the second 110; and the third about 30. I hadn't taken an undergraduate-level class, with undergraduates, for a very long time, and found the experience ... fascinating.

Just before class in both of the first two courses, if homework was due, students would be madly swapping answers with each other, sometimes in full view of the teachers, who pretended not to notice the answer market. These answer swaps seldom dealt with why or how the answers worked -- they just involved blind trading of answers. When tests came up, similar trading produced community notes for last-minute cramming sessions. Shortly after tests and homework were completed, students forgot most of what they were supposed to learn.

But each class built on the one before it. By week two or so of the third class, there was this huge gap between those of us who had tried to understand the material and those who had gamed their way through. The latter students were baffled most of the time, I'd say, and did terribly. A lot of them dropped or withdrew. If they needed the third class for graduation, I don't know how they managed it, since they'd already "passed" the previous two and thus could not retake them to learn the stuff they'd missed.

A lot of the world works like this, particularly in a university setting. Bad habits smile at you now, but kill you later. Every once in a while, a former student asks me for a letter of reference. Not surprisingly, they're all hardworking, honest students who have some reason to imagine I'd say nice things about them. The bad apples that you kind, hard-working, Concerned Students worry about so often have limited options in similar situations -- they've burned those bridges. They're trying to figure out how to pass their upper-division classes with skills that atrophied while they were faking their way through lower-division units, and they're scrambling to find anyone to write them a letter. Many of them do find three letter-writers, but their letters never compare well to those of the honest students: The bad apple gets a vague and bland letter from a former TA ("I can confirm that I had Chuck in a literature discussion section during the spring of 2006, and that he showed up for discussion sections a few times"), while the ethical hardworker gets a letter that says she's an ethical hardworker, written by a recognizable name in the field.

In short, don't worry about the trolls out there -- they seem tough, but tend to quietly expire off-stage. In a few years, you'll look around, and wonder where a lot of them went.

Monday, April 27, 2009

Confusion and Learning

A common misconception about words like teaching and learning is that they describe what happens when teachers tell students what they need to know, and the students remember it.

Those of us who have been students should know better -- very, very few people ever learn this way. We forget what we memorized for that test. We parrot stuff back to teachers without necessarily understanding what it is we're saying or why we're saying it. (Those freaks among us who remember all this stuff -- a trifling percentage of the population -- do very well on game shows, but with startling frequency, don't do so hot at creative or analytical work.)

When do we learn? When we have to figure something out for ourselves, we learn, and remember it well. When we have to explain something to someone else, we often find we learn it pretty well. When we use knowledge and apply it to problems, we learn pretty well, then, too.

What's interesting -- but often unnoticed -- about all of these situations is that they involve confusion. You start off confused about something, but work it out on your own, until you reach understanding, and then you know it forever (or until your next head injury).

One mark of a sharp, well-trained mind is that it's comfortable during moments of confusion, and has learned to see them as okay. But a lot of us treat confusion as a bad thing -- something to be avoided. Students don't want to be confused, and teachers often (figuratively) wring their hands in despair when they realize their students are confused about something. But in a class where learning is happening, some confusion is inevitable -- it's the first step in the learning process. First, you think Thomas Kuhn makes no sense, but you plug on and try to make sense of him. If you keep at it, eventually you get it, and then you've learned something. (I'm not saying that all confusion is good. Just as there are good and bad types of fat, some types and causes of confusion are terrible for you. But a lot of the stuff that causes complaints is good fat, or good confusion, and unfairly indicted.)

The best ways to eliminate confusion in a class are often bad for you: The teacher can have you do things you already know how to do, or she can give you such clear step-by-step instructions for everything that you can surf the class on autopilot with -- as Hermione Granger in Harry Potter and the Order of the Phoenix puts it -- "no need to think." If the teacher instead asks students to figure something out, confusion is inevitable, at least until learning sets in.

Some people think that procedural confusion -- confusion about what to do or how to do it -- is among the bad-fat confusions. I used to think that, but now I'm not so sure. Lately, I've been doing little experiments to see how well students learn things when instructions are vague and fuzzy (as some of my readers will doubtless have noticed). I give students the sorts of missions they'll get in a real workplace ("Hey, Bob. Write me a press release!") and the sorts of instructions one usually gets in those environments ("How? Don't ask me. I don't know. Look it up somewhere."). I worked for more than a decade in the "real world" outside of academia -- in government, in industry, in newsrooms -- and this sort of thing is remarkably common. The people who get promoted, who do the best, are the ones who can manage in a sea of vague instructions, who can do solid, quality work without hand-holding. Generally, they're people who have learned the hard way that they are able to figure things out, if they really want to do so. They come up with their own instructions, and being their own masters, become masters. Even speaking for myself, I know that the stuff I've had to figure out in this way -- the instructions I've had to give myself -- makes up the most useful sets of instructions I've ever had.

So for about a year now, a few times a quarter, I throw students a fairly vague, work-style prompt, in this sort of spirit: "Hey, the president wants mission statements and department philosophies, in memo format, by 7 a.m. tomorrow. Go write one for us. ... No, I don't know what he's talking about either. Figure it out. And make it good." And then I see how they do. I grade easier on these than I do on other papers (because not to do so would be evil), and try to make sure they know where things went wrong afterward, but for a while, at least, they have to come up with their own plans, instructions, and standards. My goal here isn't emotional security, but learning, either during the process or right after it, when they can see where things went wrong.

Saturday, April 18, 2009

Kuhn, Popper, Mars, and Venus

As my student readers already know, I recently read a large stack of student papers about scientific philosophy, in which students were responding to essays by Kuhn, Popper, Masterman, and Bacon, authors who are not only tackling some tough questions but also arguing with each other about them.

I was pleased that many students seemed to have understood many of the key points -- these writers aren't easy to understand. Even Popper, the clearest of them, has some tricky elements. It often takes students half a quarter to get this stuff, so I'm happy that so many have gotten the broad strokes so soon.

INTRIGUING GENDER DIFFERENCES

However, not surprisingly, some students had trouble understanding the readings. I bring this up not to embarrass them (it's difficult material, and I expect to be working with the class on understanding it for a while longer), but because I was struck by how differently men and women handled readings that were tough to understand:
  • In general, men who didn't understand the readings tended to be dismissive of them, saying things like, "Maybe it's just me, but anything that's this difficult to understand probably isn't worth understanding." Or they'd talk for a page or so about how pointless the whole exchange was.
  • Meanwhile, most of the women who had trouble understanding the readings tended to focus on the character of the debate, rather than the content of it. Their papers were about moods and tone and attitude, rather than about philosophy. As a result, I read pages and pages about how Kuhn and Popper didn't get along and couldn't play nice with each other.
I don't generally look for gender differences, but every once in a while the pattern is so pronounced, so demonstrable, that it hits me with the force of a full-hand slap. This one was interesting, in part, because the class in question is a 1SC class -- a science-writing class. If this were a regular English class, most of the women would be in the humanities, and most of the men in the sciences (that's how they usually are grouped, at any rate), so I wouldn't necessarily think it was a gender difference. In a regular English class, I might chalk it up to a disciplinary difference.

Also, one might hypothesize that women in traditionally male-dominated fields like engineering and computer science might react and write more like men, either because they've learned to do so in adapting to the testosterone-laden environment, or because women who think more like men are more likely at this point to be interested in those fields, or for any number of other reasons.

Well, that hypothesis -- and the disciplinary hypothesis -- have been, for me, falsified. Faced with difficult readings, men and women just seem to react differently. I have no idea what to make of all of this, but I do find it interesting.

GOALS AND METHODS

I'd like to spend the rest of this post erasing those gender differences a little, by trying to explain some of what's going on in the readings, and addressing some of the common misunderstandings.

Getting most new readers to the point where they understand this stuff requires multiple steps, and this blog entry is just the latest. Here's a recap of the previous steps:
  1. Before the readings, I introduced the subject matter and some key terms that come up in the readings (falsification, gestalt, paradigm).
  2. The second step was the reading itself. Some people get it as soon as they read it, though they're rare. Truth be told, I didn't "get" it fully the first time, so I'm sympathetic to others who don't. (Specifically, I had a knee-jerk dislike for Kuhn, and favored Popper at first. It took me a while to realize that Kuhn had something useful to say.)
  3. The third step was the first class discussion on the subject.
  4. The fourth step was an in-class writing assignment. How was this a step toward understanding? It's a phenomenon called writing to learn. The basic idea here is that many people who think they don't understand the readings will start to understand them as they write about them -- as they explain the points, they start to comprehend them better. You've probably felt this before: On page 5 of a paper, you suddenly get something you didn't get when you were on page 1. That's what I'm talking about.
  5. The fifth step was a group paper, which attempted to capitalize on something called collaborative learning. This draws on the ancient observation that people learn more by trying to explain things to other people. If you've ever noticed that you learn more when teaching someone than when trying to learn the same thing yourself, that's the basic idea here. My hope was that as students worked with each other on the paper and tried to make sense of things, they'd try explaining their ideas to each other, and things would start to make more sense as they did so.
  6. The sixth step is feedback on the previous two steps, through comments on the papers.
  7. This is the seventh step, which is basically an attempt to address some of the common questions and errors I've seen.
AN ATTEMPT TO EXPLAIN THE KUHN-POPPER DEBATE

Let's boil down the Kuhn-Popper debate as simply as we can. The debate centers on two apparently simple questions, both of which turn out to be quite tough:

1. How do we know if something is scientific? What the heck is the difference between science and non-science? Most people agree that astrology isn't scientific, but why isn't it?

2. What constitutes good scientific behavior? What should good scientists do?

Kuhn's answer is tricky, but Masterman explains it pretty well in the second half of her article, and Kuhn, in his last article, says she's right. So let's start with Kuhn's answer, as argued by Kuhn and explained by Masterman. (Yes, Masterman agrees with Kuhn. Those of you who said she disagrees with him mistook criticism of his wording for criticism of his ideas. She likes the ideas, but thinks he writes unclearly.)

Kuhn's and Masterman's answer, in a nutshell

Imagine this: A group of people invents a model for how the world works -- a picture, a graph, a diagram, an analogy. For instance, physicists like to explain gravity by describing spacetime as a rubber sheet; if you put objects on the rubber sheet, it warps, in much the same way that planets and stars warp spacetime. Climatologists and computer scientists, meanwhile, have designed elaborate computer models of our global climate system, and when they want to understand how climate works, they rely on those models.

Those models are darned useful. But it's important to remember one thing: The model isn't the universe. It's just a metaphor for the universe. If it's a decent metaphor, it'll help us think about the universe in mostly accurate ways, but it won't be a perfect fit. Most importantly, it won't be complete. The computer simulation of our climate will be missing a variable. The rubber sheet analogy is missing some spatial dimensions.

So what do our people do, after coming up with their model/metaphor, with its holes and occasional inaccuracies?

Answer: They try to fix it. They try to fill in the holes, tweak the metaphor to cover the inaccuracies, and so forth.

They keep doing these repairs until two things happen:

1) New discoveries and new tools, like new telescopes or better computer algorithms, make it clear that the current model/metaphor has a lot of inaccuracies or holes that still need fixing; and, at the same time, ...

2) Someone comes up with another model/metaphor that challenges the old one.

At this point, the two models have a kind of run-off contest. People start to try to stress-test them, to break them, to see which one holds up best under fire. When the new model/metaphor works better, they throw out the old one and start all over again with the surviving model.

Now, let's match up the above description with Kuhn's terms.
  • The people involved are scientists.
  • Their model/metaphor is a paradigm.
  • The practice of fixing the model/metaphor, filling in holes, is puzzle-solving. A scientific period in which scientists are mostly solving puzzles is called normal science. It's called normal because most of the time, that's what's going on.
  • The stress-testing that occurs during that run-off contest between competing models is extraordinary science. (Note: Popper calls that stress-testing falsification.)
Kuhn argues that normal science (the gradual fixing of paradigms) is still science, and that it's a perfectly fine, even crucial activity. The stress-testing that occurs during the occasional revolution is also good science, but it's rare, and can't really happen all of the time.

And that brings us to his answer to the second question: What constitutes good science?

Kuhn basically says, "This is the way scientists seem to work. It seems to work just fine, so this is probably the right way to do it."

Popper, in a nutshell

Popper agrees with the description of science that Kuhn presents. He thinks that's pretty much what happens, and even, in fact, kind of figuratively smacks his forehead in a duh! gesture when he says that he'd completely missed some parts of that description before Kuhn pointed them out. About normal science, he essentially says, "Holy cow. You're right. Scientists solve puzzles most of the time. How silly of me to miss that!"

However, he thinks Kuhn is wrong to defend normal science -- the gradual fixing of models and paradigms. Sure, that's what most scientists do, he grants. But the ones who do that are not very good scientists.

According to Popper, all scientists should act, all the time, like they're in the middle of one of those revolutionary periods that Kuhn calls extraordinary science -- they should be constantly trying to falsify the models, and become increasingly suspicious of those that don't survive the tests.

Put another way, Popper looks at what scientists do when multiple, competing models are duking it out, sees how scientists prefer the ones that best survive attempts at falsification, and thinks, Wow, that's really cool! Why don't we just do that all the time?

A final note

All of the above is very simplified, of course. Like a model/metaphor, it's useful in some ways and not quite complete in others. And there's more to the debate, since Kuhn and Popper respond to each other's objections several times.

But if you understand what I wrote above, then I'd say you've understood the main, most important issues.

If the above discussion helped, would you please let me know, either in a comment after the post, in an email, or in class?

Thanks!

Sunday, April 12, 2009

Why Readers Are Better than Fans

I like to read. A lot. Three of my favorite authors to read are Neal Stephenson, Tim Powers, and George R.R. Martin. (All are science-fiction/fantasy authors -- I am a geek at heart, and perhaps in face and social grace, as well.)

But I wouldn't call myself a fan of those authors. I call myself a reader.

The distinction between the two is significant, I think, and it's a good one to keep in mind if one plans to have a career in writing, in oratory, in the arts, in sports, or in politics, where fans happen. For instance, President Obama has fans, and thanks in part to them, he's now in the White House. If he hasn't yet, he will someday appreciate the difference between fans and supporters, and if he is wise, he will wish more for the latter than for the former.

It is good always to remember that fan is short for fanatic, and that longer term might be a fairly accurate one.

One of the better recent depictions of a fan in pop culture is brought to us by Brad Bird, the writer-director of The Incredibles. In the film, the chief villain, Syndrome, starts out as Mr. Incredible's "biggest fan," a boy eager to play the sidekick. As boy and as man, Syndrome has high expectations for Mr. Incredible, and waxes bipolar in fits of praise and condemnation for the man: He thinks Mr. Incredible did a great job beating his machines, and likes that the hero hid under the bones of another superhero, but Syndrome is scathing when it appears Mr. Incredible called for help, a move Syndrome sees as "weak." Stalking away, he proclaims, "I've outgrown you."

Bird is tapping into real fan behavior here, as it's something with which folks in the film industry are well-acquainted: the fan has wild, unpredictable mood swings. Make him happy, and he'll lick the bottoms of your shoes. Disappoint him, and he'll take an electric drill to your kneecap. But don't count on a middle ground: There isn't much of one. One of the most telling characteristics of a fan -- one of the best ways to tell him apart from a reader, supporter, or viewer -- is that he rarely if ever says, "Eh. It was okay." Either the heavens parted for him, or it's hellfire time.

I started thinking about this while checking up on one of the authors I mentioned earlier: George R.R. Martin. I'm fond of his "A Song of Ice and Fire" series, which features long, carefully plotted books, and long gaps of time between installments -- each novel appears to take twice as long to write as the one before it.

The next novel in his series, A Dance with Dragons, has been in production for quite a while, and has encountered several delays. This has ticked off his fans, who, like Syndrome, are loudly proclaiming, repeatedly, sometimes several times a day, that they are done with him (for a small taste test, see here, and here, and here). They have said some rather horrible things about the man, prompting some rather defensive posts on his Web site and on his blog, and in response to those posts, they've decided to take offense. Meanwhile, his readers (who are not the same as his fans) are patiently checking for updates, and when they see that the book isn't done yet, they move on to other things.

(An aside: If you've read any of Martin's series, here's an explanation for why each novel is taking longer to write, and why we should expect that trend to continue for the rest of the series. Simply put, it's easy to churn out sequels when one is writing to formula. I do not say this as a put-down to formula writing. Star Wars, The Matrix, Lord of the Rings, and Harry Potter all follow a formula that Joseph Campbell calls the "monomyth" -- a single story structure that is pretty easy to follow and remember. The skeleton of the story was written for them in old myths, ages ago. I love all of those stories, despite their adherence to formula. But Martin isn't writing that kind of story. Most monomyth stories follow a hero from a common or humble background, who is called to adventure, trained by an elderly wizard or mentor, treated to some sort of "magical" flight, given a gift that will help him in his quest, and thrust at least temporarily into death's domain, only to return and win. Usually, there's a prophecy or oracle involved. Once an author has mastered that story pattern, he can write it forever, and hardly anyone ever notices that Morpheus, Gandalf, Dumbledore, and Obi-Wan are all basically doing the same job. If Martin were writing monomyth, he'd certainly be done by now. But he's not. Martin has dozens of characters, none of whom can properly be called a "main" character. All of them are plotting and engaging in intrigues. With each novel, he adds some new faces, and takes away others, usually in bloody and permanent ways. Each book has hundreds of pages of tricky, scheming details, none of which are easy to remember because none of them follow an easily memorized, familiar pattern. Martin doesn't like old patterns. He wants his work to read more like history, like something as complicated as the real world. As a result, each time he writes a novel, he makes his back story more complicated. 
So the next novel has to take all of that stuff into consideration, and stay consistent with it. With each novel, this will get more difficult to do. I do not know whether Mr. Martin, as talented as he is, will be able to finish what he has started. I can't think of many authors who could at this point. Okay, this parenthetical is over. Back to my original point ...)

George R.R. Martin is discovering the difference between fans and readers, and is probably realizing that, although fans can really boost his royalties, it's the readers who keep him sane, and who seem to appreciate how monumentally difficult his job is to do well. His note, which ticked off fans and ignited a flood of support email from readers, might have seemed rash; perhaps it was. But if it drives away bipolar fanatics while keeping readers friendly (as it seems to be doing), then in the long run it's probably a healthy thing. That said, author Patrick Rothfuss, who seems to be having similar fan difficulties, might be better at eliciting reader sympathy than Mr. Martin is: His opening comic strip, at the top of a post about fandom, is priceless. (Interestingly, it makes a reference to Martin, and Martin has mentioned Rothfuss's comic in return.)

This brings me back to Obama -- and to my main point. Yes, I do have one. Here it is: President Obama has arrived in the White House largely due to fans. Not supporters. Not political alliances with people who've decided to tolerate him. Fans. (Yes, supporters, etc. exist, too. But they aren't where his muscle comes from at the moment.) Many of his former fans already are angry with him, and I suspect it's going to get worse -- because they're fans. They won't brook political compromise. They will have a hundred unrealistic expectations, and he won't meet them, because he's ... well, human, I suspect. He won't meet their timetables for getting things done, particularly when it comes to first-time-voting fans who mistakenly believe he's been President since his election in November.

The media keep talking about the prospects for Obama being assassinated by a racist with a rifle, and I'm sure the Secret Service thinks about that possibility a lot. They're paid to do so. But if the Secret Service are truly on the ball, they're also going to start to get tougher with the crowds of Obama fans at public events. If Obama Girl shows up, they're going to frisk her carefully, and ought to.

Thursday, April 2, 2009

Sita Sings the Blues

In an earlier post about a short film titled "Fetch," I said I hadn't yet seen Nina Paley's other feature film, Sita Sings the Blues. Well, now I have, and I can see why critics say it is one of the best films of last year, even though it was never distributed in movie theaters. It's incredibly playful, and silly, and touching.

And now it's online. For free. (However, if you like it, you might consider sending Paley a donation. She spent years on this thing, and will only ever make money through voluntary donations.)

Here's a brief synopsis. Like Slumdog Millionaire, it's largely set in and about India. Like Slumdog, it's radically different in structure and style from what we're used to seeing. Like Slumdog, it's uplifting and fun, but sprinkled with depressing content -- both films basically take depressing content and help us get over it, and into a happy place.

Sita's tagline is "The greatest break-up story ever told," and it's an apt tag. The film has several stories, all dealing with breakups, and at least one of them is based on real events. The creator, Paley, was dumped by email by her long-term boyfriend. According to some articles surrounding the film, the dumping that happens to "Nina" in the animated film is pretty close to what actually happened to her. In response, she made a film that blends her own story with the Ramayana (a romantic Indian epic, also about a really terrible break-up), and with a bunch of old jazz numbers by Annette Hanshaw. It doesn't sound like it would combine well, but it does -- it's hypnotic.

Sometimes in the spring, I teach English 1C, in which the goal is to do deep textual analysis of things like films, novels, plays, or poems. If I were teaching 1C this term (instead of 1SC, which I like better), I would probably be using Sita as a subject text -- a thing to study. It's such a rich weave of intertextual references, feminist re-readings of old stuff, critical commentary (by some narrators in the film, who do a kind of Mystery Science Theater routine), semiotics, and a zillion other things, that I think we could get a lot of mileage out of it. And it's fun.

However, since I'm not teaching 1C, I'll just post the link on the blog. If any of you watch it, let me know what you think.

Monday, March 9, 2009

"Sentence Enhancers"

My son, age 4, is into Spongebob Squarepants these days. A recent episode about profanity cracked me up -- Spongebob and Patrick end up swearing like sailors, convinced that a particular word (presumably the F word) is a "sentence enhancer."

What I like about that is that it's right. It nails the actual effect of profanity, and a bunch of similar phenomena.

The other day, I used the phrase "It ain't easy" in a discussion board post written for the editors in our class. Perhaps some readers were startled to see an English teacher use ain't. But ain't is a sentence enhancer, and I used it quite deliberately for that very reason. It's a far stronger version of isn't.

Whence does it draw its strength? Answer: From the fact that it's illegal.

Making things illegal often makes their effects more powerful. This arguably goes for drugs, gambling, racial slurs, taboo topics, profanity, and some types of "bad" grammar (like ain't).

Banning a word gives it a new meaning: Now the word means, in addition to what it meant before, that you feel so strongly about something that you're willing to break the rule prohibiting the word's use.

If the F word were simply another synonym for intercourse, it wouldn't have any more impact than "make love," "sex," or "hump." But it's the F word, a word so dirty that people abbreviate it in polite company. As a result, it is far, far more useful than "hump." If I walked into class one day and growled at the class, "For the last time, folks, cite your F-ing sources!" but didn't use the abbreviation, I guarantee people would 1) hear what I said, and 2) remember it. Yes, I'd probably receive complaints and get a talking-to from the dean. (There are drawbacks to using powerful sentence enhancers. Getting fired or arrested rank high among them, in some situations.) But it would certainly make an impression, and stick in the memory. Its very illegality ensures it.

When I used ain't, I did so for the same reasons. I know it's likely to startle a little, particularly because I'm an English teacher. Also, I won't get fired or disciplined for saying ain't, which makes it a bit better than the F word in my book.

As a teacher of language, I sometimes wonder about people who ban words -- and whether they realize what they're doing. Really, if you want to disarm a word, abuse it. Overuse it. Render it ubiquitous, and adopt the thing. Use it in new ways, so that its meaning starts to shift. If you want to make it into a weapon that adept wordsmiths will suddenly find more useful than it was before, ban it.

My father, who once taught speech, made up a swear word at the beginning of a term, and emphasized that no one in the class should ever say it. The students initially snickered at the idea of a "made-up" swear word, and probably played with it a little at first, just to snub the rules. But my father enforced the rule against it for a while, and usage dropped off. Then one day, he asked the class a question, a student gave him the answer, and my father called the student that banned word. The entire class was shocked and offended, and jumped to the student's defense. All over a word that hadn't existed just a few weeks before. The word, as I recall, didn't even have a specific meaning. Its entire meaning was "Don't use this word -- it's offensive." All of its power came from the ban. Without the ban, the class would have probably forgotten it even existed.

This brings me back to Spongebob, whose writers deserve both a pat on the back (for a fun episode) and a gentle rebuke. See, Spongebob and Patrick don't realize their "sentence enhancer" is socially unacceptable. But they love it. They think it "fancifies" their sentences, so they use it liberally, with gusto, until they drive all of the customers out of the Krusty Krab. In actuality, a real-life Bob and Patrick wouldn't find the word so fun and useful to use unless they first knew it was unacceptable. They'd see the word, shrug, and likely ignore it.

Now, I know what some of you are thinking. (For my students: This transition brings me to the "anticipating objections" stage of my post.) Some of you are thinking, "Well, sure the F word is popular partly because it's prohibited. But it's also useful because it's so flexible. It can be used in so many different ways -- it's one of the most flexible words in the English language!"

Yes, it is flexible. Wonderfully flexible.

So are all profane, banned words. That's entirely my point. As soon as you ban them, you give them the additional "I'm making a point for emphasis" meaning, and then people start to use them to emphasize their points, even if the words aren't entirely right in terms of subject matter. I can say "Get in the F-ing car," and you'll read that to mean I'm in a hurry. But if I say, "Get in the love-making automobile," you'd probably imagine a 1970s van with a mattress in the back, and wonder about my intentions. Because the F word is banned, it can be used entirely for emphasis or offense, while love-making -- not being banned -- only means "love making."

And on that note, I will end this humping blog-post.

Sunday, March 8, 2009

Regaining Perspective

So there's this short film that my four year old son, Ronan, and I love to watch together on my laptop. (Ronan's picture appears below -- he's holding the new kid in the family. A still appears at left.)

The short film is called "Fetch," and it is an animated piece by Nina Paley. (If you've heard of Sita Sings the Blues -- often described as the best film of 2008 that no one saw -- this is the same woman who made that film, which I've not yet seen, sadly.)

In "Fetch," Paley toys delightfully with artistic perspective. The story is simply animated, 2-D, with a single black line in the background at the beginning. It looks at first like it's probably the place where the floor meets the wall, but as the story unfolds, that simple line becomes a zillion other things, including a ceiling, a ledge, a floor, a wall, and more. Then the man in the piece moves to the right, and more lines appear. She then plays similar games, but with a richer canvas.

I like this film because it illustrates a point I make frequently in composition and argumentation classes. I sometimes talk to students about "framing" -- the ability to make one thing appear dramatically different, just by shifting the vantage point. I generally apply "framing" to things like evidence: For instance, someone criticizing a tobacco company might point out it had $1.2 billion in profits last year, while someone more sympathetic might note that profits have declined 50% over the past three years. Although these seem contradictory, both can be correct, if the company had profits of $2.4 billion three years earlier. The part of the statistic you choose to focus on says something about your perspective on the issue, and can control the perspective of others.

The film illustrates the same principle, but with artwork. It's a truly visual illustration.

However, I come up with a completely different reason for liking the film when it draws close to final exams and the end of a term. The tagline of the film is something like, "A man, his dog, and a ball lose perspective." And they do (mostly, the man does). But by the end, he regains perspective, and it's at this point that you realize Paley's been making a point all this time. It's a familiar point, and in most hands, it would be trite. But Paley somehow transforms the trite into something profound here, and I find it particularly relevant (and relaxing) to watch this film during week 10 of a quarter for that reason.

Sunday, March 1, 2009

At right, I've uploaded a photo of my older son, Ronan, with his new little brother, Colin, for the curious. (At some point, I'll post some "real" blogging material. I actually have several items I want to write, but I haven't had much of a chance due to all-nighters and bottle-feedings. I'll force them in sometime soon.)

Friday, January 30, 2009

Hazing and the Idols of the Marketplace

One of my students recently sent me a link to a year-old news article about hazing. It reports the results of a university study, in which researchers concluded that there's an awful lot of hazing still going on, even though most clubs and fraternities have banned it.

The most interesting thing to me about the study is its definition of hazing. It includes binge drinking, singing in public, and events (like skits) where participants are mocked.

Back when I was an undergrad, I was in both ROTC and Sigma Chi (a national fraternity). I didn't drink, and still don't. I don't smoke or do drugs, and never have. I've never been to a strip club. At no point did my fellow participants ever force (or try to pressure) me to do any of those things. (Indeed, I was not the only non-drinker in my fraternity.) I've always been fairly proud of my chapter, in part because it was so welcoming to a square, goody-two-shoes like me. If you'd asked me before I read this study whether the Iota Alpha chapter of Sigma Chi hazed, I would have said no, emphatically.

And I still do.

It's easy to define your way into a problem, and it looks to me (from the news article -- which might be distorting the study a bit; it's hard to tell) like that's what the researchers have done. Sir Francis Bacon might accuse them of worshiping the Idols of the Marketplace, his way of saying that they allowed the fuzziness of words to get in the way of truthful science.

When I was in the fraternity, I sang in public. It's something Sigs do. We serenade. (We have, in fact, perhaps the most famous fraternity serenading song in existence.) As a member of ROTC, I called cadence during runs. It's part of the culture -- a thing that everyone does, not a punishment inflicted only on newbies. If you don't like either of those things, you don't join. Similarly, if you don't like singing in public, don't join the church choir, or go Christmas caroling. I don't know very many people who would count singing in such groups as hazing, but the researchers apparently did. By doing so, they increased the amount of hazing in the country, not in actuality, but in the realm of words -- in Bacon's marketplace.

Okay, you might say, putting singing in that definition was a little iffy, but the drinking stuff seems reasonable.

Not necessarily. I knew guys in the fraternity who drank a lot during parties. I knew guys who didn't drink at all. The thing is, according to my understanding of the study's "hazing" definition, all of the guys who willingly filled their cups with more beer than was strictly healthy were being hazed.

By whom, exactly?

The last I checked, a verb requires a subject. Who is doing that hazing, when the definitions are so broad? Who hazed the handful of Brothers I knew who engaged in binge-drinking? They certainly didn't have to. I never drank a single beer, nor took a single calorie of heat from Brothers over my lack of enthusiasm for alcohol. Who hazed me, when I went to public places and sang with my Brothers? It was fun. I liked doing it, or I would have done something else. If I had to point a finger, I wouldn't know where to aim it.

I have no doubt that hazing is alive and well. I also have no doubt that in some cases, it does involve binge drinking or singing -- when it is forced for the amusement of observers, rather than volunteered for the enjoyment of participants.

But it looks suspiciously to me like the researchers in this case looked at Greek organizations, wrote down a list of every activity they could think of Greeks engaging in, and made that their definition of hazing.

In short, it looks like, to them, being Greek equals being hazed.

If that's true, they're not really doing science. To understand what they are doing, you'll first need to recall that their definition of hazing also includes public mockery or embarrassment.

And that's what I suspect they're doing: They're trying to embarrass organizations they don't approve of, in the hopes they'll shrink away and cease to exist.

Put bluntly: The scientists are hazing the Greeks.

And in doing so, ironically, they've proved their implied thesis. Being Greek does equal being hazed after all. But now, it's by academics.

Wednesday, January 28, 2009

Mutual Assured Destruction

Last year I was teaching a 1A class that met at 7:10 a.m. (yuck), and one morning when I dragged myself in, I saw an interesting thing: My students were there, but they were in the process of packing up their bags. They were all about to leave.

They seemed stunned to see me.

"Where's everyone going?" I asked, thinking that perhaps I hadn't heard about some evacuation warning.

"We thought class was canceled," a student replied. She looked puzzled, but perhaps more than that, annoyed. She seemed a little cross with me.

I looked at my watch. I was on time. In fact, a minute early.

"Why'd you think it was canceled?" I asked, setting down my bag and books. At this visual cue, the other students started settling back in.

The young woman who'd answered my first question (I'll call her Jane for now) appeared to take charge at this point.

"Well, your Blackboard site certainly gives that impression," she said. "Your list of office-hour appointments on Blackboard shows that you're meeting in your office with students right now, so you can't be holding class, too."

I hesitated. That didn't sound right. "I'm pretty sure," I finally said, "that I haven't scheduled any office appointments for 7 a.m."

"No, Mr. Scott: You have a bunch of them. I read it last night," Jane said, firmly. "It seemed pretty clear that class must have been canceled."

"Really?" I was puzzled, and pretty sure she was confused. But just in case ... "Perhaps we ought to look at the schedule."

So I turned on the classroom computer, pulled up the schedule on the overhead, and Jane was mortified to see that there weren't any appointments on the schedule for that morning. She turned beet red, stammering that she must have misread it.

A couple of weeks after the term ended, I got my evaluations: "Mr. Scott really needs to work more on not embarrassing students in class," read one comment, and another from the same class echoed the sentiment. At least one of those comments was not from Jane. (Most of the feedback was positive, but those comments got to me a little.)

This is not an isolated incident. In fact, these sorts of challenges are getting more and more common for some reason. I don't mind challenges per se, particularly when they're correct -- if I've made a mistake, a publicized correction is crucial if students are to learn.

And it's not so bad when the student is wrong, but challenges me in private (by email, or in my office). We can settle those privately. No one needs to know about the student's error, and much embarrassment is prevented.

But when a student issues one of these challenges in the classroom, and the student is wrong, I have a hell of a dilemma. Consider:

Many times, the issue is critical -- something that other students in the class, listening to the exchange, really need to be clear about.

1. In such an instance, I can save the student's ego by allowing the mistake to stand -- and watch the rest of the class leave misinformed about something important.

2. Or I can say, "Gee, I'll check on that, and get back to you," to the student. In that case, the student is corrected, but the class is not -- it walks out the door just as misinformed as with option #1.

3. Or I can clarify the situation, which invariably (no matter how hard one tries to be gentle about it) embarrasses the student who just walked out on that limb, and then read in my evaluations about how I embarrassed someone. I generally choose this option, and tolerate the comments.

Of course, there are situations (like with the "canceled" class described earlier), in which the issue doesn't appear critical to the lessons being taught -- it's something "miscellaneous." These are actually trickier.

1. It's tempting to imagine (particularly if you are not a teacher) that if Jane says loudly in front of others that I scheduled office hours during class time, there's really no reason to correct her in view of fellow students. One might assume I can say, "Oh, really? That's weird. I'll have to look into that," and simply move on with the lesson. But it's not quite that simple.

2. The problem is that letting Jane spout off makes teaching more difficult: It's an ethos thing. When students hear complaints voiced, but they don't hear the teacher actually confirm that the charges are true or false, they become more inclined to think the teacher has made other, big mistakes as well, and is simply keeping quiet about them. (See Footnote 1.) In this environment, students become less inclined to wonder whether they themselves have erred. (When human beings already have one "likely suspect," they are reluctant to look for a second.) Worse, many of those students keep these suspicions to themselves, and mutter them to each other in the back of the room. The teacher, unaware of many of the grumbles, can't address them. At a certain point, it can become very difficult to teach, if the class is convinced you're a goof. The class just stops listening. I've seen this as a student. I've seen it when observing other classes. And in one very, very bad summer-school class I taught five years ago, I saw it happen to me. It's ugly no matter where in the classroom you sit.

3. The flip side is that correcting Jane in class risks screwing things up, too. Even if you've tried to be gentle about it, clarity requires firmness, and audiences sometimes mistake firm for cold or cruel. As a result, students sometimes decide you're a mean old ogre (like Shrek, but without the charming accent or sense of humor). They stop speaking up in the classroom. They stop taking risks. They stop coming to office hours for help with problems. But perhaps worst of all, they stop learning -- people simply aren't inclined to listen to people they think will be mean to them. It's another ethos thing.

So, basically, if you're a teacher, and a student makes a public challenge that's wrong, you're in a pickle. Fail to correct him, and you lose the class's confidence. Correct him, and you risk losing its empathy. Either way, your ethos is likely to suffer.

The only reliable way, in fact, for a teacher to come out fine when a student issues a challenge is for the student to actually be right. If the student is right, and I acknowledge it, the class becomes more open, more suitable for learning. I wish it happened more often.

But whenever I hear a confident, assertive challenge in a classroom, and I know (or suspect) the student's off-track, I cringe inside. If you ever see this happen, and see me pause, as though I'm trying to figure out how to handle it, the most likely reason for my hesitation is that the student involved is incorrect, and I'm now trying to figure out how to rescue both the student and myself at the same time, in full view of the class. It's not as easy as you'd hope.

--------

1 Telling the class that Jane is right, when she's actually wrong, isn't an option. It's lying, and, even if you think "it's a white lie," those are bad for ethos too.

Unholy Trinity

Samantha Rose writes in her blog about how she’s irked by the fact that people round everything off to the nearest multiple of 5.

I'm with her on the number 5. She's not insane. Perhaps 1 minute, 14 seconds is precisely the best time setting for that frozen burrito. There's nothing magical about the number 5, except that we like symmetry, and we like things to match the number of fingers and toes we have on each limb.

But there's another number that's given this sort of special treatment, for no good reason at all, and if you teach writing (like I do), it really starts to nag at you: There is nothing magical about the number three.

Nevertheless -- and this is particularly true when I teach business writing classes -- everything in a paper seems to come in sets of three: I get three reasons, of course, but also three parts to a plan, three bullet points, three key facts, three verbs ("We will create, distribute, and implement a plan to increase revenue"), three verbs and three nouns ("We will create, distribute, and implement a plan to increase revenue, marketability, and productivity"), and so forth.

The abuse of three is rampant. For this reason, I am overjoyed (at least for a second or two) when I get papers that say things like "there are two chief reasons" or "I will compare four possible solutions," simply because they indicate the author is not possessed by what I have come to think of as the “unholy trinity.”

Saturday, January 10, 2009

Let There Be Illumination


Walking through the supermarket today, I spied the Jan. 12 cover of Time Magazine, which depicts a sweater-wearing person with a compact fluorescent bulb for a head. Think Edward Scissorhands, but ... er, brighter. Alongside the sweater-wearing bulb is the following teaser text: "Why We Need to See the Light about Energy Efficiency."

I didn't have the cash to buy the mag, so I left it on the stands. But since leaving the store, I've been thinking about that CFL bulb, which has come to symbolize for me a kind of blindness in policy-making. It's a relatively new blindness, and those afflicted with it tend not to realize it: We have a tendency to ignore the ways that policies affect people who have (or plan to have) children. We tend, moreover, not to think about pregnant women.

Allow me to elaborate, using the CFL as an example. There's been a push over the past few years to phase out old bulbs and make the CFLs mandatory, the rationale being that CFLs consume far less energy than "normal" (incandescent) bulbs do. Because they consume so little energy, power plants don't have to burn as much carbon-dioxide-producing fuel as they used to. In short, CFLs cut down on the emission of greenhouse gases. If everyone switches to CFLs, we can save a lot of energy, use less fossil fuel, and help fight global warming. That, in short, is the logic behind the CFL, and it's doubtless why Time Magazine chose to use one as a mascot on its energy awareness cover.

But there is a problem with the CFL: It contains a little bit of mercury, about enough to cover the tip of a pencil. That doesn't sound like much, but you wouldn't want to inhale that much nerve gas, and mercury is -- in fact -- a neurotoxin. Some forms of mercury are extremely toxic -- you wouldn't want to touch a drop, even with rubber gloves. The mercury in the light bulbs isn't nearly that nasty, but it is nevertheless the subject of some debate. People argue over whether the bulbs should have warning labels, over how to properly dispose of a dead CFL bulb, over whether the bulbs will leak mercury into water systems if they end up in landfills, and similar issues.

Here's the issue that concerns me the most, when it comes to these "green" bulbs: You absolutely do not want one in your house if you have children, or if there's a pregnant woman living there. Bulbs break in homes with children. Lots of things break. Bulbs are simply one of them, and it's a fact of life. And broken CFLs don't mix well with little kids: Even in small doses, inhaled mercury can retard brain development in growing minds, and is particularly harmful to fetuses and to children under the age of 6.

I learned the above stuff the hard way: Early this summer, about a week after my wife and I learned she was pregnant, one of those bulbs broke in our house -- a lamp using the bulb toppled onto my wife's desk and computer area. I knew pregnant women are supposed to avoid certain fish because of possible mercury content, so I kept her away from the desk while I struggled with the clean-up.

And that clean-up was a struggle. Following official federal and state instructions on health-related Web sites like the EPA's and guidelines from a study conducted by the state of Maine, I ventilated the area by opening windows; I threw out most of the stuff that the glass had come in contact with; I cut away a large swath of carpet, rolled it up and disposed of it. I wore gloves. I took off my wedding ring (because gold attracts mercury). I even shaved off my goatee, because dust had flown up into my face while I was cutting out the carpet in the affected area. I kept my wife out of that room for about three weeks, and tried to keep my four-year-old son out of there, too, though that was tougher.

That was all for one bulb, and if it seems like an overreaction, you should have seen our ob/gyn's reaction to the news about the bulb and the fact it contains mercury: She told me my wife should find another place to live for the duration of the pregnancy. My reaction was comparatively subdued.

One of the things that frustrated me most about this incident is that if you look on official, government and environmentalist pages about CFLs, they tend to talk about how wonderful they are and how we should switch to them. Yet they say that if the bulbs break, we should take all of the above precautions. Many of them recommend that we "Get pregnant women and children out of the area during the clean-up."

But it never seems to occur to the authors on any of these sites -- or to legislators who are behind the drive to make these bulbs mandatory -- that some women are single parents. Some mothers don't have other homes to move into. Some have husbands who are away so often that the wives are likely to be the ones dealing with the clean-ups. And some, it must be remembered, don't yet know they're pregnant. And it's precisely at that time, when the fetus is so new that even the mother doesn't know it's there, that mercury exposure can have the most severe side effects.

Even in their rebuttals to CFL concerns, advocates tend not to think about families, or to take them very seriously. For instance, one defense of the CFL argues that the amount of mercury in my bulb at home is minuscule compared with the amount of mercury a power plant puts into the air. This is true, but it also completely overlooks the reason that parents are worried about the light bulbs: What matters is not the total amount of mercury released globally but the magnitude of the local dosage. I can step outside and breathe just fine, and not worry about the amount of mercury the power plants are emitting, because that amount has spread itself so thin it's become negligible. But when a bulb breaks in my home, I now have 25,000 or more nanograms of mercury in a single room, and the accepted safety limit is 300 nanograms. If I'm thinking about what my children are breathing while they're bouncing on the couch, I'm far more worried about what broke on the carpet next to them, inside a closed room, than I am about a power plant 30 miles away. The global perspective, though accurate, does nothing to alleviate my concerns as a parent.

Meanwhile, I've heard a few CFL advocates suggest that when we install our CFLs, we should just get rid of our carpets. It's a simple matter: Just get rid of the carpets, and then you don't have to worry about little bits of mercury getting stuck in them, evaporating every time you vacuum. It's easier to clean a wood floor than it is to clean a carpet.

Sure. All of that is true. I vastly prefer wood floors to carpets anyway. However, and at the risk of sounding repetitive ... I have children. Children crawl. It's a natural thing. Most people remember that. Ever try to crawl on wood flooring, or hard tile? Children also fall. A lot. Ever fall on tile or hard flooring? Families have carpets and rugs for a reason. And we're going to keep that carpeting until they move out, even if it means cleaning gum and chocolate out of the rug fibers every week. (See Footnote 1.)

It seems to me that the very people most inclined to push environmentally-oriented policies must not have children in their homes. Instead, they have this huge blind spot: It simply doesn't occur to them that

1) bulbs might break;
2) there might be pregnant women or small children in the breakage area; and
3) the very home that has children is also likely to have carpets.

Perhaps the supporters of such policies don't believe in children, and think anyone who has one is irresponsible. Perhaps they had children long ago, and those children are no longer in the house. Maybe they simply don't want to be parents. Whatever the case, when they start talking about what we ought to do, they have a weird tendency to ignore the fact that families full of rugrats exist, and that's unfortunate, because those families are likely to remain blind themselves to the risks behind things like CFLs. The talking heads will say CFLs are good, so parents will buy them. When the bulbs break, parents will cheerfully sweep them up, vacuum the carpet (unaware of the EPA's advice against it), and then let the kids keep playing. Years later, when the kids have trouble doing math, staying focused, falling asleep, or keeping their hands from trembling, they'll see a therapist instead of a toxicologist.

And that's a nasty combination: If the supporters act as though there are no children, and the parents act as though there are no risks, we might very well -- in trying to save the planet -- create a new public health disaster on the order of thalidomide. I hope that's not the case, sincerely. But I do worry about it at times.

- GS

Footnote:

(1) There also seems to be a bias in favor of homeowners in the lightbulb discussions: If you're renting an apartment, you can't just cut out your carpet. Well, I suppose you could, but there will be complications later...

Wednesday, January 7, 2009

Mixed Signals

A few years ago, after becoming increasingly frustrated with the cable company, my wife and I decided to go with satellite. Now we have more channels than we'd ever watch, and some of the ones I'd never heard of are fairly entertaining, not so much due to their content, but because I'm intrigued that someone would go through the effort to create them -- and that (apparently) enough other people would watch them to constitute a decent market.

There's a research channel, for instance. It shows college lectures. Lots of them. I've watched a presentation by a political communications professor, one by a Nobel-Prize winning astronomer, and a few others that were good for curing insomnia. I don't remember what they said, but I do remember that I need to reupholster my chair, which is the thing that finally woke me up.

But perhaps the most intriguing channel to me right now is a high-definition movie channel -- the name of which escapes me, since on the menu it appears only as a string of five letters, acronym-style -- that plays old films in true HD. That's a pretty neat thing, because some of these films (like James Bond movies) are the sort you really want to lose yourself in, and it's easier to lose yourself if you can tell what the thread count is on the sheets of Bond's bed.

So it's great, in that you can immerse yourself in the HD scenes -- right up to the commercial breaks. And that's the part that intrigues me: The channel also has commercials. See, to my thinking, an HD movie and a commercial break are philosophically at odds with each other. One immerses, and the other interrupts. It's great to be able to see the rifling on the bullets being fired in a film, but not so great to be able to count chancres on a Girls Gone Wild ad, or see into Billy Mays's maw well enough to know he needs work on his third, upper-right molar. That sort of thing can derail a mind for life.

I was excited when I first found the channel, but not so thrilled at the ads, not because I'm philosophically against ads -- heck, I have a child, and need the break so I can make him sandwiches -- but because the combination makes so little sense. A paid subscription channel would make sense, but HD films don't mix well with Billy Mays.

Come to think of it, very little does.

Hemicyon?

It's tough to find a name that isn't taken, these days. Tougher still to find one that is free, and that you like. Many clever little two-word combinations that I'd thought of ages ago, and which used to be -- near as I knew -- unique to my brain have since occurred to other brains, and other people have now claimed real estate on the World Wide Web using those phrases.

So I went with an extinct animal that I happen to find interesting. You can look it up. Hemicyon was a dog-bear -- a hunting, hypercarnivorous pack animal that roamed the plains of the Northern Hemisphere way back during the Miocene. Its body was powerful like a bear's, sleek and fast like a dog's, and in behavior it might be thought of as a velociraptor with fur. You would not have wanted to stumble across a hungry pack and have them see you as dinner.

What does this choice symbolize? What is its secret meaning? I don't have one. It was a cool animal, and the name was available. So I went with it. Maybe I'll think of a message for it later, but right now, it's a nearly random name pick. That's not such a strange thing to do. Parents do it all the time: They pick names for babies with no idea what they mean, or without really caring. My name is Graham, which means "gray homestead." (That's why I go by "Gray" for short.) My son's name is Ronan, which is Irish, and means "baby seal." My wife and I picked it because it sounded neat, and we didn't know anyone else who had it. We didn't attach any importance to the baby seal thing. And I'm pretty sure my parents didn't mean much when they basically called me a house. Frequently, names are just names, and it's not a good idea to read too much into them.