Astronomy magazine, to which I subscribe, asks on this month’s cover: Do you believe in the BIG BANG? 5 reasons you should. I was initially a little confused by the cover, in that with the exception of a couple of unregenerate Hoyle-loving steady-staters out there, the entirety of the magazine’s 185,000-member subscriber base has probably already signed off on the whole Big Bang thing; it’d be like Parenting magazine having a cover story that asked if its readers believed in pregnancy.
But of course, the article is not for Astronomy’s regular readers, per se. It has a two-fold aim. The first is to lure whatever Creationists might be lurking near the magazine rack into opening up the magazine and getting a point of view on the genesis of the universe without the Genesis interpretation. I think this is sort of sweet, since I don’t think most Creationists really want to challenge their beliefs; after all, Jesus didn’t tell them to question, merely to believe. But you can’t blame the Astronomy editors for making the effort.
The second aim is to give non-Creationist parents some reasonable ammunition at the next school board meeting, when some Bible-brandishing yahoo demands the science curriculum be changed to give equal footing to whatever damn fool brew of mysticism and junk science they’ve cobbled together this year to make an end-run around the separation of church and state, and someone rational needs to step in and point out what evidence exists to suggest the Big Bang actually happened.
In that case, the object is not to convince a Creationist of the veracity of the Big Bang; any Creationist who shows up at a school board meeting is already a lost cause in terms of rationality. The idea is to appeal to the school board members that the Big Bang is not interchangeable with the idea that God whipped up the universe in seven days or that the universe was vomited up by a celestial cane toad that ate a bad fly or whatever other pleasant, simple teleological shortcut one might choose to believe.
In this case, I again appreciate Astronomy’s intent; it’s nice to know they believe a school board might be amenable to reason. Personally, however, I would skip the middleman preliminaries, which is what such an appeal to reason would be. I’d go straight to the endgame, which would be to inform the school board that if it went ahead and confused science and theology, I’d be more than pleased to drag in the ACLU and make it take all the tax money it was planning to use on football uniforms and use it to pay lawyers instead. I’m not at all confident of a school board’s ability to follow science, but I’m pretty sure most of its members can count money. And here in Ohio, at least, they sure do love their football.
Astronomy notes that based on an NSF survey, less than a third of Americans believe in the Big Bang. Part of the problem comes from most people simply not paying attention in science class — evidenced by the fact that only 70% of Americans believe in the Copernican theory, which posits that the Earth is in orbit around the Sun, and you’d have to be fairly ignorant and/or inattentive not to believe that. Another part of the problem comes from the idea that the Big Bang might somehow conflict with religious beliefs — that the end result of accepting the Big Bang as a theory is an eternity of Satan cramming M-80s behind your eyeballs and cackling, “You want a Big Bang? I’ll give you a Big Bang,” before lighting the fuse with his own pinky finger. But a large part of it also has to do with language itself, and how it’s used to confuse.
For example, the word “theory.” Commonly speaking, “theory” equates to “whatever ridiculous idea that has popped into my head at this very moment” — so people have theories about UFOs, alligators in the sewers, the Kennedy Assassination, the healing power of magnets and so on. The somewhat debased nature of the word “theory” is what allows Creationists and others to say “it’s just a theory,” about evolution or the Big Bang or whatever bit of science is inconvenient to them at the moment, implicitly suggesting that as such, it should be paid little regard.
However (and Astronomy magazine has a nice sidebar on this), the word “theory” means something different to scientists than it does to the average Joe. In the world of science, the initial crazy idea that you or I would call a theory is a “hypothesis”; it’s not until you can provide strong, verifiable evidence that the universe actually conforms to your hypothesis that you’re allowed to say it’s an actual theory. So to recap: Crazy idea = hypothesis; crazy idea + independently verifiable facts to back it up = theory.
The Big Bang is a theory not because it’s just this zany idea a bunch of astronomers thought up one night while they were smoking dope in the observation dome; it’s a theory because a preponderance of evidence out there in the universe suggests this is how the universe was created — to the near exclusion of other hypotheses. It’s a theory to the same extent that gravity is a theory, and be warned that if you don’t believe in gravity, you’ll probably fall right on your ass.
“Believe,” incidentally, is another problem word, since its common usage is synonymous with “I have faith,” and faith, by its nature, is not particularly evidentiary. Someone who says “I believe in Jesus,” is declaring faith in Christ, whose nature is ineffable. One wouldn’t say that one has faith in the Big Bang — and rightly so.
Fundamentally, one doesn’t “believe” or have faith in much of anything as it regards science, since as a process science isn’t about believing at all. It’s about testing and verifying, discarding what doesn’t work, and refining what does work to make it better describe the nature of reality. For a scientist, a belief functions at the level of a hypothesis, which is to say, it’s an idea that requires testing to determine whether it accurately models reality.
Even at the current stage of understanding, it’s probably not accurate to say that scientists “believe” in the Big Bang theory, to the extent that there are still holes in the theoretical model that need to be plugged and scientists working to plug them (Astronomy magazine points out these holes, as it should, since doing so doesn’t expose the weakness of the Big Bang theory, but the strength of the scientific process). If it turns out that the Big Bang theory is ultimately incompatible with the data, it’ll have to be thrown out and something more accurate created to replace it.
Asking whether one “believes” in the Big Bang doesn’t really answer any questions — it merely suggests that the Big Bang is itself part of a faith-based system, equivalent to a belief in Christ or Allah or Buddha or whomever. This is another piece of semantic ammunition that Creationists and others like to use: That science is just another system of “belief,” just another species of religion. Not only is science not just another species of faith, it’s not even in the same phylum. Faith is a conclusion. Science is a process. This is why, incidentally, the two are not ultimately incompatible, just as driving somewhere is not inherently incompatible with having a fixed home address.
If I were putting together a poll on the Big Bang, I wouldn’t ask people if they believed in it. I would ask them, based on the evidence, what model of universal creation best described its current state. I’d make sure I left space for the “I have no idea” option. I believe — and this is just hypothesis, not a theory — that the data from that question would be informative.
I’m still getting a lot of mail from Confederate partisans over my recent posts on how the Confederacy was evil, and so are its flags. Most of these apologists are spieling out lines suggesting that, yes, yes, fine, the Confederacy did institutionalize slavery. But today its flags mean entirely different things, like pride and heritage and (inevitably) states rights over federal rights. Why can’t we (meaning, presumably, the folk not in the states of the former Confederacy and the descendants of the people the Confederacy explicitly enslaved) just get over it? My God, haven’t the decent white folk of the South suffered enough? They lost their country, after all.
Well, let me make a counter-suggestion, which is that I’ll start trying to forget that the Confederate flag is fundamentally evil, if the Confederacy-pushers will acknowledge that the Confederacy was in fact, a big fat loser, and therefore any of its symbols are less than fertile ground for positive associations.
Loooooooooooser. And it isn’t just a loser in war. Although it is that, let’s not forget — and it lost that war big. Sure, they kept it close in the first half, but after that it was a blowout. The North had a deeper bench. Even a post-game late hit on the North’s general manager (while he was in his luxury suite, for God’s sake!) couldn’t change that fact. But even tossing aside the war, the Confederacy is a loser in so many other ways it’s hard to know where to begin. But let’s begin anyway, shall we?
States’ rights: Loser. The Confederacy so bungled the states’ rights issue that it ended up establishing the primacy of the federal government over states, and additionally ensured that no other state could ever secede from the Union again. Oh, and then the former Confederate states were subjected to a rather unfortunate period of time (it’s called the Reconstruction) where they had about as many states’ rights as the District of Columbia. So, in all, not a particularly shining example for states’ rights.
This is where Confederate partisans grumble that yeah, but technically the Confederacy was right on the constitutionality of secession. Well, kids, two things: One, nuh uh. Clearly that was a matter open to interpretation, which is why you had to fight a war about it (which — did I mention? — you lost). Two, even if the Confederacy were technically right on secession, this is a really stupid argument anyway. What, like the United States is just going to go, “Gee, okay, what we’d really like is to have a hostile neighbor to the south of us, competing with us for land on this here North American continent?” I mean, Christ, people. Get a grip.
Clearly we think the Colonists were in the right when they drafted up the Declaration of Independence and suggested that we and Britain had to go our own ways. But they still had to fight a war regarding the matter — and win it. I don’t recall the Colonists being shocked, shocked when Britain didn’t exactly roll over and cheerfully lose a few thousand miles of North American coastline. They knew what they were getting into. So it’s a little silly to suggest that the Confederates, either then or now, should feel otherwise. It’s just whining.
When it comes to things like land and constitutions, being right is half the battle; the other half of the battle is the actual battle you have to fight to enforce your claim. The Confederacy lost that part, which is just as well, because they were way off base with that whole secession thing to begin with. Bad premises, bad results.
Heritage: Loser. Let’s be honest here. There is almost no truly Confederate heritage, if only because the Confederacy in itself didn’t last long enough to generate any while it was an ongoing concern, and while it was around, it was too busy trying to survive to do much of anything else. There is of course a rich heritage of Confederania now, but it exists entirely as the fly-blown leavings from the Confederate corpse, rather than the fruits of a living tree, and that’s not entirely the same thing.
Confederate partisans try to backdate Confederate heritage to before the Confederate era, but I don’t think that is something we should cede to them. There is indeed an antebellum Southern culture, but the participants of that culture did not equate their culture with the political entity known as the Confederacy, since that entity didn’t exist. If they didn’t, I don’t see why the rest of us should make that equation, either.
Part of the whitewash campaign of the Confederate partisans is to try to sell the idea that Confederate symbols somehow encompass the entire history of the South, and they don’t, neither prior to the Confederacy nor after. Let’s remember that Confederate and Southern are not synonyms. Southern heritage is a fine thing; Confederate heritage is not. Using the symbols of the latter to represent the former is presumptuous.
Pride: Loser. Proud of what? Of the fact the Confederacy precipitated a civil war that killed hundreds of thousands of men on both sides of the battle? Which — let’s never forget — it lost? Of constitutionally enslaving black people? Of being the cause of the devastation and occupation of the Southern states by Union troops and carpetbaggers?
Oh, yes, Confederate friends, that last one was your fault. We know all about that whole “War of Northern Aggression” line you’ve got going down there, as if you were just sitting there minding your own business when all of a sudden Sherman popped up and started, like, burning things. However, allow me to suggest that from the point of view of the United States, trying to make off with half the country, as you did, seemed like a fairly aggressive maneuver at the time. I’ll be happy to know if you disagree, since then you won’t mind if I come over and take over half of your house, preferably the half with the hot tub.
Individual Southerners feel pride in ancestors who went out and fought (and sometimes died) for the Confederate side of the war, which as I’ve mentioned before is just fine. But I don’t see how one can ignore the fact that all those Johnny Rebs would have been safe as houses had the Confederacy never existed. Prior to December of 1860, it’s not as if the armies of the north were perennially massed at the Mason-Dixon line, champing at the bit to torch the south, and the poor southerners had no choice but to hoist grandpappy’s musket and slug it out at Antietam.
Many of the Confederate apologists with whom I’ve corresponded maintain that their ancestors fought and died to protect their homes, not for the ideals of the Confederacy, and I suspect that in many cases that’s probably true. It still stands that, whatever their personal reasons for fighting, they fought because of the fact of the Confederacy, which was an evil institution, for reasons I’ve outlined before. Essentially, these people fought and died because an unnecessary and wholly evil entity invited trouble to their doorstep. Someone needs to explain to me why one should feel pride in that.
(Anyway, I do think there needs to be a line drawn in terms of responsibility. Not every Confederate soldier was fighting simply to protect the homestead; at least a few here and there had to believe in the principles of the Confederacy or at the very least the right of the Confederate states to go their own way. These people were wrong, however bravely they may have fought. It’s well and good that they were defeated, since the “independence” they would have bought was rotten to begin with.)
The only real pride one should have as a Confederate partisan is Loser Pride, in which one invests one’s energy in a perennially losing entity primarily as an exercise in existential humility; i.e., Cubs fans. But even Cubs fans have the possibility for glory in that the Cubs are an ongoing concern. The Confederacy, on the other hand, is deader than a gay bar in Branson and will stay that way. It will never be anything but a loser.
Useful Flags: Loser! Look, the Confederacy was so screwed up that it couldn’t even get its flags right. The first official Confederate flag was the Stars and Bars, which was rather too similar to the flag of the United States; it made things even more confusing on the battlefield than they already were. So, the Confederacy decided on another flag, which was largely white. The problem with this flag was that it pretty much looked like a flag of surrender — it was that whole “field of white” thing it had going. Obviously this was problematic if in fact you weren’t trying to surrender, or alternately, if you were, since the Union folks wouldn’t be able to tell right off whether you were giving up or fixin’ to stab them with your bayonets, so they’d be better off shooting you just to be sure.
So out comes a third flag, which, unfortunately for the Confederacy, came out just about the time the Confederacy was imploding from total loserness and teetering on the cusp of non-existence. Shortly thereafter, another flag flew at the Confederate capital, Richmond, and other points south: The flag of the United States of America. And personally I’m hard-pressed not to see that as a vast improvement.
Given the voluminous evidence of the total loser-osity of the Confederacy, you’ll understand why every time I get a letter from someone proclaiming the Confederate flags to be a positive symbol, I just get flummoxed. Frankly, it’s difficult to think of any flags anywhere at any point in time that are as steeped in complete failure on as many social, cultural and political levels as these are. It’s just so damn sad that people are still out there trying to delude themselves otherwise.
The only explanation I can come up with that makes any sense is that certain people from the south simply cannot think rationally about the Confederate flags, much in the same way that certain otherwise totally rational Christians freak out about the fact they’re descended from stooped, hooting proto-primates just like the rest of us. It’s a blank spot in their brain in which they choose not to allow thought of any sort.
Fine. As I’ve said before, if you want to believe that the Confederate flags represent anything but an evil and ultimately pathetically inept institution, and all the consequent stupidity that followed through its use by segregationists, morons and demagogic flag wavers who’d rather rile up the easily excitable than actually make the South a better place for all its citizens, then by all means go right ahead. We’ll agree to disagree.
But please don’t write to me saying that the meaning of the Confederate flag has changed or should change. Short of wiping out the history of the Confederacy itself and pretending it never existed, this isn’t going to happen. The Confederate flag is a symbol of evil, and like most symbols of evil it’s much better used as a reminder of the damage evil can do than as a misplaced symbol of pride.
The Confederate flags are the symbols of losers, and those who glorify losers. I really wouldn’t have it any other way.
If you are over the age of 18, a US citizen, and you’re not voting on Tuesday, you are a stinky stinky moron, and my response, should I ever hear you bitch about the government of the United States, will be to laugh like a drunken horse right in your face. If nothing else, taking a few minutes out of your day to vote will entitle you to two to six years of unmitigated kvetching about your system of government. Talk about a return on your investment.
If you are under the age of 18, not a US citizen, and are voting on Tuesday — well, that’s just no good for anyone. Go shoot some pool or something.
I vote. Almost as important, I am politically independent. I’ve always been registered as an independent. Part of this is due to my contrarian nature regarding joining any organization; once you join something, the people in it start wanting you to do things with them. The next thing you know, your weekends and Wednesday evenings are given over to bake sales and irritating rallies and stuffing envelopes until your fingers are sausage-shaped agglomerations of paper cuts. When I die, I can guarantee you that my list of regrets won’t include not spending more time doing any of those things. Probably the only organization I see myself joining at some point is the PTA, although that will be exclusively a preventative measure, to head off any attempts to ban Huck Finn or the Harry Potter books before someone has to haul in the local branch of the ACLU, and my tax dollars go to pay lawyers rather than to educate my child.
But part of it is due to the fact I dislike political parties. This is usually for one or more of the following three reasons: Their overall platforms, their tenuous relationship to the actual principles of democracy, and their general emphasis on getting an agglomeration of their kind elected rather than finding the best representatives of the people those representatives are supposed to serve. Some parties set me on edge more than others — I’ve never made any secret that I distrust the GOP to such an extent that I tend to think that people who register Republican have some sort of unfortunate brain damage that keeps them from thinking clearly — but to be clear, I don’t like any of them. Ultimately political parties are about someone else telling me how I should exercise my franchise, based on the idea that they’ve got so many other people planning to vote the same way. Or to put it another way: “50 million Dubya fans can’t be wrong!” Well, yes, they can.
The one fly in the ointment here is that generally speaking enough people do register for and support political parties (specifically the Republicans and the Democrats) that here in the US those are the flavors you get when it comes to election day; indeed, over the history of my voting, I don’t think I’ve ever voted for someone who wasn’t of one or the other party (despite my general paranoid suspicion of the GOP, I have voted for Republicans in the past and would do it again in the future if — as in the past — there were compelling reasons to do so. See, that’s what being independent is all about). Certainly tomorrow I’ll be voting for candidates from those parties. But as is often the case, it’s not so much that I’m voting for a particular candidate as voting against another, and putting my defensive vote into a bin where it can do the most good. This set-up is hardly my fault. The problem isn’t that I’m not part of a political party, it’s that so many other people are.
Look at it this way: If you registered as an independent, more candidates would have to think independently and focus on what actually works for their constituency — an actual representative democracy rather than one where the parties offered their candidates just enough leeway from the party platform not to alienate the voters in their district. Political races would get proportionately less “soft money” from the outside world, meaning the would-be politicians would have to actively engage in grass-roots campaigning, which again means the candidates would have to be more responsive to the needs of the constituency.
There’d be less political triangulation in the primary season, in which moderate editions of political party candidates get the bounce to appease the hard-line nutbags that constitute the political baggage of both parties — or are bounced through the maneuverings of the other party, which is hoping for an opposing candidate that alienates the maximum number of moderate voters. Party politics would also enter far less into the sausage-grinding process of law-making, since the concern the lawmakers would have is whether they’re pleasing the people back home, not the party. We’d see ever-shifting alliances of politicians, based on specific issues, rather than an inflexible platform that grown men and women have to be politically “whipped” into adhering to.
Where’s the downside here? Is there a real downside to the end, not only of the two major political parties, but to all political parties altogether? Certainly not for the individual. You can still have your own political beliefs, you know. You can still be a “pro-choice” card-carrying member of the ACLU without being a Democrat; you can still be a “pro-family values” gun-toting member of the NRA without being a Republican; you can still be a “pro-polyamory” Ayn Rand-worshiping dateless freak without being a Libertarian. And you wouldn’t have to put those little cardboard signs on your lawn every two years.
Just think about it the next time you go to register. If you really want better political discourse in this country, do it by making the politicians listen to you and your neighbors, not your party affiliation. Vote independent. Hell, the reduction in political junk mail as you’re taken off the party mailing list will be worth it alone.
Based on a (very good and civil, mind you) e-mail conversation I had over the weekend, I think now is a fine time to expand some points I made here over a year ago, when I wrote my “Southern Heritage is a Crock” column. So here we go:
The Confederate States of America was a fundamentally evil institution. Period, end of sentence. That’s “evil,” spelled “E-V-I-L.” “Evil,” as in “morally reprehensible,” “sinful,” “wicked,” “pernicious,” “offensive” and “noxious.” “Evil,” as in “the world is a demonstrably better place without this thing in it.” Evil. That’s right, evil. Once again, for those of you who haven’t figured it out yet: Evil.
Really, I don’t know how much clearer I can make it.
The CSA was a fundamentally evil institution because it codified slavery into its system of government; N.B.: Article IV Section 2 of the Constitution of the Confederacy. And lest you think this was just some sort of namby-pamby sop thrown in the CSA constitution to please the slave-holders, let’s go to the historical record, to a speech by CSA Vice-President Alexander Stephens in March of 1861, in which he discussed the CSA Constitution at great length. The entire text is here, but allow me to excerpt considerably (and to place emphasis on the relevant passages) from Stephens’ comments about slavery and its role in the CSA, both in its constitution and in its very formation:
“The new constitution has put at rest, forever, all the agitating questions relating to our peculiar institution — African slavery as it exists amongst us — the proper status of the negro in our form of civilization. This was the immediate cause of the late rupture and present revolution. [US President Thomas] Jefferson in his forecast, had anticipated this, as the “rock upon which the old Union would split.” He was right. What was conjecture with him, is now a realized fact. But whether he fully comprehended the great truth upon which that rock stood and stands, may be doubted.
“The prevailing ideas entertained by him and most of the leading statesmen at the time of the formation of the old constitution, were that the enslavement of the African was in violation of the laws of nature; that it was wrong in principle, socially, morally, and politically. It was an evil they knew not well how to deal with, but the general opinion of the men of that day was that, somehow or other in the order of Providence, the institution would be evanescent and pass away. This idea, though not incorporated in the constitution, was the prevailing idea at that time. The constitution, it is true, secured every essential guarantee to the institution while it should last, and hence no argument can be justly urged against the constitutional guarantees thus secured, because of the common sentiment of the day. Those ideas, however, were fundamentally wrong. They rested upon the assumption of the equality of races. This was an error. It was a sandy foundation, and the government built upon it fell when the “storm came and the wind blew.”
“Our new government is founded upon exactly the opposite idea; its foundations are laid, its cornerstone rests upon the great truth, that the negro is not equal to the white man; that slavery — subordination to the superior race — is his natural and normal condition. This, our new government, is the first, in the history of the world, based upon this great physical, philosophical, and moral truth. This truth has been slow in the process of its development, like all other truths in the various departments of science. It has been so even amongst us. Many who hear me, perhaps, can recollect well, that this truth was not generally admitted, even within their day. The errors of the past generation still clung to many as late as twenty years ago. Those at the North, who still cling to these errors, with a zeal above knowledge, we justly denominate fanatics. All fanaticism springs from an aberration of the mind — from a defect in reasoning. It is a species of insanity.
“One of the most striking characteristics of insanity, in many instances, is forming correct conclusions from fancied or erroneous premises; so with the anti-slavery fanatics; their conclusions are right if their premises were. They assume that the negro is equal, and hence conclude that he is entitled to equal privileges and rights with the white man. If their premises were correct, their conclusions would be logical and just — but their premise being wrong, their whole argument fails.
“I recollect once of having heard a gentleman from one of the northern States, of great power and ability, announce in the House of Representatives, with imposing effect, that we of the South would be compelled, ultimately, to yield upon this subject of slavery, that it was as impossible to war successfully against a principle in politics, as it was in physics or mechanics. That the principle would ultimately prevail. That we, in maintaining slavery as it exists with us, were warring against a principle, a principle founded in nature, the principle of the equality of men. The reply I made to him was, that upon his own grounds, we should, ultimately, succeed, and that he and his associates, in this crusade against our institutions, would ultimately fail. The truth announced, that it was as impossible to war successfully against a principle in politics as it was in physics and mechanics, I admitted; but told him that it was he, and those acting with him, who were warring against a principle. They were attempting to make things equal which the Creator had made unequal.“
Lots of Confederate sympathizers like to say that what the Confederacy was really about was states’ rights, and all that. But I don’t know. Let’s put on one side a bunch of Confederate sympathizers who understandably want to downplay their fetish’s unfortunate association with that whole “people owning people” thing. And on the other side, let’s put the CSA’s second-highest executive, speaking about a Constitution he helped create, specifically discussing the role of slavery in his country’s formation. When it comes to what the Confederacy was really about, who are you going to believe?
Yes, the United States had slavery (and continued to have it, even during the Civil War; that Emancipation Proclamation thing of Lincoln was only effective in rebellious states), and isn’t blameless of other nasty habits, including brushing the natives off land it wanted to own. However, the United States did not codify evil into its Constitution by enshrining the practice of slavery; as Stephens proudly notes, it took the CSA, among all other countries in the world, to do that. The United States has done evil, but is not fundamentally evil in its formulation, as is the CSA.
It comes to this: When someone tells you the Confederacy was about something other than people owning people, they’re either being intentionally disingenuous or (more charitably) are ignorant about the deep and abiding role slavery had in the formation of the CSA. It was about other things, too. But, and in an entirely appropriate, non-Godwin-izing use of this particular political entity, the Third Reich was about more than just exterminating the Jews. It just happens that that’s the one cornerstone policy of the Reich that, you know, sort of stands out.
Given that the CSA is a fundamentally evil institution, it’s clear that any of its trappings are symbols of evil, including those flags Confederate sympathizers love so well. This is a pretty cut and dried thing: If the answer to the question “Was this symbol/flag/insignia/whatever used as an identifying object by the Confederate States of America?” is “Yes,” then it is, in point of fact, a racist and evil symbol. If you’re wearing such a symbol or otherwise endorsing it in some public way, it’s not unreasonable for people who see you wearing such symbols (particularly the descendants of former slaves) to wonder if you’re either racist and somewhat evil yourself or, alternately, just plain dim.
If you have an ancestor who fought for the CSA, then, yes, he fought for an evil institution — but no, I don’t think it makes that individual evil in himself. I think it’s perfectly reasonable and right for the descendants of Confederate soldiers to note the bravery and valor with which they fought, and to commemorate their individual efforts on the field. I think it would be nice if they additionally noted that it was sad that the government for which they fought was ultimately undeserving of their blood and defense, and that it was rightfully expunged from the world, but that’s another matter entirely.
(My correspondent this weekend asked me an interesting question as to whether a memorial for American soldiers who died in combat should include names of Confederate soldiers — the genesis of this question being some fracas he’d heard about at a northern university that was putting together such a memorial. My response is that it shouldn’t, for the reason that either the CSA was its own country, in which case its soldiers weren’t “American” soldiers (“American” understood to refer to citizens of the US), or it wasn’t its own country and the Confederate soldiers were in open and treasonous rebellion, and as a general rule one does not commemorate traitors, particularly ones whose rebellious actions ultimately caused the deaths of hundreds of thousands. I don’t have a problem with such memorials in formerly Confederate territory, but the rest of the United States is not obligated to follow suit.)
Now, look: I understand that for a lot of Confederacy fans, it really isn’t about race or anything else other than pride for the South. My response to that is: Groovy. Go for it. Love the South. What y’all need to do, however, is get some new symbols, some that don’t hearken back to the Dixie Days, when you went to war for the right to keep owning people. The Confederacy was evil, and now it’s dead, and its being dead is front and center the best thing that there ever was about it. There is the South, and there is the Confederacy, and a good thing for you and for the rest of us would be the realization that these two things don’t have to be synonymous.
I’ve been spending the last few days working with the constellations, drafting images for the cartographers over at Rough Guides to turn into actual star charts (hint: It’s easier to do when you’re making screenshots off of astronomy software, as I’ve been doing. Yes, you have to get permission from the software makers before you do this sort of thing. Yes, I did). There are 88 officially recognized constellations, but I ended up with 69 charts, because I paired up several of the smaller and/or less impressive constellations. Sad to say, many constellations just don’t rate their own star chart.
It’s not like they care, mind you. They’re just abstract representations of earthly objects projected into the sky by humans, using stars that have only a passing relationship to each other. Stars that look close in our night sky can be hundreds of light years apart; it’s that whole “space is three dimensional” thing (and actually, space is four dimensional — some stars we see in the sky may already be long-dead and gone, it’s just taking a while for the news to reach us, thank you very much Dr. Einstein).
I don’t think most people realize how many strange and pointless constellations are sitting up there in the sky. In a way, this is only natural (said, of course, ironically): Most of us live in urban areas, where light pollution and other sorts of pollution conspire to blank out fainter stars from our view. I remember living in Chicago and looking up and being able to see nothing but the 10 or 20 brightest stars — really not enough to go naming constellations by. Since many of the more obscure constellations are composed mainly of faint stars, why should people know them? When it comes to constellations, you can’t know what you can’t see.
The other reason is that constellations just don’t mean what they used to people. When you’ve got PlayStation 2, what do you need with the constellation Vulpecula (this is not a knock on PlayStation 2, said the Chief Entertainment Media Critic for Official US PlayStation Magazine, quickly, before he can get fired for disloyalty)? If you can make out and recognize the Big Dipper (which, strictly speaking, is an asterism, not a constellation), or maybe Taurus or Orion, you’re doing just fine.
Still, it’s interesting to know what weird and freaky objects are up there in the sky. For example, did you know that there’s a giraffe walking around near the celestial north pole? It’s the constellation Camelopardalis (pictured above), which, being circumpolar as it is, is always hovering in the night sky here in the northern hemisphere. Its near neighbors include two bears, a bobcat, a dragon, and a guy carrying around a couple of goats. I think it’s a little out of place.
The fact of the matter is that Camelopardalis is a fairly recent constellation, created just a few hundred years ago by an astronomer who noticed that there was this wide swath of space with no constellation in it; he just spotted a few dim stars (none higher than 4th magnitude, which means you won’t be able to see them in the suburbs), strung ’em together, and there you have it — instant constellation.
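(A quick, purely illustrative aside on what “4th magnitude” actually means — the comparison star below is my own example, not anything from the charts I’m drafting. The magnitude scale is logarithmic: a difference of 5 magnitudes is defined as a factor of 100 in brightness, so each magnitude step is a factor of about 2.512.)

```python
# Illustrative sketch: convert a magnitude difference into a brightness
# ratio. By definition, a 5-magnitude difference is a factor of 100.

def brightness_ratio(mag_faint, mag_bright):
    """How many times brighter the brighter star appears."""
    return 100 ** ((mag_faint - mag_bright) / 5)

# Camelopardalis's best star (4th magnitude) next to a 1st-magnitude
# star like Spica (roughly magnitude 1.0):
print(round(brightness_ratio(4.0, 1.0), 1))  # prints 15.8
```

In other words, the brightest thing in Camelopardalis is roughly sixteen times fainter than an ordinary bright star, which is why the suburbs wash it out entirely.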
Other lesser-known constellations in the northern sky: Delphinus and Equuleus (the dolphin and horse, respectively), Sagitta (the arrow) and Vulpecula (the fox), Corvus and Crater (a crow and a cup, and they actually share a mythological story together), Canes Venatici (hunting dogs) and Coma Berenices (Berenice’s hair, and isn’t that a weird one: A wig in space). The thing about these constellations is that if you can identify one of them, you’re probably the sort of person who can identify them all. Not that there’s anything wrong with that, mind you. I am writing an astronomy book, you know. I want you to know these things.
The earth’s southern hemisphere has a lot of unfamiliar constellations for most of us, but that’s to be expected, since most people on the planet live in the northern hemisphere, above the equator, and thus there are constellations down under that we never see: Chameleon, Pavo, Apus, Hydrus, Tucana, Octans — all circumpolar to the South Pole.
Be that as it may, the southern hemisphere has a lot of constellations that seem a little odd in their own right; many of them were described and created during the Age of Exploration (when the Europeans hopped in their ships to travel the world and surprise the natives of other lands with Jesus and smallpox), and so describe scientific objects: Microscopes, telescopes, compasses, air pumps, carpenter’s levels, chisels, pendulum clocks and octants. A fan of rationality though I may be, I’m not at all impressed with any of these: I want the night sky to be filled with wild animals and mythological heroes, not to resemble Galileo’s laboratory.
Given the fact that so many constellations are dim and/or obscure and/or just plain lame, I have an idea. I say we yank most of the constellations. I figure we have to keep the signs of the zodiac, otherwise we’ll have to fund an Omnibus Astrologers’ Assistance Bill in Congress, and then keep on some of the most obvious constellations in both hemispheres: Orion, Centaurus, Ursa Major and Minor, Crux, and so on. Say, the top 25 or 30 constellations get to stay. The rest: Gone. Then we start voting on new constellations — and by “we” I mean pretty much the whole planet. You may not know this, but the night sky is officially pretty damn Eurocentric, up to and including the parts that can’t actually be seen from Europe (although there is a Native American in the southern sky — Indus — and I bet he’s surprised to be so far from home). It can’t hurt to let the voting power of China or India put in a constellation or two (or three, whatever).
The only rules I’d put in would be that the new constellations couldn’t be of real people — thus avoiding the constellations Mao, Elvis and Dale Earnhardt — and that we’d pretty much want to avoid any technological advance of, oh, the last 100 years. That way we’re not stuck with the constellations TiVo, Nintendo or Cell Phone. Other than that, let ’em rip. We’ll let the astronomers keep the old constellations, of course, because there’s no point in having to rename the entire sky for scientific purposes. It’s like how they use Metric and stuff. You know, just because they do doesn’t mean we have to. And it’ll get people looking up at the sky again. That’s not bad.
Oh, come on. It’ll be fun. You won’t miss dumb ol’ Camelopardalis anyway.
Since it looks like we’re heading toward one, here’s my take on war.
1. It should be done if it’s necessary. For now, I’ll be vague as to what constitutes “necessary” because it’s very much open to interpretation.
2. If you’re going to do it, then you should make sure your opponent ends up as a grease spot on the wall, and that his country is reformulated so that it never ever bothers you again.
In the best of all worlds, both of these are fulfilled; you have no choice but to go to war, and you squash your opponent like a plump grape underneath a sledgehammer. But to be entirely honest, if I had to choose between the two of these, I’d pick number 2, if only because if we must participate in an unjust war, ’tis better it was done quickly. That way the stench of our pointless involvement is over quickly, and we expend as little matériel as possible (not to mention, you know, the deaths of those who fight our wars for us are kept to a minimum). Also, if you have the first, but not the second, what you end up with is a long-standing crapfest that you will ultimately have to revisit, whether you wish to or not.
Such as it was with the Gulf War. I’m not a terribly big fan of that war, but I’m perfectly happy to cede the point that it was necessary to some great extent. Yes, it was a war about oil. Thing is, while we can argue about the need to reduce our oil consumption (I tend to think the greatest advance in technology in the last couple of decades is the coming age of fuel cell and alternate energy cars), ultimately we still do need oil, and certainly needed it in 1990.
And of course it’s not like it was just a war about oil on our side of the fence; had Kuwait’s primary export been goat meat, Saddam would have been less likely to get all fired up about reintegrating the lost 19th province of Iraq. The Gulf War also offered the added attraction of the possibility of turning Saddam into a fine particulate mist with the aid of a well-placed smart missile. He’s a morally disagreeable enough person, and his regime worthless enough, to have made the case for its dismantling persuasive.
The Gulf War took place while I was in college, and I remember being at candlelight vigils in the quads, not to pray that the US stopped the madness of the attack, but that we kicked the righteous hell out of the Iraqis and that it would all be over quickly. I had a brother in the Army, who was over there in the fight. The longer the fighting went on the better the chance something bad would happen to him. Fortunately, it was over quickly, and we learned what happens when a large but poorly-trained, badly-equipped army goes head-to-head with a highly-trained, massively-equipped army: The poorly-trained army loses people by a ratio of more than 100 to 1. We squashed the Iraqi army, all right.
But we didn’t squash Saddam or his regime, and ultimately, I find this inexplicable. Saddam should not have been allowed to continue to rule. His personal detention (to say the least) and the dismantling of his political machine should have been part of any surrender. War isn’t a football game, after all, where the losing coach gets to try to rebuild for next season. Particularly in Saddam’s case, where he was the aggressor; he started it. The penalty for starting a war (which, to be clear, you then lose, miserably) should be a nice 8×8 cell with no phone privileges until you die.
Lacking the will to depose Saddam, we (and by we I mean the US and the UN) should have been willing to back up the weapons inspectors with the immediate and massive threat of force. Simply put, any facility that the weapons inspectors were denied entry to should have been bombed into pebble-sized pieces within 15 minutes of the inspectors leaving the area. Aggressive countries that have been defeated in war do not have the luxury of “national dignity” or whatever it is you want to call it. The fact that we just spent more than a decade letting a hostile regime jerk the world around is angrifying (a new word. Use it. Love it).
Let’s turn our attention to the new war we’ll be having soon. Toward the first point, is this war absolutely necessary? I doubt it. I think it would be much more useful to swarm the country with weapons inspectors and high-altitude bombers that track their every destination. After the first few times Saddam’s precious presidential palaces are turned into powder when the inspectors are turned back, they’ll get the clue. I see nothing wrong with reminding Iraq on the point of a missile of its obligation to let us look anywhere for anything. Clearly they won’t like it, but, you know. So what.
Many suggest that the purpose of the coming war will be to assure that Iraq cannot ever threaten any of us, but this achieves the same goal at lesser cost (and without exposing our military to undue chance of death). If indeed containing that threat were the goal of the upcoming war, this works just as well, and will have the additional value of being what was actually the correct response anyway, and only the better part of a decade late.
However, it’s clear that Dubya wants a war for purposes not related to weapons containment; indeed, his administration is utterly uninterested in that aspect of the Iraq problem, except as a convenient trope to sell the war to inattentive voters. Dubya wants regime change, and I can sympathize. Saddam has been in power a decade longer than he should have been, and I can think of worse uses of the American military than clearing out bad governments around the world. If Dubya said something along the lines of “First we get rid of Saddam, and then we’re going to pay a call to Robert Mugabe,” well, that’s a barricade that I’d be inclined to rush.
I’m not holding my breath on that pronouncement, however. Ultimately I suspect that Dubya wants Saddam out as part of a father-avengement thing, although what Bush I needs to be avenged for is unclear; Bush I isn’t dead at the hand of Saddam, after all, nor injured, nor in fact seriously put out in any recognizable way. I believe at best Dubya is avenging his father’s taunting at the hands of Saddam. If that’s the case, Dana Carvey had better go to ground as quickly as humanly possible. This is of course a poor reason to send a nation into war, but Dubya does have the advantage of a decade’s worth of stupidity in dealing with Iraq providing him with some actual legitimate reasons to plug Saddam.
Let’s get down to brass tacks. On balance, the end results of fighting this war will be (cross fingers) the removal of Saddam and the dismantling of his political state and (incidentally) a clearing out of whatever weapons capability that may exist. For those reasons, I’m not opposed to fighting a war with Iraq now. Be that as it may, even those people who fully support a war against Iraq are rather painfully aware that the stated reasons that the Dubya administration wants to gear up for war are window dressing for a revenge fantasy. It is possible to fight a just war for less than entirely just reasons. We’re about to do it.
Just, necessary or not, let’s hope that this war is total, complete and ends with Saddam dead or in chains, his system smashed, and Iraq occupied in the same manner as Japan or Germany was at the end of WWII, with an eye toward making the revamped country successful and benign (the scariest things to come out of Japan and Germany in the last 55 years, after all, were Godzilla and the Scorpions, respectively). Anything less will be, in a word, unforgivable. If we mean to wage war, let’s wage war like we mean it.
I’ve had a long and somewhat excruciating journey back from San Francisco, although thanks to standard airline practice of overbooking and begging for volunteers, I am now the owner of a free trip to anywhere in the continental US. Depending on future travel plans, I actually made a profit on the trip. So it’s not all bad. Be that as it may, I have to play catch up on a number of things, including invoicing my clients for the month. In short, no recap of my journey (I can assure you, however, that I had a really fabulous time). Maybe later.
Instead, I present to you an article I wrote for the Dayton Daily News, which appeared this last Sunday. It’s a response to a NY Times article by author Joseph Epstein, in which he suggested that everyone who thinks they might want to write a book should just keep that book to themselves. As you might expect, I think Epstein’s opinion on the matter is entirely full of crap.
I’d’ve linked to the article on the DDN site if it were up there, but it’s not. I’m presenting it here instead. This is the “unedited” copy; it differs slightly from the printed version, which was edited for space and does not have me using the phrase “shove it” — which to be entirely honest, I didn’t really expect to survive the editing process anyway. Anyway, here we go.
By John Scalzi
Author Joseph Epstein had a message for would-be authors this week: Drop dead.
“According to a recent survey, 81 percent of Americans feel they have a book in them — and that they should write it,” Epstein wrote in the September 28 edition of the New York Times. “As the author of 14 books, with a 15th to be published next spring, I’d like to use this space to do what I can to discourage them.”
And discourage them he does. Epstein — a professor at Northwestern whose most recent book, curiously enough, is called Snobbery: The American Version — notes that every year 80,000 books are already published in the United States, “most of them not needed, not wanted, not in any way remotely necessary.” Many people who want to write a book, Epstein suggests, do so with the idea of leaving something for posterity, and to proclaim their personal significance to the world. However, Epstein notes, “Writing a book is likely, through the quickness and completeness with which one’s book will die, to make the notion of oblivion all the more vivid.”
Ultimately, Epstein concludes, “Misjudging one’s ability to knock out a book can only be a serious and time-consuming mistake. Save the typing, save the trees, save the high tax on your own vanity. Don’t write that book, my advice is, don’t even think about it. Keep it inside you, where it belongs.”
Well, as the author of or contributor to several books, I’d like to offer a counter-proposal for you would-be authors: As nicely as humanly possible, tell author Joseph Epstein to take his advice and shove it. There are many things this world has too much of, but books and storytellers are not two of them.
Epstein is right about some things. Most of the people who think they want to write a book never will. Of those who start, most will give up about 50 pages in, when they realize writing a book is actually work. Most of those who manage to finish writing a book will never see their book published, or will have to resort to vanity presses, and most copies of the book will sit in the boxes in which they were delivered. Of those authors that do get published (and get paid for it), most will have the dubious pleasure of watching the book disappear off bookstore shelves in a few short months, to migrate to the remainders bin or be sent off to be pulped into paper towels. If you want immortality, negotiate with your higher power, not a book publisher.
But to say that book-writing is difficult and the publishing industry is competitive is not the same as saying that people should not write books. That’s like saying that because most people will never get signed by a major label or make an album, they shouldn’t bother to learn an instrument. Or that since most people will never be hired as a chef or open a restaurant, they should just stick to microwave meals. Thing is, most people have figured out that they’ll never be a four-star chef or a rock star. Most people don’t even worry about it. In each case, the skill is its own reward.
That’s why people should write books. They should write books because it shows a love of language and because writing is a skill worth having. I don’t think anyone would argue that we as a people should leave literacy and self-expression up to the professionals; among other things, that’s a fine way to narrow down that professional class.
People should also write books because despite Epstein’s implicit dismissal, every human being has a story to tell, and most of us have more than one. Admittedly, most people can’t write well enough to write a whole book. Most people can’t knit a sweater or compose a song, either — but could with time, effort and encouragement. Likewise, writing is a skill that improves with practice. Could having 81% of the American population working on their writing skills really be such a bad thing?
Anyway, here’s a secret writers don’t want you to know: Good writers are frequently not the professionals. As just one famous example, “Harry Potter” author J.K. Rowling was a divorced mother on public assistance before she started writing, scratching out pages in a café while her daughter napped. Presumably Epstein would have encouraged her to smother Harry Potter in the literary womb. Good writers come from everywhere; good stories — and good books — are often where we least expect them.
Let me provide another example closer to home. There’s a guy down the street from me named Darrell Gambill. He’s not a professional writer; he has a farm and works as a machinist at Goodyear. He had a story he wanted to write, about boxers and guardian angels. So he wrote it. His book, The Lion’s War, was published last year. I don’t know how well The Lion’s War is doing; I don’t expect to see it on the bestseller lists or taught in classrooms around the US, or made into a feature film. But so what? The author wrote the story he wanted to tell. I’m glad he didn’t save the typing, or the trees, or the tax on his own vanity. His book is outside, which, contrary to Epstein’s opinion, is where books belong.
Finally got my copy of Uncle John’s Bathroom Reader Plunges Through the Universe, and was mildly surprised to learn that the cover art was different than what is pictured on Amazon. It’s actually blue and black. So if you’ve ordered it, don’t be shocked when it looks different. It’s a feature, not a bug.
With that, I’m out of here for a few days. I’m off to San Francisco to see a few folks and to speak at JournalCon 2002; I’ll be on the panel discussion “Writing for Fun and Profit.” That’s fair since I do both. I’ll be back on Monday but probably won’t update this site until Tuesday at the earliest but more likely next Wednesday. Until then — well, it’s a big Internet. I’m sure you’ll keep yourself amused. Here’s one last science article to send you off.
Did Noah’s Flood Really Happen?
Some think they’ve found the historical event that launched the legend of Noah’s Ark. Others aren’t so sure.
You know the story of The Flood, of course: One day God, annoyed with humanity, decides that what the Earth really needs is a good long soak. So He commands His faithful servant Noah to build an ark to hold two of every species (except livestock and birds, for which he needs to carry seven pairs of each — a detail many people forget); once that’s accomplished, God unleashes a flood with rain that lasts for the fabled 40 days and 40 nights.
Many Christians take this account as the gospel truth. Others, however, wonder if the story of Noah isn’t rooted in some more local and less globally catastrophic event — one memorable enough, however, to spawn a series of flood legends. Besides the Biblical story of the flood, other civilizations in the Eastern Mediterranean area also had significant flood legends, including the Greeks (who had Zeus creating a flood to punish the wicked), and the Sumerians and Babylonians, whose flood legends also include a righteous family, and an ark filled with creatures (the Sumerian version even had the ark’s owner, a fellow named Utnapishtim, release birds to find land).
In 1999, two Columbia University researchers named William Ryan and Walter Pitman put out a book called Noah’s Flood, which offered a tantalizing suggestion: The flood in question happened near the Black Sea around 7,000 years ago. At this time, the theory goes, glaciers left on the European continent from the last ice age melted, sending their runoff into the Mediterranean Sea. As the Mediterranean Sea swelled, it breached the land at the Bosporus Strait, near where Istanbul stands. This breach released a flood of water into a freshwater lake that sat where the Black Sea is today. This freshwater lake was quickly inundated with salty Mediterranean water (at the rate of six inches per day) and grew to the present size of the Black Sea within a couple of years — bad news for the humans whose homes and villages were situated on the shores of the former freshwater lake, and certainly memorable enough to be the basis for many a flood legend.
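It’s worth a quick back-of-the-envelope check on those numbers. If you assume — and this figure is my assumption for illustration, not something from Ryan and Pitman’s book — that the old freshwater lake sat roughly 100 meters below the present Black Sea level, six inches a day does indeed get you there in under two years:

```python
# Back-of-envelope check of the "six inches per day ... couple of years"
# claim. ASSUMPTION: the old lake stood ~100 m below present sea level
# (a rough illustrative figure, not taken from the article above).

INCH_M = 0.0254
rise_per_day = 6 * INCH_M      # six inches per day, in meters (~0.15 m)
rise_needed = 100.0            # assumed total rise, in meters

days = rise_needed / rise_per_day
print(round(days / 365, 1))    # prints 1.8 (years)
```

So the arithmetic, at least, hangs together.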
Ryan and Pitman’s flood theory appeared to get a major boost in 2000, when famed underwater explorer Robert Ballard discovered the remnants of human habitation in 300 feet of water, 12 miles into the Black Sea, off the coast of northern Turkey. Ballard also found evidence of the Black Sea changing from fresh water to salt water: Sets of freshwater shells that dated back 7,000 years, followed by saltwater shells that dated back 6,500 years. Somewhere between those times, it seemed, the Black Sea was born out of a freshwater lake. It’s also the historically correct time for Noah’s famous flood.
Aside from the obvious housing problems that this rising tide of saltwater presented anyone living on the edge of the freshwater lake, it would also have the rather unfortunate side effect of killing anything that lived in the freshwater lake itself — most creatures that live in freshwater environments will die off in saltwater environments (and vice-versa).
However, the newly arriving saltwater species wouldn’t have been much better off: Salt water is denser than fresh water, so the new water from the Mediterranean sank under the fresh water, and the oxygen exchange between these levels of water was pretty much blocked. Any saltwater creatures that came along for the ride eventually suffocated. All those dead animals probably made the Black Sea a stinky place to be for a while. The silver lining here, however, is that oxygen-free water makes for a fabulous medium to preserve shipwrecks. Any boat that’s sunk to the bottom of the Black Sea since about 5500 BC is still there, unmolested by local marine life.
So, case closed, right? We’ve found the famous Biblical flood? Not so fast: In May of 2002 a group of scientists published an article in GSA Today, the magazine of the Geological Society of America, refuting the idea of a sudden flood of Mediterranean seawater flooding into the Black Sea area. Their contention is that based on mud samples they’ve found in the Marmara Sea (just on the other side of the Bosporus Strait from the Black Sea), there has been interaction between the Mediterranean and the Black Sea area for at least 10,000 years — suggesting that the Black Sea filled in over a much slower period of time: About 2,000 years or so. So while the water levels in the Black Sea definitely rose, the rate of their rise wouldn’t constitute a “flood” by any conventional standard of the word.
That’s where the debate stands at the moment — those who think the Black Sea was created in two years, and those who contend it was created in two thousand. In the scientific search for Noah’s Flood, the jury is still out.
I’m continuing my cavalcade of science-related pieces in honor of the release of Uncle John’s Bathroom Reader Plunges Through the Universe, to which I contributed a significant number of the articles contained therein. The pieces you’re reading here this week are rather similar in tone and content to the pieces you’ll find in the Uncle John’s books (explicitly in the case of the articles I wrote, and implicitly even in the one I didn’t, since they wouldn’ta bought so many of my pieces if they didn’t fit the general tone of their book), so if you like it, consider getting the book (click on the graphic for an Amazon link). Remember: You don’t have to only read it in the bathroom. Also remember the contest I’m running: The winner will get a whole stack of Scalziana. Yes, that’s a word. At least it is now.
In an Alternate Universe the Cubs Win the World Series Every Year
Ready to get your mind blown? Get a load of this: The “Many Worlds Interpretation” of quantum physics.
Chicago Cub fans are a long-suffering lot: Their beloved Cubbies have been choking for almost a century now, failing every year since 1908 to win the World Series. And there’s no relief in the form of Chicago’s other team, the White Sox, which have found themselves similarly throttled since 1917. At least their misery is shared by Boston, whose Red Sox have been laboring under the “Curse of the Bambino” since 1918.
But what if we told you Cubs and Sox fans that your misery is unfounded — and that in fact your teams have won the World Series? Not just since 1908 (or 1917, or 1918), but every single year since. That’s right. Each of these teams. The World Series. Every. Single. Year. It’s true.
“Not in this world,” you say. And you know what? You’re exactly right. Not in this world. But in other worlds, and in other universes, each of which has been created from our universe. It’s doable in something that is called the “Many Worlds Interpretation” of the universe, and it’s actually possible, thanks to the deeply freaky and unsettling nature of quantum physics.
Here’s how it works. Think about an electron. Now, most people think of an electron as a little ball that whirls around the nucleus of an atom, very quickly. But a quantum physicist would tell you that in reality, an electron isn’t a ball, but a haze of mathematical probabilities, all of which exist at the same time. The electron could be in point A, or it could be at point Z, or in any (and every) point in-between. It’s only when you make the effort to observe the electron — to actually look at the thing — that the electron “decides” where it wants to be, and picks one of its possible locations to be at. For folks who don’t regularly dabble in quantum physics, the idea of a sub-atomic particle “deciding” to physically exist only when you observe it is more than a little creepy. But, hey, that’s how the universe is; we’re just telling you about it.
Up until the 1950s, scientists handled the idea of an electron (or any quantum event) collapsing into one possibility by suggesting the idea of multiple theoretical “ghost worlds” in which the electron shows up at a different point — as many possible points as it’s possible for that electron to collapse into. However, these “ghost worlds” don’t actually exist; they’re just a theoretical construction that’s convenient to use. Well, in 1957, a Princeton graduate student named Hugh Everett asked: What if these “ghost worlds” actually existed?
In Everett’s theory, an electron collapses into a single point when it’s observed, just like it always does. But the event also creates entirely new alternate universes, into which the electron collapses to a different point — so the universes that are created are exactly the same, except for the position of that one single electron. How many universes are created? One for every single possibility. Depending on the quantum event, that can be quite a few universes, just from a single electron collapsing. Consider how many electrons exist in the universe (our universe), and you’re presented with a staggering number of universes being created, every instant, throughout the entire span of time that our universe has existed. And that’s just with the electrons (there are, of course, other quantum events).
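To put a toy number on that “staggering number” — and to be clear, the two-outcomes-per-event setup below is a cartoon I’ve made up for illustration, not real quantum mechanics — the branching compounds multiplicatively:

```python
# Cartoon illustration: if every quantum "decision" had just 2 possible
# outcomes, n independent events would split one universe into 2**n
# branches. (Real quantum events can have far more than 2 outcomes.)

def branch_count(outcomes_per_event, n_events):
    return outcomes_per_event ** n_events

print(branch_count(2, 10))              # prints 1024
print(len(str(branch_count(2, 300))))   # prints 91 (digits in 2**300)
```

Ten coin-flip events already give you over a thousand universes; three hundred give you a number with 91 digits. Now remember the observable universe holds something like 10^80 particles, each participating in quantum events constantly.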
Again, this idea is truly wild. But the thing is, the physics on this theory checks out. It really is possible that the universe works this way. The catch (and there’s always a catch) is that there’s no way to test it. Any universes that are created from the quantum splittings are impossible for us to visit or observe.
What happens with these possible “other” universes? Well, they just keep existing — away from us, in their own space. There’s no reason to assume that what happens in those universes from the instant they split off from our own is what happens in our universe. In alternate universes, anything can — and as far as we know, anything does — happen. In a universe that split off from our own in 1908, it’s perfectly conceivable the Cubs came back in 1909 to beat the Pittsburgh Pirates to the NL pennant — and then took the Series again from the hapless Detroit Tigers for the third year running. And then came back in 1910 (which they did in our universe, incidentally), and won the Series again (which they did not). And again in 1911, and in 1912, and so on and so on. Admittedly, this would get boring for anyone who’s not a Cubs fan. But don’t worry, guys. In other universes, your team is the one that wins every single year, or (if you choose not to be greedy about it), any year you wish for it to win.
This doesn’t just work with baseball, either. Ever wonder what it would have been like if you’d married someone else? You did — in another universe. Wanted to be an astronaut? You are — in another universe. Wanted to race a monkey-navigated rocket car across the Bonneville Salt Flats? You did it, baby. Just not here. Yes, it’s a little sad the other yous are having all the fun. On the other hand, think of all the other yous that are sleeping on steam grates or doing time in the big house for petty larceny or woke up in the hospital with their bodies amputated from the third vertebra down and a doctor saying “What the hell were you doing, letting a monkey navigate your rocket car?” You’ll realize this world is not so bad. Even if the Cubs don’t have a chance.
I’m continuing my cavalcade of science-related pieces in honor of the release of Uncle John’s Bathroom Reader Plunges Into the Universe, to which I contributed a significant number of the articles contained therein. The pieces you’re reading here this week are rather similar in tone and content to the pieces you’ll find in the Uncle John’s books (explicitly in the case of the articles I wrote, and implicitly even in the one I didn’t, since they wouldn’ta bought so many of my pieces if they didn’t fit the general tone of their book), so if you like it, consider getting the book (click on the graphic for an Amazon link). Remember: You don’t have to only read it in the bathroom. Also remember the contest I’m running: The winner will get a whole stack of Scalziana. Yes, that’s a word. At least it is now.
Have We Got a Disease For You!
Looking for a little something to make you stand out from the infectious crowd? One of these maladies may just do the trick.
We know how it is. You want to be different from the other guy. Everyone else is walking around with a cold or a flu — your standard issue rhinovirus or influenza bug — but you want something different. Something that you’re just not going to catch on any street corner. Well, then, come on down. Right now we’ve got a nice suite of diseases, maladies and genetic conditions that will make you stand out in the crowd, if only because you’ll have to be locked in a sterile room with two or three levels of biological isolation protocols placed between you and the outside world. Won’t that be fun? Oh, don’t worry. Some of these diseases and maladies aren’t even fatal.
Carotenosis: Let’s start off with something relatively benign, shall we? Carotenosis comes from overingestion of beta carotene, a pigment that you’ll find in vegetables such as carrots — your body turns it into vitamin A, which is, generally speaking, a good thing. Ingest too much beta carotene, however (say you eat nothing but carrots and drink nothing but carrot juice, just because you were curious to see what would happen) and eventually your skin will turn orange. That’s carotenosis, a real example of “you are what you eat.” Carotenosis won’t kill you, it’ll just make you look funny, but massive doses of vitamin A can cause: Nausea, vomiting, irritability, hair loss, weight loss, liver enlargement, menstrual problems, bone abnormalities, and stunted growth for the kids. So if you find yourself turning orange, lay off the rabbit food for a couple of days.
Hereditary Methemoglobinemia: You say orange really isn’t your color? How do you feel about blue? This genetic malady causes a malformation in the hemoglobin molecule in your blood, reducing your blood’s capacity to carry oxygen. This turns arterial blood sort of brownish, and in folks of a Caucasian stripe, this will give your skin a distinct — and distinctive! — bluish tinge. True, your blood’s not exactly richly oxygenated, but that’ll just give you a fashionable appearance of ennui. But there is a catch: Hereditary methemoglobinemia is recessive, so by and large it’s prevalent only where (oh, how to put this delicately), a family tree has a few too many recursive branches. This was so in the most famous case of this ailment, the Fugate family of Kentucky, where a high incidence of cousin-marrying eventually caused a number of Fugates to be blue, and not just in the traditional “I’m feeling mighty low” sense.
Want blue skin but would rather not have father-uncles and sister-cousins? There is also acquired methemoglobinemia, which you can get by exposure to certain toxic chemicals. However, the side effects of this variant are headache, fatigue, tachycardia, weakness and dizziness at low levels of exposure, followed by dyspnea, acidosis, arrhythmias, coma, and convulsions at higher levels, which is then followed by death. Speaking of feeling blue.
Kuru: Enough with this skin color nonsense, you say. Give me a truly distinctive disease! Fine, if you really want to make an impression, try on kuru for size. Even the name tells you it’s something truly nasty, since “kuru” means “trembling with fear” in the language of the Fore, the New Guinea highland tribe in which the disease reached epidemic proportions in the middle of the last century. Kuru’s first symptoms are headaches and joint pains, followed several weeks later by difficulty in walking, and uncontrolled trembling while asleep or while stressed (which would be most of the time, considering). Tremors become progressively worse, confining the patient to bed. This is followed by total loss of the ability to swallow or eat, and after that you’re just a hydrating IV drip away from doom. Oh yes, you’ll definitely be the belle of the ball with this one.
One minor detail, which would be how you catch kuru in the first place: You have to eat brains. Specifically, human brains. Even more specifically, human brains already infected with kuru. This is how the Fore got it — as part of their funeral rituals, they ate the brains of their dead. Not quite up for a Hannibal Lecter moment? Well, fine. Let’s move on then, shall we.
Necrotizing Fasciitis: Or as you know it, flesh-eating bacteria! The funny thing is (and that’s funny as in ironic, not funny as in “non-stop chucklefest”), the affliction does live up to its name: The bacteria involved in necrotizing fasciitis (which include the usually somewhat less virulent Group A streptococcus that give us run-of-the-mill ailments like strep throat) can actually eat through an inch of flesh in the space of an hour. What will make you truly paranoid is that early symptoms of necrotizing fasciitis are remarkably similar to flu symptoms, including vomiting, diarrhea, dehydration, weakness, muscle pain, and fever.
It’s the second set of symptoms — very painful infection around a cut or a bruise and/or a rapidly growing infection around said bruise — that will have you rocketing towards the doctors and praying that Western Civilization’s rampant misuse of antibiotics in everything from bathroom soaps to livestock feed hasn’t caused your personal area of infection to be packed with drug-resistant bacteria that will simply laugh cruelly at whatever it is the doctor administers to fight them.
The good news here is that the odds of your flu-like symptoms devolving into necrotizing fasciitis are a couple hundred thousand to one (your odds are somewhat higher if you’ve just had chicken pox, however). If you really want to reduce your fretting, wash any cut or scrape you get with warm soapy water, and keep the wound dry and bandaged, just like you’re supposed to. And rethink your desire to have a truly unique disease. After considering necrotizing fasciitis, a nice run-of-the-mill cold is beginning to look mighty attractive.
All right! One of my books for 2002 is now out: Uncle John’s Bathroom Reader Plunges Into the Universe (my other book for 2002, The Rough Guide to the Universe, now looks like it’s going to be released in 2003. Which is good — too much product released at the same time is no good, especially when both books have the word “Universe” in the title). Of course, I advise you all to run out and buy this book as soon as possible — if you can’t wait to get to a bookstore or are otherwise incapacitated (you’re being held down by stoats, say), then head on over to Amazon.com. I’m all about facilitating purchases.
The Uncle John books, if you’re not familiar with the series, are compilations of short articles (sized just right for light bathroom reading, hence the title); this particular one has a science theme — not just astronomy, but also health and earth sciences. I should note for the sake of clarity that I am not the “Uncle John” of the title: Indeed, technically, this is not my book at all. I am but a mere contributor. However, I wrote 40 articles in the book, which by page count is about a quarter of its total, and I think what I’ve written is pretty interesting. And I have very high regard for the Uncle John’s folks, so even if I hadn’t written a fair chunk of this book, I’d want you to go out and buy it anyway.
So what did I write about? Here is a sampling of the titles of articles I wrote for this one:
*Cool Astronomical Terms to Make Friends and Impress People
*Read a Weather Map Like a Pro
*How to Make a Black Hole
*”You Think I’m Mad, Don’t You?” (Mad scientist movies)
*The Body’s Second String (Little-known organs and systems)
*Big Moments in Forensics
*10 SF Books Even Nongeeks Would Love
And there are 33 others spread around the book. No, I’m not going to tell you which ones they are. I want you to guess.
In fact, let’s make this a contest. Go out and buy Uncle John’s Bathroom Reader Plunges Into the Universe (or, if you’re cheap and can weather annoyed bookstore staff, thumb through it at the store) and then send me the list of articles you think I wrote. The person who gets the most correct will win a John Scalzi Multimedia Gift Pack, which includes an autographed copy of The Rough Guide to the Universe (which is solely written by yours truly), an autographed copy of The Rough Guide to Money Online (a classic of the online money management genre!), a personally burned CD compilation of Musicforheadphones plus extra tracks, and an electronic copy of Old Man’s War, the novel I’m currently shopping around. It’s a fabulous gift pack with a street value of, oh, I don’t know, $28 or thereabouts. The winner will get it sent whenever it is I get my author copies of Rough Guide to the Universe.
The rules: First, you have to send your list of guesses to me by December 31, 2002. Second, put “Universe Article Guesses” as your e-mail subject header, so I can filter them to a special mailbox and keep track of them. Third, if you were on the list of readers that I sent the Uncle John articles to while I was writing them, obviously you’re not eligible (and if you are one of these people, don’t tell anyone the titles of the articles; that’s just not fair). In the event of a tie, I’ll pick a winner by flipping a coin or whatever. No purchase necessary, but you’ll look fairly cheap if you don’t.
To give you a taste of the tone of the articles in the book, all this week I’ll be posting articles that I wrote for Uncle John’s Bathroom Reader Plunges Into the Universe but which didn’t make the final cut for whatever reason (4 didn’t make it; 40 did. I have no complaints). The first one is below. I’ll post another on Tuesday, one on Wednesday and one on Thursday (after which I’ll be out for a few days while I travel). So enjoy, and good luck with the contest.
You Smell Great!
Thinking about getting that pheromone-laden cologne? Hold that thought.
There’s a new special ingredient in cologne these days: Pheromones — chemicals your body secretes, or so you’re being told, that can help you attract the sort of hot mate that will get all slobbery with little or no prompting (or even noticeable social skill) on your part. And you think to yourself Finally. That whole flowers-and-chocolate-and-pretending-to-be-interested-in-the-conversation thing was killing me. And off you go, to buy your pheromone cologne and let the chemicals do the talking for you. Well, before you pull out your credit card, let’s have a quick reality check about pheromones, humans, and you.
First off: Yes, pheromones really do exist, and they are chemicals that living things give off, not unlike a scent, in order to communicate with other members of their species. These pheromone communications are all over the board: Ants and termites, for example, will use pheromones to lay down a trail that other ants and termites can follow. Queen bees use pheromones to signal bee pupae that they’re going to be worker bees and not queens themselves. Wounded minnows will release pheromones to alert the rest of the school of fish to danger, a sort of fish version of the wounded soldier who says arrrrgh, I’ve been shot, go on without me.
However, many species use pheromones specifically to attract sexual partners. Insects are famous for this: Certain species of moths are so sensitive to a female moth’s pheromones that just a couple of molecules of it can get them running (well, flying. You know what we mean). Male wild boars have a pheromone that will actually cause a female of the species to lock her hind legs into a sexually receptive position: No flowers-and-chocolate routine needed there. Even non-animals get into the act: Fungi, slime molds and algae all use pheromones to make themselves super-sexy to other fungi, slime molds and algae. It’s not love, the fungi/slime mold/algae says to its excited new friend. It’s just pheromones.
So there you have it: Pheromones = instant sex appeal, right? Sure, if you’re a slime mold. But it’s never been proven that humans use pheromones to make themselves more attractive to the opposite sex. In fact, until 1998, it wasn’t even clear that humans were receptive to pheromones at all. There were several reasons for this, not the least of which was that the organ used by many animals to receive pheromone signals — a thing called the vomeronasal organ — is all but non-existent in humans. What small vomeronasal organs we have are tiny notches tucked away in our noses, and it’s not at all clear that they’re connected to anything.
What changed that was a study performed at the University of Chicago by researchers Martha K. McClintock and Kathleen Stern. While an undergraduate at the U of C in the early 70s, McClintock noted that the menstrual cycles of the women in her dormitory eventually synced up (it is, by the way, very typical U of C undergraduate behavior to notice this sort of thing), and suspected pheromones might have something to do with it. To check this, she collected sweat samples from nine women (by having them wear gauze in their armpits), and noted where in their menstrual cycle those women were. Then she took those sweat samples and daubed them under the noses of 20 other women. Yes, yes, total icks-ville. Science is not for the squeamish.
What she found was that the women who sniffed the sweat had their menstrual cycles noticeably lengthened or shortened, depending on what sweat they were sniffing. Sweat from women early in their cycle caused the sniffers to shorten their own cycles, while sweat from women later in their cycle had an opposite effect. There you had it: The first strong indication that humans can and do pay attention to pheromones. McClintock’s study left open many questions, such as how exactly the pheromones did their signaling, or even whether pheromones would work on other people who weren’t actively sniffing sweaty gauze. But those are details to be worked out.
There are some indications that humans use pheromones (or something very much like them) in helping to determine mates — in other studies, women appear to be attracted to the smell of men who have immune systems that are different from their own (this study involved women sniffing sweaty shirts). But again, it’s important to note that so far, there’s no conclusive study that specifically identifies a human pheromone that actually makes one sex more attractive to the other sex (or one sex attractive to the same sex, if you want to go that direction).
What does this mean for your pheromone-laced cologne? Basically that it’s a waste of money. The only thing we’re reasonably certain human pheromones do is manipulate the menstrual cycle, and generally speaking, that’s not something you really want to fiddle with, for everyone’s peace of mind. Your best course of action at this point is to stick with your current cologne and try to brush up on your social skills. Hey, people have been finding love the old-fashioned way for millennia, without the use of pheromones (so far as they knew). It could work for you too. Flowers and chocolate can’t hurt either.
Here’s an interesting question for you: Considering that the music industry essentially dictates the shape of the youth culture, how can it be so thickheadedly clueless about talking to teens about file sharing? The latest music industry salvo in this direction is a Web site called MusicUnited.org, which is designed to bring home the point that nearly all file sharing is illegal and wrong. Let’s take a moment and discuss all the ways that this site is going to fail miserably.
1. It’s not a cool site. It’s not cool in its intent, of course, since its intent is to keep kids from doing something they want to do, which is to share files with each other. But you can get past that if you can get your message across. The site totally screws this up right from the beginning: One of the headlines on the front page says of file sharing: “It’s illegal and it’s a drag!”
A drag? I mean, good Lord. I’m 33 and I winced when I saw that. It immediately calls to mind your junior high health teacher trying to use hep slang to tell you about why drugs are bad. The worst thing an adult can ever do when speaking to “the kids” is try to use current slang and fail (the second-worst thing is to use it and use it correctly, and yet still sound like you have no clue). The site immediately sets itself up to be mocked purely on the basis of how it presents its message, which means the message won’t even get considered.
2. The site threatens. Despite the nice (but too conservatively-designed) graphic design, the textual tone of the site is one of distinct and total menace. Every bit of text reinforces ominously that file sharing is illegal (and wrong), and that there are severe penalties if you’re caught: The site’s favorite bit of trivia in this respect is the maximum penalty for copyright violations, which is five years in the stony lonesome and a $250,000 fine. “Don’t you have a better way to spend five years and $250,000?” asks the site.
Please. The minute the music industry actually ever pressed for the maximum sentence for copyright violations to be imposed on an actual teenager is the minute the shit really hits the fan. No one in their right mind believes that the penalty for a college student downloading the White Stripes album from Kazaa should be half a decade of prison rape and being traded in the exercise yard for a carton of Kools. If the RIAA actually pressed for this for a single casual downloader of music, the backlash of public opinion would destroy the music industry. They know it, and more importantly the kids know it, too. Waving around a big threat stick when you have no ability to use it makes you look sad, desperate and weak, which is certainly no way to get a teenager to listen to you.
3. The site romanticizes file-sharing. The music industry is using the same style of rhetoric against file-sharing as responsible adults used against drug use in the 60s and 70s, during which time, you’ll recall, the kids made drug use pretty much the cornerstone of youth culture. Because anything that really pisses off the grownups is worth doing more than once.
Now, this is not going to be an exact analogy, and thank God for that, since the last thing the world needs is a Cheech & Chong-like pair of wacky file sharers making movies about ripping off the music industry. But it’s good enough, and is certainly more than enough to make the kids feel that by downloading Vanessa Carlton, they’re striking a blow against the Man, or whatever it is the kids are calling “the Man” these days.
The site additionally compromises its position by featuring an area that details the civil and criminal penalties parents can face when teens download files, thereby informing the kids that here is yet another way that they can get back at their parents for having birthed them and forcing them to grow up in suburbia. Good move.
4. The site picks the wrong musicians to plead its case. On the site and in a newspaper ad that runs today, the music industry hauled out the stars to make its point, featuring quotes by Britney Spears, Nelly, Dixie Chicks and (wait for it) Luciano Pavarotti. This is supposed to reflect the diversity of the musicians who want you to stop sharing files. The problem is, each of these artists is a multi-platinum artist whose net worth is in the millions. Britney Spears is worth over $100 million personally, as she noted recently in a People interview. The kids are not going to be sympathetic to a bunch of millionaires complaining that their money is being taken from them. I know this because I’m not sympathetic to them.
The sort of musicians who should be highlighted in a campaign like this are the ones who actually will get hurt by file sharing: New musicians, musicians with smaller followings, musicians who aren’t already millionaires. The Web site features a couple of these, hidden so far down that their quotes are buried. But you tell me, which of these quotes is more compelling to you?
“Would you go into a CD store and steal a CD? It’s the same thing, people going into the computers and logging on and stealing our music. It’s the exact same thing, so why do it?” — Britney Spears
“I live with my drummer and guitarist and we have no money. Our survival is based solely on the purchase of our music. Music is not free. Even the street performer gets a dime in his box.” — James Grundler, Singer/Songwriter, Member of Paloalto.
Personally, I think the “Dude, I’d like to eat” line from a struggling musician carries rather a bit more moral weight than the “Golly, it’s like stealing from a CD store!” line from a 20-year-old woman who has more money than she can reasonably expect to spend in a lifetime. If nothing else, the kids who want to be musicians will feel closer to the situation of the guy in Paloalto than to Britney.
The final problem, however, is one that the music industry made for itself, which is the widely held perception that music is both absurdly expensive and that the vast majority of the money that gets paid for a CD goes to everyone but the people who actually make the music. The reason for the perception is that it’s true. Why should a kid believe that $18 is a fair price for a CD when he or she can burn one at home for about 50 cents? The economics of record contracts are now common knowledge as well, and when a kid realizes that his or her favorite band can sell millions of CDs and still be in the hole to the record company, there hardly seems to be an incentive to support a system that appears to screw the people who make the music.
The site notes that making an album these days can cost $1 million or more, but this doesn’t argue against pirating music, it argues against spending so damn much to make a record. I review indie albums every week on my IndieCrit site, and the sound quality of a sizable percentage of those recordings rivals anything you’ll hear from a major label. I can guarantee you those indie artists aren’t spending a million making their CDs. They’re also not to blame for creating a system of promoting music that requires an outlay of hundreds of thousands of dollars to get music added to the playlists of ever-more consolidated radio stations, which play ever-safer music.
I’m not suggesting the kids are striking a blow for artists’ rights by boycotting the unfair system. That’d be a little much. Most of them just like not having to pay for the music. It’s more money they can spend on video games. But it wouldn’t hurt if the music industry wasn’t perceived as a bloated, vaguely vampiric entity that appears to survive by sucking the life force out of the people who make the music that kids respond to.
If I were the music industry, I’d scrap the MusicUnited.org site and try for something that starts with the assumption that the kids aren’t enemies who have to be threatened, but are actually reasonably intelligent people who might be persuaded to spend money to support their favorite musicians if it could be intelligently explained to them why this is actually a good thing to do. In the meantime, the site is the music industry equivalent of “Just Say No” — the right message, perhaps, but the utterly wrong way to say it.
Krissy came home the other night with Who Moved My Cheese? It was pressed onto her at work by one of the managers at her new place of employment, who told her that all new hires were actively encouraged to read it (Here’s a clue to the sensible Midwestern frugality of her new place of work: Rather than buying a copy for every new hire, which would cost $20 a pop at list price, they simply lend out the same copy over and over). My understanding is that it’s arguably the number one business motivational book on the market. Well, I’m in business, and I prefer to be motivated, so I read it. And now I can say, if this is what people are using to motivate themselves in corporate America today, no wonder the Dow is where it’s at. It is, without question, the stupidest book I have ever read.
The motivational lessons in the book come in the form of a parable, suitable for reading to your three-year-old, about four creatures in a lab-rat maze. Two of them are mice, and two of them are little mice-size humans, and they eat the cheese that’s placed in a certain location in the maze. Eventually, the amount of cheese decreases and then disappears. The mice, who noticed the decreasing amounts of cheese, take off through the maze to find more cheese. The little humans, on the other hand, bitch about the loss of cheese and reminisce about the days when cheese was plentiful. Eventually one of the humans gets off his ass and heads out to find more cheese, and in doing so, has a motivational epiphany every few steps, which he feels compelled to scrawl on the walls of the maze.
Eventually he finds more cheese in the maze, as well as the mice, who have grown fat and happy with their new store of food. The little human considers going back for his friend, but then decides that, no, his friend must find his own way through the maze. He leaves his old pal to starve, as that’s almost certainly what his dim, stubborn friend does, and feels all shiny and self-important for finding his new cheese.
The entire parable is framed with a conversation between several friends, one of whom is telling the parable, and the rest of whom spend the parable’s epilogue wondering how they ever got through their professional and personal lives without hearing about the cheese (an interesting rhetorical cheat, incidentally — the author is confirming the usefulness of the book by creating characters that are helped by its philosophy, but which don’t actually exist in the real world. This is a very Ayn Rand thing to do).
The overall idea of the book is that change is inevitable and if you’re smart, when it happens you won’t spend much of your time bitching about how you don’t like change; instead you’ll adapt to the change and get on with your life. The “cheese” represents all the things you’ve come to rely upon. Well, let me save you 20 bucks and boil the lesson of the book down to exactly five words: Shit Happens. Deal With It.
Also, the book throws in a few other lessons, which are hopefully unintended:
1. Life is a maze that has been laid out without your control or consent. The best you can do is run through it and hope you run into the things that make you happy.
2. You have no control over the things that make you happy — their quantity and quality are controlled totally by outside forces, with whom you cannot interact, and which have no interest in your needs.
3. The mice in the parable understood that the “cheese” was decreasing but neither informed the little humans nor seemed interested in helping them once the cheese was gone. Mice represent the “other.” You cannot trust the “other.” Stick to your own kind (alternately, the mice represent management, who know more about the reality of the situation, and the little humans are the rank-and-file, intentionally kept in the dark by management. Either way: Not to be trusted).
4. The one little human found more cheese but decided not to return to help his friend, rationalizing that it was up to his friend to find the way. Moral: Once you’ve got yours, you don’t need to share. It’s not your responsibility to share your knowledge with others, even if the cost of sharing that knowledge is trivial and doing so will immeasurably improve their lives (i.e., in this case, keep the other little human from starving to death).
In other words, the formulation of the book posits a world that is confusing and sterile, in which the things that might make us happy all exist outside of ourselves, and in which the ultimate successful qualities are selfishness and paranoia. I wonder how popular this book was at Enron and Global Crossing.
Look, people. If you ever find your “cheese” decreasing, don’t run around frantically in a maze, looking for something else to replace it. Simply learn to make cheese. Which is to say, be responsible for creating your own happiness internally instead of relying on something outside of you to provide it, and living in fear that it will go away. This way, when the cosmic forces take away your cheese, you can look up and say, screw you and your stinkin’ maze. I’ll move when I damn well feel like it.
Even better, you won’t have to compete with others for your cheese. Heck, eventually you’ll have surplus cheese to give to your friends who might be starving for some. You can teach them to make cheese, too. Give a man a piece of cheese, and he has cheese toast for a day. Teach him how to make cheese, and you’ve got a life-long fondue party pal.
Mmmm. Fondue. Much better than scampering blindly through a maze. Or paying $20 for a book that condescendingly tells you that’s what you should be doing with your life.
Interesting feedback from the Bob Greene thing the other day. Aside from the journalistic schadenfreude of watching Bob Greene fall — which is considerable, so that’s a warning to all of you who wish you had his career up until last weekend — the largest spate of e-mail I got about it came from 40-plus-year-old men who wanted me to know that they don’t like 18-year-old girls. Not at all. My universal response to these fellows was: Good for you. I’m sure your wives are proud.
As it happens, I’m not so keen on 18-year-olds myself; in the grand scheme of things, procuring one today would be more trouble than it’s worth. This has nothing to do with their physical charms (about which I’ll comment in a minute) and pretty much everything to do with the fact that at the age of 33, the only two things I have in common with the typical 18-year-old girl are that we are both human and speak the same language, plus or minus a couple dozen words of slang. To be terribly male about it, I suppose I could have sex with an 18-year-old if I had to. I just wouldn’t enjoy the post-coital conversation very much. So if it’s all the same I’ll pass. Fortunately for me, there are not great throngs of 18-year-old hotties at my door, licking the window panes to entice me to let them come up for a romp. You can imagine my relief.
Over at Slate, Mickey Kaus begs to differ about my point concerning Greene’s encroaching mortality being a consideration for his boinking a teenage woman; Kaus writes:
“Why do men — like Scalzi here, or Warren Beatty in Shampoo (or whoever wrote Warren Beatty’s lines in Shampoo) — have to explain their desire to have sex with attractive women in terms of a struggle against mortality (“middle-age-death-denying” in Scalzi’s words)? You mean they wouldn’t have sex with young women if they were in good shape and knew they were going to live to be 300? They didn’t want to have sex with young women when they were young themselves? It’s sex! Millions of years of evolution have designed men to want it and enjoy it. It’s stupid to try to explain this urge in some highfalutin’ literary or spiritual way — and revealing that even relatively no-BS men like Scalzi (or Nick Hornby in High Fidelity, to name another) feel that they have to.”
Let’s separate this out. There’s the first point, on which Kaus is entirely correct, which is that boinking hot young women is really its own excuse. You all know the drill concerning the genetic and cultural reasons for this, so let’s pretend I’ve made all those points so we can move on. There is the point to be made here that (some) men are turned off by the yawning chasm in life experience between themselves and the average 18-year-old, and therefore prefer the company of women nearer their own age. As I mentioned earlier: Good for them.
On the other hand: Provide a man with the brain of a 45-year-old woman (yes, he’ll suddenly become smarter, ha ha ha, thank you very much) and tell him he can put it either into the body of a fit, attractive 45-year-old woman, or into the body of a fit, attractive 18-year-old woman. Let’s all not pretend that the 45-year-old body is going to do anything but sit there with a blinking neon “vacancy” sign flashing over its head. In a perfect world (for men) women would hover around age 23 forever (In a perfect world for women, I expect you’d see a lot more variation in age, from a Heath Ledger 22 to a Pierce Brosnan 49, with the median being a Brad Pitt 38).
Still, conceding this point, which I readily do, doesn’t mean that middle-aged dudes don’t actually see (or at least rationalize) porking the young as a fist in the snout of death. It’s not especially highfalutin’ to point it out; it’s actually pretty sad and common. If you’re thinking about death, or how you’ve squandered your potential in middle management or wherever, you want to do things that make you feel alive. Having sex with young women is the male mid-life crisis version of the Make-A-Wish Foundation. It doesn’t keep you from dying, but at least you get to go to the Magic Kingdom one more time.
Whether this is the particular case with Bob Greene is another matter entirely. As journalist Nancy Nall notes on her site, Greene has had a reputation as a skirt-chaser for a while now, so if these scandalous rumors are true, he’s merely pursuing a modus operandi honed over decades (eeeeew). In which case Kaus carries the day. This encounter really is less about middle-aged angst than it is just about making a fast and easy booty call on the Youth of America: Dinner and dessert. Let’s hope it was at least an expensive dinner. Taking the girl out to Harold’s Chicken Shack before slipping her the drumstick would just be chintzy and sad.
Moving away from the realm of horndog newspaper columnists and the teenage girls they cavort with, let me take a minute to bow down to my own superfabulous wife, who as you may know started a new full-time job on Monday. She was at the job roughly six hours before she got a promotion into another department; the department had an opening, saw her resume and made a (barely) internal hire. This is a testament both to Krissy’s fabulousness and her new company’s ability to judge talent. I’m pleased because at this rate of ascent, Krissy will be able to support us all on her income alone by about this time next week. Which means I can retire and spend more time on the important things, which are, of course, video games. Yes, I’m aware that this last statement means that if anyone in this relationship is going to be traded away for a new hot and young plaything, it’s going to be me. It’s a risk I’m willing to take.
Header in my Spam box today: “Barnyard animal rapers take it to the extreme!!!” Jesus. Aren’t they there already?
Speaking of taking it to the extreme, Chicago Tribune columnist Bob Greene resigned his position over the weekend because someone blabbed to the Tribune (in an anonymous e-mail, no less) that Ol’ Bob had a sexual encounter with a teenage girl a decade ago (he would have been in his mid-40s at the time). He had met the girl in connection with his newspaper column. Interestingly enough, it’s that last part that seems to be the smoking gun, not that she was a teenage girl and he was a middle-aged guy with what looks like a bad haircut, although all of that looks bad enough. Apparently she was the age of consent, even if she was a teenager (there’s a couple of years where those two overlap). But having sex with someone you meet in connection with a story is a no-no.
That Bob Greene would have sex with a teenager while he was huffin’ and puffin’ away at middle age is not much of a surprise. First off, he’s a guy, and if the average 40+ guy gets a chance to boink an 18-year-old without penalty (or in this case, a penalty delayed by several years), he’s going to take it. Undoubtedly he’ll have a good rationalization (we always do, and Greene, being a writer, probably has a better one than most), but to cut to the chase, he’ll do it because she’s hot and young, and because during middle age the Veil of Male Self-Deception, even at maximum power, can no longer hide the fact that one day the man will die, and that between now and then, the number of truly hot young women he can have without paying for them is small and getting smaller, fast. So that’s reason number one.
Reason number two that it’s not at all surprising is that Bob Greene is, by self-appointment, Boomer America’s Newspaper Columnist. Well, was. Anyway, as a chronicler of the Boomer Nation observing itself, it was only a matter of time. Boomers have never done anything that wasn’t eventually about them; it’s the funky never-ending narcissism thing they’ve got going. No, that doesn’t make the Boomers evil — every generation has its annoying tics (my generation, for example, has a tendency to whine like kicked puppies being shown the boots that will get them in the ribs), and this is the Boomers’. Also, rather unwisely, the Boomers made a fetish of their youth when they were younger — hey, they were young, what did they know — and they’re not handling the inevitable decrepitude well. Narcissism + Getting Older = Irrational Behavior, often involving younger women in ill-advised trysts. As Boomer America’s Newspaper Columnist, how could Greene not do this? He’s just staying true to his demographic.
Reason number three is that Bob Greene telegraphed the idea he’d do (or did, depending on the timeline) something like this a decade ago in his perfectly awful novel All Summer Long. The story involves three life-long high-school chums, who when confronted with the stirrings of middle-age do what all newly-middle-aged men do in mediocre quasi-autobiographical fiction written by newly-middle-aged Boomer men: Take a long vacation away from their families and responsibilities to “find themselves” on America’s byways. This, of course, often involves extracurricular sex with hot babes. In the case of Bob Greene’s obvious stand-in inside the novel (a nationally well-known TV journalist named “Ben”), this means having sex with a graduate student roughly half his age. In real life, Greene diddled with a high school student closer to a third his age, but, speaking as a writer, one always tries to make oneself look better in fiction.
Now, Greene didn’t have to follow through on the whole sex-with-a-much-younger woman thing just because he wrote about it. Mystery writers write about killing people all the time; most of them don’t actually attempt to follow through. But sex with a younger woman won’t kill you (just your career) and anyway let’s revisit points one and two here. It wasn’t inevitable, but when a guy draws himself a roadmap and hands himself the keys to the car, it’s not entirely surprising he ends up in Whoops-I-slept-with-someone-my-daughter’s-age-ville, looking for a motel that rents by the hour.
Be all that as it may, I do have to wonder what the problem is here. Greene’s sleeping with a teenage woman is gross to think about, but they were both of legal age, and even if she was just barely so, “just barely so” counts as legal. So far as I know, Greene applied no coercion other than his not-especially-dazzling celebrity, and as everyone knows, if a great many celebrities didn’t do that (especially the not-especially-dazzling ones, and especially ones, like Greene, who have a face for radio) they wouldn’t get any action at all; they’re just as lumpy and furtive as the rest of us.
Journalistically speaking, having sex with someone in one of your stories isn’t very smart and is definitely suspension-worthy (a nice long “leave of absence” would have been good), but it’s not a crime. From what I can tell, Greene even waited until after he had written about the woman to hit her up. The Tribune is labeling it a “breach of trust” between journalist and subject, but if he did in fact wait until after he had written about her (and did not write about her post-boinkage), where is the breach? What I see is simply middle-age-death-denying sex, which God knows is common enough. Unseemly, sad and more than a little creepy, but there are worse things a journalist can do. Hell, it’s not plagiarism.
There’s probably more here than what we know now; that’s my only guess. It’s worth noting that the Trib didn’t fire Greene; he apparently offered to resign and the resignation was accepted. If I were a corporate suit, I’d’ve taken the resignation too, since it was an easy way to distance my company from Greene’s compromising position.
Also, I think Greene should have been cut as a columnist years ago, not because he’s morally tainted, but because he’s a boring columnist. He stopped being interesting and started being filler long before he did his questionable after-school activities. From a purely utilitarian point of view, there’s no downside to Greene hightailing it out of town, excepting that there will be the painfully rationalized mea culpa six months down the road as part of Greene’s inevitable comeback (America loves a reformed sinner).
But based on what we know now, this isn’t the way Greene should go out. If he needed to be yanked, he should have been yanked on the merits of his writing (or lack thereof), not because of sex he had a decade ago with a legal adult who apparently gave her consent after she was no longer his journalistic subject. Greene is getting popped on a dubious technicality, and though I would have never imagined I’d say something like this, I think he probably deserves better. Getting canned for being a boring columnist would probably have been harder on the ego, but at least it would have been a reasonable excuse for getting escorted from the building. I won’t much miss Greene’s columns, but even I wish he could have had a better final act.
People have asked me when I thought that blogging would finally “pay off” — that is, that it will finally become a viable way for a writer to make money. This question comes coincidentally close to the debut of the Blogging Network, a sort of “United Artists”-model concern in which a number of bloggers have offered up content on a “premium” model. From what I understand, the reader pays $3 a month for access to every blog on the Blogging Network. Half of the money collected goes to support the network itself, and half is distributed to writers, the percentage based on their popularity. Probably the highest-profile blogger to put material up behind the subscription firewall is Bill Quick, who is also a prolific science-fiction writer, and an Internet acquaintance of mine for several years. Former San Jose Mercury News writer and columnist Joanne Jacobs is also on hand and is putting up adaptations of her book in progress. So overall, it’s a good time to be thinking about blogging and money.
Let me start with the Blogging Network economic model first before I get to the general concept of bloggers making money. Simply put, I’d be very surprised if the Blogging Network worked to any financially useful extent for the bloggers involved.
Content subscriptions are a risky model online. Bill Quick holds up Salon’s 40,000 premium subscribers as proof that people will pay for good content online. However, it’s worth remembering that those subscribers comprise between 1% and 2% of Salon’s total readership — meaning that 98% or more of Salon’s readership didn’t want to pony up the cash. Considering a “successful” blog is one with a few thousand readers (not the couple million Salon has), a similar paid conversion rate would come out to 50 subscribers or so.
This seems consistent with the number of subscribers the network appears to have so far. Blogging Network posts its “annual run rate” — a public announcement of how much they grossed so far, which at the moment is $1255.80. This is their first month, so running the math, that’s 420 subscribers (one of them, in the interest of disclosure, being me). There are 16 bloggers currently participating, so presuming I’ve run the numbers correctly, if each of them brought an equal number of their readers to the Blogging Network, each of them has managed to convince 26 or 27 readers to convert to the paid model. It’s still early, so they’ll probably grab a few more. However, the payment for the site is month-to-month, so after the first month, it’ll be a matter of keeping the old subscribers as well as gaining new ones.
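For what it’s worth, the back-of-envelope math in the last two paragraphs can be run explicitly. This is just a sketch with a couple of assumptions flagged in the comments: Salon’s “couple million” readers is pinned at 2.5 million purely for illustration, and the monthly fee is derived from the Network’s $35.88 annual price (the “$3 a month” figure is that fee rounded).

```python
# Back-of-envelope math for the Salon and Blogging Network numbers above.
# Assumptions: Salon's "couple million" readers pinned at 2.5 million for
# illustration; the monthly fee derived from the Network's $35.88/year
# price (the "$3 a month" in the text is that fee rounded up).

salon_premium_subs = 40_000
salon_readership = 2_500_000                  # illustrative figure
conversion = salon_premium_subs / salon_readership
print(f"Salon conversion rate: {conversion:.1%}")

blog_readers = 3_000                          # a "successful" blog
print(f"Paying readers at that rate: ~{blog_readers * conversion:.0f}")

monthly_fee = 35.88 / 12                      # $2.99/month
subscribers = 1255.80 / monthly_fee           # first-month gross / fee
per_blogger = subscribers / 16                # 16 participating bloggers
print(f"Subscribers: ~{subscribers:.0f}, or ~{per_blogger:.0f} per blogger")
```

Run as-is, this lands on the post’s ballparks: a conversion rate of roughly 1.6%, about 50 paying readers for a blog of a few thousand, and 420 subscribers split 16 ways, or 26–27 readers apiece.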
This is where things will begin to get tricky. Existing bloggers can only convert readers they already have; once they’re behind the subscription firewall, the only people who will see their new material are those who have already signed up. Potential readers will have no access to material to see if they like it enough to sign up — unlike a paper magazine, you can’t just thumb through a subscription site (at the very least, you can’t thumb through the Blogging Network site, so far as I can see). Readers probably won’t sign up for things they can’t read. One solution to this is for the blogger to continue his or her free site as a loss leader to convince people to sign up for the premium material — again, a trick from the Salon playbook. The problem with this is, look where it’s gotten Salon: a 1 to 2 percent conversion rate.
Blogs are by and large a solitary pursuit, so the blogger has increased his or her workload considerably: He or she has to create a free blog which is of sufficiently high frequency and quality to convince readers to convert to the premium material, and then a premium blog of similar frequency and higher content quality to justify a continual $3/month purchase. I would imagine a counter-argument to this might be that the $3/month buys access to several blogs, not just one, so it’s not accurate to put all the weight on a particular premium blog.
But as a practical matter I don’t see how you can avoid it. If you advertise your product as premium (which you do implicitly by charging for what has been essentially a free resource up to that point), all the content has to meet that higher standard. If a reader perceives that one or two of these “premium” blogs are good stuff but the rest are slack, they’re reasonably likely to believe they’re not getting their $3 worth. There’s also the matter that as far as I understand it, the Blogging Network rewards high page views, which — presumably — would be generated by high quality content. Coming or going, the blogger has cut out more work for himself or herself.
There’s also the problem with the format of blogs, in terms of justifying their status as paid content. Most blogs are essentially agglomerations of links with short, functional commentary added; one reads the commentary, but it’s usually dependent on the link for context; if you don’t follow the link, you’re missing half (or more) of the story. In this way, blogs represent a new kind of content: Conduit Content, in which the primary idea of the content is to lead you somewhere else. This is opposed to Destination Content, the much more traditional brand of content, in which the primary idea of the content is to keep you engaged with the material at hand (This site, incidentally, deals primarily in destination content, which is one of the reasons I’m deeply ambivalent about it being called a blog, or me a blogger).
Conduit content is a truly fascinating concept and probably worth study academically, but it doesn’t make a compelling case for being paid for. It’s fundamentally about the link, not the writing surrounding it, and any idiot can make a link. The very best examples of conduit content, as writing, are not terribly far removed, in terms of utility, from mediocre examples. Many of the best-regarded bloggers (Glenn Reynolds being one, and Bill Quick himself being another) will frequently simply air a link with minimal commentary at best, making that link indistinguishable (as writing) from that of the dittohead blogger who feeds off better-written sites for links in the first place. Indeed, blog indexing sites like Blogdex and Daypop are frequently more compelling as functional blogs than the blogs they track and chart — not at all unlike how the S&P 500 Index outperforms 90% of living, breathing fund managers. In short, if an automated indexing tool can create a blog that is functionally competitive to a human-created blog, why would one want to pay for a human-created blog?
One way to answer this would be to make the premium blogs destination content, which computers can’t yet create (at least, not very well) — Joanne Jacobs could be thought of as an example of this, since she’s offering up adaptations of her book in progress in her Blogging Network blog, and I think it’s a fine idea. But here’s the catch on that: How many bloggers — even the good ones — are actually good writers? And of those, how many are so good that you’d actually want to pay for their work? For the former of these categories, the answer is few, and for the latter the answer is even fewer.
It’s not at all a coincidence that many of the most popular bloggers write professionally; despite the egalitarian nature of the Blogoverse, very good writing needs to be developed over time and usually in the presence of an editor or two (writers hate that fact, but there it is). There are indeed “non-pro” bloggers who can and do write compelling and consistent (important when you’re thinking of charging) destination content. I’m fond of citing Steven Den Beste, who was an engineer by trade, as an example of this (although he shows no interest in blogging for dollars). But they are thin on the ground relative to the number of bloggers who can’t.
This is true even on the Blogging Network, which outside of Bill Quick and Joanne Jacobs appears to be populated by writers whose level of writing quality is no better than the large majority of blogs and online diaries that are available for free. In one sense, this is fine news for Bill and Joanne, since I suspect the lion’s share of net proceeds of the Blogging Network will be going to them. But it’s not so fine in the sense that the average quality of material on the Network doesn’t justify the cost. It certainly doesn’t justify $35.88 a year, which is five dollars more than what Salon charges on an annual basis, and more than what I pay for my combined subscriptions to Esquire, Rolling Stone and Mother Jones (which I got as a premium for my Salon subscription). And I don’t think the potential financial return will at all justify the amount of effort Quick, Jacobs and others will have to put in to make the Blogging Network function at even a modest level.
In a larger sense I’m not optimistic that blogging will ever be a profitable endeavor in itself. I think it’s instructive that the vast majority of professional writers who blog are realistic enough to understand that the value of a blog lies in its promotional value. My high school classmate Josh Marshall is a fine example of this: He’s parlayed Talking Points Memo into a reputable advertising platform for himself — his work there is frequently referenced in the Washington Post and the New York Times — which he can then use to logroll into a higher profile in the world of political writing. James Lileks has wrung two books out of his Web site and his sizable online audience no doubt appeals to his publisher as a potential book audience.
As a professional writer, I’ve certainly used this site to build professional relationships: A fair chunk of my income comes from people who found out about me through material on this site. But it’s not at all worth the effort to make the site a profit center in itself, if for no other reason than the cost of lost opportunities is substantially higher than the potential income the site could produce. Andrew Sullivan is reputed to have made somewhere in the area of $30,000 off his Web site last year, which is boggling for a personal site offering nothing but writing and links to an Amazon store, so good on him. But relative to the amount a writer of his reputation could have made otherwise, 30 grand is less. Sullivan has his reasons, I’m sure, for choosing the less remunerative route, and bless him for it. But most writers prefer to take the path of least financial resistance.
I also don’t think it’s a bad thing if blogging, in itself, never becomes a profitable enterprise. The act of blogging (or of writing online in other forms) offers some tremendous advantages for writers and readers that are fundamentally based in its “we’re doing it because we love to” nature. Amateur — in the classical sense of someone who does something out of love — is not at all a dirty word, and financial success is almost certainly the wrong metric by which to judge the success of blogging as a medium. Blogging may never be profitable, but it is already useful, to some extent it is influential (not as much as bloggers like to think, but more than mainstream media gives it credit for), and in any event it is usually pretty fun. It’s worth doing, even if from a dollars-and-cents point of view, it’s not worth much.
Someone wrote in not long ago to ask me why I haven’t written about Athena recently. The short answer is that what with the books and all, I haven’t actually been writing about much of anything here, much less Athena. But the other reason is simply that I haven’t much felt like it. As most of you know, while I’m usually pretty personable in this space, I don’t really get all that personal — I try to avoid talking about my neuroses on a constant basis, for example, and as far as any of you know, my wife and I have never had a cross word or misunderstanding. I prefer to keep it that way. I know many of you feel you know me (and in some cases that feeling is actually true), but some things are my own, and not yours, and I have no problem keeping them that way. This isn’t a confession booth or a therapy couch, at least not directly. Not every thing needs to be said in public.
In the case of Athena, as she grows older I grow more cognizant that her life is not merely an extension of my own, or just fodder for the space here or with some other writing assignment. Don’t get me wrong, I will still blather on about her and about being a dad, and so on and whatnot (especially if there’s money involved! Mmmmm…sweet, sweet money). But on the other hand I’m not in a rush to chronicle every last adorable moment or pride-bursting achievement. Others do that, in traditional media and online, and more power to them. I don’t intend to do it as often as they. I heartily intend to bore my audience in other ways.
And yet (and of course), I love talking about her, and writing about her. As I’m sure I’ve mentioned before, one of the great surprises about fatherhood has been how consistently fascinating having a child has been. Before having Athena, I had expected that a kid of mine wouldn’t really become interesting until it could actually speak; therefore the first two years of your child’s life were something of a waiting game, counting down the time until you could actually engage your spawn in conversation. Like most of my assumptions involving parenthood, this one was spectacularly wrong; Athena was interesting from the get-go, and she gets more interesting as she goes along.
Every parent thinks that, obviously (or should think it, in any event), and the fact that we do makes me wonder where along the way we forget that kids are capable of surprising leaps of, if not intelligence, at least intuition and imagination. It’s probably because most of everything before five is kind of a haze. I once interviewed Orson Scott Card, a novelist who has written several novels with incredibly precocious children, and I asked him if he had ever met a child who was as self-possessed as the kids in his books. His response was that children had the same subtlety of thought as adults, they just lacked context and experience. The children he wrote about were exceptional, but in some sense he was simply translating the inner life of children in a way that adults could understand.
In my opinion Card’s a little overgenerous in the general sense (he never really did write about anyone but truly exceptional children, the sort that write extended political essays or fight multi-tiered battles with aliens, rather than the kind that like Fruit Roll-Ups and Blue’s Clues), but he’s correct in the basic premise that children can be sophisticated thinkers rather more often than adults give them credit for being so. Athena has yet to best either her mother or me in a game of logical reasoning, but that’s mostly because we have the better part of three decades on her. Like a raptor poking at the fences in Jurassic Park, she’s constantly testing for weaknesses and slip-ups, and it’s actually quite enjoyable watching her try to get one past us. It’s only a matter of time before she does.
Mind you, when she doesn’t, she’s still not above having a tantrum to try to get her way, so she’s still very much the three-year-old. These tantrums typically don’t work. But hope springs eternal. In the meantime, and as you can see from the picture, she’s strong-willed, smart, and sporting a ‘tude, and no, I have no idea from where she might get that. I don’t expect she’s all that different from other children her age, although I wouldn’t mind terribly if she were. I wouldn’t mind her being an exceptional ‘tude-sporter.
In any event, Athena will continue to make her appearances here, and probably on a not-infrequent basis. But I hope you don’t mind if I keep some (many) things to myself, between me, her mother and her own little person. Eventually, she’ll be old enough to tell you more about herself and her point of view, if she wants to. If she does, I think it’ll be worth the wait.
There’s nothing that makes me want to gag more than conservative white men in power who complain people are bigoted against their kind. That’s what Florida governor Jeb Bush is doing, talking specifically about Jerry Regier, a man Bush picked to run Florida’s Department of Children & Families — you know, the one that keeps losing track of children until they end up dead on the side of the road, at which point it’s a race to see who can find their little bodies first: Concerned truckers steaming north on the 95, or the alligators.
Regier is a fundamentalist Christian. He also signed his name to a 1989 article which condoned corporal punishment to the point of causing actual physical damage to the child. He’s since denied writing the article; he says he was just the co-chairman of the committee that oversaw the article, and had no control over the content, so you can see how useful it was for him to be co-chairman of that particular committee. However, it’s largely immaterial, since he did admittedly write an article a year earlier for a conservative Christian publication in which he affirmed whacking on kids, based on Biblical justification, and plumped for the idea of men’s dominance over their wives and the desirability of keeping the womenfolk at home. So, basically, the guy Bush has running his child welfare agency is on the record giving a thumbs up to beating children and keeping women in the thrall of men.
Naturally, there’s been something of an uproar over Regier’s appointment. Yesterday, Bush, who already has enough problems, defended Regier by crying bigotry, saying that there’s a “soft bigotry that is emerging against people of faith.” Of Regier himself, Bush said, “It really doesn’t matter if Jerry has a deep and abiding faith and it certainly doesn’t disqualify him for public service. I think there’s bigotry here and it troubles me.”
Well. There is indeed bigotry going on here. But it’s not that people are bigoted against fundamentalist Christians; they’re bigoted against people who advocate child beatings and spousal subjugation running a government department that’s supposed to prevent child beatings and spousal subjugation. Regier’s “deep and abiding faith” is entirely immaterial. If Regier were a fundamentalist Christian who hadn’t put his name on an article that suggested it was okey-dokey to discipline children to the point of causing internal bleeding, people wouldn’t care who or what he worshiped (actually, that’s not at all true — if he worshiped, say, Chango, the Yoruba spirit that’s so popular with the Santeria folks, people would be in an uproar even if he never once signed off on switching a child until it welted. Funny thing, that).
However, he did do that, and now people are justifiably concerned. Again, it’s not about faith; it’s about Regier’s own personal views. Regier muddies the water by claiming his opinions are based on scripture, so it’s mildly ironic that Regier said yesterday that “for somebody to use their religious beliefs as a cover for abusing children is wrong.” I’m glad he said it; if I were a Floridian, however, I’d want him to specifically spell out where he believed the line between righteous discipline and abuse might actually be.
In any event, Bush does faith a disservice by suggesting that people are bigoted against it. The vast majority of Americans have faith; suggest to them that they’re bigoted against it, and they’ll probably tell you to go to Hell, a suggestion that would refute your assertion on a number of levels. It’s a cheap and cynical misdirection to mask the real issue, which is that people are worried that based on his own expressed opinions, Regier is spectacularly the wrong person to run the department he’s supposed to run.
Strangely, even as Bush was running down those he thought were bigoted against faith, he did a not-so-subtle discounting of the very sort of faith he was trying to prop up. When a reporter followed up on Regier quoting the scripture that read “Smite him with the rod,” in the context of child discipline (this being the article Regier actually wrote), Bush dismissed the query. “Without getting into biblical references, do you think that saying ‘an eye for an eye, a tooth for a tooth’ actually means that someone ought to poke your eye out?” Bush said.
Well, actually, if you’re like many fundamentalist Christians, the answer to that is yes — many fundamentalist Christians take the Word at its word, which is why they spend so much time and energy trying to convince the rest of us that God whipped up the entire Universe in six days, evolution is a crock, and that we’re all the relatives of two humans who lived in a nice garden until it was discovered that they just couldn’t follow directions at all. To suggest to these folks that the Bible doesn’t really mean what it says is probably a little bit offensive, or, at least, it should be.
In any event, it’s well worth it to know if Regier believes that he’s got the God-given right to smite a child with a rod — not because he might believe he’s got the God-given right to do such a thing, but because he might believe he actually can beat a child with a large stick. The first of these is his own business, but the second of these is everyone else’s.
My wife is in summer school (she is 16, you see — no, not really), and among the classes she’s taking this quarter is an introduction to ethics course, one of those courses where the great moral issues of the day are plopped on the table and everybody goes back and forth on the issue but nothing really gets resolved; not unlike the UN, but somewhat less expensive to participate in. The textbook for the class is called Taking Sides, and it features about 20 contentious issues, like “Should Abortions Be Legal?” or “Should Great Apes Be Given Human Rights?” with one essay on the topic arguing for the question, and another, naturally, arguing against. Nowhere present is the third essay, in which the first two essayists are labeled pedantic twits, followed by the suggestion that everyone reading the book should simply go out for cheeseburgers and a round of pool. It’s a real shame it’s not there.
Unsurprisingly, most of the topics that are under consideration in Taking Sides are topics that I already have fairly strong opinions about; perhaps also not surprisingly, it seems that most of the time my opinion is not at all like the opinions of the book’s appointed pro and con representatives. This is because in most cases of ethical and moral conundrums, the arguments of those totally for or totally against an issue exist in a rhetorical fantasyland that has no real relationship with the world human beings actually live in. Ethics isn’t mathematics; one can’t take certain things as given in order to create an elegant and coherent system. Human beings are messy things, after all. Ethics and morality are and always shall be a messy business.
Nowhere is this more evident than in the first question that Taking Sides posits: “Is Morality Relative to Culture?” The cultural conservative position on this, of course, would be no — there are certain aspects of morality that are independent of one’s culture (and, conveniently, those aspects of morality tend to be those moral aspects which a cultural conservative finds convenient). Cultural liberals, of course, tend to take exception to both the theory and practice of an absolute morality, since those “absolutes” tend to get in the way of whatever activity it is that they’re enjoying and that the conservatives are worried that they are enjoying too much. The only problem with this position is that taken to its extreme it means that you have no right to complain when someone speaks glowingly of the morality of, say, female genital mutilation in the Sudan. Sure, it’s immoral here, but in the Sudan, they’ve been doing it forever. It’s perfectly moral there.
Both positions are fundamentally pretty stupid. The conservative position of an absolute morality has always struck me as weak, because the construction of an absolute morality (which almost always conforms to their morality of choice) is a tacit admission that they can’t sell their lifestyle without divine intervention. All assertions for an absolute morality that I know of eventually lead back to a God of some sort, the existence of which is fundamentally unprovable. There may be someone out there who is arguing that there’s a Chomsky-like “deep structure” for morality, which would be independent of an end-point celestial lawgiver, but if there is, I haven’t heard of him or her, and I can’t really imagine any cultural conservative wanting to use Chomsky-ian tools to make a point; it’s just not in them to be agnostic about the provenance of their argument.
On the flip side, it’s difficult, intellectually, to support a position on morality whose final reductive argument leaves room for the aforementioned genital mutilation or shoving little girls back into a burning building to die because their heads aren’t properly covered, as so recently happened in Saudi Arabia. Neither argument satisfies because neither argument has anything to do with the real world.
Here’s an argument that I think works: Yes, morals are relative to culture and independent of any larger, overarching system of morality that all of humanity shares. But if one believes that morals are relative to cultures, it does not therefore follow that one must believe that all cultures are created equal, or that the moralities therein are equivalent. This is an argument that allows you to say: “Your morals are rooted in your culture — but your culture truly sucks.”
I don’t have any problems with this formulation at all. For one thing, America’s culture owes most of its distinct and durable character to a marvelous act of intellectual manufacture on the part of the founding fathers. They created a political culture almost entirely out of whole cloth, and by doing so helped to create the social and moral culture that supported the aims of the political culture. Neither of these existed anywhere on the planet prior to the founding of the United States, and even attempts within the United States to fight them (the Civil War comes to mind) ended up ultimately strengthening them (mind you, there are still some kinks to work out). There are certainly numerous cultural threads to the social life of the US, but the most important one — the one that ensures personal liberty — was a whole new thing.
Moreover, this created culture and morality is a better one (by and large) than others. Part of this can be seen pragmatically: The US is the most powerful country in the history of the world because the culture and morality of personal liberty has allowed for the creation of a rich, healthy, hard working and (reasonably) intelligent populace. But it’s also evident simply in what it allows, which is just about everything, once you’re an adult. An open and free society can include, as a subset, any damn fool thing you want to believe in — even a morally restrictive lifestyle (I mean, I live near the Amish). The only real restriction on this is that you can’t drag other people down with you if they don’t want to go, but if you can live with that, have at it.
Cultural conservatives believe that having morality dependent on culture ultimately leads to anarchy, but I don’t see that as being the case. Most people are smart enough to see that their freedom to do whatever they want stops when whatever they want unwillingly involves someone else (more accurately, people realize that someone else’s freedom to do what they want stops when it involuntarily involves them). People don’t want anarchy; it cramps their ability to do what they choose to do. Thus we have a society that, with a few reactionary spasms now and then, largely lets us live as we want to.
It’s hard to beat that, and I’ll pit it against any other culture, and any other morality, any day of the week.
Based on the immense (and frankly, somewhat inexplicable) popularity of the last Whatever, I now present All The Things I Didn’t Know I Didn’t Know About Mowing My Lawn, an excerpt of my upcoming (and no doubt soon-to-be-spectacularly-successful) yard care book, Everything I Ever Knew About Mowing I Learned in Just the Last Two Weeks. Any resemblance between what you read here and heartwarming lessons about life and love is purely coincidental. Unless it helps me turn this pathetic idea into another Chicken Soup For the Soul-like juggernaut. In which case, I meant to do that.
1. You Must Mow Counter-Clockwise. The reason for this is that the blades of death attached to the underside of the lawn tractor take the mulched, decapitated grass stalks and fling them out from the right side of the mower. If you mow counter-clockwise, you get an evenly-distributed dusting of mulch that feeds and fertilizes the lawn much in the same way that beef fats and by-products are used in cow feed to plump up your incipient hamburger (or were, until Mad Cow Disease. Stupid Mad Cow Disease). But if you mow clockwise, you blow the mulch into a continually smaller and higher pile of ever more finely chopped grass particles, until what you’re left with is an unstable ziggurat of grass motes which will collapse upon you at the slightest provocation, saturating you in mower leavings and making you look like the Swamp Thing’s wimpy, suburbanized cousin, Lawn Thing (“Lawnie,” as he is known, derisively, to his kin). You will never get the grass stains out.
2. You Must Not Sweat the Baseball Diamond Pattern. Look: If the Yankees are paying you 75 grand a year to mow a diamond pattern into the Field That Ruth Spat Tobacco Juice Upon (as I believe it is formally called), then by all means make a diamond pattern with your lawn mower. If they’re not, you might as well try to get through your mowing as quickly as possible because you’re just going to have to mow again next week (If the Houston Astros are paying you to make a diamond pattern, go the extra mile and make the diamond look like the Enron “E.” I’m sure they’ll get a big kick out of that one). Any temptation to mow any sort of design into your lawn other than the most utilitarian round-and-round spiral is probably a good sign that you need either to get away from your lawn more often, or you need to be whacked in the head with a sturdy board. It’s your choice.
3. Try Not to Think of the Lady Bugs. Over the course of mowing, you will undoubtedly mulch dozens of these friendly, colorful, useful beetles; you’ll see them clutching the ends of grass stalks, their red, speckled carapaces winking like a third grader’s craft beads just before you run them over and either crush them with your tractor wheels or fling them into the abattoir of whirling blades slung to your tractor’s undercarriage to be diced into confetti. Try not to feel guilty about their tiny little deaths, even though you have the sneaking suspicion that killing lady bugs is the only thing that actually enrages Jesus, and that each lady bug you whack gets you a century in purgatory, where demons force Bowflex commercials upon you until your sins are completely scraped away. Try not to think about the lady bugs at all.
4. Your Lawn Will Try to Shame You. Your front tractor wheels bend down grass stalks, which keeps them from being fully mowed, so when you look back, you’ll see little wheel-width rows of slightly taller grass, mocking you to the other grass stalks. Remember your place on the evolutionary ladder: go back and teach those leaves of grass a lesson. Mock you, will they. Let’s see them mock finely-edged blades of metal whirling at thousands of revolutions per minute! Yeah, who’s mocking who now? Huh? Huh? Huh?
5. No Matter How Much It Seems to Be So at the Time, Those Birds Really Are Not Trying To Attack You And Peck Out Your Eyeballs. They’re just after the bugs that are busily fleeing your mower. Honestly, that’s all it is. Oh, fine. Wear protective goggles, you baby.
6. When You Are On Your Lawn Tractor, You Must Wave to Anyone Going By On the Road. And if you live in rural America, as I do, you must especially wave at the farmers cruising by on real tractors; you know, the ones that make your lawn tractor look like a frisky Maltese next to a Great Dane. The farmers really get a kick out of you waving to them; they sort of chuckle and think to themselves I bet that idiot thinks he looks real sharp on that toy as they wave back. Given the sorry state of the American family farm (evidenced by the fact that Congress and the President just sent $190 billion of our tax dollars to prop them up), I feel it’s my duty as a patriotic American to give the local farmers at least one thing to feel smug about.
7. You Will Eat a Bug. Probably more than one. The sooner you accept it, the sooner you can get past it. Just as long as it’s not a lady bug. Jesus is mad enough at you already.