To send us off into the Easter weekend, please see this fine stack of new books and ARCs that have arrived at the Scalzi Compound. Do you see anything you’d like to have the Easter Bunny deliver to you? Let us know in the comments!
I’ve had my new Nikon D750 for a couple of days now, which is enough time to offer up my first impressions of it for those of you who have an interest in such things.
Not entirely surprisingly, I like it a lot. It’s a definite improvement over my previous DSLR, the Nikon 5100, which, to be clear, is a perfectly capable camera, which is why I gifted it to Athena. But the D750 offers a larger and more sensitive sensor and also 50% more resolution, among other improvements including a wider effective ISO range and a faster shutter. What I notice mostly after a few days with the camera is that it’s more responsive in low light than the 5100, which is great because I hate using flash, and that the sensor picks up more data, so it’s easier to tease out a useful picture. As an example, see the pictures above; the first one was the picture as it came out of the camera; the second was what I was able to pull out of it using Photoshop on the RAW format file. That’s not bad!
I noted that I decided against the “kit” lens for the D750 and instead bought a 50mm f/1.8 prime lens, as well as a 28-300mm f/3.5 – 5.6 zoom lens with vibration control. Of the two, the zoom lens is the easier one for me to use so far because it’s operable like the kit lens on the 5100 and also offers a lot of flexibility, in terms of being able to zoom in on a subject, and the vibration control means somewhat fewer fuzzy pictures. The prime lens is a little bit trickier for me, at least right now — you pretty much have to take it on its own terms. The photos I’m getting from it are great, but I’m also having to take a lot more pictures to get to that one great one. This is not a complaint, just a recognition there’s a learning curve going on with that lens.
And with the camera generally, I have to say. The D750 is more camera than I necessarily know what to do with yet, which is, mind you, one of the reasons I bought it; I want to be able to explore its capabilities. That said, most of the time what I end up doing with DSLRs is setting them to take pictures in RAW format and then use Photoshop and other software to do what other photographers do in camera, through settings. I don’t think this is a problem — the camera doesn’t care, and I’m not worried about impressing other photographers — but I am constantly reminded that the camera offers more than I use, and it’s up to me to follow up on that.
All told, however, I’m very pleased with my purchase. I’ll probably take it along with me to the next couple of conventions I’m at, so if I see you there, come over and take a look.
Andy Farke asks:
For decades now, various think-pieces have commented on a divide between sciences and humanities. “The Two Cultures” by C.P. Snow is an early version of this, but it is manifested today in discussions about STEM and STEAM, the value of liberal arts, and discussions on the purpose of a college education. To some extent, science fiction writers inhabit multiple worlds. Do you think that a science/humanities divide is real, and if it is, how could it be bridged? Or is it necessarily something that needs to be bridged? I’ve often seen the issue framed as how scientists can learn something from those in the humanities, and would be interested in your thoughts on that; but what about the reverse situation?
I don’t think it would come as a surprise to anyone that I am a proponent of the classical idea of a liberal education, in which the aim of the education is to create independently acting and thinking human beings who are well educated on a broad number of subjects, so these people are able to meaningfully contribute to the development, maintenance and governance of their nation and world.
To that end, a broad education for everyone includes not only science but humanities; not just humanities but science. A scientist should through her education be able to understand and appreciate a sonnet or a symphony or a painting; a writer should be able through education to understand paleontology or astronomy or physics.
Indeed, I think what probably needs to be chucked entirely is the idea that science and humanities are on either side of the fence from each other. Music is based in science, as an example; acoustics, mathematics, psychology and biology all play a role in how music is made and appreciated. The musician who knows about these topics (among others) has potentially better control of their instrument — not just the one they play, but the one in their head. Blocking yourself off from any knowledge because you perceive it as being on “the other side of the fence” is intentionally hobbling yourself and your creativity. The more complete your knowledge, and the better your education prepares you to synthesize knowledge from disparate sources, the more you can do with the talents you have.
Here’s a fun fact about me: when I was a kid, I wanted to be a scientist. In point of fact, I wanted to be an astronomer. That was what I wanted to be until I got into junior high, and then all of a sudden math became something that was really difficult for me. Fortunately around that time I started to realize that writing was something I was good at, and so my ambitions shifted. But when I decided to write instead of going into science, I didn’t throw away my love for astronomy or other branches of science. It was still great stuff even if I bumped up against the limits of my mathematical talent. My education in science continued, commensurate to my level of understanding.
Decades later that education keeps paying off, and not just in the sense of my job. Although it has in that, certainly — I’m a science fiction writer who uses the contemporary understanding of science as the springboard for fictional speculation about the universe. I’ve also written about science as well, in newspaper and magazine articles over the years as well as in books, including my own astronomy book, which was widely praised and which went through two editions. Science has been good for my mortgage.
But as I noted above, I don’t think you get an education for your job (or just for a job). That science education also keeps paying off in how I’m able to understand what’s going on in the world in general, which matters for how I think and how I live and (importantly, especially these days) how I vote. Knowing science in addition to all the things I know about the humanities makes me a more engaged citizen and human.
Of course, not everyone is interested in science, or literature, or math, or whatever, and that’s totally fair. But here’s a thing I believe: 80% of every possible subject is understandable to the average human being — it’s that last 20% where you start getting into specialized knowledge which requires real commitment to the subject. I’m not worried about that last 20%; I’m more interested in that 80% most people can follow. If we can manage educating people on that part, we’re good.
This thinking is also, not entirely surprisingly, why I decided to go to the University of Chicago, which had (and has) a “core curriculum” which every student has to take. The core curriculum of the school has expanded a bit since I was there — it is not so insistent on the dead white guys, as I understand it — but what remains is the important part of a liberal education: The idea of a broad-based, wide-ranging education across disciplines, designed to impart knowledge but also designed to teach one how to learn, and how to cross-pollinate concepts and ideas across many different fields.
Mind you, there are a lot of people who don’t feel this sort of education is important, i.e., “shut up I’m just here to learn how to code/write/diagnose/whatever” and ask why, especially on the college level, they should spend tens of thousands of dollars to learn things that they don’t find important. Well, again, it depends on what you think education is for. If it’s just for a job, fair enough. But if it’s for more than a job — and I think it is — then you’re doing yourself a disservice not getting a wide education, and you’re doing the rest of us a disservice while you’re at it. These days in particular we have enough of people who only know a little and don’t care to know anything else.
So, yes: The division between science and humanities is more artificial than not, and everyone could stand having a wide-ranging education on a number of subjects, no matter what they currently do or what they want to be when they grow up. What I want everyone to be when they grow up is well-educated and able to reason adequately. It’s not guaranteed that this will make for a better world, but again, going in the other direction doesn’t seem to be doing us any favors these days.
Patricia Ruggles asks:
You’ve confessed before to being at least somewhat egotistical. Do you think it’s possible to be a successful writer if you don’t have a pretty big ego? Writing is notoriously solitary, and requires long periods of continuous performance without a lot of positive reinforcement. Doesn’t it take a pretty good opinion of yourself to stay convinced that somebody will want to read your stuff when you are finally finished with it?
Well, I’ve admitted to having an ego, yes, and can be seen as being egotistical. I think there’s a difference between those two things.
Also, no, I don’t know that you have to have a big ego in order to be a successful writer.
Part of that is that it depends what you mean when you say “successful writer.” What is the definition of success? Material wealth? Excellent writing? Reputation that exceeds one’s own mortality? The thing is, none of these in itself requires a large ego, or outsized egotism. Particularly in regard to the latter two, I have in mind Emily Dickinson, who was certainly an excellent writer and whose reputation in death is far greater than it was in her life, in no small part because her first published collection was in 1890, four years after her death. During her life, she was eccentric and secluded — not generally the hallmarks of someone with what’s commonly understood to be a pretty big ego.
Is Dickinson a successful writer? I think absolutely: I strongly suspect her work will be remembered long after mine is forgotten. Did she have a big ego? If I had to guess, I would say no, at least in terms of how I think “big ego” is being referred to here.
Ego can be part of the reason people write. It is for me: I rarely write just for myself, since I already know what I’m thinking and I’m too lazy to write down my own thoughts just for me. I write so I can be read by other people; I like that other people like what I write. But there are also people who only write for themselves, who never have the desire to show others their work — at least, not until well after they are dead. Another historical example: Samuel Pepys, widely considered the English language’s greatest diarist, whose diaries, while bound by the author for preservation, were not published until 150 years after his death. Pepys is another successful writer by any account, but save for binding the loose pages of his diary into volumes, where is the evidence of a big ego? I don’t know that Pepys ever dreamed his diary (which among other more significant things includes ample evidence of his various adulteries) would ever see wide circulation.
The thing is, people write for a lot of different reasons, not all of them tied to ego. Some people write for other people to read them. Some people write because they want to express themselves. Some people write for just a few people, and to please them, not to please themselves. Some people write simply because they are good at it and people are willing to pay them for it. Sometimes it’s a combination of factors. I write for others, but I also write because people pay me for it, and occasionally I’ll write something just for one or two people, meant for them alone. And at the end of the day, I write what I would want to read — which is to say that although I write for others I also write for myself. I like reading my own books!
(Also, and independently, as far as writing being solitary: It can be but doesn’t have to be. I know writers who band together and take writing retreats with each other — they spend their work days in front of their words but they take breaks to chat with each other about how things are going. Others write in coffeeshops so they can be around people when they work. Writers frequently show works in progress to friends or confidantes — I myself will give my wife chapters to read when I’m done writing them. And writers also splash themselves all over social media and blogs as a way to visit with fans and other writers.)
I think having an ego can be helpful for a certain type of writer, and I’m willing to say that I’m one of those writers. I like writing to an audience and I like interacting with an audience beyond the confines of my books (hello!). My ego helps when it comes time to market books too; I’m usually happy to be interviewed and go do public events and interact with fans and readers. That willingness to be open and accessible, which is in part fueled by my own ego-driven desire for attention and approval, has been beneficial. And finally my ego means that I feel less of the self-doubt and “impostor syndrome” that other writers have been known to have. My attitude about writing as my gig is, yeah, I got this. Which is why I could sign that stupid long book contract I did and not freak out about owing a dozen books over a decade.
But that’s me, and how I do writing is not the only way, or the best way, or even a good way for any other writer who is not me. In my own personal experience I know many writers who I would not characterize as being particularly egotistical or ego-driven; they just like to write and write well enough to sell. Some of them are plagued with self-doubt and the belief that no one really wants to read what they write, and sometimes their work basically has to be dragged out of their hands by an exasperated editor or agent. At least a couple of these authors sell at least as well as I do, as far as I can tell. They just do their thing differently than I do. Which is great! There is no one true path to being a successful writer, in no small part because, again, there’s no one definition of what “successful” means, when it comes to being a writer.
So, a pretty big ego can be useful when it comes to being a writer. But I think you can be a writer — a good writer, a successful writer — without one. All you really have to do to be a writer is write.
There are a couple of people in the thread who asked this, so I’ll just use Thomas Hewlett’s question to represent them:
You’ve mentioned several times that you don’t drink alcohol. I do a lot of work with addiction/recovery and I’m wondering about your relationship to alcohol and drugs and what led to your decision to not drink. Or is this simply a case of “that stuff doesn’t taste good”?
It’s true: I don’t drink alcohol except in very rare circumstances (like, half a glass of champagne at my wedding), I’ve never smoked cigarettes, I’ve never taken an illegal drug, and outside of Novocaine at the dentist’s office, I’m generally reluctant to take legal drugs either; my wife always expresses surprise if I go to the medicine cabinet for ibuprofen, for example. So what’s the story there?
Well, to begin, and initially the reason I avoided the stuff, my family has really bad addiction issues. I’m a child of alcoholics and drug users, and I’ve seen first hand what the stuff can do to people whose brains are wired to leap out of their seats when drugs are around, not only in family members but in the people who were around my family. Many of the people I knew growing up were either struggling with addiction, or trying to get clean, or dealing with the shitshow of a life that is crawling out of the hole that addiction puts you in. All of which reinforced the idea for me early on that this was not what I wanted for my life, or in my life.
This did mean when I was younger I could be pretty humorless about alcohol and drugs. When I was a little kid I was convinced a single beer or puff from a joint would put you on the fast track to being (in the words of South Park) homeless on the streets giving handjobs for crack, and I would sometimes freak out about it. I got better about this as I got older and learned that not everyone had the same addiction problems as I saw in the people around me (this is where I note that for a large part of my childhood my mother was active in the Alcoholics Anonymous community, so I really was surrounded by addicts, albeit ones trying to get and stay clean). But, yeah, as a kid I was definitely not cool with a beer and a joint. I figured it meant you were doomed. Dooooooooomed.
On a personal level, the residual effect of that childhood paranoia manifests itself with a continued personal lack of interest in alcohol or drugs. I’m no longer paranoid that a single shot of hard liquor or a toke would turn me into an uncontrollable gibbering addict, but on the other hand given my family’s inarguable problems with the stuff I don’t feel the need to play the odds, either. I’m not foolish enough to think I don’t have all the features of an addictive personality, nor am I foolish enough to believe that age and understanding will have much compensatory effect against my body’s physical desire for addictive stuff. All in all, best to leave the stuff alone. There are other things to keep me occupied.
When I was younger, there were some people who were amazed that I didn’t drink or do drugs. “Aren’t you curious?” was a question I got a lot (answer: No, because I’d seen enough of it in my life, thanks), sometimes followed by the person, almost always a dude, who would be all “Dude, I’m totally getting you drunk tonight!” because he thought he was doing me a favor by making me relax through alcohol. It didn’t work, since someone trying to get me drunk made me rather more tense (this sort of thing was almost always about alcohol, I’d note. People smoking pot would offer you the joint, but if you didn’t want it, they were always “cool, whatever” and off it would go to the next person).
Occasionally when I was younger someone would get offended that I didn’t drink, because they thought I was judging them for drinking. Well, when I was a kid, sure, I’d do that. By the time I was drinking age, I didn’t care what other people were doing with their bodies, unless it was directly affecting me. Which is the way I feel today. I don’t drink; I’m fine if you do.
Nowadays, at age 46, no one is in the least offended that, or usually even curious about why, I don’t drink or do drugs. At this age, everyone knows people who stopped drinking or doing drugs, because they are in recovery. No one blames them for it, because everyone knows someone whose life got righteously screwed up because of substance abuse issues. If not drinking or doing drugs is what it takes for you not to have a messed-up life, good on ya. I do assume at this point that most people who notice that I don’t drink or do drugs assume I have some substance abuse history. Well, it’s true, I do; just not mine. I also don’t mind if people assume I’m in recovery. It’s not correct, but it’s not an insult, and if someone is judgey about people in recovery, then they’re the asshole.
(This is the point where I will note that I know a lot of contemporaries in recovery from drugs and alcohol, and they have nothing but my respect and admiration. Recovery is hard, man. Admitting you have a problem is hard. Quitting a thing your body is crying for is hard. Making amends to the people you hurt is hard. Staying on the recovery path each day, every day, is hard. Part of the reason I never started drugs or alcohol is that I saw close up at an early age how fucking hard recovery is. I’m not entirely sure I could do it. Given what the alternative to recovery is, that’s not good. So, yes: People in recovery? You rock, I salute you. Keep on keeping on.)
At this age there are other reasons I don’t drink or do drugs. On the subject of alcohol, first off, I’m cheap, and alcohol is expensive and I don’t understand how people just throw their money down that particular hole (to be fair, I feel this way about Starbucks, too). Second, alcohol has calories, and as a middle-aged dude who already weighs more than he likes, I don’t see why I should add to my woes in this regard. Third, given what I know about myself in terms of where I make conscious efforts to inhibit my behavior, I’m pretty sure I’d be a raging asshole when I’m drunk. You know that thing I wrote once, about how the failure mode of clever is asshole? It’s not just a pithy statement. It’s a reminder to me of my own failings. I expect that were I drunk, I’d try to be clever all the time, and would fail.
With drugs, well. I’ve never been a fan of the recreational use of pot, since that shit stinks like wet dogfarts and causes jam bands, neither of which fill me with joy. Pretty much all the other recreational drugs that exist out there just seem like a fast track to either being an asshole and/or losing a bunch of your teeth in one terrible fashion or another. The exception here seems to be psychedelics, which I worry that if I took would cause me to freak out more than I would like, which means that such a freakout would likely be a self-fulfilling prophecy.
Finally with both drugs and alcohol, at the end of the day I like being in control of my own self, as much as I can be, because I’m responsible for my actions and my self. Given what I know of myself and my likely addiction issues, drugs and alcohol would make it harder for me to be in control of myself. This would make me very unhappy, and that in itself would have a number of unpleasant knock-on effects.
All told: Drugs and alcohol are not for me, thanks.
But if they’re for you — and you’re not swimming in addiction issues (in which case please seek help), and you’re not bothering anyone else with your fun (and if you are, stop being an asshole) — then that’s great, and enjoy yourself. Anyone who’s seen me at a convention knows my natural habitat there is in the bar, hanging out and laughing with people. I wouldn’t be there if I was spending my time pursing my lips in disapproval at people loosening up through judicious use of booze. I am a lifetime designated driver, and I’m cool with that, too; I like making sure people get home safe.
I’m not a pot enthusiast, but generally speaking I’m for its legalization, and while I’m less sure about blanket legalization for other currently not legal drugs, the more I look at the mess that is the US response to drugs, the more I lean toward the general libertarian idea of “legalize it all, tax the shit out of it,” with a substantial chunk of that tax earmarked for treatment of addiction (rather than, say, incarceration, which is what we have now and which isn’t working particularly well as far as I can see). My personal prohibition against any of this stuff should not imply one for everyone else.
But yeah, for me, prohibition it is. The good news is, so far, my life has done okay without drugs and alcohol. They’re not things I feel a lack of.
His was one of the first intelligences behind the concept of artificial intelligence, and yet Alan Turing is defined not only by what he said on the topic, but also by what he never got the chance to say. Jim Ottaviani, author of the graphic novel The Imitation Game (Leland Purvis, illustrator), graces this site today to share more.
Elon Musk calls it “our greatest existential threat.” Bill Gates says “I agree with Elon…and don’t understand why some people are not concerned.” And Stephen Hawking, who cosigned a letter about it with Musk, Gates, and others, said this in a Reddit AMA: “We should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence. It might take decades to figure out how to do this, so let’s start researching this today rather than the night before the first strong AI is switched on.”
What about the guy who came up with the gold standard test for determining whether a machine could think? What would Alan Turing say about AI? Quite a lot, it turns out, almost all of it quotable and said many years before Gates — much less Musk — was born. (For his part, when Turing was writing about this, Hawking was an eight-year-old climbing out the windows of the new family home in St. Albans to get away from his sisters.)
He said most of it in a 1950 paper called “Computing Machinery and Intelligence,” published in the philosophy journal MIND, back when a philosophical paper might change the way computer scientists think, and computer scientists might even read stuff that wasn’t in computer science journals. Admittedly, that’s mostly because there were no computer scientists besides Turing, Johnny von Neumann, and a few of their respective protégés, nor were there journals for them.
He said his piece in what storytellers might call his third act. Turing’s first act came in his early twenties, when he successfully tackled the famous “decision problem,” or Entscheidungsproblem, posed by David Hilbert. Another mathematician, Alonzo Church, also solved it independently and somewhat traditionally: straight-up math, with techniques you’ll remember (probably with no joy) from your high school geometry class.
Turing? He solved it by inventing the modern computer in the abstract, then programming it and running it in his head. Nice work. Enough to make you famous for creating the Universal Turing Machine, a fame that will last until our computer overlords rewrite history to get rid of the human role in their invention.
So what do you do in your thirties, for your second act? How about playing a key role—arguably the key role—in breaking the German Enigma code during World War II, work that saved countless lives and accelerated the Allied defeat of the Nazis? And maybe designing an honest-to-goodness mechanical computer to do it?
Done and done.
The Turing Test for artificial intelligence was the culmination of his third act. If you don’t know what it is—not likely here on John’s blog—it’s…well, sorry. I’ll say no more about that because it’s the lynchpin of our narrative.
We don’t get to know about his fourth act, and that’s the big idea at the heart of our book. Turing was thoroughly modern in so many ways: a mathematical genius, the inventor of modern computers, an expert in encryption, and, as it happens, open about his sexuality in a time when that just wasn’t done. But he died over sixty years ago, his life cut short because a foolish and ignorant society (it’s not completely fair to call it ungrateful, since Turing’s code-breaking work was kept secret until the 1970s, and at that time computers weren’t part of daily life for people like you and me) allowed him to be arrested and convicted for the crime of being gay. Yes, it was a crime in 1950s England, and might as well have been in most other places in the world. The consequences of this were miserable for Turing, and by the time you finish our Imitation Game, I hope you’ll agree that they weren’t good for the rest of us, either.
As I said before, the game itself plays an important role in our graphic novel, so I’m glad Abrams liked our title as much as Leland and I did. And I think Turing would have appreciated it too, to the extent he’d notice. I suspect he would have been proud but perplexed by a biography, and only mildly interested in it. He’d have much rather told you about this new theory he was working on, or a science fiction story he was finishing up, or…
I can’t even imagine what else that might be, but I do know for certain that the world would have benefited—and been much more interesting—if we’d had decades more of him thinking, discovering, and stretching the boundaries of intelligence.
When I got the new camera, I skipped the kit lens that came with it and bought two specific lenses instead: a 50mm f/1.8 prime lens (which doesn’t zoom), and a 28-300mm f/3.5 – 5.6 zoom lens (which zooms rather a lot) which comes with vibration dampening, so presumably at full zoom you won’t get too much shake.
The pictures yesterday were with the 50mm lens, which I like a lot — it works fantastically in low light, which is great because I dislike using flash and try to avoid it whenever possible. Today I went outside and shot with the zoom, to see how it does. The results are below. All the pictures are zoomed in to some amount or another (the picture of the dude on the bike was at full zoom; I was roughly 800 feet from him when I took the photo).
Not bad at all for a general purpose lens. I suspect I’ll be using the 50mm most of the time when I’m shooting indoor or shooting portraits, but it’s nice to know this lens will fit my needs if I’m going to be in a variety of situations.
My younger child, a sophomore in college, has asked me to use “they” “them” as their preferred pronouns. I live in a very liberal and gender-choice aware New England college town, and I still find this difficult to consistently comply with. Sometimes my English major brain rebels at using plurals for a single person, sometimes I just don’t want to have that conversation with a stranger, especially one who has already stated views that suggest they have no sympathy for the preferences and realities of others. Sometimes I’m just tired and it’s hard to keep it all straight. So, what do you think of gender neutral pronouns? Can you suggest something…better than they, them? Am I being disrespectful of my child by failing to consistently respect and comply with their request? And how would you, or an older, female, Southern version of you respond to the boor who immediately brings up Caitlyn Jenner and insists on calling “him” “Bruce”? And, since you love writing questions, have I used too many “””‘s in this question?
Small things first: The number of quotation marks seems fine to me, and as far as “they” “them” and “their” are concerned, not only is there a long history of their being used as singular pronouns, it’s something that’s rapidly becoming standard usage. When you feel weird about using them for singular usage, just remember a lot of commonly-accepted grammar rules were invented fairly recently as a way for the status-anxious to feel better about how they used the English language. And, you know, that’s just stupid. Good grammar is that which makes the language clear, not that which makes it clear someone else isn’t following arbitrary rules.
As for how I feel about gender-neutral pronouns: I’m for ’em, and specifically I like “they,” “them” and “their.” One, I already know the words, which means that they’re easier for me to incorporate into my daily usage than other gender-neutral pronouns which have been more recently invented or drafted into service; two, I’ve already used “they” “them” and “their” as gender-neutral singular (and plural!) pronouns for years; they’re already part of my personal style guide.
I prefer them, in fact, to “he or she,” both because it’s a less awkward construction and because I know more people now who neither identify as “he” nor “she.” Inasmuch as “he or she” is meant to be an inclusive construction, when you know people who identify as neither (or both, or either on a sliding scale contingent on factors, or whatever), then you realize it’s not actually as inclusive as it’s meant to be. In which case: Hey! “They” offers a really easy solution.
When someone asks you to refer to them by a particular set of pronouns and you’re reluctant to comply, are you being disrespectful? Yup! Self-identity is important, and refusing to accept someone else’s identity for your own reasons will be taken to mean that you dislike or disagree with their choices about who they are. And this is your right, but it means you’re saying that your choices in this regard are more important than the choices of the person who has to live with their own identity every single moment of their lives.
Which is a hell of a thing to say. Are you sure you want to say that? And how would you feel if someone made that choice about you? I identify as male (and cis-gender), and my pronouns are of the “he” set. If someone consistently and purposefully used a set I didn’t identify with, I’d want to know why. And here’s the thing: generally speaking, when someone does misgender me, they’re doing it specifically to be disrespectful. I have assholes out there who use the “she” set of pronouns when referring to me because in their minds, it’s a terrible insult to call a man a woman, and this is a sign of their contempt.
Now, as it happens, I’m not insulted by the “she” set of pronouns being used for me, because I don’t believe being a woman is an inferior state of being. It’s not correct, but it’s not an insult. But my point of view on the matter doesn’t change the fact that the misgendering is intended to be disrespectful and an insult. Likewise, the boor calling Caitlyn Jenner “Bruce” and “him” is almost certainly being disrespectful. Bless their heart.
So, yes: Not using someone’s preferred set of pronouns is disrespectful.
With that said, let me share a personal story here. In the reasonably recent past, a friend of mine who went by one set of pronouns let it be known that from that point forward, they would like to be known by another set. When I read that, I wrote to them that I would be happy to comply, and also, because I had been using a different set of pronouns for them literally all the time I had known them before, it’s possible that from time to time, and despite my intent, I might fuck up and use the previous set. If I did, first, sorry about that, and I would try to do better, and second, please call it out if they saw me do it, because I didn’t want them to think it was intentional, and I wanted them to know it was all right to correct me and to expect an apology. Thus I let them know I respected who they are, that I was also fallible, and that when I failed them, I wanted to do better going forward.
People aren’t perfect. We’ll all screw up from time to time and fail the people we know, the people we like, and the people we love. It’s okay to acknowledge that will happen even as we work to accommodate the people we know, like and love. I do find in my experience that if you acknowledge that you might mess up but will consciously work to improve when you do, you end up messing up less over time, and when you do, people are generally more willing to be understanding.
So: Use people’s preferred pronouns. If you unintentionally screw up, correct yourself, apologize if you feel you should, and try to do better from there on out.
Let me also note that the pronoun thing is one of the best current examples of both the culture and individuals being on a journey, and that even people who mean well, or who want to do what’s best, can still be behind the curve. I’m not where I am with pronouns — and all the aspects of gender and identity that the pronoun issue is semaphore for — because one day I woke up and decided I was going to be cheerfully progressive on the issue. In fact, it wasn’t all that long ago that I would have argued about what the “real” identity of someone was, and whether it was bounded by their genetics, and whether just because you wanted to use one set of pronouns, that other people should then be obliged to accept your request, and so on.
What’s changed over time with me? Well, some of it is simply knowledge — knowing more people who are trans and genderfluid, and learning more about science and culture, which over time convinced me that a binary understanding of gender is woefully incomplete, and that maybe my own stances should reflect that.
But as much as that — and even more than that — was the question of who I was, and who I wanted to be with respect to others. Simply put, a strong person, a person who is good and kind and righteous, does not need to demand that other people shoehorn their self-identity into someone else’s expectations. A strong person, a person who is good and kind and righteous, says to the other person “tell me who you are” and accepts the fact of what they’re told.
Which is not to say I am a strong or good or kind or righteous person. As noted above, I’m as fallible as the next person, imperfect and otherwise still trundling on the karmic wheel of suffering. But I know who I want to be, and who I want to be is not someone who freaks out about other people’s gender identity (or their sexuality, or their cultural identity and so on). So I work on not doing those things.
Am I perfect about this? Nope: See above story about me acknowledging that I would probably screw up a friend’s gender identity. And likewise, people who want to do better may be just starting on this particular path, and will screw up, and fumble, and otherwise be imperfect. That’s okay, just as it’s okay for people to get exasperated and frustrated and angry when their identity is imperfectly understood or accepted, even by people who hope to be good people. I would get exasperated and frustrated and angry too, if I were in their shoes. I wouldn’t feel at all shy about saying so, either.
In any event: Yes, when someone tells you what their pronouns are, use them, won’t you? It doesn’t seem too much to ask. It requires nothing from you but practice. In return you acknowledge who they are as human beings. And with that simple recognition of their identity, you, too, acknowledge who you are as a human being. That matters, too.
(There’s still time to ask questions for 2016’s Reader Request Week — get your requests in here.)
Because apparently now I’ve made it a thing to write something about the Tuesday night primaries on Wednesday.
1. Hey, hey, Bernie Sanders fans! You had a good night last night, with Sanders thumping Clinton in Utah and Idaho by roughly 80/20 in both states and overall winning more pledged delegates than Hillary Clinton for the first time in a long time (62 to Clinton’s 55). That’s good stuff, and makes the argument that Sanders should stay in it for the long haul.
But is it enough? FiveThirtyEight’s delegate tracker suggested that in order for Sanders to be on track to win the nomination, he needed to win 74 delegates last night; he fell a dozen short of that, even with the comically lopsided caucus wins in Utah and Idaho. Clinton, on the other hand, needed to win 57 to hit her target; she got 55. Which is to say, apparently by FiveThirtyEight’s calculus, both Clinton and Sanders failed to hit their marks last night — and Sanders failed more.
(Those are CNN’s current numbers, I should note. Associated Press’ numbers are better for Sanders: 67 to 51. Which means Sanders was seven delegates off his target, while Clinton was six off of hers. Smaller margin, same result.)
This doesn’t mean Sanders is overall in a worse position than he was yesterday, since two big wins can give him momentum in future contests. But it’s a reminder that Sanders at this point not only has to win, and win big, but he has to keep Clinton from hitting her numbers, or at least make sure she misses her numbers by wider margins than he does. He’s got a complicated job, he does.
Personally, I’m interested in seeing how the Washington state caucuses go this Saturday; I think they’ll give some indication of whether Clinton’s going to put this away fairly easily (even if Sanders stays in until June) or whether she’s going to have to scrape out the win one delegate at a time, 2008 Obama-style (or, you know, lose, which could happen, as unlikely as I think that is at this point).
2. Neither Sanders nor Clinton hit their FiveThirtyEight delegate numbers last night, but they can take heart in knowing that on the other side of the fence, neither Donald Trump nor Ted Cruz got close to their numbers either (and Kasich got a big fat goose egg, so there’s that). Trump, who won Arizona, was off his delegate number by a dozen; Cruz, who slammed Trump in Utah, was off by three times that number.
But then it’s pretty clear the plan now is not for Cruz to get enough delegates to win the nomination outright; it’s to deny Trump enough delegates to do the same. And it worked last night, but the question now is whether it’ll keep working. Utah is filled with LDS church members who for various reasons dislike Trump as a candidate, which made it easier for Cruz to rack up a lopsided victory over Trump (and Kasich, who actually looks to have finished ahead of Trump in Utah).
But Cruz can’t count on that same advantage in Wisconsin, the next GOP state to go to the polls, which is winner-take-all and where Trump holds the lead; he can’t count on it in New York, which comes after that, where the most recent poll has Trump up over Cruz by a ridiculous 52 points. There’s not much on the map that looks friendly to Cruz until May, if you ask me, and most of the contests before then are winner-take-all or winner-take-most (Kasich is likely an also-ran in these contests too). Cruz can’t win, but it’s not clear he can make Trump lose, either.
(And then there’s the problem for the GOP that even if Cruz beats Trump, he’s still friggin’ Ted Cruz, who has even less of a chance in the general than Trump.)
Let me put it this way: My political crystal ball is notoriously cloudy, but even so, at this point I would give Bernie Sanders a better chance of winning the Democratic nomination than Trump not getting the GOP nomination outright. Both could happen; both seem to me unlikely.
3. Oh, and Jeb Bush has endorsed Ted Cruz. Yeah, that’ll help.
The Roman Empire in the New World? That’s the idea of Sidewise Award winner Alan Smale’s The Clash of Eagles trilogy, of which Eagle in Exile is the second book. But in imagining an alternate history, how does one give honor to actual history, and avoid the easy traps of historical fiction? Smale offers up his thoughts.
I was still a recent import to the U.S. when the hoopla surrounding the Columbus quincentenary started up. My own one-man version of the British Invasion was going rather well at the time; what I’d originally thought would be an educational three-year stint in the New World was being overwritten by the strong urge to stick around. Nearly a quarter century later I’m still here, and I’m now an American myself.
From my outsider perspective it was gratifying to see how quickly the simplistic and myth-based story of Columbus I was used to got replaced with a more factual, thoughtful, and nuanced reconsideration of his voyages and impact. I was just beginning to get published as a writer of short fiction at the time, but even then ideas were swirling around my brain. Yet it took another decade and a half, much more writing experience, plus the unanticipated kick-start of reading Charles Mann’s 1491, for my conscious and unconscious minds to get their acts together.
In Clash of Eagles, the Roman Empire never fell. Now it’s the early thirteenth century and a legion under general Gaius Marcellinus is marching west from the Chesapeake Bay towards the great Mississippian city of Cahokia, a thriving community of some 20,000 people. (Cahokia really existed, of course. The Mississippians were mound-builders, and even today it’s fun to stand on top of what we now call Monks Mound, a giant earthwork 100 feet high and 1000 feet across at the base, look out over the surrounding more gently-mounded landscape, and imagine how glorious Cahokia must have been in its heyday…)
And that was the Big Idea behind Clash of Eagles: Ancient Rome invades North America when the Mississippian Culture is at its height. Subtext: Invoke a different European invasion of the North American continent, in a different way and at a different time but with fairly similar motives – plunder and personal glory – and explore what happens.
Hold up a mirror to the world we know. Attempt a new perspective on the culture clash between invaders who have “discovered” this great new world of Nova Hesperia, and the people who have been living there all along.
Of course, along the way desperate battles, pathos, and hardship ensue.
As the second volume, Eagle in Exile, begins, Gaius Marcellinus is living in a Cahokia that’s suffered considerable death and destruction due to its Mourning War with the Iroqua of the northeast. Marcellinus has done his level best to help his new Cahokian friends, with – let’s put it kindly – mixed results. And then there’s a coup. Marcellinus and a small band of his Cahokian friends are expelled from Cahokia and have to survive as stateless wanderers on the Mississippi. But, but: in the meantime, the Emperor of Rome has hardly forgotten about Nova Hesperia. More legions are coming, and Cahokia is not ready for them. Unless Marcellinus and his new friends can turn things around, they’re hosed. And there may be an enemy even greater than Imperial Rome on the Hesperian horizon.
This kind of story has antecedents. All stories do. The theme of the helpful and notionally more ‘advanced’ outsider entering and influencing a foreign culture has been explored from A Connecticut Yankee in King Arthur’s Court and Lest Darkness Fall to Dances with Wolves and Avatar. Which was kind of the point. I wanted to dig into a new version of the “discovery” of the North American continent. But I also wanted to turn the Avatar cliché on its ear, because I’ve never believed it. I’m generally unsatisfied with protagonists who adapt into new and radically different cultures with such speed and ease that they’re indiscriminately slaying members of their old culture by the end of the book (or movie). Perhaps there are exceptions, and even noble ones, but by and large honorable human beings just don’t behave that way.
Marcellinus is an honorable man. He’s hardly blind to Rome’s flaws, but he will live and die a Roman. He tries to convince himself — sometimes on tenuous grounds — that his actions are in Rome’s interests as well as Cahokia’s.
More crucially, Marcellinus has sworn an oath to never take up arms against Rome. This puts him in a bit of a bind. He is no longer a mere soldier. He has made new friends, new family, a new community and new allegiances, and he can hardly abandon Cahokia and the other North American peoples to their fate when his inside knowledge of Rome might be able to help them.
He can’t fight Rome, and yet he can’t not help Cahokia. Really, what’s a guy supposed to do?
So, the Big Idea of Eagle in Exile: wild adventure in an ancient North America, while in the process standing that comfy Dances with Wolves trope on its ear. With a secondary theme, or, hey, a side order of: what does an honorable man do in an impossible situation?
With the easy answers ruled out, Marcellinus has to get creative. And after all, it’s not like everyone is just going to do what he says. Cahokia’s chiefs and elders have their own ideas, their own friends and enemies and concerns, and they don’t line up neatly with Marcellinus’s. Marcellinus is quite good at war, but he’ll have to develop a range of other skills to negotiate a treacherous landscape like this. He’ll have to learn fast, think on his feet, and try not to get killed or – given his less than stellar record so far – try not to get anyone else killed either.
I have to say, I’m glad my arrival in North America was calmer than Marcellinus’s. I might not have made it quite as far as I have.
I’ve been wanting to step up to a full frame DSLR for a while now, so this week I went ahead and ordered the Nikon D750 (for you Nikon geeks out there, I thought about getting the D810 instead but the D750 has nearly the same set of features for a lot less money, minus some extra resolution I would likely not take advantage of anyway). Right out of the box I decided to snap a few photos. Unfortunately I didn’t switch it over from JPEG to RAW before getting the first few pictures, but even so, what came out of the camera was pretty enough. Here are a couple of shots from this very first set.
So far, so good.
In case you’re wondering what will become of the D5100, it’s now the property of Athena, who has shown interest in and talent for photography. I’m sure she will put it to good use.
Autonomous cars, do they change how you will work in 10 years?
Do they change how I work? No, because I work from home, on a computer, which means I don’t have to go anywhere else to work. A car is generally not involved in my workflow at all.
Which is not to say I can’t wait for autonomous/self-driving cars. Are you kidding? These things will be the best thing ever for me. Why? Let me count the ways:
1. There are two types of people: Those who enjoy driving, and those who enjoy being at places where driving has to happen to get to them. I am in the latter group. Driving, as an activity, doesn’t interest me in the slightest. I drive because I have to go somewhere, not because I enjoy driving to get somewhere. That being the case, a self-driving car takes the part of driving I like the least — the actual driving — and gives it to the car to do.
2. Which means I will have more time to do the things I like to do, i.e., loiter on the Internet, listen to music, maybe do a little writing, talk to other passengers in the car. This will make the drive-time more enjoyable.
3. I get road ragey when I’m driving but not when I’m a passenger, so self-driving cars would make the transportation experience a hell of a lot less anger-inducing for me.
4. Other people on the road are idiots, so self-driving cars being used generally would vastly reduce the number of stupid people behind a wheel that I would have to deal with.
5. I’m getting older, and eventually I’ll get old enough that, no matter what, I’m likely to be a danger to others on the road. Self-driving cars will allow me to keep my autonomy without endangering others.
6. Also, come on, let’s face it, I’m already not the world’s greatest driver. The nation’s roads are likely to be marginally safer even now if I’m not the one behind the wheel.
7. Nap while the car drives to my destination? Don’t mind if I do!
8. I don’t drink alcohol, but for those friends of mine who do, the idea that they could get home safely without endangering themselves or others is a nice thought.
And so on.
Now, I do realize that there are legitimate privacy and security concerns that will need to be addressed before full automation comes to cars — we don’t want our cars broadcasting where we’re going to all of the world, and it would do no good to have cars that are crackable so someone can hijack them with us in them. There are also practical issues like who is liable in a crash involving an automated car, whether or not a human always needs to be alert to take over driving, and other such things. All excellent points to consider.
But even so: self-driving cars where is mine I want mine now now now now. It’s fair to say I want a self-driving car more than I want, like, colonies on the moon. Colonies on the moon are nice, but a self-driving car is going to be great for my life now.
Although, again, won’t change how I work at all. Sorry.
(There’s still time to ask questions for 2016’s Reader Request Week — get your requests in here.)
Have you ever wondered how you will be remembered by the “science fiction community”? How future critics will use you in comparison to future authors……about the legacy you have left behind you when you have gone……if you will be lost among the hundreds of authors, as many from the 50’s have been…..? No offense……but even great authors have no books reprinted….etc etc…..
Well, one, after I’m dead I don’t think I’ll be worried about how I’m remembered, by the science fiction community or anyone else, because I’ll be dead, which I suspect means I’ll be beyond caring about anything. This is a strangely comforting thought: So long, universe! It’s your problem now! So there’s that.
Prior to my incipient oblivion, which is to say for the next 40 to 50 years, I don’t worry too much about not being remembered. One, presumably I will continue to produce work for the next 25 or 30 years at least — writing is a career where one can have some longevity — so I’m likely to continue to be in the stream of commerce and notability in my field. Two, I have enough status in the field thanks to existing work, sales and awards that my inevitable decline into irrelevance might be managed as a gentle descending glidepath rather than a precipitous cliff fall.
Three, you know what, if until I die I have friends and family and people I care about and who care about me, then even if I were forgotten by the science fiction public while still alive, I would probably be fine. I had a nice run in there, and there are worse things than to be forgotten.
In any event, the question is not whether I or my work will be forgotten, but when. Why? Because nearly every writer is forgotten, as is their work, given enough time. A couple get cosmically lucky in terms of their cultural legacies — Shakespeare is the go-to example in English — but between 1616 (Shakespeare’s death) and today, 400 years later, hundreds of thousands of published authors in English alone (if not millions) have slipped out of history. Their work may exist, in libraries or rare volumes or in archives like Project Gutenberg, but no one reads them, save the occasional academic desperate for a serviceable thesis. When you know that the vast majority of those who write, and the vast majority of what is written, tumble down history’s hole, you have a pretty good idea of what your eventual fate will be.
You don’t even have to be dead for it to happen. The large majority of my published work prior to 2005 is “out of print” — either officially out of print in terms of publication, or accessible only through specialized archives, digital or otherwise. I have a publishing history that goes back to 1991 (or 1987, if you want to throw in my college newspaper), which includes thousands of film and music reviews, hundreds of columns, dozens of newspaper and magazine features and several books, all published before 2005. Unless you already have a physical copy of any of these, you are unlikely to see any of them, ever. For that matter, the first four years of this blog — 1998 through 2002 — are not on the current iteration of the site; they’re accessible through the Internet Archive, but there’s no real indication anyone visits that. Other things I’ve written on other sites on the Internet are likewise inaccessible, through closed sites, reorganized sites, link rot and other such things.
Again: The majority of everything I’ve ever written — things that had audiences of hundreds of thousands of people when they were printed — has already effectively vanished from history, when I’m 46 years old and still actively writing. Is this a horrifying tragedy? Well, no, not really. I mean, if you really want to find my Fresno Bee review of, say, the long-forgotten 1993 Wesley Snipes thriller Boiling Point, then knock yourself out. But I guarantee you that if you do find it, you will not marvel at its genius. The review doesn’t necessarily deserve to be forgotten, but it doesn’t make a very good argument to be remembered, either. A lot of my “lost” writing is like that.
But in time even my good writing is likely to slip out of the public consciousness, even in specialized fields like science fiction. Look, new science fiction readers have heard of Asimov and Clarke and Heinlein — but the chances they’ve read them, or at least read anything more than their one or two “greatest hits,” are increasingly slim, and will get slimmer the more time passes. This may outrage some folks who think you can’t truly appreciate the genre unless you take a survey class in it, but the average reader doesn’t care about that. They’re not going to go all the way back to Jules Verne or even Larry Niven just to have sufficient historical perspective in the genre to read the latest book by James S.A. Corey, Ann Leckie, or me. Forty years from now, new readers aren’t going to read our stuff as a prerequisite to reading whatever is new and exciting in the genre then.
And that’s fine. I’ve frequently said that I’m not interested in writing for the ages, since I won’t be there and the ages will take care of themselves in any event. I’m writing for people now, who will enjoy the work now, and also, and not entirely coincidentally, pay me for my work now, so I don’t have to do anything else for a living. Will it last? You got me. I suspect I’ll still be remembered fifty years from now, because people who are reading me now will still be alive then. A hundred years from now I may be remembered for one book. If I’m remembered two hundred years from now, I’d be impressed as hell with myself, if I weren’t already dead for probably 150 years.
(Incidentally, the book of mine that already exists that I suspect I’d be best remembered for in 100 years? Redshirts. It’s not my “obituary book,” the book that’ll show up in the opening graph of stories about my death; that will be Old Man’s War. But I suspect it’s the one that will age the best, in part because it’s specifically about its time and therefore resistant to going “out of date” in terms of technology and prediction (particularly of social mores) the way science fiction can do; in part because it functions as both story and metastory, commentary and metacommentary, which means it’ll be interesting to teach, and being taught is important for the longevity of a work; and in part because it’s funny and easy to read. Will I be right? Well, on the slim chance anyone’s reading this in 2116: You tell me. Or tell my corpse; again, I’m probably long dead.)
None of this is to say I wouldn’t be happy, in an existential sense, to have my work, and therefore me, remembered 100 or 200 years (or more!) into the future. I’m not going to live forever, and any personal immortality I earn will be through what I write. I think it might be nice for any future descendants of mine to brag to their friends that the book they’ve been assigned in class is from their great-great-great-grandfather (or granduncle, or whatever). I think it’d be fun to have people argue about whether I still have relevant things to say or should be considered “of my time.” It’d be nice to be remembered for being a writer, hopefully positively, when I’m gone.
I’m just not staying up nights worrying about whether it will happen. If it does, great. If not, I’m having a hell of a lot of fun now, and enjoying the small serving of notability I get today, for doing what I do. It won’t last; it never does. On this side of the grave or the other, I’m likely to be forgotten, and no matter what, the sun will eat the Earth five billion years from now anyway, and eventually the entire universe will proton-decay out of existence, so, you know. Be ready for that.
In the meantime, I’m going to enjoy what I have today, with the people I have with me now. Seems the best thing to do.
(There’s still time to ask questions for 2016’s Reader Request Week — get your requests in here.)
The real world can sometimes get you down. But if you’re a writer, at least, you can use that as an opportunity to imagine another world. At a low point, Susan Jane Bigelow did just that — and her novel Broken was the result. Here she is to tell you about it.
SUSAN JANE BIGELOW:
Hope is a fragile thing, especially when times are bad. It’s easy to get lost in cynicism, to dwell on the awfulness of people and governments and systems, and resign ourselves to whatever fate is in store for us. After all, if we don’t get our hopes up, they can’t be dashed… and sometimes, hope feels so far away that it’s hard even to imagine we could ever feel it again.
In 2004, after failing at my job as a high school teacher, getting a new job for a lot less money, and watching what felt like political disaster unfold when John Kerry lost to George W. Bush, I wrote a book about hope to make myself feel better.
That book, Broken, turned into a four-book series. And really, at its heart the Extrahuman Union series is about trying to find that narrow thread of hope to carry us through the darkest times.
I suppose it is also about superheroes in space. That’s important too.
The world of this book is teetering on the brink of disaster. The grip of a fascist government is tightening around everyone, and there’s nothing anyone can do to stop it. Earth and the dozens of colony worlds that make up the Confederation are falling into a long, long darkness.
Only Michael Forward can see a way through. Michael is just a kid, but he’s been saddled with extrahuman powers that let him see the possible futures of everyone he looks at. He knows how bad things are going to get, but he also knows that there’s a slender path through the darkness that leads to a better future for everyone. All he has to do is find it.
For that, though, he needs the help of Silverwyng, a former member of the Extrahuman Union who started living on the streets of 22nd Century New York after she lost the ability to fly, and who now goes by the name “Broken.” Broken has no hope. Everything she loved about her life is gone, and she is nothing but a mess of fury, despair, and cynicism when Michael finally tracks her down.
This is the story of how she helps Michael Forward and the orphan baby Ian, but it’s also the story of how Broken comes back to life. It’s the story of how she remembers who she was, and starts to have faith in herself and in the idea that she could have a future.
Broken is the first chapter of her story, to be continued in the forthcoming books Sky Ranger, The Spark, and Extrahumans.
And yes, I wrote it to make myself feel better about politics. But I also wrote it because one of my fundamental beliefs is that things can and will always get better, no matter how bad it seems now. Fate is cruel and life is hard, but faith in humanity and hope for the future are worth hanging on to.
This is not an easy thing to write. There’s a fine line to walk between hopeless and corny, and it’s very tempting to swerve to one side or the other. The first draft of this book, which was written for NaNoWriMo 2004, was a lot darker than the final product. There was a lot more death and despair. You’re lucky I cut out the part where Broken eats a dead cat. You’re welcome.
As for why I chose to use super-powered people, well… they’re cool! But they’re also symbols of hope, in a way, especially some of the better ones. Implicit in a lot of superhero narrative is the idea that no matter how bad things may get, the day will always be saved.
I still believe that it will be. And I hope that Broken succeeds in conveying that!
We’re getting cosmic for this next question, from Greg, who asks:
Earthlings have 4 billion years to figure out space colonization before the sun goes red dwarf and consumes the earth Galactus style. They also have 4 billion years before the Andromeda galaxy collides with the Milky Way galaxy, which will likely require massive technology to survive. Can we pull it off? Can we even survive that long?
Well, before we begin, let me make a few corrections here.
Actually, the sun will not turn into a red dwarf; it will turn into a red giant, which has a very real chance of expanding out to the size of Earth’s orbit, swallowing the planet up in the process. That’s likely to happen closer to five billion years from now, not four billion. Not that it will matter, because a mere billion years from now the sun is going to be brighter and hotter than it is now, which will likely turn Earth into something like Venus is today, i.e., a hellish world where greenhouse gases have run amok. So that’s probably the deadline we’re working with.
Also, the Andromeda Galaxy colliding with the Milky Way Galaxy? While it is likely to happen in 4 billion years or so, it’s unlikely any of the stars in either galaxy will collide with each other — the distances between stars are just too great. It’s possible (although unlikely) the Solar System might be ejected into deep space because of the gravitational effects of the two galaxies merging, but the solar system itself should be fine. Mind you, by that time the Earth would be uninhabitable anyway because of the sun heating up, but the galactic smash-up will be neither here nor there to that.
So the amended question is: Will humans figure out space colonization before the Earth is rendered uninhabitable by the sun, which barring anything else will almost certainly happen a billion or so years from now, and will we survive that long in any event?
The answers: Maybe, and probably not.
Last part first: Humans, which is to say the species Homo sapiens, are about two hundred thousand years old, which is actually not that old as species go. We evolved out of previous species of the genus Homo; probably Homo heidelbergensis, which went extinct around the time we showed up (just a coincidence, I’m sure). Before heidelbergensis was Homo erectus, from which it was likely descended, and which has also gone extinct. And so on and so forth.
Here’s the thing about species: Generally, they don’t last very long (geologically speaking). Over time, most species are likely to do one (or both) of two things: evolve into another species, or go extinct. To be clear, sooner or later, every species goes extinct (see the ticking timebomb of the sun, above); only some evolve into something else. But it is very rare, generally speaking, for a species to last more than a few million years.
Why? Because the Earth is an unstable place, given enough time — temperatures go up, then they go down. The amount of gases in the atmosphere fluctuates significantly. Ice ages happen. Global warming occurs. Every now and again an asteroid drops in to really screw everything up. Die-offs of the majority of all the extant species on the planet have happened several times (and some folks are warning that we’re in the early stages of a new one, thanks to human activity messing with the planet). When the ecologies change, the niches that species developed to take advantage of change too. This is rarely a good thing for the species in question.
Current humans have existed for a mere 200,000 years, in a genus (Homo) whose oldest member existed only 2.5 million years ago — barely even yesterday in geologic time. It would be optimistic in the extreme to suggest that Homo sapiens, as it exists today, will still be with us a billion years from now — 400 times as far into the future as our entire genus extends into the past. Given the assiduousness with which we’re currently reworking the ecology of the planet (unintentionally or otherwise), we’re probably making it more difficult for the species to last another 10,000 years, much less a billion.
But we’re smart! I hear you say. Sure, that’s true, but does it then follow that a) we’re smart enough not to basically kill ourselves by wrecking the planet, and b) our intelligence means that evolution is done with us? The answers here, if you ask me (and you did) are: We’ll see, and probably not. In the latter case, there’s an argument to be made that our intelligence will increase speciation, as humans intentionally do to our species what natural selection did unintentionally before, and do it on a much shorter timescale, in order to adapt to the world that is currently rapidly changing under our feet, in no small part because of our own activities.
So, no. Human beings, meaning Homo sapiens, will almost certainly not be here a billion years from now. We’re probably not even going to be here 100 million years from now, or 10 million years from now, or, hell, even a million years from now. The question is whether our evolutionary descendants will be around, a new branch (or branches) of the genus Homo. My guess is: A million years from now, yes, and we may even recognize them as human. Ten million years from now, maybe, but we could probably only vaguely see them as being descended from us. A hundred million years from now, if our descendants are still around, there would be no family resemblance at all. A billion years from now, well. Remember that your direct ancestors from a billion years back were single-celled eukaryotes who had just figured out this great new thing called “sex.” That’s how far back in time we’ll be from any of our descendants then.
Now, as to the other question: will we have figured out space colonization by a billion years from now? Sure. Look, if we really decided that space colonization was something we wanted, we could have a couple million people in space in the next hundred years, easy. The issue to my mind isn’t really technology — I suspect we have the tech to make roughly serviceable colonies in space (and on the moon and on Mars) right now, and we could scale up from there in the next hundred years, no problem. The issue is whether we want to make the effort, and swallow the frankly ridiculous setup and maintenance costs, of permanent space colonization. Barring a Seveneves-like catastrophic event, we probably won’t, because why would we? We’ve got a nice planet down here, even if we’re currently mucking it up a bit, with lots of raw materials and space to work with. It’s easier to try to work with what we have down here, at the bottom of a gravity well, than to send people up there and try to make that work.
I mean, yes, sure, eventually the sun will eat the planet, and it will swaddle it with greenhouse gases long before then. But again, the operative phrase here is “geologic time.” These events are going to happen so far out in the future that the human mind — the Homo sapiens mind — literally cannot process how far out in the future it will be. I mean, shit. We think waiting two days for something to arrive at our house via Amazon Prime shipping is forever. To make a mind constructed like that consider the unfathomable expanse of a billion years is folly.
Rather than worry too much about a billion years from now, or five billion years from now, I’d rather have us think about the next hundred years, and what we’re going to do with them. Make no mistake, when we talk about the fact we’re “wrecking the Earth” what we mean is that we’re wrecking it for us. As soon as we’re gone, there’s no other species taxing the planet to the same extent we are. What life remains — and life will remain — will speciate out to take advantage of how the planet is then, and will fill the niches, and over time the planet will change again, and speciation will happen to take advantage of those changes, too. The Earth doesn’t need us, and it won’t miss us when we’re gone. It’ll just… go on. It will do that if we die off, or if we take to the stars. But honestly, the first of these is far more likely than the second.
I’d like for humans to be here in a hundred years, and in a thousand. After that, we can worry about the next million years, and then the next ten million, and so on, until we get to the billion year mark and a much hotter sun. We’ve got a lot of time between now and then, however. First things first.
(There’s still time to ask questions for 2016’s Reader Request Week — get your requests in here.)
Welcome to Reader Request Week here on Whatever, where you suggest the topics I then write about. And let’s start off with this one, from Kilroy, who asks:
Urban v. Suburban living: Why I live on a big ass property in the middle of nowhere with awful internet when I could be living it up in a nice house in a big city with all the benefits of modern society and be around more people with the same political and social ideals that I do.
(Note that the “I” here is meant to be me, John Scalzi, not him, Kilroy.)
I’ve noted several times on Whatever how it is I came to live in Ohio, so there’s no point in going into great detail about it again at the moment (the short version: My wife’s family is from here and she wanted to be closer to them as our daughter grew up). I think the question is really about why I, a generally liberal, cosmopolitan sort of fellow, who has the means to move somewhere more in line with my politics and lifestyle, chooses instead to continue to live in a small, rural, conservative town in a small, rural, conservative county, in the Midwest, which is generally less cosmopolitan (and liberal) than the coasts.
Fair question, and here’s why:
To begin: we’ve paid off our mortgage. We’re not in a rush to get another one. I mean, we could afford a new one, I suppose, in a larger city than this, but why? To replicate the home lifestyle we already have where we live, we would have to spend a truckload of money we simply don’t have to spend here. Why would we do that?
Well, possibly, to have a richer cultural and social experience than I do. Okay, sure, but let’s qualify that. I lived in the Washington DC area for several years, which meant that at my fingertips I had a whole range of cultural and social activities — and I took advantage of them and saw concerts and events and went out to eat at restaurants and such. And it was great! But we did those cultural events maybe a couple of times a month at most, and went out with friends maybe once a week. The rest of the time we stayed at home and watched movies or read or played video games or whatever.
Fast forward to today, and you know what? Living where we live, Krissy and I go to cultural events fairly regularly, and go out with friends maybe once or twice a week. The rest of the time we stay at home and watch movies or read or play video games or whatever. Which is to say we are who we are, regardless of whether we live in a large metropolitan area or in rural Ohio.
Bear also in mind what “rural Ohio” means. I live in a small town of 1,800 and see Amish clopping down my road in their buggies on a daily basis. But this small town of 1,800 in rural Ohio is 45 minutes from Dayton, 90 minutes from Cincinnati or Columbus and two hours from Indianapolis. If I want to see a musical, or look at art, or go to a concert, or go get Ethiopian food, or any other number of things, it’s pretty doable, and the time commitment to and from is not actually all that much greater than it would be on the subway or the freeway. As I frequently say, I live in the middle of nowhere, but it’s the middle of nowhere, Ohio, as opposed to the middle of nowhere, Nebraska. I can go from nowhere to somewhere pretty fast.
The other thing here is that aside from this, I do travel a frankly enormous amount. In the next two months I’ll be in Los Angeles, Detroit, Chicago and Madison, Wisconsin for sure, and there may be other trips I’ll be taking as well. During each of those trips I will see friends, eat well, and go see (or participate in!) cultural events. Because of my travel commitments, I sometimes see friends who live thousands of miles away more often in a year than I will see some of the people who live in my hometown. It also means that when I do get home from travel, what I want to do is not see anyone other than my family and pets for a while. Which means, in point of fact, that living out in the middle of nowhere is perfect for my mental equilibrium.
Now, Kilroy points out another possible advantage to living elsewhere, which is that there would be more of a chance of people having the same mostly liberal-ish politics as I do, as opposed to living where I do, which is a county that went 72% for Romney in the last presidential election, and chose Trump over Kasich in the recently completed GOP primary, 43% to 40%. If I moved just down the road to Dayton, I would find people whose politics and social stances are much more congenial to my own.
And maybe I would, but two things here. One, there’s the math question of whether I’m willing to pay tens of thousands of dollars a year in a mortgage (or hundreds of thousands of dollars to buy a house outright) simply for the benefit of voting near people who vote like me. That math doesn’t check out, especially because for things like state-wide and Senate and presidential elections, it doesn’t matter how my county votes, it matters how the people in my state vote overall. It’s true my US Representative and my state reps are likely to be Republicans (they all are at the moment), but, eh. That’s life sometimes.
The other thing is that just because people don’t vote like I do doesn’t make them horrible humans; conversely there are horrible humans I know of who share my politics. My next door neighbor and I pretty much cancel each other out when it comes to who we vote for every single election, and he’s as fine a neighbor as I’ve ever had and I would be hard-pressed to find one better. I’m pretty sure he likes me just fine too.
This should not be a surprising fact of life. A civilized society is one where you can disagree politically with your neighbor — sometimes bitterly — and still feel comfortable feeding his cats while he’s away and being glad he enjoys shoveling the snow off your driveway. Meanwhile I can think of at least a couple of people who vote like me up and down the line who I won’t willingly be in the same room with if I can avoid it. Our politics are not the whole of who we are as a person. It’s been politically advantageous for a while now for some folks to suggest we are only who we vote for, and that you can tell everything about us by who we want as president (or senator, or representative, etc). It’s not true, for most people, anyway.
I like my neighbors; I think most of my neighbors like me. I like the little town I live in; I think my little town likes that I live here. I like looking up at the night sky and seeing the Milky Way. I like that I can open my door and just let my pets out, and that every once in a while a neighbor dog will come up to the house and ask if my dog can come out and play. I like it that my neighbor’s chickens walk up and down my yard like they own the place. I like it that if there’s a car in my driveway my neighbors don’t recognize, they’ll text just to make sure we know about it. I like that I can take sunset pictures from my deck that make other people jealous. I like the idea that I’ve been writing science fiction in a town where a traffic jam is three cars behind an Amish buggy.
That said, it’s true the Internet here sucks. I’ve had the same speed Internet for the last ten years. It’s possible that will continue to be the case for the next ten years. Dear CenturyLink: You suck.
But honestly, for me and for my family, that’s the major drawback to living where we do. And if the major drawback in your domestic life is slow Internet, well. You’re doing okay, no matter where you live.
(There’s still time to ask questions for 2016’s Reader Request Week — get your requests in here.)
It’s not much and it won’t stay very long (the temperatures will be in the 40s today), but here they are. I can say with confidence that these are the final snows of winter because the Vernal equinox here in the northern hemisphere arrives tomorrow at about 4:30am UTC, or actually 11:30pm tonight where I live. So, yeah, this is pretty much it. Winter is leaving.
Not that it was much of a winter in general; I can say with some confidence that in fifteen years of living in Ohio, this is the one that racked up the least amount of snow and cold. We had only a couple of days of genuinely frigid weather, as opposed to the couple of years before this when we had the polar vortex squatting over us. Once again, as a native southern Californian, I’m all for this. More mild winters for Ohio, please. Given that overall warming of the planet anecdotally seems to mean either very mild or deeply frigid winters for us, if I have a choice, I’ll pick the former, thanks.
Anyway: So long, winter — and given the 10 day forecast for the area, so long to snow. Now we get a month of clouds and drizzle. I can live with that.
And here we are, with another just lovely stack of books and ARCs that have come to the Scalzi Compound. What looks good to you? What would you like to take home to your very own bookshelf? Tell me in the comments!
Here’s a nice thing that came in the mail the other day (to my mother-in-law’s house, oddly, but whatever): A commendation from the Ohio Senate! It’s for winning the 2016 Governor’s Award for the Arts in Ohio, and also, apparently, just for being a creative sort of dude who lives in Ohio. Well, okay! I’ll take it, with appropriate thanks and appreciation. This is kind of neat.
Interestingly, this is the second commendation I’ve received from the Ohio Senate; the first was in 2006, when I won the Campbell and was nominated for the Hugo for Old Man’s War. It was neat then, too. Does this predict a third commendation in 2026? Perhaps! (No.)
Regardless, it’s nice when your state appreciates you. I like Ohio, too.
1. As a reminder, I’ve withdrawn my work published in 2015 from award consideration, a fact I’ve mentioned here more than once, and which is well-known in science fiction and fantasy circles. I have no interest in that work being nominated, or suggested for nomination, for awards. To the extent that I am able, in the event my 2015 work is a nominee or finalist for awards, I will decline nominations or withdraw from consideration. This year, please nominate other people and works for awards instead.
2. As this is and has been my stated and well-known wish for the last several months, you may assume any presence of my 2015 work on any slate (or “recommendation list,” nod, nod, wink, wink) designed to produce award nominations is unsolicited and unwelcome and contrary to my expressed wishes, and my work has been placed on that slate without my knowledge, approval or consent.
3. Likewise, as it has also been my long-held position that I would never voluntarily participate in an award nomination slate, you may assume that my presence on any such slate is not voluntary, particularly, again, this year, and that again my appearance on it is without my knowledge, approval or consent.
4. If I or my work has been placed on an awards slate without my desire, knowledge or consent, it’s worth asking what other work may have been placed on such a slate, also without the desire, knowledge or consent of the author. You might also consider what sort of person would add an author and their work to an award nomination slate without their consent, and why those doing so would choose to do such a thing.
5. Some explanations as to why one might place someone or their work on an awards nomination slate without their expressed consent could include but are not limited to:
a) Desire to bring the legitimacy of quality to an otherwise dubious assemblage of potential nominees;
b) A transparent attempt to hide an overall political agenda by bringing in outside work, and/or to use that outside work as camouflage (i.e., slate, minus unwilling draftees to slate, equals actual slate);
c) The hope that by nominating good, outside work, other more dubious work will also get nominated as people vote the entire slate;
d) Latching on to the good reputation of the outsiders and their work for the publicity value, to draw attention to other more dubious work;
e) Being an asshole to people you don’t like, because you’re an asshole.
6. But it’s also entirely possible that those crafting award nomination slates are merely innocent enthusiasts of my work, wishing in all good will to promote a thing of mine that they love. That’s a lovely sentiment, and I appreciate the thought. However, inasmuch as I have a long-stated opposition to myself or my work being on slates designed to produce award nominations (or “recommendation lists” nod, nod, wink, wink, that are designed to achieve the same result), I would then simply and with due appreciation request they withdraw my work from their slate. This would be the case any year, but particularly this year, when I’ve already noted publicly, more than once, that I’ve withdrawn my 2015 work from award consideration.
Note well that in a perfect world I should be able to have my work dropped from a slate for any reason, or no reason, particularly from a slate I did not ask to be part of, and to which my work was added without my desire, knowledge or consent. That would seem to be the polite and respectful thing to do on the part of the slate makers. And not just me, of course; any person who’d prefer they or their work not appear on a slate (or even a particular slate) should have their wishes respected.
7. If those who have made an award nomination slate, who did not seek the approval of those they have placed on it to be on it, will not then remove those who ask to be removed, at once and without delay, it is reasonable to ask why they will not, and what purposes their refusal serves. See point “5” for some possible explanations. I would particularly note sub-point “e.”
8. In sum:
I’m not seeking award consideration this year;
I would not willingly participate on an award nomination slate;
If I’m on such a slate it’s without my consent;
Those who have put me or my work on such a slate should remove me from it;
If they won’t remove me, or anyone who asks to be removed, they’re likely assholes;
And maybe you should factor that in when thinking about them and their motives.
That about sums it up.