The Big Idea: James L. Cambias
Posted on January 28, 2014 by John Scalzi
I can say this with some authority: I’ve known longer than anyone else working in science fiction today that James Cambias is a terrific writer. I know this because when I was editor of my college newspaper, James turned in some fantastic articles about the history of the university and of Chicago, the city our school was in — so good that I was always telling him he needed to write more (he had some degree program that was also taking up his time, alas. Stupid degree program). After our time in school, James made it into science fiction and has since been nominated for the Campbell, the Nebula and the Tiptree.
So it comes as absolutely no surprise to me that James’ debut novel, A Darkling Sea, is racking up the sort of praise it is, including three starred reviews in Publishers Weekly, Kirkus and Booklist, and comparisons to the work of grand masters like Robert Silverberg and Hal Clement. He’s always been that good, in science fiction and out of it.
Here’s James now, to tell you more about his book, and how one of the great tropes of science fiction plays into it — and why that great trope isn’t really all it’s cracked up to be.
JAMES L. CAMBIAS:
Small groups of people can have a huge impact on history. The Battle of Bunker Hill was fought by two “armies” which could easily fit into Radio City Music Hall together, without any need for standing room.
I wanted to tell the story of a tiny, remote outpost which becomes the flashpoint for an interstellar conflict. But I had a problem: most of the reasons for interstellar conflicts in science fiction are actually pretty lame.
Seriously: who’s going to fight over gold mines or thorium deposits when the Universe is full of lifeless worlds with abundant resources? And even if we find worlds with native life, it’s fantastically unlikely that humans will be able to live on them without massive technological support.
So there’s not going to be range wars, or fights over the oilfields, or whatever. The sheer size of the Universe makes conflict difficult and unnecessary.
Which means a war with an alien civilization has to be about something other than material wealth. It has to involve the most dangerous thing we know of: ideology.
In my new novel A Darkling Sea, a band of human scientists are exploring a distant moon called Ilmatar. Like Europa, Ilmatar has an icy surface but an ocean of liquid water deep below. The humans have built a base on the sea bottom in order to study Ilmatar’s native life forms, including the intelligent, tool-using Ilmatarans.
But they aren’t allowed to make contact with the Ilmatarans, because of another star-faring species called the Sholen. The Sholen are more advanced scientifically than humanity, and have adopted a strict hands-off policy regarding pre-technological societies. A policy which they insist the humans follow — or else.
That’s all very well, but there’s a problem with that attitude. The native Ilmatarans aren’t passive beings. They are curious and intelligent. One group in particular are very interested in preserving and expanding scientific knowledge, and it’s that band of scientists who come across a reckless human explorer. He winds up advancing the cause of science in a very unpleasant way, and the violation of the no-contact policy inflames the Sholen’s suspicions of the humans.
The humans resent what they see as bullying by the Sholen. The Sholen suspect the humans have imperialist ambitions. Tensions keep rising and eventually explode into outright war — a war fought by two dozen individuals on each side, at the bottom of a black ocean under a mile of ice.
Alert readers may notice that the ideology which creates this powderkeg in the first place is nothing less than Star Trek’s famous “Prime Directive” — a noble ideal and a hallmark of science fiction optimism.
I’ve always hated the Prime Directive.
The Prime Directive idea stems from a mix of outrageous arrogance and equally overblown self-loathing, a toxic brew masked by pure and noble rhetoric.
Arrogance, you say? Surely it’s not arrogant to leave people alone in peace? Who are you, Cortez or someone?
No, but the Milky Way Galaxy isn’t 16th-Century Mexico, either. The idea of forswearing contact with other intelligent species “for their own good” is arrogant. It’s arrogant because it ignores the desires of those other species, and denies them the choice to have contact with others.
If Captain Kirk or whoever shows up on your planet and says “I’m from another planet. Let’s talk and maybe exchange genetic material — or not, if you want me to leave just say so,” that’s an infinitely more reasonable and moral act than for Captain Kirk to sneak around watching you without revealing his own existence. The first is an interaction between equals, the second is the attitude of a scientist watching bacteria. Is that really a moral thing to do? Why does having cooler toys than someone else give you the right to treat them like bacteria?
“But what if they come as conquerors?” you ask. “That’s not an interaction of equals!”
That’s entirely true. And of course an aggressive, conquering civilization is hardly going to come up with the idea of a Prime Directive. It’s a rule which can only be invented by people who don’t need it.
Which brings me to the second toxic ingredient: self-loathing. I’d say that only post-World War II Western culture could come up with the Prime Directive, as that’s about the only time in human history we’ve had a civilization with tremendous power that’s also awash in a sense of tremendous shame. Previous powerful civilizations felt they had a right, or even a duty, to conquer others or remake them in their own image. Previous weak civilizations were too busy trying to survive. Only the West after two World Wars worries about its own potential for harm.
The Sholen in my novel have that same sense of shame. Their history holds more horrors than our own, and their civilizational guilt is killing them. They’re naturals for a “Prime Directive” philosophy. For them, humans are an ideal object for their psychological projection. They see all their own worst traits in humans, and assume the worst about the motives and intentions of humanity. The result confirms each side’s fears about the other.
As to what happens then, well, read the book.
—-
A Darkling Sea: Amazon|Barnes & Noble|Indiebound|Powell’s
Read an excerpt. Visit the author’s blog.
>>Alert readers may notice that the ideology which creates this powderkeg in the first place is nothing less than Star Trek’s famous “Prime Directive” — a noble ideal and a hallmark of science fiction optimism.
I’ve always hated the Prime Directive.>>
Having read (and reviewed) it, I see where you are coming from. I’d say more in this space, but yeah, spoilers. The Sholen were one of the most interesting things in the book, second to the Ilmatarans.
I’m not sure I agree entirely with James’ analysis of the Prime Directive. The purpose is to avoid putting a more advanced civilization in a position to actively influence the natural development of a less advanced civilization, and thus alter their evolutionary trajectory.
Who are we (humans, Starfleet, whomever) to have that kind of impact, benign or not? As for the desires of the other species, I would argue that just because someone (or a lot of someones) want something, doesn’t mean it’s a good thing for them. The Prime Directive recognizes that we don’t have the knowledge OR the wisdom to be able to determine what is right or not right for another intelligent species.
Before it got all silly in later seasons, this was kind of addressed in the TV show Earth: Final Conflict. The show was developed from an idea that Gene Roddenberry had been working on from the end of the original Star Trek series, until his death. Majel found the bits and pieces, and brought it to life. The show looks at how humanity would respond to a (supposedly) benevolent species interacting with us. Worth checking out.
I think this is the first really positive review I have seen John give a novel. He usually just posts a neutral blurb. I’ll definitely give it a shot.
though this is pretty funny…
“So there’s not going to be range wars, or fights over the oilfields, or whatever. The sheer size of the Universe makes conflict difficult and unnecessary.”
Scalzi’s Old Man’s War universe is packed with species that do nothing but fight over rocks like this. I like your rationale for your approach and coming up with new ideas.
Guess @ January 28, 2014 at 10:43 am:
OMW assumes that there are large numbers of planets out there which already have life on them, and that the life there is usually compatible with life from other planets. This is a standard SF idea from way back, and it lets you write stories drawing inspiration from human history on Earth.
But it looks really unlikely to be true, given what we now know about planet formation, biology, etc.*
Other planets are going to be either lifeless, and impossible to make habitable without vast stretches of time or technology so advanced that the historical models of war or settlement don’t make sense for other reasons; or they will have life on them, which in some ways will make them even more difficult for humans to inhabit.
I suspect Mr. Cambias is working from the latter assumption.
* It may have also been implausible back in the good old days too, but let’s not get into that.
The most interesting question around a ‘prime directive’ for me is the impact that contact would have on a lower-tech civilization in terms of ongoing development. I’d tend to assume that a civilization that had such a rule would likely have at least one (probably more) historical instances of attempted contact. If all such attempts resulted in some sort of violent implosion of the lower-tech culture then perhaps it would make sense to impose a hold on contact with societies in early stages. I can certainly imagine that there would be huge upheavals in a society that was presented with tech a thousand years beyond their current capabilities…both in terms of their continued development (why work to make a better widget when you know that the guys visiting could start selling you vastly better widgets for almost nothing at any time) and in terms of cultural momentum (these guys know things we couldn’t discover without another millennium of hard work, and it makes more sense to lobby for access to their data than to do the hard work).
This might end up settling out in a handful of generations with the new culture having assimilated the new tech and moved into a place where they’re part of the broader civilization. It might result in civilization-wide collapses that aren’t readily recovered from and result in violence and a dark age. I’m not sure how many cases of the latter would be needed before the high-tech folks decided to back off, but I’d hope that if that were the way things generally worked out, at some point we’d decide to stay hands off and let them develop on their own for as long as possible…
Very interesting idea. I’ll add this to my list.
:le sigh: Another item added to my Kindle wish list. Scalzi, you owe me big!
Kyle Wilson @ January 28, 2014 at 11:23 am
Another possibility is that the lower tech civilization just gets remade in the image of the high tech civilization, and their culture vanishes.
One of those things that seems bad from a high level viewpoint, but not necessarily from the viewpoint of a particular individual.
Another aspect of this is if you permit contact, how do you prevent members of your civilization from abusing or exploiting theirs? In theory you can regulate your citizens to prevent this, but in practice it might be so difficult that the only workable regulation is “Don’t do anything, ever.”
I can see both sides of the “Prime Directive” issue … funny how reality doesn’t fit into those nice theoretical boxes. :-)
Also, I wonder if Mr. Cambias has written anything else (particularly non-fiction, essays, etc.) about the “self-loathing … [of] post-World War II Western culture”? If so, I’d be interested in reading it; many points to ponder there.
Mostly, though, this novel sounds awesome & will be added to my TBR list!
Thank you for the heads up. I will be buying and reading this book.
I’ve never liked the prime directive either. Contact between cultures with a wide developmental gap is certainly a complex issue. Categorical rejection of contact doesn’t seem to me to be the best position ethically or pragmatically.
It does relieve you of responsibility. It can also burden you with responsibility. If it is within your capabilities to prevent the devastation of a culture but you take no action because of the “prime directive,” you have assumed some responsibility for the outcome. “Natural course of evolution” be damned. Nature is indifferent. Natural outcomes are not, by any stretch of the imagination, always, or even mostly, better for people. I include sentient aliens in “people.” There is no “preordained Purpose that must not be interfered with.”
The myth of the noble savage is just that, a myth. The evidence is clear. Knowledge and technology allow an improved standard of living, better health, longer life, more time to pursue non-life-essential interests, all for more of the people in the population. If you have achieved this and determined after diligent consideration that you can help another culture achieve this, why should you not? And such a venture need not be purely altruistic. It could be a profitable and beneficial positive-sum endeavour for both cultures. That it would be very complex, that you could never be 100% sure of the outcome is true. But then, it is true of pretty much everything. So? This does not justify a categorical rejection of contact.
I don’t like the Star Trek Prime Directive myself. It lands as little more than a writer’s tool similar to the “Three Laws of Robotics”. Here is a rule for things to follow, then what happens when the rule is broken? Hijinks ensue.
It’s arrogant because it ignores the desires of those other species, and denies them the choice to have contact with others.
Well, if I’m not mistaken, doesn’t the Prime Directive only apply to planets who do NOT have warp drive technology? Kirk and Picard didn’t avoid contact with a species if they were running around in spaceships. Given that, honestly, what choice would anyone have to say “no” to super-advanced warp technology?
The thing that seems missing here is the amount of change that happens to a culture as a result of its technology. You don’t drop in on a pre-industrial age planet, give them fusion power, teleporters, and warp drives, and not have any impact on their culture. That’s the “horseless carriage” idea. When people were first tinkering around with steam engines, most people predicted the world would look exactly the same but with “horseless carriages” instead of horsedrawn carriages. The problem is that introducing massive technological advances created the Industrial Revolution and completely remapped how society operated.
I think human ideas have advanced as technology has made more things possible. One can watch the development of weapons over centuries and see how it changed the way people constituted their governments. When the way to wage war was knights in armor, the only people with power were kings who could afford training knights for years. With the invention of the musket, and eventually the rifle, the training needed to fight dropped to a couple months, and power became democratized. I don’t think it is a coincidence that the ideas of democracy didn’t really take hold until the shot heard round the world.
Denying those stages of development to a stone age planet, thrusting them directly into warp drive technology, would be taking creatures with stone-age level thinking and giving them the capacity to wipe out the planet and more.
I always saw the Prime Directive as acknowledging the massive changes that come with massive new technology, and basically saying giving an iron-age planet warp drive level technology would be like dropping a bomb on their culture, and hey, we don’t want to do that.
Loved Cambias’s short story “Balancing Accounts,” and he gets bonus points for naming the planet from Finnish mythology. I’ll definitely check it out.
Sounds like a great story. Will definitely have to check this out.
Diane Duane played with that concept in one of her YA novels when a man carrying a technical encyclopedia drops it after being bumped a hundred years into the past. So, what happens when one acquires advanced technology without the wisdom to use it or the vision to see the potential unintended consequences?
Enough about the Prime Directive – I’ve got to go buy the book now.
It’s interesting to me that the prime directive debate seems to always be framed in terms of hypothetical encounters with alien species. In fact, here on earth there are some thousands of uncontacted peoples–various tribes living in remote areas that have had little to no contact with the outside world. It’s an ongoing debate as to what the most moral thing to do in these situations is. So perhaps the discussion is not as hypothetical as it’s often presented.
Nice to find someone who can formulate an intelligent plot! I like what Cambias has written.
Please be advised that the arguments below are not intended to be either a personal attack or literary critique. Your idea is fairly original for a work of fiction and therefore interesting. I find your reasoning flawed, but flawed reasoning can still be an ingredient in brilliant storytelling. I didn’t address everything you said, because some of it I agree with.
Most.
You said it yourself. The universe is vast and slow by our standards. We fight over resources on Earth precisely because of relatively easy access. It’s easier to fight over a platinum mine than go wrangle an asteroid. Unless distance is somehow rendered meaningless – an improbable eventuality to say the least – proximity will always take precedence. Competing factions may find this sufficient casus belli.
And incredibly pointless. Any civilization with the means to colonize interstellar space has almost certainly long since mastered bioengineering.
Never say never. Conflict is almost always unnecessary here on Earth, yet it still occurs. War is rarely about absolute necessity; almost always about discord.
Not one war in history has increased material wealth, and most have done the reverse. Resource wars are about control of commons, which is about ideology. All conflict shares a common character, that of domination. The disputes and spoils are merely expressions of that character.
That’s a nice ideal. But reality is messier. Cultures, let alone whole species, are not individuals. Several Trek episodes show the existence of warp-capable civilizations becoming or having already become pawns in local pre-warp politics. Cortés didn’t just show up and single-handedly wipe out the Aztec Empire, nor Pizarro the Incas. The arrival of Europeans destabilized Native American civilizations. The pilgrims were as disruptive in the long run as the Conquistadors.
You realize, of course, that in a universe governed by Special Relativity, any interstellar contact between species is likely to include at least one party that’s been around and doing the civilization thing for millions and perhaps billions of years. We’ve had recorded history for a few thousand, and are only now just beginning to learn how to modify our nature, until now molded only by the gradual hand of natural evolution. Cooler toys wouldn’t even begin to cover the difference. Vernor Vinge uses the metaphor of the flatworm attending the opera. And while I think his idea of a technological singularity is highly unlikely, the same effect could be achieved by several eons of incremental change. The knowledge difference would be literally incomprehensible to the younger civilization. We may have more in common with bacteria.
A generalization so broad as to be scientifically meaningless. Oh I’m sure any competent historian could cherry-pick a basket of evidence to support your hypothesis, or refute it depending on their observer bias, but a culture comprising millions of diverse individuals and thousands of fluid subgroups does not reflect the pop psychology already misattributed to a single human personality. Certainly the individuals that come up with Prime Directive-like ideas have coherent motivations, but there is no reason to assume all their motivations are the same. That you simply assume it must be Western shame says more about you than them. Now if you want to present a specific case of non-interference principles and offer evidence that those individual(s) were motivated by shame, that would be an argument to lend credence to. Otherwise it’s punditry, not sociology.
@Matt
The word alter implies something from which it is altered. The future is indeterminate. There is no outcome until the events of the present occur, and among those events are the decisions of decision-making beings. Deciding that something is the right outcome is precisely the arrogance to which I believe the author is alluding.
Part of the equation, just like the contactees. Who are we to decide what is best for others?
Do you not see the hypocrisy in combining those two sentences? Determining what is right or wrong is exactly what the Prime Directive does. It removes the choice from the potential contactees and places it solely in the hands of the contactors. It quarantines them without their consent or even knowledge.
@Guess
Old Man’s War also posits a universe in which most civilizations do eventually render distance almost meaningless (there is a limit to the instant space jump), while a critical resource, stable habitable ecospheres, remains scarce, and most species are either incapable of reaching, or unwilling to reach, ideological accord, yet are at or near technological parity with each other. In such a highly improbable universe it’s only logical that belligerents would fight over the best real estate.
@Captain Button
I would be wary of saying it’s really unlikely when we haven’t even directly observed any Earth-sized telluric planets in their stars’ habitable zones. I would say it appears conditionally unlikely based on the best current analysis of very limited data.
Cargo cults. One of the few compelling arguments in favor of technological non-interference. And yet, right or wrong, it remains a position of self-superior arrogance, for it assumes that a culture is more valuable than the self-determination of the members who may leave it to assimilate into another.
@Kyle Wilson
Then why not buy the alien widgets instead, learn how they work, then improve on that rather than simply duplicating research? This has happened repeatedly in contact between human civilizations. The Arabs learned Greek math from the Romans, then advanced it further, then traded the knowledge to early Renaissance Europe, which further advanced it. Central Europeans pioneered rocketry and atomic physics, then some taught it to Americans and Western Europeans who advanced it further. There seems to be an underlying assumption in your argument that cultures will be demoralized by not doing all the pioneering themselves. But almost no civilization in recorded history has started from scratch or worked in a hermetically sealed knowledgebase. They’ve seen further by sharing and by standing on the shoulders of giants.
So? I’m working on a dissertation developing nonlinear quantum algorithms. Most of my time is spent learning math and science someone else pioneered based on someone else’s work so on and so forth back to whatever forgotten Sumerian decided to start counting things. The original content of my research will be a small fraction of what I have to learn to develop it. Should I ignore discoveries that weren’t made by my direct ancestors? Should Captain Picard be in charge of the textbook exchange?
@derrelle
Since natural is a useful, albeit highly contrived, distinction between outcomes factoring in the actions of self-aware tool-designers and outcomes where designers are removed from the equation, there is nothing natural about the Prime Directive. It is designed to govern the interaction between designing civilizations. Starfleet is never shown to be forbidden to interfere in ecosystems without “sentient” lifeforms. The idea that something stemming from the choices of human beings is natural is merely a corruption of the word’s use to endorse one set of actions over another. At best it’s naïve. At worst it’s deceit.
You said it yourself. Nature is impartial. That includes ethics. Why should you pursue utilitarian ends? And who are you to decide for others what is valuable in life? Choosing for others an “improved” standard of living for values that you consider superior is every bit as arrogant as choosing to quarantine cultures. The only non-arrogant course of action is informed self-determination, but that’s a lot more work than non-interference or moral imperialism.
@Greg
This presupposes two other conceits common in science fiction: aliens that think like humans and the inability to adapt culture to rapid technological development. Neither is impossible, but neither is a foregone conclusion.
So, in essence, they can’t be trusted to decide for themselves, but we can be trusted to decide for them?
@Jim Caplan
Where exactly does such wisdom come from? What about human history suggests the consistently wise or prudent use of its own technologies?
John: A late hit. Your readers may be interested in checking out the “in-universe” Web site about Ilmatar, the exploration mission, and other background info. It’s at http://www.ilmatarmission.com.
Thanks to everyone for the kind remarks; I hope you all enjoy the book.
@Gulliver
I think you have misunderstood me. My comment regarding “natural” was a criticism of the idea that contact with a less advanced culture would “interfere with their natural development or evolution,” a common argument that was in fact used by an earlier commenter. Your discourse on the uses and meaning of “natural” and the implications for the user of that term is presumptuous at best given the limited information you started with.
Again you misunderstand. You again presume too much from too little information. As could be expected with such a complex subject, a comprehensive recounting of my thoughts on this subject would be many pages in length. If you would like to ask, “Do you mean that you think it is okay for the more advanced culture to decide what would be better for the less advanced culture?” then I would have the opportunity to say, “Why no, no I don’t. Whatever gave you that idea?” That horse you’re on is high. Your assumption of my lack of ethics is not so nice.
Even if we assumed for the sake of argument that it was okay to make contact with a less technologically advanced species, there’s still the question of when is the best time to do it. If we were to do it too early in a civilization’s development there’s a big chance that the natives will just start worshiping us as gods and seeing what we do as magic, because they wouldn’t even have the concepts to imagine in theory what airplanes or germs are.
It seems that you’d want some baseline. In the Star Trek universe, the baseline is that a culture has to invent some form of FTL drive before the Federation can contact them. But the bar doesn’t have to be raised quite so high. For example, we could decide that a civilization should possess some form of scientific culture or form of government or have progressed beyond the hunter-gatherer stage before we come bearing gifts.
Alternatively, we could say that no criteria have to be met before we establish contact. But that’s not realistic. There will always be some kind of criteria in the realpolitik sense. We wouldn’t want alien Nazis waging war on us with tech they either bought off us or reverse-engineered, nor do we want species to become completely dependent on us in a type of intergalactic welfare.
So what’s the baseline or criteria that should be used to decide whether we should reach out?
“Previous powerful civilizations felt they had a right, or even a duty, to conquer others or remake them in their own image. Previous weak civilizations were too busy trying to survive. Only the West after two World Wars worries about its own potential for harm.”
You may be making the West too monolithic, as though the Allies were all of one mind, culture and policy. The USA and the British Empire had different conceptions about what to do after the war, at least as far as FDR and Churchill were concerned (though admittedly by halfway through 1945 both were out of the picture). According to this, “Franklin Roosevelt was committed to dismantling the British Empire” and historian Arthur Schlesinger, Jr. said in a review of As He Saw It by FDR’s son Elliott that the book’s central thesis was “Roosevelt saw Great Britain and its imperial system as a far greater adversary to the United States than Russia.”
I don’t know much about post-war political history of the “special relationship” but I know that one of the themes of my all-time favourite film, A Matter Of Life And Death (1946, aka Stairway To Heaven in the USA), is just this sort of Anglo-American relationship difficulty. In it RAF pilot David Niven falls for Boston servicewoman Kim Hunter (just as he also falls out of his bomber without a parachute and miraculously survives), and has to prove he is worthy of her in his own mind by arguing in a fantasy “afterlife” courtroom sequence that, among other things, Britain is just as democratic and freedom-loving as the US.
(It’s not all that the film is about – mainly, I think it is telling Niven, who is medically fighting for his life, that he should not succumb to what we would call Survivor Guilt. He was willing to lose his life during the war, but he did not die. Now the war is over and he has future new responsibilities in the peacetime; the film is set over the end of the war in Europe, ie early May 1945, and Kim Hunter’s character is called June – ’nuff said.)
But anyway. Sounds an intriguing idea for a book!
@darrelle
And I replied to Matt’s comment as well. You nonetheless said:
I agree with each of those sentences in isolation. But the implication of using the last in a discussion of the Prime Directive is that there is a natural outcome contingent on whether one civilization interferes with another. My sole point is that natural is an inapplicable term in any such scenario, since all actions are taken by people (using your inclusive definition with which I concur).
The fact remains, there is, by definition, no natural outcome to contact between civilizations. I was making no implications for the user of the term. Perhaps your belief that I was derives from my statement about the corruption of the word. If so, then allow me to clarify. I am not saying everyone who misuses it is doing so to endorse one outcome, only that a desire to see certain choices as superior is the main contributor to the word’s corruption in the English language.
This:
The non-arrogant stance would not be to deny or aid, but to inform the contactees and abide by their decision. They may not share your view of what constitutes an improved standard of living, may not even think the same way you do. If your goal is not to decide what is best for them, then they must determine it for themselves, and they may not share your values.
I never said you lacked ethics. Quite the opposite. I said that nature lacks ethics. Maximizing better health, longer life and more leisure time for as many people as possible are utilitarian ends. Ends contactees may not value. If you simply misunderstood me, then I apologize for failing to communicate my meaning. If you’re just trying to pick a fight, look elsewhere.
It just occurred to me we’re essentially having an argument over the modern version of the White Man’s Burden. What obligation do we have to uplift (hat tip to David Brin) other species? Maybe we don’t want the headache of educating and dealing with less advanced species. That’s a valid reason for a Prime Directive that doesn’t come out of arrogance or self-loathing.
You can make the argument like the British Empire did that we have an obligation to interfere but you could just as easily argue that a better use of our resources is to leave them alone until they have something worth contacting them for.
Cambias: It’s arrogant because it ignores the desires of those other species, and denies them the choice to have contact with others.
Greg: Given that, honestly, what choice would anyone have to say “no” to super-advanced warp technology?
Gulliver: So, in essence, they can’t be trusted to decide for themselves, but we can be trusted to decide for them?
Deep breaths, man. If I recall, you lean libertarian, and choice is high on the priority list of libertarian thinking, which perfectly explains this nugget:
The non-arrogant stance would not be to deny or aid, but to inform the contactees and abide by their decision.
You’re asserting it is non-arrogant based on the assumed infallibility YOU place on choice.
If aliens a billion years more advanced than us showed up and said “Hi, wanna learn our ways? your choice.” their very appearance would have a massive impact on the world, before the choice was even presented. So, it isn’t like you can just show up and give a planet the choice of (1) going on a magic carpet ride or (2) returning to the way they were before. By your very presence, you’ve permanently altered that world.
There was a Star Trek episode where Picard was glimpsed by some pre-warp aliens they were observing, and they practically started a religion where Picard was a god and tried to divine what this god wanted them to do, which maybe included killing someone.
On some level, it is impossible to not “deny or aid” when you make contact, because revealing the simple knowledge of your existence could have massive effects, and you have no way of knowing which way it would go. It might cause peace to break out across the planet. It might cause millions to commit suicide as their world instantly disappeared.
What if the warp drive aliens who show up on earth are Tralfamadorians who take you on a journey where they tell you they’ve been to many other worlds and yours is the only one that talks of this “free will” nonsense, and then they demonstrate by means we cannot fathom now that choice is a complete and total illusion, to the point where it is undeniable truth to even the most devout libertarians of planet Earth.
It would be arrogant to declare that this would not have an impact on you beyond the mere information it provides you.
Introducing alien life to an isolated planet would be a singularity-type event. It would be impossible to predict the effect such an event would have. It would be like introducing bronze to the stone age, iron to the bronze age, Steam Power to the iron age, industrialization to the agricultural age, electronics to the industrial age, and so on. The effects of each one of those advances was impossible to predict or understand before hand.
Basically, we’re all sitting here in a pre-steam-age era, predicting that this new fangled steam engine will allow horseless carriages and predicting that everything else will remain unchanged or will change in predictable ways. No one had a clue how vast the effects were going to be.
We have no clue what would happen if humans violated the Prime Directive because we’ve never been through it ourselves. We’ve never made contact with alien life from another planet. We’ve never gotten beyond the horse-drawn-carriage to see what the real effects of the industrial revolution would be.
So, I don’t see the Prime Directive as arrogance, I see it as humility. I see it as an acknowledgement that we don’t actually know what the effects would be on ourselves, and so we don’t want to inflict that potential kind of damage on another planet.
Wow. This is going to be my next book. I will buy it this week. I love the switch with the alien culture mirroring the post-World War 2 western mindset. I am fascinated, intrigued, and hooked. Culture and ideology are complicated, but one element that is difficult in the US is the combination of institutional arrogance (“we have it figured out”) merged with self-flagellation over the actual pride and components that form the bedrock of what success the US has had to date. It hasn’t been a pretty history, but the course of mankind has not generally been a pretty one, and there tends to be good to balance the bad and bad to balance the good.
Too much more to say, but as I started down the tracks I realized that this is neither the time nor the place, and as this is a public forum, everyone has differing opinions, experiences, and beliefs that shape their personal perspectives.
Suffice it to say I am ready to dive into the world presented by this novel and explore the author’s take on the subject.
And perhaps I won’t be malleted for my tangent… ¡Verdad!
So I posted without seeing what comments were already on the thread. West coast…work…all conspire to put me behind the times.
First off, what deep thoughts and interesting conversation has been sparked by this topic.
Second: I am no longer worried about being malleted.
Hope everyone is having a good winter, snow day, etc and staying safe.
65 degrees today in my part of Cali.
@Greg
I’m tranquil as an inland sea, dude. I’m just pointing out that quarantine is assuming the moral authority to decide for the potential contactees.
I lean civil libertarian (a nontrivial distinction). It’s not a matter of choice. Choices will be made no matter what values are in play. The question is who does and who should be making the choices.
Well, no. I’m not even saying it’s the best option. I’m saying anything else is effectively deciding for them, whether uplifting them, quarantining them, or giving them a false choice by giving them access to only some information about their galactic situation.
But as pointed out, the Prime Directive isn’t really a commentary on alien contact, which would in reality almost certainly be unfathomable (as I said earlier), but rather about the relationship between technologically disparate human civilizations.
Didn’t say that.
Certainly. You’re part of the equation, not outside it. Doesn’t mean the only choices are getting assimilated or being put under interstellar lockdown. Diplomacy is complicated. Finding a stable relationship is difficult. Simple solutions like the Prime Directive are appealing because they take all the hard work out of diplomacy.
That would be the opposite of informed, and it was Who Watches The Watchers, one of my personal favs.
Nobody said a little remote recon first would hurt.
The knowledge would have to exist anyway. The Tralfamadorians did not create it, unless they’re some sort of omnipotent masters of reality, in which case game over. Otherwise they merely gave me their textbook, so to speak. Although I do find it amusing that you think I value free will, or even think it exists in the nebulous, ill-defined, wishy-washy metaphysical sense. I suspect this may have to do with you thinking of free will, a silly term if ever there was one, the way most people seem to think of it: a supernatural phrase with no logical, causally-based definition. Suffice it to say, there’s a reason you’ve never heard me use the term. It’s a moronic superstition.
The Prime Directive is a metaphor for intercivilizational contact where at least one is unprepared (uninformed) to understand the other. That’s happened many times on Earth. It isn’t really about aliens. Alien contact would be an Outside Context Problem. There are no answers to questions you lack the knowledge to frame, because you can’t even ask them correctly. Discussing the social implications of contacting real live aliens is about as rigorous as asking what it would be like if God(s) showed up.
Then why draw the line at interstellar contact. I mean it’s pretty obvious that the Prime Directive is being robustly enforced on every warp-capable planet and interest group in the Federation’s area of influence, or it would be about as effective as a non-binding UN resolution to cross your heart and hope to die. The Federation could easily stop contact between pre-warp cultures too, but it arbitrarily defines Us and Them at the warp-drive level. And they have lots of in-universe data on edge cases where the Prime Directive went pear-shaped. Who’s to say they couldn’t make a reasonable prognosis of what contact with civilization X would entail? The Federation has come into contact with numerous species and entities vastly more scientifically and technologically advanced than itself, yet it hasn’t started worshiping them. Why assume pre-warp civilizations would fall to their knees to worship them.
Another problem with thinking about the Prime Directive as being about actual alien contact is assuming aliens would think in such a way as to worship sufficiently advanced technology. Aliens might have no concept of worship.
The problem with making contact with societies with a much lower tech level than yours is that unless you have a way of contacting every single intelligent being on the planet simultaneously, you’re going to be picking winners.
You will pick the structures you communicate with and those you don’t and you stand a more than even chance of entrenching those societal structures (and profoundly disadvantaging other social structures).
I come from New Zealand where colonial issues are perhaps more talked about than in the US and colonialism happened in a different way to the US (or Australia). People (well some people anyway) put genuine good-hearted thought into some parts of colonialism and still managed to screw things up on a monumental scale (I don’t even want to know what the intergalactic equivalent of rabbits would be, but it wouldn’t be pretty).
To be perfectly honest, I don’t believe there’s even a fraction of enough acknowledgement of just how awful parts of Western history are (parts of New Zealand history also get sanitised for the kids), so the idea that the west is “drowning” in self-loathing ranks right up there with the idea that Japan is drowning in self-loathing over their citizens’ actions in WW2.
Aha! I got to this part…
+++it’s that band of scientists who come across a reckless human explorer. He winds up advancing the cause of science in a very unpleasant way+++
…and thought “that sounds a lot like The Ocean of the Blind, that story I read in a magazine a while ago that I thoroughly enjoyed”. A quick Google, and sure enough!
So, yes, this is definitely going on the list. I’m looking forward to getting back under the ice with those characters.
Heh. “What a lot of bubbles!” I read that story on a plane and was trying to stop myself from simultaneously laughing and wincing.
Annamal: “To be perfectly honest, I don’t believe there’s even a fraction of enough acknowledgement of just how awful parts of Western history are (parts of New Zealand history also get sanitised for the kids), so the idea that the west is “drowning” in self-loathing ranks right up there with the idea that Japan is drowning in self-loathing over their citizens’ actions in WW2.”
Have you read Cambias’s book? It’s certainly not some kind of simplistic, anti-Western manifesto, if that’s what you’re thinking. I certainly see how you could get that impression from the remarks you quoted.
Basically, this is a book about how two civilizations can blunder into war and bloodshed despite (mostly) claiming to have good intentions.
The Sholen are a hierarchical, consensus-driven society. Individual Sholen really don’t understand humans at all. Sholen anthropologists do kinda-sorta understand humans, but even they tend to see us as exotic savages—either “noble” savages or dangerous ones, depending on their mood. The Sholen also have the technological upper hand (though not overwhelmingly so), and they’re quite happy to boss humans around, all while denying that they’d ever do any such thing.
The humans are, well, humans. In this case, most of the humans are clever researchers suffering from a bad case of amateur political theories of various sorts. Plus a good dose of indignant self-righteousness.
The Ilmatarans, in turn, have pretty limited technology, but they’re certainly not stupid. They have both writing and scientific societies. Politically, they’re complicated, and they don’t map well to modern US politics—their society is deeply territorial right up to property lines, beyond which it’s both democratic and communitarian, with strong guest-host traditions.
So when these three societies collide, things are going to get messy. The humans find the Ilmatarans fascinating, the Sholen are (both deliberately and accidentally) misreading the humans, and a few of the Ilmatarans are desperately trying to pursue their own dreams of academic glory. And when tensions ratchet up, a Sholen “Guardian” beats a human to death.
Is Cambias trying to make some sort of point about western civilization in his portrayal of the Sholen? I don’t actually know. There’s a lot of stuff going on in this book, and the various aliens have too much agency for me to untangle any single “moral” after a single reading.
Greg: By your very presence, you’ve permanently altered that world.
Gulliver: Certainly.
then they have no choice. You’ve already altered who they are and what they think.
Greg: What if the warp drive aliens who show up on earth are Tralfamadorians
Gulliver: they merely gave me their textbook,
Dude, if that isn’t the peak of arrogance, I don’t know what is. Aliens a billion years more advanced than us give us all their knowledge and you think it will merely be a textbook of information.
Maybe it won’t affect you on an individual level, but there will be massive changes on a cultural and social level. Which, I think, is where the libertarian thinking shines through: there is no such thing as society, there are only individuals. And all that.
The invention of the steam engine may not have had much impact on any single individual, but culturally speaking it created the Industrial Revolution. The introduction of warp drive, teleportation, replicators, and unlimited power might cause you to continue living in pretty much the same way you’re living now, but at a society level, the effects would be unfathomable.
They aren’t merely giving you a textbook of information. It will cause a social revolution whose impact no one can predict.
The industrial revolution caused massive changes to how people worked, huge negative impacts on the environment, monopolies of power, abuses of factory workers. And it took a century or more to iron out the wrinkles: environmental regulations against dumping toxic waste, labor laws to stop child labor and worker abuse and to create safe working conditions, and anti-trust laws to stop the robber barons.
The biggest lie of libertarianism is that we don’t need regulations for any of this, that if we strip away all regulation and merely give every individual choice, then somehow the environment won’t be a dumping ground, child labor won’t be a problem, and monopolies would magically be erased by competition. Because… choice.
No, it doesn’t work that way.
You drop a billion years of advanced technology on earth, it isn’t going to be merely a textbook of information. It’s going to have massive effects on society that no one can predict where it ends up.
The Prime Directive is a metaphor for intercivilizational contact where at least one is unprepared (uninformed) to understand the other. That’s happened many times on Earth. It isn’t really about aliens.
Look, if you want to assert that all things in SF can only be metaphors for human-to-human interactions that have happened many times in the past, then that’s your thing. I think the idea is to look beyond what we know now and consider the future that isn’t known yet.
The first SF novel I read in grade school made me realize how contained my thinking was, and got me to thinking about a much bigger world.
Alien contact would be an Outside Context Problem.
It’s like predicting what will happen after a singularity event. It’s impossible to predict. But we can see what happened to humans after various technological jumps. Some good, some bad. Overall, I’d say we’re better off, but not before going through some really crappy times. The technology jump of nuclear weapons could very well have destroyed the planet. And the idea of the prime directive can be seen as “we don’t know what will happen, but we don’t want to be the cause of something like THAT happening to another planet.”
Why assume pre-warp civilizations would fall to their knees to worship them.
I don’t think you understand the point of what a singularity is. It’s an event beyond which it is impossible to predict what will happen. I am not assuming pre-warp civilizations would worship warp drive aliens. But there’s nothing to say it couldn’t happen either. It’s a singularity. We don’t know what will happen.
It’s not merely a textbook of information. It’s an unknown.
@emk
This for me is key. Thanks to your comment, I’ve decided to put this novel on my list for the next time I’m in the mood to read a first contact story. With the sole exception of Childhood’s End, which managed to pull it off, I loathe stories where contactees are treated as children.
@Greg
The one does not follow from the other.
*sigh* It’s a metaphor. My point being that if it’s unfathomable, then it isn’t proven, since proof requires understanding. If it’s proven, it can be learned, and therefore is something we could have eventually discovered on our own. Now, I happen to think it likely that we would not be able to understand everything, or most, of what actual surviving eons-old aliens understand, any more than I can explain Shakespeare to a flatworm, but the flatworm can’t even be made aware of Shakespeare.
You’re going to have to decide if you want to talk about what actual alien contact might be like, in which case words like arrogance and humility are meaningless, or what this book is about, contact as a metaphor for human cultural clash.
I like you, Greg, but I’m not going to take time to refute every last belief you mistakenly ascribe to me because you think I’m a libertarian.
Simply knowing technologies exist doesn’t equal being able to duplicate them.
And the more advanced civ can give the less advanced civ a choice as to whether to find out.
I honestly have no idea who you’re even arguing with at this point. I get it, every debate is about libertarianism for you. It isn’t for me.
No, not all things in SF, just the plot crux of this Big Idea’s novel.
Good, those are my favorite kinds of SF as well. Glad we can agree on something.
Exactly!
Or it made world war so dangerous that the two largest militaries in history narrowly avoided making WWII look like a skirmish. There’s no way to know with any real certainty how history would have played out without the Bomb. All we know is what happened.
Which is why it’s arrogant. It assumes we know what will happen. Now look, if a human society democratically decides that it doesn’t want the responsibility that goes with being part of giving another civilization a choice, that’s its prerogative and even understandable. But to pretend that we know what’s best for the contactees is plain arrogance.
Didn’t both Kirk and Picard violate the Prime Directive all the time?
@Gulliver
My original comment, . . .
. . . is a fairly straightforward claim that claims of the type “interfering with the natural course of events [i.e. interacting with a less advanced culture] is a bad thing to do” are a fallacy. As in the Naturalistic Fallacy. And as such necessarily includes that . . .
And, to further clarify just in case someone wants to point out that “really everything is natural,” I was addressing the meaning of the term “natural” that is typically used by those using the argument I was addressing.
Regarding my original statement . . .
. . . and your responses to it. As I pointed out previously you assumed quite a bit more than what I actually said, for example that by my use of “help” in that statement I meant deciding for them without their input. I neither said nor implied that, and in fact did not, and did not intend to, address that specific issue in any part of my comment. Therefore when you respond with . . .
. . ., in which every assumption you make about my views is wrong, it seems more like someone who isn’t taking the time to read very carefully and is less interested in mutual conversation than in expounding on their favorite issues regardless of whether they are on point. Hence my suggestion that it would be better to ask some questions to test your assumptions before suggesting that the other person is an unethical, arrogant moral imperialist.
Now, don’t get me wrong. I don’t think that people should be prevented from being rude or insulting. But I do think that when people are, they should not expect others not to respond.
Furthermore, I well understand that I am making assumptions about you here, and that I could be wrong. I don’t know you and I accept that my interpretations of what you have written here could be off to one degree or another.
@ Annamal
I don’t even want to know what the intergalactic equivalent of rabbits would be, but it wouldn’t be pretty
Since we’re talking about Prime Directives, wouldn’t the equivalent be tribbles? (Or flat cats if you prefer your tales be familial or guinea pigs if you prefer first causes…)
@Greg
We have no clue what would happen if humans violated the Prime Directive because we’ve never been through it ourselves.
Actually, we have. Take any isolated group being contacted by technologically advanced explorers and you’ve got the basics of the Prime Directive. Sometimes it works out OK for both parties (e.g., the re-introduction of horses to North America by the Spanish). Sometimes it is disastrous for both parties. Sometimes it works better for the less advanced party than the advanced one (e.g., the development of the Cherokee syllabary). But most of the time it works better for the more advanced group than it does for the less advanced one (e.g., the colonization of Australia).
Today, there are several countries that have a version of the Prime Directive in place; Bolivia and Brazil are the most obvious examples. They recognize the right of these “uncontacted peoples” to be left to live their lives as they choose and provide sanctions against those who violate those wishes (with minor exceptions for anthropologists, who are allowed to visit but can’t leave any technology behind).
Greg: And the idea of the prime directive can be seen as “we don’t know what will happen,
Gulliver: Which is why it’s arrogant. It assumes we know what will happen.
Seriously dude? “We don’t know what will happen” assumes we know what will happen??? You’re killing me here.
I don’t even know how to respond to this because I say one thing and a sentence later, you’re telling me I’m saying the exact opposite thing.
But to pretend that we know what’s best for the contactees is plain arrogance.
The point isn’t that we know what’s best, the point is we DO NOT KNOW how it will turn out. We do know that historically vaguely similar situations in human history have turned out one of a variety of different ways, and those different outcomes vary from “OK” to “not bad” to “My god, what have I done”. And we might extrapolate that given that sort of history, that it is indeed impossible to predict how this particular situation would turn out, and given that it MIGHT turn out horribly badly, that we don’t want to be the cause that instigates that horrible outcome, and therefore, prime directive.
It seriously boggles my mind how many times I’ve said WE DO NOT KNOW only to have you turn it around and tell me I’m asserting that we KNOW what will happen and based on that KNOWLEDGE, we KNOW WHAT’S BEST for them.
Jeebus.
Greg: We have no clue what would happen if humans violated the Prime Directive
JohnD: Actually, we have.
There is a semantic difference between “what would happen” and “what could happen”. We have some idea what COULD happen. There are quite a few possibilities that we can extrapolate from humans contacting previously isolated human cultures. Some get wiped out. Some catch up and become equal partners. And various outcomes in between.
We don’t know what WOULD happen, which means we don’t know what WOULD happen in some particular case. If we KNEW what WOULD happen, i.e. we KNOW it WOULD be a good outcome, then there is no reason for the Prime Directive in that situation. The reason for the Prime Directive is we don’t know what WOULD or WILL happen, we only know a range of possible outcomes that COULD happen, and quite a few of them are horrible outcomes that we don’t want to instigate, therefore Prime Directive.
At the core of the Prime Directive is the acknowledgment that we do NOT KNOW what WOULD/WILL happen in a particular case but we have a sense that some possible outcomes COULD be horrible.
tam: Didn’t both Kirk and Picard violate the Prime Directive all the time?
The prime directive is sort of like the three laws of robotics. Here is a rule; now what happens when we break the rule? A robot story about the three laws that doesn’t involve a robot violating the three laws would be a boring story. A story about the prime directive that doesn’t involve someone violating the prime directive would likewise be boring. Establishing the rule establishes what is “normal” in that world. Then breaking the rule can become the “out of whack” event for the story.
There is a semantic difference between “what would happen” and “what could happen”.
And there is yet another between those two and “what has happened”. Given that my post discussed what HAS happened, you appear to have missed the point.
At the core of the Prime Directive is the acknowledgment that we do NOT KNOW what WOULD/WILL happen in a particular case but we have a sense that some possible outcomes COULD be horrible.
You should check out what Teller predicted atomic weapons might do; “horrible” is an understatement. But they ran the Trinity test anyways. Just because some outcomes COULD be horrible, that doesn’t mean that you shouldn’t proceed; you just proceed with as much caution as you can.
@EMK “Have you read Cambias’s book? It’s certainly not some kind of simplistic, anti-Western manifesto, if that’s what you’re thinking. I certainly see how you could get that impression from the remarks you quoted.
Basically, this is a book about how two civilizations can blunder into war and bloodshed despite (mostly) claiming to have good intentions.”
Actually based on the description supplied by the author, I was expecting a simplistic pro-western manifesto about “toxic” western guilt.
Since the entire internet is awash with precisely that kind of sentiment, I don’t feel like this would supply anything new or interesting (especially considering that C. J. Cherryh’s Foreigner series sounds very similar).
Greg: We have no clue what would happen if humans violated the Prime Directive
JohnD: Actually, we have.
Greg: There is a semantic difference between “what would happen” and “what could happen”.
JohnD: Given that my post discussed what HAS happened, you appear to have missed the point.
Given my post talked about what WOULD happen, you appear to have changed the subject and then refused to acknowledge you changed the subject when I pointed it out to you.
JohnD: You should check out what Teller predicted atomic weapons might do;
Yes, yes, they were taking “gentleman’s bets” as to whether it would set the atmosphere on fire. It was a gentleman’s bet because no one would be alive to collect.
Just because some outcomes COULD be horrible, that doesn’t mean that you shouldn’t proceed;
Yes, yes, because there is and was universal agreement that not only should the Trinity test be conducted (July 16) even though the war in Europe had come to an end (May 8), but also that those two atomic bombs should be dropped on Japan.
I think you might want to reconsider your example. It kind of actually disproves your entire point.
Also, there is a semantic difference between COULD and SHOULD that you’ve quite glossed over.
you just proceed with as much caution as you can
What? No. Absolutely not. Sometimes you don’t proceed because the risks are too great. The US had stockpiles of biological weapons until 1973, and chemical weapon stockpiles (originally tens of thousands of tonnes) are still in the process of being destroyed even now. The reason we’ve signed treaties saying we won’t use these weapons in war and won’t stockpile these weapons in peace is because the risk is too great. So, no, sometimes you don’t proceed with caution. Sometimes you stop entirely.
You can argue that chemical or biological weapons can be used safely, but pretty much the entire developed planet has decided that no, it’s too dangerous, it’s got to go. Do not proceed no matter what.
The question of the prime directive can be viewed in a similar vein: the damage that contact with a pre-warp planet could cause might be too great, and therefore contact will be prohibited via the Prime Directive or some other kind of treaty.
Given my post talked about what WOULD happen, you appear to have changed the subject and then refused to acknowledge you changed the subject when I pointed it out to you.
Greg, here’s the bit I took exception to: “We have no clue what would happen if humans violated the Prime Directive“. I quoted that part and pointed out that we do, in fact, have a clue. We have several, based on past incidents. I’m not changing the subject; I’m merely making the assumption that what you write is what you mean.
I think you might want to reconsider your example. It kind of actually disproves your entire point.
How? Because, in spite of the variety of views, they proceeded cautiously? That was my entire point – that despite knowing that horrible things could happen, the folks at the Manhattan Project proceeded with as much caution as they could. Bethe checked to see if the atmosphere would ignite, and the scientists agreed that it would not, so they proceeded with the Trinity test.
The generals argued over the best way to use the weapons (Patton wanted six of them to clear the beaches on Honshu for the invasion) and decided to drop two bombs on Japanese military targets; it turned out that the Japanese General Council thought that America only had one until the second bomb was dropped.
In both cases, the people with the most background in the field made the best and most cautious decisions that they could given the information that they had. And given the alternatives (an estimated million Japanese civilian casualties, several tens of thousands of Allied casualties), they seem to have made the right choice.
Also, there is a semantic difference between COULD and SHOULD that you’ve quite glossed over.
Yes there is. It is a shame that you keep confusing them.
What? No. Absolutely not. Sometimes you don’t proceed because the risks are too great.
And you don’t realize that stopping where you are is part of “as much caution as you can”? If the risks outweigh the benefits, then you don’t proceed. But if the benefits outweigh the risks (or the risks of the alternatives are worse than the risks of the selected action), then you do – cautiously.
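If a toy example helps, this is roughly the kind of weighing I have in mind. The probabilities and payoffs below are invented placeholders, not estimates of anything real, and “stop entirely” is one of the options being weighed:

```python
# Toy cost/benefit comparison under uncertainty. Every number here is a
# hypothetical placeholder standing in for whatever estimate you can defend.
options = {
    "proceed cautiously": [(0.7, 10.0), (0.3, -8.0)],  # (probability, payoff)
    "stop entirely":      [(1.0, 0.0)],                # not proceeding is also an option
}

for name, outcomes in options.items():
    expected_value = sum(p * v for p, v in outcomes)
    print(f"{name}: expected value = {expected_value:+.1f}")
```

If stopping comes out ahead, you stop; that is still “as much caution as you can”.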
The US had stockpiles of biological weapons until 1973
We still have stockpiles of biological weapons. Not as many as we used to have, but we still have them “for research purposes”.
You can argue that chemical or biological weapons can be used safely
They can be and have been; witness the eradication of smallpox as one example of the safe use of a biological weapon. The problem is that it is much, much easier to use them unsafely, which is why they are generally banned for use in warfare but allowed for other uses.
Folks, let’s make sure we’re being civil to each other on this highly speculative subject, please.
I’m excusing myself from this thread because I think Greg and I are talking past each other at this point. Thanks to the original author for a stimulating Big Idea.
JohnD: I quoted that part and pointed out that we do, in fact, have a clue.
A clue as to what *could* happen. Not what *would* happen. If sentient beings were billiard balls whose current position and velocity could be determined, then we would indeed know what WOULD, or what WILL, happen with them in the future. But they’re not billiard balls, so their behaviour is not exactly knowable.
At best we have an idea of what *could* happen. A list of possible outcomes based on what we’ve seen humans do to other humans in the past. That list is not exhaustive and some alien race could react in a way completely different than anything on the list of past human behaviour.
When you roll a die, it COULD land on any number from 1 to 6. But before you throw it, you have no idea what number it WOULD land on. Only after it has stopped bouncing do we finally know, because only at that point has it revealed the path it took and its final number.
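To put that in concrete terms, here’s a rough sketch (plain Python, purely illustrative): the set of faces is the COULD, the single roll you actually get is the WOULD, and you only learn it after the fact.

```python
import random

# The COULD: the full set of possible outcomes, known before the throw.
possible_outcomes = [1, 2, 3, 4, 5, 6]

# The WOULD: one realized outcome, unknowable until the die stops bouncing.
actual_outcome = random.choice(possible_outcomes)

print("Could land on:", possible_outcomes)
print("Did land on:  ", actual_outcome)  # only known after the fact
```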
We have no clue what WOULD happen if humans violated the Prime Directive.
We have a list of possible outcomes based on human interactions in the past that we can extrapolate. But that list is not a complete list. Nor does it inform us which way the outcome WOULD actually go.
That’s the difference between would and could. That’s the difference between what WILL happen and what CAN happen.
If the risks outweigh the benefits, then you don’t proceed. But if the benefits outweigh the risks … then you do – cautiously.
Here’s the problem with that attitude applied to what is alien: there is no numeric value you can put on cost/benefit analysis for “do we violate the prime directive or not?” What is the cost? you don’t know. What is the benefit? we don’t know. That’s the whole friggen point of alien. It is alien and therefore unknown.
There was an old SF story about this, about a human on an alien world; I can’t remember the name of it. The alien children were short and stocky and the adults were extremely tall and maybe even frail. At some point, when a child became old enough, the aliens would take the child, put it on a kind of rack, and stretch out its body until it became the long, skinny, fragile form that adults had. One of the humans thinks this is a form of torture and decides to save one of the children just before it is supposed to be put on the rack. As they’re fleeing the adults, they run through some water and the alien child stops, his feet grow into roots, and the rest of his body transforms into an underwater plant. Permanently. Turns out the “rack” was the aliens’ way of transforming children into their adult phase while keeping them from turning into unconscious seaweed.
We have no idea what WOULD happen if we intervened. We can compile a list of what we think COULD happen based on human interactions with other humans, but that list is far, far from a complete list of what COULD happen.
A clue as to what *could* happen. Not what *would* happen.
Ah, I see – you seek perfect knowledge and in the absence of that believe that it is better not to act.
That list is not exhaustive and some alien race could react in a way completely different than anything on the list of past human behaviour.
I never claimed that it was exhaustive. I merely pointed out that we do, indeed, have a clue.
We have no clue what WOULD happen if humans violated the Prime Directive.
And you are still wrong on this. Because we have observed what has happened in other, similar instances, we do indeed have a clue. And if we pay any attention to how the aliens behave, we will have yet more clues.
As an analogy, you are claiming that because two chemicals have never been mixed together before, we have no idea what the reaction WOULD be and I am claiming that we’ve got this nice table of the elements over here and it gives us a pretty big hint as to what the reaction might be and the range of possibilities.
Here’s the problem with that attitude applied to what is alien: there is no numeric value you can put on cost/benefit analysis for “do we violate the prime directive or not?” What is the cost? you don’t know. What is the benefit? we don’t know. That’s the whole friggen point of alien. It is alien and therefore unknown.
Heck, we face that problem all the time with human questions. Do you know what we do? We make our best estimate of what might happen and then work from there.
Here is the basic difference in our viewpoints: You are substituting “alien” for “unknowable”. I am substituting it for “unknown”. In my viewpoint, we can progress. In yours, we are stuck right where we are.
JohnD: I am claiming that we’ve got this nice table of the elements over here and it gives us a pretty big hint as to what the reaction might be and the range of possibilities.
Ah, well, great minds on earth have been struggling with the technological singularity problem for quite some time now, trying to predict what will happen when machines achieve a certain level of intelligence. Given your unlimited powers of divination, we’d all love to hear what is REALLY going to happen. Perhaps you could write up what the outcome will be, with proof enough to convince the experts, and get yourself an easy PhD and probably millions of dollars.
Here is the basic difference in our viewpoints
The difference is you don’t think intractable problems exist. I do. You don’t think there is such a thing as unintended consequences. I do. You don’t think there is any genie that we let out of its bottle which cannot be tamed or shackled to our control. I am not so confident in mankind’s abilities that I think our potential futures do not contain some dead ends which we cannot get out of.
You think you can cut your way out of any Gordian Knot you encounter. I think our knowledge is much more limited than that.
Given your unlimited powers of divination, we’d all love to hear what is REALLY going to happen. Perhaps you could write up what the outcome will be, with proof enough to convince the experts, and get yourself an easy PhD and probably millions of dollars.
That’s an amusing strawman that you’re attacking, Greg. It is a shame that you have chosen not to discuss the actual point: that we do, in fact, have a clue as to what happens when a technologically advanced civilization meets one that isn’t as advanced.
The difference is you don’t think intractable problems exist. I do.
No, the difference is that I do not believe that alien encounters must necessarily be intractable problems but you do, on the basis of no evidence whatsoever. As for unintended consequences, that’s something that I have worked with for many years. What you fail to realize is that every decision we make has unintended consequences, from what to have for breakfast to working on warp technology – and that those consequences can be good as well as bad.
So should we sit in caves and quiver at the thought of the horrors that fire might unleash? Or do we proceed with our eyes wide open and looking for any unintended consequences so that we can decide if we want to mitigate them or embrace them?
JohnD: That’s an amusing strawman that you’re attacking
So should we sit in caves and quiver at the thought of the horrors that fire might unleash
I must say, you accusing me of a strawman in the same post as you fabricated that little nugget gave me quite the chuckle.
It is a shame that you have chosen not to discuss the actual point
Dude, the point is you’re talking about what CAN happen. I was talking about what WILL happen. And somehow you got onto a side track about what SHOULD happen. Those words all mean different things. The point is you’re not seeing that we’re talking about different things.
I read the beginning. Seems like Scalzi.
Greg:
“I was talking about what WILL happen.”
Greg, as you know the future, will you give me the names of the next ten Super Bowl winners? I have bets to place.
Aside from this: Everyone, lower the heat a couple of notches, please. This is the second time I’ve had to suggest this.
Scalzi: Greg, as you know the future, will you give me the names of the next ten Super Bowl winners? I have bets to place.
Uhm, I think you have it backwards. I said that we have no clue what will happen if we were in a position to break the prime directive.
JohnD disagreed.
I’m saying that we can’t predict the next ten Super Bowl winners and we can’t predict a prime directive scenario.
I must say, you accusing me of a strawman in the same post as you fabricated that little nugget gave me quite the chuckle.
Since when is talking about fire in a discussion of unintended consequences a strawman? Using fire has plenty of unintended consequences, ranging from tasty meat to burning down your neighbor’s house.
However, you were claiming that I said something that I didn’t and then attacked that position, which is the very definition of a strawman argument. What I said is that we’ve got some ideas (aka “clues”), based on past experience. You twisted that into “unlimited powers of divination”. Do you see the difference between the two situations?
Dude, the point is you’re talking about what CAN happen. I was talking about what WILL happen.
Which requires perfect knowledge.
And somehow you got onto a side track about what SHOULD happen
Actually, I’m discussing what COULD happen. You are once again attacking a strawman.
And what started us on this merry-go-round was your statement that “We have no clue what would happen if humans violated the Prime Directive.” I pointed out that there have been historical situations in which an advanced technology group met a lower technology group and those provide clues. Our fundamental problem is that you are (incorrectly) insisting that the verb “would” means that there can be no uncertainty as to the outcome and I’m pointing out that the noun “clue” means that there can be.
Greg: I was talking about what WILL happen.
JohnD: Which requires perfect knowledge.
Ugh. OK, man. How about this. When I said that we have no clue what will happen if humans violated the Prime Directive, I’m actually agreeing with you that we do not have “perfect knowledge”. We have some ideas of what could or might happen, but that’s not what will happen, because knowing what will happen requires perfect knowledge. So, hey, looky there, we agree on something.
you are (incorrectly) insisting that the verb “would” means that there can be no uncertainty
Depending on context, that’s exactly what it means. When talking about what WILL happen in the next ten Super Bowls, there is only one outcome, and because we don’t have “perfect knowledge”, as you word it, we won’t know who WILL win those games until after they’re over.
The only difference between what WILL happen in the next Super Bowls and what WOULD happen in a Prime Directive scenario is that the Prime Directive scenario is hypothetical. But in either one there is only one outcome, and we don’t know what it WILL be until it’s done.
In my first response to you, I tried to explain that we were using different semantics for the terms: “There is a semantic difference between ‘what would happen’ and ‘what could happen.’” Maybe you misunderstood.
Greg: you got onto a side track about what SHOULD happen
JohnD: You are once again attacking a strawman.
You said in one post: “Just because some outcomes COULD be horrible, that doesn’t mean that you shouldn’t proceed;” and in another post you said: “should we sit in caves and quiver at the thought of the horrors that fire might unleash”
I said I could see the prime directive in a non-arrogant way, namely acknowledging our limited knowledge, or as you might put it, our lack of “perfect knowledge”, and deciding, because of that limited knowledge, that the prime directive is a way to keep unintended consequences from wiping out some alien race by accident. I said I could view it as a form of humility, not arrogance. That isn’t the same as saying we SHOULD have the prime directive. I wouldn’t even begin to say “should” around the prime directive, because our knowledge of anything in that realm is completely absent. We know nothing of alien. How can we assert what we SHOULD do when we know nothing? You invoked “should” a couple of times. I never used the word “should” (well, except in direct reply to you using “should”, mainly to say I wasn’t using “should”).
When I said that we have no clue what will happen if humans violated the Prime Directive, I’m actually agreeing with you that we do not have “perfect knowledge”.
Except that having a clue doesn’t require perfect knowledge; it just requires some knowledge. Having a clue means that we can narrow down the possibilities in some fashion, perhaps based on experience in similar situations. And, given that we have had experience in similar situations, we do have a clue.
We have some ideas of what could or might happen, but that’s not what will happen,
But it is a clue about what will happen.
you are (incorrectly) insisting that the verb “would” means that there can be no uncertainty
depending on context, that’s exactly what it means.
And in the context of “no clue about what will happen”, it doesn’t mean that. Instead, it means “used to express probability and often equivalent to the simple verb“. So “have no clue about what will happen” parses out to “have no knowledge about the possible range of outcomes”. And, as noted previously, we do have knowledge about the possible range of outcomes.
When talking about what WILL happen in the next ten Super Bowls, there is only one outcome
But when talking about encounters between different civilizations, there are ranges of outcomes which is why we frequently say “It may be that the lower tech group will be crushed by the new technology, or perhaps they will find a way to adapt it to their culture”.
You said in one post: “Just because some outcomes COULD be horrible, that doesn’t mean that you shouldn’t proceed;”
And you decided that “should” meant “damn the torpedoes, full speed ahead” instead of “you just proceed with as much caution as you can” (the modifying phrase that I included). That’s the strawman – your insistence that I am demanding that we trade willy-nilly with all and sundry as opposed to my statement that we should base our decisions on what is already known and work out plans to deal with the inevitable unforeseen consequences.
and in another post you said: “should we sit in caves and quiver at the thought of the horrors that fire might unleash”
You left out the other part: “Or do we proceed with our eyes wide open and looking for any unintended consequences so that we can decide if we want to mitigate them or embrace them?”
Notice how it changes the meaning of should from imperative to rhetorical?
I said I could see the prime directive in a non-arrogant way, namely acknowledging our limited knowledge, or as you might put it, our lack of “perfect knowledge”, and deciding, because of that limited knowledge, that the prime directive is a way to keep unintended consequences from wiping out some alien race by accident.
Which is why I brought up the real-life examples of a Prime Directive.
How can we assert what we SHOULD do when we know nothing?
Except that we don’t “know nothing”. We know quite a bit.
John D, Greg:
You two need to wrap it up. It’s clear at this point this is not being resolved and we’re deep into “arguing to argue” territory. Take it into e-mail, please, if you want to continue further.
Yes, sir. Please accept my apologies for wandering off the topic.
I’m breaking my own stated intention to excuse myself from this thread. I did not see darrelle’s last reply to me and there are a few things I want to clear up. I acknowledge that this makes me a hypocrite, and if you, John, wish to Mallet me for that, I will understand. Either way, this will be my final comment in this thread.
@darrelle
I was agreeing with your original comment about the natural course of things, not arguing. Either I worded it poorly or you choose to assume every portion of my initial reply was a rebuttal, or both. However, I take responsibility for my own failures to communicate clearly.
I took your list of what knowledge and technology improves as a statement about the value thereof. From that it sounded like you were advocating for the promulgation of the same, not the contactee’s input. In your subsequent reply you said I was making assumptions about your intentions, but you declined to clarify which of my assumptions were incorrect. Nonetheless, I did indeed misunderstand your intention on that specific point and I appreciate your clarification.
I did no such thing. I responded to your hypothetical arguments. I can’t help it if you take an abstract ethical debate personally.
I neither intended my tone to be rude nor did I do anything to intentionally insult you. On the contrary, I was enthusiastic about replying to your initial comment because you made at least two points that had me nodding my head (one of which I actively expounded upon) and you struck me as an incisive debate partner. I realize that my aggressive debate style and my frequent tendency to not realize when an abstract comment might be taken personally may come off as an attack on someone’s character. All I can say is that was not my intent and I’m sorry if I was insensitive.
I think we got off on the wrong foot. I wish you well and take my leave.
This sounds great. I really like the Prime Directive comparison. I love Star Trek. It was absolutely my gateway drug to science fiction but I always found the Prime Directive bothersome. Also, I love undersea adventure stories. Definitely looking forward to reading this.
Whelp, I think “Close Encounters of the Third Kind” grossly misrepresented communication as much, much easier than it really is.
@Gulliver
I agree. It appears I misread you and I apologize for that. Thank you for taking the time to respond and explain as you did.
There is some humor in this, for me at least. I agree pretty closely with just about everything you wrote on this Prime Directive issue.
Be well.