Edward M. Lerner e-interviewed by Hilary Williamson (November, 2009)
Edward M. Lerner has degrees in physics and computer science, as well as an MBA. Lerner worked in high tech for thirty years (including seven years as a NASA contractor), as everything from engineer to senior vice president. He is now writing SF full time.
His short fiction has appeared in Analog, Artemis, Asimov's, Darker Matter, and Jim Baen's Universe magazines, on Amazon Shorts, and in several anthologies. He has co-written three Ringworld prequels (Fleet of Worlds, Juggler of Worlds, and Destroyer of Worlds) with Larry Niven. Writing solo, Lerner is also the author of the cyber-fiction collection Creative Destruction and four SF novels: Probe, Moonstruck, Fools' Experiments and Small Miracles.
Lerner calls his latest book, Small Miracles, 'a novel of medical nanotechnology'. In it, after Garner Nanotechnology's experimental nanotech-enhanced protective jumpsuit keeps lowly sales-support engineer Brent Cleary alive through a pipeline explosion, he changes in disturbing ways.
Q: Your last two SF novels could be viewed as 'near-future near-apocalyptic thrillers' for the way they show rather reckless science taking us to the edge of the abyss. Is this a possibility that concerns you or simply a good fictional device?
A: "Near-future, near-apocalyptic thrillers." I like that. Yes, that description certainly fits my most recent two novels.
But rather than "reckless science," I would say "human science." As in, to err is human. The books' perceived dangers to society (rogue artificial intelligence in Fools' Experiments and errant nanotech in Small Miracles) came despite fairly rigorous precautions. Merely not foolproof precautions. And because (as in the real world) nonscientists were willing to take chances with matters they did not fully understand.
That's not to say (if you'll allow me a double negative) that science and technology can't be used irresponsibly. Of course they can. But as a fictional device, the Mad (or Reckless) Scientist is something I try to avoid. It's too cliché. And I've never met one.
Q: Why do you think SF writers typically imagine AIs (like 2001's HAL and yours in Fools' Experiments) that are adversarial to humanity? Must we be in conflict?
A: There's a storytelling reason for the pattern you cite: fiction thrives on conflict. When a story's main futuristic touch is AI, I think it's natural that the relationship be adversarial.
There's plenty of SF in which the AIs are friendly, but then, for lack of drama and conflict, the AI isn't front-and-center in the plot. Two classic examples: Mike (the lunar revolutionary and generally unsuspected AI) in Heinlein's The Moon Is a Harsh Mistress and Sigfrid von Shrink (the AI psychologist in Pohl's Gateway).
In my own InterstellarNet stories, AIs are generally humanity's allies. I could do that because the SFnal focus is on other tech and the extra-solar aliens.
Q: In both your recent novels, artificial intelligences emerge from a critical mass of processing power and very fast, stimulus-driven evolution. Is anyone in the field trying to create AI in this fashion, or are we still a long way away from being able to achieve that critical mass?
A: How soon? That's the big question, of course. If all it takes to achieve AI is computing power (if intelligence is purely an emergent property of computing complexity), we must be close. The Internet embodies a lot of interconnected computing power. If stimulus-driven evolution plays a part (as I suspect it must), then how/when intelligence emerges from our technology is far less predictable.
But (avoiding spoilers), I'll add that the emergent intelligences in Small Miracles are not exactly AIs.
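(Editorial aside: the "stimulus-driven evolution" Lerner mentions can be sketched, very loosely, as a toy genetic algorithm in which candidate "genomes" are scored against an external stimulus and the best performers reproduce with mutation. Everything below, from the bit-string genomes to the parameters, is an invented illustration, not anything from the novels.)

```python
import random

# Toy sketch of stimulus-driven evolution: bit-string "genomes" are scored
# against a fixed stimulus; the fittest half survive and reproduce with
# random mutation until one genome matches the stimulus exactly.
STIMULUS = [1, 0, 1, 1, 0, 0, 1, 0]           # an arbitrary target response
POP_SIZE, GENERATIONS, MUTATION_RATE = 50, 200, 0.05

def fitness(genome):
    # Reward genomes whose response matches the stimulus, bit by bit.
    return sum(g == s for g, s in zip(genome, STIMULUS))

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in STIMULUS] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(STIMULUS):
        print(f"perfect match at generation {gen}")
        break
    survivors = population[: POP_SIZE // 2]    # selection: keep the fittest half
    population = survivors + [
        mutate(random.choice(survivors)) for _ in range(POP_SIZE - len(survivors))
    ]
```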
Q: Another commonality is that, in both novels, you give readers (at least part of the time) the point of view of the emergent awareness. Why did you choose to do that?
A: The short answer: it was fun. The slightly longer answer: for balance. Going inside the figurative head of the other forced me to consider its perceptions, needs, and motivations.
And the final answer: onetime techie that I am (I spent thirty years at such places as Bell Labs, Honeywell, and Northrop Grumman), I wanted to think about how AI and nanotech might actually develop.
Q: Greed fuels both your plotlines - the government's for advanced weapons in Fools' Experiments, and both government and corporate greed in Small Miracles. Do you consider societal controls on technological development inadequate?
A: Yes, but the right answer is less than obvious. Democracies don't seem to elect the technologically savvy, and hence the technology policies and controls we get are often (says the technologist) ill-considered. Scarier still is the potential for technology misuse in non-democracies, wherein there is little public participation to guide decisions on if/how/when to introduce prospectively revolutionary technologies.
Still, we humans have survived the adoption of many revolutionary technologies whose implications we surely underestimated at the time. Like: fire, agriculture, language, the printing press, industrialization, and computerization. So chalk up my recent technothrillers to cautionary tales, not predictions of doom.
Q: I am personally fascinated by the medical possibilities of nanotechnology but, after years of experience in software development, would be very nervous about hosting a bug-ridden nanobot inside me. Don't you think a myriad of smaller problems induced by software error is more likely than an emergent AI (though I certainly enjoyed reading about the latter)?
A: From my own years in software development (and more years as a software consumer) I share your concerns. "Computer science" is, of course, misnamed: nothing about computers or software qualifies as science. Nor even as mature engineering: very little software is provably correct. And even if a piece of complex software were provably correct, the proof would relate only to adherence to a functional specification written by fallible humans. The actual problem is often something broader and different than the formal problem statement.
Human biology involves roughly 100,000 distinct proteins, most of whose functions biologists do not yet understand. Introducing a new molecular-level machine (a term that describes both proteins and nanobots) into the human body is something to be approached with a lot of humility.
But the full risk is nanotech, not merely nanobots, and it encompasses uncertainties that go beyond software. Material properties change as particle sizes and surface areas are downsized to the nanoscale. We have much to learn about the toxicological risks of nano-anything among (or within) living cells.
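(Editorial aside: Lerner's point about functional specifications can be made concrete with a tiny, hypothetical example. The function below is trivially "correct" against its written spec, yet it still solves the wrong problem if what the user really needed was the reading with the largest magnitude. The scenario and names are invented purely for illustration.)

```python
# A hypothetical illustration: code that perfectly meets its written spec
# can still fail the real-world need, because the spec itself was written
# by fallible humans.

def largest_reading(readings):
    """Spec: return the largest value in `readings`."""
    return max(readings)

sensor_data = [3.2, -9.7, 1.1]
print(largest_reading(sensor_data))   # 3.2  -> exactly what the spec asked for
print(max(sensor_data, key=abs))      # -9.7 -> what was actually wanted
```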
Q: You end your novels with the classic SF dark twist - on your site you mention early reading like 'Heinlein juveniles, and Golden Age SF anthologies'; did those influence you much?
A: Very much. That SF exposure is a big part of why I went into technology. (Credit Sputnik and the early space race for the rest.) And despite the cautionary element to some of my writing, I inherited a generally optimistic view of the future from my youthful reading.
Cautiously optimistic, I'll add. No one in a Heinlein novel is ever handed a happy ending; they have to work for it, and avoid various perils, and knowledge always helps. A world with AI and nanotech could be wonderful, given hard work and forethought on everyone's part.
Q: You have co-written Ringworld prequels (Fleet of Worlds, Juggler of Worlds, and Destroyer of Worlds) with Larry Niven, one of the giants in the field - how did that come about and how does it feel?
A: I'll take the easy part first. How does it feel to work with such a well-respected figure of the field? Great, as you might expect. Very satisfying.
The collaboration came out of a panel that Larry and I shared at Worldcon in 2004. "My Favorite Planet" is a perennial con topic: what real or fictional world would the panelists like to visit? I picked the Fleet of Worlds: five worlds that are home to a trillion very alien aliens, flying free of any star to escape a galactic calamity. The Fleet is a setting briefly glimpsed in Larry's incredibly popular Ringworld, and, in my mind anyway, that glimpse raised more questions than it answered.
Larry's comment from the panel was, "I don't have a suitable plot idea." So I emailed him soon after the con to say, "Well, I do."
Q: How do the two of you work together as a writing team? Is it a long distance cooperation?
A: Very long distance, because we live on opposite coasts. Mostly we swap notes and text drafts by email, resorting to the phone when an email discussion fails to converge.
Q: I enjoy SF as well as fantasy. According to Orson Scott Card the latter is experiencing a 'golden age'. With notable exceptions (such as Nick Sagan and Tobias S. Buckell) I have the impression that SF has stalled lately. What do you think?
A: 'Stalled' seems harsh (I'm visualizing a plane with a sputtering engine, about to crash), but perhaps SF has plateaued. Happily, that doesn't preclude the genre finding a new upslope.
SF introduced the public to many keen things: space travel, atomic energy, artificial intelligence, nanotech, time travel, extraterrestrial life, mega-artifacts. If SF has plateaued, it's for lack of new concepts offering a comparable sense of wonder.
Perhaps the biggest recent (relatively speaking) Big Concept in SF is the Singularity: what happens to the world and to humanity if/when one of the potentially self-replicating technologies (AI, genetic engineering, nanotech) takes off and changes civilization or the nature of humanity itself so dramatically that we can't imagine it. It's hard to write about things we presume we can't imagine!
Much of my recent writing has dealt with, well, call it Singularity Light. AI and nanotech, as we've discussed. (And a gengineering book? Maybe someday.) I envision big changes coming to everything we know, yes, but humans remaining human and the world(s) remaining comprehensible.
Q: Can you tell us anything about your current writing projects? Are there any sequels or more prequels in the works?
A: Look for two new books in the next year or so. Betrayer of Worlds is another collaboration with Larry Niven. It's also in our Ringworld prequel series, but with a big difference. The hero of the Ringworld books is, of course, Louis Wu. But except for glimpses in a couple of Larry's short pieces, readers have yet to see Louis Wu before his two-hundredth birthday. Betrayer of Worlds reveals Louis's back story, and a heck of an adventure it is.
And book number two? As it happens, I have my own continuing story series / future history. Some of my most popular magazine fiction deals with an alternate / future history that split from our familiar timeline in 2002. The trigger: a radio signal from extra-solar aliens. In episodes spanning more than a century, the InterstellarNet stories chronicled humanity's discovery of its interstellar neighbors, the formation and evolution of a radio-based interstellar trading community, and the long-distance but still deadly serious jockeying for advantage among species.
I get lots of email asking me to continue the saga and where to find earlier episodes. I've had no good answer. Till now. I've greatly expanded and integrated five InterstellarNet stories that first appeared in Analog, Artemis, and Jim Baen's Universe.