NewCo Shift Forum
Shall we trust our future to an AI? Rob Reid argues that perhaps the best answer is “Ideally, no.”
Rob Reid’s career has spanned founding successful Internet companies (he created Real’s Rhapsody service), a stint in venture investing, and a well-received non-fiction book (Architects of the Web profiled the first wave of internet entrepreneurs). But it was in fiction that Reid found his groove. Reid’s novels are rife with arch and hilarious observations about the state of the tech industry, but they are also painstakingly researched and carefully constructed. His first work of fiction, Year Zero, lampooned the music industry (with a heavy dose of alien-driven satire), but his second, set to debut later this year, is far more ambitious. Titled After On, the story turns on the emergence of machine-native superintelligence (laced with a heavy dose of biotech), in the form of a pervasive social network called Phluttr. I won’t spoil it for you — but at the NewCo Shift Forum earlier this year, Reid outlined some of the deep thinking that went into his latest creation. Reid delivered his thoughts as an Ignite talk — the five-minute format created by Brady Forrest, who introduced Reid from the stage. Below is the video and a transcript of Reid’s talk.
Brady Forrest: We’ve had kind of an arc here at Ignite. We began in the past, we talked about the present, and now, with our last speaker, we’re going to look into the future. Please welcome Rob Reid.
Rob Reid: If you want to become an expert in something you know nothing about, I suggest you sign up to write a book about it. The prospect of awful reviews will terrify you, and as this guy will tell you, fear is a very powerful motivator. Now the other great thing about writing is that authors get amazing access to experts. I learned this twenty years ago, when I wrote a book about the rise of the internet. Practically everybody who mattered in the industry sat down for interviews with me, because smart people believe in books. They want them to be accurate. No school could have taught me what I learned from these folks, which later inspired me to start my own company, which built the Rhapsody music service. But I only started getting really high-quality time and attention from top scientists and technologists when I gave up writing non-fiction and started writing science fiction.
This is catnip to ingenious nerds, who give me wonderful fodder by sharing their thoughts with me, and that’s how I researched my new book, which comes out in August. I wanted to fill it with things I knew nothing about: quantum computing, neuroscience, synthetic biology, and sex and dating post-Tinder, a total enigma to this married guy. So once again, I sought out experts who taught me so much. But this time they scared me a bit too, with something weirdly reminiscent of the Cold War.
Now back then, we spent trillions to keep two people from blowing up the world. We spent it on things like monitoring, diplomacy, espionage, conventional armies, big regional wars that let steam out of the system. None of this was cheap. But today it’s even more expensive to keep things in balance, because not two, but several people could start doomsday. So what happens if twenty people get that power? Or a thousand, or a million? We couldn’t possibly keep a lid on all of that. So could it happen? Well, consider these two curves. The slow boring one is Moore’s Law, and it shows how fast computing power gets cheaper, and we all know how powerful that’s been.
Meanwhile, the steep, crazy curve shows how quickly genetic sequencing gets cheaper. What does this mean? Well, it took countless scientists 13 years and $3 billion to read the first human genome. Today, you can do that with a thousand bucks, that box, and a little bit of this guy’s time. Meanwhile, we’re pushing down the cost of writing DNA that does not exist in nature. We can now synthesize a letter of DNA for about what it cost to read one in 2004. Prices are dropping more slowly this time, so it could take several decades, but someday this guy’s successor will have a print button, as will tens of thousands of his future peers. And this isn’t the next Craig Venter; this is a lab tech. It could be a smart undergrad, or maybe a smart high school student a little further down the line, and eventually maybe even a smart 8th grader.
Because the passage of time makes wizards of us all. There are stocking stuffers out there that Thomas Edison couldn’t have even dreamed of. And particularly in bioscience, there is a yawning gap between the genius required for discovery and the competence required for replication. Shortly after a Nobel winner beat polio, factory workers were mass-producing the cure. Much more recently, some brilliant scientists rebuilt the Spanish flu virus, and that gene code is now all over the Internet. So maybe someday an 8th grader will hit ‘Print’ on it. Or maybe somebody will design a true doomsday bug, maybe for a thesis. It could be a perfectly good person, but later, countless ‘ungood’ people could be in a position to unleash it. But who would do such a thing?
Well, every year, about a million people kill themselves, and a tiny fraction of them try to take as many people with them as possible. And religion can be twisted to justify anything. For now, a lone nut can only kill a few dozen people, and only a handful of people get to start doomsday if they want to, but it costs us trillions of dollars to keep them in line. Who’s going to keep millions of us in line? The NSA can’t scale to do that, and as a big believer in privacy rights, I don’t want them to.
So maybe it’s time for another bogeyman from the science fiction canon: a superintelligence, one as smart in relation to us as we are to bacteria. That thing could be as omniscient and omnipotent as an Old Testament God, and save us from ourselves. Unfortunately, some of the smartest people in the world say this could end in catastrophe, because that AI could have some really cool things to do with the atoms that happen to comprise our bodies and our biosphere. Many scenarios have been modeled concerning a super AI’s goals, and many of them are frightening. Some are whimsical, like the idea of an AI turning our entire planet into a giant computer to contemplate pi, but most of them are very serious and very logical, and far too many of them end in our extinction. I explored a bunch of these scenarios in my book, in a way that I will admit was perversely lighthearted. But I didn’t come away from this project with the glittering clarity that I came away with from researching earlier books. Shall we trust our future to an AI? Or billions of people who aren’t yet out of diapers? Or even in diapers? I don’t know. Thank you.