An Unknown Future is a Scary One


While the election news cycles dedicated to Hillary’s emails and Donald’s ego sucked the oxygen out of our public sphere, here are some of the topics that barely got talked about during this campaign: Climate change. Life after fossil fuels. Automation and artificial intelligence.

The future is barreling down the road toward us. Will we leap aboard, duck, or get squashed? Jim Yardley, a New York Times correspondent who spent the last decade abroad, traveled the country to create a Tocquevillian portrait of the U.S. on the brink of the 2016 election and a wave of economic change that we’ve barely begun to understand.

Many of Yardley’s snapshots counter the prevailing election-year narrative. The Mexico-U.S. border is not a leaky sieve. Tech-rich downtown San Francisco is full of homeless people. Everywhere, people are disoriented by change.

The builders and shapers of the new economy — whether that group feels to you like an “us” or a “them” — need to accord that disorientation some attention and some respect. As minimum viable products get rushed to market to achieve maximum first-mover advantage and disrupt everything in sight, what we all need is a little context, understanding, and compassion. Maybe there’ll be more time for that after tomorrow’s votes are counted.

You Only Think That Job Interview Was Useful

When your company decides on a hire, more often than not the choice gets made on the basis of in-person interviews that provide gut-checks on intangible qualities like “cultural fit.” The trouble with that, writes Cass Sunstein (Bloomberg), lies not only in the door this reliance on intuition opens for bias. There’s also tons of evidence that eyeball-to-eyeball encounters are simply less reliable predictors of a hire’s likelihood of success than more objective measures like aptitude tests and previous performance metrics.

Interviews are more likely to lead a hiring process astray than to help it succeed. They’re full of booby traps for hiring managers, introducing distracting irrelevancies, like physical appearance or mannerisms, that have no bearing on an applicant’s fitness for the job.

Should Facebook Posts Determine Your Car Insurance Rates?

Hours before the launch of a new program that would quote car insurance rates to applicants based on a scan of their Facebook data, British company Admiral pulled the plug (The Guardian). Of course, privacy advocates cheered. And anyone who ever thought Facebook was a safe place to post confessions of unruly behavior might feel relieved.

But this idea isn’t going away. Facebook sold the world on the notion it was a platform where you could “be yourself” and share your life; then it turned around and promised Wall Street it would transmute that data into a geyser of revenue. Of course insurance companies — along with financial marketers, realtors, medical providers, political candidates, and everyone everywhere with anything to sell — want to get their hands on your profile. The only question is how much more reluctant people will get about publishing their lives on Facebook after it opens that spigot wide.

You’ve Already Signed a Novel’s Worth of Digital Contracts

When you click “I agree” to the terms and conditions of a new device or app, you are legally signing a contract — and if you’re like nearly everyone else, it’s a contract you never bother to read. Quartz did a study of Apple’s contract ecosystem — the tens of thousands of words you are bound by if you use multiple Apple products — and tried to gauge its size. Apple isn’t any worse than its competitors in this regard, but it’s a useful bellwether since its products are in so many hands.

Should we care? These agreements rarely interfere with our daily technology use, but they are potential time-bombs. They make commitments for us that we don’t even know about, and companies like Apple can (and do) change them at will without our knowledge. The more software runs our world, the more likely it is that these beast-piles of legalistic language will bite us somewhere that hurts.

The Real Moonshot of Our Time

As artificial-intelligence-driven systems and bots move into every corner of our lives, the value of “human traits like integrity, intuition, and imagination” will only grow — as will our need to make sure that AI systems serve ethical ends chosen by humans.

To achieve that, writes Tim Leberecht (NewCo Shift), we won’t be able to hand off the work to autonomous algorithms. We will have to turn to a new radical humanism for answers to the questions AI will pose.

Get GSD stories delivered to your in-box weekly: Subscribe to the GSD newsletter.
