For decades, data has been the fuel that powers business. But we’re still figuring out how to use this new power source efficiently, and our economic and social arrangements have not yet adapted to the changes (The Economist). As machine-learning-based artificial intelligence becomes the foundation of a new kind of economy, companies have learned to hoard data. But they haven’t figured out a convenient or standardized way to trade it. In this way, it’s not like oil at all — or maybe it’s like oil was before Standard Oil standardized it.
Data isn’t a commodity, yet. That’s because we don’t know how to turn it into fungible units, and we don’t yet know how to track fluctuations in its value depending on timing and context. But this is the kind of problem that processing power and good algorithms will probably solve over time. We’re just scratching the surface of this technology’s capabilities — think PCs in 1982 or the Web in 1994.
The economy stands at a threshold moment in the era of machine learning. The artificial intelligences that companies are increasingly deploying are just beginning to take on roles and jobs that used to demand a human being at the controls. But in most cases they’re nowhere near ready to take over entirely. They still need people at their sides — in some cases to generate the data that will train them, in others to provide judgment that’s beyond them.
Welcome to the world of the hybrid human-machine workplace. A couple of recent articles have begun to give us a portrait of this emerging work environment, with its awkward encounters, unemployment fears, and potential for both efficiency and exploitation.
Let’s circle back one more time to Treasury Secretary Steve Mnuchin’s extraordinary comment that artificial intelligence is “not even on my radar screen” and won’t affect the job market for 50 to 100 years. What was he smoking? And please keep it away from us, okay?
The double whammy of AI and robotics, what observers are calling “the fourth industrial revolution,” is certainly on the rest of the world’s radar, and already having an impact on transportation, manufacturing, retail, medicine, education — everything. We can’t know how this wave of change will play out; scenarios range from utopia to doomsday, and we’re already beginning to live them.
The healthcare debate had its Marie Antoinette moment yesterday, as GOP Rep. Jason Chaffetz told Americans to stop buying smartphones so they could afford health insurance under the new Republican proposal. “Rather than getting that new iPhone, that they just love and want to go spend hundreds of dollars on that, maybe they should invest in their own health care,” Chaffetz said.
The race was on for pundits and publications to calculate just how many iPhones you would need to forgo in order to pay your insurer. For most of us, you’d have to be getting a fancy new phone every month or so to even come close (Lifehacker). But Chaffetz’s comment didn’t only prove his ignorance of the basics of both healthcare and telecommunications economics. It suggested he was fundamentally unaware of how essential a working smartphone — i- or otherwise — is for navigating everyday life and work in America 2017 (Brian Fung in The Washington Post).
If you’ve been reading all year about “machine learning” but still feel you only have a fuzzy grasp of what it is, relax. Having a fuzzy grasp of what something is turns out to be exactly what machine learning is all about. You can glean that and much more from Gideon Lewis-Kraus’s epic artificial-intelligence narrative in The New York Times Magazine, which tells how Google sharpened its translation skills by plugging in a new machine-learning engine.
As he traces the trail of Google’s researchers and engineers, Lewis-Kraus also provides a beautiful summary of how machine learning evolved from a renegade strain of artificial-intelligence theory into a practical tool for delivering useful services — and a powerful harbinger of socioeconomic disruption.
While the election news cycles dedicated to Hillary’s emails and Donald’s ego sucked the oxygen out of our public sphere, here are some of the topics that barely got talked about during this campaign: Climate change. Life after fossil fuels. Automation and artificial intelligence.
Instead of “techsplaining” the future, we need radical humanism
As we adjust to living digital cheek by digital jowl in our hyper-connected world, we’re rapidly approaching a technological shift that will be even bigger than the Internet — the union of man and machine. Some would say that day has already come, as we go about our business tethered to Google Now and Snapchat.
But the coming wave of super-intelligent computers will accelerate and deepen our connection to technology like never before. In the near future, computers will possess increasingly sophisticated types of machine intelligence and “deep learning” algorithms. The Fourth Industrial Revolution will challenge us to rethink what it means to be human.
Maybe the right way to master artificial intelligence isn’t through the markets, but through open collaboration, pure research, and … (shudder) our governments
One of the most intriguing public discussions to emerge over the past year is humanity’s wrestling match with the threat and promise of artificial intelligence. AI has long lurked in our collective consciousness — negatively so, if we’re to take Hollywood movie plots as our guide — but its recent and very real advances are driving critical conversations about the future not only of our economy, but of humanity’s very existence.
In May 2014, the world received a wakeup call from famed physicist Stephen Hawking. Together with three respected AI researchers, the world’s most renowned scientist warned that the commercially driven creation of intelligent machines could be “potentially our worst mistake in history.” Comparing the impact of AI on humanity to the arrival of “a superior alien species,” Hawking and his co-authors found humanity’s current state of preparedness deeply wanting. “Although we are facing potentially the best or worst thing ever to happen to humanity,” they wrote, “little serious research is devoted to these issues outside small nonprofit institutes.”