Technology, Humanity, and the Existential Test

Can we govern ourselves? Will we?

You are here

If you pull far enough back from the day to day debate over technology’s impact on society – far enough that Facebook’s destabilization of democracy, Amazon’s conquering of capitalism, and Google’s domination of our data flows start to blend into one broader, more cohesive picture – what does that picture communicate about the state of humanity today?

Technology forces us to recalculate what it means to be human – what is essentially us, and whether technology represents us, or some emerging otherness which alienates or even terrifies us. We have clothed ourselves in newly discovered data, we have yoked ourselves to new algorithmic harnesses, and we are waking to the human costs of this new practice. Who are we becoming?

Nearly two years ago I predicted that the bloom would fade from the technology industry’s rose, and so far, so true. But as we begin to lose faith in the icons of our former narratives, a nagging and increasingly urgent question arises: In a world where we imagine merging with technology, what makes us uniquely human?

Our lives are now driven in large part by data, code, and processing, and by the governance of algorithms. These determine how data flows, and what insights and decisions are taken as a result.

So yes, software has, in a way, eaten the world. But software is not something being done to us. We have turned the physical world into data, we have translated our thoughts, actions, needs and desires into data, and we have submitted that data for algorithmic inspection and processing.

What we now struggle with is the result of these new habits – the force of technology looping back upon the world, bending it to a new will.
What agency – and responsibility – do we have? Whose will? To what end?

•••

Synonymous with progress, asking not for permission, fearless of breaking things – in particular stupid, worthy-of-being-broken things like government, sclerotic corporations, and fetid social norms – the technology industry reveled for decades in its role as a self-anointed warrior for societal good. As one Senator told me during the Facebook hearings this past summer, “we purposefully didn’t regulate technology, and that was the right thing to do.” But now? He shrugged. Now, maybe it’s time.

Because technology is already regulating us. I’ve always marveled at libertarians who think the best regulatory framework for government is none at all. Do they think that means there’s no governance?

In our capital-driven healthcare system, data, code and algorithms now drive diagnosis, costs, coverage and outcomes. What changes on the ground? People are being denied healthcare, and that equates to life or death in the real world.

In our public square, data, code and algorithms drive civil discourse. We no longer share one physical, common square, but instead struggle to comprehend a world composed of a billion Truman Shows. What changes on the ground? The election results of the world’s most powerful country.

Can you get credit to start a business? A loan to better yourself through education? Financial decisions are now determined by data, code, and algorithms. Job applications are turned into data and run through cohorts of similar profiles, determining who gets hired – and who ultimately ends up leaving the workforce.

And in perhaps the most human pursuit of all – connecting to other humans – we’ve turned our desires and our hopes to data, swapping centuries of cultural norms for faith in the governance of code and algorithms built – in necessary secrecy – by private corporations.

•••

How does a human being make a decision? Individual decision making has always been opaque – who can query what happens inside someone’s head? We gather input, we weigh options and impacts, we test assumptions through conversations with others. And then we make a call – and we hope for the best.

But when others are making decisions that impact us, well, those kinds of decisions require governance. Over thousands of years we’ve designed systems to ensure that our most important societal decisions can be queried and audited for fairness, that they are defensible against some shared logic, and that they will benefit society at large.

We call these systems government. It is imperfect but… it’s better than anarchy.

For centuries, government regulations have constrained social decisions that impact health, job applications, credit – even our public square. Dating we’ve left to the governance of cultural norms, which share the power of government over much of the world.

But in just the past decade, we’ve ceded much of this governance to private companies – companies motivated by market imperatives which demand their decision making processes be hidden.
Our public government – and our culture – have not kept up.

What happens when decisions are taken by algorithms of governance that no one understands?

And what happens when those algorithms are themselves governed by a philosophy called capitalism?

•••

We’ve begun a radical experiment combining technology and capitalism, one that most of us have scarcely considered.

Our public commons – that which we held as owned by all, to the benefit of all – is increasingly becoming privatized.

Thousands of companies are now dedicated to revenue extraction in the course of delivering what were once held as public goods. Public transportation is being hollowed out by Uber, Lyft, and their competitors (leveraging public goods like roadways, traffic infrastructure, and GPS). Public education is losing funding to private schools, MOOCs, and for-profit universities. Public health, most disastrously in the United States, is driven by a capitalist philosophy tinged with technocratic regulatory capture. And in perhaps the greatest example of all, we’ve ceded our financial future to the almighty 401K – individuals can no longer count on pensions or social safety nets – they must instead secure their future by investing in “the markets” – markets which have become inhospitable to anyone lacking the technological acumen of the world’s most cutting-edge hedge funds.

What’s remarkable – and terrifying – about all of this is that the combination of technology and capitalism produces fantastic wealth for a very few, and increasing poverty for the very many. It’s all well and good to claim that everyone should have a 401K. It’s irresponsible to continue that claim when faced with the reality that 84 percent of the stock market is owned by the wealthiest ten percent of the population.

This outcome is not sustainable. When a system of governance fails us, we must examine its fundamental inputs and processes, and seek to change them.

•••

So what truly is governing us in the age of data, code, algorithms and processing?

For nearly five decades, the singular true north of capitalism has been to enrich corporate shareholders.

Other stakeholders – employees, impacted communities, partners, customers – do not directly determine the governance of most corporations.

Corporations are motivated by incentives and available resources. When the incentive is extraction of capital to be placed in the pockets of shareholders, and a new resource becomes available which will aid that extraction, companies will invent fantastic new ways to leverage that resource so as to achieve their goal. If that resource allows corporations to skirt current regulatory frameworks, or bypass them altogether, so much the better.

The new resource, of course, is the combination of data, code, algorithms and processing. Unbridled, replete with the human right of speech and its attendant purchasing of political power, corporations are quite literally becoming our governance model.

Now the caveat: Allow me to state for the record that I am not a socialist. If you’ve never read my work, know I’ve started six companies, invested in scores more, and consider myself an advocate of transparently governed free markets. But we’ve gotten too far out over our skis – the facts no longer support our current governance model.

•••

We turn our world into data; leveraging that data, technocapitalism then terraforms the world. Nowhere is this more evident than with automation – the largest cost of nearly every corporation is human labor, and digital technologies are getting extraordinarily good at replacing it.

Nearly everyone agrees this shift is not new – yes yes, a century or two ago, most of us were farmers. But this shift is coming far faster, and with far less considered governance. The last great transition came over generations. Technocapitalism has risen to its current heights in ten short years. Ten years.

If we are going to get this shift right, we urgently need to engage in a dialog about our core values.
Can we perhaps rethink the purpose of work, given work no longer means labor? Can we reinvent our corporations and our regulatory frameworks to honor, celebrate and support our highest ideals? Can we prioritize what it means to be human even as we create and deploy tools that make redundant the way of life we’ve come to know these past few centuries?

These questions raise a simpler one: What makes us human?

I dusted off my old cultural anthropology texts, and consulted the scholars. The study of humankind teaches us that we are unique in that we are transcendent toolmakers – and digital technology is our most powerful tool. We have nuanced language, which allows us both recollection of the past, and foresight into the future. We are wired – literally at the molecular level – to be social, to depend on one another, to share information and experience. Thanks to all of this, we have the capability to wonder, to understand our place in the world, to philosophize. The love of beauty, philosophers will tell you, is the most human thing of all.

Oh, but then again, we are uniquely capable of intentionally destroying ourselves. Plenty of species can do that by mistake. We’re unique in our ability to do it on purpose.

But perhaps the thing that makes us most human is our love of storytelling, for narrative weaves nearly everything human into one grand experience. Our greatest philosophers even tell stories about telling stories! The best stories employ sublime language, advanced tools, deep community, profound wonder, and inescapable narrative tension. That ability to destroy ourselves? That’s the greatest narrative driver in the history of mankind.

How will it turn out?

•••

We are storytelling engines uniquely capable of understanding our place in the world. And it’s time to change our story, before we fail a grand test of our own making: Can we transition to a world inhabited by both ourselves and the otherness of the technology we’ve created? Should we fail, nature will indifferently shrug its shoulders. It has billions of years to let the whole experiment play out all over again.

We are the architects of this grand narrative. Let’s not miss our opportunity to get it right.

Adapted from a speech presented at the Thrival Humans X Tech conference in Pittsburgh earlier this week.

4 thoughts on “Technology, Humanity, and the Existential Test”

  1. As I read this and think about the questions – can we govern ourselves? where are we headed? – I can’t help but think about the story of the frog and the scorpion. If one has never heard it, it goes like this: a scorpion is on the edge of a river bank and asks the frog, “Can you carry me to the other side?” The frog says, “Oh no, because you’ll sting me and I’ll die,” to which the scorpion replies, “That won’t happen, because if I sting you I’ll drown too.” The frog thinks about it and says, “OK.” Halfway across the river the frog feels a terrible sting in his side and says, “Mr. Scorpion, why have you stung me? Now we will both drown,” to which the scorpion replies, “I couldn’t help it, it’s in my NATURE!” As much as we may try to fight our nature, universal law says eventually you will succumb to it, and that is not necessarily bad in the big picture (though we may think it is); you were designed with that nature for a reason – perhaps one we can’t even see. Can’t help but be a bit of a philosopher when you read John’s articles! Of course my dad would reply, “How many angels can dance on the head of a pin?” 🙂

  2. I am reminded of Frédéric Bastiat’s treatise The Law (written at the time of the French revolution), which basically holds that we are ordained to use whatever resource is available to better ourselves as long as we don’t interfere with someone else’s right to do the same. The governance of that process is the basis for law. In the age of technology, perhaps the evolution of government is the maturation of open source and blockchain, which will allow individuals to combine to make decisions that are good for specific ends and then disband, only to come together to solve some other problem. This raises the question of centralization vs. decentralization – I can’t think of any lasting centralized decision process. I believe an answer worth testing is decentralized community decision making. There might always need to be some type of overriding arbitration process to interpret “fairness,” but I am not entirely sure how it would work. The United States and its Constitution appears to be the latest pre-algorithm attempt to address the issue, however imperfectly.

  3. “We call these systems government. It is imperfect but… it’s better than anarchy.”

    I believe you misrepresent anarchy. It is not chaos. It is self governance. I personally believe it is fairer, more efficient, and arrives at far better outcomes than any of the systems of authoritarianism that have thus far dominated governance (if you can call it that) since our shift to agriculture.
