Facebook Pivots to Privacy. Why?


(cross posted from Searchblog)

I’ll never forget a meal I had with a senior executive at Facebook many years ago, back when I was just starting to question the motives of the burgeoning startup’s ambition. I asked whether the company would ever support publishers across the “rest of the web” – perhaps through an advertising system competitive with Google’s AdSense. The executive’s response was startling and immediate. Everything anyone ever needs to do – including publishing – can and should be done on Facebook. The rest of the Internet was a sideshow. It’s just easier if everything is on one platform, I was told. And Facebook’s goal was to be that platform.

Those words still ring in my ears as we celebrate the 30th anniversary of the web today. And they certainly should inform our perspective as we continue to digest Facebook’s latest self-involved epiphany.


The Internet Must Change. To Get There, Start With the Data.



This is an edited version of a series of talks I first gave in New York over the past week, outlining my work at Columbia. Many thanks to Reinvent, Pete Leyden, Cap Gemini, Columbia University, Cossette/Vision7, and the New York Times for hosting and helping me. Cross posted from Searchblog

Prelude. 

I have spent 30-plus years in the tech and media industries, mainly as a journalist, observer, and founder of companies that either make or support journalism and storytelling. When it comes to many of the things I am going to talk about here, I am not an expert. If I am expert at anything at all, it’s asking questions of technology, and of the media and marketing platforms created by technology. In that spirit I offer the questions I am currently pursuing, in the hope of sparking a dialog with this esteemed audience that leads to better answers.

Some context: Since 1986, I’ve spent my life chasing one story: the impact of technology on society. For whatever reason, I did this by founding or co-founding companies. Wired was kind of a first album, as it were, and it focused on the story broadly told. The Industry Standard focused on the business of the Internet, as did my conference Web 2. Federated Media was a tech and advertising platform for high-quality “conversational” publishers, built on the idea that our social discourse was undergoing a fundamental shift, and that publishers and their audiences needed to be empowered to have a new kind of conversation. Sovrn, a company I still chair, has a similar mission, but with a serious data and tech focus. NewCo, my last company (well, I’ve got another one in the works; perhaps we can talk about that during Q&A), seeks to illuminate the impact of companies on society.

It’s Broke. Let’s Fix It.

And it is that impact that has led me to the work I am doing now, here in New York. I moved here just last Fall, seeking a change in the conversation. To be honest, the Valley was starting to feel a bit…cloistered.

A huge story – the very same story, just expanded – is once again rising. Only it’s just … more urgent. 25 years after the launch of Wired, the wildest dreams of its pages have come true. Back in 1992 we asked ourselves: What would happen to the world when technology becomes the most fundamental driver of our society? Today, we are living in the answer. Turns out, we don’t always like the result.

Most of my career has been spent evangelizing the power of technology to positively transform business, education, and politics. But five or so years ago, that job started to get harder. The externalities of technology’s grip on society were showing through the shiny optimism of the Wired era. Two years ago, in the aftermath of an election that I believe will prove to be the political equivalent of the Black Sox scandal, the world began to wake up to the same thing.

So it’s time to ask ourselves a simple question: What can we do to fix this?

Let’s start with some context. My current work is split between two projects: One has to do with data governance, the other with political media. How might they be connected? I hope that by the end of this talk, it’ll make sense.

So let’s go. In my work at Columbia, I’m currently obsessed with two things. First,

Data.

How much have you thought about that word in the past two years?

Given how much it’s been in the news lately, likely quite a lot. Big data, data breaches, data mining, data science…Today, we’re all about the data.

And second….

Governance.

When was the last time you thought about that word?

Government – well for sure, I’d wager that’s increased given who’s been running the country these past two years. But Governance? Maybe not as much.

But how often have you put the two words together?

Data Governance.

Likely not quite as much.

It’s time to fix that.

Why?

Because we have slouched our way into an architecture of data governance that is broken, that severely retards economic and cultural innovation, and that harms society as a whole.

Let’s unpack that and define our terms. We’ll start with Governance.

What is governance? It’s an …

Architecture of control

A regulatory framework that manages how a system works. The word is most often used in relation to political governance – which we care about a lot for the purposes of this talk – but the word applies to all systems, and in particular to corporations, which is also a key point in the research we’re doing.

Governance in a corporate context is “the system of rules, practices and processes by which a firm is directed and controlled.”

But in my work, when I refer to governance, I am referring to “the system of rules, practices and processes by which a firm controls its relationship to its community.” Who’s that community? You, me, developers and partners in the ecosystem, for the most part. More on that soon.

Now, what is data? I like to think of it as…

Unrefined Information.

I’m not in love with this phrase, but again, this is a first draft of what I hope will grow into more refined (ha) work. Data is the core commodity from which information is created, or refined. Data has many attributes, not all of which are agreed upon. But I think it’s inarguable that the difference between data and information is …

Human meaning.

That’s Socrates, who thought about this shit a lot. Information is data that means something to us (and possibly to the entire universe, as it relates to the second law of thermodynamics – but physics is not the focus of this talk, nor is a possible fourth law of thermodynamics).

As we’ve learned – the hard way – over the past decade, a few very large companies hold purview over a massive catalog of meaningful data, meaningful not only to us, but to society at large. And it’s this societal aspect that, until recently, we’ve willfully overlooked. We’re in the midst of a grand data renaissance, which, if history remotely echoes, I fervently hope will give rise to …

A (Data) Enlightenment

That’s John Locke, an Enlightenment philosopher. Allow me to pull back for a second and attempt to lay some context for the work I hope to advance in the next few years. It starts with the Enlightenment, a great leap forward in human history (and the subject of a robust defense by Steven Pinker last year).

Arguably the crowning document of the Enlightenment is…

The United States Constitution

This declaration of the rights of humankind (well mankind for the first couple of centuries) itself took more than three centuries to emerge (and cribbed generously from the French and English, channeling Locke and Hume). Our current political and economic culture is, of course, a direct descendant of this living document. American democracy was founded upon Enlightenment principles. And the cornerstone of Enlightenment ideas is …

The Scientific Method

That’s Aristotle, often credited with originating the scientific method, which is based on considered thesis formation, rigorous observation, comprehensive data collection, healthy skepticism, and sharing/transparency. The scientific method is our best tool, so far, for advancing human progress and problem solving.

And the scientific method – the pursuit of truth and progress – all that turns on the data. Prompting the question….

Who Has the Most (and Best) Data?

This is the question we are finally asking ourselves, the answer to which is sounding alarms.  As we all know, we are in a renaissance, a deluge, an orgy of data creation. We have invented sophisticated new data sensing organs  –  digital technologies – that have delivered us superhuman powers for the discovery, classification, and sense-making of data.

Not surprisingly, it is technology companies, driven as they are by the raw economics of profit-seeking capital and armed with these self-fulfilling tools of digital exploration and capture – that have initially taken ownership of this emerging resource. And that is a problem, one we’ve only begun to understand and respond to as a society. Which leads to an important question:

Who Is Governing Data?

In the US, anyway, the truth is we don’t have a clear answer to this question. Our light-touch regulatory framework created a tech-driven frenzy of company building, but it failed to anticipate the massive externalities that have emerged now that these companies dominate our capital markets. Clearly, the Tech Platform Companies have the most valuable data – at least if the capital markets are to be believed. Companies like Google. Facebook. Amazon. Apple.

All of these companies have very strong governance structures in place for the data they control. These structures are set internally, and are not subject to much (if any) government regulation. And by extension, nearly all companies that manage data, no matter their size, have similar governance models because they are all drafting off those companies’ work (and success). This has created a phenomenon in our society, one I’ve recently come to call …

The Default Internet Constitution

Without really thinking critically about it, the technology and finance industries have delivered us a new Constitution, a fundamental governance document controlling how information flows through the Internet. It was never ratified by anyone, never debated publicly, never published with a flourish of the pen, and it’s damn hard to read. But, it is based on a discoverable corpus. That corpus, at its core, is based on …

Terms of Service and EULAs

Like it or not, there is a governance model for the US Internet and the data which flows across it: Terms of Service and End User Licensing Agreements. Of course, we actively ignore them – who on earth would ever read them? One researcher did the math, and figured it’d take 76 work days for the average American to read all of the policies she clicks past (and that was six years ago!).
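The arithmetic behind that kind of estimate is straightforward to sketch. The numbers below are purely illustrative assumptions of mine (policy count, policy length, reading speed), not the researcher's actual figures:

```python
# Back-of-envelope: how long would it take to actually read every policy
# you click past in a year? All constants are illustrative assumptions,
# not the figures from the study cited above.

POLICIES_PER_YEAR = 1400   # assumed: distinct services encountered yearly
WORDS_PER_POLICY = 2500    # assumed: average policy length in words
READING_WPM = 250          # assumed: reading speed for dense legal text

def reading_workdays(policies, words_each, wpm, hours_per_workday=8):
    """Return (total_hours, workdays) to read every policy word for word."""
    minutes = policies * words_each / wpm
    hours = minutes / 60
    return hours, hours / hours_per_workday

hours, days = reading_workdays(POLICIES_PER_YEAR, WORDS_PER_POLICY, READING_WPM)
print(f"{hours:.0f} hours, or about {days:.0f} eight-hour workdays")
```

Even with these conservative guesses, the answer lands in the hundreds of hours per year, which is the point: no individual can meaningfully consent to this corpus.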

Of course, ignoring begets ignorance, and we’ve ignored Terms of Service at our peril. No one understands them, but we certainly should – because if we’re going to make change, we’ll want to change these Terms of Service, dramatically. They create the architecture that determines how data, and therefore societal innovation and value, flow around the Internet.

And let’s be clear, these terms of service have hemmed data into silos. They’re built by lawyers, based on the desires of engineers who are – for the most part – far more interested in the product they are creating than any externalities those products might create.

And what are the lawyers concerned with? Well, they have one True North: Protect the core business model of their companies.

And what is that business model? Engagement. Attention. And for most, data-driven personalized advertising. (Don’t get me started about Apple being different. The company is utterly dependent on those apps animating that otherwise black slab of glass they call an iPhone.)

So what ensures engagement and attention? Information refined from data.

So let’s take a look at a rough map of what this Terms of Service-driven architecture looks like:

The Mainframe Architecture

Does this look familiar? If you’re a student of technology industry history, it should, because this is how mainframes worked in the early days of computing. Data compute, data storage, and data transport are handled by the big processor in the sky. The “dumb terminal” lives at the edge of the system, a ‘thin client’ for data input and application output. Intelligence, control, and value exchange live in the center. The center determines all that occurs at the edge.

Remind you of any apps you’ve used lately?

But it wasn’t always this way. The Internet used to look like this:

The Internet 1.0 Architecture

I’m one of the early true believers in the open Internet. Do you remember that world? It’s mostly gone now, but there was a time, from about 1994 to 2012, when the Internet ran on a different architecture, one based on the idea that the intelligence should reside at the nodes – the sites – not at the center. Data was shared laterally between sites. Of course, back then the tech was not that great, and there was a lot of work to be done. But we all knew we’d get there….

…Till the platforms got there first. And they got there very, very well – their stuff was both elegant and addictive.

But could we learn from Internet 1.0, and imagine a scenario inspired by its core lessons? Technologically, the answer is “of course.” This is why so many folks are excited by blockchain, after all (well that, and ICO ponzi schemes…).

But it might be too late, because we’ve already ceded massive value to a broken model. The top five technology firms dominate our capital markets. We’re seriously (over)invested in the current architecture of data control. Changing it would be a massive disruption. But what if we can imagine how such change might occur?

This is the question of my work.

So…what is my work?

A New Architecture

If we’re stuck in an architecture that limits the potential of data in our society, we must envision a world under a different kind of architecture, one that pushes control, agency, and value exchange back out to the node.

Those of us old enough to remember the heady days of Web 1.0 foolishly assumed such a world would emerge unimpeded. But as Tim Wu has pointed out, media and technology run in cycles, ultimately consolidating into a handful of companies with their hands on the Master Switch – we live in a system that rewards the Curse of Bigness. If we are going to change that system, we have to think hard about what we want in its place.

I’ve given this some thought, and I know what I want.

Let The Data Flow

Imagine a scenario where you can securely share your Amazon purchase data with Walmart, and receive significant economic value for doing so (I’ve written this idea up at length here). Of course, this idea is entirely impossible today – a major economic innovation, blocked.

Or imagine a free marketplace for data that allows a would-be restaurant owner to model her customer base’s preferences and unique tastes (I’ve written this idea up at length here). This, too, is impossible today – a major cultural and small-business innovation, impeded.

Neither of these ideas is even remotely possible – nor are the products of thousands of similar questions entrepreneurs might ask of the data rotting in plain sight across our poorly architected data economy.

We all lose when the data can’t flow. We lose collectively, and we lose individually.

But imagine if it were possible!

How might such scenarios become reality?

We’re at a key inflection point in answering that question.

2019 is the year of data regulation. I don’t believe any meaningful regulation will pass here in the US, but it’ll be the year everyone talks about it. It started with the CA/Facebook hearings, and now every self-respecting committee chair wants a tech CEO in their hot seat. Congress and the American people have woken up to the problem, and any number of regulatory fixes are being debated. Beyond the privacy shitstorm and its associated regulatory response, which I’d love to toss around during Q&A, the most discussed regulatory relief is anti-trust – the curse of bigness is best fixed by breaking up the big guys. I understand the goal, and might even support it, but I don’t think we need to even do that. Instead, I submit for your consideration one improbable, crazy, and possibly elegant solution.

The Token Act

I’m calling it the Token Act.

It requires one thing: Every data processing service at a certain scale must deliver back to its customers any co-created data in machine readable format, easily portable to any other data processing service.

Imagine the economic value unlocked, the exponential impact on innovation such a simple rule would have. Of course we must acknowledge the negative short term impact such a policy would have on the big guys. But it also creates an unparalleled opportunity for them – the token of course can include a vig – a percentage of all future revenue associated with that data, for the value the platform helped to create. This model could drive a far bigger business in the long run, and a far healthier one for all parties concerned.
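To make the idea concrete, here is a minimal sketch of what such a portable "token" might look like: a machine-readable record of co-created data that carries provenance and a revenue share (the vig) back to the originating platform. The field names, the serialization choice, and the vig mechanics are all my assumptions, not a proposed spec:

```python
# A sketch of a portable "data token" under something like the Token Act:
# co-created data delivered back to the customer in machine-readable form,
# with provenance and a revenue share ("vig") owed to the platform.
# All field names and values here are hypothetical.

import json
from dataclasses import dataclass, asdict

@dataclass
class DataToken:
    subject: str          # the customer the data was co-created with
    platform: str         # the service delivering the data back
    payload: dict         # the co-created data itself, machine readable
    platform_vig: float   # share of future revenue owed to the platform

    def to_portable(self) -> str:
        """Serialize to JSON so any other data service can import it."""
        return json.dumps(asdict(self), sort_keys=True)

# A customer exports purchase history from one service; a competitor that
# imports the token owes the originating platform its vig.
token = DataToken(
    subject="user:alice",
    platform="example-retailer.com",   # hypothetical platform
    payload={"purchases": [{"sku": "B0123", "price": 19.99}]},
    platform_vig=0.05,                 # assumed 5% revenue share
)
exported = token.to_portable()
```

The design choice worth noticing is that the vig rides with the data itself, so the platform's incentive flips from hoarding to encouraging circulation.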

I can’t prove it yet, but I sense this approach could 10 to 100X our economy. We’ve got some work to do on proving that, but I think we can.

Imagine what would occur if the data was allowed to flow freely. Imagine the upleveling of how firms would have to compete. They’d have to move beyond mere data hoarding, beyond the tending of miniature walled gardens (most app makers) and massive walled agribusinesses (in the case of the platforms – and ADM and Monsanto, but that’s another chapter in the book, one of many).

Instead, firms would have to compete on creating more valuable tokens  – more valuable units of human meaning. And they’d encourage sharing those tokens widely – with the fundamental check of user agency and control governing the entire system.

The bit would flip, and intelligence would once again be driven to the nodes.

To us!

But the Token Act is just an exercise in envisioning a society governed by a different kind of data architecture. There are certainly better or more refined ideas.

And to get to them, we really need to understand how we’re governed today. And now that I’ve gotten nearly to the end of my prepared remarks, I’ll tell you what I’m working on at Columbia with several super smart grad students:

Mapping Data Flows

If we are going to understand how to change our broken architecture of data flows, we need to deeply understand where we are today. And that means visualizing a complex mess. I’m working with a small team of researchers at Columbia, and together we are turning the Terms of Service at Amazon, Apple, Facebook and Google into a database that will drive an interactive visualization – a blueprint of sorts for how data is governed across the US internet. We’re focusing on the advertising market, for obvious reasons, but it’s my hope we might create a model that can be applied to nearly any information rich market. It’s early stages, but our goal is to have something published by the end of May.
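To give a flavor of what turning Terms of Service into a database might involve, here is a minimal sketch of one plausible schema: each clause becomes a row describing which actor may do what with which class of data. The fields are my guesses at a reasonable design, not the Columbia project's actual model:

```python
# A hypothetical schema for a Terms-of-Service database: each policy
# clause is normalized into who (actor) may do what (permission) with
# which class of data. Field names and sample rows are illustrative.

from dataclasses import dataclass

@dataclass
class ToSClause:
    company: str       # e.g. "Google"
    document: str      # which policy document the clause comes from
    data_class: str    # e.g. "location", "purchase history"
    actor: str         # who gets access: "company", "advertisers", ...
    permission: str    # "collect", "share", "sell", "retain"
    source_text: str   # the original clause, kept for auditability

clauses = [
    ToSClause("ExampleCo", "Privacy Policy", "location",
              "advertisers", "share", "We may share location data..."),
    ToSClause("ExampleCo", "Terms of Service", "purchase history",
              "company", "retain", "We retain transaction records..."),
]

# A visualization layer could then group clauses by data class to draw
# a map of how each class of data is permitted to flow.
by_data = {}
for c in clauses:
    by_data.setdefault(c.data_class, []).append((c.actor, c.permission))
```

Keeping the source text alongside each normalized row matters: the visualization stays auditable back to the legal language it summarizes.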

Finally, Advertising

I’ve not spoken much about advertising during this talk, and that was purposeful. I’ve written at length about how we came to the place we now inhabit, and the role of programmatic advertising in getting us there.

Truth is, I don’t see advertising as the cause of this problem, but rather an outgrowth of it. If you offer any company a deal that puts new customers on a platter, as Google did with AdWords, or Facebook has with NewsFeed, well, there’s no way those companies will refuse. Every major advertiser has embraced search and social, as have millions of smaller ones.

Our problem is simply this: The people who run technology platforms don’t actually understand the power and limitations of their systems, and let’s be honest, nor do we. Renee Di Resta has pointed this out in recent work around Russian interference in our national dialog and elections: Any system that allows for automated processing of messages is subject to directed, sophisticated abuse. The place for regulation is not in advertising (even though that’s where it’s begun with the Honest Ads Act), it’s in how the system works architecturally.

But advertisers must be highly aware of this transitional phase in the architecture of a system that has been a major source of revenue and business results. We must imagine what comes next, we must prepare for it, and perhaps, just perhaps, we should invent it, or at the very least play a far more active role than we’re playing currently.

I believe that if we unite – industry, government, media and consumers collectively – to address the core architectural issues inherent in how we manage data, in the process giving consumers economic, creative, and personal agency over the data they co-create with platforms, the question of toxic advertising will disappear faster than it arose.

But I’ve talked (or written) long enough. Thank you so much for coming (for reading), and for being part of this conversation. Now, let’s start it.

Predictions for 2019: Data, Tech, Media, Climate, Markets and…Cannabis…



If predictions are like baseball, I’m bound to have a bad year in 2019, given how well things went the last time around. And given how my own interests, work life, and physical location have changed of late, I’m not entirely sure what might spring from this particular session at the keyboard.

But as I’ve noted in previous versions of this post (all 15 of them are linked at the bottom), I do these predictions in something of a fugue state – I don’t prepare in advance. I just sit down, stare at a blank page, and start to write.

So Happy New Year, and here we go.

1/ Global warming gets really, really, really real. I don’t know how this isn’t the first thing on everyone’s mind already, with all the historic fires, hurricanes, floods, and other related climate catastrophes of 2018. But nature won’t relent in 2019, and we’ll endure something so devastating, right here in the US, that we won’t be able to ignore it anymore. I’m not happy about making this prediction, but it’ll likely take a super Sandy or a king-sized Katrina to slap some sense into America’s body politic. 2019 will be the year it happens.

2/ Mark Zuckerberg resigns as Chairman of Facebook, and relinquishes his supermajority voting rights. Related, Sheryl Sandberg stays right where she is. I honestly don’t see any other way Facebook pulls out of its nosedive. I’ve written about this at length elsewhere, so I will just summarize: Facebook’s only salvation is through a new system of governance. And I mean that word liberally – new governance of how it manages data across its platform, new governance of how it works with communities, governments, and other key actors across its reach, and most fundamentally, new governance as to how it works as a corporate entity. It all starts with the Board asserting its proper role as the governors of the company. At present, the Board is fundamentally toothless.

3/ Despite a ton of noise and smoke from DC, no significant federal legislation is signed around how data is managed in the United States. I  know I predicted just a few posts ago that 2019 will be the year the tech sector has to finally contend with Washington. And it will be…but in the end, nothing definitive will emerge, because we’ll all be utterly distracted by the Trump show (see below). Because of this, unhappily, we’ll end up governed by both GDPR and California’s homespun privacy law, neither of which actually force the kind of change we really need.

4/ The Trump show gets cancelled. Last year, I said Trump would blow up, but not leave. This year, I’m with Fred, Trump’s in his final season. We all love watching a slow motion car wreck, but 2019 is the year most of us realize the car’s careening into a school bus full of our loved ones. Donald Trump, you’re fired.

5/ Cannabis for the win. With Sessions gone and politicians of all stripes looking for an easy win, Congress will pass legislation legalizing cannabis. Huzzah!!!! Just in time, because…

6/ China implodes, the world wobbles. Look, I’m utterly out of my depth here, but something just feels wrong with the whole China picture. Half the world’s experts are warning us that China’s fusion of capitalism and authoritarianism is already taking over the world, and the other half are clinging to the long-held notion that China’s approach to nation building is simply too fragile to withstand democratic capitalism’s demands for transparency. But I think there may be other reasons China’s reach will exceed its grasp: It depends on global growth and optimistic debt markets. And both of those things will fail this year, exposing what is a marvelous but unsustainable experiment in managed markets. This is a long way of backing into a related prediction:

7/ 2019 will be a terrible year for financial markets. This is the ultimate conventional wisdom amongst my colleagues in SF and NY, even though I’ve seen plenty of predictions that Wall St. will have a pretty good year. I have no particular insight as to why I feel this way, it’s mainly a gut call: Things have been too good, for too long. It’s time for a serious correction.

8/ At least one major tech IPO is pulled, the rest disappoint as a class. Uber, Lyft, Slack, Pinterest et al are all expected this year. But it won’t be a good year to go public. Some will have no choice, but others may simply resize their businesses to focus on cash flow, so as to find a better window down the road.

9/ New forms of journalistic media flourish. It’s well past time those of us in the media world take responsibility for the shit we make, and start to try significant new approaches to information delivery vehicles. We have been hostages to the toxic business models of engagement for engagement’s sake. We’ll continue to shake that off in various ways this year – with at least one new format taking off explosively. Will it have lasting power? That won’t be clear by year’s end. But the world is ready to embrace the new, and it’s our job to invest, invent, support, and experiment with how we inform ourselves through the media. Related, but not exactly the same…

10/ A new “social network” emerges by the end of the year. Likely based on messaging and encryption (a la Signal or Confide), the network will have many of the same features as the original Facebook, but will be based on a paid model. There’ll be some clever new angle – there always is – but in the end, it’s a way to manage your social life digitally. There are simply too many pissed off and guilt-ridden social media billionaires with the means to launch such a network – I mean, Insta’s Kevin Systrom, WhatsApp’s Jan and Brian, not to mention the legions of mere multi-millionaires who have bled out of Facebook’s battered body of late.

So that’s it. On a personal note, I’ll be happily busy this year. Since moving to NY this past September, I’ve got several new projects in the works, some still under wraps, some already in process. NewCo and the Shift Forum will continue, but in reconstituted forms. I’ll keep up with my writing as best I can; more likely than not, most of it will focus on the governance of data and how it affects our national dialog. Thanks, as always, for reading and for your emails, comments, and tweets. I read each of them and am inspired by all. May your 2019 bring fulfillment, peace, and gratitude.

Previous predictions:

Predictions 2018

2018: How I Did

Predictions 2017

2017: How I Did

Predictions 2016

2016: How I Did

Predictions 2015

2015: How I Did

Predictions 2014

2014: How I Did

Predictions 2013

2013: How I Did

Predictions 2012

2012: How I Did

Predictions 2011

2011: How I Did

Predictions 2010

2010: How I Did

2009 Predictions

2009 How I Did

2008 Predictions

2008 How I Did

2007 Predictions

2007 How I Did

2006 Predictions

2006 How I Did

2005 Predictions

2005 How I Did

2004 Predictions

2004 How I Did

Don’t Blame Facebook.



We got ourselves into this mess. Facebook was just happy to take our money – and our data – as we did. 2019 is the year we start digging ourselves back out.

(Cross posted from Searchblog)

Those of us fortunate enough to have lived through the birth of the web have a habit of stewing in our own nostalgia. We’ll recall some cool site from ten or more years back, then think to ourselves (or sometimes out loud on Twitter): “Well damn, things were way better back then.”

Then we shut up. After all, we’re likely out of touch, given most of us have never hung out on Twitch. But I’m seeing more and more of this kind of oldster wistfulness, what with Facebook’s current unraveling and the overall implosion of the tech-as-savior narrative in our society.

Hence the chuckle many of us had when we saw this trending piece suggesting that perhaps it was time for us to finally unhook from Facebook and – wait for it – get our own personal webpage, one we updated for any and all to peruse. You know, like a blog, only for now. I don’t know the author – the editor of the tech-site Motherboard – but it’s kind of fun to watch someone join the Old Timers Web Club in real time. Hey Facebook, get off my lawn!!!

That Golden Age

So as to not bury the lead, let me state something upfront: Of course the architecture of our current Internet is borked. It’s dumb. It’s a goddamn desert. It’s soil where seed don’t sprout. Innovation? On the web, that dog stopped hunting years ago.

And who or what’s to blame? No, no. It’s not Facebook. Facebook is merely a symptom – a convenient and easy stand-in, an artifact of a larger failure of our cultural commons. Somewhere in the past decade we got something wrong, we lost our narrative – we allowed Facebook and its kin to run away with our culture.

Instead of focusing on Facebook, which is structurally borked and hurtling toward Yahoo-like irrelevance, it’s time to focus on that mistake we made, and how we might address it.

Just 10 to 15 years ago, things weren’t heading toward our current, crippled version of the Internet. Back in the heady days of 2004 to 2010 – not very long ago – a riot of innovation had overtaken the technology and Internet world. We called this era “Web 2.0” – the Internet was becoming an open, distributed platform, in every meaning of the word. It was generative, it was Gates Line-compliant, and its increasingly muscular technical infrastructure promised wonder and magic and endless buckets of new. Bandwidth, responsive design, data storage, processing on demand, generously instrumented APIs; it was all coming together. Thousands of new projects and companies and ideas and hacks and services bloomed.

Sure, back then the giants were still giants – but they seemed genuinely friendly and aligned with an open, distributed philosophy. Google united the Internet, codifying (and sharing) a data structure that everyone could build upon. Amazon Web Services launched in 2006, and with the problem of storage and processing solved, tens of thousands of new services were launched in a matter of just a few years. Hell, even Facebook launched an open platform, though it quickly realized it had no business doing so. AJAX broke out, allowing for multi-state data-driven user interfaces, and just like that, the web broke out of flatland. Anyone with passable scripting skills could make interesting shit! The promise of Internet 1.0 – that open, connected, intelligence-at-the-node vision we all bought into back before any of it was really possible – by 2008 or so, that promise was damn near realized. Remember LivePlasma? Yeah, that was an amazing mashup. Too bad it’s been dormant for over a decade.

After 2010 or so, things went sideways. And then they got worse. I think in the end, our failure wasn’t that we let Facebook, Google, Apple and Amazon get too big, or too powerful. No, I think instead we failed to consider the impact of the technologies and the companies we were building. We failed to play our hand forward, we failed to realize that these nascent technologies were fragile and ungoverned and liable to be exploited by people less idealistic than we were.

Our Shadow Constitution

Our lack of consideration aided and abetted the creation of an unratified shadow Constitution for the Internet – a governance architecture built on assumptions we have accepted, but are actively ignoring. All those Terms of Service that we clicked past, the EULAs we mocked but failed to challenge – those policies have built walls around our data and how it may be used. Massive platform companies have used those walls to create impenetrable business models. Their IPO filings explain in full how the monopolization and exploitation of data were central to their success – but we bought the stock anyway.

We failed to imagine that these new companies – these Facebooks, Ubers, Amazons and Googles – might one day become exactly what they were destined to become, should we leave them ungoverned and in the thrall of unbridled capitalism.  We never imagined that should they win, the vision we had of a democratic Internet would end up losing.

It’s not that, at the very start at least, tech companies were run by evil people in any larger sense. These were smart kids, almost always male, testing the limits of adolescence in their first years after high school or college. Timing mattered most: In the mid-to-late oughts, with the winds of Web 2 at their back, these companies had the right ideas at the right time, with an eager nexus of opportunistic capital urging them forward.

They built extraordinary companies. But again, they built a new architecture of governance over our economy and our culture – a brutalist ecosystem that repels innovation. Not on purpose – not at first. But protected by the walls of the Internet’s newly established shadow constitution and in the thrall of a new kind of technology-fused capitalism, they certainly got good at exploiting their data-driven leverage.

So here we are, at the end of 2018, with all our darlings, the leaders not only of the tech sector, but of our entire economy, bloodied by doubt, staggering from the weight of unconsidered externalities. What comes next?

2019: The Year of Internet Policy

Whether we like it or not, Policy with a capital P is coming to the Internet world next year. Our newly emboldened Congress is scrambling to introduce multiple pieces of legislation, from an Internet Bill of Rights to a federal privacy law modeled on – shudder – the EU’s GDPR. In the past month, I’ve read draft policy papers suggesting that we tax the Internet’s advertising model, that we break up Google, Facebook, and Amazon, or that we back off and just let the market “do its work.”

And that’s a good thing, to my mind – it seems we’re finally coming to terms with the power of the companies we’ve created, and we’re ready to have a national dialog about a path forward. To that end, a spot of personal news: I’ve joined the School of International and Public Affairs at Columbia University, and I’m working on a research project studying how data flows in US markets, with an emphasis on the major tech platforms. I’m also teaching a course on Internet business models and policy. In short, I’m leaning into this conversation, and you’ll likely be seeing a lot more writing on these topics here over the course of the next year or so.

Oh, and yeah, I’m also working on a new project, which remains in stealth for the time being. Yep, has to do with media and tech, but with a new focus: Our political dialog. More on that later in the year.

I know I’ve been a bit quiet this past month, but starting up new things requires a lot of work, and my writing has suffered as a result. But I’ve got quite a few pieces in the queue, starting with my annual roundup of how I did in my predictions for the year, and then of course my predictions for 2019. But I’ll spoil at least one of them now and just summarize the point of this post from the start: It’s time we figure out how to build a better Internet, and 2019 will be the year policymakers get deeply involved in this overdue and essential conversation.

Naked and Afraid

Mark Zuckerberg is in a crisis of leadership. Will he grasp its opportunity?

It seems like an eternity, but just about one year ago this fall, Uber kicked its iconic founding CEO to the curb, and he responded by attempting a boardroom coup. Meanwhile, Facebook was at least a year into crisis mode, clumsily dealing with a spreading contagion that culminated in a Yom Kippur apology from CEO Mark Zuckerberg. “For those I hurt this year, I ask forgiveness and I will try to be better,” he posted. “For the ways my work was used to divide people rather than bring us together, I ask for forgiveness and I will work to do better.”

More than a year after that work reputedly began, what lesson can we draw from Facebook’s still-rolling catastrophe? I think it’s pretty clear: Mark Zuckerberg needs to do a lot more than publish blog posts someone else has written for him.

And while I’m not much of a fan of the company he’s built, I think Facebook’s CEO can change. But only if he’s willing to truly lead, and take the kind of action that today may seem insane, but ten years from now, just might look like genius. What actions might those be? Well, let’s review.

Admit you have a problem. Yes, over and over and over, Facebook executives have copped a plea. But they’ve never acknowledged the real problem is the company’s core DNA. More often than not, the company plays the pre-teen game of admitting a small sin so as to cover a larger one. The latest case in point is this post-modern gem: Elliot Schrage On Definers. The headline alone says all you need to know about Facebook’s latest disaster: Blame the guy who hired the firm, have him fall on a sword, add a bit of Sandbergian mea culpa, and move along. Nope, this time is different, Facebook. It’s time for fundamental change. And that means….

Submit to real governance. Like Google, Uber, Snap, and other controversial tech companies, Facebook implemented a two-class system of shares which canonizes its founder as an untouchable god, rendering the company’s board toothless in moments of true crisis (and in appeasement mode the rest of the time). Following Uber’s lead, it’s time for Mark to submit to the governance of the capital markets and abandon his super-majority voting powers. He must stand before his board naked and afraid for his job. This and this alone will predicate the kind of change Facebook needs.

Bring in outsiders. Facebook’s core problem is expressed through its insular nature. This is also the technology industry’s problem – an engineer’s determination that every obstacle can be hacked to submission, and that non-engineers are mainly good for paint and powder afterward. This is simply not the case anymore, either at Facebook or in tech more broadly. Zuckerberg must demand his board commission a highly qualified panel to review his company’s management and product decisions, and he must commit to implementing that panel’s recommendations. Along those lines, here are two major thought starters:

Embrace radical change. Remember “Bringing People Closer Together” and the wildly misappropriated “Time Well Spent”? This was supposedly a major new product initiative to change Facebook’s core mission, designed to shift our attention from what was wrong with the platform – data breaches, the newsfeed, false news and election meddling – to what could be right about it: Community pages and human connection. Has it worked? Let’s just be honest: No. Community doesn’t happen because a technology company writes a blog post or emphasizes a product suite it built for an entirely different purpose. Facebook can’t be fixed unless it changes its core business model. So just do it, already. Which leads to:

Free the data. Facebook has so far failed to enable a truly open society, despite its embrace of lofty mission statements. I’ve written about this at length, so I’ll just summarize: Embrace machine-readable data portability, and build a true, Gates-line compliant platform that is governed by the people, companies, and participants who benefit from it. Yes, actually governing is a messy pain in the ass, but failing to govern? That’s a company killer.

Many brilliant observers are calling for Mark’s head, and/or for the company to be broken up. I’m not sure either of these solutions will do much more than ensure that the company fails. What tech needs now is proof that it can lead with bold, high-minded vision that gives back more than it takes. Mark Zuckerberg has the power to do just that. The only question now is whether he will use it.

Cross posted from Searchblog

Tech Must Get Over Its Superman Complex, Or We’re All Screwed

Detail from the cover of Yuval Noah Harari’s 21 Lessons for the 21st Century

Everyone in tech loves Yuval Noah Harari. This is cause for concern.

A year and a half ago I reviewed Yuval Noah Harari’s Homo Deus, recommending it to the entire industry with this subhead: “No one in tech is talking about Homo Deus. We most certainly should be.”

Eighteen months later, Harari is finally having his technology industry moment. The author of a trio of increasingly disturbing books – Sapiens, which made his name as a popular historian-philosopher, the aforementioned Homo Deus, which introduced a dark strain of tech futurism to his work, and the recent 21 Lessons for the 21st Century – Harari has cemented his place in the Valley as tech’s favorite self-flagellant. So it’s only fitting that this weekend Harari was the subject of a New York Times profile featuring this provocative title: Tech C.E.O.s Are in Love With Their Principal Doomsayer. The subhead continues: “The futurist philosopher Yuval Noah Harari thinks Silicon Valley is an engine of dystopian ruin. So why do the digital elite adore him so?”

Well, I’m not sure if I qualify as one of those elites, but I have a theory, one that wasn’t quite raised in the Times’ otherwise compelling profile. I’ve been a student of Harari’s work, and if there’s one clear message, it’s this: We’re running headlong into a world controlled by a tiny elite of superhumans, masters of new technologies that the “useless class” will never understand. “Homo sapiens is an obsolete algorithm,” Harari writes in Homo Deus. A new religion of Dataism will transcend our current obsession with ourselves, and we will “dissolve within the data torrent like a clump of earth within a gushing river.” In other words, we humans are f*cked, save for a few of the lucky ones who manage to transcend their fate and become masters of the machines. “Silicon Valley is creating a tiny ruling class,” the Times writes, paraphrasing Harari’s work, “and a teeming, furious ‘useless class.’”

So here’s why I think the Valley loves Harari: We all believe we’ll be members of that tiny ruling class. It’s an indefensible, mathematically impossible belief, but as Harari reminds us in 21 Lessons, “never underestimate human stupidity.” Put another way, we are fooling ourselves, content to imagine we’ll somehow all earn a ticket into (or onto) whatever apocalypse-dodging exit plan Musk, Page or Bezos might dream up (they’re all obsessed with leaving the planet, after all). Believing that impossible fiction is certainly a lot easier than doing the quotidian work of actually fixing the problems which lie before us. Better to be one of the winners than to risk losing along with the rest of the useless class, no?

But we can’t all be winners in the future Harari lays out, and he seems to understand this fact. “If you make people start thinking far more deeply and seriously about these issues,” he said to the Times, “some of the things they will think about might not be what you want them to think about.”

Exactly, Professor. Now that I’ve departed the Valley, where I spent nearly three decades of my life, I’m starting to gain a bit of perspective on my own complicated relationship with the power structure of the place. I grew up with the (mostly) men who lead companies like Amazon, Google, Facebook and Apple, and early in the industry’s rise, it was heady to share the same stage with legends like Bezos, Jobs, or Page. But as the technology industry becomes the driving force of social rupture, I’m far more skeptical of its leaders’ abilities to, well, lead.

Witness this nearly idea-free interview with Google CEO Sundar Pichai, also in the Times, where the meticulously media-prepped executive opines on whether his industry has a role to play in society’s ills: “Every generation is worried about the new technology, and feels like this time it’s different. Our parents worried about Elvis Presley’s influence on kids. So, I’m always asking the question, ‘Why would it be any different this time?’ Having said that, I do realize the change that’s happening now is much faster than ever before. My son still doesn’t have a phone.”

Pichai’s son may not have a phone, but he is earning money mining Ethereum (really, you can’t make this shit up). I’m not sure the son of a centi-millionaire needs to earn money – but it certainly is useful to master the algorithms that will soon control nearly every aspect of human life. So – no, son, no addictive phone for you (even though my company makes them, and makes their operating systems, and makes the apps which ensure their addictive qualities).

But mining crypto currency? Absolutely!

Should Harari be proven right and humanity become irrelevant, I’m pretty sure Pichai’s son will have a first-class ticket out of whatever mess is left behind. But the rest of us? We should probably focus on making sure that kid never needs to use it.


By the way, the other current obsession of Valley folks is author Anand Giridharadas’ Winners Take All – The Elite Charade of Changing the World. Read them together for a one-two punch, if you dare…

Don’t Break Up The Tech Oligarchs. Force Them To Share Instead.


Social conversations about difficult and complex topics have arcs – they tend to start scattered, with many threads and potential paths, then resolve over time toward consensus. This consensus differs based on groups within society – Fox News aficionados will cluster one way, NPR devotees another. Regardless of the group, such consensus then becomes presumption – and once a group of people presume, they fail to explore potentially difficult or presumably impossible alternative solutions.

This is often a good thing – an efficient way to get to an answer. But it can also mean we fail to imagine a better solution, because our own biases are obstructing a more elegant path forward.

This is my sense of the current conversation around the impact of what Professor Scott Galloway has named “The Four” – the largest and most powerful American companies in technology (they are Apple, Amazon, Google, and Facebook, for those just returning from a ten-year nap).  Over the past year or so, the conversation around technology has become one of “something must be done.” Tech was too powerful, it consumed too much of our data and too much of our economic growth. Europe passed GDPR, Congress held ineffectual hearings, Facebook kept screwing up, Google failed to show up…it was all of a piece.

The conversation evolved into a debate about various remedies, and recently, it’s resolved into a pretty consistent consensus, at least amongst a certain class of tech observers: These companies need to be broken up. Antitrust, many now claim, is the best remedy for the market dominance these companies have amassed.

It’s a seductive response, with seductive historical precedent. In the 1970s and 80s, antitrust broke up AT&T, ultimately paving the way for the Internet to flourish. In the 90s, antitrust provided the framework for the government’s case against Microsoft, opening the door for new companies like Google and Facebook to dominate the next version of the Internet. Why wouldn’t antitrust regulation usher in #Internet3? Imagine a world where YouTube, Instagram, and Amazon Web Services are all separate companies. Would not that world be better?

Perhaps. I’m not well-read enough in antitrust law to argue one way or the other, but I know that antitrust turns on the idea of consumer harm (usually measured in terms of price), and there’s a strong argument to be made that a free service like Google or Facebook can’t possibly cause consumer harm. Then again, there are many who argue that data is in fact currency, and The Four have essentially monopolized a class of that currency.

But even as I stare at the antitrust remedy, another solution keeps poking at me, one that on its face seems quite elegant and rather unexplored.

The idea is simply this: Require all companies who’ve reached a certain scale to build machine-readable data portability into their platforms. The right to data portability is explicit in the EU’s newly enacted GDPR framework, but so far the impact has been slight: There’s enough wiggle room in the verbiage to hamper technical implementation and scope. Plus, let’s be honest: Europe has never really been a hotbed of open innovation in the first place.

But what if we had a similar statute here? And I don’t mean all of GDPR – that’s certainly a non-starter. But that one rule, that one requirement: that every data service at scale had to stand up an API that allowed consumers to access their co-created data, download a copy of it (which I am calling a token), and make that copy available to any service they deemed worthy?

Imagine what might come of that in the United States?

I’m not a policy expert, and the devil’s always in the details. So let me be clear in what I mean when I say “machine-readable data portability”: The right to take, via an API, what is essentially a “token” containing all (or a portion of) the data you’ve co-created in one service, and offer it, with various protections, permissions, and revocability, to another service. In my Senate testimony, I gave the example of a token that has all your Amazon purchases, which you then give to Walmart so it can do a historical price comparison and tell you how much money you would save if you shopped at its online service. Walmart would have a powerful incentive to get consumers to create and share that token – the most difficult problem in nearly all of business is getting a customer to switch to a similar service. That would be quite a valuable token, I’d wager*.
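To make the token mechanics concrete, here’s a minimal sketch in Python. Everything in it is hypothetical – the field names, the HMAC-based signing scheme, and the services involved are illustrative stand-ins, not any real Amazon or Walmart API; a real implementation would need standardized schemas and far stronger consent and revocation machinery.

```python
# Hypothetical "purchase token" sketch: a platform bundles a user's
# co-created data with a named grantee, a scope, an expiry, and a
# revocation flag, then signs the bundle so a receiving service can
# verify it. All names and fields here are illustrative.
import hashlib
import hmac
import json
import time

SECRET = b"platform-signing-key"  # stand-in for the platform's key


def issue_token(user_id, purchases, grantee, scope, ttl_days=30):
    """Platform side: wrap the data in permissions and sign it."""
    payload = {
        "user": user_id,
        "grantee": grantee,          # who may read this token
        "scope": scope,              # e.g. "purchase-history"
        "expires": time.time() + ttl_days * 86400,
        "revoked": False,
        "data": purchases,
    }
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}


def verify_token(token, grantee):
    """Receiving side: check signature, grantee, expiry, and revocation."""
    body = json.dumps(token["payload"], sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        token["sig"], hmac.new(SECRET, body, hashlib.sha256).hexdigest())
    p = token["payload"]
    return (ok_sig and p["grantee"] == grantee
            and not p["revoked"] and time.time() < p["expires"])


# A consumer exports purchase history to a competing retailer for a
# price comparison (the Senate-testimony example above).
token = issue_token("user-123",
                    [{"item": "hydrogen peroxide", "price": 5.09}],
                    grantee="walmart", scope="purchase-history")
assert verify_token(token, "walmart")
assert not verify_token(token, "target")  # not the named grantee
```

The point of the sketch is the shape of the thing: the data travels with its permissions, the consumer names who may read it, and revocation or expiry kills it – exactly the properties a portability statute would need to mandate.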

Should be simple to do, no? I mean, don’t we at least co-own the information about what we bought at Amazon?

Well, no. Not really. Between confusing terms of service, hard-to-find dashboards, and confounding data reporting standards, The Four can claim we “own our own data” while at the same time ensuring there’ll never be a true market for the information they have about us.

So yes, my idea is easily dismissed. The initial response I’ve had to it is always some variation of: “There’s no way The Four would let this happen.” That’s exactly the kind of bias I referred to above – we assume that The Four control the dialog, that they will either thwart this idea through intensive lobbying, clever terms of service, and soft power, or that the idea is practically impossible because of technical or market limitations. To that I ask…Why?

Why is it impossible for me to tokenize all of my Lyft ride data, and give it for free to an academic project that is mapping the impact of ride sharing on congestion in major cities? Why is it impossible for a small business owner to create an RFP for all OpenTable, Resy, and other dining data, so she can determine the best kind of restaurant to open in her neighborhood? I’m pretty certain she’d pay a few bucks a head for that kind of data – so why can’t I sell that information to her (with a vig back to OpenTable and Resy) if the value exchange is there to be monetized? Why can’t I tokenize and sell my Twitter interactions to a brand (or more likely, an agency or research company) interested in understanding the mind of a father who lives in Manhattan? Why can’t I tokenize and trade my Spotify history for better recommendations on live shows to see, or movies to watch, or books to read? Or simply give it to a free service that’s sprung up to give me suggestions about new music to check out?

Why can’t an ecosystem of agents, startups, and data brokers emerge, a new industry of information processing not seen since the rise of search optimization in the early aughts, leveraging and arbitraging consumer information to create entirely new kinds of businesses driven by insights currently buried in today’s data monopolies?

Such a world would be fascinating, exciting, sometimes sketchy, and a hell of a lot of fun. It’d be driven by the individual choices of millions of consumers – choosing which agents to trust, which tokens to create, which trades felt fair. There’d be fails, there’d be fraud, there’d be bad actors. But over time, the good would win over the bad, because the decision making is distributed across the entire population of Internet users. In short, we’d push the decision making to the node – to us. Sure, we’d do stupid things. And sure, the hucksters and the hustlers would make short-term killings. But I’ll take an open system like this over a closed one any day of the week, especially if the open system is governed by an architecture empowering the individual to make their own decisions.

It’d be a lot like the Internet was once imagined to be.

I’ve been noodling on such an ecosystem, and I’m convinced it could dwarf our current Internet in terms of overall value created (and credit where credit is due, The Four have created a lot of value). It’d run laps around The Four when it comes to innovation – tens of thousands of new companies would form, all of them feeding off the newly liberated oxygen of high quality, structured, machine readable data. Trusted independent platforms for value exchange would arise. Independent third party agents would munge tokens from competing services, verifying claims and earning the trust of consumers (will Walmart really save you a thousand bucks a year?! We can prove it, or not!). Huge platforms would develop for the processing, securitization, permissioning, and validation of our data. Man, it’d feel like…well, like the recumbent, boring old Internet was finally exciting again.

There’s no technical reason why this world doesn’t exist. The progenitors of the Web have already imagined it – heck, Tim Berners-Lee recently announced he’s working pretty much full time on creating a system devoted to the foundational elements needed for it to blossom.

But until we as a society write machine-readable data portability into law, such efforts will be relegated to interesting sideshows. And more likely than not, we’ll spend the next few years arguing about breaking up The Four, and let’s be honest, that’s an argument The Four want us to have, because they’re going to win it (more money, better lawyers, etc. etc.). Instead, we should just require them – and all other data services of scale – to free the data they’ve so far managed to imprison. One simple new law could change all of that. Shouldn’t we consider it?

*In another post, I’ll explore this example in detail. It’s really, really fascinating. 

Cross posted from Searchblog

This Is How Amazon Loses

Algorithmic merchandising leaves a bad taste in my mouth. Slowly but surely, it will erode trust in all the tech giants.

Yesterday, I lost it over a hangnail and a two-dollar bottle of hydrogen peroxide.

You know when a hangnail gets angry, and a tiny red ball of pain settles in for a party on the side of your finger? Well, yeah. That was me last night. My usual solution is to stick said finger into a bottle of peroxide for a good long soak. But we were out of the stuff, so, as has become my habit, I turned to Amazon. And that’s when things not only got weird, they got manipulative. Sure, I’ve been ambiently aware of Amazon’s algorithmic pricing and merchandising practices, but last night, the raw power of the company’s control over my routine purchases was on full display.

There’s literally no company in the world with better data about online purchasing than Amazon. So studying how and where it lures a shopper through a purchase process is a worthy exercise. This particular one left a terrible taste in my mouth – one I don’t think I’ll ever shake.

First the detail. Take a look at my search results for “Hydrogen Peroxide” on Amazon. I’ve annotated them with red text and arrows:

(annotated screenshot: Amazon search results for “Hydrogen Peroxide”)

As you can see, the most eye-catching suggestions – the four featured panels with large images – are all Amazon brands. Big red flag. But Amazon knows sophisticated shoppers like me are suspicious of those in-house suggestions, so it’s included a similar product in the space below its own brands (we’ll get to that in a minute).

Above the featured items are ads: sponsored listings that are not Amazon brands, which means the advertiser (a small player named “Blubonic Industries”) is paying Amazon to get ahead of the company’s own promotional power. Either way, Amazon makes money. Second red flag.

By now, I’ve decided I’m not interested in either the sponsored brands at the top, or Amazon’s four featured brands, because, well, I don’t like to be so baldly steered into buying Amazon’s own products. Then again, before I move down to the results below, I do notice something rather amazing – Amazon’s familiar brown bottle of peroxide is really, really cheap – as in, $1.29 cheap. There’s even a helpful per oz. calculation next to the price, screaming: this shit is eight pennies an ounce cheap!

Well, I’m almost sold, but because I hate to be directed into purchases, I’m still going to consider that similar brown bottle below, the one with the red label. Amazon knows this, of course. It’s merchandising 101 – make sure you give the consumer choices, but also, make sure the most profitable choice is presented in such a way as to win the day.

So my eye moves down the page to check out the second bottle. It’s from Swan, a brand I’ve vaguely heard of. Then I check its price.

Nine dollars and sixty nine cents.

Which would you buy? After all, this is a staple, a basic, a chemical compound. And you trust Amazon to get shit right, don’t you? I mean, a buck and change – more than seven times cheaper? What a deal!

So…my eyes revert to Amazon’s blue labeled bottle. It wouldn’t have a four-star plus review if it burned your skin, right? And that’s when I notice the tiny icon next to it.

What’s this? Is this yet another annoying subscription service?  Ever since we moved to New York, my wife and I have tried to figure out Amazon’s subscription services (Fresh? Pantry? Prime Now? Whole Foods Delivery? Who knows?!). I’m already deeply suspicious of any attempt by Amazon to lure me into paying them monthly for a service that I don’t understand.

But…a buck twenty nine! So I click on the bottle, and the landing page is super clean, and there’s no obvious Prime Pantry mention. Plus, it turns out, that bottle from Amazon is the Whole Foods generic brand, which for whatever reason seems a bit better than a generic Amazon brand. Did I just get lucky? Maybe I can just get some super cheap chemicals delivered in a day to my door, and my annoying hangnail will be a thing of the past soon enough….Right?

Here’s the landing page:

(screenshot: the Amazon landing page for the $1.29 bottle)

Looks great, the price is amazing, but…Uh oh. I can’t get this bottle of peroxide until Sunday. By then, I’ve likely lost my finger to a flesh-eating bacteria. As I feared, this bottle is nothing more than a baited fish hook for one of Amazon’s subscription offers – which, I find out, will cost somewhere between five and thirteen bucks a month. I’ve signed up for Prime Pantry by mistake in the past, and it wasn’t a smooth or enjoyable experience. No thanks. I click back to the original search results. Seems to me Amazon is gaming the shipping deals.

Well of course it is. I’m no longer a happy Amazon customer at this point. Now I’m just annoyed.

But what’s this? If I scroll down below the $9.69 bottle, there’s another choice, also from Swan, and, it seems, exactly the same, if one is to judge just by the image (and we do judge just from the images, let’s just admit it). This one costs almost half as much as the one above it. What’s going on?! Here’s an annotated screen shot:

(annotated screenshot: the two Swan hydrogen peroxide listings)

As you can see, there’s a lot going on. I’ve narrowed my choice down to two non-Amazon brands. They look nearly identical. The most significant difference, at least in terms of the information provided to me by Amazon, is the price – the top bottle is nearly twice as expensive as the bottom one. But the top bottle has a major benefit: I can get it nearly immediately! The bottom one makes me wait a day. Is the wait worth four or five bucks? Hmm.

Also confounding: The bottom bottle has its price broken out on a per ounce basis – 32 cents, exactly four times more than the 8 cents-an-ounce bottle I just looked at from Amazon’s Prime Pantry. Ouch! Now I’m really annoyed, and confused. My eyes dart back up to the $9.69 bottle. As I’ve shown with the empty red circle, there’s….no per-ounce breakdown shown by Amazon. It does tell me that this particular bottle is 32 ounces, whereas the bottom one is 16 ounces.

But why not do the math for me? A quick calculation shows that the top bottle comes out to about 30 cents an ounce – two cents less than the bottom bottle. Why not show that fact?
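The omitted math takes two lines. (Note: this takes the on-page numbers at face value; as the update at the end of this post points out, the top listing was actually a two-pack, which would change the result.)

```python
# The per-ounce figure Amazon omitted for the top bottle, computed
# from the on-page numbers taken at face value (the post's update
# notes the top listing was actually a two-pack).
top_price, top_ounces = 9.69, 32
top_per_oz = top_price / top_ounces
print(f"${top_per_oz:.2f} per oz")  # about $0.30 -- two cents less than
                                    # the $0.32/oz shown for the bottom bottle
```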

This, folks, this is algorithmic merchandising at its finest.

Amazon knows exactly how many clicks it’s going to take for me to reach shopping fatigue. Not “on average for all shoppers,” or even “on average for each shopper who’s ever considered a bottle of hydrogen peroxide.” Amazon knows all of that, of course, but it also knows exactly how long it takes ME to get fatigued, to enter what I like to call “fuck it” mode. As in, “fuck it, I’m tired of this bullshit, I want to get back to the rest of my life. I’m going to buy one of these bottles.”

And because there’s no per-ounce breakdown of the 32-ounce bottle, and because that makes me suspicious of it, and because hell, who ever needs 32 ounces of hydrogen peroxide anyway, well, I’m just going to buy the $5 one.

Ca-ching! Amazon just made a nearly seven percent markup on my purchase. It took five clicks, 15 seconds, and a vast architecture of data and algorithmic mastery to make that profit. Each and every time we purchase something on Amazon, that machinery is engaged in the background, guiding us through choices which ensure the company remains the trillion-dollar behemoth we know and…

Love?

***

Do you love Amazon anymore? For that matter, do you love Facebook, Google, or Twitter? Interactions like the one I’ve detailed above are starting to chip away at that presumption. Personally, I’ve gone from cheerleader to skeptic over the past few years, and I’ve broken out into full-blown critic over the last twelve months. I no longer trust Amazon to have my best interests at heart. I’ve lost any trust that Facebook or Twitter can deliver me a public square representative of my democracy. I’ve given up on Google delivering me search results that are truly “organic.” And YouTube? Point solution, at best. I can’t possibly trust the autoplay feature to do much more than waste my time.

What’s happened to our beloved tech icons, and what are the implications of this lost trust? In future posts, I plan on thinking out loud on that topic. I hope you’ll join me. In the meantime, I think I’ll stroll down to CVS and buy myself another bottle of hydrogen peroxide. By the time Amazon’s comes, I’m sure my hangnail will be a distant memory. But that taste in my mouth? That’s going to remain.

***

Update: Many readers have pointed out that I missed the fact that the top package of peroxide was, in fact, a two-pack. True that, and it would have changed my on-the-fly calculation of which bottle to buy, given the per-ounce comparison. It does not change the fact, however, that omitting the per-ounce calculation from the page somehow discolored that choice.

Also, a rather rich postscript: The bottle I did buy never came. It was “lost” – and Amazon offered me a refund. Sometimes it pays to just hit CVS.

(cross posted from Searchblog)

Technology, Humanity, and the Existential Test

By

Can we govern ourselves? Will we?

You are here

If you pull far enough back from the day to day debate over technology’s impact on society – far enough that Facebook’s destabilization of democracy, Amazon’s conquering of capitalism, and Google’s domination of our data flows start to blend into one broader, more cohesive picture – what does that picture communicate about the state of humanity today?

Technology forces us to recalculate what it means to be human – what is essentially us, and whether technology represents us, or some emerging otherness which alienates or even terrifies us. We have clothed ourselves in newly discovered data, we have yoked ourselves to new algorithmic harnesses, and we are waking to the human costs of this new practice. Who are we becoming?

Nearly two years ago I predicted that the bloom would fade from the technology industry’s rose, and so far, so true. But as we begin to lose faith in the icons of our former narratives, a nagging and increasingly urgent question arises: In a world where we imagine merging with technology, what makes us uniquely human?

Our lives are now driven in large part by data, code, and processing, and by the governance of algorithms. These determine how data flows, and what insights and decisions are taken as a result.

So yes, software has, in a way, eaten the world. But software is not something being done to us. We have turned the physical world into data, we have translated our thoughts, actions, needs and desires into data, and we have submitted that data for algorithmic inspection and processing.

What we now struggle with is the result of these new habits – the force of technology looping back upon the world, bending it to a new will.
What agency – and responsibility – do we have? Whose will? To what end?

•••

Synonymous with progress, asking not for permission, fearless of breaking things – in particular stupid, worthy-of-being-broken things like government, sclerotic corporations, and fetid social norms – the technology industry reveled for decades in its role as a kind of anointed warrior for societal good. As one Senator told me during the Facebook hearings this past summer, “we purposefully didn’t regulate technology, and that was the right thing to do.” But now? He shrugged. Now, maybe it’s time.

Because technology is already regulating us. I’ve always marveled at libertarians who think the best regulatory framework for government is none at all. Do they think that means there’s no governance?

In our capitalist healthcare system, data, code and algorithms now drive diagnosis, costs, coverage and outcomes. What changes on the ground? People are being denied healthcare, and this equates to life or death in the real world.

In our public square, data, code and algorithms drive civil discourse. We no longer share one physical, common square, but instead struggle to comprehend a world composed of a billion Truman Shows. What changes on the ground? The election results of the world’s most powerful country.

Can you get credit to start a business? A loan to better yourself through education? Financial decisions are now determined by data, code, and algorithms. Job applications are turned to data, and run through cohorts of similarities, determining who gets hired, and who ultimately ends up leaving the workforce.

And in perhaps the most human pursuit of all – connecting to other humans – we’ve turned our desires and our hopes to data, swapping centuries of cultural norms for faith in the governance of code and algorithms built – in necessary secrecy – by private corporations.

•••

How does a human being make a decision? Individual decision making has always been opaque – who can query what happens inside someone’s head? We gather input, we weigh options and impacts, we test assumptions through conversations with others. And then we make a call – and we hope for the best.

But when others are making decisions that impact us, well, those kinds of decisions require governance. Over thousands of years we’ve designed systems to ensure that our most important societal decisions can be queried and audited for fairness, that they are defensible against some shared logic, that they will benefit society at large.

We call these systems government. It is imperfect but… it’s better than anarchy.

For centuries, government regulations have constrained social decisions that impact health, job applications, credit – even our public square. Dating we’ve left to the governance of cultural norms, which share the power of government over much of the world.

But in just the past decade, we’ve ceded much of this governance to private companies – companies motivated by market imperatives which demand their decision-making processes be hidden. Our public government – and our culture – have not kept up.

What happens when decisions are taken by algorithms of governance that no one understands?

And what happens when those algorithms are themselves governed by a philosophy called capitalism?

•••

We’ve begun a radical experiment combining technology and capitalism, one that most of us have scarcely considered.

Our public commons – that which we held as owned by all, to the benefit of all – is increasingly becoming privatized.

Thousands of companies are now dedicated to revenue extraction in the course of delivering what were once held as public goods. Public transportation is being hollowed out by Uber, Lyft, and their competitors (leveraging public goods like roadways, traffic infrastructure, and GPS). Public education is losing funding to private schools, MOOCs, and for-profit universities. Public health, most disastrously in the United States, is driven by a capitalist philosophy tinged with technocratic regulatory capture. And in perhaps the greatest example of all, we’ve ceded our financial future to the almighty 401(k) – individuals can no longer count on pensions or social safety nets – they must instead secure their future by investing in “the markets” – markets which have become inhospitable to anyone lacking the technological acumen of the world’s most cutting-edge hedge funds.

What’s remarkable and terrifying about all of this is the fact that the combinatorial nature of technology and capitalism outputs fantastic wealth for a very few, and increasing poverty for the very many. It’s all well and good to claim that everyone should have a 401(k). It’s irresponsible to continue that claim when faced with the reality that 84 percent of the stock market is owned by the wealthiest ten percent of the population.

This outcome is not sustainable. When a system of governance fails us, we must examine its fundamental inputs and processes, and seek to change them.

•••

So what truly is governing us in the age of data, code, algorithms and processing?

For nearly five decades, the singular true north of capitalism has been to enrich corporate shareholders.

Other stakeholders – employees, impacted communities, partners, customers – do not directly determine the governance of most corporations.

Corporations are motivated by incentives and available resources. When the incentive is extraction of capital to be placed in the pockets of shareholders, and a new resource becomes available which will aid that extraction, companies will invent fantastic new ways to leverage that resource so as to achieve their goal. If that resource allows corporations to skirt current regulatory frameworks, or bypass them altogether, so much the better.

The new resource, of course, is the combination of data, code, algorithms and processing. Unbridled, replete with the human right of speech and its attendant purchasing of political power, corporations are quite literally becoming our governance model.

Now the caveat: Allow me to state for the record that I am not a socialist. If you’ve never read my work, know I’ve started six companies, invested in scores more, and consider myself an advocate of transparently governed free markets. But we’ve leaned too far over our skis – the facts no longer support our current governance model.

•••

We turn our world to data; leveraging that data, technocapitalism then terraforms it. Nowhere is this more evident than with automation – the largest cost of nearly every corporation is human labor, and digital technologies are getting extraordinarily good at replacing that cost.

Nearly everyone agrees this shift is not new – yes yes, a century or two ago, most of us were farmers. But this shift is coming far faster, and with far less considered governance. The last great transition came over generations. Technocapitalism has risen to its current heights in ten short years. Ten years.

If we are going to get this shift right, we urgently need to engage in a dialog about our core values. Can we perhaps rethink the purpose of work, given work no longer means labor? Can we reinvent our corporations and our regulatory frameworks to honor, celebrate and support our highest ideals? Can we prioritize what it means to be human even as we create and deploy tools that make redundant the way of life we’ve come to know these past few centuries?

These questions beg a simpler one: What makes us human?

I dusted off my old cultural anthropology texts, and consulted the scholars. The study of humankind teaches us that we are unique in that we are transcendent toolmakers – and digital technology is our most powerful tool. We have nuanced language, which allows us both recollection of the past, and foresight into the future. We are wired – literally at the molecular level – to be social, to depend on one another, to share information and experience. Thanks to all of this, we have the capability to wonder, to understand our place in the world, to philosophize. The love of beauty, philosophers will tell you, is the most human thing of all.

Oh, but then again, we are uniquely capable of intentionally destroying ourselves. Plenty of species can do that by mistake. We’re unique in our ability to do it on purpose.

But perhaps the thing that makes us most human is our love of storytelling, for narrative weaves nearly everything human into one grand experience. Our greatest philosophers even tell stories about telling stories! The best stories employ sublime language, advanced tools, deep community, profound wonder, and inescapable narrative tension. That ability to destroy ourselves? That’s the greatest narrative driver in the history of mankind.

How will it turn out?

•••

We are storytelling engines uniquely capable of understanding our place in the world. And it’s time to change our story, before we fail a grand test of our own making: Can we transition to a world inhabited by both ourselves, and the otherness of the technology we’ve created? Should we fail, nature will indifferently shrug its shoulders. It has billions of years to let the whole experiment play over again.

We are the architects of this grand narrative. Let’s not miss our opportunity to get it right.

Adapted from a speech presented at the Thrival Humans X Tech conference in Pittsburgh earlier this week.

Software Ate The World. Now What?

By


Seven or so years ago, a famous VC penned a manifesto of sorts. Writing at a time when the world was still skeptical of the dominance to which his industry has now ascended (to think, such a time existed, and so few years ago!), Marc Andreessen had a message for the doubters, the naysayers, and the Wall St. analysts who were (credibly!) claiming that his investments amounted to not much more than a bubble:

Software, he claimed, was eating the world.

Seven years later, no one can dispute Andreessen’s prescience. The man was right: If you had purchased a basket of his favorite stocks back then – he name-checked Apple, Amazon, and Facebook directly – you’d be up at least 10X, if not more. Software, it seems, has indeed eaten the world, and those smart (and rich) enough to put money into technology, as Andreessen has been, have done very, very well for themselves.

Of course, not many people have in fact been that smart. Nor are they that rich. As of last year, ten percent of investors own 84 percent of the stock market, and that ratio only gets worse as time goes by. Most of our society simply isn’t benefiting from this trend of software eating the world. In fact, most of them live in the very world that software ate.

***

We – us, all of us – have turned the world to data. Some of us – the founders of software companies, the funders of those founders, the cheerleaders who run the capital markets – took that data and used it to change the world. Along the way, the world didn’t disappear like some unfortunate animal descending a python’s midsection. No, the world remains.

Who are we now that we’ve been eaten? What have we become?

These are questions, it turns out, that almost none of technology’s leadership have deeply pondered. It certainly never came up in Andreessen’s manifesto. And it’s manifestly evident in the behavior of our most treasured technology founders. They are puzzled by these newfound demands from United States senators and European socialists. Don’t they understand that regulation is damage to be routed around?

***

But the world is not just software. The world is physics, it’s crying babies and shit on the sidewalk, it’s opioids and ecstasy, it’s car crashes and Senate hearings, lovers and philosophers, lost opportunities and spinning planets around untold stars. The world is still real. Software hasn’t eaten it as much as bound it in a spell, temporarily I hope, while we figure out what comes next.

Software – data, code, algorithms, processing – software has dressed the world in new infrastructure. But this is a conversation, not a process of digestion. It is a conversation between the physical and the digital, a synthesis we must master if we are to avoid terrible fates, and continue to embrace fantastic ones.

(cross posted from Searchblog)