I’ll never forget a meal I had with a senior executive at Facebook many years ago, back when I was just starting to question the motives of the burgeoning startup’s ambition. I asked whether the company would ever support publishers across the “rest of the web” – perhaps through an advertising system competitive with Google’s AdSense. The executive’s response was startling and immediate. Everything anyone ever needs to do – including publishing – can and should be done on Facebook. The rest of the Internet was a sideshow. It’s just easier if everything is on one platform, I was told. And Facebook’s goal was to be that platform.
This is an edited version of a series of talks I first gave in New York over the past week, outlining my work at Columbia. Many thanks to Reinvent, Pete Leyden, Cap Gemini, Columbia University, Cossette/Vision7, and the New York Times for hosting and helping me. Cross posted from Searchblog.
I have spent 30-plus years in the tech and media industries, mainly as a journalist, observer, and founder of companies that either make or support journalism and storytelling. When it comes to many of the things I am going to talk about here, I am not an expert. If I am expert at anything at all, it’s asking questions of technology, and of the media and marketing platforms created by technology. In that spirit I offer the questions I am currently pursuing, in the hope of sparking a dialog with this esteemed audience that leads to better answers.
Some context: Since 1986, I’ve spent my life chasing one story: The impact of technology on society. For whatever reason, I did this by founding or co-founding companies. Wired was kind of a first album, as it were, and it focused on the story broadly told. The Industry Standard focused on the business of the Internet, as did my conference Web 2. Federated Media was a tech and advertising platform for high quality “conversational” publishers, built with the idea that our social discourse was undergoing a fundamental shift, and that publishers and their audiences needed to be empowered to have a new kind of conversation. Sovrn, a company I still chair, has a similar mission, but with a serious data and tech focus. NewCo, my last company (well, I’ve got another one in the works, perhaps we can talk about that during Q&A) seeks to illuminate the impact of companies on society.
It’s Broke. Let’s Fix It.
And it is that impact that has led me to the work I am doing now, here in New York. I moved here just last Fall, seeking a change in the conversation. To be honest, the Valley was starting to feel a bit…cloistered.
A huge story – the very same story, just expanded – is once again rising. Only it’s just … more urgent. 25 years after the launch of Wired, the wildest dreams of its pages have come true. Back in 1992 we asked ourselves: What would happen to the world when technology becomes the most fundamental driver of our society? Today, we are living in the answer. Turns out, we don’t always like the result.
Most of my career has been spent evangelizing the power of technology to positively transform business, education, and politics. But five or so years ago, that job started to get harder. The externalities of technology’s grip on society were showing through the shiny optimism of the Wired era. Two years ago, in the aftermath of an election that I believe will prove to be the political equivalent of the Black Sox scandal, the world began to wake up to the same thing.
So it’s time to ask ourselves a simple question: What can we do to fix this?
Let’s start with some context. My current work is split between two projects: One has to do with data governance, the other political media. How might they be connected? I hope by the end of this talk, it’ll make sense.
So let’s go. In my work at Columbia, I’m currently obsessed with two things. The first is a single word:

Data
How much have you thought about that word in the past two years?
Given how much it’s been in the news lately, likely quite a lot. Big data, data breaches, data mining, data science…Today, we’re all about the data.
The second is another word:

Governance

When was the last time you thought about that word?
Government – well for sure, I’d wager that’s increased given who’s been running the country these past two years. But Governance? Maybe not as much.
But how often have you put the two words together?
Likely not quite as much.
It’s time to fix that.
Because we have slouched our way into an architecture of data governance that is broken, that severely retards economic and cultural innovation, and that harms society as a whole.
Let’s unpack that and define our terms. We’ll start with Governance.
What is governance? It’s an …
Architecture of control
A regulatory framework that manages how a system works. The word is most often used in relation to political governance – which we care about a lot for the purposes of this talk – but the word applies to all systems, and in particular to corporations, which is also a key point in the research we’re doing.
But in my work, when I refer to governance, I am referring to “the system of rules, practices and processes by which a firm controls its relationship to its community.” Who’s that community? You, me, developers and partners in the ecosystem, for the most part. More on that soon.
Now, what is data? I like to think of it as…
I’m not in love with this phrase, but again, this is a first draft of what I hope will grow into more refined (ha) work. Data is the core commodity from which information is created or processed. Data has many attributes, not all of which are agreed upon. But I think it’s inarguable that the difference between data and information is …
That’s Socrates, who thought about this shit, a lot. Information is data that means something to us (and possibly to the entire universe, as it relates to the second law of thermodynamics – but physics is not the focus of this talk, nor is a possible fourth law of thermodynamics).
As we’ve learned – the hard way – over the past decade, there are a few very large companies which have purview over a massive catalog of meaningful data, meaningful not only to us, but to society at large. And it’s this societal aspect that, until recently, we’ve actively overlooked. We’re in the midst of a grand data renaissance, which if history remotely echoes, I fervently hope will give rise to …
A (Data) Enlightenment
That’s John Locke, an Enlightenment philosopher. Allow me to pull back for a second and attempt to lay some context for the work I hope to advance in the next few years. It starts with the Enlightenment, a great leap forward in human history (and the subject of a robust defense by Steven Pinker last year).
Arguably the crowning document of the Enlightenment is…
The United States Constitution
This declaration of the rights of humankind (well mankind for the first couple of centuries) itself took more than three centuries to emerge (and cribbed generously from the French and English, channeling Locke and Hume). Our current political and economic culture is, of course, a direct descendant of this living document. American democracy was founded upon Enlightenment principles. And the cornerstone of Enlightenment ideas is …
The Scientific Method
That’s Aristotle, often credited with originating the scientific method, which is based on considered thesis formation, rigorous observation, comprehensive data collection, healthy skepticism, and sharing/transparency. The scientific method is our best tool, so far, for advancing human progress and problem solving.
And the scientific method – the pursuit of truth and progress – all that turns on the data. Prompting the question….
Who Has the Most (and Best) Data?
This is the question we are finally asking ourselves, the answer to which is sounding alarms. As we all know, we are in a renaissance, a deluge, an orgy of data creation. We have invented sophisticated new data sensing organs – digital technologies – that have delivered us superhuman powers for the discovery, classification, and sense-making of data.
Not surprisingly, it is technology companies – driven as they are by the raw economics of profit-seeking capital, and armed with these self-fulfilling tools of digital exploration and capture – that have initially taken ownership of this emerging resource. And that is a problem, one we’ve only begun to understand and respond to as a society. Which leads to an important question:
Who Is Governing Data?
In the US, anyway, the truth is, we don’t have a clear answer to this question. Our light touch regulatory framework created a tech-driven frenzy of company building, but it failed to anticipate massive externalities, now that these companies have come to dominate our capital markets. Clearly, the Tech Platform Companies have the most valuable data – at least if the capital markets are to be believed. Companies like Google. Facebook. Amazon. Apple.
All of these companies have very strong governance structures in place for the data they control. These structures are set internally, and are not subject to much (if any) government regulation. And by extension, nearly all companies that manage data, no matter their size, have similar governance models because they are all drafting off those companies’ work (and success). This has created a phenomenon in our society, one I’ve recently come to call …
The Default Internet Constitution
Without really thinking critically about it, the technology and finance industries have delivered us a new Constitution, a fundamental governance document controlling how information flows through the Internet. It was never ratified by anyone, never debated publicly, never published with a flourish of the pen, and it’s damn hard to read. But, it is based on a discoverable corpus. That corpus, at its core, is based on …
Terms of Service and EULAs
Like it or not, there is a governance model for the US Internet and the data which flows across it: Terms of Service and End User Licensing Agreements. Of course, we actively ignore them – who on earth would ever read them? One researcher did the math, and figured it’d take 76 work days for the average American to read all of the policies she clicks past (and that was six years ago!).
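The arithmetic behind that figure is easy to sanity-check. Here’s a back-of-envelope version; the policy count and minutes-per-policy are round hypothetical numbers chosen only to illustrate the order of magnitude, not the researcher’s actual inputs:

```python
# Back-of-envelope: how long would it take to read every policy you
# "agree" to in a year? Both inputs below are hypothetical round numbers.
policies_per_year = 1460       # roughly four new policies a day
minutes_per_policy = 25        # a short-ish legal document, read carefully

hours = policies_per_year * minutes_per_policy / 60
work_days = hours / 8          # assuming an 8-hour working day

print(round(work_days))        # on these assumptions, about 76 work days
```

Whatever the exact inputs, the point survives: no one can actually read what they consent to.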
Of course, ignoring begets ignorance, and we’ve ignored Terms of Service at our peril. No one understands them, but we certainly should – because if we’re going to make change, we’ll want to change these Terms of Service, dramatically. They create the architecture that determines how data, and therefore societal innovation and value, flow around the Internet.
And let’s be clear, these terms of service have hemmed data into silos. They’re built by lawyers, based on the desires of engineers who are – for the most part – far more interested in the product they are creating than any externalities those products might create.
And what are the lawyers concerned with? Well, they have one True North: Protect the core business model of their companies.
And what is that business model? Engagement. Attention. And for most, data-driven personalized advertising. (Don’t get me started about Apple being different. The company is utterly dependent on those apps animating that otherwise black slate of glass they call an iPhone).
So what ensures engagement and attention? Information refined from data.
So let’s take a look at a rough map of what this Terms of Service-driven architecture looks like:
The Mainframe Architecture
Does this look familiar? If you’re a student of technology industry history, it should, because this is how mainframes worked in the early days of computing. Data compute, data storage, and data transport are handled by the big processor in the sky. The “dumb terminal” lives at the edge of the system, a ‘thin client’ for data input and application output. Intelligence, control, and value exchange live in the center. The center determines all that occurs at the edge.
Remind you of any apps you’ve used lately?
But it wasn’t always this way. The Internet used to look like this:
The Internet 1.0 Architecture
I’m one of the early true believers in the open Internet. Do you remember that world? It’s mostly gone now, but there was a time, from about 1994 to 2012, when the Internet ran on a different architecture, one based on the idea that the intelligence should reside in the nodes – the sites – not at the center. Data was shared laterally between sites. Of course, back then the tech was not that great, and there was a lot of work to be done. But we all knew we’d get there….
…Till the platforms got there first. And they got there very, very well – their stuff was both elegant and addictive.
But could we learn from Internet 1.0, and imagine a scenario inspired by its core lessons? Technologically, the answer is “of course.” This is why so many folks are excited by blockchain, after all (well that, and ICO ponzi schemes…).
But it might be too late, because we’ve already ceded massive value to a broken model. The top five technology firms dominate our capital markets. We’re seriously (over)invested in the current architecture of data control. Changing it would be a massive disruption. But what if we can imagine how such change might occur?
This is the question of my work.
So…what is my work?
A New Architecture
If we’re stuck in an architecture that limits the potential of data in our society, we must envision a world under a different kind of architecture, one that pushes control, agency, and value exchange back out to the node.
Those of us old enough to remember the heady days of Web 1.0 foolishly assumed such a world would emerge unimpeded. But as Tim Wu has pointed out, media and technology run in cycles, ultimately consolidating into a handful of companies with their hands on the Master Switch – we live in a system that rewards the Curse of Bigness. If we are going to change that system, we have to think hard about what we want in its place.
I’ve given this some thought, and I know what I want.
Let The Data Flow
Imagine a scenario where you can securely share your Amazon purchase data with Walmart, and receive significant economic value for doing so (I’ve written this idea up at length here). Of course, this idea is entirely impossible today. This represents a major economic innovation blocked.
Or imagine a free marketplace for data that allows a would-be restaurant owner to model her customer base’s preferences and unique tastes (I’ve written this idea up at length here). Of course, this is also impossible today – another major cultural and small-business innovation impeded.
Neither of these ideas is even remotely possible today – nor are the products of the thousands of similar questions entrepreneurs might ask of the data rotting in plain sight across our poorly architected data economy.
We all lose when the data can’t flow. We lose collectively, and we lose individually.
But imagine if it was possible?!
How might such scenarios become reality?
We’re at a key inflection point in answering that question.
2019 is the year of data regulation. I don’t believe any meaningful regulation will pass here in the US, but it’ll be the year everyone talks about it. It started with the Cambridge Analytica/Facebook hearings, and now every self-respecting committee chair wants a tech CEO in their hot seat. Congress and the American people have woken up to the problem, and any number of regulatory fixes are being debated. Beyond the privacy shitstorm and its associated regulatory response, which I’d love to toss around during Q&A, the most discussed regulatory remedy is antitrust – the curse of bigness is best fixed by breaking up the big guys. I understand the goal, and might even support it, but I don’t think we need to even do that. Instead, I submit for your consideration one improbable, crazy, and possibly elegant solution.
The Token Act
I’m calling it the Token Act.
It requires one thing: Every data processing service at a certain scale must deliver back to its customers any co-created data in a machine-readable format, easily portable to any other data processing service.
Imagine the economic value unlocked, the exponential impact on innovation such a simple rule would have. Of course we must acknowledge the negative short term impact such a policy would have on the big guys. But it also creates an unparalleled opportunity for them – the token of course can include a vig – a percentage of all future revenue associated with that data, for the value the platform helped to create. This model could drive a far bigger business in the long run, and a far healthier one for all parties concerned.
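To make the idea concrete, here’s a minimal sketch of what such a token might look like as a machine-readable record. Every field name here is my own assumption for illustration – the Token Act as described specifies only portability, not a format:

```python
import json

# A hypothetical "data token": the machine-readable record of co-created
# data a platform would hand back to its customer under the Token Act.
# Field names and values are illustrative assumptions, not a specification.
token = {
    "subject": "user-8675309",         # the customer the data belongs to
    "source": "example-commerce.com",  # the platform that co-created it
    "schema": "purchase-history/v1",   # what kind of data the token carries
    "records": [
        {"sku": "robe-xl-grey", "price_usd": 39.99, "date": "2019-02-14"},
    ],
    "vig": 0.05,  # the platform's share of future revenue from this data
}

# "Machine readable, easily portable": any other service can parse it.
portable = json.dumps(token, sort_keys=True)
parsed = json.loads(portable)
```

The vig field is the key design choice: the originating platform keeps a stake in the value its data helps create downstream, which is what turns portability from a pure loss into a long-run business.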
I can’t prove it yet, but I sense this approach could 10 to 100X our economy. We’ve got some work to do on proving that, but I think we can.
Imagine what would occur if the data was allowed to flow freely. Imagine the upleveling of how firms would have to compete. They’d have to move beyond mere data hoarding, beyond the tending of miniature walled gardens (most app makers) and massive walled agribusinesses (in the case of the platforms – and ADM and Monsanto, but that’s another chapter in the book, one of many).
Instead, firms would have to compete on creating more valuable tokens – more valuable units of human meaning. And they’d encourage sharing those tokens widely – with the fundamental check of user agency and control governing the entire system.
The bit has flipped, and the intelligence would once again be driven to the nodes.
But the Token Act is just an exercise in envisioning a society governed by a different kind of data architecture. There are certainly better or more refined ideas.
And to get to them, we really need to understand how we’re governed today. And now that I’ve gotten nearly to the end of my prepared remarks, I’ll tell you what I’m working on at Columbia with several super smart grad students:
Mapping Data Flows
If we are going to understand how to change our broken architecture of data flows, we need to deeply understand where we are today. And that means visualizing a complex mess. I’m working with a small team of researchers at Columbia, and together we are turning the Terms of Service at Amazon, Apple, Facebook and Google into a database that will drive an interactive visualization – a blueprint of sorts for how data is governed across the US internet. We’re focusing on the advertising market, for obvious reasons, but it’s my hope we might create a model that can be applied to nearly any information rich market. It’s early stages, but our goal is to have something published by the end of May.
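To give a flavor of what “turning Terms of Service into a database” might involve, here’s a toy sketch using SQLite. The schema and the sample clause are my own assumptions for illustration; the actual project’s data model is surely richer:

```python
import sqlite3

# A minimal sketch: decompose Terms of Service into queryable clauses.
# Table and column names are hypothetical, chosen only for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE clauses (
        company     TEXT,   -- e.g. 'Google', 'Facebook'
        document    TEXT,   -- which policy the clause comes from
        data_type   TEXT,   -- e.g. 'location', 'purchase history'
        permission  TEXT    -- e.g. 'collect', 'share with advertisers'
    )
""")
conn.execute(
    "INSERT INTO clauses VALUES (?, ?, ?, ?)",
    ("ExampleCo", "Privacy Policy", "location", "share with advertisers"),
)

# Once clauses are structured, "who may share location data with
# advertisers?" stops being a 76-work-day reading project and becomes a query.
rows = conn.execute(
    "SELECT company FROM clauses WHERE data_type = 'location' "
    "AND permission = 'share with advertisers'"
).fetchall()
```

The payoff of structuring the corpus this way is exactly the visualization work described above: once the clauses are rows, the flows can be drawn.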
I’ve not spoken much about advertising during this talk, and that was purposeful. I’ve written at length about how we came to the place we now inhabit, and the role of programmatic advertising in getting us there.
Truth is, I don’t see advertising as the cause of this problem, but rather an outgrowth of it. If you offer any company a deal that puts new customers on a platter, as Google did with AdWords, or Facebook has with NewsFeed, well, there’s no way those companies will refuse. Every major advertiser has embraced search and social, as have millions of smaller ones.
Our problem is simply this: The people who run technology platforms don’t actually understand the power and limitations of their systems, and let’s be honest, nor do we. Renée DiResta has pointed this out in recent work around Russian interference in our national dialog and elections: Any system that allows for automated processing of messages is subject to directed, sophisticated abuse. The place for regulation is not in advertising (even though that’s where it’s begun with the Honest Ads Act), it’s in how the system works architecturally.
But advertisers must be highly aware of this transitional phase in the architecture of a system that has been a major source of revenue and business results. We must imagine what comes next, we must prepare for it, and perhaps, just perhaps, we should invent it, or at the very least play a far more active role than we’re playing currently.
I believe that if industry, government, media and consumers unite to address the core architectural issues inherent to how we manage data – in the process giving consumers economic, creative, and personal agency over the data they co-create with platforms – the question of toxic advertising will disappear faster than it arose.
But I’ve talked (or written) long enough. Thank you so much for coming (for reading), and for being part of this conversation. Now, let’s start it.
Every year I write predictions for the year ahead. And at the end of that year, I grade myself on how I did. I love writing this post, and thankfully you all love reading it as well. These “How I Did” posts are usually the most popular of the year, beating even the original predictions in readership and engagement.
What’s that about, anyway? Is it the spectacle of watching a guy admit he got things wrong? Cheering when I get it right? Perhaps it’s just a chance to pull back and review the year that was, all the while marveling at how much happened in twelve short months. And 2018 does not disappoint.
Here we go:
Prediction #1: Crypto/blockchain dies as a major story. Cast yourself back to late 2017 when Bitcoin was pushing $20,000 and the entire tech sector was obsessed with blockchain everything. ICOs were raising hundreds of millions of dollars, the press was hyping (or denigrating) it all, and the fools were truly rushing in. In my prediction post, I struck a more measured tone: “…there’s simply too much real-but-boring work to be done right now in the space. Does anyone remember 1994? Sure, it’s the year the Mozilla team decamped from Illinois to the Valley, but it’s not the year the Web broke out as a mainstream story. That came a few years later. 2018 is a year of hard work on the problems that have kept blockchain from becoming what most of us believe it can truly become. And that kind of work doesn’t keep the public engaged all year long.” I think I got that right. Bitcoin has crashed to earth, and those who remain in the space are deep in the real work – which I still believe to be fundamentally important to the future of not only tech, but society as well. Score: 10/10
Prediction #2: Donald Trump blows up. I don’t usually make political predictions, but by 2017, Trump was the story, bigger than politics, and bigger than tech. I wrote: “2018 is the year [Trump] goes down, and when [he] does, it will happen quickly (in terms of its inevitability) and painfully slowly (in terms of it actually resolving). This of course is a terrible thing to predict for our country, but we got ourselves into this mess, and we’ll have to get ourselves out of it. It will be the defining story of the year.” I think I also got this one right. Trump is done – nearly everyone I trust in politics agrees with that statement. I won’t recount all the reasons, but here are a few: No fewer than 17 ongoing investigations of the President and/or his organizations. A tanking stock market that has lost all faith in the President’s leadership. Nearly 40 actual indictments and several high profile guilty verdicts. A Democratic majority in the House preparing an endless barrage of subpoenas and investigations. And a Republican party finally ready to abandon its leader. Net net: Trump is toast. It’s just going to take a while for that final pat of butter. Score: 10/10
Prediction #3: Facts make a comeback. Here’s what I wrote in support of this assertion: “2018 is the year the Enlightenment makes a robust return to the national conversation. Liberals will finally figure out that it’s utterly stupid to blame the “other side” for our nation’s troubles. Several viral memes will break out throughout the year focused on a core narrative of truth and fact. The 2018 elections will prove that our public is not rotten or corrupt, but merely susceptible to the same fever dreams we’ve always been susceptible to, and the fever always breaks. A rising tide of technology-driven engagement will help drive all of this.” I’d like to claim I nailed this one, but I think the trend lines are supportive. Real journalism had a banner year, with subscriptions to high-integrity publications breaking records year on year. Most smart liberals have realized that the politics of blame is a losing game. And I was happily right about the 2018 elections, which was one of the most definitive rebukes of a sitting President in the history of our nation. As for those “viral memes” I predicted, I’m not sure how I might prove or disprove that assertion – none come to mind, but I may have missed something, given what a blur 2018 turned out to be. Alas, that “rising tide of technology-driven engagement” was a pretty useless statement. Everything these days is tech-driven…so I deserve to be dinged for that pablum. But overall? Not bad at all. Score: 7/10
Prediction #4: Tech stocks overall have a sideways year. It might be hard to give me credit for this one, given how the FANG names have tanked over the past few months, but cast your mind back to when I wrote this prediction, in late December: Tech stocks were doing nothing but going up. And where are they now? After continuing to climb for months, they’re….mostly where they started the year. Sideways. Apple started at around 170, and today is at … 156. Google started at 1048, and is now at…1037. Amazon and Netflix did better, rising double-digit percentages, but plenty of other tech stocks are down significantly year on year. The tech-driven Nasdaq index started the year at around 7000; as of today, it’s down to 6600. So, some up, some down, and a whole lot of … sideways. As I wrote: “All the year-in-review stock pieces will note that tech didn’t drive the markets in the way they have over the past few years. This is because the Big Four have some troubles this coming year.” Ummm….yep, and see the next two predictions… Score: 9/10.
Prediction #6: Google/Alphabet will have a terrible first half (reputation wise), but recover after that. Well, in my original post, I predicted a #MeToo shoe dropping around Google Chairman Eric Schmidt. That didn’t happen exactly, though the whisper-ma-phone was sure running hot for the first few months of the year, and a massive sexual misconduct scandal eventually broke out later in the year. But even if I was wrong on that one point, it’s true the company had a bad first half, and for the most part, a pretty terrible year overall. In March, it had a government AI contract blow up in its face, leading to employee protests and resignations. This trend only continued throughout the year, culminating in thousands of employees walking out in protest of the company’s payouts to alleged sexual harassers. Oh, and that empty chair at Congressional hearings sure didn’t help the company’s reputation. I also predicted more EU fines: Check! A record-breaking $5 billion fine, to be exact. Further, news the company was creating a censored version of its core search engine in China also tarnished big G. But I whiffed when I mulled how the company might get its mojo back: I predicted it would consider breaking itself up and taking the parts public. That didn’t happen (as far as we know). Instead, Google CEO Sundar Pichai finally relented, showing up to endure yet another act in DC’s endless string of political carnivals. Pichai acquitted himself well enough to support my assertion that Google began to recover by year’s end. But as recoveries go, it’s a fragile one. Score: 8/10.
Prediction #7: The Duopoly falls out of favor. This was my annual prediction around the digital advertising marketplace, focused on Facebook and (again) Google. In it, I wrote: “This doesn’t mean year-on-year declines in revenue, but it does mean a falloff in year-on-year growth, and by the end of 2018, a increasingly vocal contingent of influencers inside the advertising world will speak out against the companies (they’re already speaking to me privately about it). One or two of them will publicly cut their spending and move it to other places.” This absolutely occurred. I’ve already chronicled Google’s travails in 2018, and there’s simply not enough pixels to do the same for Facebook. This New York Times piece lays out how advertisers have responded: No Morals. In the piece, and many others like it, top advertisers, including the CEO of a major agency, went on the record decrying Facebook – giving me cause for a #humblebrag, if I do say so myself. Oh, and yes, both Facebook and Google posted lower revenue growth rates year on year. Score: 10/10.
Prediction #8: Pinterest breaks out. As I wrote in my original post: “This one might prove my biggest whiff, or my biggest ‘nailed it.’” Well, near the end of 2018, a slew of reports claimed that Pinterest was about to file for a massive IPO. As if by magic, the world woke up to Pinterest. It seems I was right – but as of yet, the IPO has not been confirmed. So…I’ll not score myself a 10 on this one, but if Pinterest does have a successful IPO early next year, I reserve the right to go back and add a couple of points. Score: 8/10.
Prediction #9: Autonomous vehicles do not become mainstream. Driverless cars have been “just around the corner” for what feels like forever. By late 2017, everyone in the business was claiming they’d break out within a year. But that didn’t happen, regardless of the hype around the first “commercial launch” by Waymo in Phoenix a few weeks ago. I’m sorry, but a “launch” limited to 400 pre-selected and highly vetted beta testers ain’t mainstream – it’s not even a service in any defensible way. We’re still a long, long way off from this utopian vision. Our cities can’t even figure out what to do with electric scooters, for goodness sake. It’ll be a good long while before they figure out driverless cars. Score: 9/10.
Prediction #10: Business leads. I think I need to avoid these spongy predictions, because it’s super hard to prove whether or not they came true. 2018 showed us plenty of examples of business leadership along the lines of what I predicted. Here’s what I wrote: “A crucial new norm in business poised to have a breakout year is the expectation that companies take their responsibilities to all stakeholders as seriously as they take their duty to shareholders. “All stakeholders” means more than customers and employees, it means actually adding value to society beyond just their product or service. 2018 will be the year of “positive externalities” in business.” Well, I could list all the companies that pushed this movement forward. Lots of great companies did great things – Salesforce, a leader in corporate responsibility, even hired a friend of mine to be Chief Ethics Officer. Imagine if every major company empowered such a position? And a powerful Senator – Elizabeth Warren, who likely will run for the presidency in 2019 – laid out her vision for a new approach to corporate responsibility in draft legislation called the Accountable Capitalism Act. But at the end of the day, I’ve got no way to prove that 2018 was “a break out year” for “a crucial new norm in business.” I wish I did, but…I don’t. Score: 5/10.
Overall, I have to say, this was one of the most successful reviews of my predictions ever – and that’s saying something, given I’ve been doing this for more than 15 years. Nine of ten were pretty much correct, with just one being a push. That sets a high bar for my predictions for 2019…coming, I hope, in the next week or so. Until then, thanks as always for being a fellow traveler. And happy new year – may 2019 bring you and yours happiness, health, and gratitude.
If you’ve read my rants for long enough, you know I’m fond of programmatic advertising. I’ve called it the most important artifact in human history, replacing the Macintosh as the most significant tool ever created.
So yes, I think programmatic advertising is a big deal. As I wrote in the aforementioned post:
“I believe the very same technologies we’ve built to serve real time, data-driven advertising will soon be re-purposed across nearly every segment of our society. Programmatic adtech is the heir to the database of intentions – it’s that database turned real time and distributed far outside of search. And that’s a very, very big deal. (I just wish I had a cooler name for it than “adtech.”)”
But lately, I'm starting to wonder if perhaps adtech is failing, not for any technical reason, but because the people leveraging it are complicit in what might best be called a massive failure of imagination.
I’m about to go on a rant here, so please forgive me in advance.
But honestly, who else out there is sick of being followed by ads so stupid a fourth grader could do a better job of targeting them?
Case in point is the ad above. I took this screen shot from my phone this past weekend while I was reading a New York Times article. The image – of a robe Amazon wanted me to buy – was instantly annoying, because I had in fact purchased a robe on Amazon several days before. Why on earth was Amazon retargeting me for a product I just bought?!
But wait, it gets worse! As I perused the next Times article, this ad shows up:
You might think this ad makes more sense. If the dude buys a robe, makes sense to try to sell him a new pair of slippers, no? Well, sure, but only if that same dude didn’t buy a new pair of slippers two weeks ago. Which, in fact, I did just do.
So, yeah, this ad sucks as well. Not only is it not useful or relevant, it’s downright annoying. The vast machinery of adtech has correctly identified me as a robe-and-slippers-buying customer. But it’s failed to realize *I’ve already bought the damn things.*
Is it possible that adtech is this stupid? This poorly instrumented? I mean, are programmatic buyers simply tagging visitors who land on ecommerce pages (male robe intender?) without caring about whether those visitors actually bought anything?
Are the human beings responsible for setting the dials of programmatic just this lazy?
I've been a critical observer of adtech over the past ten or so years, and one consistent takeaway is this: If there's a way for a buyer to cut corners, declare an easy win, and keep doing things the way they've always been done, well, they most certainly will.
But why does it have to be this way? Digging into the examples above yields an extremely frustrating set of facts. Consider the data the adtech infrastructure either got *right* about me as a customer, or could have gotten right:
I am a frequent ecommerce customer, usually buying on Amazon
I recently purchased both a robe and some slippers
I am reading on the New York Times site as a logged-in (i.e., data-rich) customer of the Times' offerings
These are just the obvious data points. My mobile ID and cookies, all of which are available to programmatic buyers, certainly indicate a high household income, a propensity to click on certain kinds of ads, and a rich web browsing history reflecting a thickly veined lode of interest data, among countless other possible inputs.
Imagine if a programmatic campaign actually paid attention to all this rich data? Start with the fact I just purchased a robe and slippers. What are products related to those two that Amazon might show me? Well, according to its own “people who bought this item also bought” algorithms, folks who bought men’s robes also bought robes for the women in their life. Now there’s a cool recommendation! I might have clicked on an ad that showed a cool robe for my wife. But no, I’m shown an ad for a product I already have.
I've got a few calls in to verify my hunch, but I suspect the ugly truth is pure laziness on the part of the folks responsible for buying ads. Consider: The average cost for a thousand views (CPM) of a targeted programmatic advertisement hovers between ten cents (yes, ten pennies) and $2. With costs that low, the advertising community can afford to waste ad inventory.
Let's apply that reality to our robe example. Let's say the robe costs $60, and yields a $20 profit for our e-commerce advertiser, not including marketing costs. That means that same advertiser can spend upwards of $19.99 per unit on advertising (more, if a robe purchaser turns out to be a “big basket” e-commerce spender). So what does our advertiser do? Well, they set up a retargeting campaign aimed at anyone who ever visited our robe's page. With CPMs averaging around a buck, that robe's going to follow nearly 20,000 folks around the internet, hoping that just one of them converts.
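That break-even arithmetic fits in a few lines of Python. The profit and CPM figures below are the illustrative numbers from this example, not real campaign data:

```python
# A sketch of the retargeting math above. The robe profit and CPM are
# the post's illustrative numbers, not figures from a real campaign.

def max_impressions(profit_per_unit: float, cpm: float) -> int:
    """How many ad impressions can one sale's profit pay for?

    CPM is the cost of one thousand impressions, so each impression
    costs cpm / 1000 dollars.
    """
    return round(profit_per_unit * 1000 / cpm)

profit = 19.99  # profit per robe available for marketing spend
cpm = 1.00      # roughly a dollar per thousand impressions

# One conversion pays for nearly 20,000 impressions: the campaign breaks
# even if just 1 in ~20,000 retargeted viewers actually buys the robe.
print(max_impressions(profit, cpm))  # → 19990
```

Which is exactly why nobody bothers to suppress buyers who already converted: at these prices, wasted impressions are a rounding error.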
Put another way, programmatic advertising is a pure numbers game, and as long as the numbers show one penny of profit, no one is motivated to make the system any better. I’ve encountered many similar examples of ad buyers ignoring high-quality data signals, preferring instead to “waste reach” because, well, it’s just easier to set up campaigns on one or two factors. Inventory is cheap. Why not?
This is problematic. What’s the point of having all that rich (and hard won) targeting data if buyers won’t use it, and consumers don’t benefit from it? An ecosystem that fails to encourage innovation will stagnate and lose share to walled gardens like Facebook, Google, and others. If the ads suck on the open web (and they do), then consumers will either install ad blockers (and they are), or abandon the open web altogether (and they are).
In my last post I imagined a world in which large data-driven platforms like Amazon, Google, Spotify, and Uber are compelled to share machine-readable copies of data with their users. There are literally scores, if not hundreds, of wrinkles to iron out around how such a system would work, and in a future post I hope to dig into some of those questions. But for now, come with me on a journey into the future, where the wrinkles have been ironed out and a new marketplace of personally-driven information is flourishing. We'll return to one of the primary examples I sketched out in the aforementioned post: A battle for the allegiance – and pocketbook – of one online shopper, in this case, my wife Michelle.
It’s a crisp winter mid morning in Manhattan when the doorbell rings. Michelle looks up from her laptop, wondering who it might be. She’s not expecting any deliveries from Amazon, usually the source of such interruptions. She glances at her phone, and the Ring app (an Amazon service, naturally) shows a well dressed, smiling young woman at the door. She’s holding what looks like an elegantly wrapped gift in her hands. Now that’s unusual! Michelle checks the date – no anniversaries, no birthdays, no special occasions – so what gives?
Michelle opens the door and is greeted by a woman who introduces herself as Sheila. She tells Michelle she’s been sent over by Walmart. Walmart? Michelle’s never set foot in a Walmart store, and has a less than charitable view of the company overall. Why on earth would Walmart be sending her a special delivery gift box?
Sheila is used to exactly this kind of response – she’s been trained to expect it, and to manage the conversation that ensues. Sheila is a college-educated Walmart management associate, and delivering these gift boxes is a mandatory part of her company training. In fact, Sheila’s future career trajectory is based, in part, on her success at converting Michelle into becoming a Walmart customer, and she’s learned from her colleagues back at corporate that the best way to succeed is to be direct and open while engaging with a top-level prospect.
“Michelle, I know this seems a bit strange, but Walmart has identified you as a premier ecommerce customer – I’m guessing you probably have at least three or four packages a week delivered here?”
“More like three or four a day,” Michelle answers, warming to Sheila’s implied status as a premium customer.
“Yes, it’s amazing how it’s become a daily habit,” Sheila answers. “And as you probably know, Walmart has an online service, but truth be told, we never seem to get the business of folks like you. I’m here to see if we might change that.”
Michelle becomes suspicious. It doesn’t make sense to her – sending over a manager bearing gifts? Such tactics don’t scale – and feel like an intrusion to boot.
Sensing this, Sheila continues. “Look, I’m not here to sell you anything. I’ve got this special gift for you from Doug McMillon, the CEO of Walmart. You’ve been selected to be part of a new program we’re testing – we call it Walton’s Circle. It’s named after Sam Walton, our founder, who was pretty fond of the personal touch. In any case, the gift is yours to keep. There’s some pretty cool stuff in there, I have to say, including La Mer skin cream and some Neuhaus chocolate that’s to die for.”
Michelle smiles. Strange how the world’s biggest retailer, a place she’s never shopped, seems to know her brand preferences for skin care and chocolate. Despite herself, she relaxes a bit.
“Also inside,” Sheila continues, “is an invitation. It’s entirely up to you if you want to accept it, but let me explain?”
“Sure,” Michelle answers.
“Great. Have you heard of the Token Act?”
Michelle frowns. She'd read about this new piece of legislation, something to do with personal data and the right to exchange it for value across the internet. In the run-up to its passage, her husband wouldn't shut up about how revolutionary it was going to be, but so far nothing important in her life had changed.
“Yes, I’ve heard of it,” Michelle answers, “but it all seems pretty abstract.”
“Yeah, I hear that all the time,” Sheila responds. “But that’s where our invitation comes in. Inside the box is an envelope with a code and a website. I imagine you use Amazon…” Sheila glances toward an empty brown box in the hallway with Amazon’s universal smiling logo. Michelle laughs. “Of course you do! I was a huge Amazon customer for years. And that’s what our invitation is about – it’s an invitation to see what might happen if you became a Walmart customer instead. If you go to our site and enter your code, a program will automatically download your Amazon purchase history and run it through Walmart’s historical inventory. Within seconds, you’ll be given a report detailing what you would have saved had you purchased exactly the same products, at the same time, from us instead of Jeff Bezos.”
“Huh,” Michelle responds. “Sounds cool but…that’s my information on Amazon, no? I don’t want you to have that, do I?”
“Of course not,” Sheila says knowingly. “All of your information is protected by LiveRamp Identity, and is never stored or even processed on our servers. You maintain complete control over the process, and can revoke it at any time.”
Michelle had heard of LiveRamp Identity; it was a third-party guarantor of information safety she'd used for a recent mortgage application. She'd also come across it when co-signing a car loan for her college-aged daughter.
“When you put that code into our site, a token is generated that gives us permission to compare our data to yours, and a report is generated,” Sheila explained. “The report is yours to keep and do with what you want. In fact, the report becomes a token in and of itself, and you can submit that token to third party services like TokenTrust, which will audit our work and tell you if our results can be trusted.”
TokenTrust was another service Michelle had heard of; her husband had raved about it as one of the fastest-growing new entrants in the tech industry. The company had recently been featured on 60 Minutes – it played a significant role in a story about Google's search results, if she recalled correctly. Docusign had purchased the company for several billion just last year. In any case, Michelle's suspicions were defused – may as well check this out. I mean, why would Walmart risk its reputation stealing her Amazon data? It was worth at least seeing that report.
Sheila sensed the opening. “The reports are pretty amazing,” she says. “I’ve had clients who’ve discovered they could have saved thousands of dollars a year. And here’s the best part: If, after reviewing and validating the report, you switch to Walmart, we’ll credit your account with those savings – in essence, we’ll retroactively deliver you the savings you would have had all along.”
“Wow. That almost sounds too good to be true!” Michelle says. “But… OK, thanks. I’ll check it out. Thanks for coming by.”
“Absolutely,” Sheila responds. “And here’s my card – that’s my cell, and my email. Let me know if you have any questions.”
Michelle heads back inside and places the gift box on the table next to her laptop. Before opening the box, she wants to be sure this thing is for real. She Googles “Walmart Walton Circle Savings Token” – and the first link is to a Business Insider article: “These Lucky Few Amazon Customers Are Paid Thousands to Switch – By Walmart.” So Sheila wasn’t lying – this program is for real!
Michelle tugs on the satin ribbon surrounding her gift box and raises its sturdy lid. Nestled on straw inside are two jars of La Mer, several samples of Neuhaus chocolates, two of her favorite bath salts, and various high-end household items. The inside lid of the box proclaims “Welcome to Walton's Circle!” in elegant script. At the center of the box is a creamy envelope engraved with her name. Michelle opens it, and just as Sheila mentioned, a URL and a code are included, along with simple instructions.
What the hell, may as well see what comes of it. Turning to her laptop, Michelle heads to Walmart.com – for the first time in her life – and enters her code. Almost instantaneously a dialog pops up, informing her that her report is ready. Would she like to review it?
Why not?! Michelle clicks “Yes” and up comes a side-by-side comparison of her entire Amazon purchase history. She notices that during the early years – roughly until 2006 – there’s not much on the Walmart side of the report. But after that the match rates start to climb, and for the past five or so years, the report shows that 98 percent of the stuff she’s bought at Amazon was also available on Walmart.com. Each purchase has a link, and she tries out one – a chaise lounge she purchased in 2014 (gotta love Prime shipping!). Turns out Walmart didn’t have that exact match, but the report shows several similar alternatives, any of which would have worked. Cool.
Michelle’s eye is drawn to the bottom of the report, to a large sum in red that shows the difference in price between her Amazon purchases and their Walmart doppelgangers.
Holy….cow. Michelle can't believe it. Is this for real? Anticipating the question, Walmart's report software pops up a dialog. “Would you like to validate your token's report using TokenTrust? We'll pay all fees.” Michelle clicks yes, and a TokenTrust site appears. The site shows a “working” icon for several seconds, then returns a simple message: “TokenTrust has reviewed Walmart's claims and your Amazon token, and validates the accuracy of this report.”
Michelle is sold. Next to the $2700 figure at the bottom of her report is one line of text, and a “Go” link. “Would you like to become a founding member of the Walton Circle? We'll take care of all your transition needs, and Sheila, whom you've already met, will be named your personal shopping concierge.”
Michelle hovers momentarily over “Go.” What the hell, she thinks. I can always switch back. And with one click, Michelle does something she never thought she would: She becomes a Walmart customer.
Satisfied, she turns her eyes back to her work. Several new emails have collected in her inbox. One is from Doug McMillon, welcoming her to Walton’s Circle. As she hovers over it, mail refreshes, and a new message piles on top of McMillon’s.
Holy shit. Did Jeff Bezos really just email me?!
Is such a scenario even possible? Well, that question remains unexplored, at least for now. As I wrote in my last post, I’m not certain Amazon’s terms of service would allow for such an information exchange, though it’s currently possible to download exactly the information Walmart would need to stand up such a service. (I’ve done it, it takes a bit of poking around, but it’s very cool to see.) The real question is this: Would Walmart spend the thousands of dollars required to make this kind of customer acquisition possible?
I don't see why not. A high-end e-commerce customer spends more than ten thousand dollars a year online. Over a lifetime, this customer is worth thousands of dollars in profit for a well-run commerce site like Walmart. The most difficult and expensive problem for any brand is switching costs – it's at the core of the most sophisticated marketing efforts in the world. Ford spends hundreds of millions each year trying to convince customers to switch from GM; Verizon spends equal amounts in an effort to pull customers from AT&T. Over the past five years, Walmart has watched Amazon run away with its customers online, even as it has spent billions building a competitive commerce offering. What Walmart needs are “point to” customers – the kind of people who not only become profitable lifelong buyers, but who will tell hundreds of friends, family members and colleagues about their gift box experience.
But to get there, Walmart needs that Amazon token. Wouldn’t it be cool if such a thing actually existed?
Social conversations about difficult and complex topics have arcs – they tend to start scattered, with many threads and potential paths, then resolve over time toward consensus. This consensus differs based on groups within society – Fox News aficionados will cluster one way, NPR devotees another. Regardless of the group, such consensus then becomes presumption – and once a group of people presume, they fail to explore potentially difficult or presumably impossible alternative solutions.
This is often a good thing – an efficient way to get to an answer. But it can also mean we fail to imagine a better solution, because our own biases are obstructing a more elegant path forward.
This is my sense of the current conversation around the impact of what Professor Scott Galloway has named “The Four” – the largest and most powerful American companies in technology (they are Apple, Amazon, Google, and Facebook, for those just returning from a ten-year nap). Over the past year or so, the conversation around technology has become one of “something must be done.” Tech was too powerful, it consumed too much of our data and too much of our economic growth. Europe passed GDPR, Congress held ineffectual hearings, Facebook kept screwing up, Google failed to show up…it was all of a piece.
The conversation evolved into a debate about various remedies, and recently, it’s resolved into a pretty consistent consensus, at least amongst a certain class of tech observers: These companies need to be broken up. Antitrust, many now claim, is the best remedy for the market dominance these companies have amassed.
It’s a seductive response, with seductive historical precedent. In the 1970s and 80s, antitrust broke up AT&T, ultimately paving the way for the Internet to flourish. In the 90s, antitrust provided the framework for the government’s case against Microsoft, opening the door for new companies like Google and Facebook to dominate the next version of the Internet. Why wouldn’t antitrust regulation usher in #Internet3? Imagine a world where YouTube, Instagram, and Amazon Web Services are all separate companies. Would not that world be better?
Perhaps. I’m not well read enough in antitrust law to argue one way or the other, but I know that antitrust turns on the idea of consumer harm (usually measured in terms of price), and there’s a strong argument to be made that a free service like Google or Facebook can’t possibly cause consumer harm. Then again, there are many who argue that data is in fact currency, and The Four have essentially monopolized a class of that currency.
But even as I stare at the antitrust remedy, another solution keeps poking at me, one that on its face seems quite elegant and rather unexplored.
The idea is simply this: Require all companies who’ve reached a certain scale to build machine-readable data portability into their platforms. The right to data portability is explicit in the EU’s newly enacted GDPR framework, but so far the impact has been slight: There’s enough wiggle room in the verbiage to hamper technical implementation and scope. Plus, let’s be honest: Europe has never really been a hotbed of open innovation in the first place.
But what if we had a similar statute here? And I don't mean all of GDPR – that's certainly a non-starter. But that one rule, that one requirement: That every data service at scale had to stand up an API that allowed consumers to access their co-created data, download a copy of it (which I am calling a token), and make that copy available to any service they deemed worthy?
Imagine what might come of that in the United States?
I'm not a policy expert, and the devil's always in the details. So let me be clear about what I mean when I say “machine-readable data portability”: The right to take, via an API, what is essentially a “token” containing all (or a portion of) the data you've co-created in one service, and offer it, with various protections, permissions, and revocability, to another service. In my Senate testimony, I gave the example of a token that has all your Amazon purchases, which you then give to Walmart so it can do a historical price comparison and tell you how much money you would save if you shopped at its online service. Walmart would have a powerful incentive to get consumers to create and share that token – the most difficult problem in nearly all of business is getting a customer to switch to a similar service. That would be quite a valuable token, I'd wager*.
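For the technically inclined, here's one way such a token might be shaped. Everything in this sketch is hypothetical – the field names, the methods, the services involved – since no such API exists today:

```python
# A hypothetical sketch of a consumer data "token". Every name here is
# invented for illustration; this is not a real service's API.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Purchase:
    sku: str
    price: float        # what the consumer paid at the source service

@dataclass
class DataToken:
    owner: str          # the consumer who co-created the data
    source: str         # service the data was exported from, e.g. "amazon"
    granted_to: str     # service permitted to read the token, e.g. "walmart"
    revoked: bool = False
    purchases: List[Purchase] = field(default_factory=list)

    def revoke(self) -> None:
        """The owner can withdraw permission at any time."""
        self.revoked = True

    def savings_report(self, catalog: Dict[str, float]) -> float:
        """Total the competitor would have saved the owner, given its
        catalog (sku -> price). Items the competitor lacks are skipped."""
        if self.revoked:
            raise PermissionError("token access revoked by owner")
        return round(sum(
            p.price - catalog[p.sku]
            for p in self.purchases
            if p.sku in catalog and catalog[p.sku] < p.price
        ), 2)
```

In the Walmart scenario, the “report” is just `token.savings_report(walmart_catalog)` – a one-shot, permissioned, revocable computation, rather than a wholesale handover of the consumer's purchase history.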
Should be simple to do, no? I mean, don’t we at least co-own the information about what we bought at Amazon?
Well, no. Not really. Between confusing terms of service, hard to find dashboards, and confounding data reporting standards, The Four can both claim we “own our own data” while at the same time ensuring there’ll never be a true market for the information they have about us.
So yes, my idea is easily dismissed. The initial response I've had to it is always some variation of: “There's no way The Four would let this happen.” That's exactly the kind of bias I refer to above – we assume that The Four control the dialog, that they will either thwart this idea through intensive lobbying, clever terms of service, and soft power, or that the idea is practically impossible because of technical or market limitations. To that I ask….Why?
Why is it impossible for me to tokenize all of my Lyft ride data, and give it for free to an academic project that is mapping the impact of ride sharing on congestion in major cities? Why is it impossible for a small business owner to create an RFP for all OpenTable, Resy, and other dining data, so she can determine the best kind of restaurant to open in her neighborhood? I'm pretty certain she'd pay a few bucks a head for that kind of data – so why can't I sell that information to her (with a vig back to OpenTable and Resy) if the value exchange is there to be monetized? Why can't I tokenize and sell my Twitter interactions to a brand (or more likely, an agency or research company) interested in understanding the mind of a father who lives in Manhattan? Why can't I tokenize and trade my Spotify history for better recommendations on live shows to see, or movies to watch, or books to read? Or simply give it to a free service that's sprung up to give me suggestions about new music to check out?
Why can’t an ecosystem of agents, startups, and data brokers emerge, a new industry of information processing not seen since the rise of search optimization in the early aughts, leveraging and arbitraging consumer information to create entirely new kinds of businesses driven by insights currently buried in today’s data monopolies?
Such a world would be fascinating, exciting, sometimes sketchy, and a hell of a lot of fun. It'd be driven by the individual choices of millions of consumers – choosing which agents to trust, which tokens to create, which trades felt fair. There'd be fails, there'd be fraud, there'd be bad actors. But over time, the good would win out over the bad, because the decision making is distributed across the entire population of Internet users. In short, we'd push the decision making to the node – to us. Sure, we'd do stupid things. And sure, the hucksters and the hustlers would make short-term killings. But I'll take an open system like this over a closed one any day of the week, especially if the open system is governed by an architecture empowering the individual to make their own decisions.
It'd be a lot like the Internet was once imagined to be.
I’ve been noodling on such an ecosystem, and I’m convinced it could dwarf our current Internet in terms of overall value created (and credit where credit is due, The Four have created a lot of value). It’d run laps around The Four when it comes to innovation – tens of thousands of new companies would form, all of them feeding off the newly liberated oxygen of high quality, structured, machine readable data. Trusted independent platforms for value exchange would arise. Independent third party agents would munge tokens from competing services, verifying claims and earning the trust of consumers (will Walmart really save you a thousand bucks a year?! We can prove it, or not!). Huge platforms would develop for the processing, securitization, permissioning, and validation of our data. Man, it’d feel like…well, like the recumbent, boring old Internet was finally exciting again.
There's no technical reason why this world doesn't exist. The progenitors of the Web have already imagined it – heck, Tim Berners-Lee recently announced he's working pretty much full time on creating a system devoted to the foundational elements needed for it to blossom.
But until we as a society write machine-readable data portability into law, such efforts will be relegated to interesting side shows. And more likely than not, we’ll spend the next few years arguing about breaking up The Four, and let’s be honest, that’s an argument The Four want us to have, because they’re going to win it (more money, better lawyers, etc. etc.). Instead, we should just require them – and all other data services of scale – to free the data they’ve so far managed to imprison. One simple new law could change all of that. Shouldn’t we consider it?
*In another post, I’ll explore this example in detail. It’s really, really fascinating.
Algorithmic merchandising leaves a bad taste in my mouth. Slowly but surely, it will erode trust for all the tech giants.
Yesterday, I lost it over a hangnail and a two-dollar bottle of hydrogen peroxide.
You know when a hangnail gets angry, and a tiny red ball of pain settles in for a party on the side of your finger? Well, yeah. That was me last night. My usual solution is to stick said finger into a bottle of peroxide for a good long soak. But we were out of the stuff, so, as has become my habit, I turned to Amazon. And that’s when things not only got weird, they got manipulative. Sure, I’ve been ambiently aware of Amazon’s algorithmic pricing and merchandising practices, but last night, the raw power of the company’s control over my routine purchases was on full display.
There’s literally no company in the world with better data about online purchasing than Amazon. So studying how and where it lures a shopper through a purchase process is a worthy exercise. This particular one left a terrible taste in my mouth – one I don’t think I’ll ever shake.
First the detail. Take a look at my search results for “Hydrogen Peroxide” on Amazon. I’ve annotated them with red text and arrows:
As you can see, the most eye catching suggestions – the four featured panels with large images – are all Amazon brands. Big red flag. But Amazon knows sophisticated shoppers like me are suspicious of those in house suggestions, so it’s included a similar product in the space below its own brands (we’ll get to that in a minute).
Above the featured items are ads: sponsored listings that are not Amazon brands, which means the advertiser (a small player named “Blubonic Industries”) is paying Amazon to get ahead of the company’s own promotional power. Either way, Amazon makes money. Second red flag.
By now, I’ve decided I’m not interested in either the sponsored brands at the top, or Amazon’s four featured brands, because, well, I don’t like to be so baldly steered into buying Amazon’s own products. Then again, before I move down to the results below, I do notice something rather amazing – Amazon’s familiar brown bottle of peroxide is really, really cheap – as in, $1.29 cheap. There’s even a helpful per oz. calculation next to the price, screaming: this shit is eight pennies an ounce cheap!
Well, I’m almost sold, but because I hate to be directed into purchases, I’m still going to consider that similar brown bottle below, the one with the red label. Amazon knows this, of course. It’s merchandising 101 – make sure you give the consumer choices, but also, make sure the most profitable choice is presented in such a way as to win the day.
So my eye moves down the page to check out the second bottle. It’s from Swan, a brand I’ve vaguely heard of. Then I check its price.
Nine dollars and sixty nine cents.
Which would you buy? After all, this is a staple, a basic, a chemical compound. And you trust Amazon to get shit right, don't you? I mean, a buck and change – more than seven times cheaper? What a deal!
So…my eyes revert to Amazon’s blue labeled bottle. It wouldn’t have a four-star plus review if it burned your skin, right? And that’s when I notice the tiny icon next to it, which looks like this:
What’s this? Is this yet another annoying subscription service? Ever since we moved to New York, my wife and I have tried to figure out Amazon’s subscription services (Fresh? Pantry? Prime Now? Whole Foods Delivery? Who knows?!). I’m already deeply suspicious of any attempt by Amazon to lure me into paying them monthly for a service that I don’t understand.
But…a buck twenty nine! So I click on the bottle, and the landing page is super clean, and there’s no obvious Prime Pantry mention. Plus, it turns out, that bottle from Amazon is the Whole Foods generic brand, which for whatever reason seems a bit better than a generic Amazon brand. Did I just get lucky? Maybe I can just get some super cheap chemicals delivered in a day to my door, and my annoying hangnail will be a thing of the past soon enough….Right?
Here’s the landing page:
Looks great, the price is amazing, but…Uh oh. I can't get this bottle of peroxide until Sunday. By then, I've likely lost my finger to a flesh-eating bacteria. As I feared, this bottle is nothing more than a baited fish hook for one of Amazon's subscription offers – which, I find out, will cost somewhere between five and thirteen bucks a month. I've signed up for Prime Pantry by mistake in the past, and it wasn't a smooth or enjoyable experience. No thanks. I click back to the original search results. Seems to me Amazon is gaming the shipping deals.
Well of course it is. I’m no longer a happy Amazon customer at this point. Now I’m just annoyed.
But what’s this? If I scroll down below the $9.69 bottle, there’s another choice, also from Swan, and, it seems, exactly the same, if one is to judge just by the image (and we do judge just from the images, let’s just admit it). This one costs almost half as much as the one above it. What’s going on?! Here’s an annotated screen shot:
As you can see, there’s a lot going on. I’ve narrowed my choice down to two non-Amazon brands. They look nearly identical. The most significant difference, at least in terms of the information provided to me by Amazon, is the price – the top bottle is nearly twice as expensive as the bottom one. But the top bottle has a major benefit: I can get it nearly immediately! The bottom one makes me wait a day. Is the wait worth four or five bucks? Hmm.
Also confounding: The bottom bottle has its price broken out on a per ounce basis – 32 cents, exactly four times more than the 8 cents-an-ounce bottle I just looked at from Amazon’s Prime Pantry. Ouch! Now I’m really annoyed, and confused. My eyes dart back up to the $9.69 bottle. As I’ve shown with the empty red circle, there’s….no per-ounce breakdown shown by Amazon. It does tell me that this particular bottle is 32 ounces, whereas the bottom one is 16 ounces.
But why not do the math for me? A quick calculation shows that the top bottle comes out to about 30 cents an ounce – two cents less than the bottom bottle. Why not show that fact?
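The math Amazon declines to show really is trivial. Here’s a back-of-the-envelope sketch – note that the $5.12 price for the bottom bottle is my inference from its listed 32 cents an ounce across 16 ounces, not a figure taken from the page:

```python
# Unit-price math Amazon left off the listing, using the figures
# described above ($9.69 / 32 oz vs. an inferred $5.12 / 16 oz).

def per_ounce(price_dollars: float, ounces: float) -> float:
    """Unit price in cents per ounce."""
    return price_dollars * 100 / ounces

top = per_ounce(9.69, 32)     # ~30.3 cents/oz -- the figure Amazon hides
bottom = per_ounce(5.12, 16)  # 32.0 cents/oz -- the figure Amazon shows

# The cheaper-looking bottle is actually pricier per ounce.
markup = (bottom - top) / top  # ~5.7%; with whole-cent figures,
                               # (32 - 30) / 30 is closer to 6.7%
print(f"top: {top:.1f} c/oz, bottom: {bottom:.1f} c/oz, markup: {markup:.1%}")
```

Depending on whether you round to whole cents first, the gap works out to roughly six or seven percent – which is exactly the margin at stake below.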
This, folks, this is algorithmic merchandising at its finest.
Amazon knows exactly how many clicks it’s going to take for me to reach shopping fatigue. Not “on average for all shoppers,” or even “on average for each shopper who’s ever considered a bottle of hydrogen peroxide.” Amazon knows all of that, of course, but it also knows exactly how long it takes ME to get fatigued, to enter what I like to call “fuck it” mode. As in, “fuck it, I’m tired of this bullshit, I want to get back to the rest of my life. I’m going to buy one of these bottles.”
And because there’s no per-ounce breakdown of the 32-ounce bottle, and because that makes me suspicious of it, and because hell, who ever needs 32 ounces of hydrogen peroxide anyway, well, I’m just going to buy the $5 one.
Ca-ching! Amazon just made a nearly seven percent markup on my purchase. It took five clicks, 15 seconds, and a vast architecture of data and algorithmic mastery to make that profit. Each and every time we purchase something on Amazon, that machinery is engaged in the background, guiding us through choices which ensure the company remains the trillion dollar behemoth we know and…
Do you love Amazon anymore? For that matter, do you love Facebook, Google, or Twitter? Interactions like the one I’ve detailed above are starting to chip away at that presumption. Personally, I’ve gone from cheerleader to skeptic over the past few years, and I’ve broken out into full-blown critic over the last twelve months. I no longer trust Amazon to have my best interests at heart. I’ve lost any trust that Facebook or Twitter can deliver me a public square representative of my democracy. I’ve given up on Google delivering me search results that are truly “organic.” And YouTube? Point solution, at best. I can’t possibly trust the autoplay feature to do much more than waste my time.
What’s happened to our beloved tech icons, and what are the implications of this lost trust? In future posts, I plan on thinking out loud on that topic. I hope you’ll join me. In the meantime, I think I’ll stroll down to CVS and buy myself another bottle of hydrogen peroxide. By the time Amazon’s comes, I’m sure my hangnail will be a distant memory. But that taste in my mouth? That’s going to remain.
Update: Many readers have pointed out that I missed the fact that the top package of peroxide was, in fact, a two-pack. True that, and it would have changed my on-the-fly calculation around which to buy, given the per ounce comparison. However, it doesn’t change the fact that Amazon’s failure to put the per-ounce calculation directly on the page colored that choice.
Also, a rather rich postscript: The bottle I did buy never came. It was “lost” – and Amazon offered me a refund. Sometimes it pays to just hit CVS.
So first the news. To celebrate the company’s eighth birthday, Cloudflare is announcing the launch of a domain registrar. And because the company operates at massive scale, and can afford to do things most companies simply can’t (or won’t – looking at you, Google, Amazon, Facebook) – the company is offering domains *at cost.* In other words, Cloudflare isn’t making one red cent when you register a domain with them. What they pay to register a domain (and yes, that number is fixed, and the same for all domain registrars), is what you pay to register a domain.
OK, you back? Look, I’m not writing this post because I think the news is *that* exciting, though I’ll tell you, I’ve not found many folks who love their domain registrar. I certainly don’t. Most of them are experts at confusing you, at upcharging you, and at scaring you into believing you’re about to either lose your domain or miss some important feature you didn’t know you wanted or needed. I pay an average of about 15-20 bucks for each of the domains I own each year. Cloudflare’s price is about eight dollars.
I own close to 50 domains. That means I’ll save nearly $400 a year when I move all my domains to Cloudflare. That’s real cheddar.
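The arithmetic behind that savings claim, sketched quickly – these per-domain prices are the rough figures from above, not actual registrar quotes:

```python
# Rough savings estimate using the figures in the post:
# ~50 domains, $15-20/yr each today, ~$8/yr at cost via Cloudflare.
domains = 50
typical_price = 15.0     # low end of what the author pays today
cloudflare_price = 8.0   # Cloudflare's roughly at-cost price

savings = domains * (typical_price - cloudflare_price)
print(savings)  # 350.0 -- "nearly $400" at the low end; more at $20/domain
```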
But the real reason I’m writing this post is to point out what a merry market discombobulator Cloudflare has become. This is a company that operates at Google scale, is independent (it’s on a path to an IPO and has raised hundreds of millions of dollars), has a core business model that drives profitable growth (it’s a content distribution network and secure infrastructure vendor), and, most importantly, has a philosophy that is utterly unique in today’s venal, steroidal capital markets (more on that in a second).
With every one of these steps, Cloudflare is doing two things. First, it’s refusing to view the Internet as property to be cornered, as real estate where infrastructure owners can camp out and collect rent. That’s utterly unheard of in a world where Amazon has cornered commerce and hosting, Facebook has cornered social attention, Google has cornered search, and AT&T, Comcast and Verizon are competing to be as walled as a garden can possibly be. Second, Cloudflare is actively exercising a core philosophy which can be honestly described as embracing the best (and most earnest) values of Internet 1.0: The web should be open, freely accessible, and an equal playing field upon which anyone can frolic.
Companies like this are very, very hard to find at scale. At some point, most firms with a “make the world a better place” philosophy succumb to the reality of Peter Thiel’s maxim: Every world-beating company must be a rent-extracting monopoly. Maybe I’m missing something, so please, name me one (in the tech space, anyway) that isn’t operating under this assumption.
Cloudflare is proof that great companies can also be forces for good, down to the molecules of their DNA. This is a company that defines what I mean when I use the word “NewCo.” I can’t wait to see what they do next. And, of course, they’re not perfect, and sure, this post might look naive in a few years.
But gosh, I sure hope it won’t. The world needs more Cloudflares, if only to remind us that it’s possible to move past the exhaustingly brutalist architecture we’ve managed to build around ourselves. Perhaps in fact we can trust ourselves to do what’s right for more than just us, more than just our company, more than just our shareholders. Perhaps our industry can dream to reach just a bit further, and imagine we are agents of larger purpose; and that, if we practice enough, we might earn the right to become what we’ve always imagined we could be, over these so many years: A force for good.
Lord knows it’s been a while since that’s been true. Right?
If you pull far enough back from the day to day debate over technology’s impact on society – far enough that Facebook’s destabilization of democracy, Amazon’s conquering of capitalism, and Google’s domination of our data flows start to blend into one broader, more cohesive picture – what does that picture communicate about the state of humanity today?
Technology forces us to recalculate what it means to be human – what is essentially us, and whether technology represents us, or some emerging otherness which alienates or even terrifies us. We have clothed ourselves in newly discovered data, we have yoked ourselves to new algorithmic harnesses, and we are waking to the human costs of this new practice. Who are we becoming?
Nearly two years ago I predicted that the bloom would fade from the technology industry’s rose, and so far, so true. But as we begin to lose faith in the icons of our former narratives, a nagging and increasingly urgent question arises: In a world where we imagine merging with technology, what makes us uniquely human?
Our lives are now driven in large part by data, code, and processing, and by the governance of algorithms. These determine how data flows, and what insights and decisions are taken as a result.
So yes, software has, in a way, eaten the world. But software is not something being done to us. We have turned the physical world into data, we have translated our thoughts, actions, needs and desires into data, and we have submitted that data for algorithmic inspection and processing.
What we now struggle with is the result of these new habits – the force of technology looping back upon the world, bending it to a new will.
What agency – and responsibility – do we have? Whose will? To what end?
Synonymous with progress, asking not for permission, fearless of breaking things – in particular stupid, worthy-of-being-broken things like government, sclerotic corporations, and fetid social norms – the technology industry reveled for decades in its role as a kind of knighted warrior for societal good. As one Senator told me during the Facebook hearings this past summer, “we purposefully didn’t regulate technology, and that was the right thing to do.” But now? He shrugged. Now, maybe it’s time.
Because technology is already regulating us. I’ve always marveled at libertarians who think the best regulatory framework for government is none at all. Do they think that means there’s no governance?
In our capitalized healthcare system, data, code and algorithms now drive diagnosis, costs, coverage and outcomes. What changes on the ground? People are being denied healthcare, and this equates to life or death in the real world.
Can you get credit to start a business? A loan to better yourself through education? Financial decisions are now determined by data, code, and algorithms. Job applications are turned to data, and run through cohorts of similarities, determining who gets hired, and who ultimately ends up leaving the workforce.
And in perhaps the most human pursuit of all – connecting to other humans – we’ve turned our desires and our hopes to data, swapping centuries of cultural norms for faith in the governance of code and algorithms built – in necessary secrecy – by private corporations.
How does a human being make a decision? Individual decision making has always been opaque – who can query what happens inside someone’s head? We gather input, we weigh options and impacts, we test assumptions through conversations with others. And then we make a call – and we hope for the best.
But when others are making decisions that impact us, well, those kinds of decisions require governance. Over thousands of years we’ve designed systems to ensure that our most important societal decisions can be queried and audited for fairness, that they are defensible against some shared logic, that they will benefit society at large.
We call these systems government. Imperfect, yes – but better than anarchy.
For centuries, government regulations have constrained social decisions that impact health, job applications, credit – even our public square. Dating we’ve left to the governance of cultural norms – norms which, over much of the world, carry the force of government.
But in just the past decade, we’ve ceded much of this governance to private companies – companies motivated by market imperatives which demand their decision making processes be hidden.
Our public government – and our culture – have not kept up.
What happens when decisions are taken by algorithms of governance that no one understands?
And what happens when those algorithms are themselves governed by a philosophy called capitalism?
We’ve begun a radical experiment combining technology and capitalism, one that most of us have scarcely considered.
Our public commons – that which we held as owned by all, to the benefit of all – is increasingly becoming privatized.
Thousands of companies are now dedicated to revenue extraction in the course of delivering what were once held as public goods. Public transportation is being hollowed out by Uber, Lyft, and their competitors (leveraging public goods like roadways, traffic infrastructure, and GPS). Public education is losing funding to private schools, MOOCs, and for-profit universities. Public health, most disastrously in the United States, is driven by a capitalist philosophy tinged with technocratic regulatory capture. And in perhaps the greatest example of all, we’ve ceded our financial future to the almighty 401(k) – individuals can no longer count on pensions or social safety nets – they must instead secure their future by investing in “the markets” – markets which have become inhospitable to anyone lacking the technological acumen of the world’s most cutting-edge hedge funds.
What’s remarkable and terrifying about all of this is the fact that the combination of technology and capitalism outputs fantastic wealth for a very few, and increasing poverty for the very many. It’s all well and good to claim that everyone should have a 401(k). It’s irresponsible to continue that claim when faced with the reality that 84 percent of the stock market is owned by the wealthiest ten percent of the population.
This outcome is not sustainable. When a system of governance fails us, we must examine its fundamental inputs and processes, and seek to change them.
So what truly is governing us in the age of data, code, algorithms and processing?
For nearly five decades, the singular true north of capitalism has been to enrich corporate shareholders.
Other stakeholders – employees, impacted communities, partners, customers – do not directly determine the governance of most corporations.
Corporations are motivated by incentives and available resources. When the incentive is extraction of capital to be placed in the pockets of shareholders, and a new resource becomes available which will aid that extraction, companies will invent fantastic new ways to leverage that resource so as to achieve their goal. If that resource allows corporations to skirt current regulatory frameworks, or bypass them altogether, so much the better.
Now the caveat: Allow me to state for the record that I am not a socialist. If you’ve never read my work, know I’ve started six companies, invested in scores more, and consider myself an advocate of transparently governed free markets. But we’ve leaned too far over our skis – the facts no longer support our current governance model.
We turn our world into data; leveraging that data, technocapitalism then terraforms our world. Nowhere is this more evident than with automation – the largest cost of nearly every corporation is human labor, and digital technologies are getting extraordinarily good at replacing that cost.
Nearly everyone agrees this shift is not new – yes yes, a century or two ago, most of us were farmers. But this shift is coming far faster, and with far less considered governance. The last great transition came over generations. Technocapitalism has risen to its current heights in ten short years. Ten years.
If we are going to get this shift right, we urgently need to engage in a dialog about our core values.
Can we perhaps rethink the purpose of work, given work no longer means labor? Can we reinvent our corporations and our regulatory frameworks to honor, celebrate and support our highest ideals? Can we prioritize what it means to be human even as we create and deploy tools that make redundant the way of life we’ve come to know these past few centuries?
These questions beg a simpler one: What makes us human?
I dusted off my old cultural anthropology texts, and consulted the scholars. The study of humankind teaches us that we are unique in that we are transcendent toolmakers – and digital technology is our most powerful tool. We have nuanced language, which allows us both recollection of the past, and foresight into the future. We are wired – literally at the molecular level – to be social, to depend on one another, to share information and experience. Thanks to all of this, we have the capability to wonder, to understand our place in the world, to philosophize. The love of beauty, philosophers will tell you, is the most human thing of all.
Oh, but then again, we are uniquely capable of intentionally destroying ourselves. Plenty of species can do that by mistake. We’re unique in our ability to do it on purpose.
But perhaps the thing that makes us most human is our love of storytelling, for narrative weaves nearly everything human into one grand experience. Our greatest philosophers even tell stories about telling stories! The best stories employ sublime language, advanced tools, deep community, profound wonder, and inescapable narrative tension. That ability to destroy ourselves? That’s the greatest narrative driver in the history of mankind.
How will it turn out?
We are storytelling engines uniquely capable of understanding our place in the world. And it’s time to change our story, before we fail a grand test of our own making: Can we transition to a world inhabited by both ourselves, and the otherness of the technology we’ve created? Should we fail, nature will indifferently shrug its shoulders. It has billions of years to let the whole experiment play over again.
We are the architects of this grand narrative. Let’s not miss our opportunity to get it right.
For decades technology helped the industrial world work better; more and more, technology is replacing that world completely, and there will be pain. That, though, is precisely why it is worth remembering that the world is not static: to replace humans is, in the long run, to free humans to create entirely new needs and means to satisfy those needs. It’s what we do, and the faith to believe it will happen again will be the best guide in figuring out how.
… The lines outside Amazon Go, though, are a reminder of exactly why aggregator monopolies are something entirely new: these companies are dominant because people love them. Regulation may be as elusive as Marx’s revolution.