Has Uber Built an Ant Farm For Its Drivers?

The NewCo Daily: Today’s Top Stories

When it comes to experimenting with contractors’ behavior, how far do gig-economy companies like Uber go? Pretty far, according to a new in-depth account by The New York Times’ Noam Scheiber.

In a sense, the entire Uber platform is like a Skinner box or ant farm for isolating and examining how different incentives and inputs produce different outputs from riders and drivers. Uber is constantly tweaking the behavior of its drivers, the people formerly known as employees, with gamification-style interface elements — like badge rewards or notifications that pop up to say, “You’re $10 away from making $330 in net earnings. Are you sure you want to go offline?”

Uber wants to keep its drivers working, especially in areas and times of high demand. But when more drivers are working, each is likely to earn less — so the platform owner is fundamentally at odds with its drivers. That’s probably why Uber’s retention problem is so stubborn.

Uber has made life extra difficult for itself thanks to its grabby culture, but every gig economy platform faces similar challenges. Most are experimenting with contractor-behavior modification techniques to some degree, just as Uber is. As Scheiber says, “Pulling psychological levers may eventually become the reigning approach to managing the American worker.”

Companies that hope to build longer-lasting, more solid relationships with the people doing their work should consider giving contractors more direct control over prices, granting them at least some of the power of true marketplace actors. Firms could also share more of the experimental data they're collecting and use what they learn to establish norms that actually enhance workers' welfare.


When Videogames Look Like a Better Deal than Work

One reason Uber may be finding game-style tactics an effective management tool is that videogames increasingly shape how people relate to the world. "It is possible," writes Ryan Avent in The Economist's 1843 Magazine, "that they are too good. Today's games seem to be displacing careers, friendships and families, and thus stopping young people (particularly men) from starting real, adult lives."

Today, 22 percent of American men in their 20s without college degrees are unemployed, unmarried, and still living with their parents. These men are spending a lot of that time playing today's powerfully absorbing, open-ended videogames.

Avent suggests viewing this trend less as a sign of the imminent collapse of civilization than as a rational choice people are making with the hours of their lives. For those who have a way to meet their basic needs and are consumed by a hobby or leisure activity like gaming, he writes, “The income gain from another hour at work starts to look a poor trade-off for an additional hour away from it.” The better the games get — and they are on a rocket ride right now — the more heavily this equation tilts away from work.

Of course, such choices have a long-term impact on personal wealth, freedom, and retirement options that some gamers might regret. (A fraction will eventually find work in the gaming industry itself.) Avent takes a balanced view: A gaming habit entertains and diverts some people, while it addicts and disaffects others. Either way, an eroding work ethic could force deep changes in our economic assumptions. One attractive option: Explore ways of adjusting the dials on the game of real life so that it feels less rigged and more fun to more of its players.


Bots Go Back to the Drawing Board

After a wave of hype in the chatbot industry peaked about a year ago, botmakers have gotten down to the hard labor of figuring out how to make their software work in the real world (Tim Peterson in Marketing Land). Marketers and information providers know that we now spend so much of our time in messaging apps that they’re going to have to meet us there. Bots that turn purchasing, customer service, search and other interactions into a text-message experience looked like a great solution to this problem.

They can be. But there are pitfalls, and botmakers have stumbled into nearly all of them now. Facebook Messenger’s first commercial bot, the weather app Poncho, was a flop, in part because it just didn’t work that well. Users familiar with AI-enabled personal assistants like Alexa and Siri often think that chatbots will serve all their needs, whereas in most cases they’re highly specialized tools with very limited capabilities. Marketers want bots to show a brand’s personality, but users often just want to get on with their order or help request. Chirpy canned responses get tiresome fast.

The chatbot opportunity is still there: 78 percent of U.S. adults haven’t even heard of chatbots yet, which means they haven’t been exposed to the industry’s early misfires and could still be won over. Also, bots have now shown their mettle in an entirely different sphere — politics.

On Twitter, users were aware that at least some of the throngs of Trump-loving "#MAGA" accounts that swarmed Clinton supporters during the election were automated. Now it seems that many of the pro-Bernie Sanders "Bernie Bro" accounts that also attacked Clinton backers and sowed division within the Democratic ranks were bots, too, probably Russian-made.

At ShareBlue, Leah McElrath writes that the attacks by these fake tweeters felt like "being targeted for psychological warfare." That's a basic lesson for businesses to keep in mind as they deploy their own bots: Don't be creepy.
