NewCo Shift Forum
Four AI veterans discuss the implications of strong AI in society
As the introduction below makes clear, the Shift Forum’s AI panel was unique — our moderator, Azeem Azhar, could not make it to San Francisco because of Trump’s travel ban. Nevertheless, we brought Azhar in via video (which cannot be seen in the video capture, alas). Below is the edited transcript of the conversation between Azhar, who runs the popular Exponential View newsletter, Shivon Zilis, of Bloomberg Beta, Vivek Wadhwa of Carnegie Mellon, and Francesca Rossi of IBM.
John Battelle: It’s not often that I say I’m excited one of our speakers couldn’t make it, but I kind of am — because he’s going to be here, sitting in this chair as a video avatar. The moderator of the panel, Azeem Azhar, is a London-based Pakistani man, who is also a father. He called me a few days ago and said, “I don’t really think I can travel to the United States right now. I’ve been stopped at the border many times already, but I’m concerned that even if I get in, I won’t be able to get back.” This was when there were protests going on at airports.
“I just don’t want to risk it. I’m really, really sorry.” I said, “That’s OK. We will find a way to get you here even if it’s by video.” Shivon Zilis, who was a panelist, has agreed to be the moderator, so we’re going to see how this works and experiment. Our program was directly affected by what I think is a pretty ludicrous executive order. Please join me in welcoming Shivon, who will bring on her panel.
Shivon Zilis: When do we get Azeem?
JB: Azeem’s going to show up, too, we hope.
Vivek Wadhwa: He’s going to come in via AI.
Shivon: The secret is he’s already a cyborg. Really excited to have this discussion. I’m Shivon Zilis. I’m a partner and founding member of Bloomberg Beta, which is an early stage venture capital fund that’s exclusively focused on companies that are trying to make the future of work better.
Within that, my laser focus has been on what role machine intelligence plays in that. To date, we’ve invested in 40 different companies solving various problems. A lot of them are trying to give knowledge workers superpowers.
They’re doing things like helping us grow more food with fewer resources and diagnose diseases earlier. It’s been an interesting four years to focus on this topic because four years ago…Azeem!
[laughter and applause]
Shivon: You’re looking particularly handsome today. Have you changed anything?
Azeem Azhar: I’m wearing a jacket for you.
Vivek: Is that really him or is that an AI we’re seeing?
Shivon: Aren’t they one and the same? I think Azeem went cyborg many years ago. Four years ago, this was an esoteric, academic subject, and now, as you guys have seen throughout the conference, AI has come up on pretty much every panel. But the way it’s come up, there’s been this fear of an amorphous thing that’s coming.
On the other side, there’s been, basically, problem + AI = magical solution. I’d argue that it’s not something to be feared and it’s not purely magical, but regardless of that, the point is it’s not even just coming. It’s here and it’s touching our society in many, many different ways already.
The purpose of this panel really is just to have a digging-deep discussion on how do we make AI work for the benefit of us, not just us in this room, but for the benefit of humanity? With that, we’ve got an excellent panel and I will invite our cyborg to introduce himself first.
Azeem: Thank you, Shivon. My name is Azeem. I’m the Vice President of Ventures at Schibsted Media Group, which is a large European media company. I’ve spent the last two or three years looking at AI and its social effects: its effect on the economy, on work, and on culture, society, and family.
Shivon: Excellent. Vivek?
Vivek: I’m Vivek Wadhwa. I’m a fellow at Carnegie Mellon. I’ve been researching the impact of exponential technologies on industries, on people, on humanity itself, and the convergence of all of these technologies, which include AI.
Francesca Rossi: Francesca Rossi, I work at IBM TJ Watson research center and I’m also a professor of computer science at University of Padua in Italy, on leave, currently. I’ve worked on AI all of my career, more than 25 years.
In the last three or four years, I became interested in AI ethics and the impact of AI on the world, both from the research point of view and through various initiatives that I have contributed to putting in place to address issues of deploying AI in the real world.
Shivon: You’ve been in this for 20 years. What’s happened in the last few years that made everybody start freaking out?
Francesca: The way I see it is that, in the past, AI has always been a very flourishing, advanced scientific community, with many different areas advancing all the time. But in these last few years, new techniques related to machine learning, and in particular deep learning, together with the exponentially growing computing power of machines and the huge amounts of data that are collected all the time, gave AI systems the ability to perceive the world much more than before.
Before, AI systems could only be used in very controlled environments where you knew exactly what the outside world was going to be without any surprises. Now, you can put AI systems in places where there can be surprises, there can be uncertainty — so the real world.
That’s what allowed a much more pervasive, wider deployment of AI systems into our lives. That’s what happened. That’s why, now, people mostly talk about deep learning, but this is not the whole of AI. AI has many other disciplines, but deep learning gave AI systems the ability to see, to read, to make sense of data, and so on.
Shivon: As a show of hands, how many people have heard of deep learning, as a term? OK. That just shows how pervasive these technologies have become. Let’s make this really practical for people in the room. This room is stocked with mostly knowledge workers. Candidly, the reason I got interested in this space was that I started using these tools four years ago, they seemed to give me superpowers, and I got excited.
Then, I viewed it as an opportunity. How should people in the room be viewing these new AIs? Are they opportunities? Are they threats? Where do you even begin?
Francesca: I’m an optimist, so I think it’s a great opportunity, and I think we are already seeing why it’s a great opportunity, because AI can really help all of us do better at what we want to do. To make better decisions, more informed decisions, and even more ethical decisions whenever there is ethics at stake.
I really think that our brains and our way of being have certain capabilities and features that are not in machines. We are very complementary to machines. This is a great opportunity for synergy: helping each other to really do better what humans want to do. We’re very good at judgment, and empathy, and emotions, and all these things. Machines are not good at that.
Shivon: Some of us are good at empathy but we’re in Silicon Valley, so it’s a little murky. Vivek, Azeem, what do you guys think?
Vivek: Here’s the dark side of it. It’s exactly what she said about AI now judging ethics. Whose ethics? If it’s Donald Trump’s ethics, my friend Azeem is stuck in England. He’s afraid of traveling here. What’s ethical for him is to keep the Muslims out.
Silicon Valley has its own ethical values: we think we’re gods. We think we’re better than everyone else. We think we know it all. This is what we’re programming into AI. Machine learning is learning from us. It’s learning our values. It’s learning our code.
What are we programming into it? That’s where the problems are. That we are now creating a new species. Look forward 10, 15, 20 years and there’s no doubt about the fact that this thing is on autopilot. It’s learning itself.
It’s deciding what to learn, how to learn and we’re creating something new without having any idea what it is and what’s the value system we’ve given it. That of an elite group of people from Silicon Valley…
Shivon: You make the point that we’re legitimately building brains that have some semblance of…
Vivek: Exactly. Then, you have China, which is in the act…I don’t trust either party to get this stuff right.
Shivon: In a world where we don’t trust anyone, what do we put in our brains, Azeem?
Azeem: It’s a very good point, and I think Vivek has raised this question: AI systems will learn from the data they see in the world around them, which could be worrying if they read Breitbart.
But there is a second problem, which is that we don’t even agree amongst ourselves. In the homogeneous world of Silicon Valley, if we put the trolley problem to this audience (which we won’t do, but people know the trolley problem), it’s a question about whether you make a utilitarian decision about a runaway train. 70 percent of us will act utilitarian and 30 percent won’t. Even in the homogeneity of this audience, we don’t agree on what an ethical outcome needs to be. How do you program that into an AI system?
It becomes more complex when we start to think about what ethical values look like in China, or what ethical values look like in India, where these systems will be deployed even if they’re not designed there.
Francesca: I think that, first of all, it doesn’t necessarily have to be that machines will observe us and then learn these ethical principles by observation. It could be, and I think most people in the scientific community think, that what is good to do is embed these ethical principles into AI systems by a combination of top-down and bottom-up approaches.
In the bottom-up approach, I observe and try to understand how people behave. The other approach, the top-down approach, is to put in some basic rules, which cannot be all of it, because otherwise it would be too brittle…
Shivon: It’s kind of like a robot democracy.
Francesca: Another thing, too, is that these issues could be kind of unmanageable if we think about a general AI that has to solve every problem and work in every domain. If we think about task-specific AI that’s going to solve a specific task and help a specific professional do his job, then this is more manageable.
Because there are professional codes. There are people we trust in that profession whom we can observe over time. We can aggregate how those professionals work. That, I think, is the point of view that is very constructive and much more feasible and concrete in the short term.
So, addressing specific tasks, and trying to have AI systems support the decision-making capabilities of humans in those tasks.
Shivon: That’s what we have to do in the adult world. Let’s talk about the thing that’s been near and dear to our hearts, which is kids. We’re building brains, just more dynamic brains. How do we robot-proof our children? If Jessica’s going into high school right now, what does she study such that she’s going to have a purpose and use in society?
Vivek: Azeem, you want to answer that?
Azeem: Yeah, absolutely. I think that we’ve identified that people need to, to a certain extent, know how the systems work, so either be the designer or be designed. There’s definitely an emphasis on computational thinking.
But, there’s something more, which is learning the skills of metacognition, which is how are we going to learn how to learn? What I’ve observed…
Shivon: Is this the return of liberal arts?
Azeem: It’s partly the return of liberal arts, which I think is important, but I also think it’s about having an appreciation that whatever skills you had over the previous five years are likely to change over the coming two. I’m not sure how those skills of meta-learning form part of the curriculum.
Shivon: You guys think meta-learning is going to be enough?
Vivek: Shivon, the way I’m seeing the future is that we have about 5, 10, or 15 years of amazing employment creation left, and then suddenly jobs start disappearing. Our kids are graduating now, and by the time they are in the workforce, they’re going to be unemployed. The fact is that jobs are going to keep changing, professions are going to keep changing.
Shivon: We’re optimizing for how quickly we change ourselves? Is that the solution?
Vivek: What we have to try to teach our children is how to learn. Let them enjoy learning. Let them realize that technology is going to dominate their lives. Let them be one with it the way they are. Don’t fight them. Encourage them to build a robot, encourage them to write AI, encourage them to reinvent themselves.
Our education ended when we graduated. For them, it starts when they graduate. Put that mindset into their heads. Then they’re ready for the robot revolution, they’ll adapt to it, and they’ll be one with it, whichever way it goes.
Shivon: We tend to put a very Silicon Valley lens on things. I just wanted to end by broadening out a little bit. One of the reasons I love doing what I do is that there are so many big problems out there that have been untouched by technology and untouched by AI.
For example, I’ve spent the last two years trying to figure out how to use machine intelligence to help the plight of the elderly. I’d be curious to get your perspectives, as calls to action for people in the audience: areas where we should be applying this technology more to solve real problems, or where it’s not getting enough attention.
Francesca: Elderly care is certainly a very important area where I think AI can help a lot. Being from Europe myself, this is a very heartfelt issue there because, as you know, Europe has a rapidly aging population. That’s definitely a very good area with a lot of potential.
In general, I think there are many. I couldn’t think of any sector where AI could not help in making better decisions, especially where decisions should be based on a large amount of data that we cannot digest. Think about healthcare: we cannot keep up with reading everything that’s been published on a particular disease.
Healthcare in general, and elderly care is one part of it, is definitely the main application area. Really, I think all companies should just think about their sectors, their domains of application, and bring up all the issues that could be there. I think AI has the potential to help there.
Azeem: Shivon, there’s the issue of healthcare, having digital doctors. Imagine digital tutors; imagine digital advisers basically telling farmers how to do their work. The beauty of this technology is that it’s democratizing.
Once you code it, it’s available to all and it’s inexpensive. We really have the ability to impact the world by sharing this technology, these tools, that we’re creating.
Francesca: As this is done, and it’s already being done, actually, we have to make sure that we spell out exactly the ethical principles for developing this AI. That’s very important, because otherwise we end up with AI systems that do not behave ethically, that do not behave as we think they should, in order to help humans be even better humans.
I think it’s very important, and as you know, I participate in several initiatives and partnerships on AI that put together companies, big players in developing AI, to try to understand together, in a very collaborative environment, which is not common for these companies to be in, how to address those issues, how to make AI ethical, how to derive some basic principles that we all agree on, how to share best practices, these kinds of things.
Shivon: Azeem, we’ll close on you…
Azeem: Absolutely. I think tech has done great things to address our material needs, and where we don’t yet have all those needs met, of food and shelter, we have a path to that across the world. Where we have a deficit, and a great deficit, even in wealthy countries, is in well-being.
That sense of being comfortable, happy, and a sense of self. I would love AI to tackle the problem of well-being. How do we help ourselves feel good about ourselves?
Shivon: We’ll close with that: AI helping us be better humans. Thank you, guys, so much.