The Camera As Platform

When the operating system moves to the viewfinder, the world will literally change

“Every day two billion people carry around an optical data input device — the smartphone Camera — connected to supercomputers and informed by massive amounts of data that can have nearly limitless context, position, recognition and direction to accomplish tasks.”

– Jacob Mullins, Shasta Ventures



As we transitioned from mainframe to client computing, then to desktop and mobile computing, we established the fundamental infrastructures of the internet. Operating systems, graphical user interfaces, servers, APIs, app stores and the cloud each enabled a new paradigm.

Today, nearly 2.6 billion people carry smartphones. The mobile computing era frees us from our tether to a desk, and therefore a location. All sorts of incredible things can now happen while we are out and about with a tiny supercomputer in our pocket.

But after years of unrelenting progress, the mobile computing era is coming to an end. The paradigm that will replace it is coming into focus, and it has a name: spatial computing.

In spatial computing, the physical world around us is not only content but also the interface and the distribution channel. How? We are on the precipice of shifting the OS layer from the mobile phone, where it has lived for nearly a decade, to the camera itself. Put simply, though boldly, the camera will bring the internet and the real world into a single time and space. Brand new worlds will enter our field of view, modular and stackable like so many NES cartridges of yore.


Once upon a time, the mobile camera was only used to capture content. Today, we increasingly use cameras to enhance the experience of the world around us. Recent innovations like Pokemon Go, Snapchat, Facebook Live, and Instagram Stories have introduced behaviors that allow the camera to become something much more than a content creation and consumption device. By giving us tools to augment our selfies, Snapchat taught us the camera could be interactive. Now we're turning the camera out on the world, and learning that the world is in fact our canvas.

Industries from entertainment to retail to broadcast to travel will be transformed. What if we could use the camera to harness the long-loathed audience behavior of pulling out our phones and watching the show through the screen? Coachella, for example, is claiming this behavior and empowering artists to see it as a new channel for storytelling, extending the performance into and through the lens of the camera.

For brands like FabFitFun, the camera creates a space for content and commerce to live together. Transactions will soon be processed via the lens. The editorial brand voice, the metadata, and the button to "buy" will all be visible as we look through the camera at our subscription box and the products inside.


Many media companies, too, are tackling how to extend TV broadcasts, meeting viewers where they are. We have all seen dog ears on TV personalities. But what if our relationship with our favorite broadcast content could be extended through our cameras and into our living rooms? Or be tagged with our friends' commentary? All without battling for attention as a second-screen experience?

The camera is no longer a passive tool, but the new start menu. It is the next great consumption experience, the next great transaction experience, and the biggest technology opportunity in a decade.

Suddenly, the internet, which was once confined to four by two inches on our mobile phones, is now a blank canvas as wide and broad as our entire field of view.

The first driving factor in this transition is scale. For the better part of the last decade, nearly every mobile phone on the planet has shipped with a reasonably high-resolution camera on board. Cameras in the mobile era are not just ubiquitous but densely concentrated as well.

In 2000 there was a desktop computer every 27.79 square miles, per Statista. In 2010 there were 16.5 smartphones per square mile, a number that grew to 59.07 by 2017. Today there are on average 78.5 cameras per square mile. These densities are, of course, dramatically higher in urban areas.

Cameras are not confined to the smartphone, either. They now live in almost every device, from tablets and laptops to keyless locks, refrigerators, and automobiles. As I write this post, I am surrounded by at least 30 cameras, and those are just the ones I know about. When I go home, I will be greeted by a new handful of cameras in a wide array of devices.

More capable software imbued with increasing intelligence is the second key enabler. The wide distribution of the camera makes it very appealing as a true platform. As a result, this fall will be littered with announcements from major technology companies like Apple, Microsoft, and Google, heralding their next-generation "extended reality" capabilities.

Facebook, Google and Snapchat have already released their own programmable camera offerings. They’re correctly betting that what once was a simple capture device, and an accessory to the mobile world, will become the developer’s next playground, and the consumer’s sustained obsession.


Figure 1. A computational sine wave describing the oscillation of application bundling and unbundling over a time horizon of ever-shrinking hardware, coupled with increasing mesh density.

In the graphic above, the x-axis represents hardware, measured in terms of both absolute size and relative density. Hardware gets smaller with every step function and is approaching pervasiveness.

On the y-axis we have the software layer, which oscillates between the bundling and unbundling of programs and applications. The personal computer, for example, runs many programs and applications at once. The mobile handset was originally designed for a single function at a time. Even now, the smartphone can only display one application at a time.

In the early days of the web you could run one application at a time, providing one view of the world. Now we can run multiple tabs in a PC browser, but on our phones we are still constrained to one view. We’ve more or less stopped downloading new phone apps, and the war for your home screen has been largely won.

This is about to change.

With the camera, the fight for pixels falls by the wayside. We are moving towards a computing era when the camera will run applications just as the mobile phone does. Except, with the physical world as our canvas, we’ll find ourselves parallel processing in a way never before possible.

From ENIAC (heralded as a “giant brain” when introduced at the University of Pennsylvania in 1946) to Apple’s iPhone, the evolutionary trend of technology has been to fit more and more into smaller and smaller boxes. The screen on your phone, the TV in your house, and everything in between is a “virtual” representation of the world. In some cases we have begun to augment those representations; Pokemon Go opened the eyes of millions to what’s possible creatively, but we’re just getting started.

What comes next is looking through the camera at the entire world in front of you.

Rather than building a smaller box, we have started to eliminate the box entirely. As such, the physical world around us is once again more than just content. As I said before, it becomes both the interface and the distribution channel. By combining the context provided by the data in our phones, the canvas of the real world around us, and our own curiosity, we are able to interact with the world in a way that has not been possible before.

Imagine a day when we have OS-level access to the camera, enabling mobile apps to be replaced by "lenses" (camera applications) that provide a compute and interaction fabric on top.

Imagine too that the camera becomes a clearinghouse for an ever-increasing number of sensors, both in the phone itself and in the various "things" surrounding us. Add in a wide array of new lens applications, and you're looking at a global mesh of massive proportions.
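To make the shape of this concrete, here is a minimal sketch of what a lens contract might look like, written in TypeScript. Everything here is hypothetical: no operating system exposes a "Lens" or "CameraFrame" API today, and every name below is invented for illustration, not taken from any vendor.

// Hypothetical sketch only, not a real API: imagine the OS compositing
// "lenses" over the live camera feed instead of launching full-screen apps.

interface GeoPosition {
  lat: number;
  lon: number;
}

// One frame of fused context: the camera image plus whatever the
// phone's sensors and vision layer can attach to it.
interface CameraFrame {
  timestampMs: number;
  location?: GeoPosition;   // from GPS, if available
  heading?: number;         // from the compass/accelerometer, in degrees
  detections: string[];     // ids of objects the vision layer recognized
}

// What a lens hands back: something to draw, anchored in the world.
interface Overlay {
  anchor: GeoPosition | string; // a world position or a detected object id
  text: string;
}

// The lens contract: the runtime calls onFrame for every frame, and the
// lens contributes overlays rather than owning the whole screen.
interface Lens {
  id: string;
  onFrame(frame: CameraFrame): Overlay[];
}

// A toy lens that labels any storefront the vision layer recognizes.
const storefrontLens: Lens = {
  id: "storefront-labels",
  onFrame(frame) {
    return frame.detections
      .filter((d) => d.startsWith("storefront:"))
      .map((d) => ({ anchor: d, text: "Open now" }));
  },
};

The important inversion in this sketch is that a lens never owns the display the way a mobile app owns the screen; it only contributes overlays, which the runtime composites with every other active lens.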

The world around us then becomes programmable, accessible to a digital information flow at far higher fidelity than we now comprehend, ushering in a new level of always-on, multi-dimensional computing. Rather than the push-pull paradigm of today, in which we must ask for the data we receive, we are entering a persistent, always-available, always-on relationship with digital information.

Imagine, for example, you're simply walking down the street. Looking through the camera, you wonder where you are. The Maps application uses your GPS data and accelerometer to determine your location and lays the correct path at your feet. You come across your favorite store and, in addition to the Maps directions, the editorial voice of your favorite brand gives you information on its sale. In the same view, the National Geographic lens provides contextual information about the building on the other side of the road. None of this information has to be requested; the persistent, always-on paradigm of spatial computing means contextualized information reaches us without our even having to ask.
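Under the same hypothetical runtime sketched above, that street scene is just several subscribed lenses answering the same frame, with the runtime compositing whatever they return. Again, every name below is invented for illustration; this is a self-contained toy, not any real SDK.

// Hypothetical, self-contained sketch of lens composition.
type Overlay = { anchor: string; text: string };
type Frame = { detections: string[] };
type Lens = (frame: Frame) => Overlay[];

// Toy stand-ins for the lenses in the street scene.
const mapsLens: Lens = () => [
  { anchor: "ground", text: "Continue 200 ft, then turn left" },
];
const brandLens: Lens = (f) =>
  f.detections.includes("favorite-store")
    ? [{ anchor: "favorite-store", text: "In-store sale today" }]
    : [];
const natGeoLens: Lens = (f) =>
  f.detections.includes("landmark-building")
    ? [{ anchor: "landmark-building", text: "About this building" }]
    : [];

// The runtime fans each frame out to every active lens and merges the
// results into one field of view; the user never issues a request.
function composite(frame: Frame, lenses: Lens[]): Overlay[] {
  return lenses.flatMap((lens) => lens(frame));
}

console.log(
  composite({ detections: ["favorite-store", "landmark-building"] }, [
    mapsLens,
    brandLens,
    natGeoLens,
  ]),
);

The design choice worth noticing is the fan-out: because every lens answers the same frame, directions, brand content, and contextual facts coexist in one view instead of competing for the screen one app at a time.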

The opportunities are truly endless in the world views we can create through the camera.

Imagine, finally, that through this technology you are able to build and create fields of view all your own. Imagination becomes the only rational limit.

We are about to make the whole world a programmable playground, with the camera as our gateway and guide. In the not-too-distant future, looking through the camera will eclipse the experience of looking through the browser on your computer or the screen on your phone. And when lightweight glasses begin to incorporate this technology too, we'll come to accept, and even expect, lenses in our field of view all of the time.

Welcome to spatial computing, y'all. Indeed, the smartphone's future is all about the camera.

Allison is co-founder and CEO of Camera IQ, the first camera experience manager. Allison and her team help marketers create new worlds that customers love to explore.
