Ubiquitous computing, as a term, has been around for quite some time now. It describes a state in which data, interfaces, and computing are essentially omnipresent, available for interaction in a wide variety of forms and for a wide array of purposes. In essence, when people talk about the Internet of Things, they are usually describing what others call ubiquitous computing. One of the things that makes the paradigm ubiquitous is a near-universal interoperability between all connected things.
Separate from that, there should also be a sense of ambient intelligence that persists around all of these interacting agents. Obviously, interoperability, intelligence, high availability, access, security, communication, data analysis, prediction, and more all fall under the umbrella of the term. But does all of this really need to be solved in order to deliver the user experience of interoperability and ambient intelligence? I think not. Either way, there is a lot to think about when it comes to putting your finger on the real problems left to solve in this space.
One big one: interfaces. Interfaces aren’t just about UIs and graphics; they are really a problem of data entry in general. Consider the evolution from a computer in the 1980s to a computer now. We started with punch cards, graduated to command-line interfaces with a keyboard, then got a mouse, then touch, and now we are getting voice. Eventually, computer interfaces will seem to require no physical input at all; perhaps they will be mind-controlled. Who knows. Notice that when we graduated from keys to touch, touch not only allowed for expressive gestures; the object we were interacting with itself became fluid. A screen has no buttons, so it can be infinitely many things. In many ways, this is what sci-fi stories of the 1960s failed to predict: their futures always had dashboards covered in buttons, which by today’s standards is a huge waste of space.
Next, instead of touching a literal screen, we will eventually evolve to using implied screens. Imagine projectors that do more than shine lights on the wall and instead shine interfaces on the wall. In fact, developers have been trying to do exactly that for about a decade already. My point is that ubiquitous computing will have a lot to do with general access to data entry, from anywhere, at any time, and that problem isn’t quite solved yet.
Another aspect of ubiquitous computing is data integration and data management. Back when relational databases were the only good option, schemaless data came along, put to use by the big players like Google, and it was truly liberating. With schemaless storage, practically anything can be captured on the spot and post-processed later: user activity here, a comment there, a post here, a geo-tag there, and so on. I like to call this type of data wild data. Because ubiquitous computing really will incorporate data entry from anything and everything, from anywhere, flexibility of data ingestion, management, and processing is key to taking it all to the next level. Schemaless storage is a reasonable strategy for this, but it doesn’t necessarily address the problems of scale and meaning. This is one of the big promises of linked data, actually. Linked data is flexible in that it is schemaless, but it is also structurable: subjects, predicates, and objects can be connected, like Lego blocks, into rich structures that can be queried, and inference and prediction can be performed against that data at very large scale. So there is something there, but time will tell.
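To make the Lego-block idea concrete, here is a minimal sketch of linked-data-style triples: schemaless on write, structurable on read. The data, the entity names, and the `query` helper are all made up for illustration; real linked data would use RDF and a SPARQL engine.

```python
# Wild data captured on the spot as (subject, predicate, object) triples.
# No schema was declared up front; structure emerges from the links.
triples = [
    ("alice", "posted", "comment42"),
    ("comment42", "taggedAt", "berlin"),
    ("bob", "posted", "photo7"),
    ("photo7", "taggedAt", "berlin"),
]

def query(subject=None, predicate=None, obj=None):
    """Return every triple matching the non-None fields (basic pattern match)."""
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# Chain two patterns, Lego-style: who posted something tagged in Berlin?
tagged = {s for (s, p, o) in query(predicate="taggedAt", obj="berlin")}
authors = {s for (s, p, o) in triples if p == "posted" and o in tagged}
print(sorted(authors))  # ['alice', 'bob']
```

The point of the sketch is that neither "comments" nor "photos" needed a table defined in advance, yet the predicates still let you join across them.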
Another thing that could affect the speed at which we arrive at a world of ubiquitous computing is, in my opinion, APIs: how many there are, what they can ingest, what their standards are, how reliable they are, and what we can use them for. As a design pattern, APIs have practically exploded in every direction over the last few years, and you can see the effects practically everywhere in the programming world. The most interesting aspect of APIs with regard to ubiquitous computing is that end users expect to eventually arrive at a state in which devices connect not only with the mothership, but with each other.
The irony is that depending on how interfaces are connected and how the application is deployed, there are times when there is no actual difference from the UX perspective, as we approach the threshold of human-perceivable lag in network connections. Take, for example, connecting two phones to play a video game. If two people in different cities were playing the same game together, they might assume their actions were being channelled through a game server. If they were sitting in the same room on the same couch, they might assume the devices were communicating directly, perhaps via Bluetooth.
The point is that unless you’re actually checking these details, if the connections are fast enough, you won’t be able to perceive performance differences, and you won’t really be able to tell which avenue the data travelled. From the UX perspective, you press a button, something happens, and that’s the end of it. In fact, this is how the Nest is currently programmed, by consuming APIs, yet people have this profound belief that the Nest is some highly interoperable thing in the internet of things, which it isn’t. In fact, this is something that really bothers Nest’s CEO. But that’s another story.
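The transport-indifference argument above can be sketched in a few lines. Both transports and the `Player` class here are hypothetical stand-ins; the idea is simply that the game code calls one method and never inspects how the bytes travel.

```python
# Sketch: from the app's point of view, "send a move" is one call; whether
# it goes via a game server or a direct peer link is a transport detail.

class ServerRelay:
    """Moves routed through a central game server (the two-cities case)."""
    def __init__(self):
        self.log = []
    def send(self, move):
        self.log.append(("via-server", move))

class DirectLink:
    """Moves sent device-to-device (the same-couch, Bluetooth-style case)."""
    def __init__(self):
        self.log = []
    def send(self, move):
        self.log.append(("peer-to-peer", move))

class Player:
    def __init__(self, transport):
        self.transport = transport  # injected; game logic never checks which
    def press_button(self, move):
        self.transport.send(move)   # identical UX either way

remote = Player(ServerRelay())
local = Player(DirectLink())
remote.press_button("jump")
local.press_button("jump")
```

As long as both transports stay under the lag a human can perceive, the player pressing "jump" cannot tell which one is underneath.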
Hardware. Some companies believe all of this is just a hardware problem: devices should have interconnectivity and interoperability built in. In some ways that’s true, but as you can imagine, to be truly ubiquitous, those devices will need to play well with others, even when the others are nothing more than an internet-connected Raspberry Pi sending events to an API. I’ve actually heard some hardware makers say that the internet of things isn’t really even a consumer product problem, but rather a problem of commercial and mechanical devices, like parts in a factory. But I digress.
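For what it’s worth, that Raspberry-Pi-posting-events scenario is about as simple as a "thing" gets. A rough sketch, with an entirely made-up endpoint URL and payload shape:

```python
# The simplest possible citizen of the internet of things: a device that
# packages a sensor reading as JSON and posts it to an HTTP API.
import json
import urllib.request

API_URL = "https://example.com/events"  # hypothetical ingestion endpoint

def build_event(device_id, reading):
    """Package one temperature reading as a JSON event body."""
    return json.dumps({"device": device_id, "temperature_c": reading}).encode()

def send_event(payload):
    """POST the event; fire-and-forget from the device's point of view."""
    req = urllib.request.Request(
        API_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    return urllib.request.urlopen(req)

payload = build_event("pi-kitchen-01", 21.5)
# send_event(payload)  # uncomment on a real device with network access
```

Anything that can run those dozen lines can participate, which is exactly why interoperability can’t be solved by baking it into one vendor’s silicon.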
At the end of the day, all of this needs to work well together so that the general intelligence of this network of devices rises to a level where we can do more than just data entry. That means we give data and get inferences, intelligence, wisdom, and knowledge in return. That requires a certain level of post-processing. Many consider Hadoop to be the answer to that problem at this time. In some ways that is correct, but there are companies out there still trying to squeeze more real-time conceptual analysis out of large datasets on premises, with the power to predict, infer, and understand without batch post-processing like Hadoop’s.
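For readers unfamiliar with what that batch post-processing looks like, here is a toy map/shuffle/reduce pass over wild event data, in the spirit of the Hadoop-style processing mentioned above. It is purely illustrative, in-memory Python with invented event names, not Hadoop’s actual API.

```python
# Toy map/reduce: count event types from a pile of raw, schemaless records.
from collections import defaultdict

events = ["login", "comment", "login", "geo-tag", "login"]

# Map: emit a (key, 1) pair for every raw record.
mapped = [(e, 1) for e in events]

# Shuffle: group the emitted values by key.
grouped = defaultdict(list)
for key, value in mapped:
    grouped[key].append(value)

# Reduce: aggregate each key's values into a single result.
counts = {key: sum(values) for key, values in grouped.items()}
print(counts["login"])  # 3
```

The batch framing is also the limitation the real-time crowd is attacking: this whole pass runs after the fact, over data already collected.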
On a side-note, this particular set of challenges in the ubiquitous computing world is precisely the type of thing Algebraix Data wants to help solve with its universal data management platform, but that’s another story.
Finally, what all of this boils down to is ambient intelligence, knowledge acquisition, automation, autonomy, and device interoperability. If the network can think and react with some level of useful intelligence, that means less work for everyone. This is probably what Larry Page meant when he recently said that we work too much. Eventually we will find solutions to these challenges, one by one, or even several at a time. One thing is for sure: if ubiquitous computing is our future, then the future looks bright.