The act of computationally creating an answer via cognitive computing or conceptual reasoning, rather than searching for it with text, curiously gets described in many ways, but nobody ever seems to talk about the act itself directly; it's always discussed in terms of how it is done. I propose we call it “answer synthesis”. Let’s dig deeper.
If you aren’t quite following me on what the problem is, let me do a better job of explaining it. If you were to go to Google right now and ask it something, you probably wouldn’t ask the question directly, but rather shape your query in terms of the things you think might bring you closer to an answer. For example, if you needed help deciding what to wear to a summer party, you wouldn’t type in “what should I wear to the summer party?” because you know you simply won’t get an answer. The results you do get are websites that may talk about that topic generally, but it is still up to you to answer the question.
How about something easier, then. Google, “what features should my web application have if I want to get funded or acquired?” Again, you won’t really expect an answer to that, either.
How about something much easier. Google, “who was the 40th president of the United States?” At this point, you’ll probably expect two types of results: a list of websites, and some Google Knowledge Graph answers populated in the sidebar, which is Google’s attempt at answering the question without needing to send you away from their results page.
Finally, ask it something very easy, like “how many pounds are in 1 kilogram?” If you’ve typed this type of question into Google, you’ll know that it simply answers the question, and that’s that.
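That last tier is directly computable: the answer doesn’t live on any web page, it falls out of applying a known conversion factor. A minimal sketch of that idea in Python (the function name is mine; the factor is the standard kilogram-to-pound conversion):

```python
# A question like "how many pounds are in 1 kilogram?" needs no search:
# it reduces to arithmetic over a known conversion factor.
LB_PER_KG = 2.20462  # standard kilograms-to-pounds conversion factor

def kilograms_to_pounds(kg: float) -> float:
    """Synthesize the answer by computation rather than retrieval."""
    return kg * LB_PER_KG

print(kilograms_to_pounds(1))  # 2.20462
```

The point is not the arithmetic itself but that the engine recognizes the question as computable and synthesizes the answer on the spot.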
So, hopefully between each example you noticed an ever-widening gap between the questions, and that gap consists of several things: conceptual complexity, relationship to personal user data, breadth of reasoning required, and, in general, how answerable the question is.
The concept of computational answerability is really just getting its legs in the industry right now, with projects out there like IBM Watson, and it’s really starting to pick up steam thanks to the multitude of different technologies available: linked data, triplestores, inferencing, Hadoop, language modelling, predictive analytics, natural language processing, neural networks, sentiment analysis, and much, much more. In fact, the whole notion of being able to answer questions directly will fundamentally change the value proposition of a search engine: you won’t be searching for answers, you will be asking questions.
The whole idea that a person has a question that is somewhere, somehow connected to its answer(s) is a really important problem, one that many have lumped under umbrella terms like “semantic search” and “cognitive computing”, but I just don’t think those terms really fit. The problem with calling it semantic search is that you’re not really searching, and the problem is bigger than semantics, though the term does get us on the right track. Calling it cognitive computing doesn’t really fit either, because cognitive computing is a means of coming up with answers, not the act of answering itself.
So, since this type of product and feature is going to become more and more pervasive in computing applications over the coming years and decades, I think we really ought to have a proper name for it. The name I’ve come up with is answer synthesis, because what are we doing? We’re synthesizing an answer, not searching for one.