At this moment, there are exactly 604,844 registered corporations in the USA. The story of why I know that is pretty fascinating.
A couple of weeks ago, I was flying to San Diego out of San Francisco airport, sitting next to a young guy working on his laptop, doing animations for a video teaser / puff piece for a thing called Alphabet.
Recently, I was tasked with building a method for performing data analysis as a service, as well as building a data science team that could perform those services for various industries. So, I quickly kicked off a deep exploration of the field and the technologies available in the space. I found what I learned very engaging, and it got me thinking not only about the technical challenges of performing analysis, but also about managing the process of doing data science in a business context.
I occasionally get invited to attend performances at a resort / casino in the Palm Springs area by a family member who has hookups there. Usually what happens is we'll get seats to a performance, and often before the show, my wife and I will get the opportunity for a meet and greet with the performer. Usually, the performer is kind of in a trance, stuck in their own thoughts, presumably preparing mentally for the show, and the meet and greet is more like a quiet handshake and a picture, and that's that. But one time, Joan Rivers did a show there, and in the meet and greet, she was clearly a very different kind of person from the other performers we'd met, and I also got some interesting insight into what kind of businesswoman she was. It was illuminating.
So, you’ve heard of a triplestore; that’s an important first step. Now you’re wondering why you’d need one. That’s a good question. I believe the best way to answer it is to talk a little bit about what we know about triples as a data model, what SPARQL is good for, and where the industry has gone in the last few years that has caused us to need triples and SPARQL in the first place. Let’s get started.
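Since triples and SPARQL pattern queries come up repeatedly, here is a minimal sketch, in plain Python, of what a triplestore holds and what a SPARQL basic graph pattern does. All the names (`ex:Alice`, `ex:worksFor`, and so on) are invented for illustration, and a real triplestore indexes and optimizes far beyond this:

```python
# A triple is just a (subject, predicate, object) statement.
# A triplestore holds many of them and answers pattern queries.
triples = {
    ("ex:Alice", "rdf:type", "ex:Person"),
    ("ex:Alice", "ex:worksFor", "ex:Acme"),
    ("ex:Acme", "rdf:type", "ex:Corporation"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard,
    like a ?variable in a SPARQL basic graph pattern."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Roughly: SELECT ?s WHERE { ?s rdf:type ex:Corporation }
corporations = [s for s, _, _ in match(p="rdf:type", o="ex:Corporation")]
print(corporations)  # ['ex:Acme']
```

The point is that the schema lives in the data itself: adding a new kind of fact is just adding another triple, with no table migration.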
The act of computationally creating an answer via cognitive computing or conceptual reasoning, rather than searching for it with text, curiously gets described in many ways, but nobody ever seems to name it directly; it's always talked about in terms of how it is done. I propose we call it "answer synthesis." Let's dig deeper.
Ubiquitous computing, as a term, has been around for quite some time now. It refers to a state of computing in which data, interfaces, computation, and so on are essentially omnipresent and available for interaction in a wide variety of forms, for a wide array of purposes. In essence, when people talk about the Internet of Things, they are usually describing what others refer to as ubiquitous computing. One of the aspects of this paradigm that makes it ubiquitous is a somehow-universal interoperability between all connected things.
Also, separate from that, there should be a sense of ambient intelligence that persists around all of these interacting agents. Obviously, interoperability, intelligence, high availability, access, security, communication, data interoperability, data analysis, prediction, and more all fall under the umbrella of the term. But does all of this really need to be solved in order to deliver the user experience of interoperability and ambient intelligence? I think not. Either way, there is a lot to think about when it comes to putting your finger on the real problems left to solve in this space.
The semantic web is alive, and I will tell you why. But first, let me tell you how I arrived at this conclusion.
When I first came to my current job, I was tasked with writing an automated implementation of Schema.org as a service, which multi-site owners could adopt as a shortcut for tagging and structuring their site data, for the sake of acquiring rich snippets and, ultimately, better search engine performance.
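To make that task concrete: Schema.org structured data is commonly emitted as a JSON-LD block embedded in the page, which is what search engines read to build rich snippets. A hypothetical sketch of what such a service might generate for one page (the `@context`, `@type`, and property names are real Schema.org / JSON-LD terms; the page data and the `page_jsonld` helper are invented for illustration):

```python
import json

def page_jsonld(headline, author, published):
    """Build a Schema.org Article description as a JSON-LD string.
    A real service would derive these fields from the site's content."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "datePublished": published,
    }, indent=2)

markup = page_jsonld("Is the Semantic Web Dead?", "Jane Doe", "2014-01-15")
# The site then embeds this as:
#   <script type="application/ld+json"> ... </script>
print(markup)
```

The appeal of doing this as a service is that the markup is generated automatically from site data, rather than hand-tagged page by page.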
During that time, I learned a lot about schema.org, semantic web technologies, linked data, and Google. So, with that said, if you’re here wanting to know if you should care about the semantic web, let me drop some knowledge.
So, if you’re part of the SEO community, you’ve probably heard the news that Google recently decided to go from sometimes-encrypted search to always-encrypted search. If you use Google Analytics to track the performance of your website, then you’ve probably noticed that one of the top recorded search terms bringing visitors to your site showed a value of “(not provided)”. This value corresponds to visitors who were using the then-sometimes-encrypted search. Now that all searches are encrypted, everyone expects this to be effectively the only keyword information we’ll see for inbound search traffic.
This scares SEO companies, because their business model has typically been built around the notion that websites need to be optimized for certain keywords so that people see your website when they search for those exact terms. “If we can’t sell that service, then is this the death of SEO?” they ask. The answer is no. But before I explain why SEO is not dead, let me explain why Google doesn’t mind getting rid of keyword optimization.