Projects

The following is a short list of projects that I’ve worked on recently.


InnerWall
Using real-time video processing, a motion-tracking camera, and integrated depth sensing, you can walk through your new home and model all of the studs, electrical lines, and gas lines before construction finishes. Once the drywall goes up over all of that, troubleshooting problems inside your walls can be a total nightmare. With the InnerWall tool, you hold your device up to a wall, walk around your home, and literally see inside the walls in real time, as if you had X-ray vision. If you've ever owned a home, you know what an amazing tool this would be.


ReportStream
Automated business intelligence and competitive analysis reports as a service. The reports are generated by running graph analysis over aggregated social network data and repurposed Wikipedia data on known companies, venture capitalists, chief officers, and products / product lines. Combining those sources with manual strategy analysis yields interesting, meaningful reports.
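
To make the graph-analysis step concrete, here is a hedged illustration in Python; the entities, the edges, and the use of networkx with degree centrality are all my assumptions for the sketch, not details from the actual service.

```python
# Illustrative only: a toy entity graph and a crude centrality ranking of the
# kind a report section might draw on. networkx and the sample entities are
# assumptions, not the service's actual stack or data.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("Acme Corp", "Jane the VC"), ("Acme Corp", "John the CEO"),
    ("Beta Inc", "Jane the VC"), ("Beta Inc", "Widget Product Line"),
])

# Rank entities by degree centrality as a rough "influence" signal.
ranking = sorted(nx.degree_centrality(g).items(), key=lambda kv: -kv[1])
for entity, score in ranking:
    print(f"{entity}: {score:.2f}")
```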


Graph Analytics as a Service
I am currently architecting and developing a hybrid platform that combines the LAMP stack, the MEAN stack, and graph data engines with process management and a custom interface, so that data analysis can be run rapidly over disparate data sources without much thought about the underlying process. The result is a unified data science / analytics platform that may end up powering an as-yet-untouched corner of mainstream analytics.


DBPedia Suggestion Engine
Using SPARQL and DBPedia, I wrote a query / algorithm that lets a user specify any known data type / Wikipedia category and receive content suggestions based on seed data they provide. For example, a user supplying Movie – The Matrix gets back a variety of movies sharing the same genres, time periods, stylistic qualities, directors, and so on. What is interesting about using DBPedia as the source is how easy it was to make the process nearly universal across categories.
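
The query below is a minimal sketch of the approach against the public DBPedia endpoint, written with Python's SPARQLWrapper. Ranking candidates by shared Wikipedia categories (dct:subject) is my stand-in for the scoring; the original query's exact logic isn't reproduced here.

```python
# Minimal sketch (pip install sparqlwrapper). Ranking films by the number of
# Wikipedia categories they share with the seed is an assumed stand-in for
# the original algorithm.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbr: <http://dbpedia.org/resource/>
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dct: <http://purl.org/dc/terms/>

    SELECT ?movie (COUNT(?cat) AS ?shared) WHERE {
        dbr:The_Matrix dct:subject ?cat .
        ?movie dct:subject ?cat ;
               a dbo:Film .
        FILTER (?movie != dbr:The_Matrix)
    }
    GROUP BY ?movie
    ORDER BY DESC(?shared)
    LIMIT 20
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["movie"]["value"], row["shared"]["value"])
```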


DBPedia NodeEdge Explorer
Wikipedia has a lot of information in it, so much so that an entire project called DBPedia was built around it to scrape the data and serialize it as RDF data. That allows programmers and data scientists to programmatically access gazillions of interesting facts from Wikipedia, but from a purely SPARQL point of view, exploring that data can feel boring or old fashioned. So, using D3 and Angular, I wrote an explorer of that data as graph visualizations. Specifically you can explore data be clicking through the nodes and edges that represent the information and their relationships within the dataset.
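
The front end was D3 and Angular, but the data side is easy to sketch: expand a clicked node by fetching its outgoing links and shaping them into the nodes/links structure a D3 force layout consumes. The endpoint, the limit, and the output shape below are my assumptions.

```python
# Hedged sketch of the data side of a node/edge explorer: fetch a resource's
# outgoing IRI links from DBpedia and shape them like the {nodes, links}
# structure D3 force layouts expect.
from SPARQLWrapper import SPARQLWrapper, JSON

def expand_node(uri):
    """Return the clicked node plus its neighbors and labeled edges."""
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery(f"""
        SELECT ?p ?o WHERE {{
            <{uri}> ?p ?o .
            FILTER isIRI(?o)
        }} LIMIT 50
    """)
    sparql.setReturnFormat(JSON)
    rows = sparql.query().convert()["results"]["bindings"]
    nodes = [{"id": uri}] + [{"id": r["o"]["value"]} for r in rows]
    links = [{"source": uri, "target": r["o"]["value"],
              "label": r["p"]["value"]} for r in rows]
    return {"nodes": nodes, "links": links}

print(expand_node("http://dbpedia.org/resource/The_Matrix"))
```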


SPARQLClient
The world of SPARQL is old and new at the same time: old in the sense that some players in the space have been working on triplestores and SPARQL implementations for the last decade, new in the sense that most people haven't heard of any of it. The interesting thing about this space is that there aren't any exceptional applications for consuming and managing triples with SPARQL. So, I wrote a full application that implements consumers for SPARQL 1.1 and SPARUL, building a cohesive and engaging interface for interacting with triplestores.
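
As a rough sketch of what the two consumers do under the hood, here is the equivalent round trip in Python: a SPARQL 1.1 SELECT against a query endpoint and a SPARUL-style INSERT against an update endpoint. The localhost Fuseki-style URLs are placeholders, not the app's actual targets.

```python
# Placeholder endpoints; a real triplestore exposes separate query/update URLs.
from SPARQLWrapper import SPARQLWrapper, JSON, POST

# SPARQL 1.1 query consumer: read triples back as JSON bindings.
query = SPARQLWrapper("http://localhost:3030/ds/query")
query.setQuery("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10")
query.setReturnFormat(JSON)
print(query.query().convert()["results"]["bindings"])

# SPARUL / SPARQL Update consumer: write a triple via POST.
update = SPARQLWrapper("http://localhost:3030/ds/update")
update.setMethod(POST)
update.setQuery("""
    PREFIX ex: <http://example.org/>
    INSERT DATA { ex:thing ex:label "hello" }
""")
update.query()
```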


Algebrize – Automated Implementation of Schema.org
As search crawlers categorized the content found in web pages, it became clear that more machine-based disambiguation would be required to reach the next plateau of information context. To help solve this problem, tech leaders tapped the fields of graph data and ontologies to build webs of context, producing what is now called Schema.org, an ontology designed to categorize the information on the world wide web. As Schema.org grew more sophisticated, search companies began asking site owners to add special Schema.org markup to their web pages so that crawlers could read clearer signals about the information and its context. The only problem is that the process is very difficult to learn if you're new to the technology. So, we built an automated data parser for web content that produces semantic markup implementing the Schema.org ontology, which can be called and cached from a RESTful API directly in web pages or at the middle tier of popular content management systems.
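
Purely as an illustration of the output side, here is a sketch that maps extracted page fields onto Schema.org JSON-LD. The field names, the Article type, and JSON-LD itself are my assumptions; the original service may have emitted microdata or RDFa instead.

```python
# Illustrative only: shape extracted page data into Schema.org JSON-LD, the
# kind of payload a markup API could return. Field names are hypothetical.
import json

def to_schema_org(parsed):
    """Map a dict of extracted page data onto the schema.org Article type."""
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": parsed["title"],
        "author": {"@type": "Person", "name": parsed["author"]},
        "datePublished": parsed["published"],
        "articleBody": parsed["body"],
    }

page = {"title": "Example Post", "author": "Jane Doe",
        "published": "2014-05-01", "body": "..."}
print(json.dumps(to_schema_org(page), indent=2))
```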


GolfShot – Game Performance Analysis
Every time a golfer steps up to the ball before taking a shot, they must visualize and decide on a particular ball flight, shot shape, desired landing area, trajectory, and more. Tracking all of those variables mentally is difficult, which makes it hard to know, in the middle of a round, what your tendencies for mishitting are. In short, this tool compares your intention with the outcome and gives you per-round and historical statistics. Since writing this analysis tool and using it on weekends, my round score has improved by an average of 5 strokes.
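
The core comparison is simple to sketch. The shot fields and shape labels below are hypothetical illustrations of the idea, not the tool's actual data model.

```python
# Compare intended vs. actual shot shape and surface the recurring misses.
from collections import Counter

shots = [
    {"intended_shape": "draw", "actual_shape": "draw"},
    {"intended_shape": "fade", "actual_shape": "slice"},
    {"intended_shape": "straight", "actual_shape": "straight"},
    {"intended_shape": "draw", "actual_shape": "pull"},
]

misses = Counter(
    (s["intended_shape"], s["actual_shape"])
    for s in shots if s["intended_shape"] != s["actual_shape"]
)

hit_rate = 1 - sum(misses.values()) / len(shots)
print(f"intended-shape hit rate: {hit_rate:.0%}")
for (intended, actual), n in misses.most_common():
    print(f"tendency: intended {intended} but hit {actual} ({n}x)")
```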


Freesponsive
This project is a public tool designed to give designers and developers a multitude of responsive web development helpers. It started as a correctly sized iframe site viewer. Then I added device image support. Then I wrote a PHP proxy to get around cross-origin framing restrictions. Then I added device support to the proxy so that the User-Agent of the cURL requests came from the "device" and not from my website. Now I'm adding tutorials and other useful information for responsive developers.
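
The original proxy was PHP + cURL; the sketch below shows the same idea in Python: fetch the target page server-side with the emulated device's User-Agent so the upstream site serves device-appropriate markup. The UA strings here are illustrative, not a maintained list.

```python
# Sketch of the device-aware proxy fetch (original was PHP + cURL).
import requests

DEVICE_AGENTS = {
    # Hypothetical examples; a real deployment would keep a maintained UA list.
    "iphone": "Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X) "
              "AppleWebKit/537.51.1",
    "desktop": "Mozilla/5.0 (Windows NT 6.1; Win64; x64)",
}

def proxy_fetch(url, device="iphone"):
    """Fetch url on behalf of the iframe, spoofing the chosen device's UA."""
    resp = requests.get(url, headers={"User-Agent": DEVICE_AGENTS[device]},
                        timeout=10)
    return resp.text

html = proxy_fetch("https://example.com", device="iphone")
print(html[:200])
```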


Projo Project Manager
For the last two years I've been developing a full-scale web application for social project management. I created every aspect of it myself: the application architecture was written from scratch, including the MVC framework that powers Projo, and all of the visual design and user interface work was planned to minimize the effort it takes to manage your team.
