I was fortunate enough to attend the recent Oracle OpenWorld event. The conference was held in downtown San Francisco at the Moscone Center, and if reported numbers are accurate, approximately 60,000 guests converged for four days of presentations, technology sessions, and information booths all related in some way to Oracle. Being able to hear firsthand what industry leaders are working on and envisioning for the future of technology was exciting, thought provoking, and a little bit scary (in an Asimovian, psychohistorical way). I wanted to share some of the takeaways I remember from the event, so I will boil four days down to two blogs. This is "Part One."
During the conference, Oracle made clear what it sees as the future of computing: concepts like Big Data, Analytics, M2M, and the Cloud. Many of these terms should be somewhat familiar to the IT crowd, but the scale and scope of what Oracle and its partners are trying to accomplish is truly at a different level than we have seen in the past. Let's look at some of these concepts.
Big Data. This is essentially the concept of collecting very large amounts of data and storing it for future analysis. Big data is usually unstructured, meaning the relationships within the dataset may not be well defined, and most relational databases do not handle that kind of data well. To get an idea of how much data is being created: ninety percent of the data that exists in the world today was created in the last two years, and the trend is not slowing down. With the ability to embed sensors into just about anything, the amount of information that can be generated is staggering. Super projects like the Large Hadron Collider are good examples: its roughly one hundred fifty million sensors can collect approximately forty million data points per second, and that output is currently filtered down to about 0.001% of the content in order to work with it effectively.
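To make that kind of filtering a little more concrete, here is a minimal Python sketch of the idea: a flood of raw sensor readings gets reduced to the rare "interesting" ones before anything is stored. The reading format, the sigma threshold, and the volumes are my own assumptions for illustration, not anything Oracle or CERN described.

```python
import random

def sensor_stream(num_readings):
    """Simulate a flood of raw, loosely structured sensor readings."""
    for _ in range(num_readings):
        yield {"sensor_id": random.randrange(150_000_000),
               "value": random.gauss(0.0, 1.0)}

def trigger_filter(stream, threshold=4.5):
    """Keep only the rare outlier readings; discard the rest.

    With Gaussian noise, a 4.5-sigma threshold passes only a tiny
    fraction of readings -- the same idea as filtering raw collider
    output down to ~0.001% before it is stored for analysis.
    """
    for reading in stream:
        if abs(reading["value"]) > threshold:
            yield reading

if __name__ == "__main__":
    kept = list(trigger_filter(sensor_stream(1_000_000)))
    print(f"kept {len(kept)} of 1,000,000 readings "
          f"({len(kept) / 1_000_000:.5%})")
```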
Analytics. Not a new concept here: this is the ability to analyze and make decisions based on a set of data. The difference now is that all of that Big Data floating around needs to have something done with it. Oracle's Sun hardware division, along with partners such as Intel, Dell, and EMC, is working on database and hardware platforms that can process ever larger datasets. This includes larger storage systems, dedicated analytics engines, redesigned database engines, and languages capable of converting traditional queries into searches optimized for these datasets. The hope is that with all the collected data available online and a proper set of rules applied to it, usable information can be gathered. Apply a different set of rules to the same dataset and you gain another perspective and possibly new results. This kind of analysis can be extremely useful for complex design and operational mechanics, and being able to process the data quickly and efficiently yields information that can be fed back into the monitored system. One area that was very intriguing was the discussion of pre-emptive analytics; based on what I heard, I would call it predictive analytics: using the large amount of collected and incoming data to make decisions about what might happen. That sounds great for getting eyes on critical failure points in complex operational environments before anything goes wrong. Some of the discussion, however, covered behavioral predictions (hence my reference to Asimov in the opening). I am not sure I am ready for that science fiction to turn into any form of science fact yet.
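For a feel of what "predicting a critical failure point" can look like in its simplest form, here is a small Python sketch. It fits a linear trend to a rolling window of readings and estimates how soon a monitored metric will cross a failure threshold. This is my own toy illustration of the general idea; the window size, threshold, and temperature values are made up and have nothing to do with any specific Oracle product.

```python
from collections import deque

def predict_threshold_crossing(window, threshold):
    """Fit a simple linear trend to recent readings and estimate how many
    future samples remain before the metric crosses the failure threshold.

    Returns None if the trend is flat or moving away from the threshold.
    """
    n = len(window)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(window) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, window))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    if slope <= 0:
        return None
    return max((threshold - window[-1]) / slope, 0.0)

# Hypothetical usage: temperature readings trending toward a 90-degree limit.
readings = deque(maxlen=10)
for temp in [70, 71, 73, 74, 76, 78, 79, 81, 83, 85]:
    readings.append(temp)

eta = predict_threshold_crossing(list(readings), threshold=90)
if eta is not None:
    print(f"Predicted threshold crossing in ~{eta:.1f} samples -- raise an alert")
```

Real predictive analytics platforms obviously go far beyond a straight-line extrapolation, but the principle is the same: act on what the incoming data says is about to happen, not just on what has already happened.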
Come back next month as I continue this conversation with thoughts on M2M and the ever-elusive cloud.