Oracle's massive OpenWorld conference kicked off the other day, opening with the traditional Larry Ellison keynote and a partner keynote - this time Fujitsu presenting its M10 system.
But the real news was Ellison presenting the new in-memory option for the Oracle database. Interestingly enough, the presentation was devoid of version numbers, and Oracle did not post a press release about the new capabilities either - which raises the question of why. Is more to come during the week, or is there a desire to keep things general for revenue recognition purposes? We will learn later.
A long time coming
In-memory technology has been important for Oracle for quite some time, starting with TimesTen in 2005. But it was always an option to solve a limited set of performance problems, not a way to run the entire database in memory. Credit for shipping a complete in-memory database and evangelizing the market goes to SAP with HANA - something Ellison doubted SAP could deliver. Well, SAP did deliver, and did well, so for about six months we have heard Ellison hinting that the next version of the Oracle database beyond 12c (12c R1?) would address in-memory technology, most recently on the Q1 earnings call the other week.
Interesting similarities
As an observer it's interesting to see how much both industry veterans - Ellison and Plattner - care about solving a performance problem that traditional databases could not address: the flexible crunching of large amounts of data, referred to today with the buzzword "analytical applications". Both gentlemen get a tad professorial when talking about it - Plattner with blackboard sessions, Ellison by walking through the fundamental challenges of database architecture. And both are passionate about the topic; Ellison was evidently in the best of spirits - winning two races at the America's Cup certainly helped, too.
An organic approach
The path Oracle has chosen to address in-memory is more organic, allowing customers to turn the in-memory feature on and off with what Ellison referred to as just a switch. Turn the switch on, have the DBA walk through three steps, and the database will take advantage of the in-memory option. What happens behind the scenes is that tables are loaded into memory and stored there in columnar format, and all future transactions of the application running against the database are written both to the new in-memory column store and to the traditional row store, which will most likely still reside on disk.
The key benefit for customers is that they do not have to change a line of code to get the benefits of in-memory: the system simply gets faster as more data is moved into memory, and once it is there, performance improves significantly, as a demo showed.
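Oracle showed no actual commands, so the following is purely an illustrative sketch of what such a switch could look like - the parameter, table name and size are my assumptions, not announced syntax:

    -- Step 1: reserve a slice of server memory for the new columnar store (size is illustrative)
    ALTER SYSTEM SET INMEMORY_SIZE = 16G SCOPE=SPFILE;
    -- Step 2: restart the instance so the new memory area gets allocated
    -- Step 3: flag the hot tables; the database loads them into the column store in the background
    ALTER TABLE sales INMEMORY;

From that point on, the optimizer would decide by itself whether a statement is answered from the column store or the row store - the application does not have to be touched.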
Key Benefits
Oracle is choosing a systemic approach to the in-memory problem, which is possible because Oracle owns the underlying infrastructure of the row-based database. Oracle knows which CRUD operations are being performed on its database and can route them to the in-memory store as configured.
Ironically, Ellison claimed that this will even accelerate the database - despite the dual writes. This is largely because expensive analytic indexes no longer need to be maintained: the database automatically directs queries on these tables to the in-memory column store, where, thanks to RAM speed, no index files are required.
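Again purely as an illustration - the index, table and column names below are made up - the cleanup could be as simple as:

    -- the analytic index that existed only to speed up reporting queries becomes redundant
    DROP INDEX sales_by_region_ix;
    -- the optimizer now answers this from the columnar copy of SALES held in memory
    SELECT region, SUM(amount) FROM sales GROUP BY region;

Fewer indexes in turn mean less work on every insert and update, which is where the claimed speed-up for transactional workloads would come from.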
This will allow customers to experiment with in-memory - by upgrading the memory on their database servers and seeing what benefits they can achieve with a partial move of data into memory.
The integrated play
And it would not be the Oracle of 2013 if it did not ship hardware to power the latest software move - and indeed Oracle has a number of Exa-servers available that are ideally tuned to operate large in-memory databases. Ironically, the hardware is available today; the software - no mention of availability (yet).
Questions remain
A lot of details remain to be clarified, and my hope is that the database summit on Wednesday, at the latest, will address them. As with all powerful software, the question is going to be the price of the switch, and for sure Oracle will not make it cheap. But Oracle will likely price it right to make it easy for customers to stay on Oracle rather than move to alternative products.
The ISV angle
As we all know, the largest SaaS vendor - Salesforce.com - struck a deal with Oracle to continue relying on Oracle 12c going forward. There was a lot of hoopla around this back in June, but Benioff backed the decision with tweets during the keynote, which almost had a feel of vindication. Now Salesforce.com can finally show why it decided to stay on the Oracle database.
And they will not be alone in that decision - it's also the first time a Microsoft executive is presenting at Oracle OpenWorld...
MyPOV
There is a parallel between the America's Cup and in-memory databases right now. Oracle is playing catch-up, and we all have to wonder why Oracle Team USA did not sail as fast from the start, and why Oracle let others (SAP) get a lead in the in-memory database game. But unlike sailing, where the puffs in San Francisco Bay may decide the outcome, in the in-memory database game customer adoption makes the difference - and there Oracle has made it technically easy for customers to follow. Let's see how easy Oracle will make it commercially... If Oracle gets this right, there may soon be more SAP customers running the Oracle in-memory option on the database side than running HANA.
You can also find the tweetstream of the keynote in this Storify here.