For those who missed it, Oracle has just announced Exadata V2, its pre-built database “machine”. There will be plenty of detailed coverage of Exadata V2, so I won’t attempt to replicate that; I do, however, have a couple of initial thoughts I would like to share. Exadata V1 was built using HP equipment; Exadata V2 uses Sun. The main addition in Exadata V2 appears to be an extra tier in the memory hierarchy: a flash cache. Oracle is very quick to point out that this is not flash disk but flash memory, Sun’s FlashFire technology. (Flash disk, or SSDs, was always going to be a transitional technology: flash memory has none of the physical constraints of moving-parts disk, so the whole “disk” concept makes little sense for flash beyond fitting easily into current architectures.)
The new memory layer (processor caches -> DRAM -> flash cache -> disk), coupled with Oracle’s algorithms to use the flash cache layer effectively, brings a performance benefit to the solution (plus all the other improvements that 12 months of hardware innovation bring: faster CPUs, more memory, etc.).
My initial thoughts are:
- Kudos to Oracle. They are the first vendor to really bring this collection of leading-edge technology together in a semi-mainstream way. Flash caches, InfiniBand interconnects, and DBMS optimizations for flash haven’t really surfaced anywhere outside of startups yet.
- So what happens to Exadata V1 customers using the HP solution? It is only about a year old. Some analysts suggest there have been only modest sales of Exadata V1 (I am not an analyst, so I don’t really know). So why would HP continue to support a platform that will generate no new sales, when potentially only a limited number of customers have it today? Possibly Oracle will offer attractive terms to move existing HP Exadata V1 customers to the Sun-based Exadata V2.
- It is a preconfigured solution that you buy in certain size configurations: small, half rack, full rack, multiple racks. I think Larry said that three racks will give you a petabyte of storage capacity. This is fine, except they are targeting it at both OLTP and data warehousing workloads. It seems odd that to get very high computational resources for transaction processing, you would also get massive volumes of potentially unnecessary storage capacity. It will be interesting to see whether they allow the balance between processing and storage to be adjusted as part of configuration.
I have had some questions along the lines of “isn’t this back to the one-size-fits-all approach?” Well, yes it is, but Oracle never really moved away from that in the core DBMS. My understanding is that Exadata V1 was still the general-purpose Oracle DBMS and RAC, but on a hardware platform optimized for accessing large data sets (making it a data warehousing solution). With FlashFire, the hardware can now also sustain high levels of random I/O (I think 1 million random I/Os per second was quoted), which makes the hardware platform general-purpose as well.
One interesting question is whether, with Sun under Oracle, other DBMS vendors can buy the exact same hardware configuration from Sun and optimize their own DBMS for flash. Even if they can, it may be difficult to do so in a way that is price-competitive. And will competing DBMS vendors really want to help fill Oracle’s pockets further?
If we expect to see more of this alignment between DBMS vendors and hardware, where does that leave Microsoft? Maybe HP is already peeling the Exadata V1 logos off its racks and sticking Microsoft Madison logos in their place.
Oracle has put out a FAQ that answers some of these questions.