A couple of weeks ago, I admitted my confusion and bemusement over Oracle’s cloud AND engineered systems strategy. Sometimes, IT workers can get very touchy over people thinking that they might not know EVERYTHING about EVERYTHING, but not I, apparently.
Not only did I scratch my head on my blog, but I did so very publicly on LinkedIn too. In all honesty, I really appreciated the input from some very smart people and I do understand the logic a lot more now. Admitting that you don’t have the answer to every question is liberating sometimes and personally beneficial almost every time.
Basically, Oracle are going big on engineered systems. If customers really are serious about migrating to THE CLOUD(TM) and have made a strategic decision to never, ever buy any hardware ever again – I often find that the most reasoned decision involves limiting your options on ideological grounds – Oracle will add these systems to their PaaS offering instead of selling them for on-site use. Win-win.
It still doesn’t tessellate perfectly for me, but at least it makes more sense now. I’m sure you’ve all seen the data sheets by now, so here’s a few pennies for my thoughts:
A full-rack can read and write 4m IOPS: I presume this is four MILLION IOPS, which is a seriously impressive number. To put it into context, the X3-2 quarter-rack was rated for 6,000 IOPS!
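Just to put that jump into perspective, here is a quick back-of-the-envelope calculation on the two quoted figures. Bear in mind these are marketing numbers, not benchmarks, and a quarter-rack naturally has less hardware than a full rack:

```python
# Quoted figures from the data sheets, as mentioned above.
x5_full_rack_iops = 4_000_000    # full-rack read/write IOPS (presumed "4m")
x3_quarter_rack_iops = 6_000     # X3-2 quarter-rack rating

ratio = x5_full_rack_iops / x3_quarter_rack_iops
print(f"~{ratio:.0f}x the quoted IOPS")  # roughly 667x

# Even crudely normalizing for the fact that a full rack is four
# quarter-racks' worth of kit, the per-unit jump is still enormous.
print(f"~{ratio / 4:.0f}x per quarter-rack equivalent")
```

However you slice it, that is a generational leap rather than an incremental bump.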
The Oracle Database Appliance now comes with FlashCache and InfiniBand: which should make the ODA worthy of very serious consideration for a lot of small-to-medium-sized enterprises.
Goodbye High Performance drives: they’ve been replaced with a flash-only option. Not only is it Flash, but it’s “Extreme Flash”, no less.
Do I trust all-Flash storage? No.
Since moving off V2 and leaving Bug Central, have I encountered any problems whatsoever with the FlashCache? No.
Can I justify my distrust in Flash storage? Without delving into personality defects, probably not.
There’s a “gotcha” with the Extreme Flash drives: the license costs are DOUBLE those of High Capacity drives. I don’t understand the reasoning behind this, unless Oracle are specifically targeting clients for whom money is no object (and they probably ARE, in a way).
Configuration elasticity is cool: you can pick and choose how many compute nodes / storage cells you buy. I remember in the days of the V1 and V2 when you couldn’t even add more storage to an existing machine. The rationale was that you’d mess up the scaling (offloading, etc.).
Making this very flexible is a really great move for Oracle and will go some way towards silencing those who claim that Exadata is monolithic (and, don’t forget, expensive).
You can now virtualize your Exadata machine with OVM: I haven’t had the best of luck ever getting OVM to work properly, so I’ll reserve my views on that for the time being, though the purist in me thinks they’re dumbing down the package by offering virtualization at all. Isn’t that what the Exalytics machine is for?
OK, fine, they want to bring Exadata to the masses and it’s an extension of the “consolidation” drive they’ve been on for a couple of years, but it’s a bit like buying a top-end Cadillac and not wanting to use high-grade gasoline because it’s too expensive.
Other cool-sounding new Exadata features that made my ears prick up:
- faster pure columnar flash caching
- database snapshots
- flash cache resource management – via the ever-improving IORM
- near-instant server death detection – this SOUNDS badass, but could be a bit of a sales gimmick; don’t they already do that?
- I/O latency capping – if access to one copy of the data is “slow”, it’ll try the other copy/copies instead.
- offload of JSON and XML analytics – cool, I presume this is offloaded to the cells.
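The I/O latency capping feature is worth dwelling on for a second. As I understand the description, the idea is simply: if a read against one mirror copy is taking too long, give up on it and redirect the read to another copy. A minimal sketch of that pattern in Python – with entirely made-up names and an illustrative 50ms cap, not anything resembling Oracle’s actual implementation – might look like this:

```python
import concurrent.futures
import time

# Illustrative latency cap; the real value would be policy-driven.
LATENCY_CAP_SECONDS = 0.05

def read_block(copy_id, delay):
    """Stand-in for a physical read from one mirror copy."""
    time.sleep(delay)  # simulate disk latency
    return f"data-from-copy-{copy_id}"

def capped_read(copies):
    """Try each mirror copy in turn, abandoning any read that
    exceeds the latency cap and redirecting to the next copy."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        for copy_id, delay in copies:
            future = pool.submit(read_block, copy_id, delay)
            try:
                return future.result(timeout=LATENCY_CAP_SECONDS)
            except concurrent.futures.TimeoutError:
                continue  # this copy is "slow"; try the next one
    raise IOError("all copies exceeded the latency cap")

# First copy is slow (a flaky disk, say); the second responds in time.
print(capped_read([(1, 0.5), (2, 0.01)]))  # data-from-copy-2
```

The interesting design question is where the cap sits: too low and you hammer the surviving copies with redirected reads; too high and a dying disk still drags your response times down.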
I didn’t have the chance to listen to Oracle’s vision of the “data center of the future” – I think it had something to do with their Virtual Compute Appliance competing against Cisco’s offerings and “twice the price at half the cost”.
Oracle’s problem is still going to be persuading customers to consider VALUE instead of COST. “Exadata is outrageously expensive” is something I’m sure everyone hears all the time and to claim it’s “cheap” isn’t going to work because managers with sign-off approval can count.
Is it expensive? Of course. Is it worth it? Yes, if you need it.
This is why I’m unconvinced that customers will buy an Exadata machine and then virtualize it. The customers who are seriously considering Exadata are likely doing so because they NEED that extreme performance. You can make a valid argument for taking advantage of in-house expertise once your DBA team has their foot in the door – best of breed, largest pool of talent and knowledge, etc.
However, so many companies are focusing solely on the short-term and some exclude their SMEs from strategic discussions altogether. Getting to a point where the DBA team is able to enforce Exadata as the gold standard in an IT organization is going to be incredibly difficult without some sort of sea change across the entire industry and … well, the whole economy, really.
I’m not sure what caused it, but I came away with the feeling that these major leaps in performance were very distant from me. Maybe it’s because I don’t personally see much evidence of companies INVESTING in technology, but still attempting to do “more with less” (see all THE CLOUD(TM) hype).
I’m really not convinced there is much appetite out there to maximize data as an asset, or to gain a competitive advantage through greatly enhancing business functionality, so much as there is to minimize IT expenditure as much as possible. Cost still seems to be the exclusive driver behind business decisions, which is a real shame because it’s difficult to imagine a BETTER time to invest in enterprise data than right now.
Said the DBA, of course.