Category Archives: Architecture

Exadata X6

Blink and you might have missed it, but the Exadata X6 was officially announced today.

As has become the norm, Oracle have doubled down on the specs compared to the X5:

  • 2x disk capacity
  • 2x Flash capacity
  • 2x faster Flash
  • 25% faster CPUs
  • 13% faster DRAM

With the X6-2 machine, you still have Infiniband running at 40Gb/sec, but the compute nodes and the storage servers now have the following:

X6-2 Compute Node

  • 2x 22-core Broadwell CPUs
  • 256GB of DDR4 DRAM (expandable to 768GB)
  • 4x 600GB 10,000 RPM disks for local storage (expandable to 6)

High Capacity Storage Server

  • 2x 10-core Broadwell CPUs
  • 128GB of DDR4 DRAM
  • 12x 8TB 7,200 RPM Helium SAS3 disks
  • 4x 3.2TB NVMe PCIe 3.0 Flash cards

Extreme Flash Storage Server

  • 2x 10-core Broadwell CPUs
  • 128GB of DDR4 DRAM
  • 8x 3.2TB NVMe PCIe 3.0 Flash cards

What does all of that give you when it comes down to it?

Well, remember that the eighth-rack is the same as a quarter-rack, but you have access to half the cores and half the storage across the board (you still have two compute nodes and three storage servers):

High Capacity Eighth-Rack

  • 44-core compute nodes
  • 30-core storage servers
  • 144TB raw disk storage
  • 19.2TB Flash storage

Extreme Flash Eighth-Rack

  • 44-core compute nodes
  • 30-core storage servers
  • 38.4TB Flash storage
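If you want to sanity-check those eighth-rack numbers, they fall straight out of the per-server specs above. Here's a quick back-of-the-envelope calculation (my own arithmetic, assuming exactly half the cores, disks and flash cards per server are enabled):

```python
# Eighth-rack = quarter-rack hardware (2 compute nodes, 3 storage servers)
# with half the cores and half the storage enabled across the board.

CELLS = 3  # storage servers in a quarter/eighth-rack

# High Capacity cell: 12x 8TB disks and 4x 3.2TB flash cards, half enabled
hc_disk_tb = CELLS * (12 // 2) * 8        # raw disk
hc_flash_tb = CELLS * (4 // 2) * 3.2      # flash

# Extreme Flash cell: 8x 3.2TB flash cards, half enabled
ef_flash_tb = CELLS * (8 // 2) * 3.2

# Compute: 2 nodes x 2 sockets x 22 cores, half enabled
compute_cores = 2 * 2 * 22 // 2

print(hc_disk_tb, round(hc_flash_tb, 1), round(ef_flash_tb, 1), compute_cores)
# 144TB raw disk, 19.2TB flash (HC); 38.4TB flash (EF); 44 cores
```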

The minimum licensing requirement is 16 cores for the eighth-rack and 28 cores for the quarter-rack.

I’m sure you can read through the sales stuff yourself, but aside from the UUUUGE increase in hardware, two new features of the X6 really pop out for me.

Exadata now has the ability to preserve storage indexes through a storage cell reboot. Anyone who has had to support an older Exadata machine will remember just how big a deal that used to be: waiting for the storage indexes to be rebuilt would take hours, and it often required some major understanding on the part of the user population and management to get through the first day or so after maintenance.

Probably the biggest thing is that Oracle have introduced high availability quorum disks for the quarter-rack and eighth-rack machines. I blogged about this before as I thought it had the potential to be a real “gotcha” if you were expecting to run high redundancy diskgroups on anything less than a half-rack.

No longer.

Now, a copy of the quorum disk is stored locally on each database node, allowing you to lose a storage cell and still be able to maintain your high redundancy.

This is a particularly useful development when you remember that Oracle have doubled the size of the high-capacity disks from 4TB to 8TB. Why? Well, because rebalancing a bunch of 8TB disks is going to take longer than rebalancing the same number of 4TB disks.
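To put a rough shape on that intuition: rebalance time scales with the amount of data that has to move, so at a fixed rebalance throughput, doubling the disk size doubles the window. The throughput figure below is entirely made up for illustration; it is not an Exadata spec:

```python
# Crude model: rebalance time ~ data moved / rebalance throughput.
# The 5 TB/hour throughput is a made-up placeholder, not a real figure.

def rebalance_hours(disk_tb: float, disks: int, tb_per_hour: float = 5.0) -> float:
    """Hours to move the data held on a set of disks at a given throughput."""
    return disk_tb * disks / tb_per_hour

# Same number of disks, double the size -> double the rebalance window.
print(rebalance_hours(4, 12))   # 4TB disks
print(rebalance_hours(8, 12))   # 8TB disks: twice as long
```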

I’ll be going to Collaborate IOUG 2016 next week and I’m looking forward to hearing more about the new kit there.



Today’s Nugget: Oracle GoldenGate on E-Business Suite Databases

Throughout my work week, I often learn something new and unexpected on my travels. Instead of writing a full blog post about each subject, I figure I’ll occasionally post these as “nuggets” – bite-size chunks which are easily digested!

Here is today’s “nugget”:

Oracle GoldenGate is certified to capture data from E-Business Suite databases, but it cannot be used to apply data to E-Business Suite databases.

This includes writing back to the source database in an active-active configuration.




Exadata: why a half-rack is the “recommended minimum size”

Lots of shops dipped their toes in the Exadata water with a quarter-rack first of all.

(For those who are new to the Exadata party and don’t know of a world without elastic configurations, a quarter-rack is a machine with two compute nodes and three storage cells).

If you are / were one of those customers, you’ll probably have winced at the difference between the “raw” storage capacity and the “usable” storage capacity when you got to play with it for the first time.

While you could choose to configure your DATA and RECO diskgroups with HIGH redundancy in ASM, did you notice that you couldn’t do the same with the DBFS_DG / SYSTEM_DG?

Check out page 5 in this document about best practices for consolidation on Exadata.

“A slight HA disadvantage of an Oracle Exadata Database Machine X3-2 quarter or eighth rack is that there are insufficient Exadata cells for the voting disks to reside in any high redundancy disk group which can be worked around by expanding with 2 more Exadata cells. Voting disks require 5 failure groups or 5 Exadata cells; this is one of the main reasons why an Exadata half rack is the recommended minimum size.”

Basically, you need at least 5 storage cells for each Exadata environment if you want to have true “high availability” with your Exadata machine.

While quarter-rack machines have 3 storage cells, half-rack machines have 7 or 8 storage cells, depending on the model.

Let’s say that you have the model with 8 storage cells: if you split a half-rack machine equally, you’ll have 2x quarter-rack machines with 4 storage cells each, so you would need one more storage cell per machine to provide HA for the DBFS_DG / SYSTEM_DG diskgroup.
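The arithmetic behind the quote is simple enough to sketch (my own illustration, resting on the fact that each Exadata storage cell acts as one ASM failure group):

```python
# High-redundancy voting disks need 5 failure groups, and on Exadata
# each storage cell is one failure group.

VOTING_FAILGROUPS_HIGH = 5

def extra_cells_needed(cells: int) -> int:
    """Cells to add before voting disks can live in a high-redundancy diskgroup."""
    return max(0, VOTING_FAILGROUPS_HIGH - cells)

print(extra_cells_needed(3))  # quarter-rack: 2 cells short
print(extra_cells_needed(4))  # half of an 8-cell half-rack: 1 cell short
print(extra_cells_needed(7))  # half-rack: 0, hence the "recommended minimum"
```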

For some reason, this nugget escaped my attention until recently.  Even more reason to have a standby Exadata machine at your DR site …




Oracle Interactive Quick Reference

Remember those enormous posters of Oracle’s data dictionary views you used to see in DBA shops?

Here’s the Oracle 12c Interactive Quick Reference – more interactive and less need for pulp.

The Oracle 11g Interactive Quick Reference can be downloaded from here.




Exadata and OVM

Exadata and OVM.

OVM and Exadata.

It’s not been the best-kept secret in the world, but it is now a reality with Oracle’s new X5 engineered systems.

I don’t like it either, though I admit that I might just be a purist snob. As far as I can see, this might be useful in two possible scenarios:

1) Saving on additional cost option licensing.
Picture this: you have four databases on your Exadata machine and only one of them needs the {INSERT EXPENSIVE COST OPTION HERE} option.

Instead of buying, for instance, an Advanced Security license for all 144 cores, you might consider dividing up your X5-2 half-rack into four virtual machines – one for each database – and only license Advanced Security for the virtual machine on which that particular database resides.

Assuming each virtual machine is provisioned identically (with 36 cores each instead of the full 144), the cost of licensing ASO is 25% of what it would be if you licensed the entire machine.
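The per-core price doesn't actually matter for the argument; only the ratio does. A trivial sketch with the numbers from the example above:

```python
# License an option only on the VM that needs it: the saving is simply the
# ratio of VM cores to total cores, whatever the per-core price happens to be.

TOTAL_CORES = 144   # X5-2 half-rack database cores, as in the example
VM_CORES = 36       # one of four equally provisioned virtual machines

fraction = VM_CORES / TOTAL_CORES
print(f"Option licensed on {VM_CORES} of {TOTAL_CORES} cores "
      f"= {fraction:.0%} of the whole-machine cost")
```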

Some of those cost options are expensive, definitely. But why not consider a smaller, dedicated Exadata machine for that database, or an alternative such as the ODA?

2) Capacity on-demand licensing.
Let’s say that you KNOW you’re going to migrate more databases onto your Exadata machine in the future, but you’re not using its full capabilities to support the databases that are running there right now. Bear with me for argument’s sake…

With OVM, you’re able to license a minimum of 40% of the cores on your Exadata system. If you’re not getting close to fifth gear right now, but you know you will be at some point, you could use OVM to license in a “capacity on-demand” fashion and crank things up as your needs increase.
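As a sketch of that floor (the 40% minimum is from the post; rounding up to whole cores is my assumption):

```python
import math

# OVM capacity-on-demand: license a minimum of 40% of the machine's cores
# and grow towards 100% as more databases land on the machine.
MIN_FRACTION = 0.40

def min_licensed_cores(total_cores: int) -> int:
    """Smallest whole-core count satisfying the 40% licensing floor."""
    return math.ceil(total_cores * MIN_FRACTION)

print(min_licensed_cores(144))  # starting point on a 144-core half-rack
```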

Of course, given the exponential improvements that come with each new version of Exadata, wouldn’t you try your best to wait until a couple of months before you DID need the extra horsepower so you could buy the latest and greatest Exadata then?

Let’s say you DO eventually get to 100% usage: you still have that extra virtualization layer in the stack and whatever issues go with it, including having to maintain it. To remove it, one assumes that the machine would need to be rebuilt, which isn’t a particularly attractive option.

“Exadata is expensive”
I understand the “Exadata is expensive” argument, but I don’t really think this helps with that very much – you’re still laying down a big wad of cash when you buy the hardware, no matter how you slice the licensing up. Is it really going to be worth the hassle of that extra virtualization layer to save (and possibly only temporarily) on licenses?

Oddly, I think the new elastic configuration capability in X5 makes the argument harder to make: you could achieve the same thing by choosing a different hardware configuration and/or adding compute nodes or storage cells as your needs dictate.

I’m sure there’s a compelling reason out there for putting OVM on Exadata that I haven’t figured out yet; there usually is. Until then, I’m back to scratching my head…


Exadata X5 – Yet ANOTHER Level

A couple of weeks ago, I admitted my confusion and bemusement over Oracle’s cloud AND engineered systems strategy. Sometimes, IT workers can get very touchy over people thinking that they might not know EVERYTHING about EVERYTHING, but not I, apparently.

Not only did I scratch my head on my blog, but did so very publicly on LinkedIn too.  In all honesty, I really appreciated the input from some very smart people and I do understand the logic a lot more now.  Admitting that you don’t have the answer to every question is liberating sometimes and personally beneficial almost every time.

Basically, Oracle are going big on engineered systems.  If customers really are serious about migrating to THE CLOUD(TM) and have made a strategic decision to never, ever buy any hardware ever again – I often find that the most reasoned decision involves limiting your options on ideological grounds – Oracle will add these systems to their PaaS offering instead of selling them for on-site use.  Win-win.

It still doesn’t really tessellate perfectly for me, but at least it makes more sense now.  I’m sure you’ve all seen the data sheets by now, so here are a few pennies for my thoughts:

A full-rack can read and write 4m IOPS:  I presume this is four MILLION IOPS, which is a seriously impressive number. To put it into context, the X3-2 quarter-rack was rated for 6,000 IOPS!

The Oracle Database Appliance now comes with FlashCache and InfiniBand:  which should make the ODA worthy of very serious consideration for a lot of small-to-medium-sized enterprises.

Goodbye High Performance drives:  they’ve been replaced with a flash-only option.  Not only is it Flash, but it’s “Extreme Flash“, no less.

Do I trust all-Flash storage?  No.
Since moving off V2 and leaving Bug Central, have I encountered any problems whatsoever with the FlashCache?  No.
Can I justify my distrust in Flash storage?  Without delving into personality defects, probably not.

There’s a “gotcha” with the Extreme Flash drives:  the license costs are DOUBLE those of the High Capacity drives. I don’t understand the reasoning behind this, unless Oracle are specifically targeting clients for whom money is no object (and they probably are, in a way).

Configuration elasticity is cool:  you can pick and choose how many compute nodes / storage cells you buy.  I remember in the days of the V1 and V2 when you couldn’t even add more storage to an existing machine, the rationale being that you’d mess the scaling all up (offloading, etc).

It’s a really great move for Oracle to make this very flexible and will go some way to silencing those who claim that Exadata is monolithic (and, don’t forget, expensive).

You can now virtualize your Exadata machine with OVM:  I haven’t had the best of luck ever getting OVM to work properly, so I’ll defer my views on that for the time being, though the purist in me thinks they’re dumbing down the package by offering virtualization at all.  Isn’t that what the Exalytics machine is for?

OK, fine, they want to bring Exadata to the masses and it’s an extension of the “consolidation” drive they’ve been on for a couple of years, but it’s a bit like buying a top-end Cadillac and not wanting to use high-grade gasoline because it’s too expensive.

Other cool-sounding new Exadata features that made my ears prick up:

  • faster pure columnar flash caching
  • database snapshots
  • flash cache resource management – via the ever-improving IORM
  • near-instant server death detection – this SOUNDS badass, but could be a bit of a sales gimmick; don’t they already do that?
  • I/O latency capping – if access to one copy of the data is “slow”, it’ll try the other copy/copies instead.
  • offload of JSON and XML analytics – cool, I presume this is offloaded to the cells.

I didn’t have the chance to listen to Oracle’s vision of the “data center of the future” – I think it had something to do with their Virtual Compute Appliance competing against Cisco’s offerings and “twice the price at half the cost“.

Oracle’s problem is still going to be persuading customers to consider VALUE instead of COST.  “Exadata is outrageously expensive” is something I’m sure everyone hears all the time and to claim it’s “cheap” isn’t going to work because managers with sign-off approval can count.

Is it expensive?  Of course.  Is it worth it?  Yes, if you need it.

This is why I’m unconvinced that customers will buy an Exadata machine and then virtualize it.  The customers who are seriously considering Exadata are likely doing so because they NEED that extreme performance.  You can make a valid argument for taking advantage of in-house expertise once your DBA team has their foot in the door – best of breed, largest pool of talent and knowledge, etc.

However, so many companies are focusing solely on the short-term and some exclude their SMEs from strategic discussions altogether.  Getting to a point where the DBA team is able to enforce Exadata as the gold standard in an IT organization is going to be incredibly difficult without some sort of sea change across the entire industry and … well, the whole economy, really.

I’m not sure what caused it, but I came away with a feeling that these major leaps in performance were very distant to me. Maybe it’s because I don’t personally see much evidence of companies INVESTING in technology, but still attempting to do “more with less” (see all THE CLOUD(TM) hype).

I’m really not convinced there is much appetite out there to maximize data as an asset, or to gain a competitive advantage through greatly enhanced business functionality, so much as there is to minimize IT expenditure as much as possible.  Cost still seems to be the exclusive driver behind business decisions, which is a real shame because it’s difficult to imagine a better time to invest in enterprise data than right now.

Said the DBA, of course.


You WILL use container databases … and you WILL like it!

For those who haven’t climbed aboard the 12c train yet, note that using a “traditional” or non-CDB architecture is now deprecated by Oracle.

That does not mean you have to use the multi-tenant cost option: container databases with a single pluggable database (“single-tenant”) will be OK.  The cost option only comes into play whenever you attach more than one pluggable database to a container database (“multi-tenant”).

I’m still surprised that Oracle charge you for the multi-tenant option, to be honest.

Earlier in the week, Oracle released a new patch set. I’m not sure why Oracle didn’t call it “12c Release 2 / 12.2”, as it includes a lot of major new features rather than just bug fixes, but I’m sure their marketing types had a reason.

In reality, this means that “12c Release 2, Patch Set 1” is now available. There are a lot of DBAs out there, myself included, who adopt a “wait until Release 2, Patch Set 1” approach before getting serious about upgrading to a new version. If you’re one of those DBAs, happy upgrading!


Microsoft’s Analytics Platform System Appliance

In the last couple of months, Microsoft have repackaged their Parallel Data Warehouse appliance and now offer their new Analytics Platform System (APS) instead.

APS is a turnkey/black box appliance which offers the PDW, a Hadoop cluster (HDInsight, an implementation of Hortonworks) and an integration feature like Oracle’s Big Data SQL – called PolyBase – to allow the PDW to use the Hadoop data.

Oracle Big Data SQL Primer

What is Big Data SQL?
Oracle Big Data SQL runs on the Big Data Appliance and allows an Oracle database to run one SQL query to pull data from disparate sources such as Hadoop, NoSQL and relational databases.

