Exadata X6

Blink and you might have missed it, but the Exadata X6 was officially announced today.

As has become the norm, Oracle have doubled up on most of the specs compared to the X5:

  • 2x disk capacity
  • 2x Flash capacity
  • 2x faster Flash
  • 25% faster CPUs
  • 13% faster DRAM

With the X6-2 machine, you still have InfiniBand running at 40Gb/sec, but the compute nodes and the storage servers now have the following:

X6-2 Compute Node

  • 2x 22-core Broadwell CPUs
  • 256GB of DDR4 DRAM (expandable to 768GB)
  • 4x 600GB 10,000 RPM disks for local storage (expandable to 6)

High Capacity Storage Server

  • 2x 10-core Broadwell CPUs
  • 128GB of DDR4 DRAM
  • 12x 8TB 7,200 RPM Helium SAS3 disks
  • 4x 3.2TB NVMe PCIe 3.0 Flash cards

Extreme Flash Storage Server

  • 2x 10-core Broadwell CPUs
  • 128GB of DDR4 DRAM
  • 8x 3.2TB NVMe PCIe 3.0 Flash cards

What does all of that give you when it comes down to it?

Well, remember that the eighth-rack is the same as a quarter-rack, but you have access to half the cores and half the storage across the board (you still have two compute nodes and three storage servers):

High Capacity Eighth-Rack

  • 44-core compute nodes
  • 30-core storage servers
  • 144TB raw disk storage
  • 19.2TB Flash storage

Extreme Flash Eighth-Rack

  • 44-core compute nodes
  • 30-core storage servers
  • 38.4TB Flash storage

The minimum licensing requirement is 16 cores for the eighth-rack and 28 cores for the quarter-rack.
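If you want to sanity-check those eighth-rack numbers against the full server specs above, the arithmetic is simple enough. Here's a quick back-of-envelope sketch in Python; the only assumption is the half-cores/half-storage eighth-rack rule just described.

```python
# Sanity-check of the eighth-rack figures, derived from the full X6-2 server
# specs listed above. An eighth-rack is 2 compute nodes + 3 storage cells,
# with half the cores and half the storage available.

COMPUTE_NODES, STORAGE_CELLS = 2, 3

compute_cores = 2 * 22              # per compute node: 2x 22-core Broadwell
cell_cores = 2 * 10                 # per storage cell: 2x 10-core Broadwell
hc_disks, hc_disk_tb = 12, 8        # High Capacity cell: 12x 8TB disks
hc_flash, flash_card_tb = 4, 3.2    # High Capacity cell: 4x 3.2TB flash cards
ef_flash = 8                        # Extreme Flash cell: 8x 3.2TB flash cards

print("Compute cores :", COMPUTE_NODES * compute_cores // 2)                       # 44
print("Storage cores :", STORAGE_CELLS * cell_cores // 2)                          # 30
print("HC raw disk TB:", STORAGE_CELLS * hc_disks // 2 * hc_disk_tb)               # 144
print("HC flash TB   :", round(STORAGE_CELLS * hc_flash // 2 * flash_card_tb, 1))  # 19.2
print("EF flash TB   :", round(STORAGE_CELLS * ef_flash // 2 * flash_card_tb, 1))  # 38.4
```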

I’m sure you can read through the sales stuff yourself, but aside from the UUUUGE increase in hardware, two new features of the X6 really pop out for me.

Exadata now has the ability to preserve storage indexes through a storage cell reboot. Anyone who has had to support an older Exadata machine will remember just how big a deal that used to be: waiting for the storage indexes to be rebuilt could take hours, and it often took a fair amount of understanding on the part of the user population and management to get through the first day or so after some maintenance.

Probably the biggest thing is that Oracle have introduced high availability quorum disks for the quarter-rack and eighth-rack machines. I blogged about this before as I thought it had the potential to be a real “gotcha” if you were expecting to run high redundancy diskgroups on anything less than a half-rack.

No longer.

Now, a copy of the quorum disk is stored locally on each database node, allowing you to lose a storage cell and still be able to maintain your high redundancy.

This is a particularly useful development when you remember that Oracle have doubled the size of the high-capacity disks from 4TB to 8TB. Why? Well, because rebalancing a bunch of 8TB disks is going to take longer than rebalancing the same number of 4TB disks.
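Just to put a rough number on that (these are illustrative figures of mine, not Oracle's): assuming a fixed rebalance throughput of around 500MB/sec and a disk that is 80% full, doubling the disk size doubles the data that has to be re-mirrored and therefore roughly doubles the rebalance window.

```python
# Back-of-envelope rebalance estimate. The 500 MB/s throughput and 80% fill
# are illustrative assumptions of mine, not Oracle figures -- real timings
# depend on the ASM power limit, the workload and the hardware.

def rebalance_hours(disk_size_tb, fill_factor=0.8, throughput_mb_s=500):
    """Rough hours needed to re-mirror the contents of one failed disk."""
    data_mb = disk_size_tb * 1024 * 1024 * fill_factor   # TB -> MB
    return data_mb / throughput_mb_s / 3600

for size_tb in (4, 8):
    print(f"{size_tb}TB disk: ~{rebalance_hours(size_tb):.1f} hours")

# 4TB disk: ~1.9 hours
# 8TB disk: ~3.7 hours -- double the disk size, double the rebalance window
```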

I’ll be going to Collaborate IOUG 2016 next week and I’m looking forward to hearing more about the new kit there.

Mark


6 thoughts on “Exadata X6”

  1. Dan Norris says:

    A few comments:
    1. (from the data sheet) Eighth Rack HC storage servers have half the cores enabled and half the disks and flash cards removed. Logically, there are still 2 DB nodes and 3 cells as you mentioned, but the HW is slightly different now for HC configs.
    2. Quorum disks don’t allow you to maintain HIGH redundancy when a storage cell is lost. When a 3-cell config uses HIGH and one cell goes offline, the redundancy is degraded. However, quorum disks do allow you to make all DGs HIGH redundancy on 3-cell configs and that was not possible before. Quorum disks don’t store data, they are used by the clusterware to maintain cluster membership. Think of them as another way that the cluster can track which members are up/available. They are small (relatively speaking) and don’t hold any DB data.


  2. Philip Newlan says:

    I believe compute node local storage is expandable to 8 disks


  3. Hi, I think, however, that a lot of the pain of a storage cell reboot was actually because the whole flash cache got wiped, unless you were running in write-back flash cache mode, which is probably still not the default (which irks me mightily). Storage indexes might take a little time to warm up, but the flash cache takes much longer, and that time increases every time the flash cache size is doubled.


  4. Anonymous says:

    Compute node (DB Server) local storage is now expandable to 8 disks. Since quorum disks are small (~200 MB), you can have many high redundancy diskgroups with the existing disks in the compute nodes. The main purpose of enabling disk expansion for compute nodes is virtualized environments: since VMs are stored locally, this disk expansion enables more VMs per compute node.


  5. Mat Steinberg says:

    Reduction in Patching Time

    In addition to quorum disks for high redundancy on 1/8th and 1/4 racks, one of the key features of this software release (available Feb 2013) is dramatic reduction in patching time. Storage server patching has been reduced by 2.5x per server.


  6. KenP says:

    Hi, Mark, what could be the pitfalls of adding just the X6-2 storage server to an existing X5 setup?

    Thanks,
    Ken
