Tag Archives: Oracle

Today’s Nugget: Materialized Views

As I’m still trying to get my feet under the desk at my new gig, here’s another nugget in place of a real blog entry. While troubleshooting for a client, I noted that there are three types of materialized view. I knew this before, sort of, but the troubleshooting exercise was a great aide-mémoire.

The three types of materialized views:

  1. Read-only
  2. Updateable
  3. Writeable

The read-only MV omits the FOR UPDATE clause in its DDL during creation and does not permit DML.

The updateable MV includes the FOR UPDATE clause in its DDL and is included in a materialized view group.  This allows changes made to the MV to be pushed back to the ‘master’ during a refresh.

The writeable MV includes the FOR UPDATE clause in its DDL, but does not belong to a materialized view group.  All changes made to the MV are lost during a refresh.
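The distinction boils down to a clause and a group membership, so here is a rough sketch of the DDL — the table name emp and the database link master_site are made up for illustration:

```sql
-- Sketch only: emp and the master_site database link are hypothetical.

-- 1. Read-only: no FOR UPDATE clause; local DML is not permitted.
CREATE MATERIALIZED VIEW emp_ro
  REFRESH FAST
  AS SELECT * FROM emp@master_site;

-- 2. Updateable: FOR UPDATE, plus membership of a materialized view
--    group (set up via DBMS_REPCAT, omitted here), so local changes
--    are pushed back to the master during a refresh.
CREATE MATERIALIZED VIEW emp_upd
  REFRESH FAST
  FOR UPDATE
  AS SELECT * FROM emp@master_site;

-- 3. Writeable: the same FOR UPDATE clause, but never added to a
--    group, so any local DML is discarded at the next refresh.
```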

 


Exadata X5 – Yet ANOTHER Level

A couple of weeks ago, I admitted my confusion and bemusement over Oracle’s cloud AND engineered systems strategy. Sometimes, IT workers can get very touchy over people thinking that they might not know EVERYTHING about EVERYTHING, but not I, apparently.

Not only did I scratch my head on my blog, but did so very publicly on LinkedIn too.  In all honesty, I really appreciated the input from some very smart people and I do understand the logic a lot more now.  Admitting that you don’t have the answer to every question is liberating sometimes and personally beneficial almost every time.

Basically, Oracle are going big on engineered systems.  If customers really are serious about migrating to THE CLOUD(TM) and have made a strategic decision to never, ever buy any hardware ever again – I often find that the most reasoned decision involves limiting your options on ideological grounds – Oracle will add these systems to their PaaS offering instead of selling them for on-site use.  Win-win.

It still doesn’t really tessellate perfectly for me, but at least it makes more sense now.  I’m sure you’ve all seen the data sheets by now, so here’s a few pennies for my thoughts:

A full-rack can read and write 4m IOPS:  I presume this is four MILLION IOPS, which is a seriously impressive number. To put it into context, the X3-2 quarter-rack was rated for 6,000 IOPS!

The Oracle Database Appliance now comes with FlashCache and InfiniBand:  which should make the ODA worthy of very serious consideration for a lot of small-to-medium-sized enterprises.

Goodbye High Performance drives:  they’ve been replaced with a flash-only option.  Not only is it Flash, but it’s “Extreme Flash”, no less.

Do I trust all-Flash storage?  No.
Since moving off V2 and leaving Bug Central, have I encountered any problems whatsoever with the FlashCache?  No.
Can I justify my distrust in Flash storage?  Without delving into personality defects, probably not.

There’s a “gotcha” with the Extreme Flash drives:  the license costs are DOUBLE that of High Capacity drives. I don’t understand the reasoning behind this, unless Oracle are specifically targeting clients for whom money is no object (and they probably ARE, in a way).

Configuration elasticity is cool:  you can pick and choose how many compute nodes / storage cells you buy.  I do remember in the days of the V1 and V2 when you couldn’t even add more storage to an existing machine.  The rationale being that you’d mess the scaling all up (offloading, etc).

It’s a really great move for Oracle to make this very flexible and will go some way to silencing those who claim that Exadata is monolithic (and, don’t forget, expensive).

You can now virtualize your Exadata machine with OVM:  I haven’t had the best of luck ever getting OVM to work properly, so I’ll defer my views on that for the time being, though the purist in me thinks they’re dumbing down the package by offering virtualization at all.  Isn’t that what the Exalytics machine is for?

OK, fine, they want to bring Exadata to the masses and it’s an extension of the “consolidation” drive they’ve been on for a couple of years, but it’s a bit like buying a top-end Cadillac and not wanting to use high-grade gasoline because it’s too expensive.

Other cool-sounding new Exadata features that made my ears prick up:

  • faster pure columnar flash caching
  • database snapshots
  • flash cache resource management – via the ever-improving IORM
  • near-instant server death detection – this SOUNDS badass, but could be a bit of a sales gimmick; don’t they already do that?
  • I/O latency capping – if access to one copy of the data is “slow”, it’ll try the other copy/copies instead.
  • offload of JSON and XML analytics – cool, I presume this is offloaded to the cells.

I didn’t have the chance to listen to Oracle’s vision of the “data center of the future” – I think it had something to do with their Virtual Compute Appliance competing against Cisco’s offerings and “twice the price at half the cost“.

Oracle’s problem is still going to be persuading customers to consider VALUE instead of COST.  “Exadata is outrageously expensive” is something I’m sure everyone hears all the time and to claim it’s “cheap” isn’t going to work because managers with sign-off approval can count.

Is it expensive?  Of course.  Is it worth it?  Yes, if you need it.

This is why I’m unconvinced that customers will buy an Exadata machine and then virtualize it.  The customers who are seriously considering Exadata are likely doing so because they NEED that extreme performance.  You can make a valid argument for taking advantage of in-house expertise once your DBA team has their foot in the door – best of breed, largest pool of talent and knowledge, etc.

However, so many companies are focusing solely on the short-term and some exclude their SMEs from strategic discussions altogether.  Getting to a point where the DBA team is able to enforce Exadata as the gold standard in an IT organization is going to be incredibly difficult without some sort of sea change across the entire industry and … well, the whole economy, really.

I’m not sure what caused it, but I came away with a feeling that these major leaps in performance were very distant to me. Maybe it’s because I don’t personally see much evidence of companies INVESTING in technology, but still attempting to do “more with less” (see all THE CLOUD(TM) hype).

I’m really not convinced there is much appetite out there to maximize data as an asset or to gain a competitive advantage through greatly enhancing business functionality so much as there is to minimize IT expenditure as much as possible.  Cost still seems to be the exclusive driver behind business decisions, which is a real shame because it’s difficult to imagine a BETTER time to invest in enterprise data than right now.

Said the DBA, of course.


Data Guard Licensing

BREAKING NEWS: Oracle does NOT charge you for an advanced feature which is crucial in providing disaster recovery and is a key part of their Maximum Availability Architecture.

Today, I found out that Oracle Data Guard is NOT a cost option in its own right (like RAC, partitioning, etc), but that the Enterprise Edition “includes” it.

There is no entry in the Price List for “Data Guard”.

I could have SWORN I’d seen it before – many times, in fact. I’ve been working with Data Guard or its predecessor for 10 years, so this came as a bit of a surprise.

But then again, so did the fact that Oracle DO charge for the 12c multi-tenancy option.
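As an aside, if you ever want to see which cost options a database has actually exercised, the DBA_FEATURE_USAGE_STATISTICS dictionary view is a decent first stop. A sketch only – licensing questions always deserve a read of the actual Price List (and probably a chat with your LMS contact):

```sql
-- Which separately licensed options/features has this database
-- actually used? (Standard data dictionary view.)
SELECT name,
       version,
       detected_usages,
       currently_used,
       last_usage_date
  FROM dba_feature_usage_statistics
 WHERE detected_usages > 0
 ORDER BY name;
```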


OLTP Compression vs. Single Row Inserts

Day 3 of my new gig and I hit a rare bug with OLTP compression (new in 11g) while executing single-row INSERT statements.

Luckily, my new colleagues are excellent notekeepers as well as excellent DBAs – they had seen it before and quickly found the workaround they’d used previously. Basically, the session hung while trying to find free space in which to put the new block.

And, yes, this was on an Exadata box, so I was able to recommend EHCC as an alternative to OLTP compression 🙂
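For reference, the two flavours are just different table_compression clauses – a sketch with made-up table names (the EHCC variants require Exadata storage):

```sql
-- Hypothetical table, for illustration only.
-- OLTP compression (Advanced Compression option): rows are meant to
-- stay compressed through conventional, single-row DML.
CREATE TABLE orders_oltp (
  order_id  NUMBER,
  status    VARCHAR2(10)
) COMPRESS FOR OLTP;

-- EHCC (Exadata storage): columnar compression units, really intended
-- for direct-path/bulk loads rather than row-by-row inserts.
CREATE TABLE orders_hcc (
  order_id  NUMBER,
  status    VARCHAR2(10)
) COMPRESS FOR QUERY HIGH;
```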


Oracle’s HA Service Level Tiers

This white paper on HA Best Practices and DBaaS is pretty interesting, especially for its definitions of “HA Service Level Tiers”:

 

BRONZE

  • Single-instance database.
  • DR: Requires restore from the last backup, potential for data loss.

SILVER

  • RAC database.
  • DR: Requires restore from the last backup (with potential for data loss).

GOLD

  • RAC database with Data Guard and GoldenGate.
  • DR: Real-time failover to standby database (with zero or near-zero data loss).

PLATINUM

  • RAC database with Data Guard, GoldenGate and Application Continuity.
  • DR: In-flight transactions are preserved (with zero data loss).

 

The “Platinum” service level, naturally, requires some seriously impressive kit. I wonder what “Titanium” or “Palladium” will eventually turn out to need 🙂

 


CPU and QFSDP for April 2014 Availability

I’m sure you probably got the memo that the Critical Patch Update for April 2014 is now available.

For non-Exadata databases, this patch upgrades your database software to:

  • 11.2.0.3.10
  • 11.2.0.4.2
  • 12.1.0.1.3

For Exadata machines, the associated Quarterly Full Stack Download Patch for April 2014 (Patch 18370231) upgrades your software to:

  • Exadata Storage Server: 11.2.3.3.0
  • DB Node Update Utility: 3.20
  • PDU Firmware: 1.06
  • Database/Grid Infrastructure: 11.2.0.3.23 (Bundle Patch 23)
  • OPatch: 11.2.0.3.6
  • OPlan: 12.1.0.1.5

It is also recommended that your Enterprise Manager be upgraded to 12.1.0.3 and that you apply the latest Exadata and database plugins (12.1.0.5).

While browsing through MOS for details on this patch, I found this neat reference note:

  • Quick Reference to Patch Numbers for Database PSU, SPU(CPU), Bundle Patches and Patchsets (MOS 1454618.1)

A very handy reference point for all the patch numbers and download links. Of course, the best thing about it is that you don’t have to go searching for the correct patch number yourself anymore 🙂

 


Exadata and the OpenSSL/”HeartBleed” Exploit

Oracle have published MOS 1645479.1 which describes the impact of the OpenSSL/”HeartBleed” exploit on their products.

It appears that the individual components of Exadata – with the exception of OEM Cloud/Grid Control – are NOT impacted by the OpenSSL/HeartBleed bug.

Obviously, this depends on your software stack, so I urge you to read 1645479.1 as soon as possible.

Exadata-related products which, while using OpenSSL, were never vulnerable:

  • Audit Vault
  • Exadata (prod 2546)
  • Exalogic
  • ILOM 3.2.2 and earlier
  • NM2 IB switches
  • NM2-36P InfiniBand switches
  • Oracle Linux 5 (watch out for EL 6 – this IS vulnerable, but has a fix!)
  • Oracle Secure Backup 10.2 and 10.3
  • Oracle ZFS Storage Appliance
  • Sun System Firmware

Exadata-related products which are likely vulnerable but have no fixes yet available:

  • EM Cloud Control
  • EM Grid Control

Exadata-related products which do NOT include OpenSSL:

  • Database
  • JavaVM
  • Linux 5

Oracle’s “Nightmare”?

Over the last six weeks or so, I’ve been voraciously reading about Big Data, NoSQL, Hadoop and other emerging data management technologies which are clearly The Next Big Thing™ in IT. Personally, I find it fascinating to learn/think about how we will integrate different volumes and scales of data – both structured and unstructured, big and small – to give our organizations a clear, competitive edge in their industries.

As with all Next Big Things™, there are a lot of tremendous blogs and white papers written by some extremely smart and insightful people – and there are a lot of articles in “trade mags” which do nothing but fuel the runaway hype.


I don’t often learn of major bugs with incremental backups …

… but when I do, it’s always on a Friday afternoon.

During a Friday meeting, some friendly Oracle ExCite types directed my attention to this GEM of a bug:

Bug 16057129 – Exadata cell optimized incremental backup can skip some blocks to backup

If your incremental backups are “cell-optimized” – which they are by default in Exadata – they can skip blocks if a database file grows in size during the backup.

If you’re taking incremental backups with RMAN of your Exadata databases, have not applied the one-off patch for this bug or the “hidden” parameter workaround and you’re running:

  • 12.1.0.1 GIPSU1 or earlier
  • 11.2.0.3 BP21 or earlier
  • 11.2.0.2 (any)
  • 11.2.0.1 (any)

You should assume that all your incremental backups – both cumulative and differential – are invalid and unusable.

Unfortunately, there is no way to know whether your incremental backup has been affected until you use it for recovery and see an ORA-00600 [3020] error message.

Only incremental backups are possibly affected by this problem – you can still use archive logs to recover from a Level 0 backup and other recovery options are not impacted.

To squash this bug, you have three options:

  • Apply the workaround to disable cell optimized backups.
  • Apply the one-off bug fix for your version.
  • Upgrade to one of 12.1.0.1.2, 11.2.0.4 or 11.2.0.3.22.
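The post doesn’t name the hidden parameter; the MOS material around bug 16057129 reportedly points at _disable_cell_optimized_backups, but verify that against the note for your exact version before setting anything underscored:

```sql
-- VERIFY FIRST: parameter name taken from MOS discussion of bug
-- 16057129, not confirmed here. Underscore parameters should only
-- ever be set under Oracle Support's direction.
ALTER SYSTEM SET "_disable_cell_optimized_backups" = TRUE
  SCOPE = BOTH SID = '*';
```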

The Quarterly Full Stack Patch for January 2014 includes Bundle Patch 22 for 11.2.0.3, so you may want to review the QFSP January 2014 upgrade process. It looks like you will have to apply a one-off patch 17599908 on top of 11.2.0.3.22, so review it closely!

If you haven’t done so already, bookmark the MOS list of Exadata Critical Issues.

Mark
