Oracle 19c is now available for Oracle Cloud and Exadata. It’s not yet available for “on-premises”, but it will no doubt be available shortly. Well, hopefully.
19c is the terminal release of the 12.2 family, which means that prior to Oracle’s versioning changes, it would have been referred to as 12.2.0.3. As of the time of writing, it will be supported for the next four years under Premier Support and a further three years under Extended Support.
This is probably going to be the version that those who have been holding off on upgrading to 12c will move to, as 18c is still quite buggy (especially in versions earlier than 18.1.5).
Bear in mind that there is no support for 19c on Linux 6. The minimum versions of the O/S and kernels are as follows:
It’s been 17 years since I worked with an Oracle database running on SuSE, but there must be some hiding out there somewhere!
One thing to bear in mind, though, is that the upgrade to 19c on Exadata can be quite painful. If you moved to 18c and found it to be nice and (relatively) easy, don’t be surprised if 19c is trickier.
As we try and contain our excitement, here are some of the features I think are going to be the most useful.
A new video on how to make the most out of Collaborate 2019 is up on my YouTube channel.
Along with some handy tips, I also explain how to score a free Oracle certification exam AND how to get Oracle’s new Autonomous DBA certification!
It’s been a while (!) but I’ve made a resolution to do a better job in keeping my blog updated, no matter how busy I might be.
I will be attending Collaborate IOUG 2019 in San Antonio this year. I plan on putting together a collection of hints and tips to get the most out of the conference soon. There are some really cool benefits that you might not be aware of.
Blink and you might have missed it, but the Exadata X6 was officially announced today.
As has become the norm, Oracle have doubled down on the specs compared to the X5:
With the X6-2 machine, you still have Infiniband running at 40Gb/sec, but the compute nodes and the storage servers now have the following:
X6-2 Compute Node
High Capacity Storage Server
Extreme Flash Storage Server
What does all of that give you when it comes down to it?
Well, remember that the eighth-rack is the same as a quarter-rack, but you have access to half the cores and half the storage across the board (you still have two compute nodes and three storage servers):
High Capacity Eighth-Rack
Extreme Flash Eighth-Rack
The minimum licensing requirement is 16 cores for the eighth-rack and 28 cores for the quarter-rack.
I’m sure you can read through the sales stuff yourself, but aside from the UUUUGE increase in hardware, two new features of the X6 really pop out for me.
Exadata now has the ability to preserve storage indexes through a storage cell reboot. Anyone who has had to support an older Exadata machine will remember what a big deal that used to be: the wait for the storage indexes to be rebuilt could take hours, and it often required some major understanding on the part of the user population and management to get through the first day or so after some maintenance.
Probably the biggest thing is that Oracle have introduced high availability quorum disks for the quarter-rack and eighth-rack machines. I blogged about this before as I thought it had the potential to be a real “gotcha” if you were expecting to run high redundancy diskgroups on anything less than a half-rack.
Now, a copy of the quorum disk is stored locally on each database node, allowing you to lose a storage cell and still be able to maintain your high redundancy.
This is a particularly useful development when you remember that Oracle have doubled the size of the high-capacity disks from 4TB to 8TB. Why? Well, because re-balancing a bunch of 8TB disks is going to take longer than re-balancing the same number of 4TB disks.
I’ll be going to Collaborate IOUG 2016 next week and I’m looking forward to hearing more about the new kit there.
It’s been a very busy summer for yours truly and the rest of the database world. Some interesting nouvelles (I thought so, at least) in case you missed them:
GET RID OF ORACLE!
The UK Government has ordered its agencies to “get rid of Oracle”. While Oracle have been shooting themselves in the foot spectacularly of late with their bedside manner, I have personal experience of the last time that the UKG wanted to replace them.
It didn’t go well. At all.
Nor did it go cheaply, which is all anyone really cares about, of course.
Despite the horror stories in the media about how difficult it is to deal with Oracle’s support, sales and auditing teams, it’s still the best database out there by a country mile.
Ask … Someone Other Than Tom
It’s no longer possible to Ask Tom. Mr. Kyte has decided to take a very well-deserved sabbatical and has handed over Ask Tom duties to … someone who isn’t called Tom.
What a crazy world we live in.
I’m not going to lie – I definitely was dazzled by his stardom on more than one occasion. At a NEOUG conference in Cleveland, I managed to get him to sign a copy of his book and one of the most memorable moments of my DBA career was when I asked him a “great question” on a webinar many years ago. It helped me win an important argument at work, so I will always be thankful to him for that 🙂
Don’t Believe The Hype
Talking of which, not even Gartner thinks Big Data is worthy of the hype now. Instead of moving to their “Slope of Enlightenment” or “Plateau of Productivity”, it fell off their Hype Cycle for Emerging Technologies completely.
NoSQL = NoPASSWORDS = NoDATA?
Maybe part of the reason is that the industry has realized that a lot of NoSQL databases just plain suck. Don’t forget that the “big” Hadoop story of 2015 is “SQL-on-Hadoop”, which would have been QUITE the juxtaposition eighteen months ago.
Still think NoSQL databases will replace relational databases? Then read this beauty and try and say that “relational databases are outdated” with a straight face.
While they found over a PETABYTE of unsecured data without too much trouble, probably the most noteworthy finding is that they found 347 different MongoDB databases called “DELETED_BECAUSE_YOU_DIDNT_PASSWORD_PROTECT_YOUR_MONGODB”.
You know how people complain that Oracle licensing can be very complicated?
Well, Oracle 12.1.0.2 Standard Edition 2 has been released after being announced earlier in the summer. Great, but what about Standard Edition and Standard Edition One?
A bit confused? I know I was.
Basically, SE and SEOne (SE1?) are available options if you’re running a 12.1.0.1 database. However, if you like living your life in the fast lane (as well as making use of some really cool new features) and you’re running 12.1.0.2, both SE and SE1 editions are replaced by SE2.
The licensing restrictions are as follows. The bold emphasis is mine:
“Oracle Database Standard Edition 2 may only be licensed on servers that have a maximum capacity of 2 sockets. When used with Oracle Real Application Clusters, Oracle Database Standard Edition 2 may only be licensed on a maximum of 2 one-socket servers. In addition, notwithstanding any provision in Your Oracle license agreement to the contrary, each Oracle Database Standard Edition 2 database may use a maximum of 16 CPU threads at any time. When used with Oracle Real Application Clusters, each Oracle Database Standard Edition 2 database may use a maximum of 8 CPU threads per instance at any time. The minimums when licensing by Named User Plus (NUP) metric are 10 NUP licenses per server.”
By the way, SE2 does not support multi-tenant. Don’t forget, though, that Oracle have deprecated the non-“CDB / PDB” architecture from 12.1 onwards, so you should install SE2 as a single-tenant pluggable database with a container database to follow Oracle’s recommended path.
One wonders whether the “SE2” nomenclature will persist. Will Oracle only offer “Standard Edition 2” and “Enterprise Edition” for Database 13.1?
“What happened to Standard Edition 1?
Why don’t they just call it ‘Standard Edition’?”
I do not know, dear reader. I do not know.
This week, a client encountered a particularly nasty bug – 10194190 – which caused their ASM instance processes to “spin” and throw a bunch of errors, essentially leading to an instance crash on both of their RAC nodes.
In the ASM instance:
ORA-00490: PSP process terminated with error
PMON (ospid: 12345): terminating the instance due to error 490
In the database instance, we either saw an ORA-00240 or ORA-03113 error and an instance crash.
ORA-00240: control file enqueue held for more than 120 seconds
ORA-15064: communication failure with ASM instance
ORA-03113: end-of-file on communication channel
What triggered this was that the uptime of each node was greater than 248 days. On Solaris SPARC systems, there is a bug in the compiler which can cause either a database or an ASM instance crash if the server has been up for more than 248 days.
There are bug fixes available for versions 10.2.0.4 through 11.2.0.4, but there is no fix for any 12c database at the moment.
Larry’s Secret Number?
“248 days”, you ask? Curious, n’est-ce pas? Indeed, especially when you learn that there are another couple of Oracle bugs out there which really seem to fixate on that number.
In a previous life, I encountered bug 4612267 while running the 10.2.0.1 Client for an EDW batch server.
The client loops indefinitely on the times() function. In addition to sqlplus, the netca and dbca tools have also been reported to hang. This may happen with a new installation of Instant Client 10.2.0.1.0 or Oracle 10.2.0.1.0 on a UNIX platform, or it can simply occur after some period of time with no other changes. It is a known, unpublished bug:
Bug 4612267 OCI CLIENT IS IN AN INFINITE LOOP WHEN MACHINE UPTIME HITS 248 DAYS
This machine was critical to the enterprise and had to be rock solid, so once we got things stable, the project team were loath to mess with it, even though it was running 10.2.0.1.
“Not messing with it” also involved maintaining server uptime, because we had a bunch of NFS mounts attached to it, which seemed to somehow cause the server to take a very long time to reboot.
Unfortunately, the platform was a bit too “solid” and we hit bug 4612267 during a particularly busy afternoon of batch processing. As with such things, it took a while to find the culprit, but it ended up being new sqlplus sessions started after 1pm that day on the EDW batch server. The server had been up 231 days.
We looked and we could see the sessions had started, but they had got no CPU time whatsoever. In typical IT fashion, we collected as much diagnostic info as we could and bounced the server. Naturally, the problem went away.
After much head-scratching and seeing the same issue once or twice more whenever the server uptime ranged between 60 and 248 days, we realized we were hitting the bug. As we were using a client install, we couldn’t upgrade to 10.2.0.2 and applying the patch failed for some obscure reason, despite assistance from Oracle Support. We only “fixed” the bug when we upgraded the client to 11g, which was deemed to be too much “messing about” until after our busiest period of the year was over.
Our workaround was to schedule a server reboot, which was a pain, though it was better than the alternative.
I always did wonder why someone would code something which would “spin” if the server uptime was between 60 and 248 days. What does that matter to SQL*Plus? Besides, once you went beyond 248 days of uptime, you were in the clear. This is why it took three years before it was noticed.
I wonder what significance 248 days has with Oracle? Maybe it’s a code which is somehow used to obtain privileged access somewhere in Redwood City? 🙂
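For what it’s worth, here’s a more prosaic guess of mine (not anything Oracle has confirmed): 248 days is almost exactly how long it takes a signed 32-bit counter of 10ms clock ticks – the kind of value returned by times() on a classic 100Hz UNIX kernel – to overflow. You can sanity-check the arithmetic in SQL:

```sql
-- 2^31 ticks, at 100 ticks per second, converted to days:
SELECT POWER(2, 31) / 100 / 86400 AS days_to_overflow FROM dual;
-- roughly 248.55 days
```

That would go some way towards explaining why seemingly unrelated Oracle components all fixate on the same magic number.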
As I’m still trying to get my feet under the table at my new gig, here’s another nugget in place of a real blog entry. While troubleshooting for a client, I noted that there were three types of materialized view. I knew this before, sort of, but this troubleshooting exercise was a great aide-mémoire.
The three types of materialized views:
The read-only MV omits the FOR UPDATE clause in its DDL during creation and does not permit DML.
The updateable MV includes the FOR UPDATE clause in its DDL and is included in a materialized view group. This allows changes made to the MV to be pushed back to the ‘master’ during a refresh.
The writeable MV includes the FOR UPDATE clause in its DDL, but does not belong to a materialized view group. All changes made to the MV are lost during a refresh.
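As a quick illustration, here’s what the three flavours look like in DDL. This is a minimal sketch, assuming a hypothetical emp table on a remote master reachable via a database link called master_link:

```sql
-- Read-only MV: no FOR UPDATE clause; local DML raises ORA-01732
CREATE MATERIALIZED VIEW emp_ro
  REFRESH FAST
  AS SELECT * FROM emp@master_link;

-- Updateable MV: FOR UPDATE clause, and the MV is then added to a
-- materialized view group (via the DBMS_REPCAT packages), so local
-- changes are pushed back to the master during a refresh
CREATE MATERIALIZED VIEW emp_upd
  REFRESH FAST
  FOR UPDATE
  AS SELECT * FROM emp@master_link;

-- Writeable MV: FOR UPDATE clause but no materialized view group;
-- local DML is permitted, but it is discarded at the next refresh
CREATE MATERIALIZED VIEW emp_wr
  REFRESH FAST
  FOR UPDATE
  AS SELECT * FROM emp@master_link;
```

(Note that REFRESH FAST assumes a materialized view log exists on the master table; otherwise you’d fall back to REFRESH COMPLETE.)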