Search This Blog

Friday, August 30, 2019

Oracle Cloud Backups - The easy way to mass delete files in a large OCI bucket

In some previous posts, I talked about the benefits of doing backups to Oracle's you run the numbers, it will be significantly (orders of magnitude) less expensive than backups on-prem or to other cloud vendors.  It's safer than keeping backups geographically close to you; they're very fast (depending on your WAN speed, the storage array's write speed will be the bottleneck for your restore); and they're encrypted, compressed and secured.  Once you have everything's "set it and forget it."  The uptime far exceeds typical on-site backup MML systems...which is important, because failing to do timely archivelog backups can translate into a database outage.

I've really only had one complaint until now.  Here's the full story...skip down to "The Solution" if you're impatient.

When I first helped a customer set up their backups to Oracle's Classic backup service (pre-OCI), there was a problem.  The design called for the database backups to go into a different bucket for each database.  When the non-prod databases were refreshed from production, they wanted to delete the bucket that held those backups.  No problem...there's a big red delete button on the bucket page, right?  Wrong!  You can't delete a bucket that has files in it, and a backup of even a medium-sized database can leave hundreds of thousands of files in the bucket.  The only way to delete files in the console is to select 100 at a time and click delete.  I had 10 databases/buckets, over a million files, and it took me about 5 seconds to delete 100 files.  If I did nothing else...we're talking ~14 hrs of work.  In reality it was going to take somebody days to delete the backups after every refresh.  Completely unacceptable....
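For the curious, the arithmetic behind that ~14-hour estimate:

```python
# Back-of-envelope math on console deletes, using the numbers above:
files = 1_000_000        # a bit over a million files across 10 buckets
per_click = 100          # console limit: 100 files per delete
seconds_per_click = 5    # observed time per delete click

hours = files / per_click * seconds_per_click / 3600
print(f"~{hours:.1f} hours of non-stop clicking")
```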

I opened an SR to ask for a better way.  They pointed me to Cloudberry Explorer (which is a pretty cool product).  CBE lets you pull up the files in Oracle's (or virtually any other) cloud in an interface similar to Windows File Explorer.  Great!  I set it up, selected all the files I wanted to delete and clicked the little delete button.  After the files were indexed, it kicked off multiple threads to get this done.  Three by three (about 3 sec per file), they started to delete.  I left my laptop to it over the weekend.  When I came back, it was still deleting them.

I updated the SR...they pointed me to using curl in batch to hit the API (Doc ID 2289491.1).  Cool!  I love took a while to set it up, but eventually I got it to work...and one by one (about 3 sec per file) the files started to delete.  Argh.

A colleague of mine wrote a similar procedure in Perl...but it used the same API and had the same results.

I updated the SR...they essentially said there's no way to do what I'm trying to do any faster.

I opened a new SR with an enhancement request to add a way to do mass's the response:

"The expectation to delete the bucket directly is like a Self Destructive program ,Many customer will accidentally delete the container and will come back to US for recovery which is not possible."
...and then they closed my enhancement request.

My first thought was to send an email to Linus Torvalds and let him know about the terrible mistake he made introducing rm to his OS.  (sarcasm)  My second thought was that the company charging for my TBs of storage usage isn't motivated to help me delete the storage.

The Solution:  Lifecycle the files out.
1. First enable Service Permissions (click here).
2. In OCI, go to Object Storage, click on the bucket you want to clean up and click on "Lifecycle Policy Rules."  Click on "Create Rule."
3. Put in anything for the name (it's going to be deleted in a minute, so this doesn't matter).  Select "Delete" as the lifecycle action and set the number of days to 1.  Leave the state at the default, Enabled, then click the Create button.  This will immediately start deleting all the files in the bucket that are over 1 day my case, that was all of them.

4. The Approximate Count for the bucket will update in a few minutes.  Soon (within a minute or so) it will say 0 objects.  Now when you click the red Delete button to remove the bucket, it will go away.  I was able to remove about 150 terabytes in a few seconds.
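To be clear about what the rule does, here's a toy Python model of a 1-day delete rule.  This is just an illustration of the rule's effect on a set of backup pieces (the object names are made up); the real evaluation happens inside OCI Object Storage, not in any code you write:

```python
from datetime import datetime, timedelta

def deleted_by_rule(objects, now, days=1):
    """Names of objects removed by a 'DELETE anything older than `days` days'
    lifecycle rule.  `objects` maps object name -> creation timestamp."""
    cutoff = now - timedelta(days=days)
    return sorted(name for name, created in objects.items() if created < cutoff)

now = datetime(2019, 8, 30, 12, 0)
backup_pieces = {
    "full_l0_piece1": now - timedelta(days=30),   # old full backup piece
    "arch_piece99":   now - timedelta(days=2),    # older archivelog piece
    "arch_piece100":  now - timedelta(hours=3),   # newer than 1 day: survives
}
print(deleted_by_rule(backup_pieces, now))
```

Everything older than the cutoff goes; anything newer survives until it ages past the rule's threshold.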

I have no idea why Oracle made this so difficult...their job is to provide a resource, not to protect us from making a mistake.  Anyway, I hope this helps some of you out.

Tuesday, August 13, 2019

How to monetize your blog with BAT

I think one thing is clear at this'll either get into blockchain technology, or you'll regret it when you get in later.  This is still the time of "Early Adopters"'re not too late.  How does this tie in to a mainly Oracle/database blog?  Recently I heard Oracle's Larry Ellison gave a speech to Wharton Business School graduates about blockchain, where he said this is the single most important new technology in decades.  A Bloomberg report stated Oracle is ready to announce its entry into blockchain, including a platform-as-a-service (PaaS) blockchain product...IBM and others have already done the same thing.  We've all seen blockchain enter the finance and IoT spaces, but the applications are's my other passion, besides databases.  You could say blockchain is a new database technology.  It's going to change the world.

Blockchain gave us Bitcoin and many other coins/tokens.  You've all heard of Bitcoin, but you may not have heard of Basic Attention Token (BAT).  BAT works with the Brave browser, which can generate BAT tokens as you use it.  It then allows you to tip websites automatically when you use them.  *More info here*.  To do this you just need to download and use the Brave browser.  Think "Chrome," but it's faster, more secure, with ads blocked, built by Brendan Eich, the guy who invented JavaScript and co-founded Mozilla.  The idea is that from a user's perspective, you can reward content creators (websites, Twitter/YouTube/blog channels, etc.) when they help you out (at really no cost to you)...and from the content provider's perspective, they can get free money.

I've used it for about a year, and it keeps getting better, but I wanted to monetize my blog with it.  To do this, you have to register your channel; to do that, you need to put a file with a key into a folder called ".well-known".  Blogger has no way to do this directly, but I found a way to get it to work.

1. Go to
2. Create an account and click on "Add Channel"
3. Click on Website
4. Add your website: ie:
You're then given two options to verify via DNS or by adding a file...choose to add the file.

That will give you the instructions to add a file called /.well-known/brave-rewards-verification.txt with metadata in it.

5. Now go into your blog's settings and go to "Pages" and click "New Page."
6. I named my page brave-rewards-verification.txt, although I'm not sure that was necessary, and then I pasted in the metadata I got from step 4.
7. Click save and publish...this gives you a page you can access like this:
8. Back to blogger settings->search preferences->Custom Redirects->Edit->New Redirect
9. Create a redirect that looks like this, with relative paths:
10. Now go back to and click on the verify button.  Within a few seconds it will say, "Channel Verified."
11. Now create an Uphold wallet (there's another link for that on the page), follow the instructions to connect it to your new BAT publishers account, and you're done.  Now, when people come to your page, they can tip your blog automatically, in the background.

Now, when you go into brave and click on brave rewards, you can check your website:

Thursday, August 8, 2019

Oracle's Backup Cloud Service (OCI) - 3 setup your backup

In this series of posts:

Oracle's Backup Cloud Service (OCI)-1 - The Oracle OCI Backup Cloud

Oracle's Backup Cloud Service (OCI) - 2 setup OCI and your server environment

Oracle's Backup Cloud Service (OCI) - 3 - You are here

At this point, you've read about why you want to use OCI and its benefits, and you've read how to set up your environment for it.  This post is about setting up your backups to use it.  The only negative I can report after using OPC/OCI for over a year is that they change the security certificates without warning.  If you install/set up the software per the docs and you have a lot of databases...this can easily occupy many hours that day, reinstalling the OCI software for every Oracle_Home in your infrastructure.  This is why, in my previous post, I recommended you use a shared mount.

You have already:
1. Installed OCI software on a shared mount
2. Setup a container in the cloud web page
3. Used the jar file to create the environment files you need
4. Enabled block change tracking in the database (for incrementals)
5. Enabled medium compression in rman (per the OCI backup docs, this is normally licensed as part of the Advanced Compression option, but it appears that if you're paying for OCI backups, it's included.  Talk to your Oracle sales rep.)

In the server you used in the last post, as the db software owner:
1. cd /nas/dba/backups/cfg
2. Run should see a file called opc[database name].ora
3. You can use this as a template for all your other databases...just copy the file to the new database name and update the container parameter in the file.

4. If you use stored scripts in an rman repository, you can add backup scripts similar to this one.  This is just the archivelog'll want incrementals and fulls too, obviously, but the key is the allocate channel commands, which are the same.  Add more channels to increase performance...until your network guys complain:

printing stored global script: tstoracle01_TSTDB1_arch
allocate channel ch1 DEVICE TYPE 'SBT_TAPE' CONNECT 'sys/[the_pw]@TSTDB1' PARMS  'SBT_LIBRARY=/nas/dba/backups/cfg/, SBT_PARMS=(OPC_PFILE=/nas/dba/backups/cfg/opcTSTDB_a.ora)';
allocate channel ch2 DEVICE TYPE 'SBT_TAPE' CONNECT 'sys/[the_pw]@TSTDB2' PARMS  'SBT_LIBRARY=/nas/dba/backups/cfg/, SBT_PARMS=(OPC_PFILE=/nas/dba/backups/cfg/opcTSTDB_a.ora)';
allocate channel ch3 DEVICE TYPE 'SBT_TAPE' CONNECT 'sys/[the_pw]@TSTDB3' PARMS  'SBT_LIBRARY=/nas/dba/backups/cfg/, SBT_PARMS=(OPC_PFILE=/nas/dba/backups/cfg/opcTSTDB_a.ora)';
crosscheck archivelog all;
delete expired archivelog all;
backup as compressed backupset archivelog filesperset 8 not backed up 1 times delete input;
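Since the only thing that changes per channel is the number, you can generate the allocate commands instead of hand-editing them.  A hedged Python sketch: the SBT library filename is truncated in the listing above, so it's a placeholder here, and like the script above it maps channel N to instance N (adjust for your actual node count):

```python
def allocate_channels(n, db, cfg="/nas/dba/backups/cfg",
                      lib="[sbt_library_file]"):
    """Emit n RMAN allocate-channel commands following the stored-script
    template above.  `lib` and the password are placeholders, not real names."""
    cmd = ("allocate channel ch{i} DEVICE TYPE 'SBT_TAPE' "
           "CONNECT 'sys/[the_pw]@{db}{i}' "
           "PARMS 'SBT_LIBRARY={cfg}/{lib}, "
           "SBT_PARMS=(OPC_PFILE={cfg}/opc{db}_a.ora)';")
    return [cmd.format(i=i, db=db, cfg=cfg, lib=lib) for i in range(1, n + 1)]

for c in allocate_channels(3, "TSTDB"):
    print(c)
```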

   5. Make sure your rman environment is set up correctly:

CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/nas/dba/backups/cfg/, SBT_PARMS=(OPC_PFILE=/nas/dba/backups/cfg/opcTSTDB_a.ora)';

    6. Assuming you can tnsping each node from node 1, and you kick off the script from node 1, the backup should run from all 3 nodes.  You'll also need a maint script to delete backups that are outside your retention period...otherwise your OCI storage will grow too large.  Here's an example of a maint script:
printing stored global script: global_maint_backup
crosscheck backupset device type sbt_tape;
crosscheck archivelog all;
delete noprompt expired archivelog all;
delete noprompt expired backup;
delete noprompt obsolete;

    7. One of the requirements for OCI is that the backups MUST be encrypted.  This is easy...before your backup script, once you're into rman, just add:
       set encryption on identified by '1VeryGoodPassword!' ONLY;
     ...and when you do the restore, you have to add a line to decrypt:
       set decryption identified by '1VeryGoodPassword!';

    8. When you do the restore, you'll probably want to use many more channels than you do when you're backing up...assuming it's an emergency and you need the restore done ASAP.  When you add lots of channels, depending on your network equipment, it may help you compete favorably for bandwidth with the cat videos being watched by all the marketing managers at your company.

Oracle's Backup Cloud Service (OCI) - 2 (setup OCI and your server environment)

In this series of posts:

Oracle's Backup Cloud Service (OCI)-1 - The Oracle OCI Backup Cloud

Oracle's Backup Cloud Service (OCI) - 2 - You are here

Oracle's Backup Cloud Service (OCI) - 3 setup your backup

In my previous post, I talked about OCI Backups and their advantages.  Here's how you do it:

Before you start, verify you have Java 1.7+ installed on your server (java -version) and download the MML software:

In a browser, go to:

1. Setup your tenant
2. Create your user

...those steps keep changing, so I'm not going to tell you how to do them...but if you go to that page, it should be pretty self-explanatory.  Ok, so at this point you're in.

3. On the top right, select the region that's geo-located closest to you.  Be aware of any in-country requirements for your database backups.

4. On the top left, click on the hamburger (the button with 3 lines), then click "Object Storage".  The Object Storage page will appear:

5. To make management easier, put each database’s backups into its own bucket.  If you don’t see the bucket for the database you’re working on, create one by clicking on the “Create Bucket” button.

6. Fill in the "Bucket Name" text box with the database you want to add and select "standard tier", then click the "Create Bucket" button.  The list price of standard tier is $0.025/GB, and archive is $0.0025/GB, but you're required to keep archive for 90 days and there's up to a 4-hour lag if you need to pull back archived unless it's a rare backup you're going to keep for an extended time, you want standard.
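To put those list prices in perspective, here's the monthly arithmetic for, say, 150 TB of backups (list prices as quoted above; check current pricing before you budget anything):

```python
# Rough monthly cost at the list prices quoted above (per GB per month).
STANDARD_PER_GB = 0.025
ARCHIVE_PER_GB = 0.0025

def monthly_cost_usd(tb, price_per_gb):
    """Approximate monthly cost for `tb` terabytes (binary: 1 TB = 1024 GB)."""
    return tb * 1024 * price_per_gb

print(f"150 TB standard: ${monthly_cost_usd(150, STANDARD_PER_GB):,.0f}/mo")
print(f"150 TB archive:  ${monthly_cost_usd(150, ARCHIVE_PER_GB):,.0f}/mo")
```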

7. For the install to work, we first need to get:

          a. Cloud Infrastructure API signing keys
          b. tenant OCID
          c. user OCID
          d. fingerprint of our RSA key.

      8. Click the profile button in the top right corner (it looks like a head in a circle) and select user details. Click "copy" and paste the USER OCID to the editor of your choice.  It should start out with: "ocid1.user.oc1.."

      9. Click again on the profile button in the top right corner (that looks like a head in a circle) and this time select "Tenancy: [whatever your tenancy is called]"

     10. Under the Tenancy information, click on "Copy" and copy the tenant OCID and paste it into the editor of your choice.

     11. Assuming you have an nfs mount that's shared on all your database servers, make a directory on that mount...  ie: mkdir /nas/dba/backups/cfg/opc_wallet

      12. In your database server, go to the directory you unzipped the software into and get a new RSA key pair:
         java -jar oci_install.jar -newRSAKeyPair -walletDir /nas/dba/backups/cfg/opc_wallet

      13. View the oci_pub file and copy its contents to your clipboard (starting at the first dash, stopping after the last dash).
      14. Once again, back at the Profile button in the top right corner, click on your user, scroll down to the bottom of the page where it says "API Keys," click "Add Public Key" and paste in your new key from the previous step...when complete, click ADD.

      15. A new fingerprint will be displayed.  Select that fingerprint and paste into the text editor of your choice.

      16. Using all the things you put in your editor of choice, create the install command.  Put yours all on one line...I broke it out to make it more readable:

  java -jar oci_install.jar -host
    -pvtKeyFile  /nas/dba/backups/cfg/opc_wallet/oci_pvt
   -pubFingerPrint [paste finger print here]
   -tOCID [paste tenancy OCID]
   -uOCID [paste user OCID]
   -libDir /nas/dba/backups/cfg
   -walletDir /nas/dba/backups/cfg/opc_wallet
   -bucket [the bucket you created in step 5]
   -cOCID [the OCID of the compartment you're using, if any]
   -configFile /nas/dba/backups/cfg
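If you're installing for many databases, you can assemble that one-line command programmatically.  An illustrative Python sketch: the host endpoint is a placeholder (the URL isn't shown above), the paths follow this post's layout, and the optional -cOCID is omitted:

```python
def build_install_cmd(fingerprint, tenant_ocid, user_ocid, bucket,
                      host="[your region's endpoint]",
                      cfg="/nas/dba/backups/cfg"):
    """Assemble the one-line oci_install.jar command from the values
    collected in the steps above."""
    args = [
        "java", "-jar", "oci_install.jar",
        "-host", host,
        "-pvtKeyFile", f"{cfg}/opc_wallet/oci_pvt",
        "-pubFingerPrint", fingerprint,
        "-tOCID", tenant_ocid,
        "-uOCID", user_ocid,
        "-libDir", cfg,
        "-walletDir", f"{cfg}/opc_wallet",
        "-bucket", bucket,
        "-configFile", cfg,
    ]
    return " ".join(args)

print(build_install_cmd("aa:bb:cc", "ocid1.tenancy.oc1..xyz",
                        "ocid1.user.oc1..abc", "TSTDB"))
```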


Oracle's Backup Cloud Service (OCI)-1

In this series of posts:
Oracle's Backup Cloud Service (OCI)-1 - You are here

Oracle's Backup Cloud Service (OCI) - 2 setup OCI and your server environment

Oracle's Backup Cloud Service (OCI) - 3 setup your backup

IMHO, one of the best use cases for the cloud is backups.  I honestly don't know why Oracle doesn't market this's been the best Oracle Cloud experience I've seen.  The bar is low there...but this service has been flawless.  The questions re: security have been MOSTLY answered (save a few for AWS at Capital One).  One of the trends I'm seeing is companies moving some of their database backups to the cloud.  If you can't handle downtime of more than a few minutes, this isn't for may need storage array snaps or a fast Data Guard failover...but what about your dev/test/qa database backups?  Putting at least some of your backups in the cloud might make a lot of financial sense.  In a recent analysis, a company found it was (all things considered) about 400% cheaper, i.e. roughly a quarter of the cost, to put their dev/test backups in Oracle's cloud.  Backing up to Oracle's new OCI cloud (as opposed to Oracle's Classic Cloud Backup service) was even cheaper than AWS Glacier for their use case.

What about restore performance?  Dev/Test might be more tolerant of an outage during a restore/recovery than prod, but how long are we talking here?  The answer: it really depends more on your infrastructure than it does on the OCI is FAST.  My test used an all-flash Dell/EMC VMAX, and my 3 RAC nodes were the only things on the new array.  I have no way to know how much spare network bandwidth there was, but I did this test at night using a 1Gb link.  The key is that using Oracle backups allows you to use medium compression.  So...whatever your bandwidth is...multiply that by your compression ratio, and that's the restore speed you can expect (unless you hit a different array performance, CPU, etc...).  I posted in the past about the performance benefits of using rman compression.  In a recent test, my waits were on writing to the storage array, not the network or OCI itself.

The Test:
Oracle RAC 12.2

Almost 14TB (13,940,708 MB)
Starting at 9:53
Finishing at 12:52, just under 3 hrs...~4.66 TB/hr.
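Checking that throughput math (sizes and times from the test above; decimal TB, as quoted):

```python
# Sanity-check the restore-throughput math from the test above.
mb_restored = 13_940_708                    # almost 14 TB
minutes = (12 * 60 + 52) - (9 * 60 + 53)    # 9:53 -> 12:52

tb = mb_restored / 1_000_000                # decimal TB
tb_per_hr = tb / (minutes / 60)
print(f"~{tb_per_hr:.2f} TB/hr")            # in the ballpark of the quoted figure
```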

So you's secured, it's much cheaper and it's as fast as it could possibly be?  You also get an automatic air-gapped, offsite backup.  What's the downside?

Once you understand it, it's easy to do and the up-time has been great too, but the instructions are almost impossible to locate online...and when you do find them, they're obsolete or talking about Classic, not OCI.  The only instructions I found that weren't wrong were in the README...but they weren't great, and they covered setting it up for a probably have more than one, right?  Several times a year, maybe without warning, Oracle will update their security certificates, which will force you to do a re-install...but if you plan your installation with this in mind, you can make that almost painless.  In my next post, I'll describe what I did.....

Wednesday, August 7, 2019

Data Cleanup (how to exorcise the ghost from the machine)

The scariest movie I ever saw was The tell the truth, I don't think I actually saw it...while I was watching it I had my eyes closed half the time, and I left early.  This is relevant to Oracle databases...stay with me here.  Database upgrades are a *HOT* topic right now...with the extinction of free support, everybody is moving to 18c or 19c.  I recently helped a customer with a problem that was preventing their database upgrades from being successful.  They were moving a legacy database into a multi-tenant pluggable database using datapump, and rows were being dropped.  We initially assumed it was a character set issue, but we were going from UTF8 to UTF8.  We eventually learned that this database had long ago migrated to UTF8 from a different character set, and the conversion was done incorrectly, so we were left with bad data in the legacy tables.

They said they've had issues with "ghosts" in the data...two columns that appeared to equal each other not comparing as equal in plsql, that kind of thing...but they always attributed it to bugs.  Now we knew the columns' data had hidden, invisible characters in it.  They didn't care about the hidden characters, but the rest of the data in the column needed to be saved.  They only needed the basic, common ascii character set, so I wrote this quick little stored procedure to dynamically take the data and scan it, replacing any unusual characters with a space.  After this, the import worked without an issue, and the database ghosts were exorcised (no young and old priest required).  There were many 100 mil+ row tables, but the number of rows with bad data was only in the I didn't do a forall update...if your data has a larger number of rows to change, you might want to modify this.

When it's complete, you can see if everything worked ok with this:
select /*+ PARALLEL 8 */ * from ascii_check where CPOS=-1 order by 1,2,3;

[insert usual disclaimer here] I used this on many very large tables and it appears to do what I intended (which was to use the character's ascii code range to decide if it was a character we wanted or not), but review it closely to make sure it does what you want it to do in your environment before you use it.  No warranties/'ll probably not work...yada yada yada....

First, create this table that will serve as a log to keep all the data that was changed, in case something unexpected happens or you need to revert back:

create table ascii_check (Table_Owner varchar2(30),Table_Name varchar2(255), Col_Name varchar2(255), old_rowid varchar2(255), cpos number, code number, old_value varchar2(255 char));

...once you feel confident it's doing what you can submit it as a job, and have many tables being scanned and corrected at the same time.

CREATE OR REPLACE procedure data_cleanup (
  Table_Owner_in in varchar2,
  Table_Name_in in varchar2,
  Column_Name_in in varchar2,
  Testing_in in varchar2) is
  v_Char number;
  v_String2 varchar2(32000);
  v_Flag boolean;
  v_Flag2 boolean := true;
  v_Counter number := 0;
  v_Errm varchar2(255);
  v_ErrTrack number := 0;
  v_Count number := 0;
  already_completed exception;
  pragma exception_init(already_completed,-20001);
  type rct_RowIDs is ref cursor;
  rc_RowIDs rct_RowIDs;
  v_Err_Rowid varchar2(18);
  type t_Row is record (
    Old_RowID varchar2(18),
    Old_Data varchar2(32000));
  type t_Rows is table of t_Row index by pls_integer;
  c_Rows t_Rows;
begin
  execute immediate 'ALTER SESSION SET ddl_lock_timeout=900';
  -- if this table/column was already cleaned successfully, don't redo it
  select /*+ PARALLEL 16 */ count(0) into v_Count from ascii_check where table_owner=Table_Owner_in and Table_Name=Table_Name_in and col_name=Column_Name_in and old_value='Completed Successfully.';
  if v_Count!=0 then
    raise already_completed;
  end if;
  delete /*+ PARALLEL 16 */ from ascii_check where table_owner=Table_Owner_in and Table_Name=Table_Name_in and col_name=Column_Name_in and cpos=-1;
  v_ErrTrack := 1;

  OPEN rc_RowIDs FOR 'select /*+ PARALLEL 16 */ rowid,'||Column_Name_in||' from '||Table_Owner_in||'.'||Table_Name_in||' order by rowid';
  loop
    fetch rc_RowIDs bulk collect into c_Rows limit 50000;
    v_ErrTrack := 2;
    -- for each row
    for i in 1..c_Rows.count loop
        v_ErrTrack := 3;
        v_String2 := '';
        v_Flag := false;
        -- for each character in each row
        for j in 1..nvl(length(c_Rows(i).Old_Data),0) loop
          v_ErrTrack := 4;
          v_Char := ascii(substr(c_Rows(i).Old_Data,j,1));
          v_ErrTrack := 5;
          if (v_Char<32 or v_Char>126) then
            -- outside the printable ascii range: replace with a space and log it
            v_String2 := v_String2||' ';
            v_Flag := true;
            v_ErrTrack := 6;
            insert into ascii_check values (Table_Owner_in,Table_Name_in,Column_Name_in,c_Rows(i).Old_RowID,j,v_Char,substr(c_Rows(i).Old_Data,1,240));
          elsif v_Char=39 then
            -- apostrophe: double it for the dynamic update, and log it
            v_String2 := v_String2||'''''';
            v_Flag := true;
            v_ErrTrack := 6;
            insert into ascii_check values (Table_Owner_in,Table_Name_in,Column_Name_in,c_Rows(i).Old_RowID,j,v_Char,substr(c_Rows(i).Old_Data,1,240));
          else
            v_ErrTrack := 7;
            v_String2 := v_String2||substr(c_Rows(i).Old_Data,j,1);
          end if;
        end loop;
        v_ErrTrack := 8;
        if v_Flag then
          v_Counter := v_Counter + 1;
          if Testing_in ='FALSE' then
            v_ErrTrack := 9;
            v_Err_Rowid := c_Rows(i).Old_RowID;
            begin
              execute immediate 'update '||Table_Owner_in||'.'||Table_Name_in||' set '||Column_Name_in||'='''||v_String2||''' where rowid='''||c_Rows(i).Old_RowID||'''';
            exception
              when DUP_VAL_ON_INDEX then
                execute immediate 'update '||Table_Owner_in||'.'||Table_Name_in||' set '||Column_Name_in||'=''EDITED-'||v_String2||''' where rowid='''||c_Rows(i).Old_RowID||'''';
                --insert into ascii_check values (Table_Owner_in,Table_Name_in, Column_Name_in, v_Err_Rowid, -1,-1,v_ErrTrack||'-'||v_String2);
                v_Flag2 := False;
            end;
          end if;
        end if;
        v_ErrTrack := 10;
    end loop;
    v_ErrTrack := 11;
    exit when c_Rows.count < 50000;
  end loop;
  close rc_RowIDs;
  dbms_output.put_line('There were '||v_Counter||' issues detected.');
  if v_Flag2 then
    insert into ascii_check values (Table_Owner_in,Table_Name_in,Column_Name_in,'',-1,-1,'Completed Successfully.');
  else
    insert into ascii_check values (Table_Owner_in,Table_Name_in,Column_Name_in,'',-1,-1,'Completed Unsuccessfully.');
  end if;
  commit;
exception
  when already_completed then
    dbms_output.put_line('Already completed.  Exiting...');
  when others then
    v_Errm := sqlerrm;
    insert into ascii_check values (Table_Owner_in,Table_Name_in, Column_Name_in, substr(v_Errm,1,240), -1,-1,v_ErrTrack||'-'||v_String2||'-'||v_Err_Rowid);
    commit;
    close rc_RowIDs;
end;
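If you want a quick sanity check of the character rules before running this against a 100 mil+ row table, here's the same logic restated as a standalone Python sketch.  This is illustrative only; the doubled apostrophe mirrors the escaping the procedure needs for its dynamic SQL update, so the value actually stored in the database ends up with a single quote:

```python
def clean_value(s):
    """Apply the same character rules as the data_cleanup procedure:

    - printable ASCII (codes 32-126), other than the apostrophe, passes through
    - the apostrophe (code 39) is doubled, as the dynamic SQL literal requires
    - everything else (control chars, extended/multibyte debris) becomes a space
    """
    out = []
    for ch in s:
        code = ord(ch)
        if code < 32 or code > 126:
            out.append(" ")
        elif code == 39:
            out.append("''")
        else:
            out.append(ch)
    return "".join(out)

# a non-breaking space and a bell character both become plain spaces
print(clean_value("O'Brien\u00a0\x07ok"))
```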

Tuesday, July 23, 2019

We're using up storage too fast! Where is all the growth coming from?

Technical challenges aside, it's difficult to navigate the politics of being an Oracle DBA.  In today's world, data warehouses/data lakes are expected to ingest massive amounts of data and still perform well.  Ok, no problem...this can be done...but when storage utilization spikes, the storage guys (who write the checks for the expensive storage arrays the data sits on) want to know if/why they need to buy another storage array.  DBAs are expected to do storage planning, but are rarely informed when the data owners increase utilization until after the fact.

This question comes up pretty often...why is the database growing so fast?  Coming in to a new environment I'm not familiar with...I have a bunch of scripts I usually run to get a profile of the databases.  Usually a non-technical manager wants to see a summary, so if this data can be presented graphically as part of a presentation, communication flows better, and it looks more professional.

All the storage utilization data is already being kept in the history's just a matter of pulling it out.  This is a dynamic query that creates a query...the results of that can be put into Excel and graphed.  Once the results are graphed, things pretty clearly jump out at you.  So...

1. Run this query (if you aren't using 8k block size, change 8192 below to w/e you're using):

select stmnt from (
select 1 o1, 0 o2, 'with stacked as (
select snap_id, tablespace_id, trunc(to_date(rtime,''MM/DD/YYYY HH24:MI:SS'')) rtime, (tablespace_size*8192)/1024/1024/1024 ts_size
from dba_hist_tbspc_space_usage tdu
where snap_id in (select max(snap_id) from dba_hist_tbspc_space_usage tdu2
                    where trunc(to_date(tdu.rtime,''MM/DD/YYYY HH24:MI:SS''))=trunc(to_date(tdu2.rtime,''MM/DD/YYYY HH24:MI:SS''))))
select distinct rtime,'  stmnt from dual
union all
select 2, TS#,'(select ts_size from stacked s2 where s2.tablespace_id='||TS#||' and s1.rtime=s2.rtime and rownum=1) as "'||NAME||'"'||decode((tsm-ts#),0,null,',') from v$tablespace ts1 , (select max(TS#) tsm from v$tablespace) ts2
union all
select 3,0,'from stacked s1
 where (select ts_size from stacked s2 where s2.tablespace_id=0 and s1.rtime=s2.rtime) is not null;' from dual
 ) order by o1,o2;

2. Take the output of that query, and run it.  It'll create a very wide data set with daily storage size data for each tablespace.

3. Cut/Paste and put that data into Excel.

4. Select the data columns in Excel and at the top, select insert, then on the "Charts" group, select your favorite chart type..."2-D, stacked area" works well for this data, IMHO.

You should now see something like this (GB on the Y axis): it's clear where the acceleration in growth came from.  The relatively new, orange "troublemaker_data" tablespace had a ~10TB growth spurt until May 1st, where it looks like it leveled off.  This empowers you to go to the data owners and ask them if this will continue, and you can tell the storage guys there was growth, but it settled down and they don't need to panic and buy more storage.
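If you'd rather massage the output outside Excel, the pivot that the generated SQL performs is easy to restate.  A small illustrative Python sketch (the dates, tablespace names and sizes here are made up):

```python
from collections import defaultdict

def pivot_growth(rows):
    """Pivot (day, tablespace, gb) snapshot rows into {day: {tablespace: gb}},
    i.e. one row per day with one column per tablespace, like the generated SQL."""
    out = defaultdict(dict)
    for day, ts, gb in rows:
        out[day][ts] = gb
    return dict(out)

snaps = [
    ("2019-04-01", "SYSTEM", 2), ("2019-04-01", "TROUBLEMAKER_DATA", 100),
    ("2019-05-01", "SYSTEM", 2), ("2019-05-01", "TROUBLEMAKER_DATA", 10240),
]
for day, sizes in sorted(pivot_growth(snaps).items()):
    print(day, sizes)
```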

I hope this helps you with your storage planning!

Monday, July 22, 2019

Watch out for the latest 12.1 PSU/proactive patches!

We all have horror stories about "That one time, when we restored and recovered the database, and the CIO was calling our managers every few minutes, reminding them the company is at a stand-still...this is costing us $XXX,XXX per minute!!!!!"  Now imagine, a few hours into your restore, having to say, "We hit a bug.  We have to open an SR, get a patch and start over."  Discovering you can't successfully restore/recover when the heat is on, due to a bug, would make a great horror movie...nothing is more scary.  Luckily, I hit this bug during a restore I had some time to deal with it...but I thought I should let everybody know about it.

I know a lot of people are still running  The Jan and Apr 2019 PSUs (the two latest PSUs) introduce a bug (25107334) that causes you to be unable to open the database.  I hit this 3 hours into a restore:

ORA-65108: invalid use of a cursor belonging to another container

The workaround is to manually put the $SEED database into read-only mode.  Simple enough...but if you're restoring/recovering a database, that's not an option.  There's now a patch (25107334) that solves the issue.  While waiting for MOS to come up with a solution, I used a 19.3 home to restore the database, and it worked that's an optional workaround, too.

As people migrate to 19.3, there are earlier bugs that prevent 12.1 from working with a 19.3 GI...those bugs are fixed in these PSU' patching needs to be part of the 19.3 upgrade plan, and this patch needs to be part of that patching.

Hopefully this will be part of future PSU/proactive patch bundles...and we can keep the nightmares in the movies! 

/*+ UPDATE */
The July PSU is out, and per 1924126.1, 25107334 is fixed in that PSU.

Upgrade to Oracle 19.3!

Oracle has postponed End of Support for many, many times, but no mas!  In fact, if you're running EBS, is STILL supported with extended support fees waived, almost 11 years after its initial release.  I know a couple of large EBS shops that are still running on  To upgrade the database for EBS is a huge it's understandable why they'd put this off as long as possible...but time is running out.  See 742060.1 for details.

Also, 12.1 premier support ran out almost a year ago, but I know there are a lot of people still running critical databases on that version.

At this point, everybody is aware that Oracle changed their naming you can think of 19.3 as really being 12.2.X.  There are *many* great new features in's the terminal release of Oracle 12.2.  Oracle recommends you not upgrade at this point to anything other than 19c.  I've heard rumors that the next db version that'll be supported for EBS is 19.3 (currently the max is 12.1...they're skipping 12.2 and 18.3.)

So it's're going to go to 19c.  Here are some things to keep in mind:

1. In order to upgrade your database to 19.3, you...of course...need to upgrade your ASM's Grid Infrastructure to 19.3

2. In order to upgrade your Grid Infrastructure to 19.3, (assuming you're on RHEL) you MUST be on RHEL 7.  18.X was fine with RHEL 6, but 19.3 requires RHEL 7.

3. There are a lot of changes in the OS requirements.  Make sure you review/verify the install documentation before you install it.

I'll post more about the changes in the 19.3 upgrade process later.

Friday, April 26, 2019

19c Linux x86-64  for on-prem/non-Exadata is available for download!

Some interesting things to know...

1. It's available as a traditional download or as an RPM.
2. Just like the Exadata version that requires OEL 7, the on-prem version requires:
      OEL 7.4, UEK 4 (4.1.12-112.16.7+)
      OEL 7.4, UEK 5 (4.14.35-1818.1.6+)
      OEL 7.4, RH Compat (3.10.0-693.
      RHEL 7.4+ (3.10.0-693.
      SUSE ES 12 SP3 (4.4.103-92.56+)

Have fun!

Friday, April 12, 2019

Yet one more End Of Support date push for

If you haven't heard, on 1/26 Oracle updated their release/end of life document (742060.1) to say Extended Support fees have been waived through 12/31/2018.  I understand Oracle trying to work with their customers, but a lot of people took that to mean, "Great, I can postpone working on this upgrade." has been around since 7/22/2014, so we're coming up to 3 years to plan and execute an upgrade.  11.2 has been around since 9/1/2010...the terminal patchset isn't as old, but 7 years for a database is an eternity.

/* UPDATE */

As of 4/6/2019, if you're on Oracle E-Business Suite, database 11.2 and 12.1 extended support fee waiver has been extended again through Dec 2020...Wow!  That's an extension of almost 2 years! (2522948.1)  That means 11.2 will have had a 10+year supported run!

This is understandable, since I've heard Oracle is skipping 12.2 database support, and 18.3 isn't the "long-term" version (so 12.2 and 18.X aren't going to be supported by EBS).  This means the database version you would need to upgrade to for EBS is 19.2.  Unfortunately, 19.2 isn't available yet for non-Exadata/ODA, so there's nothing to upgrade to for the majority of EBS customers.  They couldn't end support on 12.1 at the end of June and release 19.2 support for EBS at the same time.  It typically takes months to upgrade an entire EBS was inevitable.

Still, I hope you don't read this and think it grants you more time to procrastinate.  There are *a lot* of great features in 19.2!  Don't let getting kicked off the old database version be your primary motivation!