
Thursday, July 23, 2020

The complete post-12c online procedure to move a file from file system to ASM (ORA-01110,ORA-01157,ORA-01187,ORA-12850,RMAN-03009)

In the COVID-19 world, most of us would rather spend time learning about the Seattle Kraken, Max Ehrich, Roy Jones Jr. and the Washington Football Team.  They'll probably eventually do what Prince did and just call themselves "Symbol."  Roy Jones and Max Ehrich will both get knocked out, and the Blues will have one more team to beat before they get their next cup.  If you're reading this...you probably also care about Oracle databases...so let me tell you about a recent experience I had.

A client I work with had an issue the other day (actually 2 issues) in a container database.  This database has very strict uptime SLAs...so downtime for a bounce of an instance isn't an option.  It turns out the online process to move a datafile from the file system to ASM in 12c+ is really a 3-step process...until now I'd always thought it was a single "move datafile" command.

1. They hit a bug similar to the issue discussed in MOS note 2558640.1: while doing a select from cdb_data_files (see the example query after this list), they got "ORA-12850: Could not allocate slaves on all specified instances: 3 needed, 0 allocated".  

2. This happened in a script that's used to add datafiles as needed.  Unfortunately, because of that error the script created a datafile named with some of the error text instead of "+DATA"...so the file landed in the dbs directory on node 1 instead of in ASM.  This is RAC, so the datafile could only be accessed from the first node.  RAC requires shared storage.
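For reference, the kind of query that tripped the bug looks something like this (just a sketch for illustration...the columns and predicates here aren't from the client's actual script):

select con_id, tablespace_name, file_name, round(bytes/1024/1024) mb
from   cdb_data_files
order  by con_id, tablespace_name;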


First order of business: get it working again.  This is 18.7, so it was easy to move the file into ASM without downtime.  The filename was ridiculous, full of special characters, and the file was on the file system on node 1, so from node 1 I ran:


alter database move datafile 1654 to '+DATA';


...and that worked; now the datafile is in ASM's shared storage.  I kicked off a backup (which failed):



                RMAN-03009: failure of backup command on ch4 channel at 07/23/2020 11:23:24 
                ORA-01157: cannot identify/lock data file 1654 - see DBWR trace file
                ORA-01110: data file 1654: '+DATA/DB/DATAFILE/sysaux.22113.1246514439'

On node 1, this file was 100% working in ASM.  On the other nodes, it was unavailable, throwing ORA-01157s.  In gv$datafile I could see it was online and available on all nodes.  This made no sense to me.  I even double-checked by going into asmcmd and doing an "ls" from one of the nodes that was failing...yep...it was there.  It was acting like it was offline, but reporting that it was online.

I googled/checked Metalink and didn't really find a good hit.  Finally I thought...there's no harm in onlining a file that says it's online...so I ran:

ALTER DATABASE DATAFILE 1654 ONLINE;

The error changed from:
ORA-01157: cannot identify/lock data file 1654 
to:
ORA-01187: cannot read from file 1654 because it failed verification tests

It was still working on node 1, but getting ORA-01187 on the other nodes.  This time I found a solution:

ALTER SYSTEM CHECK DATAFILES;

...which I did on the nodes that were having issues.  After that, everything began to work.

With the solution and the original error, I went back on MOS and this time found the ultimate answer (Doc ID 2499264.1), which states:

ORA-01157 error is expected to occur when running a query on dba_datafiles on a particular instance (of RAC database) for a file moved (just after creation) from Filesystem to ASM, without instance wide identification.
 
This problem was previously addressed in Bug 27268242 which was also closed as "Not a Bug" because this is an expected behavior.

(I think they meant cluster or database-wide identification.) 
<VENT> Come on, Oracle!  You shouldn't expect to get an error when running a correct query on a valid datafile.  This just causes more trouble and more steps; after a datafile move, there's no reason to leave the file in a partially working state.  If you're in RAC and somebody moves a file to ASM...let the other nodes know!  There's no conceivable situation where somebody would want to move a file in RAC and have it only available to a single instance...make the alter database move datafile command check datafiles for us when the file is coming from the file system!
</VENT>

Until Larry starts listening to me...this means we have multiple steps.

That note also included a resolution:
alter system check datafiles;
alter system flush shared_pool; 

...so at this point, the datafile is working correctly on all nodes.

This means we have 3 statements to run when we move a file from the file system to ASM:
alter database move datafile 1654 to '+DATA';
alter system check datafiles;
alter system flush shared_pool;
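To convince yourself that every instance can really read the file after all 3 steps, a quick sanity check like this (just a sketch...1654 is the file number from above) should come back with a null ERROR for every INST_ID:

select inst_id, file#, status, error, name
from   gv$datafile_header
where  file# = 1654
order  by inst_id;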

Now that it's working, I can re-address problem #1 above...the root cause.

Note 2558640.1 discusses a similar problem when OEM queries the datafile views.  My issue isn't in OEM, but in every other way this matches up.  The workaround:


alter system set "_px_cdb_view_enabled"=false comment='Work-around for internal view error and OEM-see bug 29595328';

This prevents the issue from happening again, but it also slows down the query hitting cdb_data_files (and others like it) from 45 seconds (there are a lot of datafiles) to 90 seconds.  I suppose that's better than getting the error...I opened an SR to ask that this be fixed in future releases.
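If you want to double-check that the hidden parameter actually took effect, the usual x$ trick works (just a sketch...this needs to be run as SYS):

select a.ksppinm parameter, b.ksppstvl current_value
from   x$ksppi a, x$ksppcv b
where  a.indx = b.indx
and    a.ksppinm = '_px_cdb_view_enabled';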

I love the new feature in 12c+ to do online datafile moves...but it would be great if the process were documented for moving a datafile off the file system...all I've seen in the docs is the basic move datafile command, not all 3 steps.

With the system working and in a state where this won't happen again...I can now return to C19's bad hair and toilet paper and coin shortages.

Wednesday, May 6, 2020

RHEL7 Control Groups on Oracle Database Servers


After some Linux patching on some database servers, I was surprised to see an entry in an alert log I hadn’t seen before…complete with a WARNING message.  After digging around I found it was due to Linux control groups.  To manage servers running multiple workloads, RHEL7’s systemd puts processes into control groups (slices) by default, and the features/abilities have grown with each dot release.  On the affected Oracle nodes in RHEL7 we see entries like this in the alert log at startup:

************************ Large Pages Information *******************
Parameter use_large_pages = ONLY
Per process system memlock (soft) limit = UNLIMITED

Large page usage restricted to processor group "user.slice"

Total Shared Global Region in Large Pages = 60 GB (100%)

WARNING:
  The parameter _linux_prepage_large_pages is explicitly disabled.
  Oracle strongly recommends setting the _linux_prepage_large_pages
  parameter since the instance  is running in a Processor Group. If there is
  insufficient large page memory, instance may encounter SIGBUS error
  and may terminate abnormally.

Large Pages used by this instance: 30721 (60 GB)
Large Pages unused in Processor Group user.slice = 1722 (3444 MB)
Large Pages configured in Processor Group user.slice = 120000 (234 GB)
Large Page size = 2048 KB

You may also see:
Large page usage restricted to processor group "system.slice/oracle-ohasd.service"

…this concerned me because if our large page usage is restricted and we can’t do a prepage, eventually the db_cache would warm up and try to allocate memory beyond the limitation, crashing the instance.  Also, “WARNING” on startup is something to take a closer look at.  That undocumented parameter was most definitely NOT explicitly disabled.  This message appears in every database that’s being controlled by processor groups, so it appears prepage of large pages is disabled automatically.  There’s a note (2414778.1) that says this message can be ignored and that it’s removed in 12.2+.

So…what amount of memory is our group being limited to by default?
>systemctl show user.slice|grep Mem

MemoryCurrent=18446744073709551615
MemoryAccounting=no
MemoryLimit=18446744073709551615

…which works out to 2^64-1 bytes, or about 17,179,869,184 GB (roughly 17 exabytes), which is probably a sufficient amount of RAM (per node) for any database out there…so we’re basically being limited to slightly less than infinity…nothing to worry about.

Are control groups useful on a server running multiple database instances?  To set up CPU limits per instance, you could use this or instance caging…I would prefer to set the cpu_count parameter and use instance caging.  Memory limits?  Memory utilization for the SGA is already set via sga_target/sga_max_size…so I don’t think there’s a play for it here either.  Maybe the accounting features could be useful…if you wanted to see how much CPU one instance on a multi-instance db server was using historically?  There are AWR tables that track that…but getting the information from cgroups might be easier….

One potentially useful way to use control groups on a database server with multiple instances is to throttle IO bandwidth, to prevent one instance from using up all the IO and affecting the performance of a different instance (assuming each instance is on its own ASM disks or LUNs).  There’s a note about how to get that to work on Metalink: 2495924.1.  This has been a useful feature in Exadata for years with IORM…now you can have similar functionality without Exadata.
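The MOS note has the supported steps, but just to show the idea, systemd on RHEL7 exposes the cgroup blkio throttles as unit properties (a rough sketch only…the slice name, device and 50M limit below are made up for illustration):

# cap read and write bandwidth for everything under a hypothetical oracle-db2.slice
systemctl set-property oracle-db2.slice BlockIOAccounting=yes
systemctl set-property oracle-db2.slice BlockIOReadBandwidth="/dev/sdb 50M"
systemctl set-property oracle-db2.slice BlockIOWriteBandwidth="/dev/sdb 50M"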

One other interesting feature of control groups is the ability to look at the resources each group is using in real time (if you have accounting on) with the systemd-cgtop command.  If you don’t have accounting on, you’ll see something like this, which is a summary of process counts, CPU, memory and IO for each group:






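If you want the per-group CPU/memory/IO numbers instead of just task counts, you can turn accounting on for the slice you care about and watch it live…a quick sketch (user.slice is where my instances landed; yours may differ):

# enable resource accounting for the slice the databases run under
systemctl set-property user.slice CPUAccounting=yes MemoryAccounting=yes BlockIOAccounting=yes
# then watch usage per control group in real time
systemd-cgtop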
It took some time to find the details about what the new alert log entries were telling me with the new warning and processor group information.  Hopefully this information will save you some time. 

Friday, August 30, 2019

Oracle Cloud Backups - The easy way to mass delete files in a large OCI bucket

In some previous posts, I talked about the benefits of doing backups to Oracle's cloud...if you run the numbers, it will be significantly (magnitudes) less expensive than backups on prem or with other cloud vendors.  It's safer than keeping backups geographically close to you, they're very fast (depending on your WAN speed, the storage array's write speed will be the bottleneck for your restore), and they're encrypted, compressed and secured.  Once you have everything scripted...it's "set it and forget it."  The uptime far exceeds typical on-site backup MML systems...which is important, because failing to do timely archivelog backups can translate into a database outage.

I've really only had 1 complaint until now.  Here's the full story...skip down to "The Solution" if you're impatient.

When I first helped a customer set up their backups to Oracle's Classic backup service (pre-OCI), there was a problem.  The design called for the database backups to go into a different bucket for each database.  When the non-prod databases were refreshed from production, they wanted to delete the bucket that held those backups.  No problem...there's a big red delete button on the bucket page, right?  Wrong!  You can't delete a bucket that has files in it.  When you back up even a medium-sized database, you can have hundreds of thousands of files in the bucket.  The only way to delete files in the console is to select 100 at a time and click delete.  I had 10 databases/buckets and over a million files, and it took about 5 seconds to delete 100 files.  If I did nothing else...we're talking ~14 hrs of clicking.  In reality it was going to take somebody days to delete the backups after every refresh.  Completely unacceptable....

I opened an SR to ask for a better way.  They pointed me to CloudBerry Explorer (which is a pretty cool product).  CBE lets you pull up the files in Oracle's (or virtually any other) cloud in an interface similar to Windows File Explorer.  Great!  I set it up, selected all the files I wanted to delete and clicked the little delete button.  After the files were indexed, it kicked off multiple threads to get this done.  Three by three (about 3 sec per file), they started to delete.  I left my laptop to it over the weekend.  When I came back on Monday...it was still deleting.

I updated the SR...they pointed me to using curl in a batch script to hit the API (Doc ID 2289491.1).  Cool!  I love Linux scripting...it took a while to set up, but eventually I got it to work...and one by one (about 3 sec per file) they started to delete.  Argh.

A colleague I work with wrote a similar procedure in perl...but it was using the same API, and had the same results.

I updated the SR...they essentially said there's no way to do what I'm trying to do any faster.

I opened a new SR with an enhancement request to add a way to do mass deletes...here's the response:

"The expectation to delete the bucket directly is like a Self Destructive program ,Many customer will accidentally delete the container and will come back to US for recovery which is not possible."
...and then they closed my enhancement request.

My first thought was to send an email to Linus Torvalds and let him know about the terrible mistake he made introducing rm to his OS.  (sarcasm)  My second thought was that the company that charges me for TBs of storage usage isn't motivated to help me delete that storage.

The Solution:  Lifecycle the files out.
1. First enable Service Permissions (click here).
2. In OCI, go to Object Storage, click on the bucket you want to clean up and click on "Lifecycle Policy Rules."  Click on "Create Rule."
3. Put in anything for the name (it's going to be deleted in a minute, so this doesn't matter).  Select "Delete" as the lifecycle action and set the number of days to 1.  Leave the state at the default of Enabled, then click the Create button.  This will immediately start deleting all the files in the bucket that are over 1 day old...in my case, that was all of them.


4. The Approximate Count for the bucket will update in a few minutes.  Soon (within a minute or so) it will say 0 objects.  Now when you click the red Delete button to remove the bucket, it will go away.  I was able to remove about 150 terabytes in a few seconds.  (If you'd rather script it than click through the console, see the sketch below.)
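For what it's worth, the same rule can probably be created from the OCI CLI instead of the console...I did mine through the console, so treat this as an untested sketch (the bucket name and rule name are made up; verify the syntax with "oci os object-lifecycle-policy put --help" before relying on it):

oci os object-lifecycle-policy put \
  --bucket-name TSTDB_backups \
  --items '[{"name":"purge-old-backups","action":"DELETE","timeAmount":1,"timeUnit":"DAYS","isEnabled":true}]'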

I have no idea why Oracle made this so difficult...their job is to provide a resource, not protect us from making a mistake.  Anyway, I hope this helps some of you out.

Tuesday, August 13, 2019

How to monetize your blog with BAT

I think one thing is clear at this point...you'll either get into blockchain technology now, or you'll regret it when you get in later.  This is still the time of "early adopters"...you're not too late.  How does this tie in to a mainly Oracle/database blog?  Recently I heard Oracle's Larry Ellison gave a speech to Wharton Business School graduates about blockchain where he said it's the single most important new technology in decades.  A Bloomberg report stated Oracle is ready to announce its entry into blockchain, including a platform-as-a-service (PaaS) blockchain product...IBM and others have already done the same thing.  We've all seen blockchain enter the finance and IoT spaces, but the applications are endless...it's my other passion, besides databases.  You could say blockchain is a new database technology.  It's going to change the world.

Blockchain gave us Bitcoin and many other coins/tokens.  You've all heard of Bitcoin, but you may not have heard of the Basic Attention Token (BAT).  BAT works with the Brave browser, which can generate BAT tokens as you use it.  It then allows you to tip websites automatically when you use them.  *More info here*.  To do this you just need to download and use the Brave browser.  Think "Chrome," but faster, more secure, with ads blocked, built by Brendan Eich, the guy who created JavaScript and co-founded Mozilla.  The idea is that from a user's perspective, you can reward content creators (like websites, Twitter/YouTube/blog channels, etc.) when they help you out (at really no cost to you)...and from the content provider's perspective, they can get free money.

I've used it for about a year, and it keeps getting better, but I wanted to monetize my blog with it.  To do this, you have to register your channel, and to do that, you need to put a file with a key into a folder called ".well-known".  Blogger has no way to do this directly, but I found a way to get it to work.

1. Go to https://publishers.basicattentiontoken.org/publishers/home
2. Create an account and click on "Add Channel"
3. Click on Website
4. Add your website, e.g.: otipstricks.blogspot.com
You're then given two options to verify: via DNS or by adding a file...choose the file option.

That will give you the instructions to add a file called /.well-known/brave-rewards-verification.txt with metadata in it.

5. Now go into your blog's settings and go to "Pages" and click "New Page."
6. I named my page brave-rewards-verification.txt, although I'm not sure that was necessary, and then I pasted in the metadata I got from step 4.
7. Click save and publish...this gives you a page you can access like this:
https://otipstricks.blogspot.com/p/this-is-brave-rewards-publisher.html
8. Back to blogger settings->search preferences->Custom Redirects->Edit->New Redirect
9. Create a redirect that looks like this, with relative paths:
From:/.well-known/brave-rewards-verification.txt
To:/p/this-is-brave-rewards-publisher.html
Permanent:Yes
10. Now go back to https://publishers.basicattentiontoken.org/publishers/home and click on the verify button.  Within a few seconds it will say, "Channel Verified."
11. Now create an Uphold wallet (there's another link for that on the same page), follow the instructions to connect it to your new BAT publishers account, and you're done.  Now, when people come to your page, they can tip your blog automatically, in the background.

Now, when you go into brave and click on brave rewards, you can check your website:



Thursday, August 8, 2019

Oracle's Backup Cloud Service (OCI) - 3 setup your backup

In this series of posts:

Oracle's Backup Cloud Service (OCI)-1 - The Oracle OCI Backup Cloud

Oracle's Backup Cloud Service (OCI) - 2 setup OCI and your server environment

Oracle's Backup Cloud Service (OCI) - 3 - You are here


At this point, you've read about why you'd want to use OCI backups and their benefits, and you've read how to set up your environment for them.  This post is about setting up your backups to use it.  The only negative I can mention after using OPC/OCI for over a year is that they change the security certificates without warning.  If you install/set up the software per the docs and you have a lot of databases...this can easily occupy many hours that day, reinstalling the OCI backup module for every Oracle_Home in your infrastructure.  This is why, in my previous post, I recommended you use a shared mount.

You have already:
1. Installed OCI software on a shared mount
2. Setup a container in the cloud web page
3. Used the jar file to create the environment files you need
4. Enabled block change tracking in the database (for incrementals)
5. Enabled medium compression in rman (per the OCI backup docs, medium compression is normally licensed as part of the Advanced Compression option, but it appears that if you're paying for OCI backups, it's included.  Talk to your Oracle sales rep.)

In the server you used in the last post, as the db software owner:
1. cd /nas/dba/backups/cfg
     2. ls
        ...you should see a file called opc[database name].ora
     3. You can use this as a template for all your other databases...just copy the file with the new database name and update the container parameter in the file.

      4. If you use stored scripts in an rman repository, you can add backup scripts similar to this.  This is just the archivelog backup...you'll obviously want incrementals and fulls too (see the sketch after this script), but the key is the allocate channel commands, which are the same.  Add more channels to increase performance...until your network guys complain:

printing stored global script: tstoracle01_TSTDB1_arch
{
allocate channel ch1 DEVICE TYPE 'SBT_TAPE' CONNECT 'sys/[the_pw]@TSTDB1' PARMS  'SBT_LIBRARY=/nas/dba/backups/cfg/libopc.so, SBT_PARMS=(OPC_PFILE=/nas/dba/backups/cfg/opcTSTDB_a.ora)';
allocate channel ch2 DEVICE TYPE 'SBT_TAPE' CONNECT 'sys/[the_pw]@TSTDB2' PARMS  'SBT_LIBRARY=/nas/dba/backups/cfg/libopc.so, SBT_PARMS=(OPC_PFILE=/nas/dba/backups/cfg/opcTSTDB_a.ora)';
allocate channel ch3 DEVICE TYPE 'SBT_TAPE' CONNECT 'sys/[the_pw]@TSTDB3' PARMS  'SBT_LIBRARY=/nas/dba/backups/cfg/libopc.so, SBT_PARMS=(OPC_PFILE=/nas/dba/backups/cfg/opcTSTDB_a.ora)';
crosscheck archivelog all;
delete expired archivelog all;
backup as compressed backupset archivelog filesperset 8 not backed up 1 times delete input;
}
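For the fulls/incrementals, a stored script could look something like this (just a sketch, not my exact production script...the channel allocations are the same as above, and the script name, level, tag and filesperset choices are only examples...use level 0 for your full):

printing stored global script: tstoracle01_TSTDB1_lvl1
{
allocate channel ch1 DEVICE TYPE 'SBT_TAPE' CONNECT 'sys/[the_pw]@TSTDB1' PARMS  'SBT_LIBRARY=/nas/dba/backups/cfg/libopc.so, SBT_PARMS=(OPC_PFILE=/nas/dba/backups/cfg/opcTSTDB_a.ora)';
allocate channel ch2 DEVICE TYPE 'SBT_TAPE' CONNECT 'sys/[the_pw]@TSTDB2' PARMS  'SBT_LIBRARY=/nas/dba/backups/cfg/libopc.so, SBT_PARMS=(OPC_PFILE=/nas/dba/backups/cfg/opcTSTDB_a.ora)';
allocate channel ch3 DEVICE TYPE 'SBT_TAPE' CONNECT 'sys/[the_pw]@TSTDB3' PARMS  'SBT_LIBRARY=/nas/dba/backups/cfg/libopc.so, SBT_PARMS=(OPC_PFILE=/nas/dba/backups/cfg/opcTSTDB_a.ora)';
backup as compressed backupset incremental level 1 database filesperset 8 tag 'LVL1_INCR';
backup as compressed backupset archivelog all filesperset 8 not backed up 1 times delete input;
}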

   5. Make sure your rman environment is set up correctly:

CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';
CONFIGURE DEFAULT DEVICE TYPE TO SBT_TAPE;
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/nas/dba/backups/cfg/libopc.so, SBT_PARMS=(OPC_PFILE=/nas/dba/backups/cfg/opcTSTDB_a.ora)';

    6. Assuming you can tnsping each node from node 1, and you kick off the script from node 1, the backup should run from all 3 nodes.  You'll also need a maintenance script to delete backups that fall outside your retention period...otherwise your OCI storage will grow too large.  Here's an example of a maint script:
printing stored global script: global_maint_backup
{
crosscheck backupset device type sbt_tape;
crosscheck archivelog all;
delete noprompt expired archivelog all;
delete noprompt expired backup;
delete noprompt obsolete;
}

  
    7. One of the requirements for OCI is that the backups MUST be encrypted.  This is easy...before your backup script, after you get into rman, just add:
    
       set encryption on identified by '1VeryGoodPassword!' ONLY;
     ...and when you do the restore, you have to add a line to decrypt:
       set decryption identified by '1VeryGoodPassword!';

    8. When you do the restore, you'll probably want to use many more channels than you do when you're backing up...assuming it's an emergency and you need the restore done ASAP.  When you add lots of channels, depending on your network equipment, it may help you compete favorably for bandwidth against the cat videos being watched by all the marketing managers at your company.  A rough restore skeleton is below.
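Something like this works as a starting point (a sketch only...add as many channels as your network and storage can handle; the paths and password are the same examples used above):

set decryption identified by '1VeryGoodPassword!';
run {
allocate channel ch1 DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/nas/dba/backups/cfg/libopc.so, SBT_PARMS=(OPC_PFILE=/nas/dba/backups/cfg/opcTSTDB_a.ora)';
allocate channel ch2 DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/nas/dba/backups/cfg/libopc.so, SBT_PARMS=(OPC_PFILE=/nas/dba/backups/cfg/opcTSTDB_a.ora)';
# ...keep adding channels (ch3, ch4, ...) until you hit a network or storage bottleneck
restore database;
recover database;
}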
       

Oracle's Backup Cloud Service (OCI) - 2 (setup OCI and your server environment)

In this series of posts:

Oracle's Backup Cloud Service (OCI)-1 - The Oracle OCI Backup Cloud

Oracle's Backup Cloud Service (OCI) - 2 - You are here

Oracle's Backup Cloud Service (OCI) - 3 setup your backup


In my previous post, I talked about OCI Backups and their advantages.  Here's how you do it:

Before you start, verify you have Java 1.7+ installed on your server (java -version) and download the MML software:
http://www.oracle.com/technetwork/database/availability/oracle-cloud-backup-2162729.html

In a browser, go to:
https://console.us-ashburn-1.oraclecloud.com/

1. Setup your tenant
2. Create your user

...those steps keep changing, so I'm not going to tell you how to do them...but if you go to that page, it should be pretty self-explanatory.  Ok, so at this point you're in.

3. On the top right, select the region that's geo-located closest to you.  Be aware of any in-country requirements for your database backups.




     4.   On the top left, click on the hamburger (the button with 3 lines), and then click “Object Storage”.  The Object Storage page will appear:




5. To make management easier, put each database’s backups into its own bucket.  If you don’t see the bucket for the database you’re working on, create one by clicking on the “Create Bucket” button.

6. Fill in the “Bucket Name” text box with the database you want to add and select “standard tier”, then click the “Create Bucket” button.  The list price of the standard tier is $0.025/GB per month and archive is $0.0025/GB, but you’re required to keep archive for 90 days and there’s up to a 4-hour lag if you need to pull back archived files...so unless it’s a rare backup you’re going to keep for an extended time, you want standard.


7.   For the install to work, we first need to get:

          a. Oracle Cloud Infrastructure API signing keys
          b. the tenant OCID
          c. the user OCID
          d. the fingerprint of our RSA key

      8. Click the profile button in the top right corner (it looks like a head in a circle) and select User Details.  Click "Copy" and paste the user OCID into the editor of your choice.  It should start with: "ocid1.user.oc1.."

      9. Click again on the profile button in the top right corner (that looks like a head in a circle) and this time select "Tenancy: [whatever your tenancy is called]"

     10. Under the Tenancy information, click "Copy" to copy the tenant OCID and paste it into the editor of your choice.

     11. Assuming you have an NFS mount that's shared across all your database servers, make a directory on that mount...  e.g.: mkdir /nas/dba/backups/cfg/opc_wallet

      12. On your database server, go to the directory you unzipped the software into and generate a new RSA key pair:
         java -jar oci_install.jar -newRSAKeyPair -walletDir /nas/dba/backups/cfg/opc_wallet
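If the key pair generation worked, you should see the new key files in the wallet directory (roughly like this...your file list may vary slightly):

         ls /nas/dba/backups/cfg/opc_wallet
            ...expect to see oci_pvt (the private key) and oci_pub (the public key you'll upload in step 14)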

      13. View the oci_pub file and copy its contents to your clipboard (starting at the first dash, stopping after the last dash)
      
      14. Once again go back to the profile button in the top right corner, click on your user and scroll down to the bottom of the page where it says "API Keys".  Click "Add Public Key", paste in your new key from the previous step...when complete, click Add.

      15. A new fingerprint will be displayed.  Select that fingerprint and paste into the text editor of your choice.

      16. Using all the things you put in your editor of choice, create the install command.  Put yours all on one line...I broke it out here to make it more readable:

  java -jar oci_install.jar -host https://objectstorage.us-phoenix-1.oraclecloud.com
    -pvtKeyFile  /nas/dba/backups/cfg/opc_wallet/oci_pvt
   -pubFingerPrint [paste finger print here]
   -tOCID [paste tenancy OCID]
   -uOCID [paste user OCID]
   -libDir /nas/dba/backups/cfg
   -walletDir /nas/dba/backups/cfg/opc_wallet
   -bucket [the bucket you created in step 5]
   -cOCID [the OCID of the compartment you're using, if any]
   -configFile /nas/dba/backups/cfg

  




Oracle's Backup Cloud Service (OCI)-1


In this series of posts:
Oracle's Backup Cloud Service (OCI)-1 - You are here

Oracle's Backup Cloud Service (OCI) - 2 setup OCI and your server environment

Oracle's Backup Cloud Service (OCI) - 3 setup your backup



IMHO, one of the best use cases for the cloud is backups.  I honestly don't know why Oracle doesn't market this more...it's been the best Oracle Cloud experience I've seen.  The bar is low there...but this service has been flawless.  The questions re: security have been MOSTLY answered (save a few for AWS at Capital One).  One of the trends I'm seeing is for companies to move some of their database backups to the cloud.  If you can't handle downtime of more than a few minutes, this isn't for you; you may need storage array snaps or a fast Data Guard failover...but what about your dev/test/qa database backups?  Putting at least some of your backups in the cloud might make a lot of financial sense.  In a recent analysis, I saw a company that found it was (all things considered) roughly a quarter of the cost to put their dev/test backups in Oracle's cloud.  Backing up to Oracle's new OCI cloud (as opposed to Oracle's Classic cloud backup service) was even cheaper than AWS Glacier for their use case.

What about restore performance?  Dev/test might be more tolerant than prod of an outage during a restore/recovery, but how long are we talking here?  The answer: it really depends more on your infrastructure than it does on the OCI cloud...it is FAST.  My test used an all-flash Dell/EMC VMAX, and my 3 RAC nodes were the only things on the new array.  I have no way to know how much spare network bandwidth there was, but I did this test at night using a 1Gb link.  The key is...using Oracle backups allows you to use medium compression.  So...whatever your bandwidth is...multiply that by your compression ratio, and that's the restore speed you can expect (unless you hit a different bottleneck...storage array performance, CPU, etc.).  I posted in the past about the performance benefits of using rman compression.  In a recent test, my waits were on writing to the storage array, not the network or OCI itself.
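As a back-of-the-envelope example (the ~10:1 ratio here is just what my numbers imply...your compression ratio will vary with your data):

1 Gb/s link ≈ 125 MB/s on the wire ≈ ~0.45 TB/hr of compressed backup pieces
0.45 TB/hr x ~10:1 compression ≈ ~4.5 TB/hr of database restored

...which lines up with the ~4.66 TB/hr in the test below.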

The Test:
Oracle RAC 12.2

Almost 14TB (13,940,708 MB)
Starting at 9:53
Finishing at 12:52
...so 3 hrs...~4.66TB/hr.

So you ask...it's secured, it's much cheaper and it's as fast as it could possibly be?  You also get an automatic air-gapped, offsite backup.  What's the downside?

Once you understand it, its easy to do and the up-time has been great, too, but the instructions are almost impossible to locate online...and when you do, they're obsolete or talking about classic, not OCI.  The only instructions (that weren't wrong) I found were in the README...but they weren't great and they were about how to set it up for a database...you probable have more than 1, right?  Several times/yr, maybe without warning, Oracle will update their security certificates which will force you to do a re-install...but if you plan your installation with this in mind, you can make that almost painless.  In my next post, I'll describe what I did.....