Search This Blog

Friday, June 18, 2021

Oracle 19c Preinstall - What does it actually do?

I set up a lot of systems, lately about one new RAC cluster a week.  We use a lot of automation and scripting, but if some of that work can be done for you, that just makes it even easier.  I love what the pre-install Oracle DB RPM provides.  How do you use it?

yum install oracle-database-preinstall-19c.x86_64 

About 30 seconds later, the system is 99% ready for a RAC install.  In a previous post <<click here>>, I detailed how to download the version 12 RPM and modify it to work with Red Hat.  That's no longer necessary; the 19c version installs into RHEL...there's no longer a requirement for OEL.  You may still want to look at that post to see how to make modifications to the RPM (for instance, how to change the parameters and the UIDs and GIDs to match your environment).
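
If you want to see exactly what the RPM is going to do before you install it, you can pull it down and inspect its scriptlets and dependency list yourself.  This is just a quick sketch, assuming the package is available in a yum repo you have configured and that yum-utils is installed:

yumdownloader oracle-database-preinstall-19c
# show the pre/post install scriptlets - this is where the parameter and user/group work happens
rpm -qp --scripts oracle-database-preinstall-19c-*.rpm
# show the packages it will pull in as dependencies
rpm -qp --requires oracle-database-preinstall-19c-*.rpm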

Recently I had an issue where I was getting different results on my test system than other people were getting, and I suspected it came down to the OS parameters that were being set.  I had used a minimal RHEL 7 install and added the pre-install RPM.  I knew roughly what it did, but not the details.  I looked up the Oracle documentation, and all I found was this (for 12c):

When installed, the Oracle Preinstallation RPM does the following:

  • Automatically downloads and installs any additional RPM packages needed for installing Oracle Grid Infrastructure and Oracle Database, and resolves any dependencies
  • Creates an oracle user, and creates the oraInventory (oinstall) and OSDBA (dba) groups for that user
  • As needed, sets sysctl.conf settings, system startup parameters, and driver parameters to values based on recommendations from the Oracle RDBMS Pre-Install program
  • Sets hard and soft resource limits
  • Sets other recommended parameters, depending on your kernel version


Ok...that's not very detailed.  If you go to a sysadmin and say, "I set other recommended parameters...", he'll have lots of follow-up questions for you....

So...I once again tore open the RPM, found where the logs were sent, and went through them.  This is what it ACTUALLY does:

When you install the RPM, it verifies several other rpms exist that are needed for Oracle (or installs them if they're missing).  If these rpms require rpms you don't have, yum will install those too.  Here's the list:

procps module-init-tools ethtool initscripts bind-utils nfs-utils util-linux-ng pam
xorg-x11-utils xorg-x11-xauth smartmontools
binutils glibc glibc-devel
ksh libaio libaio-devel libgcc libstdc++ libstdc++-devel
make sysstat openssh-clients
psmisc net-tools unzip bc tar

...in addition, 

if you're on RHEL7, it'll also require compat-libcap1

if you're on RHEL8, it'll require both libnsl and compat-openssl10
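
If you just want to confirm those packages actually made it onto a box (say, one somebody else built), a quick loop like this works...a sketch using the RHEL7 list above; swap compat-libcap1 for libnsl and compat-openssl10 on RHEL8:

for p in procps module-init-tools ethtool initscripts bind-utils nfs-utils util-linux-ng pam \
         xorg-x11-utils xorg-x11-xauth smartmontools binutils glibc glibc-devel ksh libaio \
         libaio-devel libgcc libstdc++ libstdc++-devel make sysstat openssh-clients psmisc \
         net-tools unzip bc tar compat-libcap1
do
  # print only the packages that are missing
  rpm -q "$p" >/dev/null 2>&1 || echo "MISSING: $p"
done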

The next thing it does is add the required OS groups.  They are:

oinstall,dba,oper,backupdba,dgdba,kmdba,racdba

...and then creates the oracle OS user and puts it in those groups.
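
If you'd rather precreate the groups and user yourself (for instance, to force the UIDs and GIDs to match across all your cluster nodes), the equivalent commands look roughly like this.  The numeric IDs below are just example values, not something the RPM guarantees:

# example IDs only - pick values that match your standards on every node
groupadd -g 54321 oinstall
groupadd -g 54322 dba
groupadd -g 54323 oper
groupadd -g 54324 backupdba
groupadd -g 54325 dgdba
groupadd -g 54326 kmdba
groupadd -g 54330 racdba
useradd -u 54321 -g oinstall -G dba,oper,backupdba,dgdba,kmdba,racdba oracle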

Next, it adjusts settings in sysctl.conf:

fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.default.rp_filter = 2
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500

Be aware, it sets fs.aio-max-nr to a value *much* lower than you likely need.  See note 2229798.1 for details.
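
To see what you actually ended up with (and to bump fs.aio-max-nr if that note applies to you), something like this does it:

# check the live values
sysctl fs.aio-max-nr fs.file-max kernel.shmmax kernel.shmall net.ipv4.ip_local_port_range

# raise fs.aio-max-nr on the fly (the value here is just an example),
# then persist it in sysctl.conf or a drop-in file under /etc/sysctl.d/
sysctl -w fs.aio-max-nr=3145728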

Next, on my system it set up user limits with these values.  I *think* the memlock setting was based on the amount of RAM I had in my VM at the time...which was not much:

Adding oracle soft nofile 1024
Adding oracle hard nofile 65536
Adding oracle soft nproc 16384
Adding oracle hard nproc 16384
Adding oracle soft stack 10240
Adding oracle hard stack 32768
Adding oracle hard memlock 134217728
Adding oracle soft memlock 134217728
Adding oracle soft data unlimited
Adding oracle hard data unlimited
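
On my system those ended up in a file under /etc/security/limits.d/ (the exact filename will depend on the RPM version).  A quick way to confirm what the oracle user really gets:

# find where the limits were written
grep -r oracle /etc/security/limits.conf /etc/security/limits.d/

# check the effective limits for a fresh oracle login: open files, processes, stack, memlock
su - oracle -c 'ulimit -n -u -s -l'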

Next, it adjusts the boot parameters:

Originally   : crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet

Changed to : crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet numa=off transparent_hugepage=never (which also disables THP memory compaction)
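
The grub change only takes effect after a reboot, so it's worth confirming what the running kernel actually booted with versus what the next boot will use:

# what the running kernel booted with
cat /proc/cmdline

# what the default grub entry will use on the next boot (RHEL7)
grubby --info=DEFAULT | grep args

# confirm transparent hugepages really are off
cat /sys/kernel/mm/transparent_hugepage/enabled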

Lastly, it edits /etc/sysconfig/network:

Adding NOZEROCONF=yes

You may need to adjust those settings for your environment if it's large.  Sizing issues aside, after you install the RPM, within a few seconds you'll have a very stable system, ready for a RAC install.  There are still other recommended things you should do...for example, verifying the network is using jumbo frames for the interconnect and reserving hugepages for your SGA, but the RPM does a lot of the OS work for you.
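
For those two follow-ups, quick checks like these are enough to see where you stand (eth1 below is just a placeholder for your private interconnect interface):

# interconnect MTU - you want 9000 if you're running jumbo frames end to end
ip link show eth1 | grep mtu

# hugepages reserved vs. free - size vm.nr_hugepages to cover the SGAs on the node
grep -i huge /proc/meminfo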

Hopefully this will help you better answer the question, "What does the Oracle Pre-install RPM do?"

Thursday, July 23, 2020

The complete post-12c online procedure to move a file from file system to ASM (ORA-01110,ORA-01157,ORA-01187,ORA-12850,RMAN-03009)

In the COVID-19 world, most of us would rather spend time learning about the Seattle Kraken, Max Ehrich, Roy Jones Jr and the Washington Football Team.  They'll probably eventually do what Prince did and just call themselves "Symbol."  Roy Jones and Max Ehrich will both get knocked out, and the Blues will have one more team to beat before they get their next cup.  If you're reading this...you probably also care about Oracle databases...so let me tell you about a recent experience I had.

A client I work with had an issue the other day (actually 2 issues) in a container database.  This database has very strict uptime SLAs...so downtime for a bounce of an instance isn't an option.  The online process to move a datafile from the file system to ASM is really a 3-step process in 12c+...until now I'd always thought it was a single "move datafile" command.

1. They hit a bug similar to the issue discussed in note 2558640.1: while doing a select from cdb_data_files, they got "ORA-12850: Could not allocate slaves on all specified instances: 3 needed, 0 allocated".

2. This happened in a script that's used to add datafiles as needed.  Unfortunately, that script created a datafile named with some of the error text instead of "+DATA"...so the file was created in the dbs directory on node 1 instead of in ASM.  This is RAC, so the datafile could only be accessed on the first node; RAC requires shared storage.


First order of business: get it working again.  This is 18.7, so it was easy to move the file into ASM without downtime.  The filename was ridiculous, full of special characters, and the file was on the file system on node 1, so from node 1 I did:


alter database move datafile 1654 to '+DATA' ;


...and that worked; now the datafile is in ASM's shared storage.  I kicked off a backup (which failed):



                RMAN-03009: failure of backup command on ch4 channel at 07/23/2020 11:23:24 
                ORA-01157: cannot identify/lock data file 1654 - see DBWR trace file
                ORA-01110: data file 1654: '+DATA/DB/DATAFILE/sysaux.22113.1246514439'

On node 1, this file was 100% working in ASM.  On the other nodes, it was unavailable, giving ORA-01157s.  In gv$datafile I could see it was online and available on all nodes.  This made no sense to me.  I even double-checked by going into asmcmd and doing an "ls" from one of the nodes that was failing...yep, it was there.  It was acting like it was offline, but reporting it was online.
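
A handy way to compare what each instance thinks about the file is gv$datafile_header, which has a per-instance ERROR column.  This is just a sketch of that kind of check; 1654 is the file number from this particular incident:

sqlplus -s / as sysdba <<'EOF'
set lines 200 pages 100
col name  for a70
col error for a20
-- the ERROR column is populated per instance when that instance can't use the file
select inst_id, file#, status, error, name
from   gv$datafile_header
where  file# = 1654
order  by inst_id;
EOF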

I googled/checked Metalink and didn't really find a good hit.  Finally I thought...there's no harm in onlining a file that says it's online...so I did an:

ALTER DATABASE DATAFILE 1654 ONLINE;

The error changed from:
ORA-01157: cannot identify/lock data file 1654 
to:
ORA-01187: cannot read from file 1654 because it failed verification tests

It was still working on node 1, but getting ORA-01187 on the other nodes.  This time I found a solution:

ALTER SYSTEM CHECK DATAFILES;

...which I did on the nodes that were having issues.  After that, everything began to work.

With the solution and the original error, I went back to MOS and this time found the ultimate answer (Doc 2499264.1), which states:

ORA-01157 error is expected to occur when running a query on dba_datafiles on a particular instance (of RAC database) for a file moved (just after creation) from Filesystem to ASM, without instance wide identification.
 
This problem was previously addressed in Bug 27268242 which was also closed as "Not a Bug" because this is an expected behavior.

(I think they meant cluster or database-wide identification.) 
<VENT> Come on, Oracle!  You shouldn't expect to get an error when running a correct query against a valid datafile.  This just causes more trouble and more steps, and after a datafile move there's no reason to leave it in a partially working state.  If you're in RAC and somebody moves a file to ASM...let the other nodes know!  There's no conceivable situation where somebody would want to move a file in RAC and have it only available to a single instance...make the alter database move datafile command check datafiles for us when the file is coming from the file system!
</VENT>

Until Larry starts listening to me...this means we have multiple steps.

That note also included a resolution:
alter system check datafiles;
alter system flush shared_pool; 

...so at this point, the datafile is working correctly on all nodes.

This means we have 3 statements to run when we move a file from the file system to ASM:
alter database move datafile 1654 to '+DATA';
alter system check datafiles;
alter system flush shared_pool;
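
If you'd rather not remember that, here's a minimal wrapper for the whole sequence.  The file number and disk group are placeholders, and per the MOS note, if a node still complains afterwards, run the check datafiles on that node as well:

#!/bin/bash
# move a datafile from the file system into ASM and make sure every instance can use it
FILE_NO=1654          # placeholder - the file# you're moving
DISKGROUP=+DATA       # placeholder - your target disk group

sqlplus -s / as sysdba <<EOF
alter database move datafile ${FILE_NO} to '${DISKGROUP}';
alter system check datafiles;
alter system flush shared_pool;
EOF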

Now that it's working, I can re-address problem #1 above...the root cause.

Note 2558640.1 discusses a similar problem when OEM is hitting the views that show the datafiles.  My issue isn't in OEM, but in every other way this matches up.  The work-around:


alter system set "_px_cdb_view_enabled"=false comment='Work-around for internal view error and OEM-see bug 29595328';

This prevents the issue from happening again, but it also slows down the query (and others) hitting cdb_data_files from 45 seconds (there are a lot of datafiles) to 90 seconds.  I suppose that's better than getting the error...I opened an SR to ask that this be fixed in future releases.

I love the new 12c+ feature for online datafile moves...but it would be great if the full process were documented for moving a datafile off the file system...all I've seen in the docs is the basic move datafile command, not all 3 steps.

With the system working and in a state where this won't happen again...I can now return to C19's bad hair and toilet paper and coin shortages.

Wednesday, May 6, 2020

RHEL7 Control Groups on Oracle Database Servers


After some Linux patching on some database servers, I was surprised to see an entry in an alert log I hadn't seen before…complete with a WARNING message.  After digging around, I found it was due to Linux control groups.  In an effort to help manage servers with multiple workloads, RHEL7 introduced control groups, and they've added features/abilities with each dot release.  On the affected Oracle nodes in RHEL7 we see entries like this in the alert log on startup:

************************ Large Pages Information *******************
Parameter use_large_pages = ONLY
Per process system memlock (soft) limit = UNLIMITED

Large page usage restricted to processor group "user.slice"

Total Shared Global Region in Large Pages = 60 GB (100%)

WARNING:
  The parameter _linux_prepage_large_pages is explicitly disabled.
  Oracle strongly recommends setting the _linux_prepage_large_pages
  parameter since the instance  is running in a Processor Group. If there is
  insufficient large page memory, instance may encounter SIGBUS error
  and may terminate abnormally.

Large Pages used by this instance: 30721 (60 GB)
Large Pages unused in Processor Group user.slice = 1722 (3444 MB)
Large Pages configured in Processor Group user.slice = 120000 (234 GB)
Large Page size = 2048 KB

You may also see:
Large page usage restricted to processor group "system.slice/oracle-ohasd.service"

…this concerned me, because if our large page usage is restricted and we can't do a prepage, eventually the db_cache would warm up and try to allocate memory beyond the limitation, crashing the instance.  Also, "WARNING" on startup is something to take a closer look at.  That undocumented parameter was most definitely NOT explicitly disabled by us.  This message appears in every database that's being controlled by processor groups, so it appears prepage of large pages is disabled automatically.  There's a note (2414778.1) that says this message can be ignored, and it's removed in 12.2+.

So…what amount of memory is our group being limited to by default?
>systemctl show user.slice|grep Mem

MemoryCurrent=18446744073709551615
MemoryAccounting=no
MemoryLimit=18446744073709551615

…which works out to be 17,179,869,184 GB (roughly 17 exabytes), which is probably a sufficient amount of RAM (per node) for any database out there…so we're basically being limited to slightly less than infinity…nothing to worry about.
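
If you want to see which slice your instance actually landed in, you can check one of its background processes directly (the pmon name pattern below assumes a standard process name and that ORACLE_SID is set in your environment):

# which control group is this instance running under?
PMON_PID=$(pgrep -f "ora_pmon_${ORACLE_SID}")
cat /proc/${PMON_PID}/cgroup

# or browse the whole control group tree
systemd-cgls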

Are control groups useful in a multi-instance database server environment?  To set up CPU limits per instance, you could use this or instance caging…I would prefer to set the cpu_count parameter and use instance caging.  Memory limits?  Memory utilization for the SGA is set via sga_target/sga_max_size…so I don't think there's a play for it here either.  Maybe the accounting features could be useful…if you wanted to see how much CPU one instance on a multi-instance db server was using historically?  There are AWR tables that track that…but getting the information from control groups might be easier….

One potentially useful way to use control groups on a database server with multiple instances is to throttle IO bandwidth, to prevent one instance from using up all the IO and affecting the performance of a different instance (assuming each instance is on its own ASM disks or LUNs).  There's a note about how to get that to work on Metalink: 2495924.1.  This has been a useful feature in Exadata for years with IORM…now you can have similar functionality without Exadata.
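
I haven't had to use this yet, but on plain RHEL7 the systemd knobs look roughly like this.  The unit name, device and limits are placeholders, and note 2495924.1 covers the supported way to wire it up for the database:

# rough idea only - cap block IO to a given device for everything under that unit
systemctl set-property oracle-ohasd.service BlockIOReadBandwidth="/dev/sdb 200M"
systemctl set-property oracle-ohasd.service BlockIOWriteBandwidth="/dev/sdb 200M"

# confirm it took
systemctl show oracle-ohasd.service | grep BlockIO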

One other interesting feature of control groups is the ability to look at the resources each group is using in real time (if you have accounting on) with the systemd-cgtop command.  If you don't have accounting, you'll see something like the screenshot below: a summary of process counts, CPU, memory and IO for each group.

[screenshot: systemd-cgtop output]
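
Turning accounting on is just a property change on the slice, and then systemd-cgtop has real numbers to show (user.slice here because that's where my instances landed; yours may differ):

# turn accounting on for the slice so systemd-cgtop has something to report
systemctl set-property user.slice CPUAccounting=yes MemoryAccounting=yes BlockIOAccounting=yes

# then watch per-group usage in real time
systemd-cgtop
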
It took some time to find the details about what the new alert log entries were telling me with the new warning and processor group information.  Hopefully this information will save you some time. 

Friday, August 30, 2019

Oracle Cloud Backups - The easy way to mass delete files in a large OCI bucket

In some previous posts, I talked about the benefits of doing backups to Oracle's cloud...if you run the numbers, it will be significantly (orders of magnitude) less expensive than backups on-prem or with other cloud vendors.  It's safer than keeping backups geographically close to you; they're very fast (depending on your WAN speed, the storage array's write speed will be the bottleneck for your restore); and they're encrypted, compressed and secured.  Once you have everything scripted...it's "set it and forget it."  The uptime far exceeds typical on-site backup MML systems...which is important, because failing to do timely archivelog backups may translate into a database outage.

I've really only had 1 complaint until now.  Here's the full story...skip down to "The Solution" if you're impatient.

When I first helped a customer set up their backups to Oracle's Classic backup service (pre-OCI), there was a problem.  The design called for the database backups to go into a different bucket for each database.  When the non-prod databases were refreshed from production, they wanted to delete the bucket that held those backups.  No problem...there's a big red delete button on the bucket page, right?  Wrong!  You can't delete a bucket that has files in it.  When you back up even a medium-sized database, you can end up with hundreds of thousands of files.  The only way to delete files in the console is to select 100 at a time and click delete.  I had 10 databases/buckets and over a million files, and it took about 5 seconds to delete 100 files.  If I did nothing else...we're talking ~14 hrs of work.  In reality it was going to take somebody days to delete the backups after every refresh.  Completely unacceptable....

I opened an SR to ask for a better way.  They pointed me to CloudBerry Explorer (which is a pretty cool product).  CBE lets you pull up the files in Oracle's (or virtually any other) cloud in an interface similar to Windows File Explorer.  Great!  I set it up, selected all the files I wanted to delete and clicked the little delete button.  After the files were indexed, it kicked off multiple threads to get this done.  Three at a time (about 3 seconds per file), they started to delete.  I left my laptop to it over the weekend.  When I came back on Monday...it was still deleting them.

I updated the SR...they pointed me to using curl in batch to hit the API (Doc ID 2289491.1).  Cool!  I love Linux scripting...it took a while to set it up, but eventually I got it to work...and one by one (about 3 seconds per file), they started to delete.  Argh.

A colleague I work with wrote a similar procedure in perl...but it was using the same API, and had the same results.

I updated the SR...they essentially said there's no way to do what I'm trying to do any faster.

I opened an new SR for an enhancement request to add a way to do mass deletes...here's the response:

"The expectation to delete the bucket directly is like a Self Destructive program ,Many customer will accidentally delete the container and will come back to US for recovery which is not possible."
...and then they closed my enhancement  request.

My first thought was to send an email to Linus Torvalds and let him know about the terrible mistake he made introducing rm to his OS.  (sarcasm)  My second thought was that the company that charges for my TB's of storage usage isn't motivated to help me delete the storage.

The Solution:  Lifecycle the files out.
1. First enable Service Permissions (click here).
2. In OCI, go to Object Storage, click on the bucket you want to clean up and click on "Lifecycle Policy Rules."  Click on "Create Rule."
3.  Put in anything for the name (it's going to be deleted in a minute, so this doesn't matter).  Select "Delete" as the lifecycle action and set the number of days to 1.  Leave the state at the default, Enabled, then click the Create button.  This will immediately start deleting all the files in the bucket that are over 1 day old...in my case, that was all of them.


4. The Approximate Count for the bucket will update in a few minutes.  Soon (within a minute or so) it will say 0 objects.  Now when you click the red Delete button to remove the bucket, it will go away.  I was able to remove about 150 terabytes in a few seconds.
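
If you'd rather script it than click through the console, the OCI CLI has an object-lifecycle-policy command that should do the same thing.  This is only a rough sketch...the bucket name is a placeholder, you may need to add --namespace depending on your CLI config, and you should verify the JSON field names against the current CLI docs before relying on it:

# rough sketch - create a "delete everything older than 1 day" rule on a bucket
oci os object-lifecycle-policy put \
  --bucket-name TSTDB_BACKUPS \
  --items '[{"name":"purge-old-backups","action":"DELETE","timeAmount":1,"timeUnit":"DAYS","isEnabled":true}]'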

I have no idea why Oracle made this so difficult...their job is to provide a resource, not protect us from making a mistake.  Anyway, I hope this helps some of you out.

Tuesday, August 13, 2019

How to monetize your blog with BAT

I think one thing is clear at this point...you'll either get into blockchain technology now, or you'll regret it when you get in later.  This is still the time of "early adopters"...you're not too late.  How does this tie in to a mainly Oracle/database blog?  Recently I heard Oracle's Larry Ellison gave a speech to Wharton Business School graduates about blockchain, where he said it's the single most important new technology in decades.  A Bloomberg report stated Oracle is ready to announce its entry into blockchain, including a platform-as-a-service (PaaS) blockchain product...IBM and others have already done the same thing.  We've all seen blockchain enter the finance and IoT spaces, but the applications are endless...it's my other passion, besides databases.  You could say blockchain is a new database technology.  It's going to change the world.

Blockchain gave us Bitcoin and many other coins/tokens.  You've all heard of Bitcoin, but you may not have heard of the Basic Attention Token (BAT).  BAT works with the Brave browser, which can generate BAT tokens as you use it.  It then allows you to tip websites automatically when you use them.  *More info here*.  To do this you just need to download and use the Brave browser.  Think "Chrome," but faster, more secure, with ads blocked, built by Brendan Eich, the guy who created JavaScript and co-founded Mozilla.  The idea is that from a user's perspective, you can reward content creators (like websites, Twitter/YouTube/blog channels, etc.) when they help you out (at really no cost to you)...and from the content provider's perspective, they can get free money.

I've used it for about a year, and it keeps getting better, but I wanted to monetize my blog with it.  To do this, you have to register your channel, and to do that, you need to put a file with a key into a folder called ".well-known".  Blogger has no way to do this directly, but I found a way to get it to work.

1. Go to https://publishers.basicattentiontoken.org/publishers/home
2. Create an account and click on "Add Channel"
3. Click on Website
4. Add your website: ie: otipstricks.blogspot.com
You're then given two options to verify: via DNS or by adding a file...choose to add the file.

That will give you the instructions to add a file called /.well-known/brave-rewards-verification.txt with metadata in it.

5. Now go into your blog's settings and go to "Pages" and click "New Page."
6. I named my page brave-rewards-verification.txt, although I'm not sure that was necessary, and then I pasted in the metadata I got from step 4.
7. Click save and publish...this gives you a page you can access like this:
https://otipstricks.blogspot.com/p/this-is-brave-rewards-publisher.html
8. Back to blogger settings->search preferences->Custom Redirects->Edit->New Redirect
9. Create a redirect that looks like this, with relative paths:
From:/.well-known/brave-rewards-verification.txt
To:/p/this-is-brave-rewards-publisher.html
Permanent:Yes
10. Now go back to https://publishers.basicattentiontoken.org/publishers/home and click on the verify button.  Within a few seconds it will say, "Channel Verified."
11. Now create an Uphold wallet (there's another link for that on the same page), follow the instructions to connect it to your new BAT publishers account, and you're done.  Now, when people come to your page, they can tip your blog automatically, in the background.

Now, when you go into Brave and click on Brave Rewards, you can check your website:

[screenshot: Brave Rewards showing the verified site]

Thursday, August 8, 2019

Oracle's Backup Cloud Service (OCI) - 3 setup your backup

In this series of posts:

Oracle's Backup Cloud Service (OCI)-1 - The Oracle OCI Backup Cloud

Oracle's Backup Cloud Service (OCI) - 2 setup OCI and your server environment

Oracle's Backup Cloud Service (OCI) - 3 - You are here


At this point, you've read about why you want to use OCI backups and their benefits, and you've read how to set up your environment.  This post is about setting up your backups to use it.  The only negative I can offer after using OPC/OCI for over a year is that they change the security certificates without warning.  If you installed/set up the software per the docs and you have a lot of databases...this can easily occupy many hours that day, reinstalling the OCI software for every ORACLE_HOME in your infrastructure.  This is why, in my previous post, I recommended you use a shared mount.

You have already:
1. Installed OCI software on a shared mount
2. Setup a container in the cloud web page
3. Used the jar file to create the environment files you need
4. Enabled block change tracking in the database (for incrementals)
5. Enabled medium compression in RMAN (per the OCI backup docs; this is normally licensed as part of the Advanced Compression option, but it appears that if you're paying for OCI backups, it's included.  Talk to your Oracle sales guy.)

In the server you used in the last post, as the db software owner:
1. cd /nas/dba/backups/cfg
     2. ls
        ...you should see a file called opc[database name].ora
     3. You can use this as a template for all your other databases...just copy the file, rename it for the new database, and update the container parameter in the file.

      4. If you use stored scripts in an RMAN repository, you can add backup scripts similar to this one.  This is just the archivelog backup...you'll want incrementals and fulls too, obviously, but the key is the allocate channel commands, which are the same.  Add more channels to increase performance...until your network guys complain:

printing stored global script: tstoracle01_TSTDB1_arch
{
allocate channel ch1 DEVICE TYPE 'SBT_TAPE' CONNECT 'sys/[the_pw]@TSTDB1' PARMS  'SBT_LIBRARY=/nas/dba/backups/cfg/libopc.so, SBT_PARMS=(OPC_PFILE=/nas/dba/backups/cfg/opcTSTDB_a.ora)';
allocate channel ch2 DEVICE TYPE 'SBT_TAPE' CONNECT 'sys/[the_pw]@TSTDB2' PARMS  'SBT_LIBRARY=/nas/dba/backups/cfg/libopc.so, SBT_PARMS=(OPC_PFILE=/nas/dba/backups/cfg/opcTSTDB_a.ora)';
allocate channel ch3 DEVICE TYPE 'SBT_TAPE' CONNECT 'sys/[the_pw]@TSTDB3' PARMS  'SBT_LIBRARY=/nas/dba/backups/cfg/libopc.so, SBT_PARMS=(OPC_PFILE=/nas/dba/backups/cfg/opcTSTDB_a.ora)';
crosscheck archivelog all;
delete expired archivelog all;
backup as compressed backupset archivelog filesperset 8 not backed up 1 times delete input;
}

   5. Make sure your rman environment is set up correctly:

CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';
CONFIGURE DEFAULT DEVICE TYPE TO SBT_TAPE;
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/nas/dba/backups/cfg/libopc.so, SBT_PARMS=(OPC_PFILE=/nas/dba/backups/cfg/opcTSTDB_a.ora)';

    6. Assuming you can tnsping each node from node 1 and you kick off the script from node 1, the backup should run from all 3 nodes.  You'll also need a maintenance script to delete backups that are outside of your retention period...otherwise your OCI storage will grow too large.  Here's an example of a maintenance script:
printing stored global script: global_maint_backup
{
crosscheck backupset device type sbt_tape;
crosscheck archivelog all;
delete noprompt expired archivelog all;
delete noprompt expired backup;
delete noprompt obsolete;
}

  
    7. One of the requirements for OCI is that the backups MUST be encrypted.  This is easy...before your backup script, after you get into RMAN, just add:
    
       set encryption on identified by '1VeryGoodPassword!' ONLY;
     ...and when you do the restore, you have to add a line to decrypt:
       set decryption identified by '1VeryGoodPassword!';

    8. When you do the restore, you'll probably want to use many more channels than you do when you're backing up...assuming it's an emergency and you need the restore done ASAP.  When you add lots of channels, depending on your network equipment, it may help you compete favorably for bandwidth against the cat videos being watched by all the marketing managers at your company.
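
A restore run block ends up looking something like this...just a sketch, with the password, OPC_PFILE path and channel count as placeholders (keep adding channels until the network or the array tops out):

# assumes the database is already mounted and you're restoring in place
rman target / <<'EOF'
set decryption identified by '1VeryGoodPassword!';
run {
  allocate channel ch1 DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/nas/dba/backups/cfg/libopc.so, SBT_PARMS=(OPC_PFILE=/nas/dba/backups/cfg/opcTSTDB_a.ora)';
  allocate channel ch2 DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/nas/dba/backups/cfg/libopc.so, SBT_PARMS=(OPC_PFILE=/nas/dba/backups/cfg/opcTSTDB_a.ora)';
  allocate channel ch3 DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/nas/dba/backups/cfg/libopc.so, SBT_PARMS=(OPC_PFILE=/nas/dba/backups/cfg/opcTSTDB_a.ora)';
  allocate channel ch4 DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/nas/dba/backups/cfg/libopc.so, SBT_PARMS=(OPC_PFILE=/nas/dba/backups/cfg/opcTSTDB_a.ora)';
  restore database;
  recover database;
}
EOF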
       

Oracle's Backup Cloud Service (OCI) - 2 (setup OCI and your server environment)

In this series of posts:

Oracle's Backup Cloud Service (OCI)-1 - The Oracle OCI Backup Cloud

Oracle's Backup Cloud Service (OCI) - 2 - You are here

Oracle's Backup Cloud Service (OCI) - 3 setup your backup


In my previous post, I talked about OCI Backups and their advantages.  Here's how you do it:

Before you start, verify you have Java 1.7+ installed on your server (java -version) and download the MML software:
http://www.oracle.com/technetwork/database/availability/oracle-cloud-backup-2162729.html

In a browser, go to:
https://console.us-ashburn-1.oraclecloud.com/

1. Setup your tenant
2. Create your user

...those steps keep changing, so I'm not going to tell you how to do them...but if you go to that page, it should be pretty self-explanatory.  Ok, so at this point you're in.

3. On the top right, select the region that's geo-located closest to you.  Be aware of any in-country requirements for your database backups.




     4.   On the top left, click on the hamburger (the button with 3 lines), and then click “Object Storage”.  The Object Storage page will appear:




5. To make management easier, put each database’s backups into its own bucket.  If you don’t see the bucket for the database you’re working on, create one by clicking on the “Create Bucket” button.

6. Fill in the “Bucket Name” text box with the database you want to add and select “standard tier”, then click the “Create Bucket” button.  The list price of the standard tier is $0.025/GB and the archive tier is $0.0025/GB, but you’re required to keep archive objects for 90 days and there’s up to a 4-hour lag if you need to pull back archived files...so unless it's a rare backup you're going to keep for an extended time, you want standard.
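
To put some made-up numbers on that: as I understand it, those rates are per GB per month, so roughly 10 TB of backups kept on the standard tier works out to about 10,000 GB x $0.025 = $250/month, versus around $25/month on archive...before the 90-day minimum and the retrieval lag make archive a poor fit for routine restores.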


7.   For the install to work, we first need to get:  


          a. Cloud Infrastructure API signing keys
          b. tenant OCID
          c. user OCID
          d. fingerprint of our RSA key.

      8. Click the profile button in the top right corner (it looks like a head in a circle) and select User Details.  Click "Copy" and paste the user OCID into the editor of your choice.  It should start with: "ocid1.user.oc1.."

      9. Click again on the profile button in the top right corner (that looks like a head in a circle) and this time select "Tenancy: [whatever your tenancy is called]"

     10. Under the Tenancy information, click on "Copy" and copy the tenant OCID and paste it into the editor of your choice.

     11. Assuming you have an NFS mount that's shared across all your database servers, make a directory on that mount...  e.g.: mkdir /nas/dba/backups/cfg/opc_wallet

      12. On your database server, go to the directory you unzipped the software into and get a new RSA key pair:
         java -jar oci_install.jar -newRSAKeyPair -walletDir /nas/dba/backups/cfg/opc_wallet

      13. View the oci_pub file and select it so it loads into your laptop's clipboard (starting at the first dash, stopping after the last dash)
      
      14. Once again, back to the profile button in the top right corner: click on your user, scroll down to the bottom of the page where it says "API Keys," click "Add Public Key," and paste in your new key from the previous step...when complete, click Add.

      15. A new fingerprint will be displayed.  Select that fingerprint and paste into the text editor of your choice.

      16. Using all the things you put in your editor of choice, create the install command.  Put yours all in one line...I broke this out to make it more readable:

  java -jar oci_install.jar -host https://objectstorage.us-phoenix-1.oraclecloud.com
    -pvtKeyFile  /nas/dba/backups/cfg/opc_wallet/oci_pvt
   -pubFingerPrint [paste finger print here]
   -tOCID [paste tenancy OCID]
   -uOCID [paste user OCID]
   -libDir /nas/dba/backups/cfg
   -walletDir /nas/dba/backups/cfg/opc_wallet
   -bucket [the bucket you created in step 5]
   -cOCID [the OCID of the compartment you're using, if any]
   -configFile /nas/dba/backups/cfg