Wednesday, March 25, 2015

Oracle 12c threaded_execution and netbackup

If you haven't started migrating your databases to 12c, you'd better start!  The last patchset of 11g runs out of free extended support at the end of January '16, and 11.2.0.3 on 8/27/2015.  Besides, 12c is a huge step forward for Oracle, if you take advantage of the new features.  You COULD just install the db and run it like it was version 7.3 and not take advantage of its features...that would be easier, but you'd be doing the company you work for a disservice.  Besides, new features are what's fun about being a DBA!

If you've spent any time at all looking at the new features in 12c, you've probably come across "threaded_execution."  It essentially makes all connections to the database (and most background "processes") threads (aka lightweight processes) instead of processes in Linux.  The advantage is that switching between them becomes a lightweight in-process operation rather than a full OS context switch, and memory that didn't used to be shareable between processes is now shared.  In my performance tests you could see a measurable improvement, but the big advantage I found was in scalability.  There are lots of posts talking about its performance and memory benefits.  This is one of the rare simple changes you can make where the user will notice a quicker system.  IMHO it's something non-standard that should be a standard...like how hugepages should always be used, for instance.
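Enabling it is just a spfile change and a bounce.  A minimal sketch (keep in mind that once it's on, this will be your last OS-authenticated "/ as sysdba" session, so plan accordingly):

sqlplus / as sysdba <<EOF
alter system set threaded_execution=true scope=spfile;
shutdown immediate
startup
EOF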

Usually, when you do a ps -ef|grep ANDY (my instance name is ANDY), you'd see MANY Oracle background processes.  When threaded_execution is set to true, most of those processes become threads...so now when you do a ps -ef|grep ANDY you only see these 6:

oracle   48460     1  0 14:08 ?        00:00:00 ora_pmon_ANDY
oracle   48462     1  0 14:08 ?        00:00:01 ora_psp0_ANDY
oracle   48464     1  4 14:08 ?        00:03:03 ora_vktm_ANDY
oracle   48468     1  0 14:08 ?        00:00:12 ora_u004_ANDY
oracle   48482     1  0 14:08 ?        00:00:08 ora_dbw0_ANDY
oracle   50665     1  1 14:10 ?        00:00:57 ora_u005_ANDY

Like the great philosopher Bret Michaels of Poison once said, "Every rose has its thorn."

The main problem with the threaded execution architecture on Linux is that OS authentication no longer works (i.e., no more sqlplus "/ as sysdba").  I've been able to take advantage of other new 12c features to compensate for that in our scripting, but it's kind of a pain.  Even so, it's worth it to get the most out of the system.  Besides, you could argue that losing OS authentication could mean better security.
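For scripting, that means authenticating through the listener with the password file instead.  A minimal sketch of the idea (the SYS_PASSWORD variable and the ANDY TNS alias are placeholders for whatever your scripts use):

sqlplus -S sys/"${SYS_PASSWORD}"@ANDY as sysdba <<EOF
select status from v\$instance;
EOF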

While doing extensive testing with this feature I hit a problem.  Although Symantec supports 12c in NetBackup 7.6.0.2+, the environment variables passed in with my allocate channel commands weren't getting to NetBackup.

run
{
  allocate channel t1 DEVICE TYPE SBT parms 'BLKSIZE=4194304, ENV=(NB_ORA_POLICY=Oracle_Master_1)';
  allocate channel t2 DEVICE TYPE SBT parms 'BLKSIZE=4194304, ENV=(NB_ORA_POLICY=Oracle_Master_2)';
  allocate channel t3 DEVICE TYPE SBT parms 'BLKSIZE=4194304, ENV=(NB_ORA_POLICY=Oracle_Master_3)';
  backup filesperset 8 database format '%d_%U';
}

In the script above, NB_ORA_POLICY was being reported as Oracle_Master_1 for each channel in the NB logs and on the NB console.  The other 2 channels would eventually error out, and the backup would complete VERY slowly on the one remaining channel.  If I set threaded_execution=false, NB worked fine on all 3 channels as usual.  I searched the internet for a solution and came up with nothing...I created an SR with Oracle...the response was "Contact your MML vendor."  I created a ticket with Symantec, who escalated it to their Engineering group, and I worked closely with a great on-site Symantec consultant, but we weren't able to get it to work.  The Symantec guys weren't familiar with the new architecture, so as I explained to them how connections come in first to the listener, and then a thread is created for the user...*BAM*...the solution hit me:

THE FIX:
Make RMAN use processes, make everything else use threads.

For threaded execution to make threaded connections, you have to add a line to your listener.ora file:

DEDICATED_THROUGH_BROKER_[listener_name] = ON

I already had a normal listener for TCP connections, so I created a new (2nd) static listener using IPC with "dedicated_through_broker" set to OFF (the default) for that listener, then I created the entry in my db's tnsnames.ora.

In listener.ora:

lsnr_rman =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROCRMAN))
      )
    )
  )

SID_LIST_lsnr_rman =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = ANDY)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/db_2)
    )
  )

DEDICATED_THROUGH_BROKER_lsnr_rman = OFF

...then I started the new listener:

lsnrctl start lsnr_rman

Then I added the alias in tnsnames.ora (note the IPC KEY has to match the one the new listener is listening on):

rman =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROCRMAN))
    (CONNECT_DATA = (SID = ANDY))
    (HS = )
  )
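Before handing the alias to RMAN, a quick sanity check that the IPC path connects and spawns a dedicated process rather than a thread (using the same credentials the channels will use below):

sqlplus rman_user/changeme@rman as sysdba

If it works, a new oracleANDY (LOCAL=NO) process shows up in ps while you're connected.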

The last thing I had to do was change the backup script to put a connect string on each channel...the same technique usually used to connect to remote nodes on RAC to load balance RAC backups:

run
{
  allocate channel t1 DEVICE TYPE SBT connect rman_user/changeme@rman parms 'BLKSIZE=4194304, ENV=(NB_ORA_POLICY=Oracle_Master_1)';
  allocate channel t2 DEVICE TYPE SBT connect rman_user/changeme@rman parms 'BLKSIZE=4194304, ENV=(NB_ORA_POLICY=Oracle_Master_2)';
  allocate channel t3 DEVICE TYPE SBT connect rman_user/changeme@rman parms 'BLKSIZE=4194304, ENV=(NB_ORA_POLICY=Oracle_Master_3)';
  backup filesperset 1 database format '%d_%U';

}

When I kicked off the backup, I could see the new PROCESSES were created and connected to my threaded_execution db:

oracle   48460     1  0 14:08 ?        00:00:00 ora_pmon_ANDY
oracle   48462     1  0 14:08 ?        00:00:01 ora_psp0_ANDY
oracle   48464     1  4 14:08 ?        00:03:03 ora_vktm_ANDY
oracle   48468     1  0 14:08 ?        00:00:12 ora_u004_ANDY
oracle   48482     1  0 14:08 ?        00:00:08 ora_dbw0_ANDY
oracle   50665     1  1 14:10 ?        00:00:57 ora_u005_ANDY
oracle   56718     1  3 15:18 ?        00:00:00 oracleANDY (LOCAL=NO)
oracle   56720     1  3 15:18 ?        00:00:00 oracleANDY (LOCAL=NO)
oracle   56722     1  1 15:18 ?        00:00:00 oracleANDY (LOCAL=NO)

oracle   56763 36022  0 15:18 pts/2    00:00:00 grep ANDY

...and now the backup is working as expected.  The only downside I see to this is that you can't use sysbackup privileges to run your backup...to connect this way you seem to need sysdba privileges.  I wasted a lot of time searching the internet for a solution.  I suspect this isn't a NetBackup issue, but an issue with the way allocate channel commands pass environment variables in the new architecture.  After all, they're process environment variables, and all the allocate channel commands were sharing the same process (just different threads).  Whatever you're using to back up your database...if you're using the great new feature threaded_execution, I hope you find this post useful. :)

Tuesday, March 10, 2015

Constructing an effective IT team

Margy Ross (President of the Kimball Group, which was founded by Ralph Kimball, the father of DW) wrote a great article called "Risky Project Resources are Risky Business."  She focused specifically on DW/BI projects, but I think her article applies to all effective teams, and I want to pass it on. 

Risky Project Resources are Risky Business

Over the years, we’ve worked with countless exemplary DW/BI project team members: smart, skilled, dedicated, and motivated, coupled with a healthy dose of mutual trust, respect, and camaraderie with their teammates. Teams with members who possess these characteristics tend to fire on all cylinders, with the resulting whole often greater than the sum of the parts. But we’ve also run into risky project resources; in addition to being individual non-contributors, they can undermine the effectiveness of the entire DW/BI team. Model team members often become short-timers if the team is stacked with unproductive non-performers. We hope your team doesn’t include resources that resemble the following profiles:
  • Obstructionist debaters are perpetual naysayers who find fault with everything and get more satisfaction from the process of debating than the process of delivering.
  • Heat seekers who are always anxious to try the latest, greatest technical gadgets and gizmos regardless of whether they align with the DW/BI project’s objectives.
  • Cookie cutters continue to do exactly what’s worked for them in the past, regardless of their latest assignment’s nuances.
  • Weed dwellers lose sight of the forest from the trees, focusing exclusively on the nitty-gritty details without regard to the bigger picture.
  • Perpetual students and researchers want to read, read, and then read some more, but are disinclined to ever take action because there’s always more to learn.
  • Independent spirits march to their own drummer without regard to rules, standards or accepted best practices.
  • Honesty dodgers and problem hiders are always nodding “yes” and saying “no problem,” even when serious issues are lurking just around the corner.
  • Dysfunctional incompetents and mental retirees are checked out and unable to perform.
  • Self-declared “know it all” experts don’t need to listen because they already have all the answers – just ask them!
  • Threatened worriers are so paralyzed with fear about what might happen that they respond by doing nothing at all.
Of course, even with superstar teammates, the right team leadership is also necessary. Hopefully your DW/BI project/program manager fits the following bill:
  • Establishes a partnership with the business, including joint ownership for the DW/BI project/program, in part because they’re respected by the business as being user-oriented rather than technology-focused.
  • Demonstrates excellent interpersonal and organizational skills since the DW/BI project/program is a political and cultural animal.
  • Recruits and retains resources with the talent to deliver, gets them operating cohesively from a common playbook, and understands that adding more mediocre players won’t increase the team’s chances of winning. Conversely, they also spot individuals who are slowing down the effort and proactively counsel them (or at least minimize the risk of project derailment.)
  • Listens keenly, plus communicates effectively and honestly, setting appropriate expectations and having the courage to say “no” when necessary.
  • Optimally possesses some DW/BI domain expertise, in addition to strong project management skills. At a minimum, they’re staying one chapter ahead of the project team in The Data Warehouse Lifecycle Toolkit.
  • Understands that DW/BI success is directly tied to business acceptance. Period.

Monday, July 1, 2013

Oracle RAC on vBlock

My recent project migrating many large, very active databases from single-instance AIX to RAC running Redhat 6.2 had a lot of challenges that changed the design as time went on.  Originally the plan was to deploy (according to VMware's best practices) using vmdk's on datastores, but the overall storage requirements exceeded 60TB, so that was no longer an option and we were forced (to my delight) to use raw devices instead.  All of these databases were logically migrated to multiple VCE vBlocks (http://www.vce.com/products/vblock/overview).

Per SAP's ASM best practices (Variant 1), we placed the storage in 3 diskgroups: DATA, RECO and ARCH.


Oracle ASM Disk Group Name    Stores
+DATA    - All data files
         - All temp files
         - Control file (first copy)
         - Online redo logs (first copy)

+ARCH    - Control file (second copy)
         - Archived redo logs

+RECO    - Control file (third copy)
         - Online redo logs (second copy)

Per Oracle's best practices, all the storage in a diskgroup should be the same size and performance...and the SAP layout implies different IO requirements for these pools, so we went with a combination of SSDs and fast 15k SAS spindles in the DATA diskgroup (with EMC FAST enabled), many smaller 15k SAS spindles in RECO and slower 7200rpm 2TB NL-SAS spindles in ARCH...after all, it's ok if the background processes take longer to archive your logs.  Redo will remain active a little longer, but as long as it's cleared well before we wrap around all the redo groups, that's sufficient, it doesn't affect performance, and it's much less expensive per GB.  We also created VMware datastores for the OS out of the arch pool, since it, too, has low IOPS requirements.
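For reference, the diskgroup creation itself is trivial once the LUNs are presented; a minimal sketch (the multipath device names are made up, and we let the array handle protection, hence EXTERNAL redundancy):

sqlplus / as sysasm <<EOF
create diskgroup DATA external redundancy disk '/dev/mapper/data_*';
create diskgroup RECO external redundancy disk '/dev/mapper/reco_*';
create diskgroup ARCH external redundancy disk '/dev/mapper/arch_*';
EOF

...with control_files in the spfile pointing at all three groups to get the three multiplexed copies.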

There are some issues with this design, but overall it's performing extremely well.  The SAP database is serving about 4 million db calls per minute, generating 1TB of archivelogs/day.  For a mixed-load or DSS database, that archivelog generation wouldn't be a big deal...but for a pure OLTP db that's pretty respectable.  The DB cache, believe it or not, is still undersized at 512GB...more than the old system had, which has really helped take the load off the storage and reduced our IOPS requirements.  The "DB Time" tracked by SAP is showing over a 2X performance boost.

For the larger non-SAP databases, the performance increase has been much more dramatic.  SAP ties your hands a bit: to keep things consistent between all their customers, their implementation is very specific...you have to be SAP Migration Certified to move a database to a new platform.  Michael Wang (from Oracle's SAP Migration group), who also teaches some Exadata administration classes, is an excellent resource for SAP migrations, and he's great to work with.  Many features that have been common in Oracle for years aren't supported.  For the non-SAP databases, we're free to take advantage of all the performance features Oracle has...and there are many.  We compressed tables with advanced compression, compressed indexes, tweaked stats and caches, moved to merged incremental backups on a different set of spindles than our data, created profiles suggested during RAT testing...basically everything we could think of.  For some databases, we implemented result cache...for others we found (in RAT testing) that it wasn't beneficial overall...it depends on your workload.  Some of our biggest performance gains (in some cases, 1000X+) didn't come from the new hardware, new software or the new design...but from the migration itself.  For years, database upgrades were done in place, and since performance was tracked relative to "what it usually is" rather than what it should be, lots of problems, such as chained rows, were hidden.  After we did a logical migration, those problems were fixed and performance reached its potential.  I got lots of emails that went something like, "Wow, this is fast!!"

It's extremely good, but not perfect.  There's still an issue left due to going to multiple VNXs instead of a single VMAX.  I'll talk about that one later.

Friday, June 28, 2013

Adding a disk to ASM and using UDEV?

The project I've been writing about, migrating many single-instance IBM P5 595 AIX databases to 11.2.0.3 RAC on EMC vBlocks, is coming to a close.  I thought there might be value in sharing some of the lessons learned from the experience.  There have been quite a few....

As I was sitting with the DBA team, discussing how well everything had gone and how stable the new environment was, alerts started going off that services, a VIP and a scan listener on one of the production RAC nodes had failed over.  Hmm...that's strange.  About 45 seconds later, more alerts came in that the same thing had happened on the next node...and that happened over and over.  We pored through the clusterware/listener/alert logs and found nothing helpful...only that there was a network issue and clusterware took measures after the fact...nothing to point at the root cause.

Eventually we looked in the OS message log, and found this incident:

May 30 18:05:14 SCOOBY kernel: udev: starting version 147
May 30 18:05:16 SCOOBY ntpd[4312]: Deleting interface #11 eth0:1, 115.18.28.17#123, interface stats: received=0, sent=0, dropped=0, active_time=896510 secs
May 30 18:05:16 SCOOBY ntpd[4312]: Deleting interface #12 eth0:2, 115.18.28.30#123, interface stats: received=0, sent=0, dropped=0, active_time=896510 secs
May 30 18:05:18 SCOOBY ntpd[4312]: Listening on interface #13 eth0:1, 115.18.28.30#123 Enabled
May 30 18:08:21 SCOOBY kernel: ata1: soft resetting link
May 30 18:08:22 SCOOBY kernel: ata1.00: configured for UDMA/33
May 30 18:08:22 SCOOBY kernel: ata1: EH complete
May 30 18:09:55 SCOOBY kernel: sdab: sdab1
May 30 18:10:13 SCOOBY kernel: sdac: sdac1
May 30 18:10:27 SCOOBY kernel: udev: starting version 147

Udev started, ntpd reported the network issue, then udev finished.  Hmm...why did udev start?  It turns out the unix team had added a disk (which has always been considered safe during business hours), and as part of Oracle's procedure to create the udev rule, they needed to run start_udev.  The first reaction was to declare adding storage an "after-hours practice only" from now on...and that would usually be ok...but there are times when emergencies come up and adding storage can't wait until after hours and must be done online...so we needed a better answer.
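For context, the rules in question are the standard MOS-style ASM device rules; something like this (the WWID and names here are made up for illustration...the real values come from running scsi_id against the new LUN):

In /etc/udev/rules.d/99-oracle-asmdevices.rules:

KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="360a98000486e58526c34515944703277", NAME="asm-data01", OWNER="oracle", GROUP="dba", MODE="0660"

Adding the rule file itself is harmless...it's activating it with start_udev that caused the trouble.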

The analysis of the issue showed that when the unix team followed their procedure and ran start_udev, udev deleted the public network interface and re-created it within a few seconds, which caused the listener to crash...and of course, clusterware wasn't ok with this.  All the scan listeners and services fled from that node to other nodes.  Without noticing an issue, the unix team proceeded to add the storage to the other nodes, causing failovers over and over.

We opened tickets with Oracle (since we followed their documented process per multiple MOS notes) and Redhat (since they support udev).  The Oracle ticket didn't really go anywhere...the Redhat ticket said this is normal, expected behavior, which I thought was strange...I've done this probably hundreds of times and never noticed a problem, and I found nothing on MOS that mentions one.  RH eventually suggested we add HOTPLUG="NO" to the network configuration files.  After that, when we run start_udev, we don't have the problem; the message log doesn't show the network interface getting dropped and re-created...and everything is good.  We're able to add storage without an outage again.
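In other words, something like this in each interface's config file (eth0 shown as an example; keep whatever else is already in the file):

In /etc/sysconfig/network-scripts/ifcfg-eth0:

DEVICE="eth0"
ONBOOT="yes"
HOTPLUG="NO"

With that in place, udev leaves the interface alone when the rules are reloaded.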


I updated the MOS SR with Redhat's resolution.  Hopefully this will be mentioned in a future note, or added to RACCHECK, for those of us running Oracle on Redhat 6+, where asmlib is unavailable.

-- UPDATE --

From Oracle, per notes 414897.1, 1528148.1, 371814.1, etc., we're told to use start_udev to activate a new rule and add storage.  From Redhat (https://access.redhat.com/site/solutions/154183), we're told never to manually run start_udev.

Redhat has a better suggestion: you can trigger the udev event without losing your network configuration, and only affect the specific device you're working with, via:

echo change > /sys/block/sdg/sdg1/uevent

I think this is a better option...so...do this instead of start_udev.  I would expect this to become a bigger issue as more people migrate to RH 6+, where asmlib isn't an option.
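If you'd rather not echo into sysfs directly, I believe udevadm can fire the same change event (verify in a sandbox first...the device name here is just an example):

udevadm trigger --action=change --sysname-match="sdg*"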




Thursday, December 6, 2012

Swingbench is great! (but not descriptive when there's an error.)

Just a quick note that may help some of you using Swingbench in distributed mode.  For those of you who haven't used it yet, Swingbench is a great way to compare performance of different Oracle databases running on different platforms or configurations.  The best part is...it's free:

http://www.dominicgiles.com/swingbench.html

There are two ways to do it...one is a simple test from your laptop...the other is for a distributed RAC database, in order to push it and see where its bottlenecks are (and to make sure your laptop isn't introducing a bottleneck).  You can get the details and a walkthrough from the author's site (link above), but essentially you have multiple load generators connecting directly to specific nodes of the database, then their results are aggregated by the "coordinator process"...and its results are displayed by the cluster overview process.  When I was doing this last night I got this error and was unable to find help on "the internets":

11:33:11 AM FINEST com.dom.benchmarking.swingbench.clusteroverview.datasource.ScalabilityDataSource () Connected
java.lang.NullPointerException
        at com.dom.benchmarking.swingbench.clusteroverview.datasource.TransactionDataSource.updateResultsArray(TransactionDataSource.java:148)
        at com.dom.benchmarking.swingbench.clusteroverview.datasource.TransactionDataSource.run(TransactionDataSource.java:177)
        at java.lang.Thread.run(Unknown Source)

I was doing a distributed Swingbench test, and the workload generators I was using (charbench) were all using the same swingconfig.xml over a shared NFS mount, which had a typo in the connect string...so I ended up with no connections.  I can only guess that's what the java error was trying to say with the "null pointer exception on update of the results array of the transaction data source."  For my situation (and maybe yours) I consider this an "unable to connect to database" error...if you hit this issue, check the connect string in swingconfig.xml.
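A ten-second sanity check before kicking off the generators would have saved me a night; something like this (the host/service are placeholders, and soe is just the default swingbench schema...use whatever yours is):

grep -i connectstring swingconfig.xml
sqlplus -L soe/soe@//your-scan-name:1521/your_service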

I hope this helps!

Thursday, October 25, 2012

Update to VM script for PowerCLI is on the way....

I'm going to update the script in the previous post to not use vmdk's for the data luns of the database. 

Although this performs well and it's the SAP/VMware best practice (and it's great for smaller db's), the idea that we may have to replicate our bugs on physical hardware for Oracle support means we'd have to live with a bug long enough to set up a new physical server, install the db software and restore the database to it.  For these multi-TB databases, that would take many hours, at least.  If we use RAW devices in the VM's (with the vBlock's Cisco UCS), all we have to do is apply a profile to a spare blade, turn it on and add that node to the RAC cluster.  Within a few minutes we'd be able to replicate the bug for MOS...then we can shut down the blade and remove the profile.

I'll post it when it's finished.

Friday, August 31, 2012

The Oracle RAC VM Build Script for PowerCLI on vSphere 5

One of the benefits of computers is that they're supposed to make repetitive tasks easier, or automated.  Still, for many tasks in our industry, we're expected to click over and over in a GUI, doing the same things, without error.  Besides causing "Post Office Syndrome" (which causes one to "go postal"), this is no way for intelligent human beings to spend their lives.  Any chance I get to automate something that needs to be done over and over, I take it.  With that in mind, this script improved my life, and I hope it'll improve your life too.

I mentioned this in a previous post, so here it is: the PowerCLI vSphere 5 multiple VM build script.  It easily creates multiple VM's with shared storage, utilizing a combination of best practices from Oracle, SAP and VMware.  I am by no means a PowerCLI guru...if you have improvements for this, let me know.  Here are a few I would like to see in the long term:

1. I have a friend who's planning to add XLS functionality to this via PowerCLI...so that the dba team can give him an Excel spreadsheet with a list of database parameters that the script can read in a loop, replacing the parameters...creating many vm's, one after the other, automated.

2. The number of nodes should be a parameter that feeds the logic in a loop...so whether you have a 2-node or an 8-node RAC db, the same script can be used.

3. The final section that eager-zeroes the storage works...but for large databases it takes an extremely long time.  An alternative method would be to create the vmdk's thin, and then move them to the same datastore as eager-zeroed, similar to what's discussed here.  My theory is that this might use VAAI, which could hugely improve the eager-zeroing process by offloading it to the SAN.

Also, be aware there is a vSphere client bug that incorrectly reports the backing of the vmdk's as thick lazy-zeroed when they're actually thick eager-zeroed.  If you run the "eager zero" part of the script and it seems to complete in a few seconds, it means you're trying to eager-zero something that's already eager-zeroed (regardless of what the client reports), which almost amounts to a no-op. 

Many sections of this have been patched together from talented people in the PowerCLI community, but I can't tell who the original authors were...I think because different snippets have been added onto by different people.  Still, I'd really like to give them credit and thank them for making this possible.  Technical communities that work together restore my faith in the good of mankind. :)


When I run this, I connect via terminal services to vCenter on vSphere 5, then paste it into PowerCLI on that machine...but there are lots of ways to skin that cat.  Hmm...I know my blog gets translated into different languages...I wonder how colloquialisms like that are interpreted in Hindi, Chinese, Peta etc? :)


$VS5_Host1 = "node1.company.com"
$VS5_Host2 = "node2.company.com"
$VS5_Host3 = "node3.company.com"
$vmName1 = "racnode1"
$vmName2 = "racnode2"
$vmName3 = "racnode3"
$rac_vm_cpu = 6
$rac_vm_ram_mb = (110GB/1MB)
$rac_vm_ram_mb_rez = (90.6GB/1MB)
$public_network_name = "10.2.14-17"
$private_network_name = "192.168.20.0"
$backup_network_name = "Backup"
$osstore = "os_datastore"
$osstore_size_MB = (100GB/1MB)
$orastore = "ora_datastore"
$orastore_size_KB = (100GB/1KB)
$datastore1 = "data1"
$datastore2 = "data2"
$datastore3 = "data3"
$datastore4 = "data4"
$datastore5 = "data5"
$datastore6 = "data6"
$datastore7 = "data7"
$datastore8 = "data8"
$datastore_size_KB = (550GB/1KB)
$recostore1 = "loga"
$recostore2 = "logb"
$recostore_size_KB = (8GB/1KB)
$archstore1 = "arch01"
$archstore2 = "arch02"
$archstore3 = "arch03"
$archstore_size_KB = (200GB/1KB)

$VM1 = new-vm `
-Host "$VS5_Host1" `
-Name $vmName1 `
-Datastore (get-datastore "$osstore") `
-Location "Oracle" `
-GuestID rhel6_64Guest `
-MemoryMB 4096 `
-DiskMB $osstore_size_MB `
-NetworkName "$public_network_name" `
-DiskStorageFormat "Thin"

$vm2 = new-vm `
-Host "$VS5_Host2" `
-Name $vmName2 `
-Datastore (get-datastore "$osstore") `
-Location "Oracle" `
-GuestID rhel6_64Guest `
-MemoryMB 4096 `
-DiskMB $osstore_size_MB `
-NetworkName "$public_network_name" `
-DiskStorageFormat "Thin"

$VM3 = new-vm `
-Host "$VS5_Host3" `
-Name $vmName3 `
-Datastore (get-datastore "$osstore") `
-Location "Oracle" `
-GuestID rhel6_64Guest `
-MemoryMB 4096 `
-DiskMB $osstore_size_MB `
-NetworkName "$public_network_name" `
-DiskStorageFormat "Thin"

Function Change-Memory {
Param (
$VM,
$MemoryMB
)
Process {
$VMs = Get-VM $VM
Foreach ($Machine in $VMs) {
$VMId = $Machine.Id

$VMSpec = New-Object VMware.Vim.VirtualMachineConfigSpec
$VMSpec.memoryMB = $MemoryMB
$RawVM = Get-View -Id $VMId
$RawVM.ReconfigVM_Task($VMSpec)
}
}
}

Change-Memory -MemoryMB $rac_vm_ram_mb -VM $VM1
Change-Memory -MemoryMB $rac_vm_ram_mb -VM $VM2
Change-Memory -MemoryMB $rac_vm_ram_mb -VM $VM3

Set-VM -vm(get-vm $VM1) -NumCpu $rac_vm_cpu -RunAsync -Version v8 -Confirm:$false
Set-VM -vm(get-vm $vm2) -NumCpu $rac_vm_cpu -RunAsync -Version v8 -Confirm:$false
Set-VM -vm(get-vm $VM3) -NumCpu $rac_vm_cpu -RunAsync -Version v8 -Confirm:$false

Get-VM $VM1 | Get-VMResourceConfiguration | Set-VMResourceConfiguration -MemReservationMB $rac_vm_ram_mb_rez
Get-VM $vm2 | Get-VMResourceConfiguration | Set-VMResourceConfiguration -MemReservationMB $rac_vm_ram_mb_rez
Get-VM $VM3 | Get-VMResourceConfiguration | Set-VMResourceConfiguration -MemReservationMB $rac_vm_ram_mb_rez

New-NetworkAdapter -VM $vm1 -NetworkName "$private_network_name" -StartConnected -Type vmxnet3 -Confirm:$false
New-NetworkAdapter -VM $vm2 -NetworkName "$private_network_name" -StartConnected -Type vmxnet3 -Confirm:$false
New-NetworkAdapter -VM $vm3 -NetworkName "$private_network_name" -StartConnected -Type vmxnet3 -Confirm:$false

New-NetworkAdapter -VM $vm1 -NetworkName "$backup_network_name" -StartConnected -Type vmxnet3 -Confirm:$false
New-NetworkAdapter -VM $vm2 -NetworkName "$backup_network_name" -StartConnected -Type vmxnet3 -Confirm:$false
New-NetworkAdapter -VM $vm3 -NetworkName "$backup_network_name" -StartConnected -Type vmxnet3 -Confirm:$false

Function Enable-MemHotAdd($vm){
$vmview = Get-vm $vm | Get-View
$vmConfigSpec = New-Object VMware.Vim.VirtualMachineConfigSpec

$extra = New-Object VMware.Vim.optionvalue
$extra.Key="mem.hotadd"
$extra.Value="true"
$vmConfigSpec.extraconfig += $extra

$vmview.ReconfigVM($vmConfigSpec)
}

enable-memhotadd $vm1
enable-memhotadd $vm2
enable-memhotadd $vm3

Function Enable-vCpuHotAdd($vm){
$vmview = Get-vm $vm | Get-View
$vmConfigSpec = New-Object VMware.Vim.VirtualMachineConfigSpec

$extra = New-Object VMware.Vim.optionvalue
$extra.Key="vcpu.hotadd"
$extra.Value="true"
$vmConfigSpec.extraconfig += $extra

$vmview.ReconfigVM($vmConfigSpec)
}

enable-vCpuHotAdd $vm1
enable-vCpuHotAdd $vm2
enable-vCpuHotAdd $vm3

New-HardDisk -vm($VM1) -CapacityKB $orastore_size_KB -StorageFormat Thin -datastore "$orastore"
New-HardDisk -vm($vm2) -CapacityKB $orastore_size_KB -StorageFormat Thin -datastore "$orastore"
New-HardDisk -vm($VM3) -CapacityKB $orastore_size_KB -StorageFormat Thin -datastore "$orastore"

$New_Disk1 = New-HardDisk -vm($VM1) -CapacityKB $datastore_size_KB -StorageFormat Thick -datastore "$datastore1"
$New_Disk2 = new-harddisk -vm($vm2) -diskpath ($New_Disk1 | %{$_.Filename})
$New_Disk3 = new-harddisk -vm($vm3) -diskpath ($New_Disk1 | %{$_.Filename})
$New_SCSI_1_1 = $New_Disk1 | New-ScsiController -Type ParaVirtual -Confirm:$false
$New_SCSI_2_1 = $New_Disk2 | New-ScsiController -Type ParaVirtual -Confirm:$false
$New_SCSI_3_1 = $New_Disk3 | New-ScsiController -Type ParaVirtual -Confirm:$false

$New_Disk1 = New-HardDisk -vm($VM1) -CapacityKB $datastore_size_KB -StorageFormat Thick -datastore "$datastore2"
$New_Disk2 = new-harddisk -vm($vm2) -diskpath ($New_Disk1 | %{$_.Filename})
$New_Disk3 = new-harddisk -vm($vm3) -diskpath ($New_Disk1 | %{$_.Filename})

set-harddisk -Confirm:$false -harddisk $New_Disk1 -controller $New_SCSI_1_1
set-harddisk -Confirm:$false -harddisk $New_Disk2 -controller $New_SCSI_2_1
set-harddisk -Confirm:$false -harddisk $New_Disk3 -controller $New_SCSI_3_1

$New_Disk1 = New-HardDisk -vm($VM1) -CapacityKB $datastore_size_KB -StorageFormat Thick -datastore "$datastore3"
$New_Disk2 = new-harddisk -vm($vm2) -diskpath ($New_Disk1 | %{$_.Filename})
$New_Disk3 = new-harddisk -vm($vm3) -diskpath ($New_Disk1 | %{$_.Filename})

set-harddisk -Confirm:$false -harddisk $New_Disk1 -controller $New_SCSI_1_1
set-harddisk -Confirm:$false -harddisk $New_Disk2 -controller $New_SCSI_2_1
set-harddisk -Confirm:$false -harddisk $New_Disk3 -controller $New_SCSI_3_1

$New_Disk1 = New-HardDisk -vm($VM1) -CapacityKB $datastore_size_KB -StorageFormat Thick -datastore "$datastore4"
$New_Disk2 = new-harddisk -vm($vm2) -diskpath ($New_Disk1 | %{$_.Filename})
$New_Disk3 = new-harddisk -vm($vm3) -diskpath ($New_Disk1 | %{$_.Filename})

set-harddisk -Confirm:$false -harddisk $New_Disk1 -controller $New_SCSI_1_1
set-harddisk -Confirm:$false -harddisk $New_Disk2 -controller $New_SCSI_2_1
set-harddisk -Confirm:$false -harddisk $New_Disk3 -controller $New_SCSI_3_1

$New_Disk1 = New-HardDisk -vm($VM1) -CapacityKB $datastore_size_KB -StorageFormat Thick -datastore "$datastore5"
$New_Disk2 = new-harddisk -vm($vm2) -diskpath ($New_Disk1 | %{$_.Filename})
$New_Disk3 = new-harddisk -vm($vm3) -diskpath ($New_Disk1 | %{$_.Filename})

set-harddisk -Confirm:$false -harddisk $New_Disk1 -controller $New_SCSI_1_1
set-harddisk -Confirm:$false -harddisk $New_Disk2 -controller $New_SCSI_2_1
set-harddisk -Confirm:$false -harddisk $New_Disk3 -controller $New_SCSI_3_1

$New_Disk1 = New-HardDisk -vm($VM1) -CapacityKB $datastore_size_KB -StorageFormat Thick -datastore "$datastore6"
$New_Disk2 = new-harddisk -vm($vm2) -diskpath ($New_Disk1 | %{$_.Filename})
$New_Disk3 = new-harddisk -vm($vm3) -diskpath ($New_Disk1 | %{$_.Filename})

set-harddisk -Confirm:$false -harddisk $New_Disk1 -controller $New_SCSI_1_1
set-harddisk -Confirm:$false -harddisk $New_Disk2 -controller $New_SCSI_2_1
set-harddisk -Confirm:$false -harddisk $New_Disk3 -controller $New_SCSI_3_1

$New_Disk1 = New-HardDisk -vm($VM1) -CapacityKB $datastore_size_KB -StorageFormat Thick -datastore "$datastore7"
$New_Disk2 = new-harddisk -vm($vm2) -diskpath ($New_Disk1 | %{$_.Filename})
$New_Disk3 = new-harddisk -vm($vm3) -diskpath ($New_Disk1 | %{$_.Filename})

set-harddisk -Confirm:$false -harddisk $New_Disk1 -controller $New_SCSI_1_1
set-harddisk -Confirm:$false -harddisk $New_Disk2 -controller $New_SCSI_2_1
set-harddisk -Confirm:$false -harddisk $New_Disk3 -controller $New_SCSI_3_1

$New_Disk1 = New-HardDisk -vm($VM1) -CapacityKB $datastore_size_KB -StorageFormat Thick -datastore "$datastore8"
$New_Disk2 = new-harddisk -vm($vm2) -diskpath ($New_Disk1 | %{$_.Filename})
$New_Disk3 = new-harddisk -vm($vm3) -diskpath ($New_Disk1 | %{$_.Filename})

set-harddisk -Confirm:$false -harddisk $New_Disk1 -controller $New_SCSI_1_1
set-harddisk -Confirm:$false -harddisk $New_Disk2 -controller $New_SCSI_2_1
set-harddisk -Confirm:$false -harddisk $New_Disk3 -controller $New_SCSI_3_1

###################################

$New_Disk1 = New-HardDisk -vm($VM1) -CapacityKB $recostore_size_KB -StorageFormat Thick -datastore "$recostore1"
$New_Disk2 = new-harddisk -vm($vm2) -diskpath ($New_Disk1 | %{$_.Filename})
$New_Disk3 = new-harddisk -vm($vm3) -diskpath ($New_Disk1 | %{$_.Filename})
$New_SCSI_1_2 = $New_Disk1 | New-ScsiController -Type ParaVirtual -Confirm:$false
$New_SCSI_2_2 = $New_Disk2 | New-ScsiController -Type ParaVirtual -Confirm:$false
$New_SCSI_3_2 = $New_Disk3 | New-ScsiController -Type ParaVirtual -Confirm:$false

$New_Disk1 = New-HardDisk -vm($VM1) -CapacityKB $recostore_size_KB -StorageFormat Thick -datastore "$recostore2"
$New_Disk2 = new-harddisk -vm($vm2) -diskpath ($New_Disk1 | %{$_.Filename})
$New_Disk3 = new-harddisk -vm($vm3) -diskpath ($New_Disk1 | %{$_.Filename})

set-harddisk -Confirm:$false -harddisk $New_Disk1 -controller $New_SCSI_1_2
set-harddisk -Confirm:$false -harddisk $New_Disk2 -controller $New_SCSI_2_2
set-harddisk -Confirm:$false -harddisk $New_Disk3 -controller $New_SCSI_3_2

$New_Disk1 = New-HardDisk -vm($VM1) -CapacityKB $recostore_size_KB -StorageFormat Thick -datastore "$recostore1"
$New_Disk2 = new-harddisk -vm($vm2) -diskpath ($New_Disk1 | %{$_.Filename})
$New_Disk3 = new-harddisk -vm($vm3) -diskpath ($New_Disk1 | %{$_.Filename})

set-harddisk -Confirm:$false -harddisk $New_Disk1 -controller $New_SCSI_1_2
set-harddisk -Confirm:$false -harddisk $New_Disk2 -controller $New_SCSI_2_2
set-harddisk -Confirm:$false -harddisk $New_Disk3 -controller $New_SCSI_3_2

$New_Disk1 = New-HardDisk -vm($VM1) -CapacityKB $recostore_size_KB -StorageFormat Thick -datastore "$recostore2"
$New_Disk2 = new-harddisk -vm($vm2) -diskpath ($New_Disk1 | %{$_.Filename})
$New_Disk3 = new-harddisk -vm($vm3) -diskpath ($New_Disk1 | %{$_.Filename})

set-harddisk -Confirm:$false -harddisk $New_Disk1 -controller $New_SCSI_1_2
set-harddisk -Confirm:$false -harddisk $New_Disk2 -controller $New_SCSI_2_2
set-harddisk -Confirm:$false -harddisk $New_Disk3 -controller $New_SCSI_3_2

$New_Disk1 = New-HardDisk -vm($VM1) -CapacityKB $recostore_size_KB -StorageFormat Thick -datastore "$recostore1"
$New_Disk2 = new-harddisk -vm($vm2) -diskpath ($New_Disk1 | %{$_.Filename})
$New_Disk3 = new-harddisk -vm($vm3) -diskpath ($New_Disk1 | %{$_.Filename})

set-harddisk -Confirm:$false -harddisk $New_Disk1 -controller $New_SCSI_1_2
set-harddisk -Confirm:$false -harddisk $New_Disk2 -controller $New_SCSI_2_2
set-harddisk -Confirm:$false -harddisk $New_Disk3 -controller $New_SCSI_3_2

$New_Disk1 = New-HardDisk -vm($VM1) -CapacityKB $recostore_size_KB -StorageFormat Thick -datastore "$recostore2"
$New_Disk2 = new-harddisk -vm($vm2) -diskpath ($New_Disk1 | %{$_.Filename})
$New_Disk3 = new-harddisk -vm($vm3) -diskpath ($New_Disk1 | %{$_.Filename})

set-harddisk -Confirm:$false -harddisk $New_Disk1 -controller $New_SCSI_1_2
set-harddisk -Confirm:$false -harddisk $New_Disk2 -controller $New_SCSI_2_2
set-harddisk -Confirm:$false -harddisk $New_Disk3 -controller $New_SCSI_3_2

#######################


$New_Disk1 = New-HardDisk -vm($VM1) -CapacityKB $archstore_size_KB -StorageFormat Thick -datastore "$archstore1"
$New_Disk2 = new-harddisk -vm($vm2) -diskpath ($New_Disk1 | %{$_.Filename})
$New_Disk3 = new-harddisk -vm($vm3) -diskpath ($New_Disk1 | %{$_.Filename})
$New_SCSI_1_3 = $New_Disk1 | New-ScsiController -Type ParaVirtual -Confirm:$false
$New_SCSI_2_3 = $New_Disk2 | New-ScsiController -Type ParaVirtual -Confirm:$false
$New_SCSI_3_3 = $New_Disk3 | New-ScsiController -Type ParaVirtual -Confirm:$false

$New_Disk1 = New-HardDisk -vm($VM1) -CapacityKB $archstore_size_KB -StorageFormat Thick -datastore "$archstore2"
$New_Disk2 = new-harddisk -vm($vm2) -diskpath ($New_Disk1 | %{$_.Filename})
$New_Disk3 = new-harddisk -vm($vm3) -diskpath ($New_Disk1 | %{$_.Filename})

set-harddisk -Confirm:$false -harddisk $New_Disk1 -controller $New_SCSI_1_3
set-harddisk -Confirm:$false -harddisk $New_Disk2 -controller $New_SCSI_2_3
set-harddisk -Confirm:$false -harddisk $New_Disk3 -controller $New_SCSI_3_3

$New_Disk1 = New-HardDisk -vm($VM1) -CapacityKB $archstore_size_KB -StorageFormat Thick -datastore "$archstore3"
$New_Disk2 = new-harddisk -vm($vm2) -diskpath ($New_Disk1 | %{$_.Filename})
$New_Disk3 = new-harddisk -vm($vm3) -diskpath ($New_Disk1 | %{$_.Filename})

set-harddisk -Confirm:$false -harddisk $New_Disk1 -controller $New_SCSI_1_3
set-harddisk -Confirm:$false -harddisk $New_Disk2 -controller $New_SCSI_2_3
set-harddisk -Confirm:$false -harddisk $New_Disk3 -controller $New_SCSI_3_3

$New_Disk1 = New-HardDisk -vm($VM1) -CapacityKB $archstore_size_KB -StorageFormat Thick -datastore "$archstore1"
$New_Disk2 = new-harddisk -vm($vm2) -diskpath ($New_Disk1 | %{$_.Filename})
$New_Disk3 = new-harddisk -vm($vm3) -diskpath ($New_Disk1 | %{$_.Filename})

set-harddisk -Confirm:$false -harddisk $New_Disk1 -controller $New_SCSI_1_3
set-harddisk -Confirm:$false -harddisk $New_Disk2 -controller $New_SCSI_2_3
set-harddisk -Confirm:$false -harddisk $New_Disk3 -controller $New_SCSI_3_3

$New_Disk1 = New-HardDisk -vm($VM1) -CapacityKB $archstore_size_KB -StorageFormat Thick -datastore "$archstore2"
$New_Disk2 = new-harddisk -vm($vm2) -diskpath ($New_Disk1 | %{$_.Filename})
$New_Disk3 = new-harddisk -vm($vm3) -diskpath ($New_Disk1 | %{$_.Filename})

set-harddisk -Confirm:$false -harddisk $New_Disk1 -controller $New_SCSI_1_3
set-harddisk -Confirm:$false -harddisk $New_Disk2 -controller $New_SCSI_2_3
set-harddisk -Confirm:$false -harddisk $New_Disk3 -controller $New_SCSI_3_3

$New_Disk1 = New-HardDisk -vm($VM1) -CapacityKB $archstore_size_KB -StorageFormat Thick -datastore "$archstore3"
$New_Disk2 = new-harddisk -vm($vm2) -diskpath ($New_Disk1 | %{$_.Filename})
$New_Disk3 = new-harddisk -vm($vm3) -diskpath ($New_Disk1 | %{$_.Filename})

set-harddisk -Confirm:$false -harddisk $New_Disk1 -controller $New_SCSI_1_3
set-harddisk -Confirm:$false -harddisk $New_Disk2 -controller $New_SCSI_2_3
set-harddisk -Confirm:$false -harddisk $New_Disk3 -controller $New_SCSI_3_3

$ExtraOptions = @{
# per VMware, SAP and Oracle VMware Best Practices
"scsi1:0.sharing"="multi-writer";
"scsi1:1.sharing"="multi-writer";
"scsi1:2.sharing"="multi-writer";
"scsi1:3.sharing"="multi-writer";
"scsi1:4.sharing"="multi-writer";
"scsi1:5.sharing"="multi-writer";
"scsi1:6.sharing"="multi-writer";
"scsi1:8.sharing"="multi-writer";
"scsi1:9.sharing"="multi-writer";
"scsi1:10.sharing"="multi-writer";
"scsi1:11.sharing"="multi-writer";
"scsi1:12.sharing"="multi-writer";
"scsi1:13.sharing"="multi-writer";
"scsi1:14.sharing"="multi-writer";
"scsi1:15.sharing"="multi-writer";
"scsi2:0.sharing"="multi-writer";
"scsi2:1.sharing"="multi-writer";
"scsi2:2.sharing"="multi-writer";
"scsi2:3.sharing"="multi-writer";
"scsi2:4.sharing"="multi-writer";
"scsi2:5.sharing"="multi-writer";
"scsi2:6.sharing"="multi-writer";
"scsi2:8.sharing"="multi-writer";
"scsi2:9.sharing"="multi-writer";
"scsi2:10.sharing"="multi-writer";
"scsi2:11.sharing"="multi-writer";
"scsi2:12.sharing"="multi-writer";
"scsi2:13.sharing"="multi-writer";
"scsi2:14.sharing"="multi-writer";
"scsi2:15.sharing"="multi-writer";
"scsi3:0.sharing"="multi-writer";
"scsi3:1.sharing"="multi-writer";
"scsi3:2.sharing"="multi-writer";
"scsi3:3.sharing"="multi-writer";
"scsi3:4.sharing"="multi-writer";
"scsi3:5.sharing"="multi-writer";
"scsi3:6.sharing"="multi-writer";
"scsi3:8.sharing"="multi-writer";
"scsi3:9.sharing"="multi-writer";
"scsi3:10.sharing"="multi-writer";
"scsi3:11.sharing"="multi-writer";
"scsi3:12.sharing"="multi-writer";
"scsi3:13.sharing"="multi-writer";
"scsi3:14.sharing"="multi-writer";
"scsi3:15.sharing"="multi-writer";
"disk.EnableUUID"="true";
"ethernet0.coalescingScheme"="disabled";
"ethernet1.coalescingScheme"="disabled";
"sched.mem.pshare.enable"="false";
"numa.vcpu.preferHT"="true";

# per VMware's Hardening Guide - Enterprise Level
"isolation.tools.diskShrink.disable"="true";
"isolation.tools.diskWiper.disable"="true";
"isolation.tools.copy.disable"="true";
"isolation.tools.paste.disable"="true";
"isolation.tools.setGUIOptions.enable"="false";
"isolation.device.connectable.disable"="true";
"isolation.device.edit.disable"="true";
"vmci0.unrestricted"="false";
"log.keepOld"="10";
"log.rotateSize"="1000000";
"tools.setInfo.sizeLimit"="1048576";
"guest.command.enabled"="false";
"tools.guestlib.enableHostInfo"="false"
}
$vmConfigSpec = New-Object VMware.Vim.VirtualMachineConfigSpec;
Foreach ($Option in $ExtraOptions.GetEnumerator()) {
$OptionValue = New-Object VMware.Vim.optionvalue
$OptionValue.Key = $Option.Key
$OptionValue.Value = $Option.Value
$vmConfigSpec.extraconfig += $OptionValue
}

$vmview=get-vm $vmName1 | get-view
$vmview.ReconfigVM_Task($vmConfigSpec)
$vmview=get-vm $vmName2 | get-view
$vmview.ReconfigVM_Task($vmConfigSpec)
$vmview=get-vm $vmName3 | get-view
$vmview.ReconfigVM_Task($vmConfigSpec)

function Set-EagerZeroThick{
param($vcName, $vmName, $hdName)
# Find ESX host for VM
# $vcHost = Connect-VIServer -Server $vcName -Credential (Get-Credential -Credential "vCenter account")
$vmImpl = Get-VM $vmName
if($vmImpl.PowerState -ne "PoweredOff"){
Write-Host "Guest must be powered off to use this script !" -ForegroundColor red
return $false
}

$vm = $vmImpl | Get-View
$esxName = (Get-View $vm.Runtime.Host).Name
# Find datastore path
$dev = $vm.Config.Hardware.Device | where {$_.DeviceInfo.Label -eq $hdName}
if($dev.Backing.thinProvisioned){
return $false
}
$hdPath = $dev.Backing.FileName

# For Virtual Disk Manager we need to connect to the ESX server
# $esxHost = Connect-VIServer -Server $esxName -User $esxAccount -Password $esxPasswd

# Convert HD
# $esxHost is only set by the commented-out Connect-VIServer above; use the
# current (vCenter) connection instead, and find the datacenter owning this VM
$vDiskMgr = Get-View -Id (Get-View ServiceInstance).Content.VirtualDiskManager
$dc = Get-Datacenter -VM $vmImpl | Get-View
$taskMoRef = $vDiskMgr.EagerZeroVirtualDisk_Task($hdPath, $dc.MoRef)
$task = Get-View $taskMoRef
while("running","queued" -contains $task.Info.State){
$task.UpdateViewData("Info")
}

# Disconnect-VIServer -Server $esxHost -Confirm:$false

# Connect to the vCenter
# Connect-VIServer -Server $vcName -Credential (Get-Credential -Credential "vCenter account")
if($task.Info.State -eq "success"){
return $true
}
else{
return $false
}
}
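
# $vCenter below just feeds the (currently unused) $vcName parameter;
# substitute your own vCenter server name here
$vCenter = "vcenter.company.com"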

Set-EagerZeroThick $vCenter $vmName1 "Hard disk 3"
Set-EagerZeroThick $vCenter $vmName1 "Hard disk 4"
Set-EagerZeroThick $vCenter $vmName1 "Hard disk 5"
Set-EagerZeroThick $vCenter $vmName1 "Hard disk 6"
Set-EagerZeroThick $vCenter $vmName1 "Hard disk 7"
Set-EagerZeroThick $vCenter $vmName1 "Hard disk 8"
Set-EagerZeroThick $vCenter $vmName1 "Hard disk 9"
Set-EagerZeroThick $vCenter $vmName1 "Hard disk 10"
Set-EagerZeroThick $vCenter $vmName1 "Hard disk 11"
Set-EagerZeroThick $vCenter $vmName1 "Hard disk 12"
Set-EagerZeroThick $vCenter $vmName1 "Hard disk 13"
Set-EagerZeroThick $vCenter $vmName1 "Hard disk 14"
Set-EagerZeroThick $vCenter $vmName1 "Hard disk 15"
Set-EagerZeroThick $vCenter $vmName1 "Hard disk 16"
Set-EagerZeroThick $vCenter $vmName1 "Hard disk 17"
Set-EagerZeroThick $vCenter $vmName1 "Hard disk 18"
Set-EagerZeroThick $vCenter $vmName1 "Hard disk 19"
Set-EagerZeroThick $vCenter $vmName1 "Hard disk 20"
Set-EagerZeroThick $vCenter $vmName1 "Hard disk 21"
Set-EagerZeroThick $vCenter $vmName1 "Hard disk 22"