Tuesday, July 21, 2015

The funniest title for a MOS note ever? Using OEM and Cloud Control with 12c in multi-threaded mode

While you read this next paragraph, use the voice of an old west cowboy.

It's high noon at the OCC corral.  The boys are ridin' in, lookin' for a fight, and fate ain't gonna let them down.  Who'll walk away when the dust settles and who'll be eatin' the dust...Oracle Cloud Control, or the 12c Database?

In a recent SR, MOS referenced a note talking about an incompatibility between the 12c Oracle agent and 12c's threaded architecture in Linux/Unix.  The MOS note I'm talking about is:
"Databases show down in Cloud Control when using Network Connection Pool, feature with 12.1.0.1 databases and 11.2 JDBC threaded_execution=true (Doc ID 1960485.1)"

In past posts I've talked about the performance and scalability virtues of 12c in a multi-threaded architecture...depending on your workload, you'll get increased caching, less CPU utilization and less memory utilization.  New connections are threads, not processes, in Unix/Linux.  Other blogs have posts on this topic reporting around a 30% performance improvement.  It's a beautiful thing.  In a recent 12c RAC upgrade I worked on, the feedback from user testing called this feature "The Turbo Button."

The incompatibility exists because, for some reason, Oracle elected to go with the ancient 11.1.0.7 JDBC drivers in the 12c OEM agent, and they have no plans to change that until OEM 13 is released.  The multi-threaded architecture only works with JDBC thin clients 11.2.0.1 and up.  So...when you try to use the latest, greatest OEM agent, besides databases falsely appearing to be down in OEM Cloud Control, many very small trace files are created in the db's diag trace directory every second or so.  Very quickly your log destination will fill up.  The traces contain an error like:
...
Network protocol error on first data after new connect
Probable error (ORA-28546) in network administration.

To work around this problem, we implemented the same solution I showed in my previous post for Netbackup.  We have threaded_execution=true in the init.ora parameters and we have 2 listeners: one with DEDICATED_THROUGH_BROKER_<listener_name>=ON and the other with it off.  This makes the database a hybrid between the threaded and traditional process architectures.  Things like Netbackup and OEM get sent to the listener that creates a traditional process; everything else uses the threaded architecture.  It's the best of both worlds...the performance of the multi-threaded architecture and backwards compatibility with ancient 11.1 JDBC drivers.

Almost everybody uses OEM for monitoring/alerting and soon every supported database will be on 12c.  I hope this helps you as you roll out your 12c implementation so you can complete it before January 2016 (when 12c becomes the only supported option).  I mentioned this workaround in the SR with the hope that MOS would share it with other people hitting this issue.  I never saw a western movie where the characters went to the corral for a gunfight and walked away shaking hands and working together, but in this "show down", with this workaround, I think that's exactly what happened between 12c's multi-threaded architecture and Cloud Control's OEM agent.  Yippee-ki-yay.  Here's the relevant listener.ora from one of the nodes:

LISTENER_SCAN2=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN2))))                # line added by Agent
LISTENER_SCAN3=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN3))))                # line added by Agent
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))            # line added by Agent
LISTENER_SCAN1=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1))))                # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1=ON                # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1=OFF             # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET                # line added by Agent


lsnr-no-thread =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
       (ADDRESS = (PROTOCOL = TCP)(HOST = hostbackupnetwork)(PORT = 1521))
      )
    )
  )

SID_LIST_lsnr-no-thread=
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = remedyp2)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
    )
  )


DEDICATED_THROUGH_BROKER_lsnr-no-thread=OFF
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN3=ON                # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN3=OFF             # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN2=ON                # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN2=OFF             # line added by Agent
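
For completeness, here's roughly what a tnsnames.ora alias pointing at that process-mode listener looks like.  This is just a sketch based on the listener above (the alias name is mine; the host and SID come from that config), so adjust it for your environment:

# tnsnames.ora entry for clients (Netbackup, the OEM agent, etc.) that need a real server process
no_thread =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = hostbackupnetwork)(PORT = 1521))
    (CONNECT_DATA = (SID = remedyp2))
  )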

Monday, July 6, 2015

I remember an English class in high school where we were assigned a term paper.  When somebody asked our teacher (Mrs. Theile) how long it should be, she replied, "A term paper should be like a dress.  Long enough to cover the topic but short enough to keep you interested."  It occurred to me today that term papers are also like virtual CPUs for databases.  Recently a DBA friend told me about a scenario where he's migrating several databases from a physical server to multiple virtual servers.  The motivations for doing this are:

  • Today, multiple business units have to agree on when to take an outage whenever there are OS patches that need to be applied.  Coordination is a nightmare.  The new environment will have each database running in its own OS environment, so this will no longer be an issue.
 
  • The existing hardware is very old and ready to end its life cycle.

He asked for advice on sizing the VMs: how much storage capacity, RAM and CPU should he tell the virtualization team each database's VM needs?

For storage capacity, he looked at dba_segments, v$logfile and v$tempfile.  He took several days of redo generation to size the FRA capacity.  Using the advice views (i.e. v$db_cache_advice, etc.) he was able to gather how much RAM each db needed to optimize logical caching in the VMs for each memory pool.  That was all straightforward.  The more difficult task is to figure out how much CPU each db within the server typically needs.
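
To illustrate the capacity piece, something along these lines gets you the data, temp and redo footprint per database...this is just a rough sketch of that kind of query, not his exact script:

select round(sum(bytes)/1024/1024/1024) gb, 'data/index segments' what from dba_segments
union all
select round(sum(bytes)/1024/1024/1024), 'temp files' from v$tempfile
union all
select round(sum(bytes*members)/1024/1024/1024), 'online redo' from v$log;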

When he looks at system-level monitoring tools, they show the total amount of CPU used on the system, not broken out by database...so that's not very helpful.  In his situation he has 10 databases per physical server.  Like the term paper, there's a delicate balance between having too much CPU and not having enough.  There's really no way to quantify what "too much" is, because it's expected that under extremely rare situations you're going to peg your CPU.  How often that's acceptable depends on your SLAs.  Also, it's important to keep in mind that CPU scheduling in VMware will slow down the db if you overallocate CPU.  (I mentioned this in point #2 in a previous post.)

If you think of a busy core as 1,000 ms of CPU time consumed per 1,000 ms of wall-clock time, then the WRH$ tables have the solution for us.  There are two potential (but probably very uncommon) pitfalls:

1. The CPU time is recorded in increments of 10 ms.  If your core is busy for less than 10 ms, it gets counted as 0.

2. Strange things happen to system stats when 100% of the CPU is used.  At that point, you have to consider the stats unreliable.  For our purposes this should rarely happen, since we're looking at an average over a snap period (by default, 1 hour), so I filtered out that data.  If it's common for you to run at 100% CPU utilization, this query won't help you determine how much CPU is sufficient.
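
To make the math concrete: "CPU used by this session" is reported in centiseconds, so if a one-hour snapshot (3,600 seconds of wall time) shows that statistic increasing by 1,440,000 centiseconds, that's 14,400 seconds of CPU time...an average of 4 busy cores for that snapshot, which is exactly what the cores_used column in the query below works out to.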

So...granted that sometimes we're going to peg the CPU, how many vCPUs would fulfill my SLA?  The query at the bottom of this post answers that (sample output first): for each core count, it shows how many snapshot-seconds were spent at that average usage, how many were spent below it, and what percentage of the total time that represents.



CORES_USED  SECONDS_AT_CORE_USED  SECONDS_AT_OR_BELOW_CORE_USED  TOTAL_SECONDS_IN_SNAPS  SLA_ACHIEVED
----------  --------------------  -----------------------------  ----------------------  ------------
         1              80091494                       43698197               196192768      22.27309
         2              42466428                      123789691               196192768      63.09595
         3              16093815                      166256120               196192768      84.74121
         4               5531073                      182349935               196192768      92.94427
         5               2172862                      187881008               196192768      95.76347
         6                769946                      190053871               196192768      96.87099
         7                501373                      190823817               196192768      97.26343
         8                278488                      191325191               196192768      97.51898
         9                232374                      191603679               196192768      97.66093
        10                336898                      191836054               196192768      97.77937
        11                350287                      192172953               196192768      97.95109
        12                280784                      192523241               196192768      98.12963
        13                270683                      192804025               196192768      98.27275
        14                388876                      193074708               196192768      98.41072
        15                334588                      193463585               196192768      98.60893
        16                273689                      193798173               196192768      98.77947
        17                290109                      194071862               196192768      98.91897
        18                290218                      194361972               196192768      99.06684
        19                335042                      194652190               196192768      99.21476
        20                215965                      194987233               196192768      99.38554
        21                260931                      195203198               196192768      99.49561
        22                246267                      195464130               196192768      99.62861
        23                287955                      195710397               196192768      99.75413
        24                194415                      195998352               196192768      99.90091


with cpu_info as
(
select /*+ PARALLEL(8) */ sum(seconds) weight, cpups cores_used from (
select s1.snap_id snap_1, s2.snap_id snap_2, to_date(to_char(begin_interval_time,'MM/DD/YYYY HH24'),'MM/DD/YYYY HH24') sample_snap,
  s2.cpu-s1.cpu cpu, round(trunc((s2.cpu-s1.cpu)/seconds)/100) cpups, seconds
from (
select hiof1.snap_id, hiof1.value cpu
from sys.WRH$_SYSSTAT HIOF1
where HIOF1.stat_id = (select stat_id from v$statname where name = 'CPU used by this session')
) s1, (
select hiof1.snap_id, hiof1.value cpu
from sys.WRH$_SYSSTAT HIOF1
where HIOF1.stat_id = (select stat_id from v$statname where name = 'CPU used by this session')
) s2, (
select snap_id, instance_number, begin_interval_time, end_interval_time,
 extract(second from (end_interval_time-begin_interval_time))+
 (extract(minute from (end_interval_time-begin_interval_time))*60)+
 (extract(hour from (end_interval_time-begin_interval_time))*60*60) seconds
 from dba_hist_snapshot
)
 ms
where s1.snap_id=ms.snap_id
  and s1.snap_id=(s2.snap_id-1)
  and (s2.cpu-s1.cpu)>1
  and (round(trunc((s2.cpu-s1.cpu)/seconds)/100))<=(select sum(p1.value)*p2.value
                                                      from gv$parameter p1, v$parameter p2
                                                      where p1.name='cpu_count'
                                                        and p2.name='parallel_threads_per_cpu'
                                                      group by p2.value)
) group by cpups
)
select ci1.cores_used,
       trunc(ci1.weight) seconds_at_core_used,
       trunc(sum(ci2.weight)) seconds_at_or_below_core_used,
       trunc(ci3.weight) total_seconds_in_snaps,
       round(100*(sum(ci2.weight)/ci3.weight),5) sla_achieved
from cpu_info ci1, cpu_info ci2, (select sum(weight) weight from cpu_info) ci3
where ci2.cores_used+1<=ci1.cores_used
group by ci1.weight, ci1.cores_used, ci3.weight
order by 1;



 

Wednesday, March 25, 2015

Oracle 12c threaded_execution and netbackup

If you haven't started migrating your databases to 12c, you'd better start!  The last patchset of 11g runs out of free extended support at the end of January '16, and 11.2.0.3 on 8/27/2015.  Besides, 12c is a huge step forward for Oracle, if you take advantage of the new features.  You COULD just install the db and run it like it was version 7.3 and not take advantage of its features...that would be easier, but you'd be doing the company you work for a disservice...besides, new features are what's fun about being a DBA!

If you've spent any time at all looking at the new features in 12c, you've probably come across "threaded_execution."  It essentially makes all connections to the database (and most background "processes") threads (aka lightweight processes) instead of processes in Linux.  The advantage is that switching between threads within a process is much cheaper for the OS than switching between separate processes.  Also, memory that didn't use to be shareable between processes is now shared.  In my performance tests you could see a measurable improvement in performance, but the big advantage I found was in scalability.  There are lots of posts talking about its performance and memory benefits.  This is one of the rare simple changes you can make where the user will notice a quicker system.  IMHO it's something non-standard that should be a standard...like how hugepages should always be used, for instance.
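
Turning it on is simple...here's a minimal sketch (the parameter is static, so it needs a restart, and don't forget the listener change I describe further down):

-- threaded_execution is a static parameter: set it in the spfile, then bounce the instance(s)
alter system set threaded_execution=true scope=spfile sid='*';
shutdown immediate
startup
-- and in listener.ora (detailed below):
-- DEDICATED_THROUGH_BROKER_[listener_name] = ON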

Usually, when you do a ps -ef|grep ANDY (my instance name is ANDY) you'd see MANY Oracle background processes.  When threaded_execution is set to true, most of those processes are now threads...so when you do a ps -ef|grep ANDY you only see these 6:

oracle   48460     1  0 14:08 ?        00:00:00 ora_pmon_ANDY
oracle   48462     1  0 14:08 ?        00:00:01 ora_psp0_ANDY
oracle   48464     1  4 14:08 ?        00:03:03 ora_vktm_ANDY
oracle   48468     1  0 14:08 ?        00:00:12 ora_u004_ANDY
oracle   48482     1  0 14:08 ?        00:00:08 ora_dbw0_ANDY
oracle   50665     1  1 14:10 ?        00:00:57 ora_u005_ANDY

Like the great philosopher Bret Michaels of Poison once said, "Every rose has its thorn."

The main problem with the threaded execution architecture in Linux is that OS authentication no longer works (i.e. no more sqlplus "/ as sysdba").  I've been able to take advantage of other new 12c features to compensate for that in our scripting, but it's kind of a pain.  Even so, it's worth it to get the most out of the system.  Besides, you could argue that no OS authentication could mean better security.
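
For example, where a script used to make a local bequeath connection, it now has to authenticate through the listener with the password file (ANDY here is just a tnsnames alias for my instance):

# no longer works with threaded_execution=true:
sqlplus / as sysdba
# works, because it authenticates via the password file through the listener:
sqlplus sys@ANDY as sysdba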

While doing extensive testing with this feature I hit a problem.  Although Symantec supports 12c in Netbackup 7.6.0.2+, the environment variables passed in with my allocate channel commands weren't getting to Netbackup.

run
{
  allocate channel t1 DEVICE TYPE SBT parms 'BLKSIZE=4194304, ENV=(NB_ORA_POLICY=Oracle_Master_1)';
  allocate channel t2 DEVICE TYPE SBT parms 'BLKSIZE=4194304, ENV=(NB_ORA_POLICY=Oracle_Master_2)';
  allocate channel t3 DEVICE TYPE SBT parms 'BLKSIZE=4194304, ENV=(NB_ORA_POLICY=Oracle_Master_3)';
  backup filesperset 8 database format '%d_%U';
}

In the script above, NB_ORA_POLICY was being reported as Oracle_Master_1 for every channel in the NB logs and on the NB console.  The other 2 channels would eventually error out and the backup would complete VERY slowly on the one remaining channel.  If I set threaded_execution=false, NB worked fine on all 3 channels as usual.  I searched the internet for a solution and came up with nothing...I created an SR with Oracle...the response was "Contact your MML vendor."  I created a ticket with Symantec, who escalated it to their engineering group, and I worked closely with a great on-site Symantec consultant, but we weren't able to get it to work.  The Symantec guys aren't familiar with the new architecture, so as I explained to them how connections come in first to the listener, and then a thread is created for the user...*BAM*, the solution hit me:

THE FIX:
Make RMAN use processes, make everything else use threads.

For threaded execution to make threaded connections, you have to add a line to your listener.ora file:

DEDICATED_THROUGH_BROKER_[listener_name] = ON

I already had a normal listener for TCP connections...I created a new (2nd) static listener using IPC and set "dedicated_through_broker" to OFF (the default) for that listener, then I created the entry in my db's tnsnames.ora.

In listener.ora:

lsnr-rman =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
       (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROCRMAN))
      )
    )
  )

SID_LIST_lsnr-rman=
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = ANDY)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/db_2)
    )

  )

DEDICATED_THROUGH_BROKER_lsnr-rman = OFF

...then I started the new listener:

lsnrctl start lsnr-rman

Then I added the alias in tnsnames.ora:

rman =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROCRMAN))
    (CONNECT_DATA = (SID = ANDY))
  )

The last thing I had to do was change the backup script to use the connect string (the same technique usually used to connect to remote nodes in a RAC to load balance the backups):

run
{
  allocate channel t1 DEVICE TYPE SBT connect rman_user/changeme@rman parms 'BLKSIZE=4194304, ENV=(NB_ORA_POLICY=Oracle_Master_1)';
  allocate channel t2 DEVICE TYPE SBT connect rman_user/changeme@rman parms 'BLKSIZE=4194304, ENV=(NB_ORA_POLICY=Oracle_Master_2)';
  allocate channel t3 DEVICE TYPE SBT connect rman_user/changeme@rman parms 'BLKSIZE=4194304, ENV=(NB_ORA_POLICY=Oracle_Master_3)';
  backup filesperset 1 database format '%d_%U';

}

When I kicked off the backup, I could see the new PROCESSES were created and connected to my threaded_execution db:

oracle   48460     1  0 14:08 ?        00:00:00 ora_pmon_ANDY
oracle   48462     1  0 14:08 ?        00:00:01 ora_psp0_ANDY
oracle   48464     1  4 14:08 ?        00:03:03 ora_vktm_ANDY
oracle   48468     1  0 14:08 ?        00:00:12 ora_u004_ANDY
oracle   48482     1  0 14:08 ?        00:00:08 ora_dbw0_ANDY
oracle   50665     1  1 14:10 ?        00:00:57 ora_u005_ANDY
oracle   56718     1  3 15:18 ?        00:00:00 oracleANDY (LOCAL=NO)
oracle   56720     1  3 15:18 ?        00:00:00 oracleANDY (LOCAL=NO)
oracle   56722     1  1 15:18 ?        00:00:00 oracleANDY (LOCAL=NO)

oracle   56763 36022  0 15:18 pts/2    00:00:00 grep ANDY

...and now the backup is working as expected.  The two downsides I see to this are:

1. You can't use sysbackup privs to run your backup...to connect this way you seem to need sysdba privs.
2. You have to have the init.ora parameter backup_tape_io_slaves=false...which I usually set to true to make backups more efficient.

I suspect this isn't a Netbackup issue, but an issue with the way allocate channel commands pass env variables in the new architecture.  After all, they're process env variables, and all the allocate channel commands were using the same process (just different threads).  Whatever you're using to back up your database...if you're using the great new threaded_execution feature, I hope you find this post useful. :)


I recently had a similar issue with the 12c OEM agent, which uses 11.1.0.7 thin JDBC connections to the database.  I used the same workaround...here are the details.

Tuesday, March 10, 2015

Constructing an effective IT team

Margy Ross (President of the Kimball Group, founded by the father of DW) wrote a great article called "Risky Project Resources are Risky Business."  She was focused specifically on DW/BI projects, but I think her article applies to all effective teams, and I want to pass it on. 

Risky Project Resources are Risky Business

Over the years, we’ve worked with countless exemplary DW/BI project team members: smart, skilled, dedicated, and motivated, coupled with a healthy dose of mutual trust, respect, and camaraderie with their teammates. Teams with members who possess these characteristics tend to fire on all cylinders, with the resulting whole often greater than the sum of the parts. But we’ve also run into risky project resources; in addition to being individual non-contributors, they can undermine the effectiveness of the entire DW/BI team. Model team members often become short-timers if the team is stacked with unproductive non-performers. We hope your team doesn’t include resources that resemble the following profiles:
  • Obstructionist debaters are perpetual naysayers who find fault with everything and get more satisfaction from the process of debating than the process of delivering.
  • Heat seekers who are always anxious to try the latest, greatest technical gadgets and gizmos regardless of whether they align with the DW/BI project’s objectives.
  • Cookie cutters continue to do exactly what’s worked for them in the past, regardless of their latest assignment’s nuances.
  • Weed dwellers lose sight of the forest from the trees, focusing exclusively on the nitty-gritty details without regard to the bigger picture.
  • Perpetual students and researchers want to read, read, and then read some more, but are disinclined to ever take action because there’s always more to learn.
  • Independent spirits march to their own drummer without regard to rules, standards or accepted best practices.
  • Honesty dodgers and problem hiders are always nodding “yes” and saying “no problem,” even when serious issues are lurking just around the corner.
  • Dysfunctional incompetents and mental retirees are checked out and unable to perform.
  • Self-declared “know it all” experts don’t need to listen because they already have all the answers – just ask them!
  • Threatened worriers are so paralyzed with fear about what might happen that they respond by doing nothing at all.
Of course, even with superstar teammates, the right team leadership is also necessary. Hopefully your DW/BI project/program manager fits the following bill:
  • Establishes a partnership with the business, including joint ownership for the DW/BI project/program, in part because they’re respected by the business as being user-oriented rather than technology-focused.
  • Demonstrates excellent interpersonal and organizational skills since the DW/BI project/program is a political and cultural animal.
  • Recruits and retains resources with the talent to deliver, gets them operating cohesively from a common playbook, and understands that adding more mediocre players won’t increase the team’s chances of winning. Conversely, they also spot individuals who are slowing down the effort and proactively counsel them (or at least minimize the risk of project derailment.)
  • Listens keenly, plus communicates effectively and honestly, setting appropriate expectations and having the courage to say “no” when necessary.
  • Optimally possesses some DW/BI domain expertise, in addition to strong project management skills. At a minimum, they’re staying one chapter ahead of the project team in The Data Warehouse Lifecycle Toolkit
  • Understands that DW/BI success is directly tied to business acceptance. Period.

Monday, July 1, 2013

Oracle RAC on vBlock

My recent project migrating many large, very active databases from single-instance AIX to RAC running Redhat 6.2 had a lot of challenges that changed the design as time went on.  Originally the plan was to deploy (according to VMware's best practices) using vmdks on datastores, but the overall storage requirements exceeded 60TB, so that was no longer an option and we were forced (to my delight) to use raw devices instead.  All of these databases were logically migrated to multiple VCE vBlocks (http://www.vce.com/products/vblock/overview).

Per SAP's ASM best practices (Variant 1), we placed the storage in 3 diskgroups: DATA, RECO and ARCH.


Oracle ASM Disk Group Name    Stores
+DATA     - All data files
          - All temp files
          - Control file (first copy)
          - Online redo logs (first copy)

+ARCH     - Control file (second copy)
          - Archived redo logs

+RECO     - Control file (third copy)
          - Online redo logs (second copy)
Per Oracle's best practices, all the storage in a diskgroup should be the same size and performance...and the SAP layout implies different IO requirements for these pools, so we went with a combination of SSDs and fast 15k SAS spindles in the +DATA diskgroup (FAST on), many smaller 15k SAS spindles in +RECO and slower 7200rpm 2TB NL-SAS spindles in +ARCH...after all, it's OK if the background processes take a little longer to archive your logs.  Redo will remain active a little longer, but as long as it's cleared well before we wrap around all the redo groups, it's sufficient, doesn't affect performance and is much less expensive per GB.  We also created VMware datastores for the OS out of the +ARCH pool, since it, too, has low IOPS requirements.

There are some issues with this design, but overall it's performing extremely well.  The SAP database is serving about 4 million db calls per minute, generating 1TB of archivelogs/day.  For a mixed-load or DSS database, that archivelog generation wouldn't be a big deal...but for a pure OLTP db that's pretty respectable.  The DB cache, at 512GB, is still undersized, but it's more than the old system had, which has really helped take the load off the storage and reduced our IOPS requirements.  The "DB Time" tracked by SAP is showing over a 2X performance boost.

For the larger non-SAP databases, the performance increase has been much more dramatic.  SAP ties your hands a bit; to keep things consistent between all their customers, their implementation is very specific...you have to be SAP Migration Certified to move a database to a new platform.  Michael Wang (from Oracle's SAP Migration group), who also teaches some Exadata administration classes, is an excellent resource for SAP migrations, and he's great to work with.  Many features that have been common in Oracle for years aren't supported.  For the non-SAP databases, we're free to take advantage of all the performance features Oracle has...and there are many.  We compressed tables with advanced compression, compressed indexes, tweaked stats and caches, moved to merged incremental backups on a different set of spindles than our data, created profiles suggested during RAT testing...basically everything we could think of.  For some databases, we implemented result cache...for others we found (in RAT testing) that it wasn't beneficial overall...it depends on your workload.  Some of our biggest performance gains (in some cases, 1000X+) didn't come from the new hardware, new software or the new design...they came from the migration itself.  For years, database upgrades were done in place, and since performance was tracked relative to "what it usually is" rather than what it should be, lots of problems, such as chained rows, were hidden.  After we did a logical migration, those problems were fixed and performance reached its potential.  I got lots of emails that went something like, "Wow, this is fast!!"

It's extremely good, but not perfect.  There's still an issue left due to going to multiple VNXs instead of a single VMAX.  I'll talk about that one later.

Friday, June 28, 2013

Adding a disk to ASM and using UDEV?

The project I've been writing about to migrate many single instance IBM P5 595 AIX databases to 11.2.0.3 RAC on EMC vBlocks is coming to a close.  I thought there might be value in sharing some of the lessons learned from the experience.  There have been quite a few....

As I was sitting with the DBA team, discussing how well everything had gone and how stable the new environment was, alerts started going off that services, a VIP and a scan listener on one of the production RAC nodes had failed over.  Hmm...that's strange.  About 45 seconds later...more alerts came in that the same thing had happened on the next node...and that kept happening over and over.  We pored through the clusterware/listener/alert logs and found nothing helpful...only that there was a network issue and clusterware took measures after the fact...nothing to point at the root cause.

Eventually we looked in the OS message log, and found this incident:

May 30 18:05:14 SCOOBY kernel: udev: starting version 147
May 30 18:05:16 SCOOBY ntpd[4312]: Deleting interface #11 eth0:1, 115.18.28.17#123, interface stats: received=0, sent=0, dropped=0, active_time=896510 secs
May 30 18:05:16 SCOOBY ntpd[4312]: Deleting interface #12 eth0:2, 115.18.28.30#123, interface stats: received=0, sent=0, dropped=0, active_time=896510 secs
May 30 18:05:18 SCOOBY ntpd[4312]: Listening on interface #13 eth0:1, 115.18.28.30#123 Enabled
May 30 18:08:21 SCOOBY kernel: ata1: soft resetting link
May 30 18:08:22 SCOOBY kernel: ata1.00: configured for UDMA/33
May 30 18:08:22 SCOOBY kernel: ata1: EH complete
May 30 18:09:55 SCOOBY kernel: sdab: sdab1
May 30 18:10:13 SCOOBY kernel: sdac: sdac1
May 30 18:10:27 SCOOBY kernel: udev: starting version 147

Udev started, ntpd reported the network issue, then udev finished.  Hmm...why did udev start?  It turns out that the Unix team added a disk (which has always been considered safe during business hours), and as part of Oracle's procedure to create the udev rule, they needed to run start_udev.  The first reaction was to declare "adding storage" an "after-hours practice only" from now on...and that would usually be OK...but there are times when emergencies come up and adding storage can't wait until after hours, and must be done online...so we needed a better answer.
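
For context, the udev rule itself is just a one-liner per disk, along the lines of the sketch below in the RHEL 6 style (the WWID, device name and ownership are placeholders...don't copy it literally):

KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$parent", RESULT=="360000970000192604642533030334635", OWNER="oracle", GROUP="dba", MODE="0660", NAME="asm-disk10"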

The analysis of the issue showed that when the Unix team followed their procedure and ran start_udev, udev deleted the public network interface and re-created it within a few seconds, which caused the listener to crash...and of course, clusterware wasn't OK with this.  All the scan listeners and services fled from that node to other nodes.  Without noticing an issue, the Unix team proceeded to add the storage to the other nodes, causing failovers over and over.

We opened tickets with Oracle (since we followed their documented process per multiple MOS notes) and Redhat (since they support udev).  The Oracle ticket didn't really go anywhere...the Redhat ticket said this is normal, expected behavior, which I thought was strange...I've done this probably hundreds of times and never noticed a problem, and I found nothing on MOS that mentions a problem.  RH eventually suggested we add HOTPLUG="NO" to the network configuration files.  After that, when we run start_udev, we don't have the problem, the message log doesn't show the network interface getting dropped and re-created...and everything is good.  We're able to add storage without an outage again.
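
In case it saves you some digging, that change goes in the interface config file...roughly like this (the interface name and the rest of the file will obviously differ on your system):

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
HOTPLUG="NO"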


I updated the MOS SR with Redhat's resolution.  Hopefully this will be mentioned in a future note, or added to RACCHECK, for those of us running Oracle on Redhat 6+, where asmlib is unavailable.

-- UPDATE --

From Oracle, per notes 414897.1, 1528148.1, 371814.1, etc., we're told to use start_udev to activate a new rule and add storage.  From Redhat (https://access.redhat.com/site/solutions/154183) we're told to never manually run start_udev.

Redhat has a better suggestion...you can trigger the udev event without losing your network configuration, affecting only the specific device you're working with:

echo change > /sys/block/sdg/sdg1/uevent

I think this is a better option...so...do this instead of start_udev.  I would expect this to become a bigger issue as more people migrate to RH 6+, where asmlib isn't an option.




Thursday, December 6, 2012

Swingbench is great! (but not descriptive when there's an error.)

Just a quick note that may help some of you using Swingbench in distributed mode.  For those of you that haven't used it yet, Swingbench is a great way to compare performance of different Oracle databases running on different platforms or configurations.  The best part is...it's free:

http://www.dominicgiles.com/swingbench.html

There are two ways to do it...one is a simple test from your laptop...the other is for a distributed RAC database, in order to push it and see where its bottlenecks are (and to make sure your laptop isn't introducing a bottleneck).  You can get the details and a walk-through from the author's site (link above), but essentially you have multiple load generators connecting directly to specific nodes of the database; their results are aggregated by the "coordinator process" and displayed by the cluster overview process.  When I was doing this last night I got this error and was unable to find help on "the internets":

11:33:11 AM FINEST com.dom.benchmarking.swingbench.clusteroverview.datasource.ScalabilityDataSource () Connected
java.lang.NullPointerException
        at com.dom.benchmarking.swingbench.clusteroverview.datasource.TransactionDataSource.updateResultsArray(TransactionDataSource.java:148)
        at com.dom.benchmarking.swingbench.clusteroverview.datasource.TransactionDataSource.run(TransactionDataSource.java:177)
        at java.lang.Thread.run(Unknown Source)

I was doing a distributed Swingbench test, and the workload generators I was using (charbench) were all using the same swingconfig.xml over a shared NFS mount, which had a typo in the connect string...so I ended up having no connections.  I can only guess this might be what the Java error was trying to say with the "null pointer exception on update of the results array of the transaction data source."  For my situation (and maybe yours) I consider this an "unable to connect to database" error...if you hit this issue, check the connect string in swingconfig.xml.

I hope this helps!