
Friday, August 12, 2011

An Overlooked Database Performance Pitfall - Part 2 (smallness logic)


To answer the question, "How can I make my large transaction faster in Oracle?" we first have to know when direct path reads are prevented and when they're triggered...when will they help, and when will they hamper performance?  Here are some situations that prevent them:

    * If the table uses advanced security features, like fine-grained access control

    * Queue tables don't use direct path reads

    * Tables that have a BFILE or opaque type column, or an object type containing an opaque type

    * Tables that have encrypted columns

    * Tables with a LONG or LONG RAW column, where that column isn't the last one in the table

    * I've heard direct path reads are avoided when X% of the table's blocks are already in the db_cache.  This may be due to smallness logic...if the table is 5% the size of your db cache, and 4% of the cache already holds its blocks, the read would only be 1% of the db cache.  I haven't verified that, but that's my theory.  This means it's possible that one day you get direct path reads, and the next day you don't.  (The sketch after this list shows one way to check how much of a table is already cached.)

    * When "smallness logic" (_small_table_threshold) comes into play (Metalink-ID 787373.1):

"If the number of blocks to be read is lower than or equal to the setting of the parameter Oracle will load the object via the buffer cache as this is more efficient than doing a direct read operation. However, the parameter does not need to be set explicitly as there is also a dependency function calculating the cut over threshold if the parameter is unset. This calculation is roughly 2% of the buffer cache size to be used as the cut over threshold. This means any object (table) smaller than 2% of the buffer cache will be read via the buffer cache and not using direct load."

Table smallness logic has a lot of ramifications.  It doesn't just apply to tables; it applies to table partitions too, and the comparison is segment size vs. cache size, not total table size.
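Because each partition is its own segment, a large table chopped into many partitions can fall under the cutoff even though the table as a whole is huge.  A quick sketch, with SCOTT.SALES_FACT as a made-up example:

    -- Each partition is measured on its own against the cutoff, so a big
    -- table chopped into tiny partitions can land below it.
    SELECT partition_name, blocks
      FROM dba_segments
     WHERE owner = 'SCOTT'
       AND segment_name = 'SALES_FACT'
       AND segment_type = 'TABLE PARTITION'
     ORDER BY blocks DESC;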

Here are two situations I've seen where table smallness logic had an impact:


In an effort to simplify the decision path of the query, a DBA I know went on a mission to partition everything he could.  He should have listened to Einstein, who said, "Make everything as simple as possible, but not simpler."  The DBA almost created a new type of index: he put hundreds of partitions on a medium-sized table (to get partition pruning), to the point where some partitions only hold a few rows...and this was done on an Exadata system!  He concluded it was faster after he ran a sample workload much faster than before.  Besides the obvious management pain, this makes the partitioned segments so small that you can't trigger direct path reads, which then prevents the use of storage indexes and all the other Exadata goodies.  If a query hits that table and filters on the partition key, that's likely not a big deal...the segment is small, so a conventional, non-direct path read will be faster anyway.  For large queries, though, over-partitioning removes the performance advantage of direct path reads.  I think Einstein would say, "He made it too simple."  He prevented the use of direct path reads, which meant his reads were no longer blocking his writes...so his tests ran faster because the waits were gone...but he had no idea why it was faster.

I had the opportunity to work with Scott Gossett briefly for a consulting stint in Saint Louis.  He's an author, a creator of ADDM...he also teaches the Exadata classes to Oracle consultants, and he's one of the most talented "explainers" I've ever met.  There's often a disconnect between "techies" and "management" that's difficult to overcome.  Scott has the ability to almost put management in a trance with his style of explaining things.  (Visualize a cobra in front of a guru playing a flute.)  Anyway, he sized the nodes of a full OLTP Exadata machine with 16GB of SGA and 32GB of PGA (out of 96GB of RAM).  The reason is that even if you have spare RAM, making the SGA bigger increases the db_cache size.  That means more tables will be smaller than 2% of the cache, and that means you reduce the percentage of activity that'll be direct path.  It's counter-intuitive, but less cache can mean it runs faster (depending on your workload and table layout, of course).
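One rough way to see what a given cache size does to you is to count how many of your segments fall under that ~2% cutoff at the current buffer cache size...grow the cache and watch this number grow with it.  A sketch, with SCOTT standing in for your application schema:

    -- How many of the application's table segments are "small" at the
    -- current buffer cache size?  Bigger cache => bigger cutoff => more
    -- segments read through the cache instead of direct path.
    SELECT COUNT(*) AS segments_below_cutoff
      FROM dba_segments s
     WHERE s.owner = 'SCOTT'
       AND s.segment_type IN ('TABLE', 'TABLE PARTITION')
       AND s.blocks <= (SELECT 0.02 * c.current_size / p.value
                          FROM v$sga_dynamic_components c,
                               (SELECT value FROM v$parameter
                                 WHERE name = 'db_block_size') p
                         WHERE c.component = 'DEFAULT buffer cache');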

Should you design your database to use direct path more often than conventional path?  It depends on your data and your workload.  For a data warehouse, it's likely that direct path will almost always be the correct answer...but even in a data warehouse system, there's usually some non-DSS activity.  Be aware of direct path I/O...it can make things much faster (single, large queries) or it can be a pitfall and make things much slower (multiple transactions attempting to read and modify the same segment).
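If you want a rough picture of which way your workload actually leans, the instance-level scan statistics and wait events are a place to start (statistic names can vary a little between versions):

    -- Rough mix of conventional vs. direct path full scans since startup.
    SELECT name, value
      FROM v$sysstat
     WHERE name IN ('table scans (short tables)',
                    'table scans (long tables)',
                    'table scans (direct read)',
                    'physical reads direct');

    -- And the corresponding wait events.
    SELECT event, total_waits, time_waited
      FROM v$system_event
     WHERE event IN ('direct path read', 'db file scattered read');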
