Friday, August 12, 2011

An Overlooked Database Performance Pitfall (Direct IO) - Part 1

At this point, if you've heard of Oracle (other than from watching The Matrix or reading "The History of Ancient Greece"), then you've likely heard of Exadata.  And if you've heard of Exadata, whatever your source...reading about it, hearing about it, whatever...there was a mention of how fast it is.  If this were a PowerPoint slide instead of a post, I'd show a race car, because designing your database is a little like speeding in your car...people don't always know when they're hitting the accelerator and when they're braking.

In my previous post, I talked about how Exadata's accelerator, the direct path read, enables many of Exadata's performance features.  Normally, Oracle's IO path moves data blocks from storage into the buffer cache, and works with them from there.  There's a lot of overhead related to shared memory latches, but for small amounts of data movement there's no better way to do it.  For large data transfers (large is relative to your segment) there's a better way.  Direct IO allows you to circumvent the overhead of the buffer cache, and move the data directly from storage (array/spindle, etc.) to your process memory (the PGA).
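If you want to see which path a query actually took, your session's wait events will tell you.  Here's a minimal sketch using the standard v$ views; BIG_TABLE is a placeholder for any large segment you have handy:

    -- Force a full scan of a large table (BIG_TABLE is hypothetical).
    SELECT /*+ FULL(t) */ COUNT(*) FROM big_table t;

    -- Then check which read path the session used:
    --   "db file scattered read" = conventional multiblock reads into the buffer cache
    --   "direct path read"       = blocks read straight into the PGA
    SELECT event, total_waits, time_waited
    FROM   v$session_event
    WHERE  sid = SYS_CONTEXT('USERENV', 'SID')
    AND    event IN ('db file scattered read', 'direct path read');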

"But what about dirty blocks?  They're in memory, but they haven't been written to disk.  If I'm reading directly from disk to my process, wouldn't I miss them?"  Yes, you would...that's why at the start of any direct-io read there's a segment checkpoint that flushes dirty blocks to disk, followed by a shared lock to prevent the segment contents from changing during a direct-io read.  So, writing (especially direct IO writes...they can't be dirty) will have to wait for the read to complete.  Readers blocking writers...where have I heard that before? 

This is why there isn't one correct method of disk access in Oracle...it depends on what you're doing.  If you're moving a large amount of data, direct IO moves it faster, but you incur the segment checkpoint wait before you can begin.  If you're only moving a few blocks, conventional IO copies them more slowly because of the shared memory latch waits, but it doesn't incur the overhead of the segment checkpoint at the beginning.  The optimizer looks at how much you're copying and attempts to choose the best path, based on the system settings and table statistics.
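How large is "large"?  On 11g the rule of thumb (and it's only a rule of thumb...the exact behavior depends on hidden parameters like _small_table_threshold and varies by version) is that serial direct path reads become a candidate once a segment is bigger than a small percentage of the buffer cache.  Here's a rough sketch for eyeballing a segment against that threshold; MY_TABLE and the 2% figure are assumptions, not gospel:

    -- Compare a segment's size to an assumed ~2% buffer-cache threshold.
    -- t.blocks comes from optimizer statistics, so gather stats first.
    SELECT t.table_name,
           t.blocks                AS segment_blocks,
           ROUND(p.buffers * 0.02) AS approx_threshold_blocks
    FROM   user_tables t,
           v$buffer_pool p
    WHERE  t.table_name = 'MY_TABLE'
    AND    p.name       = 'DEFAULT';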

In previous posts I gave examples of how, under certain OLTP conditions, direct path reads can also work like a brake on your db race car, due to waits on segment checkpoints.  As a DBA/data architect, how do you design your database to take advantage of the benefits?  The optimizer will choose direct path reads when it thinks they're optimal, so you really need to know: what are the pitfalls that prevent them?

I'll list a few in my next post.
