Preliminary testing results are very promising. After a few days of activity, the performance boost from OLTP compression (COMPRESS FOR OLTP) is holding up, and the majority of the data is seeing excellent compression (up to 55X+) with HCC compress for archive high.
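For reference, here's a minimal sketch of the DDL involved (table names are hypothetical; ARCHIVE HIGH requires Exadata storage):

```sql
-- Advanced (OLTP) compression, available on any 11.2 database
-- with the Advanced Compression option:
CREATE TABLE orders_oltp
  COMPRESS FOR OLTP
  AS SELECT * FROM orders;

-- Hybrid Columnar Compression, available on Exadata storage:
CREATE TABLE orders_archive
  COMPRESS FOR ARCHIVE HIGH
  AS SELECT * FROM orders;
```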
One of the tables is ~2TB in size and contains a CLOB column. If you read the documentation, it tells you that HCC doesn't work on CLOBs. What it SHOULD say is that it doesn't work on out-of-line CLOBs. If your CLOBs are small, they're stored inline in the table's segment, and HCC does work on them. I was able to take this 2048GB table and compress it down to around 76GB. That's 3.7% of its original size, for a compression ratio of about 27X.
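As a sketch of what that looks like (assuming a SecureFiles CLOB and hypothetical names), inline storage is controlled with ENABLE STORAGE IN ROW, and an existing table can be rebuilt compressed with ALTER TABLE ... MOVE:

```sql
-- Small CLOBs (roughly under 4KB) stay inline in the table segment,
-- so HCC can compress them; larger values go out-of-line automatically.
CREATE TABLE docs (
  id   NUMBER PRIMARY KEY,
  body CLOB
)
COMPRESS FOR ARCHIVE HIGH
LOB (body) STORE AS SECUREFILE (ENABLE STORAGE IN ROW);

-- Rebuild an existing table with HCC (indexes must be rebuilt afterward):
ALTER TABLE docs MOVE COMPRESS FOR ARCHIVE HIGH;
```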
This is one of the concepts that's difficult for people considering buying Exadata to grasp...it costs a lot, but due to its compression abilities, it may be the least expensive option out there for very large databases. It's hard to compare Exadata to a standard 11.2 database system because Hybrid Columnar Compression makes it an apples-to-oranges comparison.
Consider the cost of high-performance storage on an EMC or Hitachi array. Everybody has their own TCO for high-performance storage...but let's say EMC gives you a good deal and it costs $15/GB. The storage savings on this table alone (the ~1972GB freed up) come to almost $30k. Multiply that out across the 28TB in a high-performance machine and it saves around $11.2 million. A different way to look at it is...in a world where you can compress everything with HCC query high and get these compression results...the effective capacity of a 28TB high-performance Exadata machine is 756TB!
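Spelling out the arithmetic (treating 1TB as 1024GB):

\begin{align*}
\text{table savings} &\approx (2048 - 76)\,\text{GB} \times \$15/\text{GB} \approx \$29{,}600\\
\text{effective capacity} &\approx 28\,\text{TB} \times 27 = 756\,\text{TB}\\
\text{rack savings} &\approx (756 - 28)\,\text{TB} \times 1024\,\tfrac{\text{GB}}{\text{TB}} \times \$15/\text{GB} \approx \$11.2\,\text{M}
\end{align*}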
Now that Oracle is selling storage cells individually, Exadata is truly expandable...there really isn't a capacity limit until you can saturate the InfiniBand fabric and create a bottleneck...but since that path has been optimized (only sending the required blocks, columns, and sometimes result sets to the RAC nodes), it's difficult to imagine how much storage you could have before that becomes an issue.
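As a rough way to see that offload in action, you can compare how much I/O was eligible for predicate offload against what actually crossed the interconnect (a sketch against the 11.2 cell statistics; exact statistic names can vary by version):

```sql
-- Bytes eligible for smart scan offload vs. bytes actually
-- returned over InfiniBand to the database nodes:
SELECT name, ROUND(value/1024/1024/1024, 2) AS gb
FROM   v$sysstat
WHERE  name IN (
  'cell physical IO bytes eligible for predicate offload',
  'cell physical IO interconnect bytes returned by smart scan'
);
```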