Make a note for the frame of Exadata new features:
A. Hybrid Columnar Compression
B. Exadata Smart Flash Cache
C. Offloading (*c1):
   1. Decompression (*c2)
   2. Decryption
   3. RMAN: 1. Incremental Backups
            2. Restores
   4. Datafile Initialization
   5. Simple Joins
   6. Functions
   7. Virtual Columns
   8. Smart Scan (*c3): 1. Predicate Filtering => Storage Index
                        2. Column Projection
(*c1) Offloading has three design goals:
• Reduce the volume of data transferred from disk systems to the database servers.
• Reduce CPU usage on database servers.
• Reduce disk access times at the storage layer.
(*c2) Compression is always done at the database tier, while decompression can be done at the storage tier.
The same applies to encryption/decryption: encryption is done at the database tier, while decryption can be offloaded to the storage tier.
(*c3) Four necessary conditions for Smart Scan:
• Must be a Full Table Scan or Index Fast Full Scan
• Must use Direct Path Reads: parallel query, or serial query with serial direct read
• Must have a Predicate Filter or Column Projection
• Data must be located on Exadata storage
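To verify whether a smart scan really happened, the session statistics can be checked after running the statement. Below is a minimal sketch (the statistic names are from 11.2; kso.skew3 is just the example table from Appendix B):
-- run the candidate statement first (full scan with a predicate filter), then:
select sn.name, ms.value/1024/1024 mb
from   v$mystat ms, v$statname sn
where  ms.statistic# = sn.statistic#
and    sn.name in ('cell physical IO bytes eligible for predicate offload',
                   'cell physical IO interconnect bytes returned by smart scan',
                   'cell physical IO bytes saved by storage index');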
Appendix A: Four kinds of HCC Compression Types
Name          Level         Type    Ratio
QUERY LOW     HCC level 1   LZO     4×
QUERY HIGH    HCC level 2   ZLIB    6×
ARCHIVE LOW   HCC level 3   ZLIB    7×
ARCHIVE HIGH  HCC level 4   BZIP2   14×
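A minimal sketch of how a compression level is chosen at create time and verified afterwards (sales_hcc and sales are just example names):
create table sales_hcc compress for query high as select * from sales;
-- other levels: compress for query low / compress for archive low / compress for archive high
select table_name, compression, compress_for from dba_tables where table_name = 'SALES_HCC';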
Appendix B: Commands related to the Flash Cache
alter table kso.skew3 storage (cell_flash_cache keep);
CellCLI> create flashcache all;
CellCLI> list flashcache detail
CellCLI> list celldisk attributes name, diskType, size where name like 'FD.*'
CellCLI> LIST METRICCURRENT WHERE objectType = 'FLASHCACHE';
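To read back which tables were pinned with KEEP (as done for kso.skew3 above), the setting should also be visible from the dictionary; a sketch, assuming the 11.2 CELL_FLASH_CACHE column of DBA_TABLES:
select owner, table_name, cell_flash_cache from dba_tables where cell_flash_cache = 'KEEP';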
Appendix C: Parameters to control offload-related features
alter session set cell_offload_processing=true;
alter session set "_serial_direct_read"=true;
alter system set "_kcfis_storageidx_disabled"=false;
alter session set "_bloom_predicate_pushdown_to_storage"=true;
alter session set "_cell_offload_virtual_columns"=true;
alter system set "_disable_cell_optimized_backups"=true;
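Offloading can also be switched per statement for testing, for example via the OPT_PARAM hint (a sketch; kso.skew3 is just the example table from Appendix B):
select /*+ OPT_PARAM('cell_offload_processing' 'false') */ count(*) from kso.skew3;
select /*+ OPT_PARAM('cell_offload_processing' 'true') */ count(*) from kso.skew3;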
February 28, 2012
Frame for Exadata new Features
February 5, 2012
serious performance issue caused by 11g new feature
Not long ago, one of our new production databases hit a very serious performance issue after migration from 10g to 11g.
It appeared like this: no SQL execution plan had changed, yet execution times became ten times longer or more.
After going through the AWR report, it was clear that the I/O system for the DB was overloaded.
The I/O load reached 400 MB/s, and each read cost nearly 400 ms, while the normal value should be around 5-10 ms.
Going through the AWR report, we found that the top event below accounted for 85%+ of DB time:
direct path read.
Using direct path reads for serial full table scans is a new feature in 11g. In 11g, when a table is being full-table scanned, Oracle makes a judgement and will very possibly choose direct path read instead of the traditional db file scattered read.
In our environment, there is a simple SQL without a WHERE filter, so Oracle definitely has to choose a full table scan for it, and it chose direct path read.
This simple SQL is executed frequently; at the busiest times, more than 100 sessions run it at the same time.
A direct path read by itself is not a problem, but when 100+ sessions are doing direct path reads at the same time, it becomes a big problem.
Because direct path reads bypass the buffer cache, they drive the I/O load higher and higher and make it slower and slower.
All other unrelated SQLs that are not doing a full table scan will also become very slow when they need to read blocks from disk, because the I/O system has already nearly collapsed.
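A quick way to see how many sessions are stuck on this wait at any moment (a simple sketch, for illustration only):
select sql_id, count(*) sessions
from   gv$session
where  state = 'WAITING' and event = 'direct path read'
group by sql_id
order by sessions desc;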
Below is part of the AWR report, for reference:
I# | Wait Class | Event | Waits | %Timeouts | Total Wait Time (s) | Avg Wait (ms) | %DB time | Summary Avg (ms) | Summary Min (ms) | Summary Max (ms) | Summary Std Dev | Cnt
---|---|---|---|---|---|---|---|---|---|---|---|---
* | User I/O | direct path read | 1,373,931 | 0.00 | 481,162.28 | 350.21 | 81.58 | 350.26 | 348.14 | 353.65 | 2.97 | 3
  | Other | reliable message | 32,847 | 21.54 | 9,074.56 | 276.27 | 1.54 | 369.12 | 152.35 | 495.44 | 188.58 | 3
  | User I/O | db file sequential read | 21,558 | 0.00 | 8,367.24 | 388.13 | 1.42 | 387.98 | 379.14 | 394.27 | 7.88 | 3
  | System I/O | control file sequential read | 26,328 | 0.00 | 5,801.76 | 220.36 | 0.98 | 218.09 | 211.52 | 228.97 | 9.49 | 3
  | Commit | log file sync | 13,094 | 0.00 | 5,346.94 | 408.35 | 0.91 | 447.49 | 251.90 | 658.57 | 203.78 | 3
  |  | DB CPU |  |  | 4,250.92 |  | 0.72 |  |  |  |  | 3
  | User I/O | direct path write temp | 12,742 | 0.00 | 3,505.86 | 275.14 | 0.59 | 399.88 | 124.09 | 539.61 | 238.85 | 3
  | System I/O | log file parallel write | 13,040 | 0.00 | 2,580.21 | 197.87 | 0.44 | 209.98 | 142.42 | 304.40 | 84.26 | 3
  | System I/O | db file parallel write | 3,896 | 0.00 | 800.70 | 205.52 | 0.14 | 208.68 | 172.50 | 256.24 | 43.01 | 3
  | Other | gcs log flush sync | 70,782 | 86.16 | 691.43 | 9.77 | 0.12 | 9.81 | 9.51 | 9.99 | 0.26 | 3
From the above we can see that every I/O-related operation turned into a tragedy. Even log file sync and control file sequential read, events not related to the problem SQL, were dragged down by the poor I/O caused by direct path read.
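As a quick relief, the 11g serial direct path read decision can be switched off so that the hot table is read through the buffer cache again. This is only a sketch using undocumented settings (the same "_serial_direct_read" parameter listed in Appendix C of the previous post), so confirm with Oracle Support before using them in production:
alter session set "_serial_direct_read"=false;
-- or, at instance level, the corresponding event:
alter system set events '10949 trace name context forever, level 1';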