Permanent link: https://www.askmac.cn/archives/rac-awr-statistics.html
RAC-Related Metrics
Global Cache Load Profile
 | Per Second | Per Transaction |
Global Cache blocks received: | 12.06 | 2.23 |
Global Cache blocks served: | 8.18 | 1.51 |
GCS/GES messages received: | 391.19 | 72.37 |
GCS/GES messages sent: | 368.76 | 68.22 |
DBWR Fusion writes: | 0.10 | 0.02 |
Estd Interconnect traffic (KB) | 310.31 |
Metric | Description |
Global Cache blocks received | Number of data blocks received from remote instances over the interconnect. This happens when a process requests a consistent read of a block that is not in the local cache: Oracle sends a request to another instance, and the statistic is incremented once the buffer is received. It is the sum of two other statistics: Global Cache blocks received = gc current blocks received + gc cr blocks received |
Global Cache blocks served | Number of data blocks sent to remote instances over the interconnect. It is the sum of two other statistics: Global Cache blocks served = gc current blocks served + gc cr blocks served |
GCS/GES messages received | Number of messages received from remote instances over the interconnect. This statistic largely represents the overhead introduced by the RAC services. It is the sum of two other statistics: GCS/GES messages received = gcs msgs received + ges msgs received |
GCS/GES messages sent | Number of messages sent to remote instances over the interconnect. This statistic largely represents the overhead introduced by the RAC services. It is the sum of two other statistics: GCS/GES messages sent = gcs messages sent + ges messages sent |
DBWR Fusion writes | Number of fusion writes. In RAC, as in a single-instance Oracle database, blocks are written to disk only because of aging, buffer replacement, or checkpoints. When a block is aged out of a cache or checkpointed but another instance holds a version that has not yet been written to disk, the Global Cache Service asks that instance to write the block; such a fusion write does not add an extra disk write beyond what the first instance would have performed. A large number of fusion writes points to a persistent problem. The ratio of fusion write requests to the instance's total write requests is useful for performance analysis: a high ratio suggests an inadequately sized buffer cache or inefficient checkpointing. |
Estd Interconnect traffic (KB) | Estimated interconnect traffic in KB per second, calculated as: Estd Interconnect traffic (KB) = ((('gc cr blocks received' + 'gc current blocks received' + 'gc cr blocks served' + 'gc current blocks served') * Block size) + (('gcs messages sent' + 'ges messages sent' + 'gcs msgs received' + 'ges msgs received') * 200)) / 1024 / Elapsed Time |
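To make the interconnect-traffic formula concrete, here is a minimal Python sketch that recomputes the per-second estimate from the raw counters. The statistic names are taken from the formula above; all numeric values are made up for illustration.

```python
# Sketch: recompute "Estd Interconnect traffic (KB)" per second from raw statistics.
# Statistic names follow the formula above; the values are made-up examples.
stats = {
    "gc cr blocks received":      120_000,
    "gc current blocks received":  80_000,
    "gc cr blocks served":         70_000,
    "gc current blocks served":    65_000,
    "gcs messages sent":        5_000_000,
    "ges messages sent":        1_200_000,
    "gcs msgs received":        5_100_000,
    "ges msgs received":        1_300_000,
}
block_size   = 8192    # bytes (db_block_size)
elapsed_time = 3600    # seconds covered by the snapshot interval

block_bytes = (stats["gc cr blocks received"] + stats["gc current blocks received"]
               + stats["gc cr blocks served"] + stats["gc current blocks served"]) * block_size

# The formula charges roughly 200 bytes of interconnect traffic per GCS/GES message.
msg_bytes = (stats["gcs messages sent"] + stats["ges messages sent"]
             + stats["gcs msgs received"] + stats["ges msgs received"]) * 200

estd_kb_per_sec = (block_bytes + msg_bytes) / 1024 / elapsed_time
print(f"Estd Interconnect traffic (KB): {estd_kb_per_sec:.2f}")
```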
Global Cache Efficiency Percentages (Target local+remote 100%)
Buffer access – local cache %: | 91.05 |
Buffer access – remote cache %: | 0.03 |
Buffer access – disk %: | 8.92 |
Metric | Description |
Buffer access – local cache % | Percentage of block requests satisfied from the local buffer cache. In OLTP workloads this ratio should be kept as high as possible, because the local cache is the cheapest and fastest place to get a block from. Formula: Local Cache Buffer Access Ratio = 1 - (physical reads cache + Global Cache blocks received) / Logical Reads |
Buffer access – remote cache % | Percentage of block requests satisfied from the cache of a remote instance. In OLTP workloads the sum of this ratio and Buffer access – local cache % should be as high as possible, because these two paths are the fastest and cheapest ways to access a block. Formula: Remote Cache Buffer Access Ratio = Global Cache blocks received / Logical Reads |
Buffer access – disk % | Percentage of block requests that had to read the block from disk into the cache. In OLTP workloads this ratio should be kept low, because a physical read is the slowest way to access a block. Formula: Disk Buffer Access Ratio = physical reads cache / Logical Reads |
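A minimal sketch of the three ratios, assuming only the base statistics named in the formulas above (session logical reads, physical reads cache, and the received global cache block count); the values are invented, and the three percentages add up to 100%.

```python
# Sketch: Global Cache Efficiency Percentages from the base statistics.
# All values are made-up examples.
logical_reads        = 10_000_000   # "session logical reads"
physical_reads_cache =    892_000   # "physical reads cache"
gc_blocks_received   =      3_000   # gc cr blocks received + gc current blocks received

local_pct  = (1 - (physical_reads_cache + gc_blocks_received) / logical_reads) * 100
remote_pct = gc_blocks_received / logical_reads * 100
disk_pct   = physical_reads_cache / logical_reads * 100

print(f"Buffer access - local cache %:  {local_pct:.2f}")
print(f"Buffer access - remote cache %: {remote_pct:.2f}")
print(f"Buffer access - disk %:         {disk_pct:.2f}")
assert abs(local_pct + remote_pct + disk_pct - 100.0) < 1e-6
```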
Global Cache and Enqueue Services – Workload Characteristics
Avg global enqueue get time (ms): | 0.0 |
Avg global cache cr block receive time (ms): | 0.3 |
Avg global cache current block receive time (ms): | 0.2 |
Avg global cache cr block build time (ms): | 0.0 |
Avg global cache cr block send time (ms): | 0.0 |
Global cache log flushes for cr blocks served %: | 1.2 |
Avg global cache cr block flush time (ms): | 1.8 |
Avg global cache current block pin time (ms): | 1,021.7 |
Avg global cache current block send time (ms): | 0.0 |
Global cache log flushes for current blocks served %: | 6.9 |
Avg global cache current block flush time (ms): | 0.9 |
Metric | Description |
Avg global enqueue get time (ms) | Average time spent sending messages over the interconnect to open a new global enqueue on a contended resource, or to convert the access mode of an enqueue that is already open. If it exceeds 20 ms, the system may experience timeouts. |
Avg global cache cr block receive time (ms) | Average time from when the requesting instance sends its message to the mastering instance (2-way get), and in some cases on to the holding instance (3-way get), until the CR block is received. It includes the time the holding instance spends building the consistent-read image of the block. CR block receive time should not exceed 15 ms. |
Avg global cache current block receive time (ms) | Average time from when the requesting instance sends its message to the mastering instance (2-way get), and in some cases on to the holding instance (3-way get), until the current block is received. It includes log flush time on the holding instance. Current block receive time should not exceed 30 ms. |
Avg global cache cr block build time (ms) | Average time spent building a CR block |
Avg global cache cr block send time (ms) | Average time spent sending a CR block |
Global cache log flushes for cr blocks served % | Percentage of CR blocks served that required a log flush |
Avg global cache cr block flush time (ms) | Average log flush time for CR blocks served |
Avg global cache current block pin time (ms) | Average time spent pinning a current block before it is served |
Avg global cache current block send time (ms) | Average time spent sending a current block |
Global cache log flushes for current blocks served % | Percentage of current blocks served that required a log flush |
Avg global cache current block flush time (ms) | Average log flush time for current blocks served |
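These averages are ratios of a cumulative time counter to the matching block count. A minimal sketch, assuming the underlying time statistics ('gc cr block receive time', 'gc current block receive time') are kept in centiseconds and are therefore multiplied by 10 to get milliseconds; the raw values are invented.

```python
# Sketch: deriving the average receive times from cumulative counters.
# Assumes the time statistics are recorded in centiseconds (1 cs = 10 ms);
# all values are made-up examples.
gc_cr_block_receive_time      = 3_600    # centiseconds
gc_cr_blocks_received         = 120_000
gc_current_block_receive_time = 1_600    # centiseconds
gc_current_blocks_received    = 80_000

avg_cr_receive_ms      = 10 * gc_cr_block_receive_time / gc_cr_blocks_received
avg_current_receive_ms = 10 * gc_current_block_receive_time / gc_current_blocks_received

print(f"Avg global cache cr block receive time (ms):      {avg_cr_receive_ms:.1f}")
print(f"Avg global cache current block receive time (ms): {avg_current_receive_ms:.1f}")

# Guidelines from the table: CR receive under ~15 ms, current receive under ~30 ms.
if avg_cr_receive_ms > 15 or avg_current_receive_ms > 30:
    print("Block receive times look high - check interconnect and LMS load")
```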
Global Cache and Enqueue Services – Messaging Statistics
Avg message sent queue time (ms): | 2,367.6 |
Avg message sent queue time on ksxp (ms): | 0.1 |
Avg message received queue time (ms): | 0.3 |
Avg GCS message process time (ms): | 0.0 |
Avg GES message process time (ms): | 0.0 |
% of direct sent messages: | 54.00 |
% of indirect sent messages: | 44.96 |
% of flow controlled messages: | 1.03 |
Metric | Description |
Avg message sent queue time (ms) | Average time from when a message is placed on the send queue until it is sent |
Avg message sent queue time on ksxp (ms) | Average time until the remote side receives the message and acknowledges it. This is an important metric because it directly reflects interconnect latency; it is normally under 1 ms. |
Avg message received queue time (ms) | Average time from when a message arrives on the receive queue until it is picked up |
Avg GCS message process time (ms) | Average time spent processing a Global Cache Service (GCS) message |
Avg GES message process time (ms) | Average time spent processing a Global Enqueue Service (GES) message |
% of direct sent messages | Percentage of messages sent directly |
% of indirect sent messages | Percentage of messages sent indirectly, typically queued or larger messages; flow control can also force indirect sends |
% of flow controlled messages | Percentage of messages subject to flow control. The most common cause of flow control is a poorly performing interconnect; % of flow controlled messages should stay below 1%. |
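A small sketch of how the three send-path percentages relate to each other, with the 1% flow-control guideline from the table applied at the end. The counter names and values below are assumptions for illustration only.

```python
# Sketch: direct / indirect / flow-controlled send percentages.
# Counter names and values are assumed for illustration.
sent_directly   = 540_000
sent_indirectly = 449_600
flow_controlled =  10_300
total_sent = sent_directly + sent_indirectly + flow_controlled

pct_direct   = sent_directly   / total_sent * 100
pct_indirect = sent_indirectly / total_sent * 100
pct_flow     = flow_controlled / total_sent * 100

print(f"% of direct sent messages:     {pct_direct:.2f}")
print(f"% of indirect sent messages:   {pct_indirect:.2f}")
print(f"% of flow controlled messages: {pct_flow:.2f}")

# Guideline from the table: flow-controlled messages should stay below 1%.
if pct_flow >= 1.0:
    print("Flow control above 1% - check the private interconnect")
```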
Wait Event Histogram
Event | Total Waits | % of Waits: <1ms | <2ms | <4ms | <8ms | <16ms | <32ms | <=1s | >1s |
ADR block file read | 208 | 38.0 | 3.4 | 44.7 | 13.9 | ||||
ADR block file write | 40 | 100.0 | |||||||
ADR file lock | 48 | 100.0 | |||||||
ARCH wait for archivelog lock | 3 | 100.0 | |||||||
ASM file metadata operation | 12.8K | 99.7 | .1 | .0 | .0 | .0 | .2 | .0 | |
Backup: MML write backup piece | 310.5K | 7.6 | .1 | .1 | 1.3 | 10.4 | 30.2 | 50.2 | .0 |
CGS wait for IPC msg | 141.7K | 100.0 | |||||||
CSS initialization | 34 | 50.0 | 47.1 | 2.9 | |||||
CSS operation: action | 110 | 48.2 | 20.9 | 28.2 | 2.7 | ||||
CSS operation: query | 102 | 88.2 | 3.9 | 7.8 | |||||
DFS lock handle | 6607 | 93.9 | .5 | .2 | .0 | .0 | 5.3 | .0 | |
Disk file operations I/O | 1474 | 100.0 | |||||||
IPC send completion sync | 21.9K | 99.5 | .1 | .1 | .1 | .0 | .2 | ||
KJC: Wait for msg sends to complete | 13 | 100.0 | |||||||
LGWR wait for redo copy | 16.3K | 100.0 | .0 | ||||||
Log archive I/O | 3 | 33.3 | 66.7 | ||||||
PX Deq: Signal ACK EXT | 2256 | 99.8 | .1 | .1 | |||||
PX Deq: Signal ACK RSG | 2124 | 99.9 | .1 | .0 | |||||
PX Deq: Slave Session Stats | 7997 | 94.6 | .9 | .9 | 2.5 | .8 | .4 | ||
PX Deq: Table Q qref | 2355 | 99.9 | .1 | ||||||
PX Deq: reap credit | 1215.7K | 100.0 | .0 | .0 | |||||
PX qref latch | 1366 | 100.0 | |||||||
Parameter File I/O | 194 | 94.8 | 1.0 | 1.0 | 1.0 | 1.5 | .5 |
Wait Event Histogram: distribution of wait times for each event
Event: wait event name
Total Waits: number of waits on the event during the snapshot interval
% of Waits <1ms: percentage of those waits that completed in under 1 ms
% of Waits <2ms: percentage of waits between 1 ms and 2 ms
% of Waits <4ms: percentage of waits between 2 ms and 4 ms
% of Waits <8ms: percentage of waits between 4 ms and 8 ms
% of Waits <16ms: percentage of waits between 8 ms and 16 ms
% of Waits <32ms: percentage of waits between 16 ms and 32 ms
% of Waits <=1s: percentage of waits between 32 ms and 1 s
% of Waits >1s: percentage of waits longer than 1 s
Each wait falls into exactly one bucket, so the percentages for an event add up to roughly 100%.
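To make the bucket semantics concrete, the sketch below classifies a list of invented wait durations into the same buckets AWR uses and prints the per-bucket percentages.

```python
# Sketch: AWR-style wait time histogram buckets.
# Each wait lands in exactly one bucket, so the percentages per event sum to ~100%.
from collections import Counter

def bucket(wait_ms: float) -> str:
    if wait_ms < 1:     return "<1ms"
    if wait_ms < 2:     return "<2ms"
    if wait_ms < 4:     return "<4ms"
    if wait_ms < 8:     return "<8ms"
    if wait_ms < 16:    return "<16ms"
    if wait_ms < 32:    return "<32ms"
    if wait_ms <= 1000: return "<=1s"
    return ">1s"

waits_ms = [0.4, 0.7, 1.5, 3.2, 3.9, 6.0, 12.5, 250.0, 1800.0]   # made-up wait times

counts = Counter(bucket(w) for w in waits_ms)
for label in ("<1ms", "<2ms", "<4ms", "<8ms", "<16ms", "<32ms", "<=1s", ">1s"):
    print(f"{label:>6}: {counts[label] / len(waits_ms) * 100:5.1f} %")
```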
Parent Latch Statistics
- only latches with sleeps are shown
- ordered by name
Latch Name | Get Requests | Misses | Sleeps | Spin & Sleeps 1->3+ |
Real-time plan statistics latch | 77,840 | 136 | 20 | 116/0/0/0 |
active checkpoint queue latch | 321,023 | 20,528 | 77 | 20451/0/0/0 |
active service list | 339,641 | 546 | 132 | 424/0/0/0 |
call allocation | 328,283 | 550 | 148 | 440/0/0/0 |
enqueues | 1,503,525 | 217 | 14 | 203/0/0/0 |
ksuosstats global area | 2,605 | 1 | 1 | 0/0/0/0 |
messages | 2,608,863 | 141,380 | 29 | 141351/0/0/0 |
name-service request queue | 155,047 | 43 | 15 | 28/0/0/0 |
qmn task queue latch | 2,368 | 90 | 78 | 12/0/0/0 |
query server process | 268 | 30 | 30 | 0/0/0/0 |
redo writing | 910,703 | 11,623 | 50 | 11573/0/0/0 |
resmgr:free threads list | 14,454 | 190 | 4 | 186/0/0/0 |
space background task latch | 11,209 | 15 | 7 | 8/0/0/0 |
Latch Name: name of the parent latch
Get Requests: number of requests to acquire the parent latch
Misses: number of get requests that failed on the first attempt
Sleeps: number of times a process had to sleep while waiting for the latch
Spin & Sleeps 1->3+: breakdown of the misses that were resolved by spinning, or after 1, 2, or 3+ sleeps
Child Latch Statistics
- only latches with sleeps/gets > 1/100000 are shown
- ordered by name, gets desc
Latch Name | Child Num | Get Requests | Misses | Sleeps | Spin & Sleeps 1->3+ |
KJC message pool free list | 1 | 96,136 | 82 | 20 | 62/0/0/0 |
Lsod array latch | 10 | 2,222 | 153 | 118 | 58/0/0/0 |
Lsod array latch | 13 | 2,151 | 43 | 14 | 29/0/0/0 |
Lsod array latch | 4 | 2,066 | 154 | 124 | 59/0/0/0 |
Lsod array latch | 5 | 1,988 | 105 | 44 | 63/0/0/0 |
Lsod array latch | 9 | 1,734 | 95 | 32 | 64/0/0/0 |
Lsod array latch | 2 | 1,707 | 88 | 38 | 55/0/0/0 |
Lsod array latch | 11 | 1,695 | 88 | 32 | 57/0/0/0 |
Lsod array latch | 6 | 1,680 | 158 | 126 | 64/0/0/0 |
Lsod array latch | 12 | 1,657 | 155 | 111 | 65/0/0/0 |
Lsod array latch | 7 | 1,640 | 90 | 34 | 59/0/0/0 |
Lsod array latch | 1 | 1,627 | 169 | 153 | 46/0/0/0 |
Lsod array latch | 3 | 1,555 | 87 | 36 | 54/0/0/0 |
Lsod array latch | 8 | 1,487 | 127 | 88 | 57/0/0/0 |
cache buffers chains | 47418 | 354,313 | 391 | 4 | 387/0/0/0 |
cache buffers chains | 8031 | 337,135 | 250 | 8 | 242/0/0/0 |
cache buffers chains | 78358 | 305,022 | 528 | 9 | 519/0/0/0 |
cache buffers chains | 6927 | 241,808 | 129 | 4 | 125/0/0/0 |
Latch Name: name of the latch
Child Num: child latch number within the parent latch
Get Requests: number of requests to acquire the child latch
Misses: number of get requests that failed on the first attempt
Sleeps: number of times a process had to sleep while waiting for the child latch
Spin & Sleeps 1->3+: breakdown of the misses that were resolved by spinning, or after 1, 2, or 3+ sleeps
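One simple way to read these rows is to turn them into miss and sleep ratios. The sketch below uses values close to the first cache buffers chains child above; healthy latches miss on only a tiny fraction of gets.

```python
# Sketch: miss and sleep ratios for a child latch.
# Values roughly follow the first "cache buffers chains" row above.
get_requests = 354_313
misses       = 391
sleeps       = 4

miss_pct  = misses / get_requests * 100                 # share of gets that missed
sleep_pct = (sleeps / misses * 100) if misses else 0.0  # share of misses that slept

print(f"miss ratio:  {miss_pct:.4f} % of gets")
print(f"sleep ratio: {sleep_pct:.2f} % of misses")
```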
Dictionary Cache Stats (RAC)
Cache | GES Requests | GES Conflicts | GES Releases |
dc_awr_control | 11 | 5 | 0 |
dc_global_oids | 5 | 0 | 0 |
dc_histogram_defs | 215 | 1 | 707 |
dc_objects | 90 | 9 | 0 |
dc_segments | 79 | 10 | 73 |
dc_sequences | 35,738 | 37 | 0 |
dc_table_scns | 6 | 0 | 0 |
dc_tablespace_quotas | 907 | 77 | 0 |
dc_users | 10 | 0 | 0 |
outstanding_alerts | 576 | 288 | 0 |
Cache: dictionary cache name
GES Requests: number of GES lock requests made for objects in this dictionary cache
GES Conflicts: number of those requests that conflicted with another instance
GES Releases: number of GES locks released for this cache
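A quick, informal way to spot cross-instance dictionary-cache contention is the conflict rate per cache (GES Conflicts / GES Requests). The sketch below runs that over a few rows from the table above.

```python
# Sketch: GES conflict rate per dictionary cache, using rows from the table above.
rows = {
    "dc_awr_control":       (11, 5),
    "dc_sequences":         (35_738, 37),
    "dc_tablespace_quotas": (907, 77),
    "outstanding_alerts":   (576, 288),
}
for cache, (ges_requests, ges_conflicts) in rows.items():
    rate = ges_conflicts / ges_requests * 100
    print(f"{cache:22s} conflict rate: {rate:6.2f} %")
```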
Library Cache Activity (RAC)
Namespace | GES Lock Requests | GES Pin Requests | GES Pin Releases | GES Inval Requests | GES Invali- dations |
ACCOUNT_STATUS | 242 | 0 | 0 | 0 | 0 |
BODY | 0 | 1,530,013 | 1,530,013 | 0 | 0 |
CLUSTER | 74 | 74 | 74 | 0 | 0 |
DBLINK | 246 | 0 | 0 | 0 | 0 |
EDITION | 311 | 311 | 311 | 0 | 0 |
HINTSET OBJECT | 186 | 186 | 186 | 0 | 0 |
INDEX | 152,360 | 152,360 | 152,360 | 0 | 0 |
QUEUE | 223 | 9,717 | 9,717 | 0 | 0 |
SCHEMA | 255 | 0 | 0 | 0 | 0 |
SUBSCRIPTION | 0 | 26 | 26 | 0 | 0 |
TABLE/PROCEDURE | 275,215 | 3,023,083 | 3,023,083 | 0 | 0 |
TRIGGER | 0 | 384,493 | 384,493 | 0 | 0 |
Namespace: library cache namespace
GES Lock Requests: number of GES lock requests made for objects in this namespace
GES Pin Requests: number of GES pin requests
GES Pin Releases: number of GES pin releases
GES Inval Requests: number of GES invalidation requests
GES Invalidations: number of GES invalidations received
Interconnect Ping Latency Stats
- Ping latency of the roundtrip of a message from this instance to
- target instances.
- The target instance is identified by an instance number.
- Average and standard deviation of ping latency is given in miliseconds
- for message sizes of 500 bytes and 8K.
- Note that latency of a message from the instance to itself is used as
- control, since message latency can include wait for CPU
Target Instance | 500B Ping Count | Avg Latency 500B msg | Stddev 500B msg | 8K Ping Count | Avg Latency 8K msg | Stddev 8K msg |
1 | 1,138 | 0.20 | 0.03 | 1,138 | 0.20 | 0.03 |
2 | 1,138 | 0.17 | 0.04 | 1,138 | 0.20 | 0.05 |
3 | 1,138 | 0.19 | 0.22 | 1,138 | 0.23 | 0.22 |
4 | 1,138 | 0.18 | 0.04 | 1,138 | 0.21 | 0.04 |
Target Instance: target instance number
500B Ping Count: number of 500-byte ping messages sent to the target instance
Avg Latency 500B msg: average roundtrip latency of a 500-byte message, in milliseconds
Stddev 500B msg: standard deviation of the 500-byte roundtrip latency
8K Ping Count: number of 8 KB ping messages sent to the target instance
Avg Latency 8K msg: average roundtrip latency of an 8 KB message, in milliseconds
Stddev 8K msg: standard deviation of the 8 KB roundtrip latency
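As the notes point out, the ping from the instance to itself serves as a control because the measured latency also includes CPU and scheduling wait. A rough sketch that subtracts the self-ping baseline from the remote pings, using the 500-byte averages from the table above (instance 1 is assumed here to be the local instance):

```python
# Sketch: remote ping latency relative to the self-ping control.
# Assumes instance 1 is the local instance; 500-byte averages from the table above.
local_instance = 1
avg_latency_500b_ms = {1: 0.20, 2: 0.17, 3: 0.19, 4: 0.18}

baseline = avg_latency_500b_ms[local_instance]
for target, latency in avg_latency_500b_ms.items():
    if target == local_instance:
        continue
    print(f"instance {target}: {latency:.2f} ms roundtrip, "
          f"{latency - baseline:+.2f} ms vs. self-ping baseline")
```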
Interconnect Throughput by Client
- Throughput of interconnect usage by major consumers
- All throughput numbers are megabytes per second
Used By | Send Mbytes/sec | Receive Mbytes/sec |
Global Cache | 0.10 | 0.20 |
Parallel Query | 0.02 | 0.06 |
DB Locks | 0.09 | 0.09 |
DB Streams | 0.00 | 0.00 |
Other | 0.02 | 0.01 |
Used By: interconnect consumer
Send Mbytes/sec: megabytes sent per second
Receive Mbytes/sec: megabytes received per second
Interconnect Device Statistics
- Throughput and errors of interconnect devices (at OS level)
- All throughput numbers are megabytes per second
Device Name | IP Address | Public | Source | Send Mbytes/sec | Send Errors | Send Dropped | Send Buffer Overrun | Send Carrier Lost | Receive Mbytes/sec | Receive Errors | Receive Dropped | Receive Buffer Overrun | Receive Frame Errors |
bondib0 | 192.168.10.8 | NO | cluster_interconnects parameter | 0.00 | 0 | 0 | 0 | 0 | 0.00 | 0 | 0 | 0 |
Device Name: interconnect device name
IP Address: IP address of the device
Public: whether the interface is a public network interface
Source: where the interface definition comes from (for example the cluster_interconnects parameter)
Send Mbytes/sec: megabytes sent per second
Send Errors: send errors reported by the OS
Send Dropped: packets dropped on send
Send Buffer Overrun: send buffer overruns
Send Carrier Lost: carrier losses detected while sending
Receive Mbytes/sec: megabytes received per second
Receive Errors: receive errors reported by the OS
Receive Dropped: packets dropped on receive
Receive Buffer Overrun: receive buffer overruns
Receive Frame Errors: framing errors on receive
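On a healthy private interconnect the error, dropped, overrun, carrier-lost and frame-error counters should all stay at zero. A trivial sketch of that check, with field names taken from the table header:

```python
# Sketch: flag any non-zero interconnect device error counters.
device_stats = {
    "Send Errors": 0, "Send Dropped": 0, "Send Buffer Overrun": 0,
    "Send Carrier Lost": 0, "Receive Errors": 0, "Receive Dropped": 0,
    "Receive Buffer Overrun": 0, "Receive Frame Errors": 0,
}
problems = {name: count for name, count in device_stats.items() if count > 0}
print("interconnect device OK" if not problems else f"check NIC/switch: {problems}")
```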
Dynamic Remastering Stats
- times are in seconds
- Affinity objects – objects mastered due to affinity at begin/end snap
Name | Total | per Remaster Op | Begin Snap | End Snap |
remaster ops | 29 | 1.00 | ||
remastered objects | 40 | 1.38 | ||
replayed locks received | 1,990 | 68.62 | ||
replayed locks sent | 877 | 30.24 | ||
resources cleaned | 0 | 0.00 | ||
remaster time (s) | 5.0 | 0.17 | ||
quiesce time (s) | 1.7 | 0.06 | ||
freeze time (s) | 0.6 | 0.02 | ||
cleanup time (s) | 0.7 | 0.02 | ||
replay time (s) | 0.2 | 0.01 | ||
fixwrite time (s) | 1.3 | 0.04 | ||
sync time (s) | 0.5 | 0.02 | ||
affinity objects | | | 365 | 367 |
Name: remastering statistic name
Total: total value over the snapshot interval
Per Remaster Op: average value per remaster operation (Total / remaster ops)
Begin Snap: value at the beginning snapshot (shown for affinity objects)
End Snap: value at the ending snapshot (shown for affinity objects)
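The per Remaster Op column is simply each total divided by the number of remaster operations in the interval. A quick check against the rows above:

```python
# Sketch: "per Remaster Op" = Total / remaster ops, checked against the rows above.
remaster_ops = 29
totals = {
    "remastered objects":      40,
    "replayed locks received": 1_990,
    "replayed locks sent":     877,
    "remaster time (s)":       5.0,
}
for name, total in totals.items():
    print(f"{name:24s} per remaster op: {total / remaster_ops:.2f}")
```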