This article walks through how to interpret the metrics in an Oracle AWR report.
physical read bytes / physical read total bytes : physical read throughput per second
physical read IO requests / physical read total IO requests : physical read IOPS
physical write bytes / physical write total bytes : physical write throughput per second
physical write IO requests / physical write total IO requests : physical write IOPS

Total physical throughput per second = physical read total bytes + physical write total bytes
Total physical IOPS = physical read total IO requests + physical write total IO requests
The main I/O indicators (throughput, IOPS and latency) can all be obtained from AWR. I/O latency can be read from the Avg Wait time of the User I/O wait class, and the IOStat by Function summary section introduced in 11g is another useful reference.
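As a cross-check outside of AWR, the same four cumulative counters can be sampled directly from V$SYSSTAT; the sketch below is illustrative only and assumes you diff two samples over a known number of seconds to get per-second throughput and IOPS.

-- Illustrative sketch: sample these cumulative counters twice and divide the
-- delta by the elapsed seconds to get throughput (bytes/s) and IOPS.
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('physical read total bytes',
                'physical write total bytes',
                'physical read total IO requests',
                'physical write total IO requests');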
Instance Activity Stats contains a very large number of statistics, and no single document describes them all completely, not even inside Oracle itself (or at least Maclean has not found one). In practice it is easy for a developer to introduce a new activity statistic; unlike adding a new background process, which Oracle controls strictly for each new release, an activity statistic can be introduced in a simple one-off patch, because at the source-code level an activity statistic is just a counter.
For the more basic statistics, refer to the Statistics Descriptions appendix of the official Oracle documentation.
For deeper statistics such as "Batched IO (space) vector count", which are introduced along with new features, there is generally no detailed material; you have to read the relevant source modules to work out what they mean, and Oracle is usually slow to document this, so no complete list exists. If you have questions about a specific statistic, post them at t.askmaclean.com.
Instance Activity Stats - Absolute Values          Snaps: 7071
-> Statistics with absolute values (should not be diffed)

Statistic                            Begin Value       End Value
-------------------------------- --------------- ---------------
session pga memory max           1.157882826E+12 1.154290304E+12
session cursor cache count           157,042,373     157,083,136
session uga memory               5.496429019E+14 5.496775467E+14
opened cursors current                   268,916         265,694
workarea memory allocated                827,704         837,487
logons current                             2,609           2,613
session uga memory max           1.749481584E+13 1.749737418E+13
session pga memory               4.150306913E+11 4.150008177E+11
Instance Activity Stats – Absolute Values shows the absolute values of a few statistics at the begin and end snapshots (they should not be diffed).
logons current: number of logons at that point in time
opened cursors current: number of currently open cursors
session cursor cache count: number of cursors currently held in session cursor caches
Instance Activity Stats - Thread Activity  DB/Inst: G10R25/G10R25  Snaps: 3663-3
-> Statistics identified by '(derived)' come from sources other than SYSSTAT

Statistic                                      Total       per Hour
--------------------------------  ------------------  ---------
log switches (derived)                            17   2,326.47
log switches (derived): number of redo log switches; see "What is the ideal online redo log switch interval?".
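If you want to check the switch rate outside of AWR, a rough sketch (not from the original text) is to count entries in V$LOG_HISTORY per hour:

-- Rough cross-check: redo log switches per hour from the log history
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS log_switches
  FROM v$log_history
 GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
 ORDER BY 1;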
5 I/O Statistics
5-1 Tablespace IO Stats: I/O information grouped by tablespace
Tablespace IO Stats DB/Inst: ITSCMP/itscmp2 Snaps: 70719-70723 -> ordered by IOs (Reads + Writes) desc Tablespace ------------------------------ Av Av Av Av Buffer Av Buf Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms) -------------- ------- ------- ------- ------------ -------- ---------- ------- DATA_TS 17,349,398 4,801 2.3 1.5 141,077 39 4,083,704 5.8 INDEX_TS 9,193,122 2,544 2.0 1.0 238,563 66 3,158,187 46.1 UNDOTBS1 1,582,659 438 0.7 1.0 2 0 12,431 69.0
Reads: number of physical reads on the tablespace (a count of read calls, not blocks)
Av Reads/s: average physical reads per second on the tablespace (calls, not blocks)
Av Rd(ms): average latency of each read on the tablespace, in ms
Av Blks/Rd: average number of blocks per read on the tablespace, since one physical read can fetch several blocks; if Av Blks/Rd >> 1 the system probably does a lot of db file scattered read, typically from FULL TABLE SCAN or INDEX FAST FULL SCAN, so check the statistics table scans (long tables) and index fast full scans (full)
Writes: number of physical writes on the tablespace; for tablespaces whose Writes stay at 0, check whether the data is effectively read-only, and if so, making it a read only tablespace can remove some RAC performance problems
Av Writes/s: average physical writes per second on the tablespace
Buffer Waits: number of buffer busy waits and read by other session waits on the tablespace (in 9i buffer busy waits included read by other session)
Av Buf Wt(ms): average wait time of those buffer waits on the tablespace, in ms
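For a real-time view of similar per-tablespace figures, a minimal sketch (an assumption, not part of the AWR report itself) is to aggregate V$FILESTAT by tablespace; note that READTIM is in centiseconds, hence the factor of 10.

-- Sketch: per-tablespace cumulative I/O since instance startup (datafiles only)
SELECT d.tablespace_name,
       SUM(f.phyrds)                                            AS reads,
       SUM(f.phywrts)                                           AS writes,
       ROUND(SUM(f.readtim) * 10 / NULLIF(SUM(f.phyrds), 0), 2) AS av_rd_ms,
       ROUND(SUM(f.phyblkrd) / NULLIF(SUM(f.phyrds), 0), 2)     AS av_blks_rd
  FROM v$filestat f, dba_data_files d
 WHERE f.file# = d.file_id
 GROUP BY d.tablespace_name
 ORDER BY reads + writes DESC;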
5-2 File I/O
File IO Stats Snaps: 70719-70723 -> ordered by Tablespace, File Tablespace Filename ------------------------ ---------------------------------------------------- Av Av Av Av Buffer Av Buf Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms) -------------- ------- ------- ------- ------------ -------- ---------- ------- AMG_ALBUM_IDX_TS +DATA/itscmp/plugged/data2/amg_album_idx_ts01.dbf 23,298 6 0.6 1.0 2 0 0 0.0 AMG_ALBUM_IDX_TS +DATA/itscmp/plugged/data3/amg_album_idx_ts02.dbf 3,003 1 0.6 1.0 2 0 0 0.0
Tablespace: tablespace name
Filename: path of the data file
Reads: cumulative physical reads on the data file (read calls, not blocks)
Av Reads/s: average physical reads per second on the data file (calls, not blocks)
Av Rd(ms): average latency of each physical read on the data file, in ms
Av Blks/Rd: average number of blocks per read on the data file; in OLTP environments this value is close to 1
Writes: cumulative physical writes on the data file (write calls, not blocks)
Av Writes/s: average physical writes per second on the data file (calls, not blocks)
Buffer Waits: number of buffer busy waits and read by other session waits on the data file (in 9i buffer busy waits included read by other session)
Av Buf Wt(ms): average wait time of those buffer waits on the data file, in ms
If a tablespace carries a heavy I/O load, it is worth checking whether that load is spread evenly across its data files or is skewed, and whether, given the storage layout, the data should be rebalanced across data files on different disks to optimize I/O.
6 Buffer Pool Statistics
Buffer Pool Statistics Snaps: 70719-70723 -> Standard block size Pools D: default, K: keep, R: recycle -> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k Free Writ Buffer Number of Pool Buffer Physical Physical Buff Comp Busy P Buffers Hit% Gets Reads Writes Wait Wait Waits --- ---------- ---- ------------ ------------ ----------- ------ ------ -------- 16k 15,720 N/A 0 0 0 0 0 0 D 2,259,159 98 2.005084E+09 42,753,650 560,460 0 1 8.51E+06
The data in this section comes mainly from WRH$_BUFFER_POOL_STATISTICS, which is periodically populated from V$SYSSTAT data.
P: name of the pool. D: default buffer pool, K: keep pool, R: recycle pool; 2k, 4k, 8k, 16k, 32k are the buffer pools for the non-standard block sizes.
Number of Buffers: actual number of buffers, roughly pool size / pool block size
Pool Hit %: hit ratio of the buffer pool
Buffer Gets: number of accesses to blocks in this pool, including consistent gets and db block gets
Physical Reads: physical reads caused by this buffer pool, essentially the statistic physical reads cache, counted in blocks
Physical Writes: physical writes of buffers from this pool, essentially physical writes from cache, counted in blocks
Free Buffer Waits: number of waits for a free buffer, i.e. how many free buffer waits occurred against this pool
Writ Comp Wait: number of waits for DBWR to finish writing dirty buffers to disk, i.e. how many write complete waits occurred against this pool
Buffer Busy Waits: number of buffer busy wait events against this pool
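The same counters can be read live from V$BUFFER_POOL_STATISTICS; the hit-ratio expression below is a common approximation (an assumption, not a formula taken from the report).

-- Sketch: per-pool gets, physical I/O and an approximate hit ratio
SELECT name,
       block_size,
       db_block_gets + consistent_gets                          AS buffer_gets,
       physical_reads,
       physical_writes,
       ROUND(1 - physical_reads /
                 NULLIF(db_block_gets + consistent_gets, 0), 4) AS hit_ratio,
       free_buffer_wait,
       write_complete_wait,
       buffer_busy_wait
  FROM v$buffer_pool_statistics;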
7-1 Checkpoint Activity and Instance Recovery Stats
Checkpoint Activity Snaps: 70719-70723 -> Total Physical Writes: 590,563 Other Autotune Thread MTTR Log Size Log Ckpt Settings Ckpt Ckpt Writes Writes Writes Writes Writes Writes ----------- ----------- ----------- ----------- ----------- ----------- 0 0 0 0 12,899 0 ------------------------------------------------------------- Instance Recovery Stats Snaps: 70719-70723 -> B: Begin Snapshot, E: End Snapshot Estd Targt Estd Log Ckpt Log Ckpt Opt RAC MTTR MTTR Recovery Actual Target Log Sz Timeout Interval Log Avail (s) (s) Estd IOs RedoBlks RedoBlks RedoBlks RedoBlks RedoBlks Sz(M) Time - ----- ----- -------- -------- -------- -------- -------- -------- ------ ----- B 0 6 12828 477505 1786971 5096034 1786971 N/A N/A 3 E 0 7 16990 586071 2314207 5096034 2314207 N/A N/A 3 -------------------------------------------------------------
The data in this section comes from WRH$_INSTANCE_RECOVERY.
MTTR Writes: physical writes performed to meet the MTTR set by FAST_START_MTTR_TARGET (WRITES_MTTR)
Log Size Writes: physical writes driven by the size of the smallest redo log file (WRITES_LOGFILE_SIZE)
Log Ckpt Writes: physical writes driven by the incremental checkpoints controlled by LOG_CHECKPOINT_INTERVAL and LOG_CHECKPOINT_TIMEOUT (WRITES_LOG_CHECKPOINT_SETTINGS)
Other Settings Writes: physical writes caused by other settings, for example FAST_START_IO_TARGET (WRITES_OTHER_SETTINGS)
Autotune Ckpt Writes: physical writes caused by automatically tuned checkpointing (WRITES_AUTOTUNE)
Thread Ckpt Writes: physical writes caused by thread checkpoints (WRITES_FULL_THREAD_CKPT)
B marks the begin snapshot, E the end snapshot.
Targt MTTR (s): the target MTTR (mean time to recover), in seconds. TARGET_MTTR is derived from the FAST_START_MTTR_TARGET parameter and is used internally; in practice the target MTTR is not necessarily equal to FAST_START_MTTR_TARGET. If FAST_START_MTTR_TARGET is too small, TARGET_MTTR is the smallest estimate the system conditions allow; if it is too large, TARGET_MTTR is computed conservatively as the longest estimated time needed to complete recovery.
Estd MTTR (s): the mean time to recover currently estimated from the number of dirty buffers and redo blocks. It tells you how long the crash recovery roll-forward would take under the current load, before the database could be opened, if the instance crashed right now.
Recovery Estd IOs: effectively the number of dirty blocks in the buffer cache; these are the blocks that would have to be rolled forward after an instance crash
Actual RedoBlks: the number of redo blocks that would currently have to be applied for recovery
Target RedoBlks: the minimum of Log Sz RedoBlks, Log Ckpt Timeout RedoBlks and Log Ckpt Interval RedoBlks
Log Sz RedoBlks: the number of redo blocks that must be checkpointed before a log file switch, also called max log lag; source: select LOGFILESZ from X$targetrba; or select LOG_FILE_SIZE_REDO_BLKS from v$instance_recovery;
Log Ckpt Timeout RedoBlks: the number of redo blocks that must be processed to satisfy LOG_CHECKPOINT_TIMEOUT (lag for checkpoint timeout); source: select CT_LAG from x$targetrba;
Log Ckpt Interval RedoBlks: the number of redo blocks that must be processed to satisfy LOG_CHECKPOINT_INTERVAL (lag for checkpoint interval); source: select CI_LAG from x$targetrba;
Opt Log Sz(M): the redo log file size, in MB, estimated from FAST_START_MTTR_TARGET; Oracle recommends creating redo logs at least this large
Estd RAC Avail Time: the estimated time, in seconds, from a node failure until the cluster goes from frozen to partially available again; this metric only applies to RAC (ESTD_CLUSTER_AVAILABLE_TIME)
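The current values behind this section can also be queried directly from V$INSTANCE_RECOVERY, for example:

-- Sketch: current MTTR targets and checkpoint lag figures
SELECT target_mttr,
       estimated_mttr,
       recovery_estimated_ios,
       actual_redo_blks,
       target_redo_blks,
       log_file_size_redo_blks,
       optimal_logfile_size
  FROM v$instance_recovery;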
7-2 Buffer Pool Advisory
Buffer Pool Advisory DB/Inst: ITSCMP/itscmp2 Snap: 70723 -> Only rows with estimated physical reads >0 are displayed -> ordered by Block Size, Buffers For Estimate Est Phys Estimated Est Size for Size Buffers Read Phys Reads Est Phys %DBtime P Est (M) Factor (thousands) Factor (thousands) Read Time for Rds --- -------- ------ ------------ ------ -------------- ------------ ------- D 1,920 .1 227 4.9 1,110,565,597 1 1.0E+09 D 3,840 .2 454 3.6 832,483,886 1 7.4E+08 D 5,760 .3 680 2.8 634,092,578 1 5.6E+08 D 7,680 .4 907 2.2 500,313,589 1 4.3E+08 D 9,600 .5 1,134 1.8 410,179,557 1 3.5E+08 D 11,520 .6 1,361 1.5 348,214,283 1 2.9E+08 D 13,440 .7 1,588 1.3 304,658,441 1 2.5E+08 D 15,360 .8 1,814 1.2 273,119,808 1 2.2E+08 D 17,280 .9 2,041 1.1 249,352,943 1 2.0E+08 D 19,200 1.0 2,268 1.0 230,687,206 1 1.8E+08 D 19,456 1.0 2,298 1.0 228,664,269 1 1.8E+08 D 21,120 1.1 2,495 0.9 215,507,858 1 1.7E+08 D 23,040 1.2 2,722 0.9 202,816,787 1 1.6E+08 D 24,960 1.3 2,948 0.8 191,974,196 1 1.5E+08 D 26,880 1.4 3,175 0.8 182,542,765 1 1.4E+08 D 28,800 1.5 3,402 0.8 174,209,199 1 1.3E+08 D 30,720 1.6 3,629 0.7 166,751,631 1 1.2E+08 D 32,640 1.7 3,856 0.7 160,002,420 1 1.2E+08 D 34,560 1.8 4,082 0.7 153,827,351 1 1.1E+08 D 36,480 1.9 4,309 0.6 148,103,338 1 1.1E+08 D 38,400 2.0 4,536 0.6 142,699,866 1 1.0E+08
The granule size of the buffer pools can be checked with: SELECT * FROM V$SGAINFO WHERE name LIKE 'Granule%';
P: name of the buffer pool; D default buffer pool, K keep pool, R recycle pool
Size For Est(M): the buffer pool size being evaluated, normally 10% to 200% of the current size, so that you can see how growing or shrinking the pool would affect physical reads
Size Factor: the ratio of the evaluated buffer pool size to the current setting; for example, if current_size is 100M and the evaluated size is 110M, the size factor is 1.1
Buffers (thousands): number of buffers at that pool size; multiply by 1000 for the actual value
Est Phys Read Factor: estimated physical read factor; for example, if the current pool size causes 100 physical reads and another size would cause 120, that size's Est Phys Read Factor is 1.2
Estimated Phys Reads (thousands): estimated number of physical reads (multiply by 1000); each evaluated pool size has its own estimate
Est Phys Read Time: estimated physical read time
Est %DBtime for Rds: estimated share of DB time spent on physical reads
We usually look at the Buffer Pool Advisory for two reasons:
1. Physical reads are high and we hope to relieve physical read waits by enlarging the buffer pool. In that case look at the rows with Size Factor > 1 and check whether Est Phys Read Factor drops significantly as Size Factor grows; if it does, enlarging the buffer cache will effectively reduce physical reads.
2. Memory is tight and we want to take some memory away from the buffer pool for other uses without increasing physical reads and hurting performance. In that case check whether Est Phys Read Factor rises sharply as Size Factor shrinks; if it does not, part of the buffer cache can be given up safely.
Note that the relationship between Size Factor and Est Phys Read Factor is not simply linear, so a human still has to weigh the trade-off.
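A quick way to eyeball the same trade-off outside of AWR is V$DB_CACHE_ADVICE; a minimal sketch for the default pool:

-- Sketch: estimated physical reads at different default-pool sizes
SELECT size_for_estimate,
       size_factor,
       estd_physical_read_factor,
       estd_physical_reads
  FROM v$db_cache_advice
 WHERE name = 'DEFAULT'
 ORDER BY size_for_estimate;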
7-3 PGA Aggr Summary
PGA Aggr Summary  Snaps: 70719-70723
-> PGA cache hit % - percentage of W/A (WorkArea) data processed only in-memory

PGA Cache Hit %   W/A MB Processed  Extra W/A MB Read/Written
--------------- ------------------ --------------------------
           99.9            412,527                        375
PGA Cache Hit %: the percentage of workarea (W/A) data processed purely in memory, i.e. the PGA cache hit ratio
The workarea is the part of the PGA used for sorts, hash joins and bitmap merge operations; it is also called the SQL work area.
W/A MB Processed: amount of data processed in workareas, in MB
Extra W/A MB Read/Written: extra workarea data read from or written to disk, in MB
7-4 PGA Aggr Target Stats
Warning: pga_aggregate_target was set too low for current workload, as this value was exceeded during this interval. Use the PGA Advisory view to help identify a different value for pga_aggregate_target. PGA Aggr Target Stats Snaps: 70719-70723 -> B: Begin Snap E: End Snap (rows dentified with B or E contain data which is absolute i.e. not diffed over the interval) -> Auto PGA Target - actual workarea memory target -> W/A PGA Used - amount of memory used for all Workareas (manual + auto) -> %PGA W/A Mem - percentage of PGA memory allocated to workareas -> %Auto W/A Mem - percentage of workarea memory controlled by Auto Mem Mgmt -> %Man W/A Mem - percentage of workarea memory under manual control %PGA %Auto %Man PGA Aggr Auto PGA PGA Mem W/A PGA W/A W/A W/A Global Mem Target(M) Target(M) Alloc(M) Used(M) Mem Mem Mem Bound(K) - ---------- ---------- ---------- ---------- ------ ------ ------ ---------- B 8,192 512 23,690.5 150.1 .6 100.0 .0 838,860 E 8,192 512 23,623.6 156.9 .7 100.0 .0 838,860 -------------------------------------------------------------
The data in this section comes mainly from WRH$_PGASTAT.
PGA Aggr Target(M): essentially pga_aggregate_target; under AMM (memory_target) this value can change automatically
Auto PGA Target(M): the workarea memory actually available under automatic PGA management ("aggregate PGA auto target"); it is less than the target because the PGA has other uses and cannot all be given to workareas
PGA Mem Alloc(M): PGA memory currently allocated; allocated is not the same as in use, and although in theory the PGA returns memory it genuinely no longer needs to the OS (PGA memory freed back to OS), there are scenarios where the PGA holds a large amount of memory without releasing it
In the example above pga_aggregate_target is only 8192M while the number of processes ranges between 2,615 and about 8,000; even at 5MB of PGA per process that already requires more than 10000M of PGA, and in fact PGA Mem Alloc(M) is 23,690M. This shows the PGA is over-allocated and pga_aggregate_target needs to be adjusted.
W/A PGA Used(M): total memory used by all workareas (manual plus auto), in MB
%PGA W/A Mem: share of total PGA memory allocated to workareas, (W/A PGA Used) / (PGA Mem Alloc)
%Auto W/A Mem: share of workarea memory under automatic workarea management (workarea_size_policy=AUTO)
%Man W/A Mem: share of workarea memory under manual workarea management (workarea_size_policy=MANUAL)
Global Mem Bound(K): the maximum memory a single workarea can be allocated under automatic PGA management (note that one SQL execution can involve several workareas). This bound is continuously re-computed while the instance runs to reflect the current workarea load; with many active workareas it drops accordingly. The global bound should not fall below 1 MB; if it does, raise pga_aggregate_target.
7-5 PGA Aggr Target Histogram
PGA Aggr Target Histogram Snaps: 70719-70723 -> Optimal Executions are purely in-memory operations Low High Optimal Optimal Total Execs Optimal Execs 1-Pass Execs M-Pass Execs ------- ------- -------------- -------------- ------------ ------------ 2K 4K 262,086 262,086 0 0 64K 128K 497 497 0 0 128K 256K 862 862 0 0 256K 512K 368 368 0 0 512K 1024K 440,585 440,585 0 0 1M 2M 68,313 68,313 0 0 2M 4M 169 161 8 0 4M 8M 50 42 8 0 8M 16M 82 82 0 0 16M 32M 1 1 0 0 32M 64M 12 12 0 0 128M 256M 2 0 2 0 -------------------------------------------------------------
Data source: WRH$_SQL_WORKAREA_HISTOGRAM
Low Optimal: lower bound of the optimal memory requirement for the workareas in this row
High Optimal: upper bound of the optimal memory requirement for the workareas in this row
Total Execs: total number of executions whose workareas fall into the Low Optimal to High Optimal range
Optimal Execs: executions completed entirely in PGA memory
1-Pass Execs: executions that needed exactly one pass to disk
M-Pass Execs: executions that needed more than one pass to disk, i.e. executions with heavy disk activity
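The cumulative (since instance startup) version of this histogram is V$SQL_WORKAREA_HISTOGRAM; a sketch:

-- Sketch: workarea executions by optimal-size bucket since instance startup
SELECT low_optimal_size,
       high_optimal_size,
       total_executions,
       optimal_executions,
       onepass_executions,
       multipasses_executions
  FROM v$sql_workarea_histogram
 ORDER BY low_optimal_size;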
7-6 PGA Memory Advisory
PGA Memory Advisory Snap: 70723 -> When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value where Estd PGA Overalloc Count is 0 Estd Extra Estd P Estd PGA PGA Target Size W/A MB W/A MB Read/ Cache Overallo Estd Est (MB) Factr Processed Written to Disk Hit % Count Time ---------- ------- ---------------- ---------------- ------ -------- ------- 1,024 0.1 2,671,356,938.7 387,531,258.9 87.0 1.07E+07 7.9E+11 2,048 0.3 2,671,356,938.7 387,529,979.1 87.0 1.07E+07 7.9E+11 4,096 0.5 2,671,356,938.7 387,518,881.8 87.0 1.07E+07 7.9E+11 6,144 0.8 2,671,356,938.7 387,420,749.5 87.0 1.07E+07 7.9E+11 8,192 1.0 2,671,356,938.7 23,056,196.5 99.0 1.07E+07 6.9E+11 9,830 1.2 2,671,356,938.7 22,755,192.6 99.0 6.81E+06 6.9E+11 11,469 1.4 2,671,356,938.7 20,609,438.5 99.0 4.15E+06 6.9E+11 13,107 1.6 2,671,356,938.7 19,021,139.1 99.0 581,362 6.9E+11 14,746 1.8 2,671,356,938.7 18,601,191.0 99.0 543,531 6.9E+11 16,384 2.0 2,671,356,938.7 18,561,361.1 99.0 509,687 6.9E+11 24,576 3.0 2,671,356,938.7 18,527,422.3 99.0 232,817 6.9E+11 32,768 4.0 2,671,356,938.7 18,511,872.6 99.0 120,180 6.9E+11 49,152 6.0 2,671,356,938.7 18,500,815.3 99.0 8,021 6.9E+11 65,536 8.0 2,671,356,938.7 18,498,733.0 99.0 0 6.9E+11
PGA Target Est (MB): the PGA_AGGREGATE_TARGET value being evaluated
Size Factr: ratio of the evaluated PGA_AGGREGATE_TARGET to the currently set PGA_AGGREGATE_TARGET, i.e. PGA Target Est / PGA_AGGREGATE_TARGET
W/A MB Processed: amount of data to be processed in workareas, in MB
Estd Extra W/A MB Read/Written to Disk: estimated amount of data processed in one-pass or multi-pass mode, in MB
Estd PGA Cache Hit %: estimated PGA cache hit ratio
Estd PGA Overalloc Count: estimated PGA over-allocation count; as noted above, PGA_AGGREGATE_TARGET is only a target and cannot strictly limit PGA memory usage, so hard PGA memory demands cause over-allocation ("When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value where Estd PGA Overalloc Count is 0")
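The same advisory is available in real time from V$PGA_TARGET_ADVICE; a sketch (the view reports sizes in bytes, hence the conversion):

-- Sketch: pick the smallest target where estd_overalloc_count drops to 0
SELECT ROUND(pga_target_for_estimate / 1024 / 1024) AS target_mb,
       pga_target_factor,
       ROUND(estd_extra_bytes_rw / 1024 / 1024)     AS estd_extra_mb_rw,
       estd_pga_cache_hit_percentage,
       estd_overalloc_count
  FROM v$pga_target_advice
 ORDER BY pga_target_for_estimate;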
7-7 Shared Pool Advisory
Shared Pool Advisory Snap: 70723 -> SP: Shared Pool Est LC: Estimated Library Cache Factr: Factor -> Note there is often a 1:Many correlation between a single logical object in the Library Cache, and the physical number of memory objects associated with it. Therefore comparing the number of Lib Cache objects (e.g. in v$librarycache), with the number of Lib Cache Memory Objects is invalid. Est LC Est LC Est LC Est LC Shared SP Est LC Time Time Load Load Est LC Pool Size Size Est LC Saved Saved Time Time Mem Obj Size(M) Factr (M) Mem Obj (s) Factr (s) Factr Hits (K) -------- ----- -------- ------------ -------- ------ ------- ------ ------------ 304 .8 56 3,987 7,728 1.0 61 1.4 332 352 .9 101 6,243 7,745 1.0 44 1.0 334 400 1.0 114 7,777 7,745 1.0 44 1.0 334 448 1.1 114 7,777 7,745 1.0 44 1.0 334 496 1.2 114 7,777 7,745 1.0 44 1.0 334 544 1.4 114 7,777 7,745 1.0 44 1.0 334 592 1.5 114 7,777 7,745 1.0 44 1.0 334 640 1.6 114 7,777 7,745 1.0 44 1.0 334 688 1.7 114 7,777 7,745 1.0 44 1.0 334 736 1.8 114 7,777 7,745 1.0 44 1.0 334 784 2.0 114 7,777 7,745 1.0 44 1.0 334 832 2.1 114 7,777 7,745 1.0 44 1.0 334 -------------------------------------------------------------
Shared Pool Size(M): the shared pool size being evaluated; under AMM/ASMM the shared pool size can itself change
SP Size Factr: size factor of the shared pool, (Shared Pool Size for Estimate / SHARED_POOL_SIZE)
Est LC Size (M): estimated size of the library cache, in MB; the shared pool contains the library cache plus other components such as the row cache
Est LC Mem Obj: estimated number of library cache memory objects in a shared pool of that size (ESTD_LC_MEMORY_OBJECTS)
Est LC Time Saved (s): the parse time saved at that shared pool size because the needed library cache memory objects can be found in the pool. The time saved is time that would otherwise be spent reloading the needed objects into the shared pool (reload) after they had been aged out for lack of free memory (ESTD_LC_TIME_SAVED)
Est LC Time Saved Factr: factor of Est LC Time Saved(s), (Est LC Time Saved(s) / current LC Time Saved(s)) (ESTD_LC_TIME_SAVED_FACTOR)
Est LC Load Time (s): parse time spent loading at that shared pool size
Est LC Load Time Factr: factor of Est LC Load Time(s), (Est LC Load Time(s) / current LC Load Time(s)) (ESTD_LC_LOAD_TIME_FACTOR)
Est LC Mem Obj Hits (K): the number of times the needed library cache memory object is found already in the shared pool at that size (ESTD_LC_MEMORY_OBJECT_HITS)
If you are considering shrinking shared_pool_size, watch Est LC Mem Obj Hits (K): in the example above it reaches 334 at a 352M shared pool and does not grow beyond that, so the shared pool could be reduced towards that value. Bear in mind, though, the minimum shared pool requirements of each version and platform, and that in RAC structures such as gcs resources and gcs shadows also live in the shared pool, so when you increase db_cache_size you must watch the shared pool accordingly.
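The real-time counterpart of this section is V$SHARED_POOL_ADVICE; a minimal sketch:

-- Sketch: shared pool advisory from the running instance
SELECT shared_pool_size_for_estimate,
       shared_pool_size_factor,
       estd_lc_size,
       estd_lc_memory_objects,
       estd_lc_time_saved,
       estd_lc_memory_object_hits
  FROM v$shared_pool_advice
 ORDER BY shared_pool_size_for_estimate;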
7-8 SGA Target Advisory
SGA Target Advisory Snap: 70723 SGA Target SGA Size Est DB Est Physical Size (M) Factor Time (s) Reads ---------- ---------- ------------ ---------------- 3,752 0.1 1.697191E+09 1.4577142918E+12 7,504 0.3 1.222939E+09 832,293,601,354 11,256 0.4 1.000162E+09 538,390,923,784 15,008 0.5 895,087,191 399,888,743,900 18,760 0.6 840,062,594 327,287,716,803 22,512 0.8 806,389,685 282,881,041,331 26,264 0.9 782,971,706 251,988,446,808 30,016 1.0 765,293,424 228,664,652,276 33,768 1.1 751,135,535 210,005,616,650 37,520 1.3 739,350,016 194,387,820,900 41,272 1.4 733,533,785 187,299,216,679 45,024 1.5 732,921,550 187,299,216,679 48,776 1.6 732,691,962 187,299,216,679 52,528 1.8 732,538,908 187,299,216,679 56,280 1.9 732,538,917 187,299,216,679 60,032 2.0 732,462,391 187,299,458,716 -------------------------------------------------------------
The data in this section comes from WRH$_SGA_TARGET_ADVICE.
SGA Target Size: the sga_target size being evaluated
SGA Size Factor: size factor of the SGA, (estimated SGA target size / current SGA target size)
Est DB Time (s): estimated DB time, in seconds, that would be generated at that sga_target size
Est Physical Reads: estimated number of physical reads at that sga_target size
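The same data can be pulled from V$SGA_TARGET_ADVICE on the running instance:

-- Sketch: estimated DB time and physical reads for candidate sga_target sizes
SELECT sga_size,
       sga_size_factor,
       estd_db_time,
       estd_physical_reads
  FROM v$sga_target_advice
 ORDER BY sga_size;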
7-9 Streams Pool Advisory
Streams Pool Advisory DB/Inst: ITSCMP/itscmp2 Snap: 70723 Size for Size Est Spill Est Spill Est Unspill Est Unspill Est (MB) Factor Count Time (s) Count Time (s) ---------- --------- ----------- ----------- ----------- ----------- 64 0.5 0 0 0 0 128 1.0 0 0 0 0 192 1.5 0 0 0 0 256 2.0 0 0 0 0 320 2.5 0 0 0 0 384 3.0 0 0 0 0 448 3.5 0 0 0 0 512 4.0 0 0 0 0 576 4.5 0 0 0 0 640 5.0 0 0 0 0 704 5.5 0 0 0 0 768 6.0 0 0 0 0 832 6.5 0 0 0 0 896 7.0 0 0 0 0 960 7.5 0 0 0 0 1,024 8.0 0 0 0 0 1,088 8.5 0 0 0 0 1,152 9.0 0 0 0 0 1,216 9.5 0 0 0 0 1,280 10.0 0 0 0 0
This section only contains meaningful data when Streams replication is in use; data source WRH$_STREAMS_POOL_ADVICE.
Size for Est (MB): the streams pool size being evaluated
Size Factor: size factor of the streams pool
Est Spill Count: estimated number of messages spilled to disk at that streams pool size (ESTD_SPILL_COUNT)
Est Spill Time (s): estimated time, in seconds, spent spilling messages to disk at that size (ESTD_SPILL_TIME)
Est Unspill Count: estimated number of messages unspilled, i.e. read back from disk, at that size (ESTD_UNSPILL_COUNT)
Est Unspill Time (s): estimated time, in seconds, spent unspilling messages from disk at that size (ESTD_UNSPILL_TIME)
7-10 Java Pool Advisory
The java pool metrics are analogous to the shared pool ones and are not repeated here.
8 Wait Statistics
8-1 Buffer Wait Statistics
Buffer Wait Statistics  Snaps: 70719-70723
-> ordered by wait time desc, waits desc

Class                    Waits Total Wait Time (s)  Avg Time (ms)
------------------ ----------- ------------------- --------------
data block           8,442,041             407,259             48
undo header             16,212               1,711            106
undo block              21,023                 557             26
1st level bmb            1,038                 266            256
2nd level bmb              540                 185            342
bitmap block                90                  25            276
segment header             197                  13             66
file header block          132                   6             43
bitmap index block          18                   0              1
extent map                   2                   0              0
Data source: WRH$_WAITSTAT
This section summarizes waits by block class in the buffer cache; the waits are generally buffer busy waits and read by other session.
Class: the class of the block. An Oracle data block has both a class attribute and a type attribute; the type is recorded in the block itself (KCBH), while the class is kept in the buffer header (X$BH.CLASS).
Waits: number of waits on blocks of this class
Total Wait Time (s): total wait time for this class, in seconds
Avg Time (ms): average time per wait for this class, in ms
If you are using undo_management=AUTO (SMU), waits on undo header blocks caused by having too few rollback segments should normally not occur.
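For a live view of the same per-class waits (cumulative since instance startup), V$WAITSTAT can be used; a sketch:

-- Sketch: buffer waits by block class since instance startup
SELECT *
  FROM v$waitstat
 ORDER BY time DESC;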
For buffer contention waits caused by INSERT:
1. With manual segment space management (MSSM), consider increasing FREELISTS and FREELIST GROUPS
2. Use ASSM; ASSM itself has essentially nothing to tune
For contention caused by INSERT on an index:
Use reverse key indexes
Use hash partitioning with local indexes
Where possible, reduce the density of the index
8-2 Enqueue Activity
Enqueue: queue lock waits
Enqueue Activity Snaps: 70719-70723 -> only enqueues with waits are shown -> Enqueue stats gathered prior to 10g should not be compared with 10g data -> ordered by Wait Time desc, Waits desc Enqueue Type (Request Reason) ------------------------------------------------------------------------------ Requests Succ Gets Failed Gets Waits Wt Time (s) Av Wt Time(ms) ------------ ------------ ----------- ----------- ------------ -------------- TX-Transaction (index contention) 201,270 201,326 0 193,948 97,517 502.80 TM-DML 702,731 702,681 4 1,081 46,671 43,174.08 SQ-Sequence Cache 28,643 28,632 0 17,418 35,606 2,044.19 HW-Segment High Water Mark 9,210 8,845 376 1,216 12,505 10,283.85 TX-Transaction (row lock contention) 9,288 9,280 0 9,232 10,486 1,135.80 CF-Controlfile Transaction 15,851 14,094 1,756 2,798 4,565 1,631.64 TX-Transaction (allocate ITL entry) 471 369 102 360 169 469.28
Enqueue Type (Request Reason): the type of the enqueue. Before digging into an enqueue problem, at least be clear about the enqueue type and the enqueue mode: the type identifies the resource the enqueue protects (for example TM table locks, CF controlfile locks), while the mode is the mode in which it is held (SS, SX, S, SSX, X).
Requests: number of requests for the corresponding enqueue resource, or enqueue conversions (for example S to SSX)
Succ Gets: number of requests or conversions for this enqueue that succeeded
Failed Gets: number of requests or conversions for this enqueue that failed
Waits: number of waits caused by requests or conversions for this enqueue
Wt Time (s): wait time caused by requests or conversions for this enqueue
Av Wt Time(ms): average wait time per wait for this enqueue, Wt Time (s) / Waits, in ms
The main enqueue wait events:
enq: TX - row lock contention, index contention and allocate ITL waits
enq: TM - contention waits
enq: TS - Temporary Segment (also TableSpace)
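Outside of AWR, cumulative enqueue statistics since startup are in V$ENQUEUE_STAT; a sketch:

-- Sketch: enqueue types ordered by cumulative wait time
SELECT eq_type,
       total_req#,
       total_wait#,
       succ_req#,
       failed_req#,
       cum_wait_time
  FROM v$enqueue_stat
 ORDER BY cum_wait_time DESC;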
9-1 Undo Segment Summary
Undo Segment Summary Snaps: 70719-70723 -> Min/Max TR (mins) - Min and Max Tuned Retention (minutes) -> STO - Snapshot Too Old count, OOS - Out of Space count -> Undo segment block stats: -> uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed -> eS - expired Stolen, eR - expired Released, eU - expired reUsed Undo Num Undo Number of Max Qry Max Tx Min/Max STO/ uS/uR/uU/ TS# Blocks (K) Transactions Len (s) Concurcy TR (mins) OOS eS/eR/eU ---- ---------- --------------- -------- -------- --------- ----- -------------- 4 85.0 200,127 55,448 317 1040.2/10 0/0 0/0/0/0/0/0 ------------------------------------------------------------- Undo Segment Stats Snaps: 70719-70723 -> Most recent 35 Undostat rows, ordered by Time desc Num Undo Number of Max Qry Max Tx Tun Ret STO/ uS/uR/uU/ End Time Blocks Transactions Len (s) Concy (mins) OOS eS/eR/eU ------------ ----------- ------------ ------- ------- ------- ----- ------------ 29-Aug 05:52 11,700 35,098 55,448 234 1,070 0/0 0/0/0/0/0/0 29-Aug 05:42 12,203 24,677 54,844 284 1,065 0/0 0/0/0/0/0/0 29-Aug 05:32 14,132 37,826 54,241 237 1,060 0/0 0/0/0/0/0/0 29-Aug 05:22 14,379 32,315 53,637 317 1,050 0/0 0/0/0/0/0/0 29-Aug 05:12 15,693 34,157 53,033 299 1,045 0/0 0/0/0/0/0/0 29-Aug 05:02 16,878 36,054 52,428 250 1,040 0/0 0/0/0/0/0/0
Data source: WRH$_UNDOSTAT; undo usage information is flushed to V$UNDOSTAT every 10 minutes.
An undo extent is in one of three states: active, unexpired, expired.
active => the extent contains active transactions; an active undo extent normally cannot be reused or overwritten by other transactions
unexpired => the extent contains no active transaction, but less time than undo retention has passed since its undo records became inactive (note the automatic undo retention feature: because of it you may see most blocks in dba_undo_extents reported as unexpired, which is normal). For an undo tablespace without guaranteed retention, unexpired extents can be stolen and reused by other transactions.
expired => the extent contains no active transaction and the undo retention time has passed
Undo TS#: tablespace number of the undo tablespace in use; an instance can only use one undo tablespace at a time, and in RAC different nodes use different undo tablespaces
Num Undo Blocks (K): number of undo blocks consumed; (K) means multiply by 1000 for the actual value. This metric can be used to gauge undo block consumption and hence size the UNDO tablespace from the actual workload.
Number of Transactions: total number of transactions executed against this undo tablespace in the interval
Max Qry Len (s): the longest-running query in the interval, in seconds
Max Tx Concy: maximum transaction concurrency in the interval
Min/Max TR (mins): minimum and maximum tuned undo retention, in minutes; tuned undo retention comes from the automatic undo tuning feature, see the introduction to automatic undo tuning.
STO/OOS: STO is the number of ORA-01555 Snapshot Too Old errors; OOS is the Out of Space error count
uS - unexpired Stolen: number of attempts to steal undo space from unexpired undo extents
uR - unexpired Released: number of blocks released from unexpired undo extents
uU - unexpired reUsed: number of blocks in unexpired undo extents reused by other transactions
eS - expired Stolen: number of attempts to steal undo space from expired undo extents
eR - expired Released: number of blocks released from expired undo extents
eU - expired reUsed: number of blocks in expired undo extents that were reused
UNXPSTEALCNT - Number of attempts to obtain undo space by stealing unexpired extents from other transactions
UNXPBLKRELCNT - Number of unexpired blocks removed from certain undo segments so they can be used by other transactions
UNXPBLKREUCNT - Number of unexpired undo blocks reused by transactions
EXPSTEALCNT - Number of attempts to steal expired undo blocks from other undo segments
EXPBLKRELCNT - Number of expired undo blocks stolen from other undo segments
EXPBLKREUCNT - Number of expired undo blocks reused within the same undo segments
SSOLDERRCNT - Number of times the error ORA-01555 occurred. You can use this statistic to decide whether or not the UNDO_RETENTION initialization parameter is set properly given the size of the undo tablespace. Increasing the value of UNDO_RETENTION can reduce the occurrence of this error.
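The live equivalent of this section is V$UNDOSTAT (one row per 10-minute interval); a sketch that mirrors the AWR columns:

-- Sketch: recent undo usage, one row per 10-minute bucket
SELECT begin_time,
       end_time,
       undoblks,
       txncount,
       maxquerylen,
       maxconcurrency,
       tuned_undoretention,
       ssolderrcnt,
       nospaceerrcnt,
       unxpstealcnt,
       expstealcnt
  FROM v$undostat
 ORDER BY begin_time DESC;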
10-1 Latch Activity
Latch Activity Snaps: 70719-70723 -> "Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for willing-to-wait latch get requests -> "NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests -> "Pct Misses" for both should be very close to 0.0 Pct Avg Wait Pct Get Get Slps Time NoWait NoWait Latch Name Requests Miss /Miss (s) Requests Miss ------------------------ -------------- ------ ------ ------ ------------ ------ AQ deq hash table latch 4 0.0 0 0 N/A ASM Keyed state latch 9,048 0.1 0.2 0 0 N/A ASM allocation 15,017 0.2 0.8 1 0 N/A ASM db client latch 72,745 0.0 0 0 N/A ASM map headers 5,860 0.6 0.6 1 0 N/A ASM map load waiting lis 1,462 0.0 0 0 N/A ASM map operation freeli 63,539 0.1 0.4 1 0 N/A ASM map operation hash t 76,484,447 0.1 1.0 66 0 N/A
Latch Name: name of the latch
Get Requests: number of times the latch was requested in willing-to-wait mode and obtained
Pct Get Miss: a miss means the latch was requested in willing-to-wait mode but the requester had to wait; Pct Get Miss = Misses / Get Requests; the miss count can be found in the Latch Sleep Breakdown section below
Avg Slps/Miss: a sleep means a willing-to-wait request eventually forced the session to sleep while waiting for the latch; Avg Slps/Miss = Sleeps / Misses; the sleep count can be found in the Latch Sleep Breakdown section below
Wait Time (s): time spent waiting for the latch, in seconds
NoWait Requests: number of times the latch was requested in no-wait mode
Pct NoWait Miss: percentage of no-wait latch requests that failed immediately
For highly concurrent latches such as cache buffers chains, Pct Miss should be very close to 0.
General tuning guidelines:
If latch: cache buffers chains is a Top 5 event, tune the SQL to reduce full table scans and reduce the logical reads of the top buffer-gets statements
If latch: redo copy or redo allocation waits are significant, consider increasing LOG_BUFFER
If latch: library cache waits are significant, consider increasing shared_pool_size
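To see the same counters live (including the miss, sleep and spin figures used in this and the next section), V$LATCH holds the cumulative values since startup; a sketch:

-- Sketch: latches with the most misses since instance startup
SELECT name,
       gets,
       misses,
       sleeps,
       spin_gets,
       immediate_gets,
       immediate_misses
  FROM v$latch
 ORDER BY misses DESC;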
10-2 Latch Sleep Breakdown
Latch Sleep Breakdown DB/Inst: ITSCMP/itscmp2 Snaps: 70719-70723 -> ordered by misses desc Get Spin Latch Name Requests Misses Sleeps Gets -------------------------- --------------- ------------ ----------- ----------- cache buffers chains 3,365,097,866 12,831,875 130,058 12,683,450 row cache objects 69,050,058 349,839 1,320 348,649 session idle bit 389,437,460 268,285 2,768 265,752 enqueue hash chains 8,698,453 239,880 22,476 219,950 ges resource hash list 8,388,730 158,894 70,728 91,104 gc element 100,383,385 135,759 6,285 129,742 gcs remastering latch 12,213,169 72,373 1 72,371 enqueues 4,662,545 46,374 259 46,155 ASM map operation hash tab 76,484,447 46,231 45,210 1,952 Lsod array latch 72,598 24,224 24,577 1,519
Latch Name: name of the latch
Get Requests: number of times the latch was requested in willing-to-wait mode and obtained
Misses: number of willing-to-wait requests in which the requester had to wait
From 9i onwards there are generally two outcomes after a miss: the latch is obtained by spinning (spin gets), or the process sleeps until it is posted; for details see the article on how Oracle latches work from 9i onwards.
For the pre-8i latch algorithm see: Oracle Latch: pseudo-code describing how latches work.
So from 9i onwards, roughly Misses = Sleeps + Spin Gets, although this does not hold absolutely.
Sleeps: number of willing-to-wait requests that eventually made the session sleep while waiting for the latch
Spin Gets: number of willing-to-wait requests that, after a miss, obtained the latch by spinning
10-3 Latch Miss Sources
Latch Miss Sources Snaps: 70719-70723 -> only latches with sleeps are shown -> ordered by name, sleeps desc NoWait Waiter Latch Name Where Misses Sleeps Sleeps ------------------------ -------------------------- ------- ---------- -------- ASM Keyed state latch kfksolGet 0 1 1 ASM allocation kfgpnSetDisks2 0 17 0 ASM allocation kfgpnClearDisks 0 5 0 ASM allocation kfgscCreate 0 4 0 ASM allocation kfgrpGetByName 0 1 26 ASM map headers kffmUnidentify_3 0 7 8 ASM map headers kffmAllocate 0 6 0 ASM map headers kffmIdentify 0 6 11 ASM map headers kffmFree 0 1 0 ASM map operation freeli kffmTranslate2 0 15 8 ASM map operation hash t kffmUnidentify 0 44,677 36,784 ASM map operation hash t kffmTranslate 0 220 3,517
The data source is DBA_HIST_LATCH_MISSES_SUMMARY.
Latch Name: name of the latch
Where: the kernel functions (code locations) that were holding the latch, not the code paths requesting it; for example the function kcbgtcr gets a block for consistent read, so it is perfectly normal for it to hold the latch: cache buffers chains
NoWait Misses: number of no-wait latch requests that failed immediately
Sleeps: number of willing-to-wait requests that eventually made the session sleep while waiting for the latch (times a sleep resulted from making the latch request)
Waiter Sleeps: number of sleeps by waiters at each location; Sleeps counts the sleeps of the blocker's request, while Waiter Sleeps counts the sleeps of those being blocked
10-4 Mutex Sleep Summary
Mutex Sleep Summary Snaps: 70719-70723 -> ordered by number of sleeps desc Wait Mutex Type Location Sleeps Time (ms) --------------------- -------------------------------- ------------ ------------ Cursor Pin kksfbc [KKSCHLFSP2] 4,364 14,520 Cursor Pin kkslce [KKSCHLPIN2] 2,396 2,498 Library Cache kglpndl1 95 903 475 Library Cache kglpin1 4 800 458 Library Cache kglpnal2 91 799 259 Library Cache kglget1 1 553 1,697 Library Cache kglpnal1 90 489 88 Library Cache kgllkdl1 85 481 1,528 Cursor Pin kksLockDelete [KKSCHLPIN6] 410 666 Cursor Stat kkocsStoreBindAwareStats [KKSSTA 346 497 Library Cache kglhdgn2 106 167 348 Library Cache kglhdgh2 64 26 84 Library Cache kgldtin1 42 19 55 Cursor Pin kksfbc [KKSCHLPIN1] 13 34 Library Cache kglhdgn1 62 11 13 Library Cache kgllkal1 80 9 12 Library Cache kgllkc1 57 6 0 Cursor Pin kksSetBindType [KKSCHLPIN3] 5 5 Library Cache kglGetHandleReference 124 4 20 Library Cache kglUpgradeLock 119 4 0 Library Cache kglget2 2 3 0 Library Cache kglati1 45 1 0 Library Cache kglini1 32 1 0 Library Cache kglobld1 75 1 0 Library Cache kglobpn1 71 1 0
Mutex is a memory lock mechanism introduced in 10.2.0.2; for a detailed description see "Understanding Oracle Mutex": http://www.askmaclean.com/archives/understanding-oracle-mutex.html
Mutex Type
The mutex type is really the name of the mutex client; in 10.2 essentially only KKS used mutexes, so there were just three types:
Cursor Stat (kgx_kks1)
Cursor Parent (kgx_kks2)
Cursor Pin (kgx_kks3)
11g added Library Cache.
Location: the code location that requested the mutex, not the code path (kernel function) currently holding it
The most common functions in 10.2 are:
kkspsc0 - parses cursors; checks whether a parent cursor heap 0 already exists for the cursor being parsed
kksfbc - finds a suitable child cursor or creates a new one
kksFindCursorstat
Sleeps:
When a mutex is requested, this is normally called a get request. If the initial request is not granted, the process spins for up to 255 iterations (_mutex_spin_count, mutex spin count), checking on each iteration whether the mutex has been released.
If the mutex is still not released after spinning, the process enters the mutex wait event corresponding to the requested mutex. The actual wait event and wait method are determined by the mutex type, for example Cursor Pin or Cursor Parent; the wait may be a blocking wait or a sleep.
Note, however, that the sleep column in the V$MUTEX_SLEEP_* views means the number of waits; the relevant code increments this sleep counter when it begins waiting.
Wait timing starts before the process enters the wait; when a process finishes its wait, the wait time is added to the total. The process then retries the mutex, and if it is still unavailable it goes through the spin/wait loop again.
The GETS column of V$MUTEX_SLEEP_HISTORY is only incremented when a mutex is obtained successfully.
Wait Time (ms): as with latches, spin time is not counted towards mutex time; only the time spent waiting is included.
=====================================================================
11 Segment Statistics
11-1 Segments by Logical Reads
Segments by Logical Reads DB/Inst: MAC/MAC2 Snaps: 70719-70723 -> Total Logical Reads: 2,021,476,421 -> Captured Segments account for 83.7% of Total Tablespace Subobject Obj. Logical Owner Name Object Name Name Type Reads %Total ---------- ---------- -------------------- ---------- ----- ------------ ------- CONTENT_OW INDEX_TS MZ_PRODUCT_ATTRIBUTE INDEX 372,849,920 18.44 CONTENT_OW INDEX_TS MZ_PRODUCT__LS_PK INDEX 329,829,632 16.32 CONTENT_OW DATA_TS MZ_PRODUCT_ATTRIBUTE TABLE 218,419,008 10.80 CONTENT_OW PLAYLIST_A MZ_PLAYLIST_ARTIST TABLE 182,426,240 9.02 CONTENT_OW DATA_TS MZ_PRODUCT TABLE 108,597,376 5.37
Owner: owner of the segment
Tablespace Name: tablespace the segment resides in
Object Name: object name
Subobject Name: sub-object name, for example a partition of a partitioned table
Obj Type: object type, usually TABLE or INDEX, or a partition/subpartition
Logical Reads: logical reads against the segment, counted in blocks
%Total: percentage of total logical reads (logical reads on this object / total logical reads of the database)
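The live counterpart of the "Segments by ..." sections is V$SEGMENT_STATISTICS; a sketch for the top segments by logical reads (the same pattern works for any statistic_name):

-- Sketch: top 10 segments by cumulative logical reads since startup
SELECT owner, object_name, object_type, value AS logical_reads
  FROM (SELECT owner, object_name, object_type, value
          FROM v$segment_statistics
         WHERE statistic_name = 'logical reads'
         ORDER BY value DESC)
 WHERE ROWNUM <= 10;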
11-2 Segments by Physical Reads
Segments by Physical Reads DB/Inst: MAC/MAC2 Snaps: 70719-70723 -> Total Physical Reads: 56,839,035 -> Captured Segments account for 51.9% of Total Tablespace Subobject Obj. Physical Owner Name Object Name Name Type Reads %Total ---------- ---------- -------------------- ---------- ----- ------------ ------- CONTENT_OW SONG_TS MZ_SONG TABLE 7,311,928 12.86 CONTENT_OW DATA_TS MZ_CS_WORK_PENDING_R TABLE 4,896,554 8.61 CONTENT_OW DATA_TS MZ_CONTENT_PROVIDER_ TABLE 3,099,387 5.45 CONTENT_OW DATA_TS MZ_PRODUCT_ATTRIBUTE TABLE 1,529,971 2.69 CONTENT_OW DATA_TS MZ_PUBLICATION TABLE 1,391,735 2.45
Physical Reads: physical reads against the segment, counted in blocks
%Total: percentage of total physical reads (physical reads on this object / total physical reads of the database)
11-3 Segments by Physical Read Requests
Segments by Physical Read Requests DB/Inst: MAC/MAC2 Snaps: 70719-70723 -> Total Physical Read Requests: 33,936,360 -> Captured Segments account for 45.5% of Total Tablespace Subobject Obj. Phys Read Owner Name Object Name Name Type Requests %Total ---------- ---------- -------------------- ---------- ----- ------------ ------- CONTENT_OW DATA_TS MZ_CONTENT_PROVIDER_ TABLE 3,099,346 9.13 CONTENT_OW DATA_TS MZ_PRODUCT_ATTRIBUTE TABLE 1,529,950 4.51 CONTENT_OW DATA_TS MZ_PRODUCT TABLE 1,306,756 3.85 CONTENT_OW DATA_TS MZ_AUDIO_FILE TABLE 910,537 2.68 CONTENT_OW INDEX_TS MZ_PRODUCT_ATTRIBUTE INDEX 820,459 2.42
Phys Read Requests: number of physical read requests on the segment
%Total: (physical read requests on this segment / physical read IO requests)
11-4 Segments by UnOptimized Reads
Segments by UnOptimized Reads DB/Inst: MAC/MAC2 Snaps: 70719-70723 -> Total UnOptimized Read Requests: 811,466 -> Captured Segments account for 58.5% of Total Tablespace Subobject Obj. UnOptimized Owner Name Object Name Name Type Reads %Total ---------- ---------- -------------------- ---------- ----- ------------ ------- CONTENT_OW DATA_TS MZ_CONTENT_PROVIDER_ TABLE 103,580 12.76 CONTENT_OW SONG_TS MZ_SONG TABLE 56,946 7.02 CONTENT_OW DATA_TS MZ_IMAGE TABLE 47,017 5.79 CONTENT_OW DATA_TS MZ_PRODUCT_ATTRIBUTE TABLE 40,950 5.05 CONTENT_OW DATA_TS MZ_PRODUCT TABLE 30,406 3.75
UnOptimized Reads: UnOptimized Read Reqs = Physical Read Reqs - Optimized Read Reqs
Optimized Read Requests are physical read requests satisfied by the Exadata Smart Flash Cache (the Smart Flash Cache in Oracle Exadata V2; note that despite the same name, the concept and use of 'Smart Flash Cache' in Exadata V2 differ from the Database Smart Flash Cache). Read requests satisfied from the smart flash cache are considered optimized because they are much faster than ordinary disk reads.
In addition, reads that use storage indexes during a smart scan are also counted as optimized read requests, since reading irrelevant data can be avoided.
When you are not on Exadata, UnOptimized Read Reqs always equals Physical Read Reqs.
%Total: (unoptimized read requests on this segment / (physical read IO requests - physical read requests optimized))
11-5 Segments by Optimized Reads
Segments by Optimized Reads DB/Inst: MAC/MAC2 Snaps: 70719-70723 -> Total Optimized Read Requests: 33,124,894 -> Captured Segments account for 45.2% of Total Tablespace Subobject Obj. Optimized Owner Name Object Name Name Type Reads %Total ---------- ---------- -------------------- ---------- ----- ------------ ------- CONTENT_OW DATA_TS MZ_CONTENT_PROVIDER_ TABLE 2,995,766 9.04 CONTENT_OW DATA_TS MZ_PRODUCT_ATTRIBUTE TABLE 1,489,000 4.50 CONTENT_OW DATA_TS MZ_PRODUCT TABLE 1,276,350 3.85 CONTENT_OW DATA_TS MZ_AUDIO_FILE TABLE 890,775 2.69 CONTENT_OW INDEX_TS MZ_AM_REQUEST_IX3 INDEX 816,067 2.46
Optimized reads were explained above; the unit here is the number of requests.
%Total: (optimized read requests on this segment / physical read requests optimized)
11-6 Segments by Direct Physical Reads
Segments by Direct Physical Reads DB/Inst: MAC/MAC2 Snaps: 70719-70723 -> Total Direct Physical Reads: 14,118,552 -> Captured Segments account for 94.2% of Total Tablespace Subobject Obj. Direct Owner Name Object Name Name Type Reads %Total ---------- ---------- -------------------- ---------- ----- ------------ ------- CONTENT_OW SONG_TS MZ_SONG TABLE 7,084,416 50.18 CONTENT_OW DATA_TS MZ_CS_WORK_PENDING_R TABLE 4,839,984 34.28 CONTENT_OW DATA_TS MZ_PUBLICATION TABLE 1,361,133 9.64 CONTENT_OW DATA_TS SYS_LOB0000203660C00 LOB 5,904 .04 CONTENT_OW DATA_TS SYS_LOB0000203733C00 LOB 1,656 .01
Direct Reads: direct path physical reads, counted in blocks
%Total: (direct path reads on this segment / total physical reads direct)
11-7 Segments by Physical Writes
Segments by Physical Writes DB/Inst: MAC/MAC2 Snaps: 70719-70723 -> Total Physical Writes: 590,563 -> Captured Segments account for 38.3% of Total Tablespace Subobject Obj. Physical Owner Name Object Name Name Type Writes %Total ---------- ---------- -------------------- ---------- ----- ------------ ------- CONTENT_OW DATA_TS MZ_CS_WORK_PENDING_R TABLE 23,595 4.00 CONTENT_OW DATA_TS MZ_PODCAST TABLE 19,834 3.36 CONTENT_OW INDEX_TS MZ_IMAGE_IX2 INDEX 16,345 2.77 SYS SYSAUX WRH$_ACTIVE_SESSION_ 1367_70520 TABLE 14,173 2.40 CONTENT_OW INDEX_TS MZ_AM_REQUEST_IX3 INDEX 9,645 1.63
Physical Writes: physical writes, counted in blocks
%Total: (physical writes on this segment / total physical writes)
11-9 Segments by Physical Write Requests
Segments by Physical Write Requests DB/Inst: MAC/MAC2 Snaps: 70719-70723 -> Total Physical Write Requestss: 436,789 -> Captured Segments account for 43.1% of Total Tablespace Subobject Obj. Phys Write Owner Name Object Name Name Type Requests %Total ---------- ---------- -------------------- ---------- ----- ------------ ------- CONTENT_OW DATA_TS MZ_CS_WORK_PENDING_R TABLE 22,581 5.17 CONTENT_OW DATA_TS MZ_PODCAST TABLE 19,797 4.53 CONTENT_OW INDEX_TS MZ_IMAGE_IX2 INDEX 14,529 3.33 CONTENT_OW INDEX_TS MZ_AM_REQUEST_IX3 INDEX 9,434 2.16 CONTENT_OW DATA_TS MZ_AM_REQUEST TABLE 8,618 1.97
Phys Write Requests: number of physical write requests
%Total: (physical write requests on this segment / physical write IO requests)
11-10 Segments by Direct Physical Writes
Segments by Direct Physical Writes DB/Inst: MAC/MAC2 Snaps: 70719-70723 -> Total Direct Physical Writes: 29,660 -> Captured Segments account for 18.3% of Total Tablespace Subobject Obj. Direct Owner Name Object Name Name Type Writes %Total ---------- ---------- -------------------- ---------- ----- ------------ ------- SYS SYSAUX WRH$_ACTIVE_SESSION_ 1367_70520 TABLE 4,601 15.51 CONTENT_OW DATA_TS SYS_LOB0000203733C00 LOB 620 2.09 CONTENT_OW DATA_TS SYS_LOB0000203660C00 LOB 134 .45 CONTENT_OW DATA_TS SYS_LOB0000203779C00 LOB 46 .16 CONTENT_OW DATA_TS SYS_LOB0000203796C00 LOB 41 .14
Direct Writes: direct path writes, counted in blocks
%Total: (direct path writes on this segment / physical writes direct)
11-11 Segments by Table Scans
Segments by Table Scans DB/Inst: MAC/MAC2 Snaps: 70719-70723 -> Total Table Scans: 10,713 -> Captured Segments account for 1.0% of Total Tablespace Subobject Obj. Table Owner Name Object Name Name Type Scans %Total ---------- ---------- -------------------- ---------- ----- ------------ ------- CONTENT_OW DATA_TS MZ_PUBLICATION TABLE 92 .86 CONTENT_OW DATA_TS MZ_CS_WORK_PENDING_R TABLE 14 .13 CONTENT_OW SONG_TS MZ_SONG TABLE 3 .03 CONTENT_OW DATA_TS MZ_AM_REQUEST TABLE 1 .01
Table Scans: taken from dba_hist_seg_stat.table_scans_delta; note that this statistic is not very precise.
11-12 Segments by DB Blocks Changes
Segments by DB Blocks Changes DB/Inst: MAC/MAC2 Snaps: 70719-70723 -> % of Capture shows % of DB Block Changes for each top segment compared -> with total DB Block Changes for all segments captured by the Snapshot Tablespace Subobject Obj. DB Block % of Owner Name Object Name Name Type Changes Capture ---------- ---------- -------------------- ---------- ----- ------------ ------- CONTENT_OW INDEX_TS MZ_AM_REQUEST_IX8 INDEX 347,856 10.21 CONTENT_OW INDEX_TS MZ_AM_REQUEST_IX3A INDEX 269,504 7.91 CONTENT_OW INDEX_TS MZ_AM_REQUEST_PK INDEX 251,904 7.39 CONTENT_OW DATA_TS MZ_AM_REQUEST TABLE 201,056 5.90 CONTENT_OW INDEX_TS MZ_PRODUCT_ATTRIBUTE INDEX 199,888 5.86
DB Block Changes: counted in blocks
%Total: (block changes on this segment / db block changes)
11-13 Segments by Row Lock Waits
Segments by Row Lock Waits DB/Inst: MAC/MAC2 Snaps: 70719-70723 -> % of Capture shows % of row lock waits for each top segment compared -> with total row lock waits for all segments captured by the Snapshot Row Tablespace Subobject Obj. Lock % of Owner Name Object Name Name Type Waits Capture ---------- ---------- -------------------- ---------- ----- ------------ ------- CONTENT_OW LOB_8K_TS MZ_ASSET_WORK_EVENT_ INDEX 72,005 43.86 CONTENT_OW LOB_8K_TS MZ_CS_WORK_NOTE_RE_I _2013_1_36 INDEX 13,795 8.40 CONTENT_OW LOB_8K_TS MZ_CS_WORK_INFO_PART _2013_5_35 INDEX 12,383 7.54 CONTENT_OW INDEX_TS MZ_AM_REQUEST_IX3A INDEX 8,937 5.44 CONTENT_OW DATA_TS MZ_AM_REQUEST TABLE 8,531 5.20
Row Lock Waits: the number of row lock waits on the segment, from dba_hist_seg_stat.row_lock_waits_delta
11-14 Segments by ITL WAITS
Segments by ITL Waits DB/Inst: MAC/MAC2 Snaps: 70719-70723 -> % of Capture shows % of ITL waits for each top segment compared -> with total ITL waits for all segments captured by the Snapshot Tablespace Subobject Obj. ITL % of Owner Name Object Name Name Type Waits Capture ---------- ---------- -------------------- ---------- ----- ------------ ------- CONTENT_OW LOB_8K_TS MZ_ASSET_WORK_EVENT_ INDEX 95 30.16 CONTENT_OW LOB_8K_TS MZ_CS_WORK_NOTE_RE_I _2013_1_36 INDEX 48 15.24 CONTENT_OW LOB_8K_TS MZ_CS_WORK_INFO_PART _2013_5_35 INDEX 21 6.67 CONTENT_OW INDEX_TS MZ_SALABLE_FIRST_AVA INDEX 21 6.67 CONTENT_OW DATA_TS MZ_CS_WORK_PENDING_R TABLE 20 6.35
For an introduction to ITLs see: http://www.askmaclean.com/archives/enqueue-tx-row-lock-index-itl-wait-event.html
ITL Waits: number of waits for an ITL slot, from dba_hist_seg_stat.itl_waits_delta
11-14 Segments by Buffer Busy Waits
Segments by Buffer Busy Waits DB/Inst: MAC/MAC2 Snaps: 70719-70723 -> % of Capture shows % of Buffer Busy Waits for each top segment compared -> with total Buffer Busy Waits for all segments captured by the Snapshot Buffer Tablespace Subobject Obj. Busy % of Owner Name Object Name Name Type Waits Capture ---------- ---------- -------------------- ---------- ----- ------------ ------- CONTENT_OW LOB_8K_TS MZ_ASSET_WORK_EVENT_ INDEX 251,073 57.07 CONTENT_OW LOB_8K_TS MZ_CS_WORK_NOTE_RE_I _2013_1_36 INDEX 36,186 8.23 CONTENT_OW LOB_8K_TS MZ_CS_WORK_INFO_PART _2013_5_35 INDEX 31,786 7.23 CONTENT_OW INDEX_TS MZ_AM_REQUEST_IX3A INDEX 15,663 3.56 CONTENT_OW INDEX_TS MZ_CS_WORK_PENDING_R INDEX 11,087 2.52
Buffer Busy Waits: number of buffer busy waits on the segment, from dba_hist_seg_stat.buffer_busy_waits_delta
11-15 Segments by Global Cache Buffer Busy
Segments by Global Cache Buffer BusyDB/Inst: MAC/MAC2 Snaps: 70719-7072 -> % of Capture shows % of GC Buffer Busy for each top segment compared -> with GC Buffer Busy for all segments captured by the Snapshot GC Tablespace Subobject Obj. Buffer % of Owner Name Object Name Name Type Busy Capture ---------- ---------- -------------------- ---------- ----- ------------ ------- CONTENT_OW INDEX_TS MZ_AM_REQUEST_IX3 INDEX 2,135,528 50.07 CONTENT_OW DATA_TS MZ_CONTENT_PROVIDER_ TABLE 652,900 15.31 CONTENT_OW LOB_8K_TS MZ_ASSET_WORK_EVENT_ INDEX 552,161 12.95 CONTENT_OW LOB_8K_TS MZ_CS_WORK_NOTE_RE_I _2013_1_36 INDEX 113,042 2.65 CONTENT_OW LOB_8K_TS MZ_CS_WORK_INFO_PART _2013_5_35 INDEX 98,134 2.30
GC Buffer Busy: number of gc buffer busy waits on the segment, from dba_hist_seg_stat.gc_buffer_busy_delta
11-15 Segments by CR Blocks Received
Segments by CR Blocks Received DB/Inst: MAC/MAC2 Snaps: 70719-70723 -> Total CR Blocks Received: 763,037 -> Captured Segments account for 40.9% of Total CR Tablespace Subobject Obj. Blocks Owner Name Object Name Name Type Received %Total ---------- ---------- -------------------- ---------- ----- ------------ ------- CONTENT_OW DATA_TS MZ_AM_REQUEST TABLE 69,100 9.06 CONTENT_OW DATA_TS MZ_CS_WORK_PENDING_R TABLE 44,491 5.83 CONTENT_OW INDEX_TS MZ_AM_REQUEST_IX3A INDEX 36,830 4.83 CONTENT_OW DATA_TS MZ_PODCAST TABLE 36,632 4.80 CONTENT_OW INDEX_TS MZ_AM_REQUEST_PK INDEX 19,646 2.57
CR Blocks Received: the number of global cache CR blocks received by the local node in RAC; from dba_hist_seg_stat.gc_cr_blocks_received_delta
%Total: (global cache CR blocks received on this segment by this node / gc cr blocks received)
11-16 Segments by Current Blocks Received
Segments by Current Blocks ReceivedDB/Inst: MAC/MAC2 Snaps: 70719-70723 -> Total Current Blocks Received: 704,731 -> Captured Segments account for 61.8% of Total Current Tablespace Subobject Obj. Blocks Owner Name Object Name Name Type Received %Total ---------- ---------- -------------------- ---------- ----- ------------ ------- CONTENT_OW INDEX_TS MZ_AM_REQUEST_IX3 INDEX 56,287 7.99 CONTENT_OW INDEX_TS MZ_AM_REQUEST_IX3A INDEX 45,139 6.41 CONTENT_OW DATA_TS MZ_AM_REQUEST TABLE 40,350 5.73 CONTENT_OW DATA_TS MZ_CS_WORK_PENDING_R TABLE 22,808 3.24 CONTENT_OW INDEX_TS MZ_AM_REQUEST_IX8 INDEX 13,343 1.89
Current Blocks Received: the number of global cache current blocks received by the local node in RAC; from DBA_HIST_SEG_STAT.gc_cu_blocks_received_delta
%Total: (global cache current blocks received on this segment by this node / gc current blocks received)
12 Dictionary Cache Stats
Dictionary Cache Stats DB/Inst: MAC/MAC2 Snaps: 70719-70723 -> "Pct Misses" should be very low (< 2% in most cases) -> "Final Usage" is the number of cache entries being used Get Pct Scan Pct Mod Final Cache Requests Miss Reqs Miss Reqs Usage ------------------------- ------------ ------ ------- ----- -------- ---------- dc_awr_control 87 2.3 0 N/A 6 1 dc_global_oids 1,134 7.8 0 N/A 0 13 dc_histogram_data 6,119,027 0.9 0 N/A 0 11,784 dc_histogram_defs 1,898,714 2.3 0 N/A 0 5,462 dc_object_grants 175 26.9 0 N/A 0 4 dc_objects 10,254,514 0.2 0 N/A 0 3,807 dc_profiles 8,452 0.0 0 N/A 0 2 dc_rollback_segments 3,031,044 0.0 0 N/A 0 1,947 dc_segments 1,812,243 1.4 0 N/A 10 3,595 dc_sequences 15,783 69.6 0 N/A 15,782 20 dc_table_scns 70 2.9 0 N/A 0 1 dc_tablespaces 1,628,112 0.0 0 N/A 0 37 dc_users 2,037,138 0.0 0 N/A 0 52 global database name 7,698 0.0 0 N/A 0 1 outstanding_alerts 264 99.6 0 N/A 8 1 sch_lj_oids 51 7.8 0 N/A 0 1
The dictionary cache is also called the row cache.
The data source is dba_hist_rowcache_summary.
Cache: name of the dictionary cache class, kqrstcid <=> kqrsttxt, for example cid=3 is dc_rollback_segments
Get Requests: number of requests for objects in this dictionary cache (GETS)
Miss: GETMISSES, number of requests for this dictionary cache that missed
Pct Miss: GETMISSES / GETS, the miss ratio; it should be very low, under 2%, otherwise large numbers of row cache lock waits may appear
Scan Reqs: number of scan requests; scans occur in kqrssc, kqrpScan and kqrpsiv and increment the scan counter kqrstsrq++ (scan requests). For example, a tablespace migration calls the kttm2b function, which in order to delete uet$ rows safely calls back kqrpsiv (used extent cache); in practice scans are rarely seen.
Pct Miss: SCANMISSES / SCANS
Mod Reqs: number of requests to modify dictionary cache objects; in the data above dc_sequences shows a very high Mod Reqs, because sequences are dictionary objects that change frequently
Final Usage: total number of dictionary cache entries that contain valid data, i.e. the row cache entries currently in use (USAGE: number of cache entries that contain valid data)
Dictionary Cache Stats (RAC) DB/Inst: MAC/MAC2 Snaps: 70719-70723 GES GES GES Cache Requests Conflicts Releases ------------------------- ------------ ------------ ------------ dc_awr_control 14 2 0 dc_global_oids 88 0 102 dc_histogram_defs 43,518 0 43,521 dc_objects 21,608 17 21,176 dc_profiles 1 0 1 dc_segments 24,974 14 24,428 dc_sequences 25,178 10,644 347 dc_table_scns 2 0 2 dc_tablespaces 165 0 166 dc_users 119 0 119 outstanding_alerts 478 8 250 sch_lj_oids 4 0 4
GES Requests: kqrstilr, total instance lock requests, the number of times instance locks were requested through the Global Enqueue Service (GES)
A GES request can be triggered, for example, by dumping a cache object, or by the LCK process having to free some parent objects in the background (kqrbfr)
GES Conflicts: kqrstifr, instance lock forced-releases, the number of times the LCK process released a lock in response to an AST; only incremented in kqrbrl
GES Releases: kqrstisr, instance lock self-releases, incremented when the LCK process frees some parent objects in the background
In the data above only dc_sequences shows a significant number of GES Conflicts. Using the ORDERED and NOCACHE options on sequences has a side effect in RAC, namely "row cache lock" waits originating from the DC_SEQUENCES row cache. The GETS requests, modifications, GES requests and GES conflicts on DC_SEQUENCES correlate with the execution frequency of the particular SQL that generates a new sequence number.
In Oracle 10g an ORDERED sequence can also cause heavy DFS lock handle waits under high concurrency, due to bug 5209859.
13 Library Cache Activity
Library Cache Activity DB/Inst: MAC/MAC2 Snaps: 70719-70723 -> "Pct Misses" should be very low Get Pct Pin Pct Invali- Namespace Requests Miss Requests Miss Reloads dations --------------- ------------ ------ -------------- ------ ---------- -------- ACCOUNT_STATUS 8,436 0.3 0 N/A 0 0 BODY 8,697 0.7 15,537 0.7 49 0 CLUSTER 317 4.7 321 4.7 0 0 DBLINK 9,212 0.1 0 N/A 0 0 EDITION 4,431 0.0 8,660 0.0 0 0 HINTSET OBJECT 1,027 9.5 1,027 14.4 0 0 INDEX 792 18.2 792 18.2 0 0 QUEUE 10 0.0 1,733 0.0 0 0 RULESET 0 N/A 8 87.5 7 0 SCHEMA 8,169 0.0 0 N/A 0 0 SQL AREA 533,409 4.8 -4,246,727,944 101.1 44,864 576 SQL AREA BUILD 71,500 65.5 0 N/A 0 0 SQL AREA STATS 41,008 90.3 41,008 90.3 1 0 TABLE/PROCEDURE 320,310 0.6 1,033,991 3.6 25,378 0 TRIGGER 847 0.0 38,442 0.3 110 0
Namespace: the library cache namespace
Get Requests: number of times library cache locks were requested on objects in this namespace (GETS)
GETHITS: number of times the object's library cache handle was found already in memory
Pct Miss: (1 - (GETHITS / GETS)) * 100
Pin Requests: number of pins requested on objects in this namespace (PINS)
PINHITS: number of times the heap metadata of the object to be pinned was already in the shared pool
Pct Miss: (1 - (PINHITS / PINS)) * 100
Reloads: pins of an object that are not the first pin since the object handle was created and that required the object to be loaded from disk; if Reloads is high, consider increasing shared_pool_size
Invalidations: number of times objects in this namespace were marked invalid because a dependent object was modified
Library Cache Activity (RAC) DB/Inst: MAC/MAC2 Snaps: 70719-70723 GES Lock GES Pin GES Pin GES Inval GES Invali- Namespace Requests Requests Releases Requests dations --------------- ------------ ------------ ------------ ----------- ----------- ACCOUNT_STATUS 8,436 0 0 0 0 BODY 0 15,497 15,497 0 0 CLUSTER 321 321 321 0 0 DBLINK 9,212 0 0 0 0 EDITION 4,431 4,431 4,431 0 0 HINTSET OBJECT 1,027 1,027 1,027 0 0 INDEX 792 792 792 0 0 QUEUE 8 1,733 1,733 0 0 RULESET 0 8 8 0 0 SCHEMA 4,226 0 0 0 0 TABLE/PROCEDURE 373,163 704,816 704,816 0 0 TRIGGER 0 38,430 38,430 0 0
GES Lock Requests: DLM_LOCK_REQUESTS, lock instance-lock requests, the number of times lock instance locks were requested
GES Pin Requests: DLM_PIN_REQUESTS, pin instance-lock requests, the number of times pin instance locks were requested
GES Pin Releases: DLM_PIN_RELEASES, the number of times pin instance locks were released
GES Inval Requests: DLM_INVALIDATION_REQUESTS, the number of times invalidation instance locks were requested
GES Invalidations: DLM_INVALIDATIONS, the number of invalidation pings received from other nodes
14 Process Memory Summary
Process Memory Summary DB/Inst: MAC/MAC2 Snaps: 70719-70723 -> B: Begin Snap E: End Snap -> All rows below contain absolute values (i.e. not diffed over the interval) -> Max Alloc is Maximum PGA Allocation size at snapshot time -> Hist Max Alloc is the Historical Max Allocation for still-connected processes -> ordered by Begin/End snapshot, Alloc (MB) desc Hist Avg Std Dev Max Max Alloc Used Alloc Alloc Alloc Alloc Num Num Category (MB) (MB) (MB) (MB) (MB) (MB) Proc Alloc - -------- --------- --------- -------- -------- ------- ------- ------ ------ B Other 16,062.7 N/A 6.1 66.6 3,370 3,370 2,612 2,612 SQL 5,412.2 4,462.9 2.2 89.5 4,483 4,483 2,508 2,498 Freeable 2,116.4 .0 .9 6.3 298 N/A 2,266 2,266 PL/SQL 94.0 69.8 .0 .0 1 1 2,610 2,609 E Other 15,977.3 N/A 6.1 66.9 3,387 3,387 2,616 2,616 SQL 5,447.9 4,519.0 2.2 89.8 4,505 4,505 2,514 2,503 Freeable 2,119.9 .0 .9 6.3 297 N/A 2,273 2,273 PL/SQL 93.2 69.2 .0 .0 1 1 2,614 2,613
The data source is dba_hist_process_mem_summary; this section summarizes PGA usage and helps show who is actually consuming the PGA.
B: begin snapshot, E: end snapshot
This section lists the usage of each memory category within the PGA.
Category: the category name, including "SQL", "PL/SQL", "OLAP" and "JAVA". Special categories are "Freeable" and "Other". Freeable memory is memory that the OS has allocated to the process but that is not assigned to any category; "Other" is memory assigned to a category other than the named ones.
Alloc (MB): ALLOCATED_TOTAL, total memory allocated to this category
Used (MB): USED_TOTAL, memory used by this category
Avg Alloc (MB): ALLOCATED_AVG, average memory allocated to this category per process
Std Dev Alloc (MB): standard deviation of the per-process allocation for this category
Max Alloc (MB): ALLOCATED_MAX, the maximum allocation of this category by a single process at snapshot time ("Max Alloc is Maximum PGA Allocation size at snapshot time")
Hist Max Alloc (MB): MAX_ALLOCATED_MAX, the historical maximum allocation of this category for still-connected processes ("Hist Max Alloc is the Historical Max Allocation for still-connected processes")
Num Proc: NUM_PROCESSES, number of processes
Num Alloc: NON_ZERO_ALLOCS, number of processes that allocated memory in this category
14 SGA Information
14 -1 SGA Memory Summary
SGA Memory Summary DB/Inst: MAC/MAC2 Snaps: 70719-70723 End Size (Bytes) SGA regions Begin Size (Bytes) (if different) ------------------------------ ------------------- ------------------- Database Buffers 20,669,530,112 Fixed Size 2,241,880 Redo Buffers 125,669,376 Variable Size 10,536,094,376 ------------------- sum 31,333,535,744
Coarse-grained memory usage of the SGA regions; the End Size is only printed when it differs from the Begin Size.
14-2 SGA breakdown difference
SGA breakdown difference DB/Inst: MAC/MAC2 Snaps: 70719-70723 -> ordered by Pool, Name -> N/A value for Begin MB or End MB indicates the size of that Pool/Name was insignificant, or zero in that snapshot Pool Name Begin MB End MB % Diff ------ ------------------------------ -------------- -------------- ------- java free memory 64.0 64.0 0.00 large PX msg pool 7.8 7.8 0.00 large free memory 247.8 247.8 0.00 shared Checkpoint queue 140.6 140.6 0.00 shared FileOpenBlock 2,459.2 2,459.2 0.00 shared KGH: NO ACCESS 1,629.6 1,629.6 0.00 shared KGLH0 997.7 990.5 -0.71 shared KKSSP 312.2 308.9 -1.06 shared SQLA 376.6 370.6 -1.61 shared db_block_hash_buckets 178.0 178.0 0.00 shared dbktb: trace buffer 156.3 156.3 0.00 shared event statistics per sess 187.1 187.1 0.00 shared free memory 1,208.9 1,220.6 0.97 shared gcs resources 435.0 435.0 0.00 shared gcs shadows 320.6 320.6 0.00 shared ges enqueues 228.9 228.9 0.00 shared ges resource 118.3 118.3 0.00 shared init_heap_kfsg 1,063.6 1,068.1 0.43 shared kglsim object batch 124.3 124.3 0.00 shared ksunfy : SSO free list 174.7 174.7 0.00 stream free memory 128.0 128.0 0.00 buffer_cache 19,712.0 19,712.0 0.00 fixed_sga 2.1 2.1 0.00 log_buffer 119.8 119.8 0.00 -------------------------------------------------------------
Pool: name of the memory pool
Name: name of the sub-component within the pool, for example KGLH0 holds KGL heap 0 and SQLA holds SQL execution plans
Begin MB: size of the component at the begin snapshot
End MB: size of the component at the end snapshot
% Diff: percentage difference
Note in particular that shared pool shrinkage caused by AMM/ASMM usually shows up in the SGA breakdown, for example components such as SQLA and KQR shrinking sharply, which can lead to a series of parse-related waits such as cursor: pin S wait on X and row cache lock.
The free memory figures here are also worth watching; as a rule of thumb the shared pool should keep roughly 300 to 400 MB of free memory.
15 Streams Statistics
Streams CPU/IO Usage DB/Inst: ORCL/orcl1 Snaps: 556-559 -> Streams processes ordered by CPU usage -> CPU and I/O Time in micro seconds Session Type CPU Time User I/O Time Sys I/O Time ------------------------- -------------- -------------- -------------- QMON Coordinator 101,698 0 0 QMON Slaves 63,856 0 0 ------------------------------------------------------------- Streams Capture DB/Inst: CATGT/catgt Snaps: 911-912 -> Lag Change should be small or negative (in seconds) Captured Enqueued Pct Pct Pct Pct Per Per Lag RuleEval Enqueue RedoWait Pause Capture Name Second Second Change Time Time Time Time ------------ -------- -------- -------- -------- -------- -------- -------- CAPTURE_CAT 650 391 93 0 23 0 71 ------------------------------------------------------------- Streams Apply DB/Inst: CATGT/catgt Snaps: 911-912 -> Pct DB is the percentage of all DB transactions that this apply handled -> WDEP is the wait for dependency -> WCMT is the wait for commit -> RBK is rollbacks -> MPS is messages per second -> TPM is time per message in milli-seconds -> Lag Change should be small or negative (in seconds) Applied Pct Pct Pct Pct Applied Dequeue Apply Lag Apply Name TPS DB WDEP WCMT RBK MPS TPM TPM Change ------------ -------- ---- ---- ---- --- -------- -------- -------- -------- APPLY_CAT 0 0 0 0 0 0 0 0 0 -------------------------------------------------------------
Capture Name : Streams捕获进程名
Captured Per Second :每秒挖掘出来的message 条数
Enqueued Per Second: 每秒入队的message条数
lag change: 指日志生成的时间到挖掘到该日志生成 message的时间延迟
Pct Enqueue Time: 入队时间的比例
Pct redoWait Time : 等待redo的时间比例
Pct Pause Time : Pause 时间的比例
Apply Name Streams 应用Apply进程的名字
Applied TPS : 每秒应用的事务数
Pct DB: 所有的DB事务中 apply处理的比例
Pct WDEP: 由于等待依赖的数据而耗费的时间比例
Pct WCMT: 由于等待commit而耗费的时间比例
Pct RBK: 事务rollback 回滚的比例
Applied MPS: 每秒应用的message 数
Dequeue TPM: 每条message出队的平均耗时(毫秒)
Apply TPM: 每条message应用的平均耗时(毫秒)
Lag Change:指最新message生成的时间到其被Apply收到的延迟
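除了AWR,排查Streams问题时也可以直接查询动态性能视图实时观察捕获进度;下面是一个示意查询,列名在不同版本中可能略有差异,请以实际数据字典为准:

-- 捕获进程的状态及累计捕获/入队的message数, 可结合采样间隔自行换算每秒速率
SELECT capture_name,
       state,
       total_messages_captured,
       total_messages_enqueued
  FROM v$streams_capture;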
16 Resource Limit
Resource Limit Stats                            DB/Inst: MAC/MAC2  Snap: 70723
-> only rows with Current or Maximum Utilization > 80% of Limit are shown
-> ordered by resource name

                                   Current      Maximum     Initial
Resource Name                  Utilization  Utilization  Allocation      Limit
------------------------------ ------------ ------------ ---------- ----------
ges_procs                             2,612        8,007      10003      10003
processes                             2,615        8,011      10000      10000
数据源于dba_hist_resource_limit
注意这里仅列出当前使用量或最大使用量超过Limit(上限)80%的资源,如果某资源没有列在这里,则说明其使用量处于安全范围
Current Utilization 当前对该资源(包括Enqueue Resource、Lock和processes)的使用量
Maximum Utilization 从最近一次实例启动到现在该资源的最大使用量
Initial Allocation 初始分配值,一般等于参数文件中指定的值
Limit 实际上限值
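也可以直接查询 V$RESOURCE_LIMIT(实时)或 DBA_HIST_RESOURCE_LIMIT(历史快照)来检查 processes、sessions、ges_procs 等资源的水位,下面是一个示意查询:

-- 当前实例各资源的使用水位, 对应AWR的Resource Limit Stats小节
SELECT resource_name,
       current_utilization,
       max_utilization,
       initial_allocation,
       limit_value
  FROM v$resource_limit
 ORDER BY resource_name;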
17 init.ora Parameters
init.ora Parameters                      DB/Inst: MAC/MAC2  Snaps: 70719-70723

                                                                  End value
Parameter Name                Begin value                       (if different)
----------------------------- --------------------------------- --------------
_compression_compatibility    11.2.0
_kghdsidx_count               4
_ksmg_granule_size            67108864
_shared_pool_reserved_min_all 4100
archive_lag_target            900
audit_file_dest               /u01/app/oracle/admin/MAC/adum
audit_trail                   OS
cluster_database              TRUE
compatible                    11.2.0.2.0
control_files                 +DATA/MAC/control01.ctl, +RECO
db_16k_cache_size             268435456
db_block_size                 8192
db_cache_size                 19327352832
db_create_file_dest           +DATA
Parameter Name 参数名
Begin value 开始快照时的参数值
End value 结束快照时的参数值 (仅在发生变化时打印)
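如果想确认两个快照之间是否有参数被修改过,可以用 DBA_HIST_PARAMETER 自行比对;下面是一个示意查询,:begin_snap/:end_snap 等绑定变量为假设值:

-- 列出两个快照之间取值发生变化的参数, 对应AWR中打印了End value的情况
SELECT b.parameter_name,
       b.value AS begin_value,
       e.value AS end_value
  FROM dba_hist_parameter b,
       dba_hist_parameter e
 WHERE b.snap_id = :begin_snap
   AND e.snap_id = :end_snap
   AND b.dbid = e.dbid
   AND b.instance_number = e.instance_number
   AND b.parameter_name = e.parameter_name
   AND b.value <> e.value;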
18 Global Messaging Statistics
Global Messaging Statistics              DB/Inst: MAC/MAC2  Snaps: 70719-70723

Statistic                                      Total   per Second    per Trans
--------------------------------- ---------------- ------------ ------------
acks for commit broadcast(actual)            53,705         14.9          0.2
acks for commit broadcast(logical           311,182         86.1          1.3
broadcast msgs on commit(actual)            317,082         87.7          1.3
broadcast msgs on commit(logical)           317,082         87.7          1.3
broadcast msgs on commit(wasted)            263,332         72.9          1.1
dynamically allocated gcs resourc                 0          0.0          0.0
dynamically allocated gcs shadows                 0          0.0          0.0
flow control messages received                  267          0.1          0.0
flow control messages sent                      127          0.0          0.0
gcs apply delta                                   0          0.0          0.0
gcs assume cvt                               55,541         15.4          0.2
全局通信统计信息,数据来源WRH$_DLM_MISC;
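该小节中的Total即两个快照之间统计值的差值,可以用 DBA_HIST_DLM_MISC 自行计算;下面是一个示意查询,绑定变量为假设值:

-- 计算快照区间内全局消息统计的增量
SELECT e.name,
       e.value - b.value AS delta
  FROM dba_hist_dlm_misc b,
       dba_hist_dlm_misc e
 WHERE b.snap_id = :begin_snap
   AND e.snap_id = :end_snap
   AND b.dbid = e.dbid
   AND b.instance_number = e.instance_number
   AND b.name = e.name
 ORDER BY delta DESC;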
20 Global CR Served Stats
Global CR Served Stats                   DB/Inst: MAC/MAC2  Snaps: 70719-70723

Statistic                                    Total
------------------------------ ------------------
CR Block Requests                          403,703
CURRENT Block Requests                     444,896
Data Block Requests                        403,705
Undo Block Requests                         94,336
TX Block Requests                          307,896
Current Results                            652,746
Private results                             21,057
Zero Results                               104,720
Disk Read Results                           69,418
Fail Results                                   508
Fairness Down Converts                     102,844
Fairness Clears                             15,207
Free GC Elements                                 0
Flushes                                    105,052
Flushes Queued                                   0
Flush Queue Full                                 0
Flush Max Time (us)                              0
Light Works                                 71,793
Errors                                         117
LMS传输CR BLOCK的统计信息,数据来源WRH$_CR_BLOCK_SERVER
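对应的实时视图是 V$CR_BLOCK_SERVER;下面是一个示意查询,列名以实际版本的数据字典为准:

-- LMS构造并服务CR块的实时统计
SELECT cr_requests,       -- 收到的CR块请求数
       current_requests,  -- 收到的CURRENT块请求数
       data_requests,
       undo_requests,
       light_works,       -- light work规则生效的次数
       flushes,           -- 服务前需要先flush redo的次数
       errors
  FROM v$cr_block_server;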
21 Global CURRENT Served Stats
Global CURRENT Served Stats              DB/Inst: MAC/MAC2  Snaps: 70719-70723
-> Pins    = CURRENT Block Pin Operations
-> Flushes = Redo Flush before CURRENT Block Served Operations
-> Writes  = CURRENT Block Fusion Write Operations

Statistic         Total   % <1ms  % <10ms % <100ms    % <1s   % <10s
---------- ------------ -------- -------- -------- -------- --------
Pins             73,018    12.27    75.96     8.49     2.21     1.08
Flushes          79,336     5.98    50.17    14.45    19.45     9.95
Writes          102,189     3.14    35.23    19.34    33.26     9.03
数据来源dba_hist_current_block_server
Time to process current block request = (pin time + flush time + send time)
Pins CURRENT Block Pin Operations , PIN的内涵是处理一个BAST 不包含对global current block的flush和实际传输
The pin time represents how much time is required to process a BAST. It does not include the flush time and
the send time. The average pin time per block served should be very low because the processing consists
mainly of code path and should never be blocked.
Flushes 指脏块在被LMS进程传输出去之前,必须先由LGWR将其相关的redo写入(flush)磁盘的次数
Writes 指Fusion Write,即应远端节点请求(mediated writes)而写出脏块的次数;相关的内部结构与统计包括 KJBL.KJBLREQWRITE、gcs write request msgs、gcs writes refused 等
% <1ms % <10ms % <100ms % <1s % <10s 分别对应为pin、flush、write行为耗时的比例
例如在上例中flush和 write 在1s 到10s之间的有9%,在100ms 和1s之间的有19%和33%,因为flush和write都是IO操作 所以这里可以预见IO存在问题,延迟较高
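对应的实时视图是 V$CURRENT_BLOCK_SERVER,其中以直方图形式记录pin/flush/write各时间区间的次数,可以直接查询后与AWR中的比例相互印证(示意):

-- pin/flush/write耗时分布的实时直方图统计
SELECT *
  FROM v$current_block_server;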
22 Global Cache Transfer Stats
Global Cache Transfer Stats              DB/Inst: MAC/MAC2  Snaps: 70719-70723
-> Immediate (Immed) - Block Transfer NOT impacted by Remote Processing Delays
-> Busy (Busy)       - Block Transfer impacted by Remote Contention
-> Congested (Congst) - Block Transfer impacted by Remote System Load
-> ordered by CR + Current Blocks Received desc

                              CR                         Current
                 -----------------------------  -----------------------------
Inst Block          Blocks      %      %      %    Blocks      %      %      %
  No Class        Received  Immed   Busy Congst  Received  Immed   Busy Congst
---- ----------- -------- ------ ------ ------ -------- ------ ------ ------
   1 data block   133,187   76.3   22.6    1.1  233,138   75.2   23.0    1.7
   4 data block   143,165   74.1   24.9    1.0  213,204   76.6   21.8    1.6
   3 data block   122,761   75.9   23.0    1.1  220,023   77.7   21.0    1.3
   1 undo header  104,219   95.7    3.2    1.1      941   93.4    5.8     .7
   4 undo header   95,823   95.2    3.7    1.1      809   93.4    5.3    1.2
   3 undo header   95,592   95.6    3.3    1.1      912   94.6    4.5     .9
   1 undo block    25,002   95.8    3.4     .9        0    N/A    N/A    N/A
   4 undo block    23,303   96.0    3.1     .9        0    N/A    N/A    N/A
   3 undo block    21,672   95.4    3.7     .9        0    N/A    N/A    N/A
   1 Others         1,909   92.0    6.8    1.2    6,057   89.6    8.9    1.5
   4 Others         1,736   92.4    6.1    1.5    5,841   88.8    9.9    1.3
   3 Others         1,500   92.4    5.9    1.7    4,405   87.7   10.8    1.6
数据来源DBA_HIST_INST_CACHE_TRANSFER
Inst No 节点号
Block Class 块的类型
CR Blocks Received 该节点上 该类型CR 块的接收数量
CR Immed %: CR块请求立即接收到的比例
CR Busy%:CR块请求由于远端争用而没有立即接收到的比例
CR Congst%: CR块请求由于远端负载高而没有立即接收到的比例
Current Blocks Received 该节点上 该类型Current 块的接收数量
Current Immed %: Current块请求立即接收到的比例
Current Busy%:Current块请求由于远端争用而没有立即接收到的比例
Current Congst%: Current块请求由于远端负载高而没有立即接收到的比例
Congst%的比例应当非常低 不高于2%, Busy%很大程度受到IO的影响,如果超过10% 一般会有严重的gc buffer busy acquire/release
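实时数据可以查询 V$INSTANCE_CACHE_TRANSFER 并自行换算Busy%/Congst%;下面是一个示意查询,列名以实际版本的数据字典为准:

-- 按来源实例和块类型统计块接收数量及busy/congested占比
SELECT instance,
       class,
       cr_block,
       ROUND(100 * cr_busy / NULLIF(cr_block, 0), 1)                AS cr_busy_pct,
       ROUND(100 * cr_congested / NULLIF(cr_block, 0), 1)           AS cr_congst_pct,
       current_block,
       ROUND(100 * current_busy / NULLIF(current_block, 0), 1)      AS cur_busy_pct,
       ROUND(100 * current_congested / NULLIF(current_block, 0), 1) AS cur_congst_pct
  FROM v$instance_cache_transfer
 WHERE cr_block + current_block > 0
 ORDER BY cr_block + current_block DESC;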
参考文档
Statistics Descriptions http://docs.oracle.com/cd/B19306_01/server.102/b14237/stats002.htm
Memory Configuration and Use http://docs.oracle.com/cd/B19306_01/server.102/b14211/memory.htm
Library Cache Hit (%) http://docs.oracle.com/cd/B16240_01/doc/doc.102/e16282/oracle_database_help/oracle_database_instance_efficiency_libcache_hit_pct.html
Oracle® Database Performance Tuning Guide 12c Release 1 (12.1)
How to Interpret the “SQL ordered by Physical Reads (UnOptimized)” Section in AWR Reports (11.2 onwards) [ID 1466035.1]
26条评论
Ask_Maclean_liu_Oracle
SELECT DECODE(:B2, 0, TO_NUMBER(NULL), 100 * SUM(LOGICAL_READS_DELTA)/:B2),
       DECODE(:B1, 0, TO_NUMBER(NULL), 100 * SUM(PHYSICAL_READS_DELTA)/:B1)
  FROM DBA_HIST_SEG_STAT
 WHERE :B6 < SNAP_ID AND SNAP_ID <= :B5 AND DBID = :B4 AND INSTANCE_NUMBER = :B3
Ask_Maclean_liu_Oracle
SELECT DECODE(B.TOTAL_SQL, 0, 0, 100 * (1 - B.SINGLE_USE_SQL / B.TOTAL_SQL)),
DECODE(E.TOTAL_SQL, 0, 0, 100 * (1 - E.SINGLE_USE_SQL / E.TOTAL_SQL)),
DECODE(B.TOTAL_SQL_MEM,
0,
0,
100 * (1 - B.SINGLE_USE_SQL_MEM / B.TOTAL_SQL_MEM)),
DECODE(E.TOTAL_SQL_MEM,
0,
0,
100 * (1 - E.SINGLE_USE_SQL_MEM / E.TOTAL_SQL_MEM))
FROM DBA_HIST_SQL_SUMMARY B, DBA_HIST_SQL_SUMMARY E
WHERE B.SNAP_ID = :B4
AND E.SNAP_ID = :B3
AND B.INSTANCE_NUMBER = :B2
AND E.INSTANCE_NUMBER = :B2
AND B.DBID = :B1
AND E.DBID = :B1
Leot
2-8 Host CPU
Host CPU (CPUs: 24 Cores: 12 Sockets: 2)
~~~~~~~~ Load Average
Begin End %User %System %WIO %Idle
--------- --------- --------- --------- --------- ---------
8.41 12.84 24.7 7.1 0.2 65.8
“Load Average” begin/end值代表每个CPU的大致运行队列大小。上例中快照开始到结束,平均 CPU负载增加了。
%User+%System=> 总的CPU使用率,在这里是31.8%
Elapsed Time * NUM_CPUS * CPU utilization= 60.23 (mins) * 24 * 31.8% = 459.67536 mins=Busy Time
与《2-5 Operating System Statistics》中的LOAD相呼应
这部分是如何与LOAD相呼应的?
LOAD在begin是8,end是13
另外,2-5里“所有CPU所能提供总的时间片”是不是敲错位置了,指的是哪个数值?
Ask_Maclean_liu_Oracle
2-5 Operating System Statistics
LOAD begin 8 end 13
与
2-8 Host CPU
Load Average 8.41 12.84
呼应
所有CPU所能提供总的时间片 这个指标没有在AWR中,但我觉得有必要说明下这个知识点
dla001
SYS$USERS一般是系统用户登录;
SYS$USERS is the default service for user sessions that are not associated with application services.
如果使用oracle_sid连接数据库的话,那么都是算到 SYS$USERS 中的。
刘洋
Logical Read 指的是次数吧,有时,一个数据块的内容需要一次Logical Read,有时,一个数据块的内容要多次Logical Read。
snowdrop
1-4中利用FORCE_MATCHING_SIGNATURE捕获非绑定变量SQL,Xin提供的方法和刘大提供的方法测试出来的数据不一样;解释下原因吧
恰逢90后
请问如何通过awr报告判断一个系统是OLTP还是OLAP?
Ask_Maclean_liu_Oracle
常见问题:如何使用AWR报告来诊断数据库性能问题 (Doc ID 1523048.1)
适用于:
Oracle Database - Enterprise Edition - 版本 10.2.0.1 和更高版本
本文档所含信息适用于所有平台
目标
本文旨在提供如何解释跟数据库性能问题息息相关的AWR信息。
需要注意的是,生成AWR Report、访问AWR相关的视图,以及使用任何AWR相关的诊断信息,都需要额外的Diagnostic Pack License;这包括自行生成AWR/ADDM/ASH report,也包括应技术支持要求生成上述报表的情况。
注意: Oracle Diagnostics Pack (以及 Oracle Tuning Pack) 只在企业版中提供。
详见:
Oracle® Database Licensing Information
12c Release 1 (12.1)
Part number E17614-08
Chapter 1 Oracle Database Editions
Feature Availability by Edition
http://docs.oracle.com/cd/E16655_01/license.121/e17614/editions.htm#DBLIC116
最佳实践
如何主动避免问题发生及做好诊断信息的收集
有些问题是无法预见的,但大部分其它的问题如果及早发现一些征兆其实是可以避免的。同时,如果问题确实发生了,那么收集问题发生时的信息就非常重要。有关于如何主动避免问题及诊断信息的收集,请参见:
Document 1482811.1 Best Practices: Proactively Avoiding Database and Query Performance Issues
Document 1477599.1 Best Practices Around Data Collection For Performance Issues
解决方案
对于数据库整体的性能问题,AWR的报告是一个非常有用的诊断工具。
一般来说,当检测到性能问题时,我们会收集覆盖发生问题时间段的AWR报告;最好只收集覆盖大约1个小时时间段的AWR报告,如果时间跨度过长,AWR报告就不能很好地反映出问题所在。
还应该收集一份没有性能问题的时间段的AWR报告,作为一个参照物来对比有问题的时间段的AWR报告。这两个AWR报告的时间段应该是一致的,比如都是半个小时的,或者都是一个小时的。
关于如何收集AWR报告,请参照如下文档:
Document 1363422.1 Automatic Workload Repository (AWR) Reports - Start Point
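除了使用 awrrpt.sql 脚本外,也可以直接调用 DBMS_WORKLOAD_REPOSITORY 包生成文本格式的AWR报告;下面是一个示意用法,:dbid/:inst_num/:begin_snap/:end_snap 为假设的绑定变量,需替换为实际值:

-- 生成指定快照区间的文本版AWR报告
SELECT output
  FROM TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT(
                 l_dbid     => :dbid,
                 l_inst_num => :inst_num,
                 l_bid      => :begin_snap,
                 l_eid      => :end_snap));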
注:最好一开始我们从ADDM报告入手,因为对应时间段的ADDM报告往往已经指出了问题所在。
参见: Use of ADDM Reports alongside AWR
Interpretation
在处理性能问题时,我们最关注的是数据库正在等待什么。
当进程因为某些原因不能进行操作时,它需要等待。花费时间最多的等待事件是我们最需要关注的,因为降低它,我们能够获得最大的好处。
AWR报告中的"Top 5 Timed Events"部分就提供了这样的信息,可以让我们只关注主要的问题。
Top 5 Timed Events
正如前面提到的,"Top 5 Timed Events"是AWR报告中最重要的部分。它指出了数据库的sessions花费时间最多的等待事件,如下:
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
db file scattered read 10,152,564 81,327 8 29.6 User I/O
db file sequential read 10,327,231 75,878 7 27.6 User I/O
CPU time 56,207 20.5
read by other session 4,397,330 33,455 8 12.2 User I/O
PX Deq Credit: send blkd 31,398 26,576 846 9.7 Other
-------------------------------------------------------------
Top 5 Events部分包含了一些跟Events(事件)相关的信息。它记录了这期间遇到的等待的总次数,等待所花费的总时间,每次等待的平均时间;这一部分是按照每个Event占总体call time的百分比来进行排序的。
根据Top 5 Events部分的信息的不同,接下来我们需要检查AWR报告的其他部分,来验证发现的问题或者做定量分析。等待事件需要根据报告期的持续时间和当时数据库中的并发用户数进行评估。如:10分钟内1000万次的等待事件比10个小时内的1000万等待更有问题;10个用户引起的1000万次的等待事件比10,000个用户引起的相同的等待要更有问题。
就像上面的例子,将近60%的时间是在等待IO相关的事件。
事件"db file scattered read"一般表明正在做由全表扫描或者index fast full scan引起的多块读。
事件"db file sequential read"一般是由不能做多块读的操作引起的单块读(如读索引)
其他20%的时间是花在使用或等待CPU time上。过高的CPU使用经常是性能不佳的SQL引起的(或者这些SQL有可能用更少的资源完成同样的操作);对于这样的SQL,过多的IO操作也是一个症状。关于CPU使用方面,我们会在之后讨论。
在以上基础上,我们将调查是否这个等待事件是有问题的。若有问题,解决它;若是正常的,检查下个等待事件。
过多的IO相关的等待一般会有两个主要的原因:
数据库做了太多的读操作
每次的IO读操作都很慢
Top 5 Events部分的显示的信息会帮助我们检查:
是否数据库做了大量的读操作:
上面的图显示了在这段时间里两类读操作都分别大于1000万,这些操作是否过多取决于报告的时间是1小时或1分钟。我们可以检查AWR报告的elapsed time
如果这些读操作确实是太多了,接下来我们需要检查AWR报告中 SQL Statistics 部分的信息,因为读操作都是由SQL语句发起的。
是否是每次的IO读操作都很慢:
上面的图显示了在这段时间里两类读操作平均的等待时间是小于8ms的
至于8ms是快还是慢取决于底层的硬件设备;一般来讲小于20ms的都可以认为是可以接受的。
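单次IO读的平均延迟也可以用 DBA_HIST_SYSTEM_EVENT 在两个快照之间自行计算;下面是一个示意查询,绑定变量为假设值:

-- 计算快照区间内两类读等待事件的平均等待时间(ms)
SELECT e.event_name,
       e.total_waits - b.total_waits AS waits,
       ROUND((e.time_waited_micro - b.time_waited_micro) / 1000
             / NULLIF(e.total_waits - b.total_waits, 0), 2) AS avg_wait_ms
  FROM dba_hist_system_event b,
       dba_hist_system_event e
 WHERE b.snap_id = :begin_snap
   AND e.snap_id = :end_snap
   AND b.dbid = e.dbid
   AND b.instance_number = e.instance_number
   AND b.event_id = e.event_id
   AND e.event_name IN ('db file sequential read', 'db file scattered read');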
我们还可以在AWR报告"Tablespace IO Stats"部分得到更详细的信息
Tablespace IO Stats DB/Inst: VMWREP/VMWREP Snaps: 1-15
-> ordered by IOs (Reads + Writes) desc
Tablespace
------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
TS_TX_DATA
14,246,367 283 7.6 4.6 145,263,880 2,883 3,844,161 8.3
USER
204,834 4 10.7 1.0 17,849,021 354 15,249 9.8
UNDOTS1
19,725 0 3.0 1.0 10,064,086 200 1,964 4.9
AE_TS
4,287,567 85 5.4 6.7 932 0 465,793 3.7
TEMP
2,022,883 40 0.0 5.8 878,049 17 0 0.0
UNDOTS3
1,310,493 26 4.6 1.0 941,675 19 43 0.0
TS_TX_IDX
1,884,478 37 7.3 1.0 23,695 0 73,703 8.3
SYSAUX
346,094 7 5.6 3.9 112,744 2 0 0.0
SYSTEM
101,771 2 7.9 3.5 25,098 0 653 2.7
如上图,我们关心Av Rd(ms)的指标。如果它高于20ms并且同时有很多读操作的,我们可能要开始从OS的角度调查是否有潜在的IO问题。
注:对于一些比较空闲的tablespace/files,我们可能会得到一个比较大的Av Rd(ms)值;对于这样的情况,我们应该忽略这样的tablespace/files;因为这个很大的值可能是由于硬盘自旋(spin)引起的,没有太大的参考意义。比如对于一个有1000万次读操作而且很慢的系统,引起问题的基本不可能是一个只有10次read的tablespace/file
以下的文档可以帮助我们进一步调查IO方面的问题:
Note:223117.1 Troubleshooting I/O-related waits
虽然高"db file scattered read"和"db file sequential read"等待可以是I/O相关的问题,但是很多时候这些等待也可能是正常的;实际上,对一个已经性能很好的数据库系统,这些等待事件往往在top 5等待事件里,因为这意味着您的数据库没有那些真正的"问题"。
诀窍是能够评估引起这些等待的语句是否使用了最优的访问路径。如果"db file scattered read"比较高,那么相关的SQL语句可能使用了全表扫描而没有使用索引(也许是没有创建索引,也许是没有合适的索引);相应的,如果"db file sequential read"过多,则表明也许是这些SQL语句使用了selectivity不高的索引从而导致访问了过多不必要的索引块或者使用了错误的索引。这些等待可能说明SQL语句的执行计划不是最优的。
接下来就需要通过AWR来检查这些top SQL是否可以进一步的调优,我们可以查看AWR报告中 SQL Statistics 的部分.
上面的例子显示了20%的时间花在了等待或者使用CPU上,我们也需要检查 SQL statistics 部分来进一步的分析。
需要注意,接下来的分析步骤取决于我们在TOP 5部分的发现。在上面的例子里,3个top wait event表明问题可能与SQL语句执行计划不好有关,所以接下来我们要去分析"SQL Statistics"部分。
同样的,因为我们并没有看到latch相关的等待,latch在我们这个例子里并没有引发严重的性能问题;那么我们接下来就完全不需要分析latch相关的信息。
一般来讲,如果数据库性能很慢,TOP 5等待事件里"CPU", "db file sequential read" 和"db file scattered read" 比较明显(不管它们之间的顺序如何),我们总是需要检查Top SQL (by logical and physical reads)部分;调用SQL Tuning Advisor或者手工调优这些SQL来确保它们是有效率的运行。
SQL Statistics
AWR包含了一些不同的SQL统计值:
(原文此处为一张图片,列出了AWR报告中各个"SQL ordered by ..."统计小节)
根据Top 5 部分的Top Wait Event不同,我们需要检查不同的SQL statistic。
在我们这个例子里,Top Wait Event是"db file scattered read","db file sequential read"和CPU;我们最需要关心的是SQL ordered by CPU Time, Gets and Reads。
我们会从"SQL ordered by gets"入手,因为引起高buffer gets的SQL语句一般是需要调优的对象。
SQL ordered by Gets
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> Total Buffer Gets: 4,745,943,815
-> Captured SQL account for 122.2% of Total
Gets CPU Elapsed
Buffer Gets Executions per Exec %Total Time (s) Time (s) SQL Id
-------------- ------------ ------------ ------ -------- --------- -------------
1,228,753,877 168 7,314,011.2 25.9 8022.46 8404.73 5t1y1nvmwp2
SELECT ADDRESSID",CURRENT$."ADDRESSTYPEID",CURRENT$URRENT$."ADDRESS3",
CURRENT$."CITY",CURRENT$."ZIP",CURRENT$."STATE",CURRENT$."PHONECOUNTRYCODE",
CURRENT$."PHONENUMBER",CURRENT$."PHONEEXTENSION",CURRENT$."FAXCOU
1,039,875,759 62,959,363 16.5 21.9 5320.27 5618.96 grr4mg7ms81
Module: DBMS_SCHEDULER
INSERT INTO "ADDRESS_RDONLY" ("ADDRESSID","ADDRESSTYPEID","CUSTOMERID","
ADDRESS1","ADDRESS2","ADDRESS3","CITY","ZIP","STATE","PHONECOUNTRYCODE","PHONENU
854,035,223 168 5,083,543.0 18.0 5713.50 7458.95 4at7cbx8hnz
SELECT "CUSTOMERID",CURRENT$."ISACTIVE",CURRENT$."FIRSTNAME",CURRENT$."LASTNAME",CU<
RRENT$."ORGANIZATION",CURRENT$."DATEREGISTERED",CURRENT$."CUSTOMERSTATUSID",CURR
ENT$."LASTMODIFIEDDATE",CURRENT$."SOURCE",CURRENT$."EMPLOYEEDEPT",CURRENT$.
对这些Top SQL,可以手工调优,也可以调用SQL Tuning Advisor。
参照以下文档:
Document 271196.1 Automatic SQL Tuning - SQL Profiles.
Document 262687.1 How to use the Sql Tuning Advisor.
Document 276103.1 PERFORMANCE TUNING USING ADVISORS AND MANAGEABILITY FEATURES: AWR, ASH, and ADDM and Sql Tuning Advisor.
注: 使用SQL Tuning Advisor需要额外的Oracle Tuning Pack License:
http://docs.oracle.com/cd/E11882_01/license.112/e10594/options.htm#DBLIC170
分析:
-> Total Buffer Gets: 4,745,943,815
假设这是一个一小时的AWR报告,4,745,943,815是一个很大的值;所以需要进一步分析这个SQL是否使用了最优的执行计划
Individual Buffer Gets
上面的例子里单个的SQL的buffer get非常多,最少的那个都是8亿5千万。这三个SQL指向了两个不同的引起过多buffers的原因:
单次执行buffer gets过多
SQL_ID为'5t1y1nvmwp2'和'4at7cbx8hnz'的SQL语句总共被执行了168次,但是每次执行引起的buffer gets超过500万。这两个SQL应该是主要的需要调优的候选者。
执行次数过多
SQL_ID 'grr4mg7ms81' 每次执行只是引起16次buffer gets,减少这条SQL每次执行的buffer get可能并不能显著减少总共的buffer gets。这条语句的问题是它执行的太频繁了,6500万次。
改变这条SQL的执行次数可能会更有意义。这个SQL看起来是在一个循环里面被调用,如果可以让它一次处理的数据更多也许可以减少它执行的次数。
注意:对于某些非常繁忙的系统来讲,以上的数字可能都是正常的。这时候我们需要把这些数字跟正常时段的数字作对比,如果没有什么太大差别,那么这些SQL并不是引起问题的元凶(虽然通过调优这些SQL我们仍然可以受益)
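"SQL ordered by Gets"的数据来自 DBA_HIST_SQLSTAT,也可以自行按快照区间取Top SQL;下面是一个示意查询,绑定变量为假设值:

-- 按buffer gets降序取快照区间内的Top 10 SQL
SELECT *
  FROM (SELECT sql_id,
               SUM(buffer_gets_delta) AS buffer_gets,
               SUM(executions_delta)  AS executions,
               ROUND(SUM(buffer_gets_delta)
                     / NULLIF(SUM(executions_delta), 0), 1) AS gets_per_exec
          FROM dba_hist_sqlstat
         WHERE snap_id > :begin_snap
           AND snap_id <= :end_snap
           AND dbid = :dbid
           AND instance_number = :inst_num
         GROUP BY sql_id
         ORDER BY buffer_gets DESC)
 WHERE ROWNUM <= 10;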
Other SQL Statistic Sections
就像之前提到的那样,AWR报告中有很多不同的部分用来分析各种不同的问题。如果特定的问题并没有出现,那么分析AWR报告的这些部分并不能有很大的帮助。
下面提到了一些可能的问题:
Waits for 'Cursor: mutex/pin'
如果发现了一些像"cursor: pin S wait on X"或"cursor: mutex X"这类的mutex等待,那么可能是parsing引起的问题。检查"SQL ordered by Parse Calls"和"SQL ordered by Version Count"部分的Top SQL,这些SQL可能引起这类的问题。
以下文档可以帮助我们分析这类问题:
Document 1356828.1 FAQ: 'cursor: mutex ..' / 'cursor: pin ..' / 'library cache: mutex ..' Type Wait Events
Note:1349387.1 Troubleshooting 'cursor: pin S wait on X' waits.
Load Profile
根据Top 5等待事件的不同,"Load Profile"可以提供一些有用的背景资料或潜在问题的细节信息。
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 4,585,414.80 3,165,883.14
Logical reads: 94,185.63 65,028.07
Block changes: 40,028.57 27,636.71
Physical reads: 2,206.12 1,523.16
Physical writes: 3,939.97 2,720.25
User calls: 50.08 34.58
Parses: 26.96 18.61
Hard parses: 1.49 1.03
Sorts: 18.36 12.68
Logons: 0.13 0.09
Executes: 4,925.89 3,400.96
Transactions: 1.45
% Blocks changed per Read: 42.50 Recursive Call %: 99.19
Rollback per transaction %: 59.69 Rows per Sort: 1922.64
在这个例子里,Top 5 Events部分显示问题可能跟SQL的执行有关,那么我们接下来检查load profile部分。
如果您检查AWR report是为了一般性的性能调优,那么可以看到有比较多的redo activity和比较高的physical writes. Physical writes比physical read要高,并且有42%的块被更改了.
此外,hard parse的次数要少于soft parse.
如果mutex等待事件比较严重,如"library cache: mutex X",那么查看所有parse的比率会更有用。
当然,如果把Load Profile部分跟正常时候的AWR报告做比较会更有用,比如,比较redo size, user calls, 和 parsing这些性能指标。
Instance Efficiency
Instance Efficiency部分更适用于一般性的调优,而不是解决某个具体问题(除非等待事件直接指向这些指标)。
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.91 Redo NoWait %: 100.00
Buffer Hit %: 98.14 In-memory Sort %: 99.98
Library Hit %: 99.91 Soft Parse %: 94.48
Execute to Parse %: 99.45 Latch Hit %: 99.97
Parse CPU to Parse Elapsd %: 71.23 % Non-Parse CPU: 99.00
从我们的这个例子来看,最有用的信息是%Non-Parse CPU,它表明几乎所有的CPU都消耗在了Execution而不是Parse上,所以调优SQL会对性能有改善。
94.48% 的soft parse比率显示hard parse的比例很小,这是可取的。Execute to Parse %很高,说明cursor被很好的重用了。我们总是期望这里的值都是接近100%,但是因为应用的不同,如果这个部分的某些值很小,也是可以认为没有问题的;如在数据仓库环境,hard parse因为使用了物化视图或histogram而变得很高。所以,重要的是,我们需要把这部分信息和正常时候的AWR报告做比较来判断是否有问题。
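以Soft Parse %为例,它本质上由parse count (total)和parse count (hard)两个统计值计算得到;下面的示意查询用V$SYSSTAT演示实例启动以来累计值的算法,AWR中的算法相同,只是取两个快照间的差值:

-- Soft Parse % = 100 * (1 - hard parses / total parses)
SELECT ROUND(100 * (1 - h.value / NULLIF(t.value, 0)), 2) AS soft_parse_pct
  FROM v$sysstat t, v$sysstat h
 WHERE t.name = 'parse count (total)'
   AND h.name = 'parse count (hard)';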
Latch Activity
在我们这个例子里,我们并没有看到很高的latch相关的等待,所以这部分的信息可以忽略。
但是如果latch相关的等待很严重,我们需要查看Latch Sleep Breakdown 部分sleeps很高的latch
Latch Sleep Breakdown
* ordered by misses desc
Latch Name
----------------------------------------
Get Requests Misses Sleeps Spin Gets Sleep1 Sleep2 Sleep3
-------------- ----------- ----------- ---------- -------- -------- --------
cache buffers chains
2,881,936,948 3,070,271 41,336 3,031,456 0 0 0
row cache objects
941,375,571 1,215,395 852 1,214,606 0 0 0
object queue header operation
763,607,977 949,376 30,484 919,782 0 0 0
cache buffers lru chain
376,874,990 705,162 3,192 702,090 0 0 0
这里top latch是cache buffers chains. Cache Buffers Chains latches是用来保护buffer caches中的buffers。在我们读取数据时,这个latch是正常需要获得的。Sleep的数字上升代表session在读取buffers时开始等待这个latch。争用通常来自于不良的SQL要读取相同的buffers。
在我们这个例子里,虽然读取buffer的操作发生了28亿次,但是只sleep了41,336次,可以认为是比较低的。Avg Slps/Miss(Sleeps/Misses)也比较低。这表明当前Server有能力处理这样多的数据,所以没有发生Cache Buffers Chains latch的争用。
关于其他的latch free等待,请参照以下文档:
Note:413942.1 How to Identify Which Latch is Associated with a "latch free" wait
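latch争用严重时,也可以直接查询 V$LATCH 按misses排序定位热点latch(示意查询):

-- 按misses降序查看各latch的gets/misses/sleeps及Avg Slps/Miss
SELECT *
  FROM (SELECT name, gets, misses, sleeps, spin_gets,
               ROUND(sleeps / NULLIF(misses, 0), 4) AS avg_slps_per_miss
          FROM v$latch
         ORDER BY misses DESC)
 WHERE ROWNUM <= 10;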
值得注意的wait events
CPU time events
CPU变为top wait event并不总是代表出现了问题。但是如果同时数据库性能比较慢,那么就需要进一步分析了。首先,检查AWR报告的“ SQL ordered by CPU Time ”部分,看是否某个特定的SQL使用了大量的CPU。
SQL ordered by CPU Time
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total is the CPU Time divided into the Total CPU Time times 100
-> Total CPU Time (s): 56,207
-> Captured SQL account for 114.6% of Total
CPU Elapsed CPU per % Total
Time (s) Time (s) Executions Exec (s) % Total DB Time SQL Id
---------- ---------- ------------ ----------- ------- ------- -------------
20,349 24,884 168 121.12 36.2 9.1 7bbhgqykv3cm9
Module: DBMS_SCHEDULER
DECLARE job BINARY_INTEGER := :job; next_date TIMESTAMP WITH TIME ZONE := :myda
te; broken BOOLEAN := FALSE; job_name VARCHAR2(30) := :job_name; job_subname
VARCHAR2(30) := :job_subname; job_owner VARCHAR2(30) := :job_owner; job_start
TIMESTAMP WITH TIME ZONE := :job_start; job_scheduled_start TIMESTAMP WITH TIME
Analysis:
-> Total CPU Time (s): 56,207
它大约相当于15.6个小时的CPU time。但是这个数字是否有问题取决于整个报告覆盖的时间长度。
Top SQL使用的CPU是 20,349秒(大约5.7个小时)
整个CPU时间占DB Time的9.1%
执行了168次,这个执行次数跟之前提到的几个SQL是一样的,说明这些SQL可能都是被同一个JOB调用的。
其他潜在的CPU相关的问题:
检查是否有其他等待事件与高CPU 事件同时出现
如cursor: pin S问题可能引起高CPU使用:
Note:6904068.8 Bug 6904068 - High CPU usage when there are "cursor: pin S" waits
数据库以外的CPU使用率过高
如果一个数据库以外的进程使用了过多CPU,那么数据库进程能够获得的CPU就会减少,数据库性能就会受到影响。在这种情况下,运行OSWatcher或者其他的OS工具去发现是哪个进程使用了过多CPU
Note:433472.1 OS Watcher For Windows (OSWFW) User Guide
诊断CPU使用率
下面的文档进一步描述了如何进一步分析CPU问题:
Note:164768.1 Troubleshooting: High CPU Utilization
'Log file sync' waits
当一个user session commit或rollback时,log writer进程会把redo从log buffer中写入redo logfile文件。AWR报告会帮助我们来确定是否存在这方面的问题,并且确认是否是由物理IO引起。如果"log file sync"事件比较严重,下面的文档详细描述了如何来处理:
Document 1376916.1 Troubleshooting: "Log File Sync" Waits
Note:34592.1 WAITEVENT: "log file sync"
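在分析log file sync时,除了AWR中的平均等待时间,还可以查询 V$EVENT_HISTOGRAM 查看该事件的延迟分布,判断是普遍偏慢还是偶发长尾(示意查询):

-- log file sync等待时间的直方图分布
SELECT event, wait_time_milli, wait_count
  FROM v$event_histogram
 WHERE event = 'log file sync'
 ORDER BY wait_time_milli;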
Buffer busy waits
当一个session从buffer cache读取一个buffer时,如果这个buffer处于busy的状态(由于其它session正在向其中读取数据,或者是由于这个buffer被其它的session以不兼容模式持有),那么这个session就会等待这个事件。参照下面文档来找出哪个block处于busy状态和为什么:
Document 155971.1 Resolving Intense and "Random" Buffer Busy Wait Performance Problems
Note:34405.1 WAITEVENT: "buffer busy waits"
诊断其他问题
关于其他性能问题,请参照文档:
Document 1377446.1 Troubleshooting Performance Issues
使用ADDM的报告
当分析性能问题时,除了AWR报告,我们还可以同时参照ADDM报告,对于潜在的性能问题,它同时提供了具体的解决方案建议。下面是从如下文档拿到的一个ADDM报告示例:
Note:250655.1 How to use the Automatic Database Diagnostic Monitor:
Example Output:
DETAILED ADDM REPORT FOR TASK 'SCOTT_ADDM' WITH ID 5
----------------------------------------------------
Analysis Period: 17-NOV-2003 from 09:50:21 to 10:35:47
Database ID/Instance: 494687018/1
Snapshot Range: from 1 to 3
Database Time: 4215 seconds
Average Database Load: 1.5 active sessions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
FINDING 1: 65% impact (2734 seconds)
------------------------------------
PL/SQL execution consumed significant database time.
RECOMMENDATION 1: SQL Tuning, 65% benefit (2734 seconds)
ACTION: Tune the PL/SQL block with SQL_ID fjxa1vp3yhtmr. Refer to
the "Tuning PL/SQL Applications" chapter of Oracle's "PL/SQL
User's Guide and Reference"
RELEVANT OBJECT: SQL statement with SQL_ID fjxa1vp3yhtmr
BEGIN EMD_NOTIFICATION.QUEUE_READY(:1, :2, :3); END;
FINDING 2: 35% impact (1456 seconds)
------------------------------------
SQL statements consuming significant database time were found.
RECOMMENDATION 1: SQL Tuning, 35% benefit (1456 seconds)
ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
gt9ahqgd5fmm2.
RELEVANT OBJECT: SQL statement with SQL_ID gt9ahqgd5fmm2 and
PLAN_HASH 547793521
UPDATE bigemp SET empno = ROWNUM
FINDING 3: 20% impact (836 seconds)
-----------------------------------
The throughput of the I/O subsystem was significantly lower than expected.
RECOMMENDATION 1: Host Configuration, 20% benefit (836 seconds)
ACTION: Consider increasing the throughput of the I/O subsystem.
Oracle's recommended solution is to stripe all data file using
the SAME methodology. You might also need to increase the
number of disks for better performance.
RECOMMENDATION 2: Host Configuration, 14% benefit (584 seconds)
ACTION: The performance of file
D:\ORACLE\ORADATA\V1010\UNDOTBS01.DBF was significantly worse
than other files. If striping all files using the SAME
methodology is not possible, consider striping this file over
multiple disks.
RELEVANT OBJECT: database file
"D:\ORACLE\ORADATA\V1010\UNDOTBS01.DBF"
SYMPTOMS THAT LED TO THE FINDING:
Wait class "User I/O" was consuming significant database time.
(34% impact [1450 seconds])
FINDING 4: 11% impact (447 seconds)
-----------------------------------
Undo I/O was a significant portion (33%) of the total database I/O.
NO RECOMMENDATIONS AVAILABLE
SYMPTOMS THAT LED TO THE FINDING:
The throughput of the I/O subsystem was significantly lower than
expected. (20% impact [836 seconds])
Wait class "User I/O" was consuming significant database time.
(34% impact [1450 seconds])
FINDING 5: 9.9% impact (416 seconds)
------------------------------------
Buffer cache writes due to small log files were consuming significant
database time.
RECOMMENDATION 1: DB Configuration, 9.9% benefit (416 seconds)
ACTION: Increase the size of the log files to 796 M to hold at
least 20 minutes of redo information.
ADDM报告相比AWR报告来说,它提供了可读性更好的建议;当然应该同时参照ADDM报告和AWR报告来得到更准确地诊断。
其他的AWR参考文章
当阅读AWR报告的其他部分时,可以参照下面的一些文档:
Document 786554.1 How to Read PGA Memory Advisory Section in AWR and Statspack Reports
Document 754639.1 How to Read Buffer Cache Advisory Section in AWR and Statspack Reports
Document 1301503.1 Troubleshooting: AWR Snapshot Collection issues
Document 1363422.1 Automatic Workload Repository (AWR) Reports - Start Point
Statspack
AWR报告取代了旧有的statspack及bstat/estat报告,下面的这些文档概述了如何阅读statspack报告:
http://www.oracle.com/technetwork/database/focus-areas/performance/statspack-opm4-134117.pdf
Additional information can be found in the following articles:
Document 94224.1 FAQ- Statspack Complete Reference
Document 394937.1 Statistics Package (STATSPACK) Guide
Document 149113.1 Installing and Configuring StatsPack Package
Document 149121.1 Gathering a StatsPack snapshot
Document 228913.1 Systemwide Tuning using STATSPACK Reports
关于如何进行Oracle AWR报告指标的解析就分享到这里了,希望以上内容可以对大家有一定的帮助,可以学到更多知识。如果觉得文章不错,可以把它分享出去让更多的人看到。