Oracle Real Time SQL Monitor
Oracle Database Real-Time SQL Monitoring lets you monitor the performance of SQL statements while they execute. By default, SQL monitoring starts automatically when a statement runs in parallel, or when a single execution of a statement has consumed at least 5 seconds of CPU or I/O time. The v$sql_monitor and v$sql_plan_monitor views expose statistics about the monitored executions, and they can be joined with the following views to obtain further information about the monitored execution:
.v$active_session_history
.v$session
.v$session_longops
.v$sql
.v$sql_plan
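As a minimal sketch of such a join (assuming SELECT privileges on the dynamic performance views; column names are as documented for v$sql_monitor and v$session), the following query lists executions that are currently being monitored together with the owning session:

```sql
-- List currently monitored executions and their sessions
select m.sql_id,
       m.sql_exec_id,
       m.status,
       s.username,
       round(m.elapsed_time/1e6, 1) as elapsed_s,
       round(m.cpu_time/1e6, 1)     as cpu_s,
       m.buffer_gets
  from v$sql_monitor m
  join v$session s
    on s.sid = m.sid
   and s.serial# = m.session_serial#
 where m.status = 'EXECUTING'
 order by m.sql_exec_start;
```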

Once monitoring is initiated, the database adds an entry to the v$sql_monitor dynamic performance view. This entry tracks key performance statistics of the SQL execution, including elapsed time, CPU time, number of reads and writes, I/O wait time, and various other wait times. These statistics are refreshed in near real time while the statement executes, generally once per second. After the execution ends, the monitoring information is not deleted immediately, but is retained in v$sql_monitor for at least one minute. The entry is eventually deleted so that its space can be reclaimed when new statements need to be monitored.

The v$sql_monitor view contains a subset of the statistics available in v$sql. However, unlike v$sql, monitoring statistics are not cumulative over multiple executions. Instead, one entry in v$sql_monitor is dedicated to a single execution of a SQL statement. If the database monitors two executions of the same SQL statement, then v$sql_monitor holds separate statistics for each execution.

To uniquely identify two executions of the same SQL statement, a composite key called the execution key is generated. The execution key is composed of three attributes, each available as a column in v$sql_monitor:
.The SQL identifier that identifies the SQL statement (SQL_ID)
.The execution start time (SQL_EXEC_START)
.An internally generated identifier that ensures the key is unique (SQL_EXEC_ID)
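A sketch that surfaces the execution key described above (column names come straight from v$sql_monitor; the SQL_ID value is taken from the sample report further below):

```sql
-- Each (sql_id, sql_exec_start, sql_exec_id) triple is one monitored execution
select sql_id, sql_exec_start, sql_exec_id, status
  from v$sql_monitor
 where sql_id = '5k014mmtyr7qw'
 order by sql_exec_start;
```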

Monitoring SQL Execution Plans
Real-Time SQL Monitoring also includes statistics for each operation in the execution plan of the monitored SQL statement. This data is visible in the v$sql_plan_monitor view. As with v$sql_monitor, the statistics in v$sql_plan_monitor are updated every second while the SQL statement is executing. They persist after the execution ends, with the same lifetime as the corresponding v$sql_monitor entry. For each monitored SQL statement there are multiple entries in v$sql_plan_monitor, one for each operation in the execution plan.
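The per-operation entries can be sketched with a query like the following (the bind variables stand in for an execution key obtained from v$sql_monitor):

```sql
-- One row per plan operation of a single monitored execution
select plan_line_id, plan_operation, plan_options,
       starts, output_rows
  from v$sql_plan_monitor
 where sql_id = :sql_id
   and sql_exec_start = :sql_exec_start
   and sql_exec_id = :sql_exec_id
 order by plan_line_id;
```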

Monitoring Parallel Execution
Oracle Database automatically monitors parallel queries and parallel DML and DDL statements as soon as execution begins. The v$sql_monitor and v$sql_plan_monitor views record the monitoring information for each process participating in the parallel execution as separate entries.

v$sql_monitor has one entry for the parallel execution coordinator process and one entry for each parallel execution server process. Each entry has corresponding entries in v$sql_plan_monitor. Because the processes allocated to a parallel execution of a SQL statement cooperate on the same statement, these entries share the same execution key (the composite of sql_id, sql_exec_start, and sql_exec_id). You can therefore aggregate on the execution key to determine the overall statistics for the parallel execution.
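The aggregation described above can be sketched like this (grouping by the execution key sums the coordinator row together with all parallel execution server rows):

```sql
-- Overall statistics per parallel execution, across coordinator and PX servers
select sql_id, sql_exec_start, sql_exec_id,
       count(*)                    as monitored_processes,
       round(sum(cpu_time)/1e6, 1) as total_cpu_s,
       sum(buffer_gets)            as total_buffer_gets
  from v$sql_monitor
 group by sql_id, sql_exec_start, sql_exec_id;
```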

Generating the SQL Monitor Report
You can view SQL monitoring data with the SQL monitor report. The report draws on data from the following views:
.gv$sql_monitor
.gv$sql_plan_monitor
.gv$sql
.gv$sql_plan
.gv$active_session_history
.gv$session_longops

To generate a SQL monitor report, call the dbms_sqltune.report_sql_monitor function:

SQL> set long 10000000
SQL> set longchunksize 10000000
SQL> set linesize 200
SQL> variable my_rept clob;
SQL> begin
  2   :my_rept:=dbms_sqltune.report_sql_monitor();
  3  end;
  4  /

PL/SQL procedure successfully completed.

SQL> print :my_rept

MY_REPT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL Monitoring Report

SQL Text
------------------------------
/* SQL Analyze(1) */ select /*+ full(t) no_parallel(t) no_parallel_index(t) dbms_stats  cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring no_substrb_pad  */to_char(count("AAE064"
)),to_char(substrb(dump(min("AAE064"),16,0,32),1,120)),to_char(substrb(dump(max ("AAE064"),16,0,32),1,120)),to_char(count("AAB001")),to_char(substrb(dump(min ("AAB001"),16,0,32),1,120)),to_char(substrb(
dump(max("AAB001"),16,0,32),1,120)),to_char(count("AAB191")),to_char(substrb(dump(min ("AAB191"),16,
0,32),1,120)),to_char(substrb(dump(max("AAB191"),16,0,32),1,120)),to_char(count ("AAB190")),to_char(substrb(dump(min("AAB190"),16,0,32),1,120)),to_char(substrb(dump (max("AAB190"),16,0,32),1,120)),to_ch
ar(count("AAB211")),to_char(substrb(dump(min("AAB211"),16,0,32),1,120)),to_char(substrb (dump(max("AAB211"),16,0,32),1,120)),to_char(count("AAB212")),to_char(substrb(dump(min ("AAB212"),16,0,32),1,120))
,to_char(substrb(dump(max("AAB212"),16,0,32),1,120)),to_char(count("AAB213")),to_char (substrb(dump(
min("AAB213"),16,0,32),1,120)),to_char(substrb(dump(max ("AAB213"),16,0,32),1,120)),to_char(count("AAB214")),to_char(substrb(dump(min ("AAB214"),16,0,32),1,120)),to_char(substrb(dump(max("AAB214"),16,0,
32),1,120)),to_char(count("AAB215")),to_char(substrb(dump(min ("AAB215"),16,0,32),1,120)),to_char(substrb(dump(max("AAB215"),16,0,32),1,120)),to_char (count("AAE011")),to_char(substrb(dump(min("AAE011")
,16,0,32),1,120)),to_char(substrb(dump(max("AAE011"),16,0,32),1,120)),to_char(count ("AAE036")),to_c
har(substrb(dump(min("AAE036"),16,0,32),1,120)),to_char(substrb(dump(max ("AAE036"),16,0,32),1,120)),to_char(count("AAE017")),to_char(substrb(dump(min ("AAE017"),16,0,32),1,120)),to_char(substrb(dump(ma
x("AAE017"),16,0,32),1,120)),to_char(count("SJCQNY")),to_char(substrb(dump(min ("SJCQNY"),16,0,32),1,120)),to_char(substrb(dump(max("SJCQNY"),16,0,32),1,120)) from  "LEMIS_4307"."AB13" t /*
NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV, NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL*/

Global Information
------------------------------
 Status              :  DONE (ALL ROWS)
 Instance ID         :  1
 Session             :  SYS (1837:1553)
 SQL ID              :  5k014mmtyr7qw
 SQL Execution ID    :  16777216
 Execution Started   :  10/17/2016 22:02:26
 First Refresh Time  :  10/17/2016 22:02:32
 Last Refresh Time   :  10/17/2016 22:02:55
 Duration            :  29s
 Module/Action       :  DBMS_SCHEDULER/ORA$AT_OS_OPT_SY_1
 Service             :  SYS$USERS
 Program             :  oracle@sjjh (J001)
 Fetch Calls         :  1

Global Stats
=================================================
| Elapsed |   Cpu   |  Other   | Fetch | Buffer |
| Time(s) | Time(s) | Waits(s) | Calls |  Gets  |
=================================================
|      28 |      28 |     0.09 |     1 |   307K |
=================================================

SQL Plan Monitoring Details (Plan Hash Value=3367417341)
====================================================================================================================================
| Id |          Operation           | Name |  Rows   | Cost  |   Time    | Start  |  Execs |   Rows   | Activity | Activity Detail |
|    |                              |      | (Estim) |       | Active(s) | Active |        | (Actual) |   (%)    |   (# samples)   |
====================================================================================================================================
|  0 | SELECT STATEMENT             |      |         |       |        24 |     +6 |      1 |        1 |          |                 |
|  1 |   SORT AGGREGATE             |      |       1 |       |        28 |     +2 |      1 |        1 |    34.48 | Cpu (10)        |
|  2 |    APPROXIMATE NDV AGGREGATE |      |     29M | 89482 |        29 |     +1 |      1 |      27M |    55.17 | Cpu (16)        |
|  3 |     TABLE ACCESS FULL        | AB13 |     29M | 89482 |        25 |     +5 |      1 |      29M |    10.34 | Cpu (3)         |
====================================================================================================================================

The dbms_sqltune.report_sql_monitor function accepts several input parameters to identify the execution, the level of detail in the report, and the report type ('TEXT', 'HTML', or 'XML'). By default, if no parameters are specified, a text report is generated for the last SQL execution that was monitored.

SQL> desc dbms_sqltune.report_sql_monitor
Parameter           Type     Mode Default?
------------------- -------- ---- --------
(RESULT)            CLOB
SQL_ID              VARCHAR2 IN   Y
SESSION_ID          NUMBER   IN   Y
SESSION_SERIAL      NUMBER   IN   Y
SQL_EXEC_START      DATE     IN   Y
SQL_EXEC_ID         NUMBER   IN   Y
INST_ID             NUMBER   IN   Y
START_TIME_FILTER   DATE     IN   Y
END_TIME_FILTER     DATE     IN   Y
INSTANCE_ID_FILTER  NUMBER   IN   Y
PARALLEL_FILTER     VARCHAR2 IN   Y
PLAN_LINE_FILTER    NUMBER   IN   Y
EVENT_DETAIL        VARCHAR2 IN   Y
BUCKET_MAX_COUNT    NUMBER   IN   Y
BUCKET_INTERVAL     NUMBER   IN   Y
BASE_PATH           VARCHAR2 IN   Y
LAST_REFRESH_TIME   DATE     IN   Y
REPORT_LEVEL        VARCHAR2 IN   Y
TYPE                VARCHAR2 IN   Y
SQL_PLAN_HASH_VALUE NUMBER   IN   Y

A SQL monitor report can also be generated in the following way:

SQL> set long 10000000
SQL> set longchunksize 10000000
SQL> set linesize 200
SQL> select dbms_sqltune.report_sql_monitor from dual;

REPORT_SQL_MONITOR
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL Monitoring Report

SQL Text
------------------------------
/* SQL Analyze(1) */ select /*+ full(t) no_parallel(t) no_parallel_index(t) dbms_stats  cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring no_substrb_pad  */to_char(count("MONEY_N
O")),to_char(substrb(dump(min("MONEY_NO"),16,0,32),1,120)),to_char(substrb(dump(max ("MONEY_NO"),16,0,32),1,120)),to_char(count("PAY_INFO_NO")),to_char(substrb(dump(min ("PAY_INFO_NO"),16,0,32),1,120)),
to_char(substrb(dump(max("PAY_INFO_NO"),16,0,32),1,120)),to_char(count ("PAY_MONEY")),to_char(substr
b(dump(min("PAY_MONEY"),16,0,32),1,120)),to_char(substrb(dump(max ("PAY_MONEY"),16,0,32),1,120)),to_char(count("MONEY_TYPE")),to_char(substrb(dump(min ("MONEY_TYPE"),16,0,32),1,120)),to_char(substrb(dum
p(max("MONEY_TYPE"),16,0,32),1,120)),to_char(count("CALC_MAN_SUM")),to_char(substrb (dump(min("CALC_MAN_SUM"),16,0,32),1,120)),to_char(substrb(dump(max ("CALC_MAN_SUM"),16,0,32),1,120)),to_char(count("C
ALC_BASE")),to_char(substrb(dump(min("CALC_BASE"),16,0,32),1,120)),to_char(substrb(dump (max("CALC_B
ASE"),16,0,32),1,120)),to_char(count("MONEY_ID")),to_char(substrb(dump(min ("MONEY_ID"),16,0,32),1,120)),to_char(substrb(dump(max ("MONEY_ID"),16,0,32),1,120)),to_char(count("CORP_ID")),to_char(substrb(
dump(min("CORP_ID"),16,0,32),1,120)),to_char(substrb(dump(max ("CORP_ID"),16,0,32),1,120)),to_char(count("PAYED_MONEY")),to_char(substrb(dump(min ("PAYED_MONEY"),16,0,32),1,120)),to_char(substrb(dump(ma
x("PAYED_MONEY"),16,0,32),1,120)),to_char(count("CALC_PRD")),to_char(substrb(dump(min ("CALC_PRD"),1
6,0,32),1,120)),to_char(substrb(dump(max("CALC_PRD"),16,0,32),1,120)),to_char(count ("SRC_TYPE")),to_char(substrb(dump(min("SRC_TYPE"),16,0,32),1,120)),to_char(substrb (dump(max("SRC_TYPE"),16,0,32),1,1
20)),to_char(count("PAYED_FLAG")),to_char(substrb(dump(min ("PAYED_FLAG"),16,0,32),1,120)),to_char(substrb(dump(max("PAYED_FLAG"),16,0,32),1,120))  from "SJGX_YB"."LV_CROPFUNDPAR" t /*
ACL,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV, NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL*/

Global Information
------------------------------
 Status              :  DONE (ALL ROWS)
 Instance ID         :  1
 Session             :  SYS (146:19)
 SQL ID              :  6sbw88979vmcv
 SQL Execution ID    :  16777216
 Execution Started   :  10/18/2016 22:01:04
 First Refresh Time  :  10/18/2016 22:01:11
 Last Refresh Time   :  10/18/2016 22:01:38
 Duration            :  34s
 Module/Action       :  DBMS_SCHEDULER/ORA$AT_OS_OPT_SY_4
 Service             :  SYS$USERS
 Program             :  oracle@sjjh (J001)
 Fetch Calls         :  1


Global Stats
=================================================
| Elapsed |   Cpu   |  Other   | Fetch | Buffer |
| Time(s) | Time(s) | Waits(s) | Calls |  Gets  |
=================================================
|      34 |      34 |     0.11 |     1 |   294K |
=================================================

SQL Plan Monitoring Details (Plan Hash Value=3534073399)
==============================================================================================================================================
| Id |          Operation           |      Name      |  Rows   | Cost  |   Time    |  Start  | Execs |   Rows   | Activity | Activity Detail |
|    |                              |                | (Estim) |       | Active(s) |  Active |       | (Actual) |   (%)    |   (# samples)   |
==============================================================================================================================================
|  0 | SELECT STATEMENT             |                |         |       |        28 |      +7 |     1 |        1 |          |                 |
|  1 |   SORT AGGREGATE             |                |       1 |       |        33 |      +2 |     1 |        1 |    40.00 | Cpu (14)        |
|  2 |    APPROXIMATE NDV AGGREGATE |                |     24M | 80111 |        35 |      +0 |     1 |      39M |    54.29 | Cpu (19)        |
|  3 |     MAT_VIEW ACCESS FULL     | LV_CROPFUNDPAR |     24M | 80111 |        28 |      +7 |     1 |      39M |     2.86 | Cpu (1)         |
==============================================================================================================================================

In the Global Information section, Status is DONE, indicating that the SQL statement has finished executing. The Time Active(s) column shows how long each operation has been active. The Start Active column shows, in seconds from the start of SQL execution, when each operation started. In this report, the MAT_VIEW ACCESS FULL operation at Id 3 started 7 seconds into the execution (Start Active +7) and was active for 28 seconds.

The Execs column shows the number of times each operation was executed. The Rows (Actual) column shows the number of rows actually processed, while Rows (Estim) shows the row count estimated by the optimizer. When present, the Memory and Temp columns indicate the amount of memory and temporary space consumed by each operation.

The Activity (%) and Activity Detail (# samples) columns are derived by joining v$sql_plan_monitor with v$active_session_history. Activity (%) shows the percentage of database time consumed by each operation of the execution plan, and Activity Detail (# samples) shows the nature of that activity (such as CPU or a wait event). In this report, the operation at Id 2 (APPROXIMATE NDV AGGREGATE) accounted for 54.29% of the database time, and its activity consisted of 19 CPU samples. A final column, Progress, reports information from v$session_longops for monitored operations, when available.

Enabling and Disabling SQL Monitoring
SQL monitoring is enabled by default when the statistics_level parameter is set to all or typical (the default). In addition, the control_management_pack_access parameter must be set to diagnostic+tuning (the default), because SQL monitoring is a feature of the Oracle Database Tuning Pack. With these settings, SQL monitoring starts automatically for all long-running queries.
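Both prerequisites can be verified from SQL*Plus before relying on the feature:

```sql
-- Both parameters must have their default values for automatic monitoring
SQL> show parameter statistics_level
SQL> show parameter control_management_pack_access
```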

Two statement-level hints are available to force or prevent monitoring of a SQL statement. To force SQL monitoring, use the MONITOR hint:

select /*+ MONITOR */ * from dual;

This hint takes effect only when the control_management_pack_access parameter is set to diagnostic+tuning. To prevent a SQL statement from being monitored, use the NO_MONITOR hint.
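Mirroring the MONITOR example above, a statement can be excluded from monitoring like this:

```sql
-- NO_MONITOR suppresses Real-Time SQL Monitoring for this statement only
select /*+ NO_MONITOR */ * from dual;
```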

Oracle Orion Calibration Tool

Oracle Orion is a tool for predicting the I/O performance of an Oracle database without installing Oracle or creating a database. Unlike other I/O calibration tools, Oracle Orion is designed specifically to simulate Oracle database I/O workloads. Orion can also simulate the performance of striping as done by Oracle ASM. Orion can measure performance metrics such as MBPS, IOPS, and I/O latency under different I/O loads.

Note the following two points when running Orion:
.Run Orion when the storage is idle or near idle. Orion calibrates storage performance based on the I/O load it generates itself; if other, non-Orion I/O workloads run at the same time, Orion cannot correctly assess the performance metrics.

.If a database has already been created on the storage, running dbms_resource_manager.calibrate_io() is an alternative.

Steps for Running Orion
1. Choose a test name for the -testname parameter. It uniquely identifies this Orion run.

2. Create an Orion input file named after the test name, <testname>.lun; the default input file name is orion.lun. For example, create jytest.lun. In the input file, list the raw devices or files to test, one volume name per line, without adding any comments. For example:

[oracle@jyrac1 bin]$ vi jytest.lun
/dev/raw/raw1
/dev/raw/raw2
/dev/raw/raw3

3. Verify that all volumes specified in the input file, for example jytest.lun, are accessible. On Linux, run the following commands to check:

[oracle@jyrac1 bin]$ dd if=/dev/raw/raw1 of=/dev/null bs=32k count=1024
1024+0 records in
1024+0 records out
33554432 bytes (34 MB) copied, 1.76808 seconds, 19.0 MB/s
[oracle@jyrac1 bin]$ dd if=/dev/raw/raw2 of=/dev/null bs=32k count=1024
1024+0 records in
1024+0 records out
33554432 bytes (34 MB) copied, 1.91884 seconds, 17.5 MB/s
[oracle@jyrac1 bin]$ dd if=/dev/raw/raw3 of=/dev/null bs=32k count=1024
1024+0 records in
1024+0 records out
33554432 bytes (34 MB) copied, 1.51707 seconds, 22.1 MB/s

4. For the first Orion run, use the -run parameter with the oltp or dss option. If the database will be used primarily for OLTP, use -run oltp; if it will be used primarily for DSS, use -run dss.

For example:

[oracle@jyrac1 bin]$ ./orion -run oltp -testname jytest
ORION: ORacle IO Numbers -- Version 11.2.0.4.0
jytest_20161011_2122
Calibration will take approximately 24 minutes.
Using a large value for -cache_size may take longer.


Maximum Small IOPS=911 @ Small=60 and Large=0
Small Read Latency: avg=65784 us, min=2256 us, max=721805 us, std dev=58565 us @  Small=60 and Large=0

Minimum Small Latency=10292 usecs @ Small=3 and Large=0
Small Read Latency: avg=10292 us, min=968 us, max=164951 us, std dev=7615 us @ Small=3  and Large=0
Small Read / Write Latency Histogram @ Small=3 and Large=0
        Latency:                # of IOs (read)          # of IOs (write)
        0 - 1           us:             0                       0
        2 - 4           us:             0                       0
        4 - 8           us:             0                       0
        8 - 16          us:             0                       0
       16 - 32          us:             0                       0
       32 - 64          us:             0                       0
       64 - 128         us:             0                       0
      128 - 256         us:             0                       0
      256 - 512         us:             0                       0
      512 - 1024        us:             2                       0
     1024 - 2048        us:             17                      0
     2048 - 4096        us:             1291                    0
     4096 - 8192        us:             7044                    0
     8192 - 16384       us:             6899                    0
    16384 - 32768       us:             1870                    0
    32768 - 65536       us:             279                     0
    65536 - 131072      us:             25                      0
   131072 - 262144      us:             5                       0
   262144 - 524288      us:             0                       0
   524288 - 1048576     us:             0                       0
  1048576 - 2097152     us:             0                       0
  2097152 - 4194304     us:             0                       0
  4194304 - 8388608     us:             0                       0
  8388608 - 16777216    us:             0                       0
 16777216 - 33554432    us:             0                       0
 33554432 - 67108864    us:             0                       0
 67108864 - 134217728   us:             0                       0
134217728 - 268435456   us:             0                       0

After the orion command completes, the following files are generated:

[oracle@jyrac1 bin]$ ls -lrt jytest*
-rw-r--r-- 1 oracle oinstall 43524 Oct 11 21:42 jytest_20161011_2122_trace.txt
-rw-r--r-- 1 oracle oinstall  1930 Oct 11 21:42 jytest_20161011_2122_summary.txt
-rw-r--r-- 1 oracle oinstall   570 Oct 11 21:42 jytest_20161011_2122_mbps.csv
-rw-r--r-- 1 oracle oinstall   800 Oct 11 21:42 jytest_20161011_2122_lat.csv
-rw-r--r-- 1 oracle oinstall   742 Oct 11 21:42 jytest_20161011_2122_iops.csv
-rw-r--r-- 1 oracle oinstall 19369 Oct 11 21:42 jytest_20161011_2122_hist.txt

The summary of the Orion run is stored in the *_summary.txt file:

[oracle@jyrac1 bin]$ cat jytest_20161011_2122_summary.txt
ORION VERSION 11.2.0.4.0

Command line:
-run oltp -testname jytest

These options enable these settings:
Test: jytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO types: small random IOs, large random IOs
Sequential stream pattern: one LUN per stream
Writes: 0%
Cache size: not specified
Duration for each data point: 60 seconds
Small Columns:,      3,      6,      9,     12,     15,     18,     21,     24,     27,      30,     33,     36,     39,     42,     45,     48,     51,     54,     57,     60
Large Columns:,      0
Total Data Points: 23

Name: /dev/raw/raw1     Size: 5368709120
Name: /dev/raw/raw2     Size: 5368709120
Name: /dev/raw/raw3     Size: 10737418240
3 files found.


Maximum Small IOPS=911 @ Small=60 and Large=0
Small Read Latency: avg=65784 us, min=2256 us, max=721805 us, std dev=58565 us @  Small=60 and Large=0

Minimum Small Latency=10292.33 usecs @ Small=3 and Large=0
Small Read Latency: avg=10292 us, min=968 us, max=164951 us, std dev=7615 us @ Small=3  and Large=0
Small Read / Write Latency Histogram @ Small=60 and Large=0
        Latency:                # of IOs (read)          # of IOs (write)
        0 - 1           us:             0                       0
        2 - 4           us:             0                       0
        4 - 8           us:             0                       0
        8 - 16          us:             0                       0
       16 - 32          us:             0                       0
       32 - 64          us:             0                       0
       64 - 128         us:             0                       0
      128 - 256         us:             0                       0
      256 - 512         us:             0                       0
      512 - 1024        us:             2                       0
     1024 - 2048        us:             17                      0
     2048 - 4096        us:             1291                    0
     4096 - 8192        us:             7044                    0
     8192 - 16384       us:             6899                    0
    16384 - 32768       us:             1870                    0
    32768 - 65536       us:             279                     0
    65536 - 131072      us:             25                      0
   131072 - 262144      us:             5                       0
   262144 - 524288      us:             0                       0
   524288 - 1048576     us:             0                       0
  1048576 - 2097152     us:             0                       0
  2097152 - 4194304     us:             0                       0
  4194304 - 8388608     us:             0                       0
  8388608 - 16777216    us:             0                       0
 16777216 - 33554432    us:             0                       0
 33554432 - 67108864    us:             0                       0
 67108864 - 134217728   us:             0                       0
134217728 - 268435456   us:             0                       0

Orion Input Files
The Orion -testname parameter sets the prefix for Orion input and output file names. Its default value is orion.

Orion Parameters
Orion provides command-line parameters to specify the I/O workload type and other Orion options.
The -run parameter is mandatory for every Orion invocation.

-run level: Specifies the run level of the test. Complex, customized commands are only allowed at the advanced level; unless -run advanced is specified, setting any other parameter, such as -cache_size or -verbose, results in an error.

All -run level settings other than advanced use a predefined set of parameter values.

level must be one of the following:
.oltp
Tests random small (8 KB) I/Os at increasing loads to determine the maximum IOPS. This level internally invokes Orion as follows:

./orion -run advanced \
-num_large 0 -size_small 8 -type rand \
-simulate concat -write 0 -duration 60 \
-matrix row

.dss
Tests random large (1 MB) I/Os at increasing loads to determine the maximum throughput. This level internally invokes Orion as follows:

./orion -run advanced \
-num_small 0 -size_large 1024 -type rand \
-simulate concat -write 0 -duration 60   \
-matrix column

.simple
Generates the small random I/O and the large random I/O workloads, testing small I/Os and large I/Os in isolation. The only optional parameters that can be specified at this level are -cache_size and -verbose. This level internally invokes Orion as follows:

./orion -run advanced \
-size_small 8 -size_large 1024 -type rand \
-simulate concat -write 0 -duration 60 \
-matrix basic

.normal
Same as simple, but also generates combinations of the small random I/O and large random I/O workloads. The only optional parameters that can be specified at this level are -cache_size and -verbose. This level internally invokes Orion as follows:

./orion -run advanced \
-size_small 8 -size_large 1024 -type rand \
-simulate concat -write 0 -duration 60 \
-matrix detailed

.advanced
Tests a workload defined by custom optional parameters. Any optional parameter can be specified at this level.

Orion Optional Parameters
-cache_size num: Size of the storage array's read or write cache, in MB. For large sequential I/O workloads, Orion warms the cache by issuing random large I/Os before each data point, and it uses the cache size to determine the duration of this warm-up. If set to 0, no cache warm-up is performed. Otherwise, Orion issues these random I/Os before each large sequential data point; they fill the storage array cache with random data, so that I/O requests from one data point do not produce cache hits in the next data point. If this option is not specified, the warm-up defaults to 2 minutes per data point.

-duration num_seconds: Duration of each data point of the test, in seconds. The default is 60.

-help: Prints Orion help information.

-matrix type: Type of mixed workload to test. A single Orion test consists of multiple data point tests, which can be organized as a two-dimensional matrix: each column in the matrix represents data point tests with the same small I/O load, and each row represents data point tests with the same large I/O load. Depending on the matrix type, an Orion test can consist of a single point, a single row, a single column, or the whole matrix:
.basic: No mixed workloads. The small random and the large random/sequential workloads are tested separately.

.detailed: The small random and the large random/sequential workloads are tested in combination, covering the entire mixed-workload matrix.

.point: A single data point with S outstanding small random I/Os and L outstanding large random I/Os or sequential streams. S is set with the -num_small parameter and L with the -num_large parameter.

.col: Large random/sequential workloads only, tested at varying large I/O loads.

.row: Small random workloads only, tested at varying small I/O loads.

.max: Same as detailed, but only the workload at the maximum load, specified by the -num_small and -num_large parameters, is tested rather than the full range of loads.

The default is basic.
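As an illustrative sketch (the test name and load values here are hypothetical, and jytest.lun must list the LUNs to test), a single combined data point could be requested like this:

```shell
# One data point: 10 outstanding small random I/Os plus 2 outstanding large I/Os
./orion -run advanced -testname jytest \
        -matrix point -num_small 10 -num_large 2 \
        -size_small 8 -size_large 1024 -type rand \
        -write 0 -duration 60
```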

-num_disks value: Number of physical disks used by the test. This value is used to generate a range of I/O loads; increasing it produces a heavier I/O load. The default is the number of LUNs listed in <testname>.lun.

-num_large value: Controls the large I/O load. Note that this option only applies when -matrix is set to row, point, or max. When the -type option is rand, the value specifies the number of outstanding large I/Os; when -type is seq, it specifies the number of sequential I/O streams.

-num_small value: Specifies the maximum number of outstanding I/Os for the small random I/O workload. Note that this option only applies when -matrix is set to col, point, or max.

-simulate type: Data layout to simulate for the large sequential I/O workload. Orion tests against a virtual LUN formed by combining the specified LUNs in one of the following ways:
.concat: Simulates a virtual volume created by serially chaining the specified LUNs together. A sequential test over this virtual volume proceeds through one LUN after another.

.raid0: Simulates a virtual volume created by striping across the specified LUNs. Each sequential stream issues I/Os across all LUNs using RAID-0 striping. The default stripe depth is 1 MB, matching the ASM stripe depth, and can be changed with the -stripe parameter.

The I/O offsets are determined as follows:

For small random and large random workloads:
.The LUNs are concatenated into a single virtual LUN (VLUN), and random offsets are chosen within the VLUN.

For large sequential workloads:
.With striping (-simulate raid0), the LUNs are used to create a single striped VLUN. Without a concurrent small random workload, the sequential streams start at fixed offsets within the striped VLUN: for n streams, stream i starts at offset VLUN size * (i + 1) / (n + 1), unless n is 1, in which case the single stream starts at offset 0. With a concurrent small random workload, the streams start at random offsets within the striped VLUN.

.Without striping (-simulate concat), the LUNs are concatenated into a single VLUN, and the streams start at random offsets within the VLUN.

This parameter is normally used only with -type seq.
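As a hypothetical sketch combining the options above, a read-only large sequential test striped across the LUNs in jytest.lun (so that stream offsets follow the formula above) might look like:

```shell
# Large sequential streams over a simulated RAID-0 VLUN, read-only
./orion -run advanced -testname jytest \
        -matrix col -num_small 0 \
        -size_large 1024 -type seq \
        -simulate raid0 -write 0 -duration 60
```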

-size_large num: Size in KB of the I/Os for the large random or sequential I/O workload. The default is 1024.

-size_small num: Size in KB of the I/Os for the small random I/O workload. The default is 8.

-testname tname: Identifier for the test run. When specified, the input file listing the LUNs to test must be named <tname>.lun, and output file names are prefixed with <tname>_<date>. The default is orion.

-type [rand | seq]: Type of the large I/O workload. rand means randomly distributed large I/Os; seq means sequential streams of large I/Os. The default is rand.

-verbose: Prints status and tracing information to standard output. By default this option is not set.

-write num_write: Percentage of I/Os that are writes. For large sequential I/Os, each stream is either read-only or write-only; this parameter specifies the percentage of streams that are write-only. The default is 0.

Orion Command-Line Examples
1. To evaluate storage I/O performance for an OLTP database:

[oracle@jyrac1 bin]$ ./orion -run oltp -testname jytest
ORION: ORacle IO Numbers -- Version 11.2.0.4.0
jytest_20161011_2122
Calibration will take approximately 24 minutes.
Using a large value for -cache_size may take longer.


Maximum Small IOPS=911 @ Small=60 and Large=0
Small Read Latency: avg=65784 us, min=2256 us, max=721805 us, std dev=58565 us @  Small=60 and Large=0

Minimum Small Latency=10292 usecs @ Small=3 and Large=0
Small Read Latency: avg=10292 us, min=968 us, max=164951 us, std dev=7615 us @ Small=3  and Large=0
Small Read / Write Latency Histogram @ Small=3 and Large=0
        Latency:                # of IOs (read)          # of IOs (write)
        0 - 1           us:             0                       0
        2 - 4           us:             0                       0
        4 - 8           us:             0                       0
        8 - 16          us:             0                       0
       16 - 32          us:             0                       0
       32 - 64          us:             0                       0
       64 - 128         us:             0                       0
      128 - 256         us:             0                       0
      256 - 512         us:             0                       0
      512 - 1024        us:             2                       0
     1024 - 2048        us:             17                      0
     2048 - 4096        us:             1291                    0
     4096 - 8192        us:             7044                    0
     8192 - 16384       us:             6899                    0
    16384 - 32768       us:             1870                    0
    32768 - 65536       us:             279                     0
    65536 - 131072      us:             25                      0
   131072 - 262144      us:             5                       0
   262144 - 524288      us:             0                       0
   524288 - 1048576     us:             0                       0
  1048576 - 2097152     us:             0                       0
  2097152 - 4194304     us:             0                       0
  4194304 - 8388608     us:             0                       0
  8388608 - 16777216    us:             0                       0
 16777216 - 33554432    us:             0                       0
 33554432 - 67108864    us:             0                       0
 67108864 - 134217728   us:             0                       0
134217728 - 268435456   us:             0                       0

2. To evaluate storage I/O performance for a DSS database:

[oracle@jyrac1 bin]$ ./orion -run dss -testname jytest
ORION: ORacle IO Numbers -- Version 11.2.0.4.0
jytest_20161012_1736
Calibration will take approximately 73 minutes.
Using a large value for -cache_size may take longer.

ORION VERSION 11.2.0.4.0

Command line:
-run dss -testname jytest

These options enable these settings:
Test: jytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO types: small random IOs, large random IOs
Sequential stream pattern: one LUN per stream
Writes: 0%
Cache size: not specified
Duration for each data point: 240 seconds
Small Columns:,      0
Large Columns:,      3,      6,      9,     12,     15,     18,     21,     24,     27,      30,     33,     36,     39,     42,     45
Total Data Points: 18

Name: /dev/raw/raw1     Size: 5368709120
Name: /dev/raw/raw2     Size: 5368709120
Name: /dev/raw/raw3     Size: 10737418240
3 files found.

Maximum Large MBPS=141.56 @ Small=0 and Large=27

3. To generate a basic set of data points for evaluating storage I/O performance:

[oracle@jyrac1 bin]$ ./orion -run normal -testname jytest
ORION: ORacle IO Numbers -- Version 11.2.0.4.0
jytest_20161012_1923
Calibration will take approximately 113 minutes.
Using a large value for -cache_size may take longer.

Maximum Large MBPS=124.02 @ Small=0 and Large=6

Maximum Small IOPS=691 @ Small=15 and Large=0
Small Read Latency: avg=21670 us, min=1325 us, max=430186 us, std dev=21687 us @  Small=15 and Large=0

Minimum Small Latency=7141 usecs @ Small=1 and Large=0
Small Read Latency: avg=7141 us, min=934 us, max=212964 us, std dev=5383 us @ Small=1  and Large=0
Small Read / Write Latency Histogram @ Small=1 and Large=0
        Latency:                # of IOs (read)          # of IOs (write)
        0 - 1           us:             0                       0
        2 - 4           us:             0                       0
        4 - 8           us:             0                       0
        8 - 16          us:             0                       0
       16 - 32          us:             0                       0
       32 - 64          us:             0                       0
       64 - 128         us:             0                       0
      128 - 256         us:             0                       0
      256 - 512         us:             0                       0
      512 - 1024        us:             4                       0
     1024 - 2048        us:             24                      0
     2048 - 4096        us:             1398                    0
     4096 - 8192        us:             4993                    0
     8192 - 16384       us:             1672                    0
    16384 - 32768       us:             250                     0
    32768 - 65536       us:             20                      0
    65536 - 131072      us:             5                       0
   131072 - 262144      us:             1                       0
   262144 - 524288      us:             0                       0
   524288 - 1048576     us:             0                       0
  1048576 - 2097152     us:             0                       0
  2097152 - 4194304     us:             0                       0
  4194304 - 8388608     us:             0                       0
  8388608 - 16777216    us:             0                       0
 16777216 - 33554432    us:             0                       0
 33554432 - 67108864    us:             0                       0
 67108864 - 134217728   us:             0                       0
134217728 - 268435456   us:             0                       0

4. To understand storage performance with a read-only workload of small and large random I/Os:

[oracle@jyrac1 bin]$ ./orion -run simple -testname jytest
ORION: ORacle IO Numbers -- Version 11.2.0.4.0
jytest_20161012_2122
Calibration will take approximately 23 minutes.
Using a large value for -cache_size may take longer.

Maximum Large MBPS=125.09 @ Small=0 and Large=6

Maximum Small IOPS=696 @ Small=15 and Large=0
Small Read Latency: avg=21501 us, min=807 us, max=351786 us, std dev=22448 us @ Small=15  and Large=0

Minimum Small Latency=6916 usecs @ Small=1 and Large=0
Small Read Latency: avg=6916 us, min=774 us, max=174704 us, std dev=4991 us @ Small=1  and Large=0
Small Read / Write Latency Histogram @ Small=1 and Large=0
        Latency:                # of IOs (read)          # of IOs (write)
        0 - 1           us:             0                       0
        2 - 4           us:             0                       0
        4 - 8           us:             0                       0
        8 - 16          us:             0                       0
       16 - 32          us:             0                       0
       32 - 64          us:             0                       0
       64 - 128         us:             0                       0
      128 - 256         us:             0                       0
      256 - 512         us:             0                       0
      512 - 1024        us:             3                       0
     1024 - 2048        us:             23                      0
     2048 - 4096        us:             1480                    0
     4096 - 8192        us:             5299                    0
     8192 - 16384       us:             1611                    0
    16384 - 32768       us:             196                     0
    32768 - 65536       us:             22                      0
    65536 - 131072      us:             4                       0
   131072 - 262144      us:             3                       0
   262144 - 524288      us:             0                       0
   524288 - 1048576     us:             0                       0
  1048576 - 2097152     us:             0                       0
  2097152 - 4194304     us:             0                       0
  4194304 - 8388608     us:             0                       0
  8388608 - 16777216    us:             0                       0
 16777216 - 33554432    us:             0                       0
 33554432 - 67108864    us:             0                       0
 67108864 - 134217728   us:             0                       0
134217728 - 268435456   us:             0                       0
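The histogram above is easier to digest in aggregate. A throwaway sketch (plain awk, not an ORION feature) that totals the read-I/O column and reports the busiest latency bucket; the here-doc is a trimmed copy of the non-zero rows from the histogram above:

```shell
# hist_summary.sh -- a helper sketch, not part of ORION.  Column 5 of each
# histogram row is the read-IO count; columns 1 and 3 bound the bucket in us.
awk '/us:/ {
        reads += $5
        if ($5 > max) { max = $5; lo = $1; hi = $3 }
     }
     END {
        printf "total read IOs: %d\n", reads
        printf "busiest bucket: %s-%s us (%d IOs)\n", lo, hi, max
     }' <<'EOF'
       512 - 1024        us:             3                       0
      1024 - 2048        us:             23                      0
      2048 - 4096        us:             1480                    0
      4096 - 8192        us:             5299                    0
      8192 - 16384       us:             1611                    0
     16384 - 32768       us:             196                     0
     32768 - 65536       us:             22                      0
     65536 - 131072      us:             4                       0
    131072 - 262144      us:             3                       0
EOF
```

This reports 8641 read IOs with the 4096-8192 us bucket dominating (5299 IOs), consistent with the ~7 ms average latency ORION printed above the histogram.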

5. To understand storage performance under a mixed workload of small and large random I/Os:

[oracle@jyrac1 bin]$ ./orion -run normal -testname jytest
ORION: ORacle IO Numbers -- Version 11.2.0.4.0
jytest_20161012_1923
Calibration will take approximately 113 minutes.
Using a large value for -cache_size may take longer.

Maximum Large MBPS=124.02 @ Small=0 and Large=6

Maximum Small IOPS=691 @ Small=15 and Large=0
Small Read Latency: avg=21670 us, min=1325 us, max=430186 us, std dev=21687 us @  Small=15 and Large=0

Minimum Small Latency=7141 usecs @ Small=1 and Large=0
Small Read Latency: avg=7141 us, min=934 us, max=212964 us, std dev=5383 us @ Small=1  and Large=0
Small Read / Write Latency Histogram @ Small=1 and Large=0
        Latency:                # of IOs (read)          # of IOs (write)
        0 - 1           us:             0                       0
        2 - 4           us:             0                       0
        4 - 8           us:             0                       0
        8 - 16          us:             0                       0
       16 - 32          us:             0                       0
       32 - 64          us:             0                       0
       64 - 128         us:             0                       0
      128 - 256         us:             0                       0
      256 - 512         us:             0                       0
      512 - 1024        us:             4                       0
     1024 - 2048        us:             24                      0
     2048 - 4096        us:             1398                    0
     4096 - 8192        us:             4993                    0
     8192 - 16384       us:             1672                    0
    16384 - 32768       us:             250                     0
    32768 - 65536       us:             20                      0
    65536 - 131072      us:             5                       0
   131072 - 262144      us:             1                       0
   262144 - 524288      us:             0                       0
   524288 - 1048576     us:             0                       0
  1048576 - 2097152     us:             0                       0
  2097152 - 4194304     us:             0                       0
  4194304 - 8388608     us:             0                       0
  8388608 - 16777216    us:             0                       0
 16777216 - 33554432    us:             0                       0
 33554432 - 67108864    us:             0                       0
 67108864 - 134217728   us:             0                       0
134217728 - 268435456   us:             0                       0

6. To generate random I/O that combines 32 KB and 1 MB reads, run:

[oracle@jyrac1 bin]$ ./orion -run advanced -size_small 32 -size_large 1024 -type rand -matrix detailed -testname jytest
ORION: ORacle IO Numbers -- Version 11.2.0.4.0
jytest_20161012_2153
Calibration will take approximately 113 minutes.
Using a large value for -cache_size may take longer.

Maximum Large MBPS=122.87 @ Small=0 and Large=6

Maximum Small IOPS=578 @ Small=15 and Large=0
Small Read Latency: avg=25892 us, min=2162 us, max=390074 us, std dev=24858 us @  Small=15 and Large=0

Minimum Small Latency=7386 usecs @ Small=1 and Large=0
Small Read Latency: avg=7386 us, min=1196 us, max=140649 us, std dev=4656 us @ Small=1  and Large=0
Small Read / Write Latency Histogram @ Small=1 and Large=0
        Latency:                # of IOs (read)          # of IOs (write)
        0 - 1           us:             0                       0
        2 - 4           us:             0                       0
        4 - 8           us:             0                       0
        8 - 16          us:             0                       0
       16 - 32          us:             0                       0
       32 - 64          us:             0                       0
       64 - 128         us:             0                       0
      128 - 256         us:             0                       0
      256 - 512         us:             0                       0
      512 - 1024        us:             0                       0
     1024 - 2048        us:             5                       0
     2048 - 4096        us:             940                     0
     4096 - 8192        us:             4859                    0
     8192 - 16384       us:             2060                    0
    16384 - 32768       us:             204                     0
    32768 - 65536       us:             15                      0
    65536 - 131072      us:             6                       0
   131072 - 262144      us:             1                       0
   262144 - 524288      us:             0                       0
   524288 - 1048576     us:             0                       0
  1048576 - 2097152     us:             0                       0
  2097152 - 4194304     us:             0                       0
  4194304 - 8388608     us:             0                       0
  8388608 - 16777216    us:             0                       0
 16777216 - 33554432    us:             0                       0
 33554432 - 67108864    us:             0                       0
 67108864 - 134217728   us:             0                       0
134217728 - 268435456   us:             0                       0

7. To generate multiple sequential 1 MB write streams, simulating 1 MB RAID-0 striping:

./orion -run advanced -simulate raid0 -stripe 1024 -write 100 -type seq -matrix col -num_small 0 -testname jytest

8. To generate combined 32 KB and 1 MB random reads:

./orion -run advanced -size_small 32 -size_large 1024 -type rand -matrix detailed -testname jytest

9. To generate multiple sequential 1 MB write streams while simulating RAID-0 striping:

./orion -run advanced -simulate raid0 -write 100 -type seq -matrix col -num_small 0 -testname jytest
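Comparing the runs above is easier if their headline figures are extracted mechanically. A minimal sketch (plain awk, not an ORION option) that pulls the "Maximum ... MBPS/IOPS" lines out of a pasted summary; the here-doc copies the two headline lines from run 5:

```shell
# orion_headlines.sh -- not part of ORION: extract the peak-throughput
# lines from a pasted ORION summary fed on stdin (here, a here-doc).
awk -F'[=@]' '/^Maximum/ {
        sub(/ +$/, "", $2)             # trim the space before the "@"
        print $1 "=" $2
     }' <<'EOF'
Maximum Large MBPS=124.02 @ Small=0 and Large=6
Maximum Small IOPS=691 @ Small=15 and Large=0
EOF
```

Running the same one-liner over each run's summary file makes it easy to tabulate MBPS and IOPS across configurations.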

Recovering from ASM Disk Corruption in RAC 11g

An archive-storage RAC database would not start. The production environment was Linux RAC 11.2.0.4; an I/O testing tool had corrupted the disk headers of the disk group holding the OCR and of the ASM disks holding the data. The recovery procedure follows.
1. Check the CRS status:

[grid@darac1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager


[root@darac1 crsd]# ps -ef|grep crs
root      3126     1  1 10:34 ?        00:00:31 /u01/app/product/11.2.0/crs/bin/ohasd.bin reboot
grid      3514     1  0 10:34 ?        00:00:07 /u01/app/product/11.2.0/crs/bin/oraagent.bin
grid      3525     1  0 10:34 ?        00:00:00 /u01/app/product/11.2.0/crs/bin/mdnsd.bin
grid      3537     1  0 10:34 ?        00:00:16 /u01/app/product/11.2.0/crs/bin/gpnpd.bin
grid      3549     1  1 10:34 ?        00:00:33 /u01/app/product/11.2.0/crs/bin/gipcd.bin
root      4128     1  0 10:54 ?        00:00:02 /u01/app/product/11.2.0/crs/bin/cssdmonitor
root      4144     1  0 10:54 ?        00:00:01 /u01/app/product/11.2.0/crs/bin/cssdagent
grid      4167     1  2 10:55 ?        00:00:14 /u01/app/product/11.2.0/crs/bin/ocssd.bin 
root      4354  3680  0 11:04 pts/1    00:00:00 grep crs

2. Force CRS to shut down:

[root@darac1 bin]# ./crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'darac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'darac1'
CRS-2673: Attempting to stop 'ora.gipcd' on 'darac1'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'darac1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'darac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'darac1' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'darac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'darac1'
CRS-2677: Stop of 'ora.gpnpd' on 'darac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'darac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.

3. Start CRS in exclusive mode, without the CRS daemon:

[root@darac1 bin]# ./crsctl start crs -excl -nocrs
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.mdnsd' on 'darac1'
CRS-2676: Start of 'ora.mdnsd' on 'darac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'darac1'
CRS-2676: Start of 'ora.gpnpd' on 'darac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'darac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'darac1'
CRS-2676: Start of 'ora.gipcd' on 'darac1' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'darac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'darac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'darac1'
CRS-2676: Start of 'ora.diskmon' on 'darac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'darac1' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'darac1'
CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'darac1'
CRS-2672: Attempting to start 'ora.ctssd' on 'darac1'
CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'darac1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'darac1'
CRS-2676: Start of 'ora.ctssd' on 'darac1' succeeded
CRS-2676: Start of 'ora.drivers.acfs' on 'darac1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'darac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'darac1'
CRS-2676: Start of 'ora.asm' on 'darac1' succeeded

4. Examine the GI alert.log:

[ohasd(5040)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running). 
2016-10-13 11:20:47.302: 
[gpnpd(5215)]CRS-2328:GPNPD started on node darac1. 
2016-10-13 11:20:58.388: 
[ohasd(5040)]CRS-2767:Resource state recovery not attempted for 'ora.diskmon' as its target state is OFFLINE
2016-10-13 11:21:00.608: 
[cssd(5318)]CRS-1713:CSSD daemon is started in clustered mode
2016-10-13 11:21:01.521: 
[/u01/app/product/11.2.0/crs/bin/orarootagent.bin(5304)]CRS-5013:Agent "/u01/app/product/11.2.0/crs/bin/orarootagent.bin" failed to start process "/u01/app/product/11.2.0/crs/bin/osysmond" for action "start": details at "(:CLSN00008:)" in "/u01/app/product/11.2.0/crs/log/darac1/agent/ohasd/orarootagent_root//orarootagent_root.log"
2016-10-13 11:21:03.585: 
[ohasd(5040)]CRS-2878:Failed to restart resource 'ora.crf'
2016-10-13 11:21:05.399: 
[/u01/app/product/11.2.0/crs/bin/orarootagent.bin(5340)]CRS-5013:Agent "/u01/app/product/11.2.0/crs/bin/orarootagent.bin" failed to start process "/u01/app/product/11.2.0/crs/bin/osysmond" for action "start": details at "(:CLSN00008:)" in "/u01/app/product/11.2.0/crs/log/darac1/agent/ohasd/orarootagent_root//orarootagent_root.log"
2016-10-13 11:21:10.703: 
[ohasd(5040)]CRS-2878:Failed to restart resource 'ora.crf'
2016-10-13 11:21:23.464: 
[cssd(5318)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/app/product/11.2.0/crs/log/darac1/cssd/ocssd.log
(the CRS-1714 entry repeats every 15 seconds until 11:30:48; repeats omitted, distinct entries shown below)
2016-10-13 11:24:43.488: 
[client(5394)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/app/product/11.2.0/crs/log/darac1/client/ocrcheck_5394.log.
2016-10-13 11:24:51.823: 
[client(5424)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/app/product/11.2.0/crs/log/darac1/client/crsctl_grid.log.
2016-10-13 11:26:49.031: 
[client(5460)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/app/product/11.2.0/crs/log/darac1/client/ocrconfig_5460.log.
2016-10-13 11:31:01.534: 
[/u01/app/product/11.2.0/crs/bin/cssdagent(5284)]CRS-5818:Aborted command 'start' for resource 'ora.cssd'. Details at (:CRSAGF00113:) {0:0:2} in /u01/app/product/11.2.0/crs/log/darac1/agent/ohasd/oracssdagent_root//oracssdagent_root.log.
2016-10-13 11:31:01.540: 
[cssd(5318)]CRS-1656:The CSS daemon is terminating due to a fatal error; Details at (:CSSSC00012:) in /u01/app/product/11.2.0/crs/log/darac1/cssd/ocssd.log
2016-10-13 11:31:01.541: 
[cssd(5318)]CRS-1603:CSSD on node darac1 shutdown by user.

The messages above show that no voting files could be discovered.
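A quick tally of the alert log confirms the pattern: repeated CRS-1714 retries punctuated by CRS-1013 OCR failures. Sketched with grep; the here-doc is a three-line stand-in for the real alert log (the actual file name under $GRID_HOME/log/darac1/ is an assumption here):

```shell
# Tally CSS voting-file discovery retries.  In practice point grep at the
# GI alert log (e.g. alertdarac1.log -- assumed name); here a here-doc
# stands in for it so the sketch is self-contained.
grep -c 'CRS-1714' <<'EOF'
[cssd(5318)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds
[cssd(5318)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds
[client(5394)]CRS-1013:The OCR location in an ASM disk group is inaccessible.
EOF
# prints 2: only the cssd retry lines match
```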

5. The ASM alert.log still contains the original CREATE DISKGROUP statements for the CRSDG and DATADG disk groups:

Wed Dec 02 16:09:01 2015

SQL> CREATE DISKGROUP CRSDG EXTERNAL REDUNDANCY  DISK '/dev/raw/raw1' ATTRIBUTE 'compatible.asm'='11.2.0.0.0','au_size'='1M' /* ASMCA */ 

6. Check the disk header:

[grid@darac1 ~]$ kfed read /dev/raw/raw1
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                   300392945 ; 0x00c: 0x11e7a1f1
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
B7F46200 00000000 00000000 00000000 00000000  [................]
  Repeat 255 times
KFED-00322: Invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]
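For reference, the first four bytes of an ASM disk header correspond to the kfbh.endian, kfbh.hard, kfbh.type and kfbh.datfmt fields that kfed decodes (0x01 0x82 0x01 0x01 for a healthy KFBTYP_DISKHEAD). A hedged sketch that spot-checks them with od against a saved header dump; header.dump is a hypothetical copy (made with something like `dd if=/dev/raw/raw1 of=header.dump bs=4096 count=1`), and nothing here should ever be run against a live disk:

```shell
# Spot-check the leading bytes of a saved ASM disk-header dump.
# header.dump is hypothetical; here we fabricate a healthy header's
# first four bytes (0x01 0x82 0x01 0x01) so the sketch is self-contained.
printf '\001\202\001\001' > header.dump
dd if=header.dump bs=1 count=4 2>/dev/null | od -An -tx1
# prints: 01 82 01 01  (all zeros here would mean a wiped header)
```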

7. Try to repair the CRSDG disk header with kfed. Because the automatic header backup block was also damaged, and no manual backup existed, the repair fails:

[grid@darac1 ~]$ kfed repair /dev/raw/raw1

KFED-00320: Invalid block num1 = [0], num2 = [1], error = [endian_kfbh]

Since the disk header cannot be recovered from its automatic backup, the only option is to recover using the automatically backed-up OCR, as follows.

8. Re-create the CRSDG disk group:

[grid@darac1 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Thu Oct 13 13:00:42 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select * from v$asm_diskgroup;

no rows selected

SQL> create diskgroup CRSDG external redundancy disk  '/dev/raw/raw1' attribute 'COMPATIBLE.ASM' = '11.2.0.0.0';

Diskgroup created.

9. List the automatic OCR backups:

[root@darac1 bin]# ./ocrconfig -showbackup
PROT-26: Oracle Cluster Registry backup locations were retrieved from a local copy

darac2     2016/10/13 06:29:53     /u01/app/product/11.2.0/crs/cdata/darac-cluster/backup00.ocr

darac2     2016/10/13 02:29:45     /u01/app/product/11.2.0/crs/cdata/darac-cluster/backup01.ocr

darac2     2016/10/12 22:29:37     /u01/app/product/11.2.0/crs/cdata/darac-cluster/backup02.ocr

darac2     2016/10/12 02:27:20     /u01/app/product/11.2.0/crs/cdata/darac-cluster/day.ocr

darac2     2016/10/11 22:27:10     /u01/app/product/11.2.0/crs/cdata/darac-cluster/week.ocr

10. Restore the OCR:

[root@darac1 bin]# ./ocrconfig -restore /u01/app/product/11.2.0/crs/cdata/darac-cluster/backup00.ocr

11. Re-create the voting disk:

[root@darac1 bin]# ./crsctl replace votedisk +CRSDG
Successful addition of voting disk 44eaf86504ea4f76bfb43cb7931a3fc7.
Successfully replaced voting disk group with +CRSDG.
CRS-4266: Voting file(s) successfully replaced

12. Re-create the ASM spfile:

[grid@darac1 ~]$ vi /tmp/asm.txt
instance_type='asm'
large_pool_size=12M
remote_login_passwordfile= 'EXCLUSIVE'
asm_diskstring           = '/dev/raw/raw*'
asm_power_limit          =1


[grid@darac1 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Thu Oct 13 13:40:02 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> create spfile='+CRSDG' FROM pfile='/tmp/asm.txt';

File created.

13. Restart CRS:

[root@darac1 bin]# ./crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'darac1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'darac1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'darac1'
CRS-2673: Attempting to stop 'ora.asm' on 'darac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'darac1'
CRS-2677: Stop of 'ora.ctssd' on 'darac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'darac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'darac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'darac1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'darac1' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'darac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'darac1'
CRS-2677: Stop of 'ora.cssd' on 'darac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'darac1'
CRS-2677: Stop of 'ora.gipcd' on 'darac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'darac1'
CRS-2677: Stop of 'ora.gpnpd' on 'darac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'darac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.

[root@darac1 bin]# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.


[grid@darac1 ~]$ crsctl stat res -t 
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRSDG.dg
               ONLINE  ONLINE       darac1                                       
               ONLINE  ONLINE       darac2                                       
ora.DATADG.dg
               ONLINE  OFFLINE      darac1                                       
               ONLINE  OFFLINE      darac2                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       darac1                                       
               ONLINE  ONLINE       darac2                                       
ora.asm
               ONLINE  ONLINE       darac1                   Started             
               ONLINE  ONLINE       darac2                   Started             
ora.gsd
               OFFLINE OFFLINE      darac1                                       
               OFFLINE OFFLINE      darac2                                       
ora.net1.network
               ONLINE  ONLINE       darac1                                       
               ONLINE  ONLINE       darac2                                       
ora.ons
               ONLINE  ONLINE       darac1                                       
               ONLINE  OFFLINE      darac2                                       
ora.registry.acfs
               ONLINE  ONLINE       darac1                                       
               ONLINE  ONLINE       darac2                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       darac1                                       
ora.cvu
      1        ONLINE  ONLINE       darac1                                       
ora.darac.db
      1        ONLINE  OFFLINE                               Corrupted Controlfi 
                                                             le                  
      2        ONLINE  OFFLINE                               Corrupted Controlfi 
                                                             le                  
ora.darac1.vip
      1        ONLINE  ONLINE       darac1                                       
ora.darac2.vip
      1        ONLINE  ONLINE       darac2                                       
ora.darac3.vip
      1        ONLINE  OFFLINE                                                   
ora.oc4j
      1        ONLINE  OFFLINE                               STARTING            
ora.scan1.vip
      1        ONLINE  ONLINE       darac1    

The output shows that the DATADG disk group is not mounted and that the darac database did not start, reporting a corrupted control file. The alert_asm1.log contains the disk group's original creation statement:

Wed Dec 02 18:27:46 2015

SQL> CREATE DISKGROUP DATADG EXTERNAL REDUNDANCY  DISK '/dev/raw/raw3' SIZE 10240M  ATTRIBUTE 'compatible.asm'='11.2.0.0.0','au_size'='1M' /* ASMCA */ 

14. Check the disk group states:

SQL> select name,state from v$asm_diskgroup;

NAME                                               STATE
-------------------------------------------------- ----------------------
CRSDG                                              MOUNTED
ARCH                                               MOUNTED

15. Manually mounting the DATADG disk group fails:

SQL> alter diskgroup DATADG mount;
alter diskgroup DATADG mount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15017: diskgroup "DATADG" cannot be mounted
ORA-15040: diskgroup is incomplete

16. Check the disk header statuses; /dev/raw/raw3 shows as CANDIDATE:

SQL> select name,path,header_status from v$asm_disk;

NAME                                               PATH                                               HEADER_STATUS
-------------------------------------------------- -------------------------------------------------- ------------------------------
                                                   /dev/raw/raw3                                      CANDIDATE
ARCH_0000                                          /dev/raw/raw2                                      MEMBER
CRSDG_0000                                         /dev/raw/raw1                                      MEMBER

17. Try repairing the disk header from its automatic backup; this time the DATADG disk is recovered successfully:

[grid@darac1 ~]$ kfed repair /dev/raw/raw3


SQL> select name,state from v$asm_diskgroup;

NAME                                               STATE
-------------------------------------------------- ----------------------
CRSDG                                              MOUNTED
DATADG                                             DISMOUNTED
ARCH                                               MOUNTED

SQL> select name,path,header_status from v$asm_disk;

NAME                                               PATH                                               HEADER_STATUS
-------------------------------------------------- -------------------------------------------------- ------------------------------
                                                   /dev/raw/raw3                                      MEMBER
ARCH_0000                                          /dev/raw/raw2                                      MEMBER
CRSDG_0000                                         /dev/raw/raw1                                      MEMBER

18. Mount the DATADG disk group again; this time it succeeds:

SQL> alter diskgroup DATADG mount;

Diskgroup altered.

SQL> select name,state from v$asm_diskgroup;

NAME                                               STATE
-------------------------------------------------- ----------------------
CRSDG                                              MOUNTED
DATADG                                             MOUNTED
ARCH                                               MOUNTED

19. Check the disk header statuses again; /dev/raw/raw3 now shows as MEMBER:

SQL> select name,path,header_status from v$asm_disk;

NAME                                               PATH                                               HEADER_STATUS
-------------------------------------------------- -------------------------------------------------- ------------------------------
ARCH_0000                                          /dev/raw/raw2                                      MEMBER
DATADG_0000                                        /dev/raw/raw3                                      MEMBER
CRSDG_0000                                         /dev/raw/raw1                                      MEMBER

20. Start the darac database:

[grid@darac1 ~]$ srvctl start database -d darac
[grid@darac1 ~]$ crsctl stat res -t 
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
               ONLINE  ONLINE       darac1                                       
               ONLINE  ONLINE       darac2                                       
ora.CRSDG.dg
               ONLINE  ONLINE       darac1                                       
               ONLINE  ONLINE       darac2                                       
ora.DATADG.dg
               ONLINE  ONLINE       darac1                                       
               ONLINE  ONLINE       darac2                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       darac1                                       
               ONLINE  ONLINE       darac2                                       
ora.asm
               ONLINE  ONLINE       darac1                   Started             
               ONLINE  ONLINE       darac2                   Started             
ora.gsd
               OFFLINE OFFLINE      darac1                                       
               OFFLINE OFFLINE      darac2                                       
ora.net1.network
               ONLINE  ONLINE       darac1                                       
               ONLINE  ONLINE       darac2                                       
ora.ons
               ONLINE  ONLINE       darac1                                       
               ONLINE  ONLINE       darac2                                       
ora.registry.acfs
               ONLINE  ONLINE       darac1                                       
               ONLINE  ONLINE       darac2                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       darac1                                       
ora.cvu
      1        ONLINE  ONLINE       darac1                                       
ora.darac.db
      1        ONLINE  ONLINE       darac1                   Open                
      2        ONLINE  ONLINE       darac2                   Open                
ora.darac1.vip
      1        ONLINE  ONLINE       darac1                                       
ora.darac2.vip
      1        ONLINE  ONLINE       darac2                                       
ora.darac3.vip
      1        ONLINE  OFFLINE                                                   
ora.oc4j
      1        ONLINE  ONLINE       darac1                                       
ora.scan1.vip
      1        ONLINE  ONLINE       darac1    

到此数据库恢复成功。

dbms_resource_manager.calibrate_io测试数据库IO性能

在Oracle 11g中dbms_resource_manager PL/SQL包提供了calibrate_io过程可以用来对Oracle数据库的IO子系统进行IO能力测试,其使用方法如下:

SQL> SET SERVEROUTPUT ON
SQL> DECLARE
  2  lat INTEGER;
  3  iops INTEGER;
  4  mbps INTEGER;
  5  BEGIN
  6  -- DBMS_RESOURCE_MANAGER.CALIBRATE_IO (<DISKS>, <MAX_LATENCY>, iops, mbps, lat);
  7  DBMS_RESOURCE_MANAGER.CALIBRATE_IO (2, 10, iops, mbps, lat);
  8  DBMS_OUTPUT.PUT_LINE ('max_iops = ' || iops);
  9  DBMS_OUTPUT.PUT_LINE ('latency = ' || lat);
 10  dbms_output.put_line('max_mbps = ' || mbps);
 11  end;
 12  /

max_iops = 2643
latency = 8
max_mbps = 388

PL/SQL procedure successfully completed

上面的输出信息中,max_iops=2643表示IO子系统每秒能够处理的IO请求次数为2643次;latency=8表示IO子系统的延迟为8ms;max_mbps=388表示IO子系统每秒的最大吞吐能力为388MB。

在dbms_resource_manager.calibrate_io执行时可以查看其状态

SQL> select * from V$IO_CALIBRATION_STATUS;

STATUS        CALIBRATION_TIME
------------- -----------------------------
IN PROGRESS

在dbms_resource_manager.calibrate_io执行完后可以查看其结果

SQL> select * from DBA_RSRC_IO_CALIBRATE;

START_TIME                         END_TIME                           MAX_IOPS   MAX_MBPS  MAX_PMBPS    LATENCY NUM_PHYSICAL_DISKS
---------------------------------- -------------------------------- ---------- ---------- ---------- ---------- ------------------
11-OCT-16 04.53.17.202399 PM       11-OCT-16 05.02.21.056682 PM           2643        388         39          8                  2

相对于Oracle提供的Orion工具,这个功能使用起来更简单,但前提是必须已经创建了数据库,并且只能在11g及以后版本中使用。
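需要补充的是,calibrate_io要求数据文件启用异步I/O(通常需要将filesystemio_options设置为ASYNC或SETALL),否则执行时会报错。下面是一个示意性的检查查询,用到的v$iostat_file视图在11g及以后版本中可用:

SQL> -- 示意性查询:在运行calibrate_io之前,检查数据文件是否启用了异步I/O
SQL> select f.name, i.asynch_io
  2    from v$datafile f, v$iostat_file i
  3   where f.file# = i.file_no
  4     and i.filetype_name = 'Data File';

如果asynch_io列显示为ASYNC_OFF,应先调整filesystemio_options参数并重启实例,再执行calibrate_io。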

dbms_addm执行Oracle数据库诊断

为了诊断数据库性能问题,首先应查看ADDM分析报告,通常它是在生成AWR快照时自动创建的。如果缺省的分析不能满足需要,可以手动执行ADDM分析。只要快照仍然保存在数据库中而没有被清除,ADDM就可以对任何两个AWR快照之间的时间段执行分析。如果在生成AWR快照时某些实例出现了严重错误,ADDM将不会对这些实例进行分析。在这种情况下,ADDM只能对实例的最大子集(没有出错的部分)进行分析。

手动执行ADDM分析可以使用dbms_addm包来执行操作,ADDM分析主要包括以下几种模式:
.以数据库模式来执行ADDM分析
.以实例模式来执行ADDM分析
.以部分模式来执行ADDM分析
.显示ADDM分析报告
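无论采用上面哪种模式,都需要先确定可用的AWR快照编号作为分析周期的边界。下面是一个示意性查询,列出当前保留的AWR快照:

SQL> -- 示意性查询:列出保留的AWR快照,用于确定begin_snapshot与end_snapshot参数
SQL> select snap_id,
  2         instance_number,
  3         begin_interval_time,
  4         end_interval_time
  5    from dba_hist_snapshot
  6   order by snap_id, instance_number;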

以数据库模式来执行ADDM分析
对于Oracle RAC,可以以数据库模式执行ADDM来分析数据库的所有实例。对于单实例数据库,仍然可以以数据库模式执行ADDM分析,此时ADDM的行为与实例模式相同。

使用dbms_addm.analyze_db过程来以数据库模式执行ADDM:

begin
  dbms_addm.analyze_db(
    task_name       in out varchar2,
    begin_snapshot  in     number,
    end_snapshot    in     number,
    db_id           in     number :=null
    );
end;
/

task_name参数指定将要被创建的分析任务名称。begin_snapshot参数指定分析周期的开始快照。end_snapshot参数指定分析周期的结束快照。db_id参数指定将要被分析的数据库标识,如果没有指定,将使用当前所连接的数据库标识。

下面的例子创建一个以数据库模式执行ADDM的任务,并对快照481到484之间的时间周期对整个数据库执行性能诊断。

SQL> var tname varchar2(30)
SQL> begin
  2    :tname:='ADDM for snapshot 481 to 484';
  3    dbms_addm.analyze_db(:tname,481,484);
  4  end;
  5  /

PL/SQL procedure successfully completed.

以实例模式执行ADDM
为了对数据库的特定实例进行分析,可以以实例模式来执行ADDM。使用dbms_addm.analyze_inst过程 来进行操作:

begin
  dbms_addm.analyze_inst(
    task_name       in out varchar2,
    begin_snapshot  in     number,
    end_snapshot    in     number,
    instance_number in     number :=null,
    db_id           in     number :=null
    );
end;
/

task_name参数指定将要被创建的分析任务名称。begin_snapshot参数指定分析周期的开始快照。end_snapshot参数指定分析周期的结束快照。instance_number参数指定将要被分析的实例编号,如果没有指定,将使用当前所连接的实例。db_id参数指定将要被分析的数据库标识,如果没有指定,将使用当前所连接的数据库标识。

下面的例子以实例模式执行ADDM,并对实例1的快照471到474执行性能诊断:

SQL> var tname varchar2(30)
SQL> begin
  2    :tname:='my addm for 471 to 474';
  3    dbms_addm.analyze_inst(:tname,471,474,1);
  4  end;
  5  /

PL/SQL procedure successfully completed.

以部分模式来执行ADDM
为了对所有实例中的部分实例执行分析,可以以部分模式来执行ADDM。可以使用 dbms_addm.analyze_partial过程来执行:

begin
  dbms_addm.analyze_partial(
    task_name       in out varchar2,
    instance_numbers in    varchar2,
    begin_snapshot  in     number,
    end_snapshot    in     number,
    db_id           in     number :=null
    );
end;
/

task_name参数指定将要被创建的分析任务名称。instance_numbers参数指定将要被分析的实例编号列表,多个实例编号之间用逗号隔开。begin_snapshot参数指定分析周期的开始快照。end_snapshot参数指定分析周期的结束快照。db_id参数指定将要被分析的数据库标识,如果没有指定,将使用当前所连接的数据库标识。

下面的例子将以部分模式创建ADDM诊断任务,并对实例1、2、4的快照137到145执行性能诊断:

var tname varchar2(30)
begin
  :tname:='my addm for 137 to 145';
  dbms_addm.analyze_partial(:tname,'1,2,4',137,145);
end;
/
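如果某个手动创建的ADDM分析任务不再需要,可以将其删除以释放资源。下面是一个示意性的例子,使用dbms_addm.delete过程,任务名沿用上例:

SQL> -- 示意性例子:删除一个不再需要的ADDM分析任务及其分析结果
SQL> begin
  2    dbms_addm.delete(task_name => 'my addm for 137 to 145');
  3  end;
  4  /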

显示ADDM报告
为了以文本方式显示一个已经执行的ADDM任务的报告,可以使用dbms_addm.get_report函数:

dbms_addm.get_report(task_name in varchar2) return clob;

下面的例子使用tname变量指定ADDM任务名,并使用dbms_addm.get_report以文本方式显示ADDM报告:

SQL> set long 1000000 pagesize 0;
SQL> select dbms_addm.get_report(:tname) from dual;
          ADDM Report for Task 'my addm for 471 to 474'
          ---------------------------------------------

Analysis Period
---------------
AWR snapshot range from 471 to 474.
Time period starts at 28-SEP-16 03.00.18 AM
Time period ends at 28-SEP-16 06.00.39 AM

Analysis Target
---------------
Database 'SJJH' with DB ID 4134995129.
Database version 11.2.0.4.0.
ADDM performed an analysis of instance sjjh, numbered 1 and hosted at
localhost.localdomain.

Activity During the Analysis Period
-----------------------------------
Total database time was 13999 seconds.
The average number of active sessions was 1.29.

Summary of Findings
-------------------
   Description                               Active Sessions      Recommendations
                                             Percent of Activity
   ----------------------------------------  -------------------  ---------------
1  Top SQL Statements                        .91 | 69.99          6
2  Top Segments by "User I/O" and "Cluster"  .17 | 13.43          3
3  Undersized Redo Log Buffer                .13 | 10.23          1
4  Log File Switches                         .1 | 7.59            2
5  Buffer Busy - Hot Objects                 .06 | 4.37           3
6  Commits and Rollbacks                     .04 | 2.72           1
7  "Network" Wait Class                      .03 | 2.24           0


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


          Findings and Recommendations
          ----------------------------

Finding 1: Top SQL Statements
Impact is .91 active sessions, 69.99% of total activity.
--------------------------------------------------------
SQL statements consuming significant database time were found. These
statements offer a good opportunity for performance improvement.

   Recommendation 1: SQL Tuning
   Estimated benefit is .27 active sessions, 21.07% of total activity.
   -------------------------------------------------------------------
   Action
      Investigate the CREATE MATERIALIZED VIEW  statement with SQL_ID
      "7v89hvfv38196" for possible performance improvements. You can
      supplement the information given here with an ASH report for this
      SQL_ID.
      Related Object
         SQL statement with SQL_ID 7v89hvfv38196.
         create materialized view mt_fee_fin build immediate
         refresh fast with primary key on demand
         start with sysdate next sysdate+1
         as select * from mt_fee_fin@dbl_yb
   Rationale
      The SQL Tuning Advisor cannot operate on CREATE MATERIALIZED VIEW
      statements.
   Rationale
      Database time for this SQL was divided as follows: 100% for SQL
      execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
      execution.

   Recommendation 2: SQL Tuning
   Estimated benefit is .17 active sessions, 13.41% of total activity.
   -------------------------------------------------------------------
   Action
      Run SQL Tuning Advisor on the INSERT statement with SQL_ID
      "f64qufxuu0r5g".
      Related Object
         SQL statement with SQL_ID f64qufxuu0r5g.
         /* MV_REFRESH (INS) */INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO
         "SJGX_YB"."DWGRBTXX"("个人保险号","姓名","身份证","性别","人员类别","单
位代码","单位名称","应缴
         期间","险种","款项","补退金额","计算基数") SELECT
         "I"."INSR_CODE","I"."NAME","I"."IDCARD",DECODE("I"."SEX",0,'女','男'),"
          BPT"."PERS_NAME","C"."CORP_CODE","C"."CORP_NAME","IP"."PERIOD","PDI".
         "INSR_DETAIL_NAME","PMI"."MONEY_NAME","IP"."PAY_MONEY","IP"."CALC_BAS
         E" FROM "LV_INDIPAR" "IP","LV_CROPFUNDPAR" "CF","LV_INSR_TOPAY"
         "IT","BS_CORP" "C","BS_INSURED" "I","BS_PERSON_TYPE"
         "BPT","PFS_INSUR_DETAIL_INFO" "PDI","PFS_MONEY_INFO" "PMI" WHERE
         "IT"."PAY_INFO_NO"="CF"."PAY_INFO_NO" AND
         "CF"."MONEY_NO"="IP"."MONEY_NO" AND "PMI"."MONEY_ID"="CF"."MONEY_ID"
         AND "C"."CORP_ID"="IT"."CORP_ID" AND "I"."INDI_ID"="IP"."INDI_ID" AND
         "IT"."INSR_DETAIL_CODE"="PDI"."INSR_DETAIL_CODE"(+) AND
         ("IT"."BUSI_ASG_NO"=(-999) OR "IT"."BUSI_ASG_NO"=(-998) OR
         "IT"."BUSI_ASG_NO"=(-997) OR "IT"."BUSI_ASG_NO"=(-981) OR
         "IT"."BUSI_ASG_NO"=(-980) OR NVL("IT"."BUSI_ASG_NO",0)=0) AND
         "IT"."INDI_PAY_FLAG"=0 AND "BPT"."PERS_TYPE"="I"."PERS_TYPE" AND
         "BPT"."CENTER_ID"='430701' AND "IT"."TOPAY_TYPE"=3 AND
         "IT"."INSR_DETAIL_CODE"<>'21'
   Rationale
      The SQL spent 100% of its database time on CPU, I/O and Cluster waits.
      This part of database time may be improved by the SQL Tuning Advisor.
   Rationale
      Database time for this SQL was divided as follows: 100% for SQL
      execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
      execution.
   Rationale
      SQL statement with SQL_ID "f64qufxuu0r5g" was executed 18 times and had
      an average elapsed time of 87 seconds.
   Rationale
      Full scan of TABLE "SJGX_YB.LV_INDIPAR" with object ID 89473 consumed
      77% of the database time spent on this SQL statement.
   Rationale
      Top level calls to execute the PL/SQL statement with SQL_ID
      "2wkfgbnhtqcj9" are responsible for 100% of the database time spent on
      the INSERT statement with SQL_ID "f64qufxuu0r5g".
      Related Object
         SQL statement with SQL_ID 2wkfgbnhtqcj9.
         DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate;
         broken BOOLEAN := FALSE; BEGIN
         dbms_refresh.refresh('"SJGX_YB"."DWGRBTXX"'); :mydate := next_date;
         IF broken THEN :b := 1; ELSE :b := 0; END IF; END;

   Recommendation 3: SQL Tuning
   Estimated benefit is .16 active sessions, 12.12% of total activity.
   -------------------------------------------------------------------
   Action
      Run SQL Tuning Advisor on the INSERT statement with SQL_ID
      "c1fw0514uxxrs".
      Related Object
         SQL statement with SQL_ID c1fw0514uxxrs.
         INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO "SJGX_YB"."V_DWJKXX" select
         单位名称,
         单位编码,
         组织机构代码,
         业务单号,
         缴款来源,
         经办人,
         缴款日期,
         缴款金额,
         险种名称,
         分配金额,
         使用待转金额,
         生成待转金额,
         审核状态,
         凭证号
         from
         (
         select    distinct(a.busi_bill_sn) as 业务单号,
         bc.corp_name 单位名称,
         bc.corp_code 单位编码,
         bc.insur_org_code 组织机构代码,
         pm.pay_method_name as 缴款来源,
         a.make_bill 经办人,
         to_char(a.make_bill_tm, 'yyyy-mm-dd') 缴款日期,
         c.pay_money 缴款金额,
         d.insr_detail_name 险种名称,
         c.pay_money-c.bld_wait_money+c.use_wait_money as 分配金额,
         c.use_wait_money 使用待转金额,
         c.bld_wait_money 生成待转金额,
         (case a.audit_flag
         when 1 then
         '已审核'
         else
         '未审核'
         end) 审核状态,
         id.cred_no 凭证号
         from lv_busi_bill a,
         lv_busi_record b,
         lv_busi_assign c,
         (select insr_detail_code, insr_detail_name
         from pfs_insur_detail_info
         union
         select 999, '铺底险种' from dual) d,
         inte_data id,
         bs_corp bc,
         bs_pay_method pm
         where a.busi_bill_sn = b.busi_bill_sn
         and bc.corp_id = b.pay_object_id
         and bc.center_id = id.center_id
         and id.obj_code=bc.corp_id
         -- and a.center_id='430701'
         and a.pay_object = 1
         -- and b.pay_object_id='989'
         and id.bill_no = to_char(a.busi_bill_sn)
         and c.busi_reco_no = b.busi_reco_no
         and d.insr_detail_code = c.insr_detail_code
         and a.pay_method = pm.pay_method(+)
         --  and to_char(a.make_bill_tm, 'yyyymm') between '201601' and
         '201601'
         --and to_char(b.fact_pay_date, 'yyyymm') between '201601' and
         '201601'
         order by  to_char(a.make_bill_tm,'yyyy-mm-dd'))
   Rationale
      The SQL spent 100% of its database time on CPU, I/O and Cluster waits.
      This part of database time may be improved by the SQL Tuning Advisor.
   Rationale
      Database time for this SQL was divided as follows: 100% for SQL
      execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
      execution.
   Rationale
      I/O and Cluster wait for TABLE "SJGX_YB.INTE_DATA" with object ID 89405
      consumed 100% of the database time spent on this SQL statement.
   Rationale
      Top level calls to execute the PL/SQL statement with SQL_ID
      "2h0f0svtyt4c7" are responsible for 100% of the database time spent on
      the INSERT statement with SQL_ID "c1fw0514uxxrs".
      Related Object
         SQL statement with SQL_ID 2h0f0svtyt4c7.
         DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate;
         broken BOOLEAN := FALSE; BEGIN
         dbms_refresh.refresh('"SJGX_YB"."V_DWJKXX"'); :mydate := next_date;
         IF broken THEN :b := 1; ELSE :b := 0; END IF; END;

   Recommendation 4: SQL Tuning
   Estimated benefit is .16 active sessions, 12.04% of total activity.
   -------------------------------------------------------------------
   Action
      Run SQL Tuning Advisor on the DELETE statement with SQL_ID
      "3bvqft1u53xqz". Additionally, investigate this statement for possible
      performance improvements. You can supplement the information given here
      with an ASH report for this SQL_ID.
      Related Object
         SQL statement with SQL_ID 3bvqft1u53xqz.
         /* MV_REFRESH (DEL) */ delete from "SJGX_YB"."GRCBXX"
   Rationale
      The SQL spent 51% of its database time on CPU, I/O and Cluster waits.
      This part of database time may be improved by the SQL Tuning Advisor.
      Look at data given below and an ASH report for further performance
      improvements.
   Rationale
      Database time for this SQL was divided as follows: 100% for SQL
      execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
      execution.
   Rationale
      SQL statement with SQL_ID "3bvqft1u53xqz" was executed 18 times and had
      an average elapsed time of 79 seconds.
   Rationale
      Waiting for event "log buffer space" in wait class "Configuration"
      accounted for 46% of the database time spent in processing the SQL
      statement with SQL_ID "3bvqft1u53xqz".
   Rationale
      Top level calls to execute the PL/SQL statement with SQL_ID
      "8fn8wsfvd344t" are responsible for 100% of the database time spent on
      the DELETE statement with SQL_ID "3bvqft1u53xqz".
      Related Object
         SQL statement with SQL_ID 8fn8wsfvd344t.
         DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate;
         broken BOOLEAN := FALSE; BEGIN
         dbms_refresh.refresh('"SJGX_YB"."GRCBXX"'); :mydate := next_date; IF
         broken THEN :b := 1; ELSE :b := 0; END IF; END;

   Recommendation 5: SQL Tuning
   Estimated benefit is .1 active sessions, 7.74% of total activity.
   -----------------------------------------------------------------
   Action
      Investigate the INSERT statement with SQL_ID "0b6acnpktxcqd" for
      possible performance improvements. You can supplement the information
      given here with an ASH report for this SQL_ID.
      Related Object
         SQL statement with SQL_ID 0b6acnpktxcqd.
         /* MV_REFRESH (INS) */INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO
         "SJGX_YB"."GRCBXX"("姓名","身份证号","社保编号","险种子项类别","开始时
间","视同缴费月数","个人险种状
         态") SELECT "BI"."NAME","BI"."IDCARD","BI"."INSR_CODE","DI"."INSR_DETA
         IL_NAME",TO_CHAR("PI"."BEGIN_DATE",'yyyy-mm-dd'),"PI"."ALI_PAY_MONS",
         CASE "PI"."INDI_JOIN_STA" WHEN 0 THEN '无效' ELSE '有效' END  FROM
         "BS_PRES_INSUR" "PI","PFS_INSUR_DETAIL_INFO" "DI","BS_INSURED" "BI"
         WHERE "PI"."INDI_ID"="BI"."INDI_ID" AND
         "PI"."INSR_DETAIL_CODE"="DI"."INSR_DETAIL_CODE"
   Rationale
      The SQL spent only 22% of its database time on CPU, I/O and Cluster
      waits. Therefore, the SQL Tuning Advisor is not applicable in this case.
      Look at performance data for the SQL to find potential improvements.
   Rationale
      Database time for this SQL was divided as follows: 100% for SQL
      execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
      execution.
   Rationale
      SQL statement with SQL_ID "0b6acnpktxcqd" was executed 18 times and had
      an average elapsed time of 49 seconds.
   Rationale
      Waiting for event "log buffer space" in wait class "Configuration"
      accounted for 51% of the database time spent in processing the SQL
      statement with SQL_ID "0b6acnpktxcqd".
   Rationale
      Top level calls to execute the PL/SQL statement with SQL_ID
      "8fn8wsfvd344t" are responsible for 100% of the database time spent on
      the INSERT statement with SQL_ID "0b6acnpktxcqd".
      Related Object
         SQL statement with SQL_ID 8fn8wsfvd344t.
         DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate;
         broken BOOLEAN := FALSE; BEGIN
         dbms_refresh.refresh('"SJGX_YB"."GRCBXX"'); :mydate := next_date; IF
         broken THEN :b := 1; ELSE :b := 0; END IF; END;

   Recommendation 6: SQL Tuning
   Estimated benefit is .05 active sessions, 3.61% of total activity.
   ------------------------------------------------------------------
   Action
      Run SQL Tuning Advisor on the SELECT statement with SQL_ID
      "35vy818ghp687".
      Related Object
         SQL statement with SQL_ID 35vy818ghp687.
         SELECT
         lutt.name 姓名,
         lutt.idcard 身份证,
         '城镇居民医疗保险' as 险种,
         lutt.calc_prd as 计算年月,
         lutt.curr_year 所属期间,
         lutt.pay_money as 缴费基数,
         lutt.pay_money as 缴费金额,
         lutt.urban_type_name as 缴款类型,
         lutt.policy_item_name as 款项类别,
         Case
         When Nvl(lutt.busi_asg_no, 0) = 0 Then
         '未缴'
         When lutt.busi_asg_no In (-999, -998, -997, -981, -980) Then
         '未缴'
         Else
         '已缴'
         End as 缴费标志,
         pt.pers_name as 人员类别,
         '个体' as 个体缴费标志,
         bc.corp_name 缴费单位,
         bc.corp_code  单位编码,
         lbr.reco_time 计算时间,
         to_char(lutt.fac_pay_date, 'yyyy-mm-dd') as 实际缴款时间,
         Case
         When Nvl(lutt.busi_asg_no, 0) = 0 Then
         '未注资'
         When lutt.busi_asg_no In (-999, -998, -997, -981, -980) Then
         '未注资'
         Else
         '已注资'
         End as 注资标志,
         lbr.reco_staff 计算人
         FROM lv_urban_topay_tmp lutt,
         bs_corp bc,
         bs_person_type pt,
         /* lv_busi_bill lbb,*/
         lv_busi_record lbr,
         lv_busi_assign lba
         WHERE
         bc.corp_id=lutt.corp_id
         and pt.pers_type=lutt.pers_type and pt.center_id=lutt.center_id
         and lba.busi_asg_no(+)=lutt.busi_asg_no
         and lba.busi_reco_no=lbr.busi_reco_no(+)
         ORDER BY lutt.curr_year, lutt.src_type, lutt.policy_item_code
   Rationale
      The SQL spent 100% of its database time on CPU, I/O and Cluster waits.
      This part of database time may be improved by the SQL Tuning Advisor.
   Rationale
      Database time for this SQL was divided as follows: 100% for SQL
      execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
      execution.
   Rationale
      SQL statement with SQL_ID "35vy818ghp687" was executed 1 times and had
      an average elapsed time of 427 seconds.
   Rationale
      I/O and Cluster wait for TABLE "SJGX_YB.LV_URBAN_TOPAY_TMP" with object
      ID 90256 consumed 100% of the database time spent on this SQL statement.


Finding 2: Top Segments by "User I/O" and "Cluster"
Impact is .17 active sessions, 13.43% of total activity.
--------------------------------------------------------
Individual database segments responsible for significant "User I/O" and
"Cluster" waits were found.

   Recommendation 1: Segment Tuning
   Estimated benefit is .12 active sessions, 9.1% of total activity.
   -----------------------------------------------------------------
   Action
      Investigate application logic involving I/O on TABLE "SJGX_YB.INTE_DATA"
      with object ID 89405.
      Related Object
         Database object with ID 89405.
   Action
      Look at the "Top SQL Statements" finding for SQL statements consuming
      significant I/O on this segment. For example, the INSERT statement with
      SQL_ID "c1fw0514uxxrs" is responsible for 100% of "User I/O" and
      "Cluster" waits for this segment.
   Rationale
      The I/O usage statistics for the object are: 1 full object scans, 263894
      physical reads, 414 physical writes and 0 direct reads.

   Recommendation 2: Segment Tuning
   Estimated benefit is .03 active sessions, 2.35% of total activity.
   ------------------------------------------------------------------
   Action
      Investigate application logic involving I/O on TABLE
      "SJGX_YB.LV_URBAN_TOPAY_TMP" with object ID 90256.
      Related Object
         Database object with ID 90256.
   Action
      Look at the "Top SQL Statements" finding for SQL statements consuming
      significant I/O on this segment. For example, the SELECT statement with
      SQL_ID "35vy818ghp687" is responsible for 100% of "User I/O" and
      "Cluster" waits for this segment.
   Rationale
      The I/O usage statistics for the object are: 4 full object scans,
      1361966 physical reads, 682018 physical writes and 680863 direct reads.

   Recommendation 3: Segment Tuning
   Estimated benefit is .03 active sessions, 1.98% of total activity.
   ------------------------------------------------------------------
   Action
      Run "Segment Advisor" on TABLE "SJGX_YB.LV_INDIPAR" with object ID
      89473.
      Related Object
         Database object with ID 89473.
   Action
      Investigate application logic involving I/O on TABLE
      "SJGX_YB.LV_INDIPAR" with object ID 89473.
      Related Object
         Database object with ID 89473.
   Action
      Look at the "Top SQL Statements" finding for SQL statements consuming
      significant I/O on this segment. For example, the INSERT statement with
      SQL_ID "f64qufxuu0r5g" is responsible for 100% of "User I/O" and
      "Cluster" waits for this segment.
   Rationale
      The I/O usage statistics for the object are: 18 full object scans,
      36784620 physical reads, 0 physical writes and 0 direct reads.

   Symptoms That Led to the Finding:
   ---------------------------------
      Wait class "User I/O" was consuming significant database time.
      Impact is .25 active sessions, 19.52% of total activity.


Finding 3: Undersized Redo Log Buffer
Impact is .13 active sessions, 10.23% of total activity.
--------------------------------------------------------
Waits for redo log buffer space were consuming significant database time.

   Recommendation 1: Host Configuration
   Estimated benefit is .13 active sessions, 10.23% of total activity.
   -------------------------------------------------------------------
   Action
      Investigate the possibility of improving the performance of I/O to the
      online redo log files.
   Rationale
      The average size of writes to the online redo log files was 1590 K and
      the average time per write was 129 milliseconds.

   Symptoms That Led to the Finding:
   ---------------------------------
      Wait class "Configuration" was consuming significant database time.
      Impact is .23 active sessions, 17.94% of total activity.


Finding 4: Log File Switches
Impact is .1 active sessions, 7.59% of total activity.
------------------------------------------------------
Log file switch operations were consuming significant database time while
waiting for checkpoint completion.
This problem can be caused by use of hot backup mode on tablespaces.  DML to
tablespaces in hot backup mode causes generation of additional redo.

   Recommendation 1: Database Configuration
   Estimated benefit is .1 active sessions, 7.59% of total activity.
   -----------------------------------------------------------------
   Action
      Verify whether incremental shipping was used for standby databases.

   Recommendation 2: Database Configuration
   Estimated benefit is .1 active sessions, 7.59% of total activity.
   -----------------------------------------------------------------
   Action
      Increase the size of the log files to 2048 M to hold at least 20 minutes
      of redo information.

   Symptoms That Led to the Finding:
   ---------------------------------
      Wait class "Configuration" was consuming significant database time.
      Impact is .23 active sessions, 17.94% of total activity.


Finding 5: Buffer Busy - Hot Objects
Impact is .06 active sessions, 4.37% of total activity.
-------------------------------------------------------
Read and write contention on database blocks was consuming significant
database time.

   Recommendation 1: Schema Changes
   Estimated benefit is 0 active sessions, .35% of total activity.
   ---------------------------------------------------------------
   Action
      Consider using ORACLE's recommended solution of automatic segment space
      management in a locally managed tablespace for the tablespace "SYSTEM"
      containing the TABLE "SYS.AUD$" with object ID 407. Alternatively, you
      can move this object to a different tablespace that is locally managed
      with automatic segment space management.
      Related Object
         Database object with ID 407.
   Rationale
      There was significant read and write contention on TABLE "SYS.AUD$" with
      object ID 407.
      Related Object
         Database object with ID 407.

   Recommendation 2: Schema Changes
   Estimated benefit is 0 active sessions, .35% of total activity.
   ---------------------------------------------------------------
   Action
      Consider partitioning the TABLE "SYS.AUD$" with object ID 407 in a
      manner that will evenly distribute concurrent DML across multiple
      partitions.
      Related Object
         Database object with ID 407.

   Recommendation 3: Schema Changes
   Estimated benefit is 0 active sessions, .35% of total activity.
   ---------------------------------------------------------------
   Action
      A temporary solution may be achieved by increasing the number of free
      lists in segment "SYS.AUD$".
      Related Object
         Database object with ID 407.
   Action
      A temporary solution may be achieved by increasing the number of free
      list groups in segment "SYS.AUD$".
      Related Object
         Database object with ID 407.
   Rationale
      There was significant read and write contention on TABLE "SYS.AUD$" with
      object ID 407.
      Related Object
         Database object with ID 407.

   Symptoms That Led to the Finding:
   ---------------------------------
      Read and write contention on database blocks was consuming significant
      database time.
      Impact is .06 active sessions, 4.37% of total activity.
         Wait class "Concurrency" was consuming significant database time.
         Impact is .06 active sessions, 4.38% of total activity.


Finding 6: Commits and Rollbacks
Impact is .04 active sessions, 2.72% of total activity.
-------------------------------------------------------
Waits on event "log file sync" while performing COMMIT and ROLLBACK operations
were consuming significant database time.

   Recommendation 1: Host Configuration
   Estimated benefit is .04 active sessions, 2.72% of total activity.
   ------------------------------------------------------------------
   Action
      Investigate the possibility of improving the performance of I/O to the
      online redo log files.
   Rationale
      The average size of writes to the online redo log files was 1590 K and
      the average time per write was 129 milliseconds.
   Rationale
      The total I/O throughput on redo log files was 0 K per second for reads
      and 2.9 M per second for writes.
   Rationale
      The redo log I/O throughput was divided as follows: 0% by RMAN and
      recovery, 100% by Log Writer, 0% by Archiver, 0% by Streams AQ and 0% by
      all other activity.

   Symptoms That Led to the Finding:
   ---------------------------------
      Wait class "Commit" was consuming significant database time.
      Impact is .04 active sessions, 2.72% of total activity.


Finding 7: "Network" Wait Class
Impact is .03 active sessions, 2.24% of total activity.
-------------------------------------------------------
Wait class "Network" was consuming significant database time.

   No recommendations are available.



~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

          Additional Information
          ----------------------

Miscellaneous Information
-------------------------
Wait class "Application" was not consuming significant database time.
CPU was not a bottleneck for the instance.
Session connect and disconnect calls were not consuming significant database
time.
Hard parsing of SQL statements was not consuming significant database time.

使用dbms_advisor优化物化视图

为了快速刷新与查询重写优化物化视图
dbms_mview包有多个过程可以帮助你创建物化视图,并针对快速刷新与查询重写进行优化。explain_mview过程可以告诉你一个物化视图是否可以快速刷新或满足一般的查询重写,而explain_rewrite过程会告诉你查询重写是否会被执行。它们同时还会告诉你如何才能实现快速刷新或查询重写。

为了更好地使用物化视图,dbms_advisor.tune_mview过程会显示如何优化你的create materialized view语句,使其满足其它要求,比如创建物化视图日志,以及为了快速刷新与一般的查询重写而建立等价关系。dbms_advisor.tune_mview过程分析并处理create materialized view语句,生成两组输出:一组用于实现物化视图,另一组用于撤消创建物化视图的操作。两组输出结果可以通过Oracle视图访问,或由SQL Access Advisor生成外部脚本文件,这些外部脚本文件可以用来执行以实现物化视图。

使用dbms_advisor.tune_mview过程,你不再需要为了创建物化视图而深入理解物化视图的细节,因为物化视图与它所需要的组件(比如物化视图日志)都会通过这个过程正确地创建。

dbms_advisor.tune_mview的语法与操作
下面是dbms_advisor.tune_mview的语法:

dbms_advisor.tune_mview(task_name in out varchar2, mv_create_stmt in [clob | varchar2])

dbms_advisor.tune_mview有两个输入参数:task_name与mv_create_stmt。task_name是一个用户提供的任务标识,用来访问输出结果。mv_create_stmt是一个要被优化的完整的create materialized view语句。如果输入的create materialized view语句没有包含refresh fast或enable query rewrite子句(或者两者都没有),dbms_advisor.tune_mview将采用缺省子句refresh force与disable query rewrite,并优化语句,使其能够快速刷新,或者在无法快速刷新时采用完全刷新。
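下面是一个调用dbms_advisor.tune_mview的示意性例子,其中的任务名、表sales及其列均为假设,仅用于说明调用方式:

SQL> -- 示意性例子:对一个希望快速刷新的物化视图定义执行优化分析
SQL> -- 任务名MV_TUNE_TASK、表sales与列prod_id/amount_sold均为假设
SQL> var tname varchar2(30)
SQL> begin
  2    :tname := 'MV_TUNE_TASK';
  3    dbms_advisor.tune_mview(:tname,
  4      'create materialized view sales_mv
  5         refresh fast
  6         as select prod_id, sum(amount_sold)
  7              from sales group by prod_id');
  8  end;
  9  /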

dbms_advisor.tune_mview handles a broad range of create materialized view statements: the defining query can be almost anything, from a simple select to a complex query with set operators or inline views. When the statement includes the refresh fast clause, dbms_advisor.tune_mview analyzes the query and checks whether it meets the fast refresh requirements. If the materialized view already satisfies them, the procedure returns a message saying that the materialized view is already optimal and no tuning will be performed; otherwise, it starts the tuning operation on the given statement.

To make a materialized view satisfy the fast refresh requirements, dbms_advisor.tune_mview may produce a corrected defining query, for example by adding extra columns such as aggregate columns, or by fixing the materialized view logs. When the materialized view uses a complex query, dbms_advisor.tune_mview may decompose the query and generate two or more fast refreshable materialized views, or redefine the materialized view, to meet the fast refresh requirements. dbms_advisor.tune_mview supports the following complex query constructs:
.Set operators (union, union all, minus, and intersect)
.count distinct
.select distinct
.Inline views

When the defining query specifies the enable query rewrite clause, dbms_advisor.tune_mview also fixes the statement, in a way similar to how it handles refresh fast, by redefining the materialized view so that as many of the more advanced forms of query rewrite as possible can be used.

dbms_advisor.tune_mview generates its two sets of output as executable statements. One set implements the materialized view and the components it requires, such as materialized view logs and a rewrite equivalence between the rewritten statement and the original statement, making fast refresh and query rewrite possible. The other set drops the materialized view and the rewrite equivalence to the original statement.

The statements in the implementation output include:
.create materialized view log statements: create the materialized view logs that are missing for fast refresh
.alter materialized view log force statements: fix any filter columns, sequence numbers, and so on that are missing from existing materialized view logs for fast refresh
.One or more create materialized view statements: for a single-block query, one statement is output that is a direct rewrite and transformation of the original query. A simple transformation may adjust the selected columns, for example adding a rowid column for a materialized join view or aggregate columns for a materialized aggregate view. When a complex statement is decomposed, multiple create materialized view statements are generated from the original statement, and a new top-level materialized view references one or more submaterialized views, forming a nested materialized view. This achieves fast refresh and query rewrite as far as possible.
.A build_safe_rewrite_equivalence statement: declares the rewrite of the top-level query in terms of the submaterialized views. It is required to enable query rewrite when decomposition occurs.

Note that the submaterialized views produced by decomposition are not shared: dbms_advisor.tune_mview always creates new submaterialized views and never references existing materialized views.

The statements in the undo output include:
.drop materialized view statements to undo the materialized views created by the implementation output.
.A drop_rewrite_equivalence statement to remove the equivalence to the original query created by the implementation output.

Note that the undo output does not include statements to drop the materialized view logs. This is because a materialized view log can be shared by many different materialized views, some of which may reside in remote databases.
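
If a materialized view log really is no longer needed, it can be dropped manually. The sketch below uses the sh.sales log from the examples that follow; verify first that no other materialized view, local or remote, still depends on the log:

```sql
-- Drop only after confirming no other materialized view uses this log.
DROP MATERIALIZED VIEW LOG ON sh.sales;
```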

Accessing the dbms_advisor.tune_mview output
There are two ways to access the dbms_advisor.tune_mview output:
.Use the dbms_advisor.get_task_script function and the dbms_advisor.create_file procedure to generate scripts.
.Use the user_tune_mview or dba_tune_mview views.

user_tune_mview and dba_tune_mview views
After tune_mview executes, the output is inserted into the SQL Access Advisor repository tables and can be accessed through the Oracle views user_tune_mview and dba_tune_mview.

Generating scripts with dbms_advisor functions and procedures
The simplest way to turn the recommendations into a script is to use the dbms_advisor.get_task_script function. Here is a simple example. First, create a directory in which to store the script files:

create directory tune_results as '/tmp/script_dir';
grant read, write on directory tune_results to public;

Next, generate the implementation and undo scripts and store them in /tmp/script_dir/mv_create.sql and /tmp/script_dir/mv_undo.sql respectively.

execute dbms_advisor.create_file(dbms_advisor.get_task_script(:task_name), 'TUNE_RESULTS', 'mv_create.sql');
execute dbms_advisor.create_file(dbms_advisor.get_task_script(:task_name, 'UNDO'), 'TUNE_RESULTS', 'mv_undo.sql');

The following examples show how to tune materialized views with the dbms_advisor.tune_mview procedure.
1. Tuning the defining query of a fast refreshable materialized view
This example shows how dbms_advisor.tune_mview changes the defining query of a materialized view so that it satisfies the fast refresh requirements. The create materialized view statement is defined in the variable create_mv_ddl and contains a refresh fast clause. Its defining query contains a single query block with an aggregate column, sum(s.amount_sold), which as written does not support fast refresh. Running dbms_advisor.tune_mview against this statement produces recommendations that make the materialized view fast refreshable.

SQL> variable task_cust_mv varchar2(30);
SQL> variable create_mv_ddl varchar2(4000);
SQL> execute :task_cust_mv := 'cust_mv';

PL/SQL procedure successfully completed.

SQL> execute :create_mv_ddl := 'create materialized view cust_mv refresh fast disable query rewrite as select s.prod_id, s.cust_id, sum(s.amount_sold) sum_amount from sales s, customers cs where s.cust_id = cs.cust_id group by s.prod_id, s.cust_id';

PL/SQL procedure successfully completed.

SQL> execute dbms_advisor.tune_mview(:task_cust_mv, :create_mv_ddl);

PL/SQL procedure successfully completed.

View the dbms_advisor.tune_mview output by generating script files with the dbms_advisor.get_task_script function and the dbms_advisor.create_file procedure:

Create a directory in which to store the script files:

SQL> create directory tune_results as '/backup';

Directory created.

SQL> grant read, write on directory tune_results to public;

Grant succeeded.

Generate the script files:

SQL> execute dbms_advisor.create_file(dbms_advisor.get_task_script ('cust_mv'),'TUNE_RESULTS', 'jy_mv_create.sql');

PL/SQL procedure successfully completed.

SQL> execute dbms_advisor.create_file(dbms_advisor.get_task_script ('cust_mv','UNDO'),'TUNE_RESULTS', 'jy_mv_undo_create.sql');

PL/SQL procedure successfully completed.

View the script file contents:

[root@weblogic28 backup]# cat jy_mv_create.sql
Rem  SQL Access Advisor: Version 10.2.0.5.0 - Production
Rem
Rem  Username:        SH
Rem  Task:            cust_mv
Rem  Execution date:
Rem

CREATE MATERIALIZED VIEW LOG ON
    "SH"."SALES"
    WITH ROWID, SEQUENCE("PROD_ID","CUST_ID","AMOUNT_SOLD")
    INCLUDING NEW VALUES;

ALTER MATERIALIZED VIEW LOG FORCE ON
    "SH"."SALES"
    ADD ROWID, SEQUENCE("PROD_ID","CUST_ID","AMOUNT_SOLD")
    INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW LOG ON
    "SH"."CUSTOMERS"
    WITH ROWID, SEQUENCE("CUST_ID")
    INCLUDING NEW VALUES;

ALTER MATERIALIZED VIEW LOG FORCE ON
    "SH"."CUSTOMERS"
    ADD ROWID, SEQUENCE("CUST_ID")
    INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW SH.CUST_MV
    REFRESH FAST WITH ROWID
    DISABLE QUERY REWRITE
    AS SELECT SH.SALES.CUST_ID C1, SH.SALES.PROD_ID C2, SUM("SH"."SALES"."AMOUNT_SOLD")
       M1, COUNT("SH"."SALES"."AMOUNT_SOLD") M2, COUNT(*) M3 FROM SH.CUSTOMERS,
       SH.SALES WHERE SH.SALES.CUST_ID = SH.CUSTOMERS.CUST_ID GROUP BY SH.SALES.CUST_ID,
       SH.SALES.PROD_ID;

Alternatively, use the user_tune_mview or dba_tune_mview views to view the dbms_advisor.tune_mview output:

SQL> set long 99999
SQL> col statement for a200
SQL> select statement from dba_tune_mview where owner='SH' and task_name='cust_mv' and  script_type='IMPLEMENTATION';

STATEMENT
---------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------- ------------------------
CREATE MATERIALIZED VIEW LOG ON "SH"."SALES" WITH ROWID, SEQUENCE  ("PROD_ID","CUST_ID","AMOUNT_SOLD")  INCLUDING NEW VALUES
ALTER MATERIALIZED VIEW LOG FORCE ON "SH"."SALES" ADD ROWID, SEQUENCE  ("PROD_ID","CUST_ID","AMOUNT_SOLD")  INCLUDING NEW VALUES
CREATE MATERIALIZED VIEW LOG ON "SH"."CUSTOMERS" WITH ROWID, SEQUENCE ("CUST_ID")   INCLUDING NEW VALUES
ALTER MATERIALIZED VIEW LOG FORCE ON "SH"."CUSTOMERS" ADD ROWID, SEQUENCE ("CUST_ID")   INCLUDING NEW VALUES
CREATE MATERIALIZED VIEW SH.CUST_MV   REFRESH FAST WITH ROWID DISABLE QUERY REWRITE AS  SELECT SH.SALES.CUST_ID C1, SH.SALES.PROD_ID C2, SUM("SH"."SALES"."AMOUNT_SOLD") M1,  COUNT("SH"."SALES"."AMOUNT_S
OLD") M2, COUNT(*) M3 FROM SH.CUSTOMERS, SH.SALES WHERE SH.SALES.CUST_ID =  SH.CUSTOMERS.CUST_ID GROUP BY SH.SALES.CUST_ID, SH.SALES.PROD_ID



The dbms_advisor.tune_mview output contains the tuned defining query of the materialized view, as follows:

CREATE MATERIALIZED VIEW SH.CUST_MV
    REFRESH FAST WITH ROWID
    DISABLE QUERY REWRITE
    AS SELECT SH.SALES.CUST_ID C1, SH.SALES.PROD_ID C2, SUM("SH"."SALES"."AMOUNT_SOLD")
       M1, COUNT("SH"."SALES"."AMOUNT_SOLD") M2, COUNT(*) M3 FROM SH.CUSTOMERS,
       SH.SALES WHERE SH.SALES.CUST_ID = SH.CUSTOMERS.CUST_ID GROUP BY SH.SALES.CUST_ID,
       SH.SALES.PROD_ID;

The undo output of dbms_advisor.tune_mview is as follows:

[root@weblogic28 backup]# cat jy_mv_undo_create.sql
Rem  SQL Access Advisor: Version 10.2.0.5.0 - Production
Rem
Rem  Username:        SH
Rem  Task:            cust_mv
Rem  Execution date:
Rem

DROP MATERIALIZED VIEW SH.CUST_MV;

Alternatively:

SQL> select statement from dba_tune_mview where owner='SH' and task_name='cust_mv' and  script_type='UNDO';

STATEMENT
------------------------------------------------
DROP MATERIALIZED VIEW SH.CUST_MV

2. Creating multiple materialized views to satisfy query rewrite
This example shows a materialized view whose defining query uses the union set operator and therefore does not support query rewrite as written, but can satisfy it after being decomposed into multiple submaterialized views. The base tables referenced by the materialized view (sales, customers, and countries) have no materialized view logs.

First, run dbms_advisor.tune_mview against the create materialized view statement:

SQL> variable task_cust_mv varchar2(30);
SQL> variable create_mv_ddl varchar2(4000);
SQL> execute :task_cust_mv := 'cust_mv_2';

PL/SQL procedure successfully completed.

SQL> execute :create_mv_ddl := 'create materialized view cust_mv enable query rewrite as  select s.prod_id, s.cust_id, count(*) cnt, sum(s.amount_sold) sum_amount from sales s,  customers cs, countries cn where s.cust_id = cs.cust_id and cs.country_id =  cn.country_id and cn.country_name in (''usa'',''canada'') group by s.prod_id, s.cust_id  union select s.prod_id, s.cust_id, count(*) cnt, sum(s.amount_sold) sum_amount from  sales s, customers cs where s.cust_id = cs.cust_id and s.cust_id in (1005,1010,1012)  group by s.prod_id, s.cust_id';

PL/SQL procedure successfully completed.

SQL> execute dbms_advisor.tune_mview(:task_cust_mv, :create_mv_ddl);

PL/SQL procedure successfully completed.

View the implementation output of dbms_advisor.tune_mview:

SQL> execute dbms_advisor.create_file(dbms_advisor.get_task_script ('cust_mv_2'),'TUNE_RESULTS', 'jy_mv_2_create.sql');

PL/SQL procedure successfully completed.

SQL> execute dbms_advisor.create_file(dbms_advisor.get_task_script ('cust_mv_2','UNDO'),'TUNE_RESULTS', 'jy_mv_2_undo_create.sql');

PL/SQL procedure successfully completed.

[root@localhost backup]# cat jy_mv_2_create.sql
Rem  SQL Access Advisor: Version 11.2.0.4.0 - Production
Rem
Rem  Username:        SH
Rem  Task:            cust_mv_2
Rem  Execution date:
Rem

CREATE MATERIALIZED VIEW LOG ON
    "SH"."COUNTRIES"
    WITH ROWID, SEQUENCE("COUNTRY_ID","COUNTRY_NAME")
    INCLUDING NEW VALUES;

ALTER MATERIALIZED VIEW LOG FORCE ON
    "SH"."COUNTRIES"
    ADD ROWID, SEQUENCE("COUNTRY_ID","COUNTRY_NAME")
    INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW LOG ON
    "SH"."SALES"
    WITH ROWID, SEQUENCE("PROD_ID","CUST_ID","AMOUNT_SOLD")
    INCLUDING NEW VALUES;

ALTER MATERIALIZED VIEW LOG FORCE ON
    "SH"."SALES"
    ADD ROWID, SEQUENCE("PROD_ID","CUST_ID","AMOUNT_SOLD")
    INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW LOG ON
    "SH"."CUSTOMERS"
    WITH ROWID, SEQUENCE("CUST_ID","COUNTRY_ID")
    INCLUDING NEW VALUES;

ALTER MATERIALIZED VIEW LOG FORCE ON
    "SH"."CUSTOMERS"
    ADD ROWID, SEQUENCE("CUST_ID","COUNTRY_ID")
    INCLUDING NEW VALUES;

ALTER MATERIALIZED VIEW LOG FORCE ON
    "SH"."SALES"
    ADD ROWID, SEQUENCE("PROD_ID","CUST_ID","AMOUNT_SOLD")
    INCLUDING NEW VALUES;

ALTER MATERIALIZED VIEW LOG FORCE ON
    "SH"."CUSTOMERS"
    ADD ROWID, SEQUENCE("CUST_ID")
    INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW SH.CUST_MV$SUB1
    REFRESH FAST WITH ROWID ON COMMIT
    ENABLE QUERY REWRITE
    AS SELECT SH.COUNTRIES.COUNTRY_NAME C1, SH.SALES.CUST_ID C2, SH.SALES.PROD_ID C3,
       SUM("SH"."SALES"."AMOUNT_SOLD") M1, COUNT("SH"."SALES"."AMOUNT_SOLD")
       M2, COUNT(*) M3 FROM SH.CUSTOMERS, SH.SALES, SH.COUNTRIES WHERE SH.SALES.CUST_ID
       = SH.CUSTOMERS.CUST_ID AND SH.COUNTRIES.COUNTRY_ID = SH.CUSTOMERS.COUNTRY_ID
       AND (SH.COUNTRIES.COUNTRY_NAME IN ('usa', 'canada')) GROUP BY  SH.COUNTRIES.COUNTRY_NAME,
       SH.SALES.CUST_ID, SH.SALES.PROD_ID;

CREATE MATERIALIZED VIEW SH.CUST_MV$SUB2
    REFRESH FAST WITH ROWID ON COMMIT
    ENABLE QUERY REWRITE
    AS SELECT SH.SALES.CUST_ID C1, SH.SALES.PROD_ID C2, SUM("SH"."SALES"."AMOUNT_SOLD")
       M1, COUNT("SH"."SALES"."AMOUNT_SOLD") M2, COUNT(*) M3 FROM SH.CUSTOMERS,
       SH.SALES WHERE SH.SALES.CUST_ID = SH.CUSTOMERS.CUST_ID AND (SH.SALES.CUST_ID
       IN (1012, 1010, 1005)) GROUP BY SH.SALES.CUST_ID, SH.SALES.PROD_ID;

CREATE MATERIALIZED VIEW SH.CUST_MV
    REFRESH FORCE WITH ROWID
    ENABLE QUERY REWRITE
    AS (SELECT "CUST_MV$SUB1"."C3" "PROD_ID","CUST_MV$SUB1"."C2" "CUST_ID",SUM("CUST_MV $SUB1"."M3")
       "CNT",SUM("CUST_MV$SUB1"."M1") "SUM_AMOUNT" FROM "SH"."CUST_MV$SUB1" "CUST_MV $SUB1"
       GROUP BY "CUST_MV$SUB1"."C3","CUST_MV$SUB1"."C2")UNION (SELECT "CUST_MV $SUB2"."C2"
       "PROD_ID","CUST_MV$SUB2"."C1" "CUST_ID","CUST_MV$SUB2"."M3" "CNT","CUST_MV $SUB2"."M1"
       "SUM_AMOUNT" FROM "SH"."CUST_MV$SUB2" "CUST_MV$SUB2");

begin
  sys.dbms_advanced_rewrite.build_safe_rewrite_equivalence('SH.CUST_MV$RWEQ','select  s.prod_id, s.cust_id, count(*) cnt, sum(s.amount_sold) sum_amount from sales s,  customers cs, countries cn where s.cust_id = cs.cust_id and cs.country_id =  cn.country_id and cn.country_name in (''usa'',''canada'') group by s.prod_id, s.cust_id  union select s.prod_id, s.cust_id, count(*) cnt, sum(s.amount_sold) sum_amount from  sales s, customers cs where s.cust_id = cs.cust_id and s.cust_id in (1005,1010,1012)  group by s.prod_id, s.cust_id',' (SELECT "CUST_MV$SUB1"."C3" "PROD_ID","CUST_MV $SUB1"."C2" "CUST_ID",SUM("CUST_MV$SUB1"."M3") "CNT",SUM("CUST_MV$SUB1"."M1")  "SUM_AMOUNT" FROM "SH"."CUST_MV$SUB1" "CUST_MV$SUB1" GROUP BY "CUST_MV $SUB1"."C3","CUST_MV$SUB1"."C2")UNION (SELECT "CUST_MV$SUB2"."C2" "PROD_ID","CUST_MV $SUB2"."C1" "CUST_ID","CUST_MV$SUB2"."M3" "CNT","CUST_MV$SUB2"."M1" "SUM_AMOUNT" FROM  "SH"."CUST_MV$SUB2" "CUST_MV$SUB2")',-2066096392);
end
;
/

Alternatively:

SQL> select statement from dba_tune_mview where owner='SH' and task_name='cust_mv_2' and  script_type='IMPLEMENTATION';

STATEMENT
---------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------- ------------------------------------------------
CREATE MATERIALIZED VIEW LOG ON "SH"."COUNTRIES" WITH ROWID, SEQUENCE  ("COUNTRY_ID","COUNTRY_NAME")  INCLUDING NEW VALUES
ALTER MATERIALIZED VIEW LOG FORCE ON "SH"."COUNTRIES" ADD ROWID, SEQUENCE  ("COUNTRY_ID","COUNTRY_NAME")  INCLUDING NEW VALUES
CREATE MATERIALIZED VIEW LOG ON "SH"."SALES" WITH ROWID, SEQUENCE  ("PROD_ID","CUST_ID","AMOUNT_SOLD")  INCLUDING NEW VALUES
ALTER MATERIALIZED VIEW LOG FORCE ON "SH"."SALES" ADD ROWID, SEQUENCE  ("PROD_ID","CUST_ID","AMOUNT_SOLD")  INCLUDING NEW VALUES
CREATE MATERIALIZED VIEW LOG ON "SH"."CUSTOMERS" WITH ROWID, SEQUENCE  ("CUST_ID","COUNTRY_ID")  INCLUDING NEW VALUES
ALTER MATERIALIZED VIEW LOG FORCE ON "SH"."CUSTOMERS" ADD ROWID, SEQUENCE  ("CUST_ID","COUNTRY_ID")  INCLUDING NEW VALUES
ALTER MATERIALIZED VIEW LOG FORCE ON "SH"."SALES" ADD ROWID, SEQUENCE  ("PROD_ID","CUST_ID","AMOUNT_SOLD")  INCLUDING NEW VALUES
ALTER MATERIALIZED VIEW LOG FORCE ON "SH"."CUSTOMERS" ADD ROWID, SEQUENCE ("CUST_ID")   INCLUDING NEW VALUES
CREATE MATERIALIZED VIEW SH.CUST_MV$SUB1   REFRESH FAST WITH ROWID ON COMMIT ENABLE  QUERY REWRITE AS SELECT SH.COUNTRIES.COUNTRY_NAME C1, SH.SALES.CUST_ID C2,  SH.SALES.PROD_ID C3, SUM("SH"."SALES"."AMOUNT_SOLD") M1, COUNT ("SH"."SALES"."AMOUNT_SOLD") M2, COUNT(*) M3 FROM SH.CUSTOMERS, SH.SALES, SH.COUNTRIES  WHERE SH.SALES.CUST_ID = SH.CUSTOMERS.CUST_ID AND SH.COUNTRIES.COUNTRY_ID =  SH.CUSTOMERS.COU
NTRY_ID AND (SH.COUNTRIES.COUNTRY_NAME IN ('usa', 'canada')) GROUP BY  SH.COUNTRIES.COUNTRY_NAME, SH.SALES.CUST_ID, SH.SALES.PROD_ID

CREATE MATERIALIZED VIEW SH.CUST_MV$SUB2   REFRESH FAST WITH ROWID ON COMMIT ENABLE  QUERY REWRITE AS SELECT SH.SALES.CUST_ID C1, SH.SALES.PROD_ID C2, SUM ("SH"."SALES"."AMOUNT_SOLD") M1, COUNT("SH"."SALES"."AMOUNT_SOLD") M2, COUNT(*) M3 FROM  SH.CUSTOMERS, SH.SALES WHERE SH.SALES.CUST_ID = SH.CUSTOMERS.CUST_ID AND  (SH.SALES.CUST_ID IN (1012, 1010, 1005)) GROUP BY SH.SALES.CUST_ID, SH.SALES.PROD_ID
CREATE MATERIALIZED VIEW SH.CUST_MV   REFRESH FORCE WITH ROWID ENABLE QUERY REWRITE AS   (SELECT "CUST_MV$SUB1"."C3" "PROD_ID","CUST_MV$SUB1"."C2" "CUST_ID",SUM("CUST_MV $SUB1"."M3") "CNT",SUM("CUST_MV$SUB1"."M1") "SUM_AMOUNT" FROM "SH"."CUST_MV$SUB1"  "CUST_MV$SUB1" GROUP BY "CUST_MV$SUB1"."C3","CUST_MV$SUB1"."C2")UNION (SELECT "CUST_MV $SUB2"."C2" "PROD_ID","CUST_MV$SUB2"."C1" "CUST_ID","CUST_MV$SUB
2"."M3" "CNT","CUST_MV$SUB2"."M1" "SUM_AMOUNT" FROM "SH"."CUST_MV$SUB2" "CUST_MV$SUB2")

DBMS_ADVANCED_REWRITE.BUILD_SAFE_REWRITE_EQUIVALENCE ('SH.CUST_MV$RWEQ','select  s.prod_id, s.cust_id, count(*) cnt, sum(s.amount_sold) sum_amount from sales s,  customers cs, countries cn where s.cust_id = cs.cust_id and cs.country_id =  cn.country_id and cn.country_name in ('usa','canada') group by s.prod_id, s.cust_id  union select s.prod_id, s.cust_id, count(*) cnt, sum(s.amount_sold) sum_amount f
rom sales s, customers cs where s.cust_id = cs.cust_id and s.cust_id in (1005,1010,1012)  group by s.prod_id, s.cust_id',' (SELECT "CUST_MV$SUB1"."C3" "PROD_ID","CUST_MV $SUB1"."C2" "CUST_ID",SUM("CUST_MV$SUB1"."M3") "CNT",SUM("CUST_MV$SUB1"."M1")  "SUM_AMOUNT" FROM "SH"."CUST_MV$SUB1" "CUST_MV$SUB1" GROUP BY "CUST_MV $SUB1"."C3","CUST_MV$SUB1"."C2")UNION (SELECT "CUST_MV$SUB2"."C2" "PROD_ID","CUST_MV
$SUB2"."C1" "CUST_ID","CUST_MV$SUB2"."M3" "CNT","CUST_MV$SUB2"."M1" "SUM_AMOUNT" FROM  "SH"."CUST_MV$SUB2" "CUST_MV$SUB2")',-2066096392)

View the undo output of dbms_advisor.tune_mview:

[root@localhost backup]# cat jy_mv_2_undo_create.sql
Rem  SQL Access Advisor: Version 11.2.0.4.0 - Production
Rem
Rem  Username:        SH
Rem  Task:            cust_mv_2
Rem  Execution date:
Rem

DROP MATERIALIZED VIEW SH.CUST_MV$SUB1;

DROP MATERIALIZED VIEW SH.CUST_MV$SUB2;

DROP MATERIALIZED VIEW SH.CUST_MV;

begin
  sys.dbms_advanced_rewrite.drop_rewrite_equivalence('SH.CUST_MV$RWEQ');
end;
/

The defining query of the original materialized view cust_mv has been decomposed into two submaterialized views, cust_mv$sub1 and cust_mv$sub2. An extra count(amount_sold) column has been added to cust_mv$sub1 to make that materialized view fast refreshable.

The defining query of cust_mv now references the two submaterialized views, which together satisfy the fast refresh and query rewrite requirements.

To make the two submaterialized views fast refreshable, materialized view logs must be added to the referenced base tables. Note that two statements are generated per base table: a create materialized view log statement and an alter materialized view log force statement. This ensures that the create script can be run more than once.

The statement created with dbms_advanced_rewrite.build_safe_rewrite_equivalence ties the original defining query to the defining query of the new top-level materialized view. It ensures that query rewrite can take place using the new top-level materialized view.

3. Satisfying fast refresh with an optimized submaterialized view
This example shows how dbms_advisor.tune_mview optimizes a materialized view to satisfy the fast refresh requirements. Here the defining query uses a set operator and is transformed into one submaterialized view and a new top-level materialized view.

Because the defining query contains a union set operator, the materialized view itself cannot be fast refreshed. However, the two subqueries in the query can be combined into a single query.

SQL> variable task_cust_mv varchar2(30);
SQL> variable create_mv_ddl varchar2(4000);
SQL> execute :task_cust_mv := 'cust_mv3';

PL/SQL procedure successfully completed.

SQL> execute :create_mv_ddl := 'create materialized view cust_mv refresh fast on demand  enable query rewrite as select s.prod_id, s.cust_id, count(*) cnt, sum(s.amount_sold)  sum_amount from sales s, customers cs where s.cust_id = cs.cust_id and s.cust_id in  (2005,1020) group by s.prod_id, s.cust_id union select s.prod_id, s.cust_id, count(*)  cnt, sum(s.amount_sold) sum_amount from sales s, customers cs where s.cust_id =  cs.cust_id and s.cust_id in (1005,1010,1012) group by s.prod_id, s.cust_id';

PL/SQL procedure successfully completed.



SQL> execute dbms_advisor.tune_mview(:task_cust_mv, :create_mv_ddl);

PL/SQL procedure successfully completed.

View the implementation output of dbms_advisor.tune_mview.
The recommendation below uses one optimized submaterialized view that combines the two subqueries, plus a new top-level materialized view that references the submaterialized view:

SQL> execute dbms_advisor.create_file(dbms_advisor.get_task_script ('cust_mv3'),'TUNE_RESULTS', 'jy_mv3_create.sql');

PL/SQL procedure successfully completed.



SQL> execute dbms_advisor.create_file(dbms_advisor.get_task_script ('cust_mv3','UNDO'),'TUNE_RESULTS', 'jy_mv3_undo_create.sql');

PL/SQL procedure successfully completed.


[root@localhost backup]# cat jy_mv3_create.sql
Rem  SQL Access Advisor: Version 11.2.0.4.0 - Production
Rem
Rem  Username:        SH
Rem  Task:            cust_mv3
Rem  Execution date:
Rem

CREATE MATERIALIZED VIEW LOG ON
    "SH"."SALES"
    WITH ROWID, SEQUENCE("PROD_ID","CUST_ID","AMOUNT_SOLD")
    INCLUDING NEW VALUES;

ALTER MATERIALIZED VIEW LOG FORCE ON
    "SH"."SALES"
    ADD ROWID, SEQUENCE("PROD_ID","CUST_ID","AMOUNT_SOLD")
    INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW LOG ON
    "SH"."CUSTOMERS"
    WITH ROWID, SEQUENCE("CUST_ID")
    INCLUDING NEW VALUES;

ALTER MATERIALIZED VIEW LOG FORCE ON
    "SH"."CUSTOMERS"
    ADD ROWID, SEQUENCE("CUST_ID")
    INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW SH.CUST_MV$SUB1
    REFRESH FAST WITH ROWID ON COMMIT
    ENABLE QUERY REWRITE
    AS SELECT SH.SALES.CUST_ID C1, SH.SALES.PROD_ID C2, SUM("SH"."SALES"."AMOUNT_SOLD")
       M1, COUNT("SH"."SALES"."AMOUNT_SOLD") M2, COUNT(*) M3 FROM SH.CUSTOMERS,
       SH.SALES WHERE SH.SALES.CUST_ID = SH.CUSTOMERS.CUST_ID AND (SH.SALES.CUST_ID
       IN (2005, 1020, 1012, 1010, 1005)) GROUP BY SH.SALES.CUST_ID, SH.SALES.PROD_ID;

CREATE MATERIALIZED VIEW SH.CUST_MV
    REFRESH FORCE WITH ROWID
    ENABLE QUERY REWRITE
    AS (SELECT "CUST_MV$SUB1"."C2" "PROD_ID","CUST_MV$SUB1"."C1" "CUST_ID","CUST_MV $SUB1"."M3"
       "CNT","CUST_MV$SUB1"."M1" "SUM_AMOUNT" FROM "SH"."CUST_MV$SUB1" "CUST_MV$SUB1"
       WHERE "CUST_MV$SUB1"."C1"=2005 OR "CUST_MV$SUB1"."C1"=1020)UNION (SELECT
       "CUST_MV$SUB1"."C2" "PROD_ID","CUST_MV$SUB1"."C1" "CUST_ID","CUST_MV$SUB1"."M3"
       "CNT","CUST_MV$SUB1"."M1" "SUM_AMOUNT" FROM "SH"."CUST_MV$SUB1" "CUST_MV$SUB1"
       WHERE "CUST_MV$SUB1"."C1"=1012 OR "CUST_MV$SUB1"."C1"=1010 OR "CUST_MV $SUB1"."C1"=1005);

begin
  sys.dbms_advanced_rewrite.build_safe_rewrite_equivalence('SH.CUST_MV$RWEQ','select  s.prod_id, s.cust_id, count(*) cnt, sum(s.amount_sold) sum_amount from sales s,  customers cs where s.cust_id = cs.cust_id and s.cust_id in (2005,1020) group by  s.prod_id, s.cust_id union select s.prod_id, s.cust_id, count(*) cnt, sum(s.amount_sold)  sum_amount from sales s, customers cs where s.cust_id = cs.cust_id and s.cust_id in  (1005,1010,1012) group by s.prod_id, s.cust_id',' (SELECT "CUST_MV$SUB1"."C2"  "PROD_ID","CUST_MV$SUB1"."C1" "CUST_ID","CUST_MV$SUB1"."M3" "CNT","CUST_MV$SUB1"."M1"  "SUM_AMOUNT" FROM "SH"."CUST_MV$SUB1" "CUST_MV$SUB1" WHERE "CUST_MV$SUB1"."C1"=2005 OR  "CUST_MV$SUB1"."C1"=1020)UNION (SELECT "CUST_MV$SUB1"."C2" "PROD_ID","CUST_MV$SUB1"."C1"  "CUST_ID","CUST_MV$SUB1"."M3" "CNT","CUST_MV$SUB1"."M1" "SUM_AMOUNT" FROM "SH"."CUST_MV $SUB1" "CUST_MV$SUB1" WHERE "CUST_MV$SUB1"."C1"=1012 OR "CUST_MV$SUB1"."C1"=1010 OR  "CUST_MV$SUB1"."C1"=1005)',-179817297);
end
;
/

Alternatively:

SQL> select statement from dba_tune_mview where owner='SH' and task_name='cust_mv3' and  script_type='IMPLEMENTATION';

STATEMENT
---------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------- ------------------------------------------------
CREATE MATERIALIZED VIEW LOG ON "SH"."SALES" WITH ROWID, SEQUENCE  ("PROD_ID","CUST_ID","AMOUNT_SOLD")  INCLUDING NEW VALUES
ALTER MATERIALIZED VIEW LOG FORCE ON "SH"."SALES" ADD ROWID, SEQUENCE  ("PROD_ID","CUST_ID","AMOUNT_SOLD")  INCLUDING NEW VALUES
CREATE MATERIALIZED VIEW LOG ON "SH"."CUSTOMERS" WITH ROWID, SEQUENCE ("CUST_ID")   INCLUDING NEW VALUES
ALTER MATERIALIZED VIEW LOG FORCE ON "SH"."CUSTOMERS" ADD ROWID, SEQUENCE ("CUST_ID")   INCLUDING NEW VALUES
CREATE MATERIALIZED VIEW SH.CUST_MV$SUB1   REFRESH FAST WITH ROWID ON COMMIT ENABLE  QUERY REWRITE AS SELECT SH.SALES.CUST_ID C1, SH.SALES.PROD_ID C2, SUM ("SH"."SALES"."AMOUNT_SOLD") M1, COUNT("SH"."SALES"."AMOUNT_SOLD") M2, COUNT(*) M3 FROM  SH.CUSTOMERS, SH.SALES WHERE SH.SALES.CUST_ID = SH.CUSTOMERS.CUST_ID AND  (SH.SALES.CUST_ID IN (2005, 1020, 1012, 1010, 1005)) GROUP BY SH.SALES.CUST_ID, SH.SAL
ES.PROD_ID

CREATE MATERIALIZED VIEW SH.CUST_MV   REFRESH FORCE WITH ROWID ENABLE QUERY REWRITE AS   (SELECT "CUST_MV$SUB1"."C2" "PROD_ID","CUST_MV$SUB1"."C1" "CUST_ID","CUST_MV$SUB1"."M3"  "CNT","CUST_MV$SUB1"."M1" "SUM_AMOUNT" FROM "SH"."CUST_MV$SUB1" "CUST_MV$SUB1" WHERE  "CUST_MV$SUB1"."C1"=2005 OR "CUST_MV$SUB1"."C1"=1020)UNION (SELECT "CUST_MV$SUB1"."C2"  "PROD_ID","CUST_MV$SUB1"."C1" "CUST_ID","CUST_MV$SUB
1"."M3" "CNT","CUST_MV$SUB1"."M1" "SUM_AMOUNT" FROM "SH"."CUST_MV$SUB1" "CUST_MV$SUB1"  WHERE "CUST_MV$SUB1"."C1"=1012 OR "CUST_MV$SUB1"."C1"=1010 OR "CUST_MV$SUB1"."C1"=1005)

DBMS_ADVANCED_REWRITE.BUILD_SAFE_REWRITE_EQUIVALENCE ('SH.CUST_MV$RWEQ','select  s.prod_id, s.cust_id, count(*) cnt, sum(s.amount_sold) sum_amount from sales s,  customers cs where s.cust_id = cs.cust_id and s.cust_id in (2005,1020) group by  s.prod_id, s.cust_id union select s.prod_id, s.cust_id, count(*) cnt, sum(s.amount_sold)  sum_amount from sales s, customers cs where s.cust_id = cs.cust_id and
s.cust_id in (1005,1010,1012) group by s.prod_id, s.cust_id',' (SELECT "CUST_MV $SUB1"."C2" "PROD_ID","CUST_MV$SUB1"."C1" "CUST_ID","CUST_MV$SUB1"."M3" "CNT","CUST_MV $SUB1"."M1" "SUM_AMOUNT" FROM "SH"."CUST_MV$SUB1" "CUST_MV$SUB1" WHERE "CUST_MV $SUB1"."C1"=2005 OR "CUST_MV$SUB1"."C1"=1020)UNION (SELECT "CUST_MV$SUB1"."C2"  "PROD_ID","CUST_MV$SUB1"."C1" "CUST_ID","CUST_MV$SUB1"."M3" "CNT","CUST_MV$SU
B1"."M1" "SUM_AMOUNT" FROM "SH"."CUST_MV$SUB1" "CUST_MV$SUB1" WHERE "CUST_MV $SUB1"."C1"=1012 OR "CUST_MV$SUB1"."C1"=1010 OR "CUST_MV$SUB1"."C1"=1005)',-179817297)

The original cust_mv query has been optimized into a single submaterialized view, cust_mv$sub1, that combines the two subqueries along with their predicates. To make the submaterialized view fast refreshable, materialized view logs are added to the referenced base tables.

View the undo output of dbms_advisor.tune_mview:

[root@localhost backup]# cat jy_mv3_undo_create.sql
Rem  SQL Access Advisor: Version 11.2.0.4.0 - Production
Rem
Rem  Username:        SH
Rem  Task:            cust_mv3
Rem  Execution date:
Rem

DROP MATERIALIZED VIEW SH.CUST_MV$SUB1;

DROP MATERIALIZED VIEW SH.CUST_MV;

begin
  sys.dbms_advanced_rewrite.drop_rewrite_equivalence('SH.CUST_MV$RWEQ');
end;
/

Running SQL Access Advisor with dbms_advisor

Running SQL Access Advisor through dbms_advisor consists of the following main steps:
1. Create a task
2. Define the workload
3. Generate the recommendations
4. View and implement the recommendations

Step 1: Create a task
Before recommendations can be generated, a task must be created. The task is important because it is where all information related to the recommendation process lives, including the results of that process. If you use the dbms_advisor.quick_tune procedure, the task is created automatically; in all other cases, you must create the task manually with the dbms_advisor.create_task procedure. You can control how the task executes by defining parameters with the dbms_advisor.set_task_parameter procedure.

Step 2: Define the workload
The workload is the primary input to SQL Access Advisor. It consists of one or more SQL statements, plus the various statistics and attributes that fully describe each statement. If the workload contains all the SQL statements of a target business system, it is considered a full workload; if it contains only a subset of them, it is a partial workload. The difference is that with a full workload, SQL Access Advisor may recommend dropping certain existing materialized views and indexes if it finds they are not being used effectively.

In general, SQL Access Advisor uses the workload as the basis for all of its analysis activity. Although a workload may contain a wide variety of SQL statements, it carefully ranks the entries by specific statistics, business importance, or a combination of the two. This ranking is critical because it lets SQL Access Advisor process the most important SQL statements with the least business impact.

For a set of data to be considered a valid workload, SQL Access Advisor may require specific attributes to be present. Analysis can still proceed if some of them are missing, but the quality of the recommendations is greatly reduced. For example, SQL Access Advisor requires the workload to contain the SQL query and the user who executed it. All other attributes are optional; however, if the workload also contains I/O and CPU information, SQL Access Advisor can better evaluate the current efficiency of each statement. A workload is stored as a separate object, created by the dbms_advisor.create_sqlwkld procedure, and can easily be shared among multiple advisor tasks. Because the workload is independent, it must be linked to an advisor task with the dbms_advisor.add_sqlwkld_ref procedure. Once this link is established, it cannot be removed until every advisor task has dropped its dependency on the workload. A workload reference is removed when its parent advisor task is deleted, or when the user manually removes it from the task with the dbms_advisor.delete_sqlwkld_ref procedure.

SQL Access Advisor can also be used without a workload; however, for best results a workload should be provided, either as a user-supplied table, as a SQL Tuning Set, or imported from the SQL cache. If no workload is provided, SQL Access Advisor can generate and use a hypothetical workload based on the dimensions defined in your schema.

Once the workload has been loaded into the repository, or while recommendations are being generated, filters can be applied to the workload to restrict what is analyzed. This makes it possible to generate different sets of recommendations from different views of the same workload.

The recommendation process and the customization of the workload are controlled by SQL Access Advisor parameters. Parameters remain in effect for the lifetime of the task or workload object. Once a parameter is set with the dbms_advisor.set_task_parameter procedure, its value does not change until it is reset.

Step 3: Generate the recommendations
Once a task has been created, a workload has been linked to it, and the appropriate parameters have been set, you can generate recommendations with the dbms_advisor.execute_task procedure. The recommendations are stored in the SQL Access Advisor repository.
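
As a minimal sketch (the task name 'mytask' and its workload link are assumed to have been set up as in the examples that follow), generating the recommendations is a single call:

```sql
-- Runs the analysis for the task; results land in the advisor repository.
EXECUTE dbms_advisor.execute_task('mytask');
```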

The recommendation process generates a number of recommendations, each comprising one or more actions; for example, create a materialized view and then analyze it to gather statistics.

A task recommendation can range from a simple suggestion to a complex solution that requires implementing a set of database objects such as indexes, materialized views, and materialized view logs. When an advisor task is executed, it carefully analyzes the collected data and the user-adjusted task parameters. SQL Access Advisor then attempts to form a solution based on its built-in knowledge, and refines that solution into structured recommendations that the user can view and implement.

Step 4: View and implement the recommendations
There are two ways to view the recommendations: through the catalog views, or by generating a script with the dbms_advisor.get_task_script procedure. You do not have to accept every recommendation; you can mark the ones you want to accept in the recommendation script. The final step is to implement the recommendations and verify that query performance has improved.
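
For the catalog-view route, the following is a sketch of a query against user_advisor_actions (the task name is hypothetical) that lists the recommended actions in order:

```sql
-- Each recommendation (rec_id) comprises one or more ordered actions.
SELECT rec_id, action_id, command
FROM   user_advisor_actions
WHERE  task_name = 'MYTASK'
ORDER  BY rec_id, action_id;
```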

SQL Access Advisor repository
All of the information SQL Access Advisor needs and generates is stored in the SQL Access Advisor repository, which is part of the database data dictionary. The benefits of using the repository are:
.A complete workload is collected for SQL Access Advisor
.Historical data is supported
.It is managed by the server

System privileges needed to use SQL Access Advisor
The advisor privilege is required to manage or use SQL Access Advisor. When processing a workload, SQL Access Advisor attempts to validate each statement in order to identify the tables and columns it references. Validation is done by processing each statement as if it were being executed by the statement's original user. If that user does not have select privileges on a particular table, SQL Access Advisor skips the statements that reference the table. This can cause many statements to be excluded from analysis. If SQL Access Advisor excludes all statements in a workload, the workload is invalid and SQL Access Advisor returns the following message:
QSM-00774, there are no SQL statements to process for task TASK_NAME

To avoid missing critical workload queries, the current database user must have select privileges on the target tables referenced by the materialized views being analyzed. For those tables, the select privilege cannot be obtained through a role.
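
The grants involved can be sketched as follows, with a hypothetical user tuning_user; note that the table grants must be direct, since select privileges acquired through a role are not honored here:

```sql
-- System privilege required to manage or use SQL Access Advisor.
GRANT ADVISOR TO tuning_user;
-- Direct grants on the analyzed tables, not through a role:
GRANT SELECT ON sh.sales TO tuning_user;
GRANT SELECT ON sh.customers TO tuning_user;
```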

Setting up tasks and templates
1. Creating a task
A task is where you define what to analyze and where the results of that analysis are stored. A user can create any number of tasks, each with its own settings; all are based on the same advisor task model and share the same repository. The syntax for creating a task with the dbms_advisor.create_task procedure is as follows:

DBMS_ADVISOR.CREATE_TASK (
advisor_name IN VARCHAR2,
task_id OUT NUMBER,
task_name IN OUT VARCHAR2,
task_desc IN VARCHAR2 := NULL,
template IN VARCHAR2 := NULL,
is_template IN VARCHAR2 := 'FALSE',
how_created IN VARCHAR2 := NULL);



DBMS_ADVISOR.CREATE_TASK (
advisor_name IN VARCHAR2,
task_name IN VARCHAR2,
task_desc IN VARCHAR2 := NULL,
template IN VARCHAR2 := NULL,
is_template IN VARCHAR2 := 'FALSE',
how_created IN VARCHAR2 := NULL);

The following is an example of creating a task:

SQL> variable task_id number;
SQL> variable task_name varchar2(255);
SQL> execute :task_name:='JYTASK';

PL/SQL procedure successfully completed
task_name
---------
JYTASK
SQL> execute dbms_advisor.create_task('SQL Access Advisor',:task_id,:task_name);

PL/SQL procedure successfully completed
task_id
---------
45354
task_name
---------
JYTASK

Using templates
When an ideal configuration has been identified for a task or workload, that configuration can be saved as a template, and future tasks can be created from it. Using templates saves time when performing tuning analysis, and it also lets you design a tuning analysis tailored to a particular business operation.

To create a task from a template, specify the template to use when creating the new task. SQL Access Advisor then copies the data and parameter settings from the template into the newly created task. An existing task can be designated as a template when it is created, or later by setting its template attribute with the dbms_advisor.update_task_attributes procedure; an existing task can also simply be named as the template for a new task.

A workload object can likewise be used as a template for creating new workload objects.

Creating a template
1. Create a template named my_template

SQL> variable template_id number;
SQL> variable template_name varchar2(255);
SQL> execute :template_name := 'my_template';

PL/SQL procedure successfully completed
template_name
---------
my_template
SQL> execute dbms_advisor.create_task('SQL Access Advisor',:template_id,:template_name,  is_template => 'true');

PL/SQL procedure successfully completed
template_id
---------
45357
template_name
---------
my_template

2. Set the template parameters

SQL> -- set naming conventions for recommended indexes/mvs
SQL> execute dbms_advisor.set_task_parameter(:template_name, 'INDEX_NAME_TEMPLATE',  'SH_IDX$$_');

PL/SQL procedure successfully completed
template_name
---------
my_template
SQL> execute dbms_advisor.set_task_parameter(:template_name, 'MVIEW_NAME_TEMPLATE',  'SH_MV$$_');

PL/SQL procedure successfully completed
template_name
---------
my_template
SQL> -- set default tablespace for recommended indexes/mvs
SQL> execute dbms_advisor.set_task_parameter(:template_name, 'DEF_INDEX_TABLESPACE',  'USERS');

PL/SQL procedure successfully completed
template_name
---------
my_template
SQL> execute dbms_advisor.set_task_parameter(:template_name, 'DEF_MVIEW_TABLESPACE',  'USERS');

PL/SQL procedure successfully completed
template_name
---------
my_template

3. Create a task using the template

SQL> variable task_id number;
SQL> variable task_name varchar2(255);
SQL> execute :task_name := 'mytask';

PL/SQL procedure successfully completed
task_name
---------
mytask
SQL> execute dbms_advisor.create_task('SQL Access Advisor', :task_id,:task_name,  template=>'my_template');

PL/SQL procedure successfully completed
task_id
---------
45376
task_name
---------
mytask

The following example creates a task using the predefined template SQLACCESS_WAREHOUSE:

execute dbms_advisor.create_task('SQL Access Advisor', :task_id, :task_name,  template=>'SQLACCESS_WAREHOUSE');

Managing workloads
Managing workloads covers the following areas:
.Workload objects
.Using workloads
.Linking tasks and workloads
.Defining workload contents
.Adding SQL statements to a workload
.Removing SQL statements from a workload
.Changing SQL statements in a workload
.Maintaining workloads
.Removing workloads

Workload objects
Because a workload is stored as a separate workload object, it can easily be shared among multiple advisor tasks. Once a workload object is referenced by an advisor task, it cannot be deleted or modified until all advisor tasks have removed their dependency on the data. A workload reference is removed when its parent advisor task is deleted, or when the reference is manually removed from the task.

SQL Access Advisor performs best when a workload is available. The SQL Access Advisor repository can store multiple workloads, so that the different users of a real-world data warehouse or transaction-processing environment can view them over a long period of time, and the workloads survive database restarts.

Before the actual SQL statements of a workload can be defined, the workload must be created with the dbms_advisor.create_sqlwkld procedure. The workload is then loaded with the appropriate import_sqlwkld procedure. A specific workload can be removed by calling the dbms_advisor.delete_sqlwkld procedure. To remove all workloads for the current user, pass the constant advisor_all, or %, to dbms_advisor.delete_sqlwkld.
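
Removing workloads can be sketched as follows (the workload name comes from the examples below; the wildcard form removes every workload owned by the current user, so use it with care):

```sql
-- Remove one named workload.
EXECUTE dbms_advisor.delete_sqlwkld('myworkload');
-- Remove all workloads owned by the current user.
EXECUTE dbms_advisor.delete_sqlwkld('%');
```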

Using workloads
The dbms_advisor.create_sqlwkld procedure creates a workload, and the workload must exist before any other workload operation, such as importing or updating SQL statements, can be performed. A workload is identified by its workload name, so define a unique name for each one.

The syntax is as follows:

dbms_advisor.create_sqlwkld (
workload_name in out varchar2,
description in varchar2 := null,
template in varchar2 := null,
is_template in varchar2 := 'false');

Create a workload:

SQL> variable workload_name varchar2(255);
SQL> execute :workload_name := 'myworkload';

PL/SQL procedure successfully completed
workload_name
---------
myworkload
SQL> execute dbms_advisor.create_sqlwkld(:workload_name,'this is my first workload');

PL/SQL procedure successfully completed
workload_name
---------
myworkload

Creating a workload from a template
1. Create the variables

SQL> variable template_id number;
SQL> variable template_name varchar2(255);

2. Create a template named my_wk_template

SQL> execute :template_name := 'my_wk_template';

PL/SQL procedure successfully completed
template_name
---------
my_wk_template
SQL> execute dbms_advisor.create_sqlwkld(:template_name, is_template=>'true');

PL/SQL procedure successfully completed
template_name
---------
my_wk_template

3. Set the template parameters. The following sets a filter so that only tables in the insur_changde schema are tuned:

SQL> -- set USERNAME_LIST filter to insur_changde
SQL> execute dbms_advisor.set_sqlwkld_parameter(:template_name, 'USERNAME_LIST',  'insur_changde');

PL/SQL procedure successfully completed
template_name
---------
my_wk_template

4. Create a workload from the template:

SQL> variable workload_name varchar2(255);
SQL> execute :workload_name := 'myworkload';

PL/SQL procedure successfully completed
workload_name
---------
myworkload
SQL> execute dbms_advisor.create_sqlwkld (:workload_name, 'this is my first workload',  'my_wk_template');

PL/SQL procedure successfully completed
workload_name
---------
myworkload

Linking a Task to a Workload
Before the recommendation process can begin, the task must be linked to a workload. A task is linked to a workload by calling the dbms_advisor.add_sqlwkld_ref procedure with the names of the task and the workload. This procedure creates a link between the advisor task and the workload. Once a link has been created between an advisor task and a workload, the workload is protected from removal. The syntax is as follows:

dbms_advisor.add_sqlwkld_ref (task_name in varchar2,workload_name in varchar2);

The following links the task mytask to the workload myworkload:

SQL> execute dbms_advisor.add_sqlwkld_ref('mytask', 'myworkload');

PL/SQL procedure successfully completed

Defining Workload Contents
Once a workload has been created, it must be loaded with information. Ideally, a workload consists of SQL statements (unless it is a hypothetical workload) that are being executed in the database. SQL Access Advisor can obtain a workload from the following sources:
.SQL tuning sets
.Loading a user-defined workload
.Loading a SQL cache workload
.Using a hypothetical workload
.Using a summarized Oracle 9i workload

SQL Tuning Sets
A SQL tuning set is one kind of workload in the workload repository. A SQL tuning set can be imported for use as a SQL Access Advisor workload with the dbms_advisor.import_sqlwkld_sts procedure. The syntax is as follows:

dbms_advisor.import_sqlwkld_sts (workload_name in varchar2,
sts_owner in varchar2,
sts_name in varchar2,
import_mode in varchar2 := 'new',
priority in number := 2,
saved_rows out number,
failed_rows out number);


dbms_advisor.import_sqlwkld_sts (workload_name in varchar2,
sts_name in varchar2,
import_mode in varchar2 := 'new',
priority in number := 2,
saved_rows out number,
failed_rows out number);

After the workload has been collected and the statements filtered, SQL Access Advisor computes usage statistics from the DML statements in the workload. The following example creates a SQL tuning set named MY_STS_WORKLOAD and imports it into the workload my_workload:

SQL> exec dbms_sqltune.create_sqlset(sqlset_name => 'MY_STS_WORKLOAD',description =>  'SQL Access Advisor Test',sqlset_owner => 'INSUR_CHANGDE');

PL/SQL procedure successfully completed

SQL>
SQL> declare
  2    cur dbms_sqltune.sqlset_cursor;
  3  begin
  4    open cur for
  5      select value(p)
  6        from table(dbms_sqltune.select_cursor_cache ('parsing_schema_name=''INSUR_CHANGDE'' and force_matching_signature<>0 and  buffer_gets>1000 and executions>100 and command_type<>2',
  7                                                    null,
  8                                                    null,
  9                                                    null,
 10                                                    null,
 11                                                    1,
 12                                                    null,
 13                                                    'all')) p;
 14    dbms_sqltune.load_sqlset(sqlset_name     => 'MY_STS_WORKLOAD',
 15                             populate_cursor => cur);
 16  end;
 17  /

PL/SQL procedure successfully completed



SQL> variable sqlsetname varchar2(30);
SQL> variable workload_name varchar2(30);
SQL> variable saved_stmts number;
SQL> variable failed_stmts number;
SQL> execute :sqlsetname := 'my_sts_workload';

PL/SQL procedure successfully completed
sqlsetname
---------
my_sts_workload
SQL> execute :workload_name := 'my_workload';

PL/SQL procedure successfully completed
workload_name
---------
my_workload
SQL> execute dbms_advisor.create_sqlwkld (:workload_name);

PL/SQL procedure successfully completed
workload_name
---------
my_workload

SQL> execute dbms_advisor.import_sqlwkld_sts (:workload_name ,:sqlsetname, 'NEW',  1,:saved_stmts, :failed_stmts);

PL/SQL procedure successfully completed
workload_name
---------
my_workload
sqlsetname
---------
MY_STS_WORKLOAD
saved_stmts
---------
1219
failed_stmts
---------
13

Loading a User-Defined Workload
To load a user-defined workload, use the dbms_advisor.import_sqlwkld_user procedure. This procedure collects an application workload from a user-supplied table or view and saves it in the advisor repository. The owner_name and table_name parameters identify the table from which the workload is obtained. There are no restrictions on which schema the workload resides in, the table name, or how many user-defined tables exist. The only requirements are that the format of the user table must match the USER_WORKLOAD table format, and that the user has select access to the workload table or view. The syntax is as follows:

dbms_advisor.import_sqlwkld_user (
workload_name in varchar2,
import_mode in varchar2 := 'NEW',
owner_name in varchar2,
table_name in varchar2,
saved_rows out number,
failed_rows out number);

The following example loads the previously created workload MYWORKLOAD using the user table SH.USER_WORKLOAD. It assumes that the table USER_WORKLOAD has already been loaded with SQL statements and that its structure matches the USER_WORKLOAD table format:

variable saved_stmts number;
variable failed_stmts number;
execute dbms_advisor.import_sqlwkld_user(
'MYWORKLOAD', 'NEW', 'SH', 'USER_WORKLOAD', :saved_stmts, :failed_stmts);

Loading a SQL Cache Workload
A SQL cache workload can be obtained with the dbms_advisor.import_sqlwkld_sqlcache procedure. When this procedure is called, the current contents of the SQL cache are analyzed and loaded into the workload. The dbms_advisor.import_sqlwkld_sqlcache procedure loads a SQL workload from the SQL cache; the syntax is as follows:

dbms_advisor.import_sqlwkld_sqlcache (
workload_name in varchar2,
import_mode in varchar2,
priority in number := 2,
saved_rows out number,
failed_rows out number);

The following example loads the previously created workload MYWORKLOAD from the SQL cache. The workload statements are loaded with a priority of 2:

variable saved_stmts number;
variable failed_stmts number;
execute dbms_advisor.import_sqlwkld_sqlcache('MYWORKLOAD', 'APPEND', 2, :saved_stmts,  :failed_stmts);

SQL Access Advisor can retrieve workload information from the SQL cache. If the collected data was retrieved from a server with the instance parameter cursor_sharing set to similar or force, queries that use literal values will have been converted to statements containing system-generated bind variables. If SQL Access Advisor is to recommend materialized views, the server should have cursor_sharing set to exact, so that materialized views with where clauses can be recommended.
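As a concrete illustration of the advice above, cursor_sharing can be checked and changed before the SQL cache workload is collected. This is a minimal sketch; the alter system call assumes sufficient privileges and that a system-wide change is acceptable:

```sql
-- Check the current setting of cursor_sharing.
show parameter cursor_sharing

-- Set it to exact so that materialized views with WHERE clauses
-- can be recommended from the collected workload.
alter system set cursor_sharing = 'EXACT';
```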

Using a Hypothetical Workload
In many situations, an application workload may not exist. In this case, SQL Access Advisor can examine the current logical schema design and formulate recommendations based on the relationships defined between tables. This type of workload is called a hypothetical workload. SQL Access Advisor can produce an initial set of recommendations that become a solid foundation for tuning the application.

Advantages of using a hypothetical workload:
.Only the schema and table relationships are required
.Useful for modeling what-if scenarios

Disadvantages of using a hypothetical workload:
.Only works if dimensions have been defined
.Provides no information on how the recommended access structures affect DML operations
.Is not necessarily complete

To successfully import a hypothetical workload, the target schemas must contain dimension information. The dbms_advisor.import_sqlwkld_schema procedure can be used; the syntax is as follows:

dbms_advisor.import_sqlwkld_schema (
workload_name in varchar2,
import_mode in varchar2 := 'NEW',
priority in number := 2,
saved_rows out number,
failed_rows out number);

The following example creates a hypothetical workload named SCHEMA_WKLD, sets valid_table_list to SH, and calls dbms_advisor.import_sqlwkld_schema to generate the hypothetical workload:

variable workload_name varchar2(255);
variable saved_stmts number;
variable failed_stmts number;
execute :workload_name := 'SCHEMA_WKLD';
execute dbms_advisor.create_sqlwkld(:workload_name);
execute dbms_advisor.set_sqlwkld_parameter(:workload_name, 'VALID_TABLE_LIST', 'SH');
execute dbms_advisor.import_sqlwkld_schema(:workload_name, 'NEW', 2, :saved_stmts,  :failed_stmts);

When using dbms_advisor.import_sqlwkld_schema, the valid_table_list parameter cannot contain wildcards such as SCO% or SCOTT.EMP%. The only supported wildcard format is SCOTT.%, which specifies all tables in the SCOTT schema.

Using a Summarized Oracle 9i Workload
Workloads can be created from Oracle 9i summary advisor workloads. These workloads can be imported for use by SQL Access Advisor with the dbms_advisor.import_sqlwkld_sumadv procedure. To use this procedure, the Oracle 9i workload ID must be known.

This procedure collects a SQL workload from a summary advisor workload. Its purpose is to help Oracle 9i summary advisor users migrate to SQL Access Advisor. The syntax is as follows:

DBMS_ADVISOR.IMPORT_SQLWKLD_SUMADV (
workload_name IN VARCHAR2,
import_mode IN VARCHAR2 := 'NEW',
priority IN NUMBER := 2,
sumadv_id IN NUMBER,
saved_rows OUT NUMBER,
failed_rows OUT NUMBER);

The following example creates a SQL workload from an Oracle 9i summary advisor workload whose workload_id is 777.
1. Create the variables:

variable workload_name varchar2(255);
variable saved_stmts number;
variable failed_stmts number;

2. Create the workload WKLD_9I:

execute :workload_name := 'WKLD_9I';
execute dbms_advisor.create_sqlwkld(:workload_name);

3. Import the SQL workload from the Oracle 9i summary advisor workload:

execute dbms_advisor.import_sqlwkld_sumadv (:workload_name, 'NEW', 2, 777, :saved_stmts,  :failed_stmts);

SQL Access Advisor Workload Parameters
A SQL workload can be filtered at load time by setting one or more parameters with the dbms_advisor.set_sqlwkld_parameter procedure.

The following example sets SQL workload parameters: SQL_LIMIT is set to 3 and ORDER_LIST to OPTIMIZER_COST. This means that when the workload is imported, the statements are ordered by OPTIMIZER_COST and only the top three statements are kept.

-- Order statements by OPTIMIZER_COST
execute dbms_advisor.set_sqlwkld_parameter ('MYWORKLOAD', 'ORDER_LIST',  'OPTIMIZER_COST');
-- Max number of statements 3
execute dbms_advisor.set_sqlwkld_parameter('MYWORKLOAD', 'SQL_LIMIT', 3);

Adding SQL Statements to a Workload
An alternative to importing a workload is to specify SQL statements manually and add them to the workload with the dbms_advisor.add_sqlwkld_statement procedure. This procedure adds a SQL statement to the specified workload. The syntax is as follows:

dbms_advisor.add_sqlwkld_statement (
workload_name in varchar2,
module in varchar2,
action in varchar2,
cpu_time in number := 0,
elapsed_time in number := 0,
disk_reads in number := 0,
buffer_gets in number := 0,
rows_processed in number := 0,
optimizer_cost in number := 0,
executions in number := 1,
priority in number := 2,
last_execution_date in date := 'SYSDATE',
stat_period in number := 0,
username in varchar2,
sql_text in clob);

The following example adds a single SQL statement to the MYWORKLOAD workload:

variable sql_text varchar2(400);
execute :sql_text := 'select avg(amount_sold) from sales';
execute dbms_advisor.add_sqlwkld_statement ('MYWORKLOAD', 'MONTHLY', 'ROLLUP',  priority=>1, executions=>10,username => 'SH', sql_text => :sql_text);

Removing SQL Statements from a Workload
A SQL statement can be removed from a specified workload with the dbms_advisor.delete_sqlwkld_statement procedure. The sql_id of the statement to be deleted must be specified.

dbms_advisor.delete_sqlwkld_statement (workload_name in varchar2,sql_id in number);

The following example removes the SQL statement with sql_id 10 from the MYWORKLOAD workload:

execute dbms_advisor.delete_sqlwkld_statement('MYWORKLOAD', 10);

A workload cannot be modified or deleted while it is currently referenced by an active task. A task is considered active if it is not in its initial state. The dbms_advisor.reset_task procedure can be used to return a task to its initial state.
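A minimal sketch of the reset described above, assuming mytask is the active task referencing the workload; after the reset, the workload can be modified or deleted again:

```sql
-- Return the referencing task to its initial state so the
-- workload is no longer protected from modification.
execute dbms_advisor.reset_task('mytask');
```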

Changing SQL Statements in a Workload
SQL statements in a workload can be modified with the dbms_advisor.update_sqlwkld_statement procedure. This procedure updates an existing SQL statement in the specified workload. The statement to be updated is identified by its sql_id; the syntax is as follows:

dbms_advisor.update_sqlwkld_statement (
workload_name in varchar2,
sql_id in number,
application in varchar2 := null,
action in varchar2 := null,
priority in number := null,
username in varchar2 := null);


dbms_advisor.update_sqlwkld_statement (
workload_name in varchar2,
search in varchar2,
updated out number,
application in varchar2 := null,
action in varchar2 := null,
priority in number := null,
username in varchar2 := null);

The following example changes the priority of the SQL statement with sql_id 10 to 3.
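The update itself can be sketched as follows, using the first form of the procedure shown above (the workload name MYWORKLOAD is assumed):

```sql
-- Change the priority of the statement with sql_id 10 to 3.
execute dbms_advisor.update_sqlwkld_statement('MYWORKLOAD', 10, priority => 3);
```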

Maintaining Workloads
The following operations can be used to maintain a workload:
.Setting workload attributes
.Resetting workloads
.Removing a link between a workload and a task

Setting Workload Attributes
The dbms_advisor.update_sqlwkld_attributes procedure changes various attributes of a workload object or template. Some of these attributes are descriptive, such as whether the workload is a template or read-only. The syntax is as follows:

dbms_advisor.update_sqlwkld_attributes (
workload_name in varchar2,
new_name in varchar2 := null,
description in varchar2 := null,
read_only in varchar2 := null,
is_template in varchar2 := null,
how_created in varchar2 := null);

The following example makes the MYWORKLOAD workload read-only:

execute dbms_advisor.update_sqlwkld_attributes('myworkload', read_only=> 'true');

Resetting Workloads
The dbms_advisor.reset_sqlwkld procedure returns a workload to its initial state. This removes all journal and log messages and recalculates volatile statistics, while leaving the workload data untouched. The procedure should be executed after any workload adjustment, such as adding or removing SQL statements. The following example resets the MYWORKLOAD workload:

execute dbms_advisor.reset_sqlwkld('myworkload');

Removing a Link Between a Workload and a Task
Before a task or workload can be deleted, any link between the task and the workload must first be removed with the delete_sqlwkld_ref procedure. The following example removes the link between the task MYTASK and the SQL workload MYWORKLOAD:

execute dbms_advisor.delete_sqlwkld_ref('mytask', 'myworkload');

Removing Workloads
When workloads are no longer needed, they can be removed with the dbms_advisor.delete_sqlwkld procedure. All workloads or a specific collection can be deleted, but a workload cannot be deleted while it is still linked to a task.

The following example deletes a specific workload:

dbms_advisor.delete_sqlwkld (workload_name in varchar2);
execute dbms_advisor.delete_sqlwkld('myworkload');
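As mentioned earlier, all workloads for the current user can be removed in one call by passing the advisor_all constant; a minimal sketch:

```sql
-- Remove every workload owned by the current user.
execute dbms_advisor.delete_sqlwkld(dbms_advisor.advisor_all);
```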

Working with Recommendations
Working with recommendations covers the following topics:
.Recommendation options
.Evaluation mode
.Generating recommendations
.Viewing recommendations
.The SQL workload journal
.Stopping the recommendation process
.Marking recommendations
.Modifying recommendations
.Generating SQL scripts
.When recommendations are no longer needed

Recommendation Options
Before recommendations can be generated, the parameters for the task must first be defined with the dbms_advisor.set_task_parameter procedure. If no parameters are defined, the defaults are used. The syntax for setting a parameter with dbms_advisor.set_task_parameter is as follows:

dbms_advisor.set_task_parameter (
task_name in varchar2,
parameter in varchar2,
value in [varchar2 | number]);

In the following example, the storage budget for the task MYTASK is set to 100MB. This indicates that 100MB of additional space may be used for the recommendations. If a zero value were used, no additional space could be allocated. A negative value indicates that the advisor must attempt to trim the current space utilization by the specified amount.

execute dbms_advisor.set_task_parameter('mytask','storage_change', 100000000);

In the following example, the valid_table_list parameter is used to filter out all queries that do not reference the tables sh.sales and sh.customers.

execute dbms_advisor.set_task_parameter('MYTASK', 'VALID_TABLE_LIST', 'SH.SALES,  SH.CUSTOMERS');

Evaluation Mode
When executing a task, SQL Access Advisor has two modes of operation: problem solving and evaluation. By default, SQL Access Advisor attempts to solve access method problems by looking for improvements to index structures, materialized views, and materialized view logs. In evaluation mode, however, SQL Access Advisor comments only on which access structures the supplied workload uses. For example, a problem-solving run might recommend creating a new index, adding a new column to a materialized view log, and so on, whereas an evaluation-only run generates only recommendations such as retaining an index or retaining a materialized view. Adjustments to access methods are not considered in evaluation mode; it is strictly a method of seeing which existing access structures are in place and how they are being used by the workload.

Generating Recommendations
Recommendations are generated by executing the dbms_advisor.execute_task procedure with the task name. After the procedure finishes, the dba_advisor_log table can be checked for the actual execution status and the number of recommendations and actions produced. The recommendations can be queried by task name in the {dba,user}_advisor_recommendations views, and the actions for those recommendations in the {dba,user}_advisor_actions views.

dbms_advisor.execute_task
The dbms_advisor.execute_task procedure performs the SQL Access Advisor analysis or evaluation for the specified task. Executing a task is a synchronous operation, so control is not returned to the user until the operation has completed. During and after execution, the dba_advisor_log table can be checked for the actual execution status.

Executing dbms_advisor.execute_task generates recommendations, where a recommendation comprises one or more actions, such as creating a materialized view log or a materialized view. The syntax is as follows:

dbms_advisor.execute_task (task_name in varchar2);

The following example executes the task MYTASK:

execute dbms_advisor.execute_task('MYTASK');

Viewing Recommendations
Each recommendation generated by SQL Access Advisor can be viewed using the catalog views, such as {dba,user}_advisor_recommendations. However, the simplest way is to use the dbms_advisor.get_task_script procedure or the SQL Access Advisor in EM, a graphical tool that displays the recommendations and provides hyperlinks to quickly see which statements benefit from each recommendation. Each recommendation produced by SQL Access Advisor is linked to the SQL statements it benefits.

The following shows the recommendations (rec_id) generated by SQL Access Advisor, with their rank and total benefit. The rank is a measure of the importance of a recommendation to the queries; the benefit is the total improvement in execution cost (optimizer cost) across all the queries that use the recommendation.

variable workload_name varchar2(255);
variable task_name varchar2(255);
execute :task_name := 'MYTASK';
execute :workload_name := 'MYWORKLOAD';
select rec_id, rank, benefit
from user_advisor_recommendations where task_name = :task_name;
    REC_ID       RANK    BENEFIT
---------- ---------- ----------
         1          2       2754
         2          3       1222
         3          1       5499
         4          4        594

To identify which query benefits from which recommendation, use the dba_* and user_advisor_sqla_wk_stmts views. precost and postcost are the estimated optimizer costs (as shown in explain plan) without and with the recommended access structure changes, respectively. To see each query, execute the following statement:

select sql_id, rec_id, precost, postcost,
(precost-postcost)*100/precost as percent_benefit
from user_advisor_sqla_wk_stmts
where task_name = :task_name and workload_name = :workload_name;
SQL_ID         REC_ID    PRECOST   POSTCOST PERCENT_BENEFIT
---------- ---------- ---------- ---------- ---------------
121                 1       3003        249      91.7082917
122                 2       1404        182       87.037037
123                 3       5503          4      99.9273124
124                 4        730        136       81.369863

Each recommendation consists of one or more actions, all of which must be implemented for the recommendation's benefit to be realized. SQL Access Advisor produces the following types of actions:

.create|drop|retain materialized view
.create|alter|retain materialized view log
.create|drop|retain index
.gather stats

A create action corresponds to a new access structure; a retain recommendation indicates that an existing access structure must be kept; drop recommendations are produced only when the workload_scope parameter is set to full. A gather stats action generates a call to a dbms_stats procedure to gather statistics on a newly generated access structure. Note that multiple recommendations may reference the same action, but in the generated recommendation script each action appears only once.

In the following example, you can see that there are many different actions for this set of recommendations.

select 'action count', count(distinct action_id) cnt
from user_advisor_actions where task_name = :task_name;

'ACTIONCOUNT        CNT
------------ ----------
Action Count         20
-- see the actions for each recommendations
select rec_id, action_id, substr(command,1,30) as command
from user_advisor_actions where task_name = :task_name
order by rec_id, action_id;

REC_ID      ACTION_ID COMMAND
---------- ---------- ------------------------------
         1          5 CREATE MATERIALIZED VIEW LOG
         1          6 ALTER MATERIALIZED VIEW LOG
         1          7 CREATE MATERIALIZED VIEW LOG
         1          8 ALTER MATERIALIZED VIEW LOG
         1          9 CREATE MATERIALIZED VIEW LOG
         1         10 ALTER MATERIALIZED VIEW LOG
         1         11 CREATE MATERIALIZED VIEW
         1         12 GATHER TABLE STATISTICS
         1         19 CREATE INDEX
         1         20 GATHER INDEX STATISTICS
         2          5 CREATE MATERIALIZED VIEW LOG
         2          6 ALTER MATERIALIZED VIEW LOG
         2          9 CREATE MATERIALIZED VIEW LOG
         ...

Each action has several attributes that describe the properties of the access structure. The name and tablespace of each access structure are stored in the attr1 and attr2 columns of the dba_advisor_actions view, respectively. The space occupied by each new access structure is in the num_attr1 column. All other attributes differ for each action.

The following PL/SQL procedure can be used to print some of the attributes of the recommendations:

create or replace procedure show_recm (in_task_name in varchar2) is
cursor curs is
 select distinct action_id, command, attr1, attr2, attr3, attr4
from user_advisor_actions
where task_name = in_task_name
order by action_id;

v_action number;
v_command varchar2(32);
v_attr1 varchar2(4000);
v_attr2 varchar2(4000);
v_attr3 varchar2(4000);
v_attr4 varchar2(4000);
v_attr5 varchar2(4000);
begin
 open curs;
 dbms_output.put_line('=========================================');
 dbms_output.put_line('Task_name = ' || in_task_name);
 loop
    fetch curs into
      v_action, v_command, v_attr1, v_attr2, v_attr3, v_attr4 ;
  exit when curs%notfound;
  dbms_output.put_line('Action ID: ' || v_action);
  dbms_output.put_line('Command : ' || v_command);
  dbms_output.put_line('Attr1 (name) : ' || SUBSTR(v_attr1,1,30));
  dbms_output.put_line('Attr2 (tablespace): ' || SUBSTR(v_attr2,1,30));
  dbms_output.put_line('Attr3 : ' || SUBSTR(v_attr3,1,30));
  dbms_output.put_line('Attr4 : ' || v_attr4);
  dbms_output.put_line('Attr5 : ' || v_attr5);
  dbms_output.put_line('----------------------------------------');
  end loop;
  close curs;
  dbms_output.put_line('=========end recommendations============');
end show_recm;
/
-- see what the actions are using sample procedure
set serveroutput on size 99999
execute show_recm(:task_name);
A fragment of a sample output from this procedure is as follows:
Task_name = MYTASK
Action ID: 1
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "SH"."CUSTOMERS"
Attr2 (tablespace):
Attr3 : ROWID, SEQUENCE
Attr4 : INCLUDING NEW VALUES
Attr5 :
----------------------------------------
..
----------------------------------------
Action ID: 15
Command : CREATE MATERIALIZED VIEW
Attr1 (name) : "SH"."SH_MV$$_0004"
Attr2 (tablespace): "SH_MVIEWS"
Attr3 : REFRESH FAST WITH ROWID
Attr4 : ENABLE QUERY REWRITE
Attr5 :
----------------------------------------
..
----------------------------------------
Action ID: 19
Command : CREATE INDEX
Attr1 (name) : "SH"."SH_IDX$$_0013"
Attr2 (tablespace): "SH_INDEXES"
Attr3 : "SH"."SH_MV$$_0002"
Attr4 : BITMAP
Attr5 :

The SQL Workload Journal
While the analysis is being performed (execute_task), SQL Access Advisor saves useful information about the analysis to a journal. The journal can be viewed in the user_advisor_journal view. The amount of information written depends on the setting of the journaling task parameter.

When a workload is imported, various messages are written to the SQL workload journal. They can be viewed in the user_advisor_sqlw_journal view. For example, if a particular SQL statement references an invalid table, is missing statistics, or raises a privilege error, that information is recorded in the journal. The amount of information written is controlled by the journaling parameter.

To turn off journaling, execute the following statement:

execute dbms_advisor.set_task_parameter('MYTASK', 'JOURNALING', 0);

To record informational messages, execute the following statement:

execute dbms_advisor.set_task_parameter('MYTASK', 'JOURNALING', 4);

To record only fatal messages, execute the following statement:

execute dbms_advisor.set_sqlwkld_parameter('MYWORKLOAD', 'JOURNALING', 1);

Stopping the Recommendation Process
If SQL Access Advisor takes too long to generate recommendations via dbms_advisor.execute_task, the processing can be stopped by executing the dbms_advisor.cancel_task procedure with the task name. If dbms_advisor.cancel_task is executed, no recommendations are produced. Alternatively, the dbms_advisor.interrupt_task procedure can be used to terminate the recommendation process.

The dbms_advisor.interrupt_task procedure causes an advisor operation to terminate as if it had completed normally, so the user can see any recommendations that had been formatted up to the point of the interruption.

An interrupted task cannot be restarted. The syntax is as follows:

dbms_advisor.interrupt_task (task_name in varchar2);

Interrupting the task MY_TASK:

execute dbms_advisor.interrupt_task ('my_task');

The dbms_advisor.cancel_task procedure causes a currently executing operation to terminate. An advisor operation may take a few seconds to respond to the call. Because all advisor task procedures are synchronous, a separate database session must be used to cancel an operation.

A cancel command effectively restores the task to the state it was in before the cancelled operation started; a cancelled task or data object therefore does not need to be reset.

dbms_advisor.cancel_task (task_name in varchar2);

Cancelling the task MYTASK:

execute dbms_advisor.cancel_task('MYTASK');

Marking Recommendations
By default, all SQL Access Advisor recommendations are ready to be implemented; however, the user can choose to skip or accept selected recommendations with the dbms_advisor.mark_recommendation procedure. dbms_advisor.mark_recommendation allows the user to annotate a recommendation with a reject or ignore setting, which causes dbms_advisor.get_task_script to skip it when the implementation script is generated. The syntax is as follows:

dbms_advisor.mark_recommendation (
task_name in varchar2,
id in number,
action in varchar2);

The following example marks recommendation ID 2 as reject. This recommendation and any dependent recommendations will not appear in the script.

execute dbms_advisor.mark_recommendation('MYTASK', 2, 'REJECT');

Modifying Recommendations
During the analysis operation, SQL Access Advisor chooses names for the new objects it recommends, such as indexes and materialized views. The names it chooses are not necessarily appropriate, so with the dbms_advisor.update_rec_attributes procedure the owner, name, and tablespace of a new object can be set manually. For existing database objects referenced by a recommendation, the owner and name cannot be changed. The syntax is as follows:

dbms_advisor.update_rec_attributes (
task_name in varchar2,
rec_id in number,
action_id in number,
attribute_name in varchar2,
value in varchar2);

The attribute_name parameter can take the following values:
.owner: specifies the owner name of the recommended object
.name: specifies the name of the recommended object
.tablespace: specifies the tablespace of the recommended object

The following example changes the tablespace attribute of recommendation ID 1, action ID 1, to SH_MVIEWS:

execute dbms_advisor.update_rec_attributes('MYTASK', 1, 1,'TABLESPACE', 'SH_MVIEWS');

Generating SQL Scripts
An alternative to querying the metadata to see the recommendations is to create a script of the SQL statements with the dbms_advisor.get_task_script procedure. The resulting script is an executable SQL file that may contain drop, create, and alter statements. For new objects, the names of the materialized views, materialized view logs, and indexes are auto-generated using user-specified name templates. The SQL script should be reviewed before any attempt to execute it.

Task parameters control the naming conventions (mview_name_template and index_name_template), the owner of these objects (def_index_owner and def_mview_owner), and their tablespaces (def_mview_tablespace and def_index_tablespace).
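A sketch of setting the owner and tablespace parameters listed above before the script is generated; the values SH, SH_MVIEWS, and SH_INDEXES are illustrative:

```sql
-- Direct new materialized views and indexes to a chosen owner and
-- tablespaces before generating the recommendation script.
execute dbms_advisor.set_task_parameter('MYTASK', 'DEF_MVIEW_OWNER', 'SH');
execute dbms_advisor.set_task_parameter('MYTASK', 'DEF_MVIEW_TABLESPACE', 'SH_MVIEWS');
execute dbms_advisor.set_task_parameter('MYTASK', 'DEF_INDEX_TABLESPACE', 'SH_INDEXES');
```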

The following example shows how to generate the CLOB containing the recommendation script:

execute dbms_advisor.create_file(dbms_advisor.get_task_script ('MYTASK'),'ADVISOR_RESULTS', 'advscript.sql');

To save the script to a file, a directory path must be supplied so that dbms_advisor.create_file knows where to store the script. In addition, read and write privileges on this directory are required. The following example shows how to save an advisor script CLOB to a file:


SQL> create directory advisor_results as '/bak';

Directory created
SQL> grant read,write on directory advisor_results to public;

Grant succeeded

SQL> execute dbms_advisor.create_file(dbms_advisor.get_task_script ('JYTASK'),'ADVISOR_RESULTS', 'jy_advscript.sql');

PL/SQL procedure successfully completed

The following is a fragment of a script generated by this procedure. The script includes calls to gather statistics on the recommended access structures and marks the recommendations as implemented at the end.

Rem Access Advisor V10.1.0.0.0 - Production
Rem
Rem Username: SH
Rem Task: MYTASK
Rem Execution date: 15/04/2005 11:35
Rem
set feedback 1
set linesize 80
set trimspool on
set tab off
set pagesize 60
whenever sqlerror CONTINUE
create materialized view log on "sh"."products"
with rowid, sequence("prod_id","prod_subcategory")
including new values;
alter materialized view log force on "sh"."products"
add rowid, sequence("prod_id","prod_subcategory")
including new values;
..
create materialized view "sh"."mv$$_00510002"
refresh fast with rowid
enable query rewrite
as select sh.customers.cust_state_province c1, count(*) m1 from
sh.customers where (sh.customers.cust_state_province = 'ca') group
by sh.customers.cust_state_province;
begin
dbms_stats.gather_table_stats('"sh"', '"mv$$_00510002"', null,
dbms_stats.auto_sample_size);
end;
/

When Recommendations Are No Longer Needed
The dbms_advisor.reset_task procedure resets a task to its initial starting state. This removes all recommendations and any intermediate data from the task. The actual task status is set to initial. The syntax is as follows:

dbms_advisor.reset_task (task_name in varchar2);

The following example resets the task MYTASK:

execute dbms_advisor.reset_task('mytask');

Performing a Quick Tune
To tune a single SQL statement, the dbms_advisor.quick_tune procedure accepts a task name and a SQL statement as arguments. It creates a task and a workload and executes the task. The results obtained with dbms_advisor.quick_tune are no different; they are exactly the same as those obtained with dbms_advisor.execute_task, but when only one SQL statement is to be tuned, quick tune is the easier method. The syntax is as follows:

dbms_advisor.quick_tune (
advisor_name in varchar2,
task_name in varchar2,
attr1 in clob,
attr2 in varchar2 := null,
attr3 in number := null,
task_or_template in varchar2 := null);

The following is an example of quick-tuning a single SQL statement:

SQL> variable task_name varchar2(255);
SQL> variable sql_stmt varchar2(4000);
SQL> execute :sql_stmt := 'select count(1) cs  from bs_hospital a, bs_biztype g,  mt_biz_fin b, bs_center h, bs_insured j  where a.hospital_id = b.hospital_id    and  b.center_id = g.center_id    and b.biz_type = g.biz_type    and b.center_id =  h.center_id(+)    and b.valid_flag = ''1''    and b.indi_id = j.indi_id    and  (j.urban_type = ''all'' or ''all'' = ''all'')    and (b.district_code = b.center_id and  b.center_id in (430721) or    b.district_code <> b.center_id and b.center_id in (430721)  or b.district_code <> b.center_id and b.district_code in (430721)) and g.biz_stat =  ''2'' and b.biz_type <> ''52'' and b.biz_type <> ''17'' and b.pers_type_detail in (2)  and nvl(b.finish_flag, ''0'') = ''1'' and b.end_date between  to_date(''2014-01-01  00:00:00'', ''yyyy-mm-dd hh24:mi:ss'') and  to_date(''2014-12-31 23:59:59'', ''yyyy-mm- dd hh24:mi:ss'')';

PL/SQL procedure successfully completed
sql_stmt
---------
select count(1) cs  from bs_hospital a, bs_biztype g, mt_biz_fin b, bs_center h,  bs_insured j  where a.hospital_id = b.hospital_id    and b.center_id = g.center_id     and b.biz_type = g.biz_type    and b.center_id = h.center_id(+)    and b.valid_flag =  '1'    and b.indi_id = j.indi_id    and (j.urban_type = 'all' or 'all' = 'all')    and  (b.district_code = b.center_id and b.center_id in (430721) or    b.district_code <>  b.center_id and b.center_id in (430721) or b.district_code <> b.center_id and  b.district_code in (430721)) and g.biz_stat = '2' and b.biz_type <> '52' and b.biz_type  <> '17' and b.pers_type_detail in (2) and nvl(b.finish_flag, '0') = '1' and b.end_date  between  to_date('2014-01-01 00:00:00', 'yyyy-mm-dd hh24:mi:ss') and  to_date('2014-12- 31 23:59:59', 'yyyy-mm-dd hh24:mi:ss')
SQL> execute :task_name := 'my_quicktune_task';

PL/SQL procedure successfully completed
task_name
---------
my_quicktune_task
SQL> execute dbms_advisor.quick_tune(dbms_advisor.sqlaccess_advisor,:task_name,  :sql_stmt);

PL/SQL procedure successfully completed
task_name
---------
my_quicktune_task
sql_stmt
---------
select count(1) cs  from bs_hospital a, bs_biztype g, mt_biz_fin b, bs_center h,  bs_insured j  where a.hospital_id = b.hospital_id    and b.center_id = g.center_id     and b.biz_type = g.biz_type    and b.center_id = h.center_id(+)    and b.valid_flag =  '1'    and b.indi_id = j.indi_id    and (j.urban_type = 'all' or 'all' = 'all')    and  (b.district_code = b.center_id and b.center_id in (430721) or    b.district_code <>  b.center_id and b.center_id in (430721) or b.district_code <> b.center_id and  b.district_code in (430721)) and g.biz_stat = '2' and b.biz_type <> '52' and b.biz_type  <> '17' and b.pers_type_detail in (2) and nvl(b.finish_flag, '0') = '1' and b.end_date  between  to_date('2014-01-01 00:00:00', 'yyyy-mm-dd hh24:mi:ss') and  to_date('2014-12- 31 23:59:59', 'yyyy-mm-dd hh24:mi:ss')


SQL> create or replace procedure show_recm (in_task_name in varchar2) is
  2  cursor curs is
  3   select distinct action_id, command, attr1, attr2, attr3, attr4
  4  from user_advisor_actions
  5  where task_name = in_task_name
  6  order by action_id;
  7
  8  v_action number;
  9  v_command varchar2(32);
 10  v_attr1 varchar2(4000);
 11  v_attr2 varchar2(4000);
 12  v_attr3 varchar2(4000);
 13  v_attr4 varchar2(4000);
 14  v_attr5 varchar2(4000);
 15  begin
 16   open curs;
 17   dbms_output.put_line('=========================================');
 18   dbms_output.put_line('Task_name = ' || in_task_name);
 19   loop
 20      fetch curs into
 21        v_action, v_command, v_attr1, v_attr2, v_attr3, v_attr4 ;
 22    exit when curs%notfound;
 23    dbms_output.put_line('Action ID: ' || v_action);
 24    dbms_output.put_line('Command : ' || v_command);
 25    dbms_output.put_line('Attr1 (name) : ' || SUBSTR(v_attr1,1,30));
 26    dbms_output.put_line('Attr2 (tablespace): ' || SUBSTR(v_attr2,1,30));
 27    dbms_output.put_line('Attr3 : ' || SUBSTR(v_attr3,1,30));
 28    dbms_output.put_line('Attr4 : ' || v_attr4);
 29    dbms_output.put_line('Attr5 : ' || v_attr5);
 30    dbms_output.put_line('----------------------------------------');
 31    end loop;
 32    close curs;
 33    dbms_output.put_line('=========end recommendations============');
 34  end show_recm;
 35  /

Procedure created

SQL> set serveroutput on size 99999
SQL> execute show_recm('my_quicktune_task');

=========================================
Task_name = my_quicktune_task
Action ID: 1
Command : CREATE MATERIALIZED VIEW
Attr1 (name) : "INSUR_CHANGDE"."MV$$_B1D10000
Attr2 (tablespace):
Attr3 : REFRESH FORCE WITH ROWID
Attr4 : ENABLE QUERY REWRITE
Attr5 :
----------------------------------------
Action ID: 2
Command : GATHER TABLE STATISTICS
Attr1 (name) : "INSUR_CHANGDE"."MV$$_B1D10000
Attr2 (tablespace):
Attr3 : -1
Attr4 :
Attr5 :
----------------------------------------
=========end recommendations============

PL/SQL procedure successfully completed

Managing Tasks
Every time recommendations are generated, tasks are created, and unless some maintenance is performed on these tasks, they will grow over time and occupy storage space. A task may also need to be kept and protected from accidental removal. Several management operations can therefore be performed on tasks:
.Updating task attributes
.Deleting tasks
.Setting the days_to_expire parameter

Updating Task Attributes
Using the dbms_advisor.update_task_attributes procedure, you can:
.Change the name of a task
.Give a task a description
.Set a task to be read-only so it cannot be changed
.Make a task a template upon which other tasks can be defined
.Change various attributes of a task or a task template

The syntax is as follows:

dbms_advisor.update_task_attributes (
task_name in varchar2,
new_name in varchar2 := null,
description in varchar2 := null,
read_only in varchar2 := null,
is_template in varchar2 := null,
how_created in varchar2 := null);

The following example renames the task MYTASK to TUNING1:

execute dbms_advisor.update_task_attributes('MYTASK', 'TUNING1');

The following example marks the task TUNING1 as read-only:

execute dbms_advisor.update_task_attributes('TUNING1', read_only => 'TRUE');

The following example marks the task TUNING1 as a template:

execute dbms_advisor.update_task_attributes('TUNING1', is_template=>'TRUE');

Deleting Tasks
The dbms_advisor.delete_task procedure deletes advisor tasks from the repository. The syntax is as follows:

dbms_advisor.delete_task(task_name in varchar2);

The following example deletes the task MYTASK:

execute dbms_advisor.delete_task('MYTASK');

Setting the DAYS_TO_EXPIRE Parameter
When a task or workload is created, the parameter days_to_expire is set to 30. The parameter indicates the number of days from its creation until the task or object is automatically removed by the system. To save a task or workload indefinitely, the days_to_expire parameter should be set to advisor_unlimited.
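A minimal sketch, assuming the task is named MYTASK and using the set_task_parameter procedure shown earlier:

```sql
-- Keep the task indefinitely by disabling automatic expiry.
execute dbms_advisor.set_task_parameter('MYTASK', 'DAYS_TO_EXPIRE', 'ADVISOR_UNLIMITED');
```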

An Example of SQL Tuning with SQL Access Advisor
Running SQL Access Advisor with a workload loaded from a SQL tuning set.
1. Create the task:

SQL> variable task_id number;
SQL> variable task_name varchar2(255);
SQL> execute :task_name:='JYTASK';

PL/SQL procedure successfully completed
task_name
---------
JYTASK
SQL> execute dbms_advisor.create_task('SQL Access Advisor',:task_id,:task_name);

PL/SQL procedure successfully completed
task_id
---------
45354
task_name
---------
JYTASK

2. Create the workload:

SQL> variable workload_name varchar2(255);
SQL> execute :workload_name := 'myworkload';

PL/SQL procedure successfully completed
workload_name
---------
myworkload
SQL> execute dbms_advisor.create_sqlwkld(:workload_name,'this is my first workload');

PL/SQL procedure successfully completed
workload_name
---------
myworkload

3. Link the task to the workload:

SQL> execute dbms_advisor.add_sqlwkld_ref('JYTASK', 'myworkload');

PL/SQL procedure successfully completed

4. Load the workload from a SQL tuning set:

SQL> exec dbms_sqltune.create_sqlset(sqlset_name => 'MY_STS_WORKLOAD',description =>  'SQL Access Advisor Test',sqlset_owner => 'INSUR_CHANGDE');

PL/SQL procedure successfully completed

SQL>
SQL> declare
  2    cur dbms_sqltune.sqlset_cursor;
  3  begin
  4    open cur for
  5      select value(p)
  6        from table(dbms_sqltune.select_cursor_cache ('parsing_schema_name=''INSUR_CHANGDE'' and force_matching_signature<>0 and  buffer_gets>1000 and executions>100 and command_type<>2',
  7                                                    null,
  8                                                    null,
  9                                                    null,
 10                                                    null,
 11                                                    1,
 12                                                    null,
 13                                                    'all')) p;
 14    dbms_sqltune.load_sqlset(sqlset_name     => 'MY_STS_WORKLOAD',
 15                             populate_cursor => cur);
 16  end;
 17  /

PL/SQL procedure successfully completed



SQL> variable sqlsetname varchar2(30);
SQL> variable workload_name varchar2(30);
SQL> variable saved_stmts number;
SQL> variable failed_stmts number;
SQL> execute :sqlsetname := 'MY_STS_WORKLOAD';

PL/SQL procedure successfully completed
sqlsetname
---------
MY_STS_WORKLOAD
SQL> execute :workload_name := 'myworkload';

PL/SQL procedure successfully completed
workload_name
---------
myworkload

SQL> execute dbms_advisor.import_sqlwkld_sts (:workload_name ,:sqlsetname,  'NEW',1,:saved_stmts, :failed_stmts);

PL/SQL procedure successfully completed
workload_name
---------
myworkload
sqlsetname
---------
MY_STS_WORKLOAD
saved_stmts
---------
1219
failed_stmts
---------
13

5. Execute the task:

SQL> exec dbms_advisor.execute_task('JYTASK');

PL/SQL procedure successfully completed

6. View the recommendations:

SQL> select rec_id, rank, benefit from user_advisor_recommendations where task_name  ='JYTASK';

    REC_ID       RANK    BENEFIT
---------- ---------- ----------
         1          1          0
         2          2     511856
         3          3          0
         4          4          0
         5          5          0
         6          6          0
         7          7          0
         8          8      17712
         9          9          0
        10         10     171589
        11         11     352280
        12         12          0
        13         13   35201233
        14         14     249249
        15         15          0
        16         16          0
        17         17          0
        18         18          0
        19         19      67650
        20         20      24901
        21         21          0
        22         22      45756
        23         23          0
        24         24          0
        25         25          0
        26         26      44170




SQL> select sql_id, rec_id, precost, postcost,
  2  decode((precost-postcost),0,0,round((precost-postcost)*100/precost,2)) as  percent_benefit
  3  from user_advisor_sqla_wk_stmts
  4  where task_name ='JYTASK' and workload_name ='myworkload';

    SQL_ID     REC_ID    PRECOST   POSTCOST PERCENT_BENEFIT
---------- ---------- ---------- ---------- ---------------
      2450          5       4440       4440               0
      2451         13      19012      13580           28.57
      2452          1       2908       2908               0
      2453          3       4907       4907               0
      2454          9       2665       2665               0
      2455          4       7180       7180               0
      2456         12       4230       4230               0
      2457          9       1521       1521               0
      2458         85      14322        434           96.97
      2459         27       9809       9809               0
      2460          2      31086       8478           72.73
      2461          5       4050       4050               0
      2462          6      15432      15432               0
      2463          1        935        935               0
      2464        307      14490        420            97.1
      2465         20       2900       2175              25
      2466          5       1401       1401               0
      2467          7       1980       1980               0
      2468        128     224180      67254              70
      2469         23       8098       8098               0
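
The percent_benefit column above is nothing more than the relative cost reduction that the DECODE expression in the query computes. A minimal Python sketch of the same arithmetic, checked against a few rows from the output above:

```python
def percent_benefit(precost: int, postcost: int) -> float:
    """Relative cost reduction, as computed by the query's DECODE expression:
    DECODE((precost-postcost), 0, 0, ROUND((precost-postcost)*100/precost, 2))."""
    if precost - postcost == 0:
        return 0.0
    return round((precost - postcost) * 100 / precost, 2)

# Values taken from the rows in the transcript above:
print(percent_benefit(19012, 13580))   # rec_id 13 -> 28.57
print(percent_benefit(31086, 8478))    # rec_id 2  -> 72.73
print(percent_benefit(224180, 67254))  # rec_id 128 -> 70.0
```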

The following PL/SQL procedure can be used to print some of the attributes of each recommendation.

SQL> create or replace procedure show_recm (in_task_name in varchar2) is
  2  cursor curs is
  3   select distinct action_id, command, attr1, attr2, attr3, attr4, attr5
  4  from user_advisor_actions
  5  where task_name = in_task_name
  6  order by action_id;
  7
  8  v_action number;
  9  v_command varchar2(32);
 10  v_attr1 varchar2(4000);
 11  v_attr2 varchar2(4000);
 12  v_attr3 varchar2(4000);
 13  v_attr4 varchar2(4000);
 14  v_attr5 varchar2(4000);
 15  begin
 16   open curs;
 17   dbms_output.put_line('=========================================');
 18   dbms_output.put_line('Task_name = ' || in_task_name);
 19   loop
 20      fetch curs into
 21        v_action, v_command, v_attr1, v_attr2, v_attr3, v_attr4, v_attr5;
 22    exit when curs%notfound;
 23    dbms_output.put_line('Action ID: ' || v_action);
 24    dbms_output.put_line('Command : ' || v_command);
 25    dbms_output.put_line('Attr1 (name) : ' || SUBSTR(v_attr1,1,30));
 26    dbms_output.put_line('Attr2 (tablespace): ' || SUBSTR(v_attr2,1,30));
 27    dbms_output.put_line('Attr3 : ' || SUBSTR(v_attr3,1,30));
 28    dbms_output.put_line('Attr4 : ' || v_attr4);
 29    dbms_output.put_line('Attr5 : ' || v_attr5);
 30    dbms_output.put_line('----------------------------------------');
 31    end loop;
 32    close curs;
 33    dbms_output.put_line('=========end recommendations============');
 34  end show_recm;
 35  /

Procedure created


SQL> set serveroutput on size 9999999
SQL> execute show_recm('JYTASK');

=========================================
Task_name = JYTASK
Action ID: 1
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."BS_SPECIAL_DI
Attr2 (tablespace):
Attr3 : ROWID
Attr4 :
Attr5 :
----------------------------------------
Action ID: 3
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."FC_BIZ_POLICY
Attr2 (tablespace):
Attr3 : ROWID, SEQUENCE
Attr4 :  INCLUDING NEW VALUES
Attr5 :
----------------------------------------
Action ID: 5
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."BS_TREAT_TYPE
Attr2 (tablespace):
Attr3 : ROWID, PRIMARY KEY
Attr4 :
Attr5 :
----------------------------------------
Action ID: 7
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."BS_PERSON_TYP
Attr2 (tablespace):
Attr3 : PRIMARY KEY
Attr4 :
Attr5 :
----------------------------------------
Action ID: 9
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."PFS_INSUR_DET
Attr2 (tablespace):
Attr3 : PRIMARY KEY
Attr4 :
Attr5 :
----------------------------------------
Action ID: 11
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."BS_SPECIAL_DI
Attr2 (tablespace):
Attr3 : PRIMARY KEY
Attr4 :
Attr5 :
----------------------------------------
Action ID: 13
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."BS_CORP_STAC"
Attr2 (tablespace):
Attr3 : PRIMARY KEY
Attr4 :
Attr5 :
----------------------------------------
Action ID: 15
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."BS_CULTURE_ST
Attr2 (tablespace):
Attr3 : PRIMARY KEY
Attr4 :
Attr5 :
----------------------------------------
Action ID: 17
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."BS_CALLING"
Attr2 (tablespace):
Attr3 : ROWID
Attr4 :
Attr5 :
----------------------------------------
Action ID: 19
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."BS_OCCUPATION
Attr2 (tablespace):
Attr3 : PRIMARY KEY
Attr4 :
Attr5 :
----------------------------------------
Action ID: 21
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."BS_BIZ_LICE"
Attr2 (tablespace):
Attr3 : PRIMARY KEY
Attr4 :
Attr5 :
----------------------------------------
Action ID: 23
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."BS_CORP_TYPE"
Attr2 (tablespace):
Attr3 : PRIMARY KEY
Attr4 :
Attr5 :
----------------------------------------
Action ID: 25
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."BS_SALARYSYS"
Attr2 (tablespace):
Attr3 : PRIMARY KEY
Attr4 :
Attr5 :
----------------------------------------
Action ID: 27
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."BS_JOIN_STA"
Attr2 (tablespace):
Attr3 : PRIMARY KEY
Attr4 :
Attr5 :
----------------------------------------
Action ID: 29
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."BS_CORP_REVEN
Attr2 (tablespace):
Attr3 : PRIMARY KEY
Attr4 :
Attr5 :
----------------------------------------
Action ID: 31
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."BS_MEDI_ITEM_
Attr2 (tablespace):
Attr3 : PRIMARY KEY
Attr4 :
Attr5 :
----------------------------------------
Action ID: 33
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."BS_HIRED_TYPE
Attr2 (tablespace):
Attr3 : PRIMARY KEY
Attr4 :
Attr5 :
----------------------------------------
Action ID: 35
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."BS_CORP_CHARG
Attr2 (tablespace):
Attr3 : PRIMARY KEY
Attr4 :
Attr5 :
----------------------------------------
Action ID: 37
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."BS_ECON_TYPE"
Attr2 (tablespace):
Attr3 : PRIMARY KEY
Attr4 :
Attr5 :
----------------------------------------
Action ID: 39
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."CG_INDI_TYPE"
Attr2 (tablespace):
Attr3 : PRIMARY KEY
Attr4 :
Attr5 :
----------------------------------------
Action ID: 41
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."BS_CORP_FINAL
Attr2 (tablespace):
Attr3 : ROWID
Attr4 :
Attr5 :
----------------------------------------
Action ID: 43
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."BS_LOWINSR"
Attr2 (tablespace):
Attr3 : ROWID
Attr4 :
Attr5 :
----------------------------------------
Action ID: 45
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."BS_SUB_CONN"
Attr2 (tablespace):
Attr3 : PRIMARY KEY
Attr4 :
Attr5 :
----------------------------------------
Action ID: 47
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."BS_OFFICIAL_I
Attr2 (tablespace):
Attr3 : ROWID
Attr4 :
Attr5 :
----------------------------------------
Action ID: 49
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."BS_OFFICIAL"
Attr2 (tablespace):
Attr3 : PRIMARY KEY
Attr4 :
Attr5 :
----------------------------------------
Action ID: 51
Command : CREATE MATERIALIZED VIEW LOG
Attr1 (name) : "INSUR_CHANGDE"."BS_CHARGE_COD
Attr2 (tablespace):
Attr3 : PRIMARY KEY
Attr4 :
...omitted...


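The fixed "key : value" layout that show_recm prints is easy to post-process outside the database. A minimal parsing sketch (the sample text is an abbreviated copy of the output above):

```python
def parse_recm(text: str) -> list:
    """Split show_recm output into one dict per 'Action ID' block."""
    actions, current = [], None
    for line in text.splitlines():
        if line.startswith('Action ID:'):
            # Start a new recommendation block.
            current = {'action_id': int(line.split(':', 1)[1])}
            actions.append(current)
        elif current is not None and ':' in line:
            # Attribute line inside a block, e.g. "Attr3 : ROWID".
            key, _, val = line.partition(':')
            current[key.strip()] = val.strip()
    return actions

sample = """Action ID: 1
Command : CREATE MATERIALIZED VIEW LOG
Attr3 : ROWID
----------------------------------------
Action ID: 3
Command : CREATE MATERIALIZED VIEW LOG
Attr3 : ROWID, SEQUENCE
"""
recs = parse_recm(sample)
print(len(recs))          # 2
print(recs[1]['Attr3'])   # ROWID, SEQUENCE
```
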
SQL> create directory advisor_results as '/bak';

Directory created
SQL> grant read,write on directory advisor_results to public;

Grant succeeded

SQL> execute dbms_advisor.create_file(dbms_advisor.get_task_script ('JYTASK'),'ADVISOR_RESULTS', 'jy_advscript.sql');

PL/SQL procedure successfully completed

[IBMP740-1:root:/bak]# more jy_advscript.sql
Rem  SQL Access Advisor: Version 10.2.0.4.0 - Production
Rem
Rem  Username:        INSUR_CHANGDE
Rem  Task:            JYTASK
Rem  Execution date:  06/09/2016 09:48
Rem

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_SPECIAL_DISEASE_RANGE"
    WITH ROWID ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."FC_BIZ_POLICY"
    WITH ROWID, SEQUENCE ("POLICY_CODE","POLICY_VALUE","CENTER_OR_HOSP","VALID_FLAG","CENTER_ID")
    INCLUDING NEW VALUES;

...omitted...
CREATE MATERIALIZED VIEW "INSUR_CHANGDE"."MV$$_B12A0069"
    REFRESH FORCE WITH ROWID
    ENABLE QUERY REWRITE
    AS SELECT A.SERIAL_MATCH, A.HOSPITAL_ID, A.MATCH_TYPE, A.ITEM_CODE, A.ITEM_NAME,
       A.HOSP_CODE, A.HOSP_NAME, A.MODEL AS MODEL_NAME, A.HOSP_MODEL, TO_CHAR (A.EDIT_DATE,
       'yyyy-mm-dd') AS EDIT_DATE, A.EDIT_STAFF, A.EDIT_MAN, A.VALID_FLAG, NVL (B.SP_FLAG,
       '0') AS SP_FLAG, TO_CHAR(A.EFFECT_DATE, 'yyyy-mm-dd') AS EFFECT_DATE,
       TO_CHAR(A.EXPIRE_DATE, 'yyyy-mm-dd') AS EXPIRE_DATE, A.AUDIT_FLAG, TO_CHAR (A.AUDIT_DATE,
       'yyyy-mm-dd') AS AUDIT_DATE, A.AUDIT_STAFF, A.AUDIT_MAN, B.CLINIC_FLAG,
       NVL(B.DEFRAY_TYPE_CJ,B.DEFRAY_TYPE) AS DEFRAY_TYPE_CJ, B.STAPLE_FLAG,
       B.SELF_SCALE, NVL(A.HOSP_PRICE, 0) AS PRICE, B.STAT_TYPE AS STAT_TYPE,
       DECODE(SUBSTR('110',1,1),'5', DECODE(B.BO_FLAG,1,DECODE(SUBSTR (B.DEFRAY_TYPE,1,3),
       'Bed', B.DEFRAY_TYPE, 'A000_00'),'A000_100'),'4', DECODE(B.WL_FLAG,1,DECODE (B.DEFRAY_TYPE,
       'A001','A001','A000_00'),'A000_100'), B.DEFRAY_TYPE) AS DEFRAY_TYPE, DECODE(NVL (C.TREATMENT_TYPE,
       '000'), '161', '161', '162', '162', '000') AS TREATMENT_TYPE FROM  BS_CATALOG_MATCH
       A, BS_ITEM B, BS_SPECIAL_ITEM_RANGE C ,BS_CENTER D WHERE A.CENTER_ID =
       '430701' AND A.HOSPITAL_ID = '4307000001' AND A.HOSP_CODE = '111965**'
       AND A.MATCH_TYPE = '0' AND A.VALID_FLAG = '1' AND A.EFFECT_DATE < = TO_DATE(
       '2016-09-02','yyyy-mm-dd') AND (A.EXPIRE_DATE >= TO_DATE('2016-09-02'
       ,'yyyy-mm-dd') OR A.EXPIRE_DATE IS NULL) AND A.CENTER_ID = D.CENTER_ID
       AND NVL(D.CATALOG_CENTER,D.CENTER_ID) = B.CENTER_ID AND TO_CHAR(A.ITEM_CODE)
       = (B.ITEM_CODE) AND B.VALID_FLAG = '1' AND B.EFFECT_DATE < = TO_DATE('2016-09-02'
       ,'yyyy-mm-dd') AND (B.EXPIRE_DATE >= TO_DATE('2016-09-02','yyyy-mm-dd'
       ) OR B.EXPIRE_DATE IS NULL) AND A.CENTER_ID = C.CENTER_ID(+) AND A.MATCH_TYPE
       = C.MEDI_ITEM_TYPE(+) AND A.ITEM_CODE = C.MEDI_ITEM_CODE(+) AND '1' =
       C.VALID_FLAG(+) AND TO_DATE('2016-09-02','yyyy-mm-dd') >= C.EFFECT_DATE(+)
       AND TO_DATE('2016-09-02','yyyy-mm-dd') < = NVL(C.EXPIRE_DATE(+), TO_DATE(
       '2099-12-31', 'yyyy-mm-dd')) AND '131' <> C.TREATMENT_TYPE(+);

begin
  dbms_stats.gather_table_stats('"INSUR_CHANGDE"','"MV$$_B12A0069"',NULL,dbms_stats.auto_sample_size);
end;
/

CREATE MATERIALIZED VIEW "INSUR_CHANGDE"."MV$$_B12A006A"
    REFRESH FORCE WITH ROWID
    ENABLE QUERY REWRITE
    AS SELECT A.SERIAL_MATCH, A.HOSPITAL_ID, A.MATCH_TYPE, A.ITEM_CODE, A.ITEM_NAME,
       A.HOSP_CODE, A.HOSP_NAME, A.MODEL AS MODEL_NAME, A.HOSP_MODEL, TO_CHAR (A.EDIT_DATE,
       'yyyy-mm-dd') AS EDIT_DATE, A.EDIT_STAFF, A.EDIT_MAN, A.VALID_FLAG, NVL (B.SP_FLAG,
       '0') AS SP_FLAG, TO_CHAR(A.EFFECT_DATE, 'yyyy-mm-dd') AS EFFECT_DATE,
       TO_CHAR(A.EXPIRE_DATE, 'yyyy-mm-dd') AS EXPIRE_DATE, A.AUDIT_FLAG, TO_CHAR (A.AUDIT_DATE,
       'yyyy-mm-dd') AS AUDIT_DATE, A.AUDIT_STAFF, A.AUDIT_MAN, B.CLINIC_FLAG,
       NVL(B.DEFRAY_TYPE_CJ,B.DEFRAY_TYPE) AS DEFRAY_TYPE_CJ, B.STAPLE_FLAG,
       B.SELF_SCALE, NVL(A.HOSP_PRICE, 0) AS PRICE, B.STAT_TYPE AS STAT_TYPE,
       DECODE(SUBSTR('120',1,1),'5', DECODE(B.BO_FLAG,1,DECODE(SUBSTR (B.DEFRAY_TYPE,1,3),
       'Bed', B.DEFRAY_TYPE, 'A000_00'),'A000_100'),'4', DECODE(B.WL_FLAG,1,DECODE (B.DEFRAY_TYPE,
       'A001','A001','A000_00'),'A000_100'), B.DEFRAY_TYPE) AS DEFRAY_TYPE, DECODE(NVL (C.TREATMENT_TYPE,
       '000'), '161', '161', '162', '162', '000') AS TREATMENT_TYPE FROM  BS_CATALOG_MATCH
       A, BS_ITEM B, BS_SPECIAL_ITEM_RANGE C ,BS_CENTER D WHERE A.CENTER_ID =
       '430722' AND A.HOSPITAL_ID = '4307220004' AND A.HOSP_CODE = '00358' AND
       A.MATCH_TYPE = '0' AND A.VALID_FLAG = '1' AND A.EFFECT_DATE < = TO_DATE(
       '2016-09-01','yyyy-mm-dd') AND (A.EXPIRE_DATE >= TO_DATE('2016-09-01'
       ,'yyyy-mm-dd') OR A.EXPIRE_DATE IS NULL) AND A.CENTER_ID = D.CENTER_ID
       AND NVL(D.CATALOG_CENTER,D.CENTER_ID) = B.CENTER_ID AND TO_CHAR(A.ITEM_CODE)
       = (B.ITEM_CODE) AND B.VALID_FLAG = '1' AND B.EFFECT_DATE < = TO_DATE('2016-09-01'
       ,'yyyy-mm-dd') AND (B.EXPIRE_DATE >= TO_DATE('2016-09-01','yyyy-mm-dd'
       ) OR B.EXPIRE_DATE IS NULL) AND A.CENTER_ID = C.CENTER_ID(+) AND A.MATCH_TYPE
       = C.MEDI_ITEM_TYPE(+) AND A.ITEM_CODE = C.MEDI_ITEM_CODE(+) AND '1' =
       C.VALID_FLAG(+) AND TO_DATE('2016-09-01','yyyy-mm-dd') >= C.EFFECT_DATE(+)
       AND TO_DATE('2016-09-01','yyyy-mm-dd') < = NVL(C.EXPIRE_DATE(+), TO_DATE(
       '2099-12-31', 'yyyy-mm-dd')) AND '131' <> C.TREATMENT_TYPE(+);

begin
  dbms_stats.gather_table_stats('"INSUR_CHANGDE"','"MV$$_B12A006A"',NULL,dbms_stats.auto_sample_size);
end;
/

...omitted...



Using a task template to load a workload and run SQL Access Advisor
1. Create the task template my_template

SQL> variable template_id number;
SQL> variable template_name varchar2(255);
SQL> execute :template_name := 'my_template';

PL/SQL procedure successfully completed
template_name
---------
my_template
SQL> execute dbms_advisor.create_task('SQL Access Advisor',:template_id,:template_name,  is_template => 'true');

PL/SQL procedure successfully completed
template_id
---------
45357
template_name
---------
my_template

2. Set the template parameters

SQL> execute dbms_advisor.set_task_parameter(:template_name, 'INDEX_NAME_TEMPLATE',  'SH_IDX$$_');

PL/SQL procedure successfully completed
template_name
---------
my_template
SQL> execute dbms_advisor.set_task_parameter(:template_name, 'MVIEW_NAME_TEMPLATE',  'SH_MV$$_');

PL/SQL procedure successfully completed
template_name
---------
my_template
SQL> -- set default tablespace for recommended indexes/mvs
SQL> execute dbms_advisor.set_task_parameter(:template_name, 'DEF_INDEX_TABLESPACE',  'USERS');

PL/SQL procedure successfully completed
template_name
---------
my_template
SQL> execute dbms_advisor.set_task_parameter(:template_name, 'DEF_MVIEW_TABLESPACE',  'USERS');

PL/SQL procedure successfully completed
template_name
---------
my_template

3. Create the task mytask from the template

SQL> variable task_id number;
SQL> variable task_name varchar2(255);
SQL> execute :task_name := 'mytask';

PL/SQL procedure successfully completed
task_name
---------
mytask
SQL> execute dbms_advisor.create_task('SQL Access Advisor', :task_id,:task_name,  template=>'my_template');

PL/SQL procedure successfully completed
task_id
---------
45376
task_name
---------
mytask

4. Create the workload my_template_workload

SQL> variable workload_name varchar2(255);
SQL> execute :workload_name := 'my_template_workload';

PL/SQL procedure successfully completed
workload_name
---------
my_template_workload
SQL> execute dbms_advisor.create_sqlwkld(:workload_name,'this is my first workload');

PL/SQL procedure successfully completed
workload_name
---------
my_template_workload

5. Load the workload from a SQL Tuning Set

SQL> exec dbms_sqltune.create_sqlset(sqlset_name =>  'MY__TEMPLATE_STS_WORKLOAD',description => 'SQL Access Advisor Test',sqlset_owner =>  'INSUR_CHANGDE');

PL/SQL procedure successfully completed
SQL> declare
  2    cur dbms_sqltune.sqlset_cursor;
  3  begin
  4    open cur for
  5      select value(p)
  6        from table(dbms_sqltune.select_cursor_cache ('parsing_schema_name=''INSUR_CHANGDE''
  7
  8  and force_matching_signature<>0 and buffer_gets>1000 and executions>100 and
  9
 10  command_type<>2',
 11                                                    null,
 12                                                    null,
 13                                                    null,
 14                                                    null,
 15                                                    1,
 16                                                    null,
 17                                                    'all')) p;
 18    dbms_sqltune.load_sqlset(sqlset_name     => 'MY__TEMPLATE_STS_WORKLOAD',
 19                             populate_cursor => cur);
 20  end;
 21  /

PL/SQL procedure successfully completed

SQL> variable sqlsetname varchar2(30);
SQL> variable workload_name varchar2(30);
SQL> variable saved_stmts number;
SQL> variable failed_stmts number;
SQL> execute :sqlsetname := 'MY__TEMPLATE_STS_WORKLOAD';

PL/SQL procedure successfully completed
sqlsetname
---------
MY__TEMPLATE_STS_WORKLOAD
SQL> execute :workload_name := 'my_template_workload';

PL/SQL procedure successfully completed
workload_name
---------
my_template_workload
SQL> execute dbms_advisor.import_sqlwkld_sts (:workload_name ,:sqlsetname,  'NEW',1,:saved_stmts, :failed_stmts);

PL/SQL procedure successfully completed
workload_name
---------
my_template_workload
sqlsetname
---------
MY__TEMPLATE_STS_WORKLOAD
saved_stmts
---------
650
failed_stmts
---------
38

6. Create the link between the task and the workload

SQL> execute dbms_advisor.add_sqlwkld_ref('mytask','my_template_workload');

PL/SQL procedure successfully completed

7. Execute the task

SQL> execute dbms_advisor.execute_task('mytask');

PL/SQL procedure successfully completed

8. Generate the recommendation script

SQL> execute dbms_advisor.create_file(dbms_advisor.get_task_script ('mytask'),'ADVISOR_RESULTS', 'jy_mytask_advscript.sql');

PL/SQL procedure successfully completed


[IBMP740-1:root:/bak]# more jy_mytask_advscript.sql
Rem  SQL Access Advisor: Version 10.2.0.4.0 - Production
Rem
Rem  Username:        INSUR_CHANGDE
Rem  Task:            mytask
Rem  Execution date:  06/09/2016 10:42
Rem

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_PERSON_TYPE"
    WITH PRIMARY KEY ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_HIRED_TYPE"
    WITH PRIMARY KEY ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_HOSP_TYPE"
    WITH PRIMARY KEY ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_OFFICIAL_IDEN"
    WITH ROWID ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_TOWNS"
    WITH PRIMARY KEY ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_OFFICIAL"
    WITH PRIMARY KEY ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_VILLAGE"
    WITH PRIMARY KEY ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_SPECIAL"
    WITH PRIMARY KEY ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_BANK_INFO"
    WITH PRIMARY KEY ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_FOLK"
    WITH PRIMARY KEY ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_TAX_INFO"
    WITH PRIMARY KEY ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_STREET"
    WITH PRIMARY KEY ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_FUND_TYPE"
    WITH PRIMARY KEY ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_SEX"
    WITH PRIMARY KEY ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_OCCUPATION"
    WITH PRIMARY KEY ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_COMMUNITY"
    WITH PRIMARY KEY ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_POSITION"
    WITH PRIMARY KEY ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_KINDRED"
    WITH PRIMARY KEY ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."CM_OUTHOS_CIRCS"
    WITH PRIMARY KEY ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_DISTRICT"
    WITH PRIMARY KEY ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."CG_INDI_REASON"
    WITH PRIMARY KEY ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."SYS_QUESTIONNAIRE"
    WITH ROWID ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_GROUP"
    WITH PRIMARY KEY ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_HOSP_BIZ_PERS"
    WITH ROWID ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_HOSPITAL_COLLATE"
    WITH ROWID ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_HOSPITAL"
    WITH ROWID ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_BIZTYPE"
    WITH ROWID, PRIMARY KEY ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."CG_INDI_ALT_DATA"
    WITH PRIMARY KEY ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."MS_RECEIVE_SQL"
    WITH ROWID ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."MQ_SQL"
    WITH ROWID ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."LV_PRD"
    WITH ROWID, SEQUENCE("CALC_PRD","CURR_YEAR","CENTER_ID")
    INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_TREAT_TYPE"
    WITH ROWID, PRIMARY KEY ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_SPECIAL_DISEASE_RANGE"
    WITH ROWID ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."BS_DISEASE"
    WITH ROWID, PRIMARY KEY ;

CREATE MATERIALIZED VIEW LOG ON
    "INSUR_CHANGDE"."APPLICATION_VERSION"
    WITH PRIMARY KEY ;

CREATE MATERIALIZED VIEW "INSUR_CHANGDE"."SH_MV$$_0008"
    TABLESPACE "USERS"
    REFRESH FORCE WITH ROWID
    ENABLE QUERY REWRITE
    AS SELECT SUM(CITY_CODE) CITY_CODE FROM ( SELECT '1' CITY_CODE FROM BS_OUT O WHERE
       O.LEAVE_TYPE <> '2' AND O.LEAVE_STATUS = '1' AND O.INDI_ID = '208227'
       AND TO_CHAR(O.LEAVE_DATE, 'yyyy-mm-dd') < = '2016-09-06' UNION SELECT
       '1' CITY_CODE FROM BS_OUT O, (SELECT BOC.LEAVE_NO, BOC.INPUT_DATE FROM
       BS_OUT_CHANGE BOC, BS_OUT O WHERE BOC.LEAVE_NO = O.LEAVE_NO AND BOC.VALUE_AFTER
       = '异地正式取消' AND O.INDI_ID = '208227') C WHERE O.LEAVE_NO = C.LEAVE_NO(+)
       AND O.LEAVE_TYPE <> '2' AND O.LEAVE_STATUS = '4' AND O.INDI_ID = '208227'
       AND TO_CHAR(O.LEAVE_DATE, 'yyyy-mm-dd') < = '2016-09-06' AND TO_CHAR(C.INPUT_DATE,
       'yyyy-mm-dd') > '2016-09-06' UNION SELECT NVL(BI.CITY_CODE, 0) CITY_CODE
       FROM BS_INSURED BI WHERE BI.INDI_ID = '208227' AND NOT EXISTS (SELECT
       1 FROM BS_OUT BO WHERE BO.INDI_ID = BI.INDI_ID AND BO.LEAVE_TYPE <> 2
       AND BO.LEAVE_STATUS = 1) );

begin
  dbms_stats.gather_table_stats('"INSUR_CHANGDE"','"SH_MV$$_0008"',NULL,dbms_stats.auto_sample_size);
end;
/

Loading a workload from the SQL cache to run SQL Access Advisor
1. Create a workload named my_cache_workload

SQL> execute :workload_name := 'my_cache_workload';

PL/SQL procedure successfully completed
workload_name
---------
my_cache_workload
SQL> execute dbms_advisor.create_sqlwkld(:workload_name);

PL/SQL procedure successfully completed
workload_name
---------
my_cache_workload

2. Set a filter
Load only SQL statements that reference tables owned by the INSUR_CHANGDE user

SQL> execute dbms_advisor.set_sqlwkld_parameter('my_cache_workload', 'USERNAME_LIST',  'INSUR_CHANGDE');

PL/SQL procedure successfully completed

3. Load the workload from the SQL cache

SQL> variable task_id number;
SQL> variable task_name varchar2(255);
SQL> variable workload_name varchar2(255);
SQL> variable saved_stmts number;
SQL> variable failed_stmts number;
SQL> execute :workload_name := 'my_cache_workload';

PL/SQL procedure successfully completed
workload_name
---------
my_cache_workload

SQL> EXECUTE DBMS_ADVISOR.IMPORT_SQLWKLD_SQLCACHE (:workload_name, 'APPEND', 2,  :saved_stmts, :failed_stmts);

PL/SQL procedure successfully completed
workload_name
---------
my_cache_workload
saved_stmts
---------
120672
failed_stmts
---------
1515

SQL> SELECT num_select_stmt, create_date
  2  FROM user_advisor_sqlw_sum
  3  WHERE workload_name = :workload_name;

NUM_SELECT_STMT CREATE_DATE
--------------- -----------
          84547 2016/9/6 11
workload_name
---------
my_cache_workload

SQL> SELECT sql_id, username, optimizer_cost, SUBSTR(sql_text, 1, 30)
  2  FROM user_advisor_sqlw_stmts
  3  WHERE workload_name ='my_cache_workload'
  4  ORDER BY sql_id;

    SQL_ID USERNAME                       OPTIMIZER_COST SUBSTR(SQL_TEXT,1,30)
---------- ------------------------------ --------------  --------------------------------------------------------------------------------
      4319 INSUR_CHANGDE                               0 insert into bs_indi_freeze_his
      4320 INSUR_CHANGDE                               0 update bs_pres_insur set
      4321 INSUR_CHANGDE                               0 select a.serial_match,
      4322 INSUR_CHANGDE                               0 update mt_serial set cur_num_n
      4323 INSUR_CHANGDE                               0 select nvl(catalog_center,cent
      4324 INSUR_CHANGDE                               0 select scene_value from mt_biz
      4325 INSUR_CHANGDE                               0 update mt_serial set cur_num_n
      4326 INSUR_CHANGDE                               0 insert into bs_mdi_indi_acc( I
      4327 INSUR_CHANGDE                               0 select biz_type,center_id from
      4328 INSUR_CHANGDE                               0 select bi.indi_id,bi.pers_type
      4329 INSUR_CHANGDE                               0 select nvl(cp.indi_join_flag,0
      4330 INSUR_CHANGDE                               0 update mt_serial set cur_num_n
      4331 INSUR_CHANGDE                               0 update mt_serial set cur_num_n
      4332 INSUR_CHANGDE                               0 update mt_serial set cur_num_n
      4333 INSUR_CHANGDE                               0 select a.dataobj_name    from
      4334 INSUR_CHANGDE                               0 update mt_serial set cur_num_n
      4335 INSUR_CHANGDE                               0 select password from bs_cards
      4336 INSUR_CHANGDE                               0 select t.old_value,t.new_value
      4337 INSUR_CHANGDE                               0 update mt_serial set cur_num_n
      4338 INSUR_CHANGDE                               0 update mt_serial set cur_num_n

4. Create a task named my_cache_task

SQL> execute :task_name := 'my_cache_task';

PL/SQL procedure successfully completed
task_name
---------
my_cache_task
SQL> execute dbms_advisor.create_task('SQL Access Advisor', :task_id, :task_name);

PL/SQL procedure successfully completed
task_id
---------
45556
task_name
---------
my_cache_task

5. Create the link between the task and the workload

SQL> execute dbms_advisor.add_sqlwkld_ref('my_cache_task','my_cache_workload');

PL/SQL procedure successfully completed

6. Execute the task

SQL> execute dbms_advisor.execute_task('my_cache_task');

PL/SQL procedure successfully completed

7. Generate the script

SQL> execute dbms_advisor.create_file(dbms_advisor.get_task_script ('my_cache_task'),'ADVISOR_RESULTS', 'my_cache_workload_script.sql');

PL/SQL procedure successfully completed

Using dbms_sqldiag.dump_trace to obtain a 10053 trace file in Oracle 11gR2

Starting with Oracle 11gR2, dbms_sqldiag.dump_trace can be used to generate a 10053 trace file for a cached cursor without re-executing the SQL statement under event 10053. The steps are as follows:

1. First, execute the SQL statement

SQL> column sql_text format a30
SQL> select sysdate from dual;

SYSDATE
------------
15-AUG-14

2. Search v$sql by the statement text to find the statement's sql_id.

SQL> select sql_id from v$sql where sql_text like 'select sysdate from dual%';

SQL_ID
-------------
7h35uxf5uhmm1

3. Run the dbms_sqldiag.dump_trace procedure to generate the 10053 trace file

SQL> execute dbms_sqldiag.dump_trace(p_sql_id=>'7h35uxf5uhmm1',p_child_number=>0,p_component=>'Compiler',p_file_id=>'DIAG');

PL/SQL procedure successfully completed.

SQL> show parameter user_dump_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
user_dump_dest                       string      /u01/app/oracle/diag/rdbms/jyc
                                                 s/jycs/trace

4. Locate the generated 10053 trace file

SQL> host ls -lrt /u01/app/oracle/diag/rdbms/jycs/jycs/trace/*DIAG*.trc
-rw-r----- 1 oracle oinstall 66194 Aug 15 09:49 /u01/app/oracle/diag/rdbms/jycs/jycs/trace/jycs_ora_8474_DIAG.trc
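
The file found above follows the usual Oracle trace-name pattern <instance>_ora_<os_pid>_<identifier>.trc, where the trailing identifier is the p_file_id value passed to dump_trace ('DIAG' here). A small sketch of picking such a name apart (the concrete file name is the one from the listing above):

```python
import re

# <instance>_ora_<os pid>_<identifier>.trc; assumes the instance name
# itself does not contain the literal substring "_ora_".
TRC_NAME = re.compile(r'^(?P<inst>\w+?)_ora_(?P<pid>\d+)_(?P<ident>\w+)\.trc$')

m = TRC_NAME.match('jycs_ora_8474_DIAG.trc')
print(m.group('inst'), m.group('pid'), m.group('ident'))  # jycs 8474 DIAG
```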

5. View the contents of the 10053 trace file

SQL> host cat /u01/app/oracle/diag/rdbms/jycs/jycs/trace/jycs_ora_8474_DIAG.trc
Trace file /u01/app/oracle/diag/rdbms/jycs/jycs/trace/jycs_ora_8474_DIAG.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /u01/app/oracle/11.2.0/db
System name:    Linux
Node name:      jyrac1
Release:        2.6.18-164.el5
Version:        #1 SMP Tue Aug 18 15:51:48 EDT 2009
Machine:        x86_64
Instance name: jycs
Redo thread mounted by this instance: 1
Oracle process number: 33
Unix process pid: 8474, image: oracle@jyrac1 (TNS V1-V3)


*** 2014-08-15 09:49:11.244
*** SESSION ID:(146.49619) 2014-08-15 09:49:11.244
*** CLIENT ID:() 2014-08-15 09:49:11.244
*** SERVICE NAME:(SYS$USERS) 2014-08-15 09:49:11.244
*** MODULE NAME:(sqlplus@jyrac1 (TNS V1-V3)) 2014-08-15 09:49:11.244
*** ACTION NAME:() 2014-08-15 09:49:11.244

Enabling tracing for cur#=7 sqlid=84zghzsc8b7rj recursive
Parsing cur#=7 sqlid=84zghzsc8b7rj len=50
sql=/* SQL Analyze(146,0) */ select sysdate from dual
End parsing of cur#=7 sqlid=84zghzsc8b7rj
Semantic Analysis cur#=7 sqlid=84zghzsc8b7rj
OPTIMIZER INFORMATION

******************************************
----- Current SQL Statement for this session (sql_id=84zghzsc8b7rj) -----
/* SQL Analyze(146,0) */ select sysdate from dual
----- PL/SQL Stack -----
----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name
0x7f6236e8       145  package body SYS.DBMS_SQLTUNE_INTERNAL
0x7f6236e8     12043  package body SYS.DBMS_SQLTUNE_INTERNAL
0x854a3268      1276  package body SYS.DBMS_SQLDIAG
0x758e9c58         1  anonymous block
*******************************************
................omitted
kkfdapdml
        oct:3 pgadep:1 pdml mode:0 PX allowed DML allowed RowLock is not Intent
        => not allowed
        /* SQL Analyze(146,0) */ select sysdate from dual
Registered qb: SEL$1 0xfb907cb0 (PARSER)
---------------------
QUERY BLOCK SIGNATURE
---------------------
  signature (): qb_name=SEL$1 nbfros=1 flg=0
    fro(0): flg=4 objn=116 hint_alias="DUAL"@"SEL$1"

SPM: statement not found in SMB
SPM: statement not a candidate for auto-capture
        kkfdPaForcePrm return FALSE
kkfdPaPrm: use dictionary DOP(1) on table
kkfdPaPrm:- The table : 116
kkfdPaPrm:DOP = 1 (computed from hint/dictionary/autodop)
kkfdPaPrm:- returns FALSE, i.e (serial)
qksbgCreateSessionEnv: inherit from system bgc:0x2b4afb8f0888
qksbgCreateCursorEnv: create a new one and copy from the session bgc:0x2b4afb90aef0

**************************
Automatic degree of parallelism (ADOP)
**************************
kkfdIsAutoDopSupported: Yes, ctxoct is 3
Automatic degree of parallelism is disabled: Parameter.

PM: Considering predicate move-around in query block SEL$1 (#0)
**************************
Predicate Move-Around (PM)
**************************
Considering Query Transformations on query block SEL$1 (#0)
**************************
Query transformations (QT)
**************************
JF: Checking validity of join factorization for query block SEL$1 (#0)
JF: Bypassed: not a UNION or UNION-ALL query block.
ST: not valid since star transformation parameter is FALSE
TE: Checking validity of table expansion for query block SEL$1 (#0)
TE: Bypassed: No partitioned table in query block.
CBQT bypassed for query block SEL$1 (#0): no complex view, sub-queries or UNION (ALL) queries.
CBQT: Validity checks failed for 84zghzsc8b7rj.
CSE: Considering common sub-expression elimination in query block SEL$1 (#0)
*************************
Common Subexpression elimination (CSE)
*************************
CSE:     CSE not performed on query block SEL$1 (#0).
OBYE:   Considering Order-by Elimination from view SEL$1 (#0)
***************************
Order-by elimination (OBYE)
***************************
OBYE:     OBYE bypassed: no order by to eliminate.
CVM: Considering view merge in query block SEL$1 (#0)
query block SEL$1 (#0) unchanged
Considering Query Transformations on query block SEL$1 (#0)
**************************
Query transformations (QT)
**************************
JF: Checking validity of join factorization for query block SEL$1 (#0)
JF: Bypassed: not a UNION or UNION-ALL query block.
ST: not valid since star transformation parameter is FALSE
TE: Checking validity of table expansion for query block SEL$1 (#0)
TE: Bypassed: No partitioned table in query block.
CBQT bypassed for query block SEL$1 (#0): no complex view, sub-queries or UNION (ALL) queries.
CBQT: Validity checks failed for 84zghzsc8b7rj.
CSE: Considering common sub-expression elimination in query block SEL$1 (#0)
*************************
Common Subexpression elimination (CSE)
*************************
CSE:     CSE not performed on query block SEL$1 (#0).
SU: Considering subquery unnesting in query block SEL$1 (#0)
********************
Subquery Unnest (SU)
********************
SJC: Considering set-join conversion in query block SEL$1 (#0)
*************************
Set-Join Conversion (SJC)
*************************
SJC: not performed
PM: Considering predicate move-around in query block SEL$1 (#0)
**************************
Predicate Move-Around (PM)
**************************
PM:     PM bypassed: Outer query contains no views.
PM:     PM bypassed: Outer query contains no views.
query block SEL$1 (#0) unchanged
End Semantic analysis of cur#=7 sqlid=84zghzsc8b7rj
Typechecking cur#=7 sqlid=84zghzsc8b7rj
FPD: Considering simple filter push in query block SEL$1 (#0)
 ??
apadrv-start sqlid=9402936571143233265
  :
    call(in-use=1008, alloc=16344), compile(in-use=53512, alloc=54384), execution(in-use=2424, alloc=4032)

*******************************************
Peeked values of the binds in SQL statement
*******************************************

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT SYSDATE@! "SYSDATE" FROM "SYS"."DUAL" "DUAL"
kkoqbc: optimizing query block SEL$1 (#0)

        :
    call(in-use=1008, alloc=16344), compile(in-use=54576, alloc=56336), execution(in-use=2424, alloc=4032)

kkoqbc-subheap (create addr=0x2b4afb8cfb08)
****************
QUERY BLOCK TEXT
****************
select sysdate from dual
---------------------
QUERY BLOCK SIGNATURE
---------------------
signature (optimizer): qb_name=SEL$1 nbfros=1 flg=0
  fro(0): flg=0 objn=116 hint_alias="DUAL"@"SEL$1"

-----------------------------
SYSTEM STATISTICS INFORMATION
-----------------------------
  Using NOWORKLOAD Stats
  CPUSPEEDNW: 2657 millions instructions/sec (default is 100)
  IOTFRSPEED: 4096 bytes per millisecond (default is 4096)
  IOSEEKTIM: 10 milliseconds (default is 10)
  MBRC: -1 blocks (default is 8)

***************************************
BASE STATISTICAL INFORMATION
***********************
Table Stats::
  Table: DUAL  Alias: DUAL
    #Rows: 1  #Blks:  1  AvgRowLen:  2.00
kkfdPaForcePrm return FALSE
kkfdPaPrm: use dictionary DOP(1) on table
kkfdPaPrm:- The table : 116
kkfdPaPrm:DOP = 1 (computed from hint/dictionary/autodop)
kkfdPaPrm:- returns FALSE, i.e (serial)
Access path analysis for DUAL
        kkfdPaForcePrm return FALSE
kkfdPaPrm: use dictionary DOP(1) on table
kkfdPaPrm:- The table : 116
kkfdPaPrm:DOP = 1 (computed from hint/dictionary/autodop)
kkfdPaPrm:- returns FALSE, i.e (serial)
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for DUAL[DUAL]
  Table: DUAL  Alias: DUAL
    Card: Original: 1.000000  Rounded: 1  Computed: 1.00  Non Adjusted: 1.00
  Access Path: TableScan
    Cost:  2.00  Resp: 2.00  Degree: 0
      Cost_io: 2.00  Cost_cpu: 7271
      Resp_io: 2.00  Resp_cpu: 7271
  Best:: AccessPath: TableScan
         Cost: 2.00  Degree: 1  Resp: 2.00  Card: 1.00  Bytes: 0

        kkfdPaForcePrm return FALSE
kkfdPaPrm: use dictionary DOP(1) on table
kkfdPaPrm:- The table : 116
kkfdPaPrm:DOP = 1 (computed from hint/dictionary/autodop)
kkfdPaPrm:- returns FALSE, i.e (serial)
***************************************


OPTIMIZER STATISTICS AND COMPUTATIONS
***************************************
GENERAL PLANS
***************************************
Considering cardinality-based initial join order.
Permutations for Starting Table :0
Join order[1]:  DUAL[DUAL]#0
        kkfdPaForcePrm return FALSE
kkfdPaPrm: use dictionary DOP(1) on table
kkfdPaPrm:- The table : 116
kkfdPaPrm:DOP = 1 (computed from hint/dictionary/autodop)
kkfdPaPrm:- returns FALSE, i.e (serial)
***********************
Best so far:  Table#: 0  cost: 2.0002  card: 1.0000  bytes: 0
***********************
        kkfdPaForcePrm return FALSE
kkfdPaPrm: use dictionary DOP(1) on table
kkfdPaPrm:- The table : 116
kkfdPaPrm:DOP = 1 (computed from hint/dictionary/autodop)
kkfdPaPrm:- returns FALSE, i.e (serial)
(newjo-stop-1) k:0, spcnt:0, perm:1, maxperm:2000

*********************************
Number of join permutations tried: 1
*********************************
        kkfdPaForcePrm return FALSE
kkfdPaPrm: use dictionary DOP(1) on table
kkfdPaPrm:- The table : 116
kkfdPaPrm:DOP = 1 (computed from hint/dictionary/autodop)
kkfdPaPrm:- returns FALSE, i.e (serial)
        kkfdPaForcePrm return FALSE
kkfdPaPrm: use dictionary DOP(1) on table
kkfdPaPrm:- The table : 116
kkfdPaPrm:DOP = 1 (computed from hint/dictionary/autodop)
kkfdPaPrm:- returns FALSE, i.e (serial)
Trying or-Expansion on query block SEL$1 (#0)
Transfer Optimizer annotations for query block SEL$1 (#0)
        kkfdPaForcePrm return FALSE
kkfdPaPrm: use dictionary DOP(1) on table
kkfdPaPrm:- The table : 116
kkfdPaPrm:DOP = 1 (computed from hint/dictionary/autodop)
kkfdPaPrm:- returns FALSE, i.e (serial)
        kkfdPaForcePrm return FALSE
kkfdPaPrm: use dictionary DOP(1) on table
kkfdPaPrm:- The table : 116
kkfdPaPrm:DOP = 1 (computed from hint/dictionary/autodop)
kkfdPaPrm:- returns FALSE, i.e (serial)
Final cost for query block SEL$1 (#0) - All Rows Plan:
  Best join order: 1
  Cost: 2.0002  Degree: 1  Card: 1.0000  Bytes: 0
  Resc: 2.0002  Resc_io: 2.0000  Resc_cpu: 7271
  Resp: 2.0002  Resp_io: 2.0000  Resc_cpu: 7271
kkoqbc-subheap (delete addr=0x2b4afb8cfb08, in-use=11112, alloc=14424)
kkoqbc-end:
        :
    call(in-use=6272, alloc=32712), compile(in-use=55136, alloc=56336), execution(in-use=2424, alloc=4032)

kkoqbc: finish optimizing query block SEL$1 (#0)
apadrv-end
          :
    call(in-use=6272, alloc=32712), compile(in-use=56080, alloc=56336), execution(in-use=2424, alloc=4032)


Code generation for query block SEL$1 (#0)
qksqbDumpQbcdef() dumping query block tree sqlid=84zghzsc8b7rj
qbcdef qbcp=@0x2b4afb907cb0 name=SEL$1
FROM position=40
end of FROM position=49
START position=25
END position=49
SELECT clause=@0x2b4afb903080 (qbcsel)
FROM clause=@0x2b4afb902d88 (qbcfro)
WHERE clause=@(nil) (qbcwhr)
GROUP BY clause=@(nil) (qbcgbh)
HAVING clause=@(nil) (qbchav)
ORDER BY clause=@(nil) (qbcord)
Outer Query Block=@(nil) (qbcoqb)
Inner Query Block=@(nil) (qbciqb)
Next Query Block=@(nil) (qbcnxt)
View Query Block=@(nil) (qbcvqb)
Set Query Block=@(nil) (qbcseb)
Set Q.B. Parent=@(nil) (qbcsep)
qbcflg=0x40400
qbcxfl=0x0
qbcxxfl=0x0
qbcxxxfl=0x0
seldef selp=@0x2b4afb903080 name=SYSDATE
name=SYSDATE
flags=0x11
end position=40
select operand=@0x2b4afb903010
next element=@(nil)
opndef opnp=@0x2b4afb903010 type=base operand [3]
position=32
opnflg=0x30040
opnxfl=0x0
opnflg2=0x0
strtyp=SYSDATE
frodef frop=@0x2b4afb902d88 alias=DUAL
alias=DUAL
table=DUAL
next pointer=@(nil) (fronxt)
containing q.b. (froqbc)=@0x2b4afb907cb0
view q.b. (frovqb)=@(nil)
outer join to=@(nil) (frooutj)
flags, flags, and more flags
froflg=0x43
froxfl=0x100
froxxfl=0x40000000
froxxxfl=0x0
fro4xfl=0x0
Code generation for table DUAL[DUAL] using frokmode:23
qknAllocate
        Allocate FAST_DUAL_QKNTYP(0x2b4afb91adf0 rwo:0x2b4afb91aea8)

        []
qkatab: froqkn:0x2b4afb91adf0 fro:DUAL
frorwo:
        []
froqkn:
 FAST_DUAL (0x2b4afb91adf0)

Query block SEL$1 (#0) processed
Traversing query block SEL$1 (#0) because of (14)
 FAST_DUAL (0x2b4afb91adf0)
Query block SEL$1 (#0) traversed
Traversing query block SEL$1 (#0) because of (3)
 FAST_DUAL (0x2b4afb91adf0)
Query block SEL$1 (#0) traversed
qkaMarkQkn: -> FAST_DUAL_QKNTYP(0x2b4afb91adf0)
            flags_in:
qkaMarkQkn: < - FAST_DUAL_QKNTYP(0x2b4afb91adf0)
   out:
   out_left:
   out_right:
Traversing query block SEL$1 (#0) because of (6)
 FAST_DUAL (0x2b4afb91adf0)
Query block SEL$1 (#0) traversed
Traversing query block SEL$1 (#0) because of (12)
 FAST_DUAL (0x2b4afb91adf0)
Query block SEL$1 (#0) traversed
Traversing query block SEL$1 (#0) because of (5)
 FAST_DUAL (0x2b4afb91adf0)
Query block SEL$1 (#0) traversed
qknAllocate
        Allocate STMT_MARKER_QKNTYP(0x2b4afb919fb8 rwo:(nil)) on top of FAST_DUA
        L_QKNTYP(0x2b4afb91adf0)
qkenndfs: node 0x2b4afb91adf0(dnum_qkn   1) of type FAST_DUAL_QKNTYP               exprs_qkn 2b4afb91ae30
qkenndfs: node 0x2b4afb919fb8(dnum_qkn   2) of type STMT_MARKER_QKNTYP             exprs_qkn 2b4afb919ff8
**** qkeDumpExpressionScopes expression scopes ****
Expression:
        [(0x2b4afb903010:8:SYSDATE@!)]
Defined by   : Node STMT_MARKER_QKNTYP        (dnum_qkn   2) type QKE_REF  dob   1
Referenced by: Node STMT_MARKER_QKNTYP        (dnum_qkn   2)
********** End of qkeDumpExpressionScopes *********
Traversing query block SEL$1 (#0) because of (4)
 FAST_DUAL (0x2b4afb91adf0)
Query block SEL$1 (#0) traversed
Traversing query block SEL$1 (#0) because of (1)
 FAST_DUAL (0x2b4afb91adf0)
Query block SEL$1 (#0) traversed
Traversing query block SEL$1 (#0) because of (5)
qkaPlanSignatureCB
        node : 0x2b4afb91adf0, node->type_qkn #: 66,node->exprs: (nil), node->dn
        _qkn: (nil), dn->kkfdntyp: 0
 FAST_DUAL (0x2b4afb91adf0)
Query block SEL$1 (#0) traversed
Starting SQL statement dump

user_id=0 user_name=SYS module=sqlplus@jyrac1 (TNS V1-V3) action=
sql_id=84zghzsc8b7rj plan_hash_value=1388734953 problem_type=3
----- Current SQL Statement for this session (sql_id=84zghzsc8b7rj) -----
/* SQL Analyze(146,0) */ select sysdate from dual
----- PL/SQL Stack -----
----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name
0x7f6236e8       145  package body SYS.DBMS_SQLTUNE_INTERNAL
0x7f6236e8     12043  package body SYS.DBMS_SQLTUNE_INTERNAL
0x854a3268      1276  package body SYS.DBMS_SQLDIAG
0x758e9c58         1  anonymous block
sql_text_length=50
sql=/* SQL Analyze(146,0) */ select sysdate from dual
----- Explain Plan Dump -----
----- Plan Table -----

============
Plan Table
============
------------------------------------+-----------------------------------+
| Id  | Operation         | Name    | Rows  | Bytes | Cost  | Time      |
------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT  |         |       |       |     2 |           |
| 1   |  FAST DUAL        |         |     1 |       |     2 |  00:00:01 |
------------------------------------+-----------------------------------+
Predicate Information:
----------------------

Content of other_xml column
===========================
  db_version     : 11.2.0.1
  parse_schema   : SYS
  plan_hash      : 1388734953
  plan_hash_2    : 308129442
  Outline Data:
  /*+
    BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('11.2.0.1')
      DB_VERSION('11.2.0.1')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
    END_OUTLINE_DATA
  */

..........(omitted)


Query Block Registry:
SEL$1 0xfb907cb0 (PARSER) [FINAL]

:
    call(in-use=9216, alloc=32712), compile(in-use=79040, alloc=141816), execution(in-use=3600, alloc=4032)

End of Optimizer State Dump
Dumping Hints
=============
====================== END SQL Statement Dump ======================
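A 10053-style trace like the one above is long and repetitive: each transformation check (JF, ST, TE, CBQT, CSE, OBYE, CVM, SU, SJC, PM) can appear several times across parse passes. When reviewing a full trace file, it can help to pull out just the distinct transformation decisions. A minimal sketch, assuming only the message prefixes seen in the trace above (this is not an official trace-format specification):

```python
import re
from collections import OrderedDict

# Prefixes of the transformation-decision lines seen in the trace above
# (abbreviations as listed in the optimizer trace legend).
XFORM_PREFIXES = ("JF:", "ST:", "TE:", "CBQT", "CSE:", "OBYE:", "CVM:",
                  "SU:", "SJC:", "PM:", "FPD:")

def summarize_transformations(trace_lines):
    """Collect the distinct transformation-decision lines, preserving
    first-seen order and dropping repeated passes."""
    seen = OrderedDict()
    for line in trace_lines:
        stripped = line.strip()
        if stripped.startswith(XFORM_PREFIXES):
            # Collapse whitespace runs so re-logged duplicates compare equal.
            seen.setdefault(re.sub(r"\s+", " ", stripped), None)
    return list(seen)

# Sample lines taken from the trace above; the second JF line repeats.
sample = [
    "JF: Checking validity of join factorization for query block SEL$1 (#0)",
    "JF: Bypassed: not a UNION or UNION-ALL query block.",
    "ST: not valid since star transformation parameter is FALSE",
    "TE: Bypassed: No partitioned table in query block.",
    "JF: Bypassed: not a UNION or UNION-ALL query block.",
    "SJC: not performed",
]
for decision in summarize_transformations(sample):
    print(decision)
```

In practice one would pass `open(trace_path)` instead of the sample list; the duplicate pass collapses, leaving one line per distinct decision.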

This approach is part of the event-based tracing framework, and it has the built-in advantage of being able to capture SQL statements embedded inside PL/SQL blocks. An example follows.

1. Create the package specification

SQL> create or replace package getcircarea as
  2  function getcircarea(radius number)
  3  return number;
  4  end getcircarea;
  5  /

Package created

2. Create the package body

SQL> create or replace package body getcircarea as
  2  function getcircarea (radius number) return number
  3  is area number(8,2);
  4  begin
  5  select 3.142*radius*radius into area from dual;
  6  return area;
  7  end;
  8  end getcircarea;
  9  /

Package body created

3. Call the function

SQL> set serveroutput on size 100000;
SQL> declare
  2  area number(8,2);
  3  begin
  4  area:= getcircarea.getcircarea(10);
  5  dbms_output.put_line('Area is '||area);
  6  end;
  7  /

Area is 314.2

PL/SQL procedure successfully completed

4. Query the sql_id of the statement issued from inside the PL/SQL block

SQL>  select sql_text, sql_id from v$sqlarea where sql_text like '%3.142%';

SQL_TEXT
--------------------------------------------------------------------------------
SQL_ID
-------------
 select sql_text, sql_id from v$sqlarea where sql_text like '%3.142%'
0wu4zyqk0jkg2

SELECT 3.142*:B1 *:B1 FROM DUAL
9rjmrhbjuasav
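As an aside, the sql_id recovered here is a deterministic function of the statement text. By a community-documented algorithm (not an official Oracle API, so treat this as an assumption), it is a base-32 rendering of the low 64 bits of an MD5 hash of the text plus a trailing NUL byte. A sketch under that assumption:

```python
import hashlib
import struct

def sql_id(sql_text):
    """Compute an Oracle-style sql_id from statement text.

    Community-documented algorithm (assumption, not an official API):
    MD5 of the text plus a trailing NUL; the 3rd and 4th little-endian
    32-bit words of the digest form a 64-bit number, rendered as 13
    digits in a base-32 alphabet (0-9 and lowercase letters minus eilo).
    """
    alphabet = "0123456789abcdfghjkmnpqrstuvwxyz"
    digest = hashlib.md5(sql_text.encode("utf-8") + b"\x00").digest()
    _, _, msb, lsb = struct.unpack("<IIII", digest)
    value = (msb << 32) | lsb       # low 32 bits correspond to v$sql.hash_value
    chars = []
    for _ in range(13):             # 13 base-32 digits cover the 64-bit value
        chars.append(alphabet[value & 0x1F])
        value >>= 5
    return "".join(reversed(chars))

# The two statements traced in this article:
print(sql_id("SELECT 3.142*:B1 *:B1 FROM DUAL"))
print(sql_id("/* SQL Analyze(146,0) */ select sysdate from dual"))
```

If the printed values match 9rjmrhbjuasav and 84zghzsc8b7rj from the listings above, the assumed algorithm reproduces this database's sql_ids; note that the input must be the exact text stored in v$sqlarea, with no trailing semicolon.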

5. Flush the shared pool (note: the shared pool must be flushed so that the statement is hard-parsed again; otherwise no trace will be generated)

SQL> alter system flush shared_pool;

System altered.

6. Set the trace file identifier

SQL> alter session set tracefile_identifier='PLSQL';

Session altered.
--oradebug can also be used:
--oradebug event trace[rdbms.SQL_Optimizer.*][sql:9rjmrhbjuasav]

7. Enable tracing

SQL> alter session set events 'trace[rdbms.SQL_Optimizer.*][sql:9rjmrhbjuasav]';

Session altered.

8. Call the getcircarea.getcircarea function again

SQL> set serveroutput on size 100000;
SQL> declare
  2  area number(8,2);
  3  begin
  4  area:= getcircarea.getcircarea(10);
  5  dbms_output.put_line('Area is '||area);
  6  end;
  7  /
Area is 314.2

PL/SQL procedure successfully completed.

9. Disable tracing

SQL> ALTER SESSION SET EVENTS 'trace[rdbms.SQL_Optimizer.*]off';

Session altered.

10. Locate the generated trace file

SQL> select value from v$diag_info where name='Default Trace File';

VALUE
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/jycs/jycs/trace/jycs_ora_12089_PLSQL.trc

11. View the trace file

SQL> host cat /u01/app/oracle/diag/rdbms/jycs/jycs/trace/jycs_ora_12089_PLSQL.trc
Trace file /u01/app/oracle/diag/rdbms/jycs/jycs/trace/jycs_ora_12089_PLSQL.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /u01/app/oracle/11.2.0/db
System name:    Linux
Node name:      jyrac1
Release:        2.6.18-164.el5
Version:        #1 SMP Tue Aug 18 15:51:48 EDT 2009
Machine:        x86_64
Instance name: jycs
Redo thread mounted by this instance: 1
Oracle process number: 21
Unix process pid: 12089, image: oracle@jyrac1 (TNS V1-V3)


*** 2014-08-15 10:47:46.809
*** SESSION ID:(146.49829) 2014-08-15 10:47:46.809
*** CLIENT ID:() 2014-08-15 10:47:46.809
*** SERVICE NAME:(SYS$USERS) 2014-08-15 10:47:46.809
*** MODULE NAME:(sqlplus@jyrac1 (TNS V1-V3)) 2014-08-15 10:47:46.809
*** ACTION NAME:() 2014-08-15 10:47:46.809

Registered qb: SEL$1 0xb0a521f0 (PARSER)
---------------------
QUERY BLOCK SIGNATURE
---------------------
  signature (): qb_name=SEL$1 nbfros=1 flg=0
    fro(0): flg=4 objn=116 hint_alias="DUAL"@"SEL$1"

SPM: statement not found in SMB
SPM: statement not a candidate for auto-capture

**************************
Automatic degree of parallelism (ADOP)
**************************
Automatic degree of parallelism is disabled: Parameter.

PM: Considering predicate move-around in query block SEL$1 (#0)
**************************
Predicate Move-Around (PM)
**************************
OPTIMIZER INFORMATION

******************************************
----- Current SQL Statement for this session (sql_id=9rjmrhbjuasav) -----
SELECT 3.142*:B1 *:B1 FROM DUAL
----- PL/SQL Stack -----
----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name
0x7f525220         5  package body SYS.GETCIRCAREA
0x7f553b20         4  anonymous block
*******************************************
Legend
The following abbreviations are used by optimizer trace.
CBQT - cost-based query transformation
JPPD - join predicate push-down
OJPPD - old-style (non-cost-based) JPPD
FPD - filter push-down
PM - predicate move-around
CVM - complex view merging
SPJ - select-project-join
SJC - set join conversion
SU - subquery unnesting
OBYE - order by elimination
OST - old style star transformation
ST - new (cbqt) star transformation
CNT - count(col) to count(*) transformation
JE - Join Elimination
JF - join factorization
SLP - select list pruning
DP - distinct placement
qb - query block
LB - leaf blocks
DK - distinct keys
LB/K - average number of leaf blocks per key
DB/K - average number of data blocks per key
CLUF - clustering factor
NDV - number of distinct values
Resp - response cost
Card - cardinality
Resc - resource cost
NL - nested loops (join)
SM - sort merge (join)
HA - hash (join)
CPUSPEED - CPU Speed
IOTFRSPEED - I/O transfer speed
IOSEEKTIM - I/O seek time
SREADTIM - average single block read time
MREADTIM - average multiblock read time
MBRC - average multiblock read count
MAXTHR - maximum I/O system throughput
SLAVETHR - average slave I/O throughput
dmeth - distribution method
  1: no partitioning required
  2: value partitioned
  4: right is random (round-robin)
  128: left is random (round-robin)
  8: broadcast right and partition left
  16: broadcast left and partition right
  32: partition left using partitioning of right
  64: partition right using partitioning of left
  256: run the join in serial
  0: invalid distribution method
sel - selectivity
ptn - partition
***************************************
PARAMETERS USED BY THE OPTIMIZER
********************************
  *************************************
  PARAMETERS WITH ALTERED VALUES
  ******************************
Compilation Environment Dump
is_recur_flags                      = 128
Bug Fix Control Environment

..........(omitted)
Query transformations (QT)
**************************
JF: Checking validity of join factorization for query block SEL$1 (#0)
JF: Bypassed: not a UNION or UNION-ALL query block.
ST: not valid since star transformation parameter is FALSE
TE: Checking validity of table expansion for query block SEL$1 (#0)
TE: Bypassed: No partitioned table in query block.
CBQT bypassed for query block SEL$1 (#0): no complex view, sub-queries or UNION (ALL) queries.
CBQT: Validity checks failed for 9rjmrhbjuasav.
CSE: Considering common sub-expression elimination in query block SEL$1 (#0)
*************************
Common Subexpression elimination (CSE)
*************************
CSE:     CSE not performed on query block SEL$1 (#0).
OBYE:   Considering Order-by Elimination from view SEL$1 (#0)
***************************
Order-by elimination (OBYE)
***************************
OBYE:     OBYE bypassed: no order by to eliminate.
CVM: Considering view merge in query block SEL$1 (#0)
query block SEL$1 (#0) unchanged
Considering Query Transformations on query block SEL$1 (#0)
**************************
Query transformations (QT)
**************************
JF: Checking validity of join factorization for query block SEL$1 (#0)
JF: Bypassed: not a UNION or UNION-ALL query block.
ST: not valid since star transformation parameter is FALSE
TE: Checking validity of table expansion for query block SEL$1 (#0)
TE: Bypassed: No partitioned table in query block.
CBQT bypassed for query block SEL$1 (#0): no complex view, sub-queries or UNION (ALL) queries.
CBQT: Validity checks failed for 9rjmrhbjuasav.
CSE: Considering common sub-expression elimination in query block SEL$1 (#0)
*************************
Common Subexpression elimination (CSE)
*************************
CSE:     CSE not performed on query block SEL$1 (#0).
SU: Considering subquery unnesting in query block SEL$1 (#0)
********************
Subquery Unnest (SU)
********************
SJC: Considering set-join conversion in query block SEL$1 (#0)
*************************
Set-Join Conversion (SJC)
*************************
SJC: not performed
PM: Considering predicate move-around in query block SEL$1 (#0)
**************************
Predicate Move-Around (PM)
**************************
PM:     PM bypassed: Outer query contains no views.
PM:     PM bypassed: Outer query contains no views.
query block SEL$1 (#0) unchanged
FPD: Considering simple filter push in query block SEL$1 (#0)
 ??
apadrv-start sqlid=11224790525316260187
  :
    call(in-use=1136, alloc=16344), compile(in-use=53272, alloc=58176), execution(in-use=3336, alloc=4032)

*******************************************
Peeked values of the binds in SQL statement
*******************************************
----- Bind Info (kkscoacd) -----
 Bind#0
  oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
  oacflg=03 fl2=1206001 frm=00 csi=00 siz=48 off=0
  kxsbbbfp=2b8cb0aa3880  bln=22  avl=02  flg=05
  value=10
 Bind#1
  oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
  oacflg=03 fl2=1206001 frm=00 csi=00 siz=0 off=24
  kxsbbbfp=2b8cb0aa3898  bln=22  avl=02  flg=01
  value=10

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT 3.142*:B1*:B2 "3.142*:B1*:B1" FROM "SYS"."DUAL" "DUAL"
kkoqbc: optimizing query block SEL$1 (#0)

        :
    call(in-use=1136, alloc=16344), compile(in-use=54200, alloc=58176), execution(in-use=3512, alloc=4032)

kkoqbc-subheap (create addr=0x2b8cb0a5b698)
****************
QUERY BLOCK TEXT
****************
SELECT 3.142*:B1 *:B1 FROM DUAL
---------------------
QUERY BLOCK SIGNATURE
---------------------
signature (optimizer): qb_name=SEL$1 nbfros=1 flg=0
  fro(0): flg=0 objn=116 hint_alias="DUAL"@"SEL$1"

-----------------------------
SYSTEM STATISTICS INFORMATION
-----------------------------
  Using NOWORKLOAD Stats
  CPUSPEEDNW: 2657 millions instructions/sec (default is 100)
  IOTFRSPEED: 4096 bytes per millisecond (default is 4096)
  IOSEEKTIM: 10 milliseconds (default is 10)
  MBRC: -1 blocks (default is 8)

***************************************
BASE STATISTICAL INFORMATION
***********************
Table Stats::
  Table: DUAL  Alias: DUAL
    #Rows: 1  #Blks:  1  AvgRowLen:  2.00
Access path analysis for DUAL
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for DUAL[DUAL]
  Table: DUAL  Alias: DUAL
    Card: Original: 1.000000  Rounded: 1  Computed: 1.00  Non Adjusted: 1.00
  Access Path: TableScan
    Cost:  2.00  Resp: 2.00  Degree: 0
      Cost_io: 2.00  Cost_cpu: 7271
      Resp_io: 2.00  Resp_cpu: 7271
  Best:: AccessPath: TableScan
         Cost: 2.00  Degree: 1  Resp: 2.00  Card: 1.00  Bytes: 0

***************************************


OPTIMIZER STATISTICS AND COMPUTATIONS
***************************************
GENERAL PLANS
***************************************
Considering cardinality-based initial join order.
Permutations for Starting Table :0
Join order[1]:  DUAL[DUAL]#0
***********************
Best so far:  Table#: 0  cost: 2.0002  card: 1.0000  bytes: 0
***********************
(newjo-stop-1) k:0, spcnt:0, perm:1, maxperm:2000

*********************************
Number of join permutations tried: 1
*********************************
Trying or-Expansion on query block SEL$1 (#0)
Transfer Optimizer annotations for query block SEL$1 (#0)
Final cost for query block SEL$1 (#0) - All Rows Plan:
  Best join order: 1
  Cost: 2.0002  Degree: 1  Card: 1.0000  Bytes: 0
  Resc: 2.0002  Resc_io: 2.0000  Resc_cpu: 7271
  Resp: 2.0002  Resp_io: 2.0000  Resc_cpu: 7271
kkoqbc-subheap (delete addr=0x2b8cb0a5b698, in-use=11112, alloc=14296)
kkoqbc-end:
        :
    call(in-use=6400, alloc=32712), compile(in-use=54760, alloc=58176), execution(in-use=3512, alloc=4032)

kkoqbc: finish optimizing query block SEL$1 (#0)
apadrv-end
          :
    call(in-use=6400, alloc=32712), compile(in-use=55672, alloc=58176), execution(in-use=3512, alloc=4032)


Starting SQL statement dump

user_id=0 user_name=SYS module=sqlplus@jyrac1 (TNS V1-V3) action=
sql_id=9rjmrhbjuasav plan_hash_value=1388734953 problem_type=3
----- Current SQL Statement for this session (sql_id=9rjmrhbjuasav) -----
SELECT 3.142*:B1 *:B1 FROM DUAL
----- PL/SQL Stack -----
----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name
0x7f525220         5  package body SYS.GETCIRCAREA
0x7f553b20         4  anonymous block
sql_text_length=32
sql=SELECT 3.142*:B1 *:B1 FROM DUAL
----- Explain Plan Dump -----
----- Plan Table -----

============
Plan Table
============
------------------------------------+-----------------------------------+
| Id  | Operation         | Name    | Rows  | Bytes | Cost  | Time      |
------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT  |         |       |       |     2 |           |
| 1   |  FAST DUAL        |         |     1 |       |     2 |  00:00:01 |
------------------------------------+-----------------------------------+
Predicate Information:
----------------------

Content of other_xml column
===========================
  db_version     : 11.2.0.1
  parse_schema   : SYS
  plan_hash      : 1388734953
  plan_hash_2    : 308129442
  Outline Data:
  /*+
    BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('11.2.0.1')
      DB_VERSION('11.2.0.1')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
    END_OUTLINE_DATA
  */

............(omitted)


Query Block Registry:
SEL$1 0xb0a521f0 (PARSER) [FINAL]

:
    call(in-use=8672, alloc=32712), compile(in-use=80744, alloc=144008), execution(in-use=4712, alloc=8088)

End of Optimizer State Dump
Dumping Hints
=============
====================== END SQL Statement Dump ======================

A network failure corrupts the standby logfile on a standby database

On a production Exadata database running Oracle 11.2.0.4, the customer reported that the physical standby could not synchronize with the primary, so I asked them to package up the alert_sid.log files from both the primary and the standby and send them to me. Reviewing the primary's alert.log showed that the network had been dropping intermittently since the 15th; the messages are as follows:

Wed Jun 15 14:25:48 2016


***********************************************************************

Fatal NI connect error 12543, connecting to:
 (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=10.137.8.101)(PORT=1521)))
(CONNECT_DATA=(SERVICE_NAME=syyjm)(CID=(PROGRAM=oracle)(HOST=yb01dbadm01.dnsserver)
(USER=oracle))))

  VERSION INFORMATION:
	TNS for Linux: Version 11.2.0.4.0 - Production
	TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
  Time: 15-JUN-2016 14:25:48
  Tracing not turned on.
  Tns error struct:
    ns main err code: 12543
    
TNS-12543: TNS:destination host unreachable
    ns secondary err code: 12560
    nt main err code: 513
    
TNS-00513: Destination host unreachable
    nt secondary err code: 113
    nt OS err code: 0


***********************************************************************

Fatal NI connect error 12543, connecting to:
 (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=10.137.8.101)(PORT=1521)))
(CONNECT_DATA=(SERVICE_NAME=syyjm)(CID=(PROGRAM=oracle)(HOST=yb01dbadm01.dnsserver)
(USER=oracle))))

  VERSION INFORMATION:
	TNS for Linux: Version 11.2.0.4.0 - Production
	TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
  Time: 15-JUN-2016 14:25:51
  Tracing not turned on.
  Tns error struct:
    ns main err code: 12543
    
TNS-12543: TNS:destination host unreachable
    ns secondary err code: 12560
    nt main err code: 513
    
TNS-00513: Destination host unreachable
    nt secondary err code: 113
    nt OS err code: 0
Wed Jun 15 14:26:02 2016
...(omitted)...

The standby's alert.log for the same time window shows the following:

Wed Jun 15 14:24:47 2016


***********************************************************************

Fatal NI connect error 12170.
Wed Jun 15 14:24:47 2016


***********************************************************************

  VERSION INFORMATION:
	TNS for Linux: Version 11.2.0.4.0 - Production
	Oracle Bequeath NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
	TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production

Fatal NI connect error 12170.
  Time: 15-JUN-2016 14:24:47

  VERSION INFORMATION:
	TNS for Linux: Version 11.2.0.4.0 - Production
	Oracle Bequeath NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
	TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
  Tracing not turned on.
  Time: 15-JUN-2016 14:24:47
  Tns error struct:
  Tracing not turned on.
    ns main err code: 12535
  Tns error struct:
    
    ns main err code: 12535
    
TNS-12535: TNS:operation timed out
TNS-12535: TNS:operation timed out
    ns secondary err code: 12560
    ns secondary err code: 12560
    nt main err code: 505
    nt main err code: 505
    
    
TNS-00505: Operation timed out
TNS-00505: Operation timed out
    nt secondary err code: 110
    nt secondary err code: 110
    nt OS err code: 0
    nt OS err code: 0
  Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=10.137.8.73)(PORT=12282))
  Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=10.137.8.73)(PORT=12091))
RFS[4]: Possible network disconnect with primary database
RFS[5]: Possible network disconnect with primary database
Wed Jun 15 14:30:55 2016
Killing 1 processes with pids 25908 (idle RFS by thread/sequence) in order to allow the in-transit gap to be queued. Requested by OS process 58349
Wed Jun 15 14:30:57 2016
RFS[10]: Assigned to RFS process 58352
RFS[10]: Selected log 8 for thread 1 sequence 40085 dbid 643576469 branch 867103448
Wed Jun 15 14:30:58 2016
Archived Log entry 3997 added for thread 1 sequence 40085 ID 0x265be592 dest 1:
Wed Jun 15 14:30:59 2016
Primary database is in MAXIMUM PERFORMANCE mode
RFS[11]: Assigned to RFS process 58363
RFS[11]: Selected log 7 for thread 1 sequence 40087 dbid 643576469 branch 867103448
RFS[10]: Selected log 8 for thread 1 sequence 40086 dbid 643576469 branch 867103448
Wed Jun 15 14:31:01 2016
Killing 1 processes with pids 25910 (idle RFS by thread/sequence) in order to retry receiving a log after reattaching. Requested by OS process 58367
Wed Jun 15 14:31:01 2016
Archived Log entry 3998 added for thread 1 sequence 40086 ID 0x265be592 dest 1:
RFS[12]: Assigned to RFS process 58367
RFS[12]: Selected log 10 for thread 2 sequence 35718 dbid 643576469 branch 867103448

The message "RFS[4]: Possible network disconnect with primary database" shows that the standby could not connect to the primary. From "Killing 1 processes with pids 25908 (idle RFS by thread/sequence) in order to allow the in-transit gap to be queued. Requested by OS process 58349" we can see that RFS process 25908, sitting idle because it could no longer receive archived logs from the primary, was killed at the request of OS process 58349, which is itself another RFS process used to receive the archived logs shipped from the primary. In other words, because the network connection kept failing, the RFS processes were being killed and restarted over and over.
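The recurring "Fatal NI connect error" and "Possible network disconnect" entries make the outage windows easy to extract mechanically from the alert.log. A minimal sketch, assuming the stock alert.log timestamp format seen above ("Wed Jun 15 14:24:47 2016"):

```python
import re
from datetime import datetime

# Alert.log timestamp lines look like "Wed Jun 15 14:24:47 2016".
TS_RE = re.compile(
    r"^[A-Z][a-z]{2} [A-Z][a-z]{2} +\d{1,2} \d{2}:\d{2}:\d{2} \d{4}$")
# Messages in the logs above that indicate network trouble.
MARKERS = ("Fatal NI connect error", "Possible network disconnect")

def network_errors(lines):
    """Return (timestamp, message) pairs for network-related entries,
    attributing each message to the most recent timestamp line."""
    current_ts = None
    hits = []
    for line in lines:
        line = line.rstrip()
        if TS_RE.match(line):
            current_ts = datetime.strptime(line, "%a %b %d %H:%M:%S %Y")
        elif any(m in line for m in MARKERS):
            hits.append((current_ts, line))
    return hits

# Sample lines taken from the standby alert.log above.
sample = [
    "Wed Jun 15 14:24:47 2016",
    "Fatal NI connect error 12170.",
    "Wed Jun 15 14:30:55 2016",
    "RFS[4]: Possible network disconnect with primary database",
]
for ts, msg in network_errors(sample):
    print(ts, msg)
```

Running this over the full alert.log (passing `open(path)` instead of the sample list) gives a timestamped list of every network incident, which makes outage windows such as 14:24-14:34 on June 15 stand out at a glance.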

At 2016-06-15 14:34:25 the network recovered and redo shipping returned to normal, but at 2016-06-15 16:59:00 the network went down again and redo could not be shipped:

Wed Jun 15 14:34:25 2016
ARC3: Standby redo logfile selected for thread 1 sequence 40085 for destination LOG_ARCHIVE_DEST_2
Wed Jun 15 14:34:27 2016
Thread 1 advanced to log sequence 40087 (LGWR switch)
  Current log# 1 seq# 40087 mem# 0: +DATAC1/yyjm/onlinelog/group_1.261.867103449
Wed Jun 15 14:34:27 2016
Archived Log entry 139166 added for thread 1 sequence 40086 ID 0x265be592 dest 1:
Wed Jun 15 14:34:27 2016
LNS: Standby redo logfile selected for thread 1 sequence 40087 for destination LOG_ARCHIVE_DEST_2
ARC3: Standby redo logfile selected for thread 1 sequence 40086 for destination LOG_ARCHIVE_DEST_2
Wed Jun 15 15:58:19 2016
Thread 1 advanced to log sequence 40088 (LGWR switch)
  Current log# 2 seq# 40088 mem# 0: +DATAC1/yyjm/onlinelog/group_2.262.867103449
Wed Jun 15 15:58:19 2016
LNS: Standby redo logfile selected for thread 1 sequence 40088 for destination LOG_ARCHIVE_DEST_2
Wed Jun 15 15:58:20 2016
Archived Log entry 139173 added for thread 1 sequence 40087 ID 0x265be592 dest 1:
Wed Jun 15 16:59:00 2016

***********************************************************************

Fatal NI connect error 12170.

  VERSION INFORMATION:
	TNS for Linux: Version 11.2.0.4.0 - Production
	Oracle Bequeath NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
	TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
  Time: 15-JUN-2016 16:59:00
  Tracing not turned on.
  Tns error struct:
    ns main err code: 12535
    
TNS-12535: TNS:operation timed out
    ns secondary err code: 12560
    nt main err code: 505
    
TNS-00505: Operation timed out
    nt secondary err code: 110
    nt OS err code: 0
  Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=10.137.32.4)(PORT=1222))
Wed Jun 15 17:05:01 2016

From the standby's alert log, the standby had actually lost the network connection earlier, at 2016-06-15 16:08:43:

Wed Jun 15 16:08:43 2016
RFS[9]: Possible network disconnect with primary database
Wed Jun 15 16:08:43 2016
RFS[6]: Possible network disconnect with primary database
Wed Jun 15 17:56:36 2016

The intermittent network of this Data Guard environment lasted from 2016-06-15 through 2016-06-19. The standby alert log below shows that archived log files could not be created on the standby. From "RFS[23]: No standby redo logfiles available for thread 1" and "RFS[24]: No standby redo logfiles available for thread 2", together with ORA-00312: online log 9 thread 1: '/data/syyjm/datafile/standy03', we can tell that the standby log file standy03 on the standby had been damaged.

Sun Jun 19 12:15:21 2016
RFS[23]: Assigned to RFS process 18969
RFS[23]: No standby redo logfiles available for thread 1 
Creating archive destination file : /arch/syyjm/1_40277_867103448.dbf (12905 blocks)
Sun Jun 19 12:15:56 2016
Archiver process freed from errors. No longer stopped
Sun Jun 19 12:15:56 2016
Primary database is in MAXIMUM PERFORMANCE mode
Re-archiving standby log 11 thread 2 sequence 36153
RFS[24]: Assigned to RFS process 18976
RFS[24]: No standby redo logfiles available for thread 2 
...omitted...
Sun Jun 19 12:21:03 2016
Unable to create archive log file '/arch/syyjm/1_40254_867103448.dbf'
ARC3: Error 19504 Creating archive log file to '/arch/syyjm/1_40254_867103448.dbf'
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance syyjm - Archival Error
ORA-16038: log 9 sequence# 40254 cannot be archived
ORA-19504: failed to create file ""
ORA-00312: online log 9 thread 1: '/data/syyjm/datafile/standy03'
Sun Jun 19 12:21:03 2016
...omitted...
Sun Jun 19 12:31:03 2016
Suppressing further error logging of LOG_ARCHIVE_DEST_1.
Suppressing further error logging of LOG_ARCHIVE_DEST_1.
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance syyjm - Archival Error
ORA-16038: log 9 sequence# 40254 cannot be archived
ORA-19504: failed to create file ""
ORA-00312: online log 9 thread 1: '/data/syyjm/datafile/standy03'
Sun Jun 19 12:31:03 2016
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance syyjm - Archival Error
ORA-16014: log 9 sequence# 40254 not archived, no available destinations
ORA-00312: online log 9 thread 1: '/data/syyjm/datafile/standy03'
...omitted...
Wed Jun 22 11:22:11 2016
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance syyjm - Archival Error
ORA-16038: log 9 sequence# 40254 cannot be archived
ORA-19504: failed to create file ""
ORA-00312: online log 9 thread 1: '/data/syyjm/datafile/standy03'
Wed Jun 22 11:22:11 2016
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance syyjm - Archival Error
ORA-16014: log 9 sequence# 40254 not archived, no available destinations
ORA-00312: online log 9 thread 1: '/data/syyjm/datafile/standy03'
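Which standby log group is affected can be confirmed from the standard v$standby_log and v$logfile views. A sketch run on the standby (not from the original session; group 9 is the one named in the ORA-00312 message):

```sql
-- Status of every standby redo log group on the standby.
SELECT group#, thread#, sequence#, status
FROM   v$standby_log;

-- Cross-check the member file of the damaged group.
SELECT group#, type, member
FROM   v$logfile
WHERE  type = 'STANDBY';
```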

The primary's alert log likewise shows that archiving to the remote destination was failing:

Wed Jun 22 11:05:22 2016
Thread 1 advanced to log sequence 41141 (LGWR switch)
  Current log# 2 seq# 41141 mem# 0: +DATAC1/yyjm/onlinelog/group_2.262.867103449
FAL[server, ARC0]: FAL archive failed, see trace file.
ARCH: FAL archive failed. Archiver continuing
ORACLE Instance yyjm1 - Archival Error. Archiver continuing.
FAL[server, ARC1]: FAL archive failed, see trace file.
ARCH: FAL archive failed. Archiver continuing
ORACLE Instance yyjm1 - Archival Error. Archiver continuing.
Wed Jun 22 11:05:23 2016
Archived Log entry 142098 added for thread 1 sequence 41140 ID 0x265be592 dest 1:

The primary and standby alert.log files make the cause clear: the intermittent network in this Data Guard environment kept the standby's RFS processes in a kill/restart loop, and in the process a standby log file was damaged. The fix is simple: re-create the damaged standby log file on the standby.
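Rebuilding the damaged standby log can be sketched as follows. The group number (9) comes from the ORA-00312 message above, but the file size and whether CLEAR alone suffices are assumptions that must be checked against the actual environment:

```sql
-- On the standby, stop redo apply first:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-- Try clearing the damaged group in place:
ALTER DATABASE CLEAR LOGFILE GROUP 9;

-- If that fails, drop and re-create the group
-- (STANDBY_FILE_MANAGEMENT may need to be MANUAL for the DROP):
-- ALTER DATABASE DROP STANDBY LOGFILE GROUP 9;
-- ALTER DATABASE ADD STANDBY LOGFILE GROUP 9
--   ('/data/syyjm/datafile/standy03') SIZE 512M;

-- Resume redo apply:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT;
```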

ORA-00845: MEMORY_TARGET not supported on this system, caused by the size of /dev/shm on Linux

On Linux with Oracle 11.2.0.4, the following error appeared when starting an instance:

SQL> startup nomount pfile=/u03/app/oracle/11.2.0/db/dbs/initcssb.ora
ORA-00845: MEMORY_TARGET not supported on this system

Look up the error text:

[oracle11@oracle11g dbs]$ oerr ora 845
00845, 00000, "MEMORY_TARGET not supported on this system"
// *Cause: The MEMORY_TARGET parameter was not supported on this operating system or /dev/shm was not sized correctly on Linux.
// *Action: Refer to documentation for a list of supported operating systems. Or, size /dev/shm to be at least the SGA_MAX_SIZE on each Oracle instance running on the system.

The cause is that either the operating system does not support the MEMORY_TARGET parameter, or /dev/shm on Linux is not sized correctly. This was the second instance on the server, and the first instance already ran with MEMORY_TARGET set, so the parameter is clearly supported; the only remaining cause is an undersized /dev/shm. The fix is to make /dev/shm at least as large as the SGA_MAX_SIZE of every Oracle instance running on the system. /dev/shm is a Linux directory that lives in memory (tmpfs) rather than on disk, so it is extremely fast: writes go straight to RAM.
tmpfs has the following characteristics:
1. tmpfs is a file system, not a block device; you simply mount it and it is ready to use.
2. Its size is dynamic.
3. Another major benefit of tmpfs is its speed: a typical tmpfs file system resides entirely in RAM, so reads and writes are nearly instantaneous.
4. tmpfs data does not survive a reboot, because virtual memory is inherently volatile; anything that must persist needs scripts to reload or rebind it.

By default /dev/shm is capped at half of physical memory, as df -h shows. It does not actually occupy that memory up front: if /dev/shm holds no files, it uses 0 bytes; if its cap is 1G and it holds 100M of files, the remaining 900M is still available to other applications, but the 100M in use will not be reclaimed and redistributed by the system.

The capacity of /dev/shm can be adjusted. In some cases (such as an Oracle database) the default of half of memory is not enough, and the default inode count is low and usually needs to be raised as well; both can be managed with the mount command:
mount -o size=1500M -o nr_inodes=1000000 -o noatime,nodiratime -o remount /dev/shm
On a 2G machine this raises the cap to 1.5G and the inode count to 1,000,000, enough for roughly one million small files. To make the change permanent, add a size option to the /dev/shm entry in /etc/fstab and remount.
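The sizing rule (every instance's SGA_MAX_SIZE must fit in /dev/shm) can be checked with a small shell sketch. The 1G target matches the instance described below, and the helper name check_shm is mine, not a system tool:

```shell
#!/bin/sh
# check_shm TARGET_BYTES SHM_BYTES -> prints "ok" or "too small"
check_shm() {
    if [ "$2" -ge "$1" ]; then
        echo "ok"
    else
        echo "too small"
    fi
}

# Target: 1G, the SGA_MAX_SIZE used in this example.
target=$((1024 * 1024 * 1024))
# Current /dev/shm capacity in bytes (GNU df).
shm=$(df -B1 --output=size /dev/shm 2>/dev/null | tail -n 1 | tr -d '[:space:]')
if [ -n "$shm" ]; then
    check_shm "$target" "$shm"
fi
```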

Here the instance's SGA_MAX_SIZE is 1G; the command below shows the current size of /dev/shm.

[root@oracle11g ~]# df -lh
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              23G   20G  1.6G  93% /
/dev/sdb1             9.9G  5.8G  3.6G  62% /u02
tmpfs                 2G    1.3M  0.7G  65% /dev/shm

The output shows that /dev/shm has only 0.7G available, so edit /etc/fstab to enlarge it:

[root@oracle11g ~]# vi /etc/fstab
LABEL=/                 /                       ext3    defaults        1 1
/dev/sdb1               /u02                    ext3    defaults        1 2
tmpfs                   /dev/shm                tmpfs   defaults,size=4G        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
LABEL=SWAP-sda2         swap                    swap    defaults        0 0

"/etc/fstab" 7L, 540C written

Unmounting /dev/shm fails because it is still being accessed:

[root@oracle11g ~]#  umount /dev/shm
umount: /dev/shm: device is busy
umount: /dev/shm: device is busy

fuser takes care of that: its -k option kills all processes accessing the named file, and -m indicates that the argument is a mounted file system (or block device), so every process accessing that file system is targeted.

[root@oracle11g ~]# fuser -km /dev/shm
/dev/shm:             3152m  3154m  3156m  3160m  3162m  3164m  3166m  3168m  3170m  3172m  3174m  3176m  3178m  3180m  3182m  3184m  3186m  3193m  3195m  3197m  3199m  3201m  3236m  3248m  3250m  3256m  3292m  4366m
[root@oracle11g ~]#  umount /dev/shm
[root@oracle11g ~]# mount /dev/shm
[root@oracle11g ~]# df -lh
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              23G   20G  1.6G  93% /
/dev/sdb1             9.9G  5.8G  3.6G  62% /u02
tmpfs                 4.0G     0  4.0G   0% /dev/shm

Restart the instance:
SQL> startup nomount pfile=/u03/app/oracle/11.2.0/db/dbs/initcssb.ora
ORACLE instance started.

Total System Global Area 1334786560 bytes
Fixed Size 1364480 bytes
Variable Size 171970048 bytes
Database Buffers 1155189248 bytes
Redo Buffers 6262784 bytes

Summary: Oracle 11g's AMM memory management uses /dev/shm, which is why changing MEMORY_TARGET or MEMORY_MAX_TARGET can raise ORA-00845. To avoid this when sizing instance memory, adjust /dev/shm on Linux so that its available space is at least the instance's sga_max_size.