A few days ago, one of my clients accidentally performed a "drop" action on the web site. But he could not confirm whether what he dropped was correct. He wanted to see the screen as it was before he dropped the items.
Our first thought was to use flashback query. However, this simple action actually deleted important data from many tables (over 10), and the code that displays the data is fairly complex. Flashback query doesn't help in this case. We finally exported the entire schema using expdp with the flashback_time option, and imported it into a test environment.
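For reference, a plain flashback query works per table; the schema, table, and column names below are illustrative only:

```sql
-- Query one table as it was 30 minutes ago (names are hypothetical).
-- With 10+ affected tables and complex display code, repeating this
-- per table quickly becomes impractical.
SELECT *
  FROM app_schema.orders
       AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '30' MINUTE)
 WHERE order_id = 1001;
```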
I have thought that if Oracle provided session parameters flashback_scn/flashback_time that let the user flashback-query all data to a specified SCN/timestamp within the session, it would make things simple. All we would need to do is create a new connection and set this parameter after connecting to the database.
Then I wondered whether there was a workaround, and I finally found one. I can build a new schema, then create a set of views referring to the existing schema that use the flashback query feature, and control the flashback SCN/timestamp through a "global parameter".
Here is the code.
SQL

-- ################################################################################
-- #
-- # $Id: schema_snapshot.sql
-- #
-- # Created: 07/02/2014
-- #
-- # User run as: / as sysdba (OS user should be oracle owner)
-- #
-- # History
-- # Modified by    When       Why
-- # ------------   -------    --------------------------------------------------
-- ################################################################################

prompt Usage: @schema_snapshot <existing_schema_name> <snapshot_schema_name> <flashback_timestamp>
prompt Description: create a snapshot for a schema
prompt

declare
  sql_str varchar2(4000);
  c       number;
begin
  -- only build the snapshot objects once
  select count(*) into c
    from dba_objects
   where owner = upper('&2') and object_name = 'VAR_PKG';

  if c = 0 then
    -- a package in the snapshot schema holding the "global parameter"
    sql_str := q'[create or replace package ]' || upper('&2') || q'[.var_pkg as
      var varchar2(255);
      procedure set_var(val varchar2);
      function get_var return varchar2;
    end var_pkg;]';
    execute immediate sql_str;

    sql_str := q'[create or replace package body ]' || upper('&2') || q'[.var_pkg as
      procedure set_var(val varchar2) is
      begin
        var := val;
      end set_var;

      function get_var return varchar2 is
      begin
        return var;
      end get_var;
    end var_pkg;]';
    execute immediate sql_str;

    -- flashback views on every table of the existing schema
    for q in (select table_name from dba_tables where owner = upper('&1')) loop
      execute immediate 'create or replace view ' || upper('&2') || '.V_' || q.table_name ||
        ' as select * from ' || upper('&1') || '.' || q.table_name ||
        ' as of timestamp to_timestamp(' || upper('&2') ||
        '.var_pkg.get_var, ''YYYY-MM-DD HH24:MI:SS'')';
    end loop;

    -- synonyms so clients keep using the original table names
    for q in (select table_name from dba_tables where owner = upper('&1')) loop
      execute immediate 'create or replace synonym ' || upper('&2') || '.' || q.table_name ||
        ' for ' || upper('&2') || '.V_' || q.table_name;
    end loop;

    execute immediate 'begin ' || upper('&2') || '.var_pkg.set_var(''&3''); end;';
  end if;
end;
/
This creates the "snapshot schema". Clients connecting to this schema will see all the data as of the specified time. Of course, if there are procedures/views in the existing schema, they should be re-created in the new schema referring to those synonyms.
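Using the snapshot schema then looks something like this (user name, password, timestamp, and table names are illustrative):

```sql
-- Connect as the snapshot schema and set the "global parameter"
connect snap_app/<password>
exec var_pkg.set_var('2014-02-07 09:00:00')

-- This hits the synonym ORDERS -> view V_ORDERS -> the original
-- table AS OF the timestamp set above
select * from orders;
```

Note that the package variable is per-session state, so every new session must call set_var before querying.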
SQL 1:

Statistics
----------------------------------------------------------
             recursive calls
             db block gets
        460  consistent gets
             physical reads
             redo size
    1203583  bytes sent via SQL*Net to client
       3868  bytes received via SQL*Net from client
        306  SQL*Net roundtrips to/from client
             sorts (memory)
             sorts (disk)
       4563  rows processed

SQL 2:

Statistics
----------------------------------------------------------
             recursive calls
             db block gets
        167  consistent gets
             physical reads
             redo size
     267325  bytes sent via SQL*Net to client
       3868  bytes received via SQL*Net from client
        306  SQL*Net roundtrips to/from client
             sorts (memory)
             sorts (disk)
       4563  rows processed
The consistent gets of the 1st SQL are almost 3 times those of the 2nd one. It seems the 2nd one must perform better, doesn't it?
OK, let's go back and see how these numbers were produced.
SQL
HelloDBA.COM> create table t1 as select * from dba_tables;

Table created.

HelloDBA.COM> create table t2 as select * from dba_users;

Table created.

HelloDBA.COM> exec dbms_stats.gather_table_stats('DEMO', 'T1');

PL/SQL procedure successfully completed.

HelloDBA.COM> exec dbms_stats.gather_table_stats('DEMO', 'T2');

PL/SQL procedure successfully completed.

HelloDBA.COM> set timing on
HelloDBA.COM> set autot trace
HelloDBA.COM> select * from t1;

4563 rows selected.

Elapsed: 00:00:00.10

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |  4563 |  1078K|    49   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| T1   |  4563 |  1078K|    49   (0)| 00:00:01 |
--------------------------------------------------------------------------

Statistics
----------------------------------------------------------
             recursive calls
             db block gets
        460  consistent gets
             physical reads
             redo size
    1203583  bytes sent via SQL*Net to client
       3868  bytes received via SQL*Net from client
        306  SQL*Net roundtrips to/from client
             sorts (memory)
             sorts (disk)
       4563  rows processed
HelloDBA.COM> select * from t1, t2 where t2.username = 'SYS';

4563 rows selected.

Execution Plan
----------------------------------------------------------
----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |  4563 |  1581K|    52   (0)| 00:00:01 |
|   1 |  MERGE JOIN CARTESIAN|      |  4563 |  1581K|    52   (0)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL  | T2   |     1 |   113 |     3   (0)| 00:00:01 |
|   3 |   BUFFER SORT        |      |  4563 |  1078K|    49   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL | T1   |  4563 |  1078K|    49   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("T2"."USERNAME"='SYS')

Statistics
----------------------------------------------------------
             recursive calls
             db block gets
        167  consistent gets
             physical reads
             redo size
     267325  bytes sent via SQL*Net to client
       3868  bytes received via SQL*Net from client
        306  SQL*Net roundtrips to/from client
             sorts (memory)
             sorts (disk)
       4563  rows processed
These 2 SQLs are simple. Even ignoring the performance statistics, we could easily judge their relative performance from their logical structure or from their execution plans: the 1st one must be better than the 2nd, because the 2nd has one more full table scan in its execution plan.
Then why does the 2nd one have fewer consistent gets?
Set SQL Trace for them, and then look into the formatted trace file.
SQL
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
      4563       4563       4563  MERGE JOIN CARTESIAN (cr=167 pr=0 pw=0 time=38433 us cost=52 size=1619865 card=4563)
         1          1          1  TABLE ACCESS FULL T2 (cr=3 pr=0 pw=0 cost=3 size=113 card=1)
      4563       4563       4563  BUFFER SORT (cost=49 size=1104246 card=4563)
      4563       4563       4563  TABLE ACCESS FULL T1 (cr=164 pr=0 pw=0 time=11815 us cost=49 size=1104246 card=4563)
This is the row source statistics of the 2nd SQL. Obviously, the consistent gets consist of 2 parts: the FTS on t1 and the FTS on t2.
In this plan, the FTS on t1 takes 164 gets. Then why did the 1st SQL get 460? That is because of the fetch array size. The default array size of SQL*Plus is 15. If we set it large enough, it becomes:
SQL

HelloDBA.COM> set arraysize 5000
HelloDBA.COM> set autot trace stat
HelloDBA.COM> select * from t1;

Statistics
----------------------------------------------------------
             recursive calls
             db block gets
        165  consistent gets
             physical reads
             redo size
    1147039  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
             SQL*Net roundtrips to/from client
             sorts (memory)
             sorts (disk)
       4563  rows processed
The consistent gets drop to 165, just one more than 164. Yes, because no matter how large the array size is, Oracle will always retrieve the 1st row in the 1st fetch. For more details, refer to this article: http://www.hellodba.com/reader.php?ID=39&lang=EN
T2 is fairly small; its consistent gets are just 3, which makes sense (164 + 3 = 167).
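The arithmetic behind the fetch-size effect can be checked directly (the numbers come from the runs above):

```sql
-- SQL*Plus fetches the 1st row alone, then 15 rows at a time:
-- 1 + ceil((4563-1)/15) = 306 fetch calls, matching the 306
-- SQL*Net roundtrips reported above. Each fetch call revisits the
-- current block, which is what inflates consistent gets from 164 to 460.
select 1 + ceil((4563-1)/15) as fetch_calls from dual;
```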
Is this the end of the story? No. Let's remove the filter from the 2nd SQL:
SQL
HelloDBA.COM> select * from t1, t2;

246402 rows selected.

Statistics
----------------------------------------------------------
             recursive calls
             db block gets
        219  consistent gets
             physical reads
             redo size
   14113903  bytes sent via SQL*Net to client
     181209  bytes received via SQL*Net from client
      16428  SQL*Net roundtrips to/from client
             sorts (memory)
             sorts (disk)
     246402  rows processed
Only 219 consistent gets? It's a Cartesian join; how can the consistent gets be so small?
Let's generate the SQL trace file again:
SQL
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
    246402     246402     246402  MERGE JOIN CARTESIAN (cr=219 pr=0 pw=0 time=957833 us cost=2553 size=87472710 card=246402)
        54         54         54  TABLE ACCESS FULL T2 (cr=3 pr=0 pw=0 cost=3 size=6102 card=54)
    246402     246402     246402  BUFFER SORT (cost=2550 size=1104246 card=4563)
      4563       4563       4563  TABLE ACCESS FULL T1 (cr=164 pr=0 pw=0 time=10674 us cost=47 size=1104246 card=4563)

HelloDBA.COM> select count(*) from t2;

  COUNT(*)
----------
        54
But wait: logically, a Cartesian join means n*m reads, right? How come the gets look like n+m?
Actually, Oracle does read the data of t1 multiple times (54 times). However, after the first scan of t1, the data has been cached in the private work area (the BUFFER SORT), and the following reads come from that private buffer instead of the shared buffer cache. Therefore, they are not counted as consistent gets.
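A quick sanity check on the cardinality and roundtrips (pure arithmetic on the numbers above):

```sql
-- 4563 rows in t1 times 54 rows in t2 gives the 246402 rows selected;
-- SQL*Plus fetches the 1st row alone, then 15 at a time:
-- 1 + ceil((246402-1)/15) = 16428 fetch calls, matching the
-- SQL*Net roundtrips reported above.
select 4563 * 54 as cartesian_rows,
       1 + ceil((246402-1)/15) as fetch_calls
  from dual;
```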
To get the real number of "gets", we can use a nested-loops join hint to force it to read the data from the shared buffer cache instead of the private buffer:
SQL
HelloDBA.COM> select /*+use_nl(t1) leading(t1)*/ * from t1, t2;

246402 rows selected.

Elapsed: 00:00:07.43

Execution Plan
----------------------------------------------------------
----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |   246K|    83M|  5006   (1)| 00:01:01 |
|   1 |  NESTED LOOPS       |      |   246K|    83M|  5006   (1)| 00:01:01 |
|   2 |   TABLE ACCESS FULL | T1   |  4563 |  1078K|    49   (0)| 00:00:01 |
|   3 |   BUFFER SORT       |      |    54 |  6102 |  4956   (1)| 00:01:00 |
|   4 |    TABLE ACCESS FULL| T2   |    54 |  6102 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
             recursive calls
             db block gets
       4568  consistent gets
             physical reads
             redo size
   16632868  bytes sent via SQL*Net to client
     181209  bytes received via SQL*Net from client
      16428  SQL*Net roundtrips to/from client
             sorts (memory)
             sorts (disk)
     246402  rows processed
Although both tables are still accessed by the same full table scans, the consistent gets increased significantly.
* SQL Text
* Execution Plan
* Plan Predicate
* Wait events
Example:
SQL
HelloDBA.COM> @showplan 8z91j441gu9n1
Usage: @showplan <SQL_ID> [Plan Hash Value] [Details: [+](B)inds|SQL (T)ext|Pee(K)ed Binds|(P)lan|(O)utlines|Pre(D)icate|Plan (L)oading|(W)ait events|(S)tatistics]
Description: Show SQL Plan

SQL ID: 8z91j441gu9n1

------------- Last Monitored Binds -------------

--SID: 258,16699
var ACCEPTDIS VARCHAR2(32)
var BENAME VARCHAR2(32)
var EENAME VARCHAR2(32)
var IMPLTYP VARCHAR2(32)
var PROFILETYP VARCHAR2(32)
var SQLTYP NUMBER
var TID NUMBER

--SID: 258,16699
exec :ACCEPTDIS:='ACCEPTDISABLED';
exec :BENAME:='EXEC_42756';
exec :EENAME:='EXEC_43638';
------------- SQL Text -------------

SELECT /* STN_REPT_TOP_PROF */
       xmlelement(
         "top_profiles",
         xmlagg(xmlelement("obj_id", object_id)))
  FROM (SELECT ... id,
               ... ...
          FROM (SELECT e.task_id task_id,
                       e.execution_name exec_name,
                       e.execution_id exec_id,
                       e.execution_start exec_start,
                       o.id object_id,
                       o.attr1 sql_id,
                       o.attr3 parsing_schema,
                       to_number(nvl(o.attr5, '0')) phv,
                       nvl(o.attr8,0) obj_attr8,
                       row_number() over
                         (partition by o.attr1
                          order by ... ...) rn
                  FROM (SELECT task_id,
                               min(execution_name) keep (dense_rank first order by
                                   execution_start) bename,
                               ... keep (dense_rank ... order by
                                   execution_start) eename,
                               min(execution_start) bestart,
                               max(execution_start) eestart
                          FROM (SELECT task_id, ... exec_start execution_start
                                  FROM wri$_adv_executions
                                 WHERE ... ...)
                         GROUP BY task_id) r,
                       wri$_adv_executions e /* e */,
                       ... ...
                       wri$_adv_objects o
                 WHERE ... ...
                   AND o.type = :sqltyp)
         WHERE rn = 1) oe /* oe */,
       wri$_adv_findings f,
       wri$_adv_recommendations r,
       wri$_adv_rationale l
 WHERE ... ...
   AND EXISTS (SELECT ... ...
                 FROM dba_sql_profiles p
                WHERE ... ...
                  AND p.task_rec_id = r.id)
------------- Plan -------------

#1  (0)    ... ...
#2  (1)    ... ...
#3  (2)    ... ...
#4  (3)    ... ...
#5  (4)    ... ...
#6  (5)    ... ... (0/123)
#7  (6)    ... ...
#8  (7)    NESTED LOOPS ... ...
#9  (8)    ... ...
#10 (9)    ... ...
#11 (10)   ... ...
#12 (11)   ... ...
#13 (12)   TABLE ACCESS (BY INDEX ROWID) OF 'WRI$_ADV_EXECUTIONS' (TABLE) (Cost=3 Card=2 rows Bytes=0/24)
*#14 (13)  INDEX (RANGE SCAN) OF 'WRI$_ADV_EXECS_PK' (INDEX (UNIQUE)) (Cost=2 Card=2 rows Bytes=0/)
#15 (10)   TABLE ACCESS (BY INDEX ROWID) OF 'WRI$_ADV_EXECUTIONS' (TABLE) (Cost=2 Card=33 rows Bytes=0/27)
*16 (15)   INDEX (RANGE SCAN) OF 'WRI$_ADV_EXECS_IDX_03' (INDEX) (Cost=1 Card=33 rows Bytes=0/)
*17 (9)    INDEX (RANGE SCAN) OF 'WRI$_ADV_OBJECTS_IDX_01' (INDEX (UNIQUE)) (Cost=1 Card=1667 rows Bytes=0/)
#18 (8)    TABLE ACCESS (BY INDEX ROWID) OF 'WRI$_ADV_OBJECTS' (TABLE) (Cost=2 Card=1634 rows Bytes=0/273)
*19 (5)    INDEX (RANGE SCAN) OF 'WRI$_ADV_FINDINGS_IDX_02' (INDEX (UNIQUE)) (Cost=1 Card=608 rows Bytes=0/23)
#20 (4)    TABLE ACCESS (BY INDEX ROWID) OF 'WRI$_ADV_RECOMMENDATIONS' (TABLE) (Cost=2 Card=159 rows Bytes=0/39)
*21 (20)   INDEX (RANGE SCAN) OF 'WRI$_ADV_RECS_IDX_02' (INDEX (UNIQUE)) (Cost=1 Card=345 rows Bytes=0/)
#22 (3)    TABLE ACCESS (BY INDEX ROWID) OF 'WRI$_ADV_RATIONALE' (TABLE) (Cost=15 Card=11 rows Bytes=0/36)
*23 (22)   INDEX (RANGE SCAN) OF 'WRI$_ADV_RATIONALE_PK' (INDEX (UNIQUE)) (Cost=3 Card=27293304 rows Bytes=0/)
24 (2)     ... ...
25 (24)    ... ...
26 (25)    ... ...
*27 (26)   ... ...
#28 (27)   TABLE ACCESS (BY INDEX ROWID) OF 'SQLOBJ$AUXDATA' (TABLE) (Cost=2 Card=0 rows Bytes=0/462)
*29 (28)   INDEX (RANGE SCAN) OF 'I_SQLOBJ$AUXDATA_TASK' (INDEX) (Cost=1 Card=0 rows Bytes=0/)
*#30 (27)  INDEX (SKIP SCAN) OF 'SQLOBJ$_PKEY' (INDEX (UNIQUE)) (Cost=1 Card=0 rows Bytes=0/322)
*31 (26)   ... ...
*32 (25)   ... ...
------------- Predicate Information (Plan Hash Value: 589376886) -------------

 6 Filter: "RN"=1
 7 Filter: ROW_NUMBER() OVER ( PARTITION BY "O"."ATTR1" ORDER BY BITAND("O"."ATTR7",32),INTERNAL_FUNCTION("E"."EXEC_START") DESC )<=1
14 Access: "TASK_ID"=:TID
15 Filter: (INTERNAL_FUNCTION("E"."STATUS") AND ("BENAME"<>"EENAME" OR "E"."NAME"="BENAME"))
16 Access: "E"."TASK_ID"="R"."TASK_ID" AND "E"."EXEC_START">="BESTART" AND "E"."EXEC_START"<="EESTART"
18 Filter: "O"."TYPE"=:SQLTYP
19 Access: "TASK_ID"="F"."TASK_ID" AND "EXEC_NAME"="F"."EXEC_NAME" AND "OBJECT_ID"="F"."OBJ_ID"
20 Filter: "R"."TYPE"=:PROFILETYP
21 Access: "F"."TASK_ID"="R"."TASK_ID" AND "F"."EXEC_NAME"="R"."EXEC_NAME" AND "F"."ID"="R"."FINDING_ID"
22 Filter: ("L"."TYPE"=:IMPLTYP AND "L"."ATTR1"=:ACCEPTDIS AND "L"."EXEC_NAME"="R"."EXEC_NAME" AND "L"."REC_ID"="R"."ID")
23 Access: "L"."TASK_ID"="R"."TASK_ID"
27 Access: "SO"."SIGNATURE"="AD"."SIGNATURE" AND "SO"."CATEGORY"="AD"."CATEGORY"
28 Filter: "AD"."OBJ_TYPE"=1
29 Access: "AD"."TASK_ID"="R"."TASK_ID" AND "AD"."TASK_EXEC_NAME"="R"."EXEC_NAME" AND "AD"."TASK_OBJ_ID"="OBJECT_ID" AND "AD"."TASK_FND_ID"="R"."FINDING_ID" AND "AD"."TASK_REC_ID"="R"."ID"
30 Access: "SO"."OBJ_TYPE"=1
30 Filter: "SO"."OBJ_TYPE"=1
31 Access: "SO"."SIGNATURE"="ST"."SIGNATURE"
32 Access: "SO"."SIGNATURE"="SQ"."SIGNATURE"
------------- Wait Events -------------

22: TABLE ACCESS BY INDEX ROWID              ###########################(89.47%)
... ...                                      #####(10.53%)

ON CPU on SYS.WRI$_ADV_RECS_IDX_02(INDEX)    #####(9.65%)

------------- Statistics -------------

Loads: 1
Load Versions: 1
User Openings: 0
Parse Calls: 11
Executions: 11
Sorts(Average): 2
Fetches(Average): 1
... ...
Temp Space(Maximum): 0G
Note: this version works in 11gR2. In other DB versions you may need to remove the parts referencing views/columns that do not exist there, e.g. v$sql_monitor.
Download the latest version here: http://www.HelloDBA.com/download/showplan.zip
Session altered.

  3    FROM (select /*+qb_name(inv) no_merge(v)*/ o.owner, o.status,
               o.object_name, o.created, t.tablespace_name
          from v_objects_sys o, t_tables t
         where o.owner=t.owner and o.object_name=t.table_name) q
  4    PARTITION BY (status)
       DIMENSION BY (owner)
       MEASURES (object_name v, 1 s)
       RULES
 10    ... ...
Comparing the trace content, we can find that the optimizer performed simple filter push analysis:

FPD: Considering simple filter push (pre rewrite) in query block M (#0)
FPD:    try to generate transitive predicate from check constraints for query block M (#0)
finally: ... ...
MODEL_COMPILE_SUBQUERY
Usage: MODEL_COMPILE_SUBQUERY
Description: Unknown. It might be used for model query transformation.
MODEL_DONTVERIFY_UNIQUENESS
Usage: MODEL_DONTVERIFY_UNIQUENESS
Description: Unknown. It might be used for model query transformation.
MODEL_DYNAMIC_SUBQUERY
Usage: MODEL_DYNAMIC_SUBQUERY
Description: Unknown. It might be used for model query transformation.
Partitioning Hints
X_DYN_PRUNE
Usage: X_DYN_PRUNE
Description: Instructs the SQL executor to use the result of a subquery to prune the partitions dynamically.
Session altered.
HELLODBA.COM>alter session set events '10128 trace name context forever, level
31';
Session altered.
----------------------------------------------------------
| Id  | Operation                 | Name            |
----------------------------------------------------------
|   0 | SELECT STATEMENT          |                 |
| ... |  HASH JOIN                |                 |
| ... |   PART JOIN FILTER CREATE | :BF0000         |
| ... |    ... ...                | T_TABLES_IDX3   |
| ... |   ... ...                 | T_OBJECTS_RANGE |
----------------------------------------------------------
--------------------------------------------------------
| Id  | Operation         | Name             |
--------------------------------------------------------
|   0 | SELECT STATEMENT  |                  |
| ... |  MERGE JOIN       |                  |
| ... |   SORT JOIN       |                  |
| ... |    VIEW           | index$_join$_002 |
| ... |     HASH JOIN     |                  |
| ... |      ... ...      | T_TABLES_IDX3    |
| ... |      ... ...      | T_TABLES_PK      |
| ... |   SORT JOIN       |                  |
| ... |    ... ...        | T_OBJECTS_RANGE  |
--------------------------------------------------------
Description: Prevents the optimizer from using the result of a subquery to prune the partitions dynamically.
--------------------------------------------------------
| Id  | Operation         | Name             |
--------------------------------------------------------
|   0 | SELECT STATEMENT  |                  |
| ... |  NESTED LOOPS     |                  |
| ... |   NESTED LOOPS    |                  |
| ... |    HASH JOIN      |                  |
| ... |     ... ...       | T_USERS_PK       |
| ... |     ... ...       | T_OBJECTS_RANGE  |
| ... |    ... ...        | T_TABLES_PK      |
| ... |   ... ...         | T_TABLES         |
--------------------------------------------------------
Description: Instructs the optimizer to convert the object to a relational table, similar to the RELATIONAL function.
HELLODBA.COM> desc xmltable
 Name                 Null?    Type
 -------------------- -------- ----------------
 SYS_NC_ROWINFO$               ...

<other_xml>
  <outline_data>
    <hint>
      <IGNORE_OPTIM_EMBEDDED_HINTS/>
    </hint>
    ... ...
  </outline_data>
</other_xml>

1 row selected.

SYS_NC_OID$                       XMLDATA
--------------------------------  --------------------------------------------------
5477ABC43C2D4A85917F7328AA961884
<other_xml><outline_data><hint><IGNORE_OPTIM_EMBEDDED_HINTS></IGNORE_OPTIM_EMBEDDED_HINTS></hint><hint><OPTIMIZER_FEATURES_ENABLE>10.2.0.3</OPTIMIZER_FEATURES_ENABLE></hint><hint><![CDATA[ALL_ROWS]]></hint><hint><OUTLINE_LEAF>"SEL$3BA1AD7C" ...
MONITOR
Usage: MONITOR
Description: Instructs Oracle to monitor the running status of the statement, regardless of whether it fulfills the usual criteria (a parallel query, or running for more than 5 seconds).
NAME                            TYPE        VALUE
------------------------------  ----------  ------------------
control_management_pack_access  string      DIAGNOSTIC+TUNING

  COUNT(*)
----------
        31

... where sql_text like '%monitor%';

SQL_TEXT                                 STATUS
... ...
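A minimal way to see the hint take effect would be something like this sketch (the queried table is illustrative; v$sql_monitor requires the Tuning Pack):

```sql
-- Force monitoring of a short-running serial statement
select /*+ monitor */ count(*) from dba_objects;

-- The statement should now appear in v$sql_monitor even though it
-- ran for far less than 5 seconds and was not parallel
select status, sql_text
  from v$sql_monitor
 where sql_text like '%monitor%';
```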
NO_MONITOR
Usage: NO_MONITOR
Description: Prevents Oracle from monitoring the running status of the statement, regardless of whether it fulfills the usual criteria (a parallel query, or running for more than 5 seconds).
  COUNT(*)
----------
     72116

no rows selected
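The counterpart experiment would look like this sketch (the table name is illustrative):

```sql
-- A statement that might otherwise qualify for monitoring; with the
-- hint it should not show up in v$sql_monitor, hence the
-- "no rows selected" above
select /*+ no_monitor */ count(*) from t_objects;
```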
NESTED_TABLE_FAST_INSERT
Usage: NESTED_TABLE_FAST_INSERT
Description: Instructs the SQL executor to insert data into a nested table in fast mode. From the trace content of the 10046 event, the data is inserted in batches.
Type created.
Table created.
Elapsed: 00:00:18.77
Elapsed: 00:00:07.79
NESTED_TABLE_GET_REFS
Usage: NESTED_TABLE_GET_REFS
Description: With this hint, the user can access the nested table directly.
  COUNT(*)
----------
     72116
NESTED_TABLE_SET_SETID
Usage: NESTED_TABLE_SET_SETID
Description: With this hint, the user can access the nested table directly.
  COUNT(*)
----------
     72116
NO_MONITORING
Usage: NO_MONITORING
Description: Prevents Oracle from monitoring column usage in predicates; consequently, the dictionary table col_usage$ will not be updated by the execution of the statement.
LIKE_PREDS
----------
        18

  COUNT(*)
----------
        30

LIKE_PREDS
----------
        19

  COUNT(*)
----------
        30

LIKE_PREDS
----------
        19
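A sketch of the kind of experiment behind these numbers (object names are illustrative; col_usage$ is flushed lazily, so a statistics gathering call or a delay may be needed before a change becomes visible):

```sql
-- This LIKE predicate would normally be recorded in col_usage$;
-- with the hint, the like_preds counter should stay unchanged
select /*+ no_monitoring */ count(*) from t_objects where owner like 'SYS%';

-- Check the recorded column usage for the table
select c.intcol#, c.like_preds
  from sys.col_usage$ c, dba_objects o
 where c.obj# = o.object_id
   and o.object_name = 'T_OBJECTS';
```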
NO_SQL_TUNE
Usage: NO_SQL_TUNE
Description: Prevents the optimizer from performing SQL tuning on the statement.
  COUNT(*)
----------
        31
HELLODBA.COM>exec :exec_name := dbms_sqltune.execute_tuning_task (:task_name,
'EXEC_'||substr(:task_name, length(:task_name)-4));
RESTRICT_ALL_REF_CONS
Usage: RESTRICT_ALL_REF_CONS
Description: Restricts all cascaded operations caused by referential constraints in the transaction.
OWNER  TABLE_NAME  CONSTRAINT_NAME  R_OWNER  R_CONSTRAINT_NAME  DELETE_RULE
-----  ----------  ---------------  -------  -----------------  -----------
DEMO   T_C         T_C_FK           DEMO     T_P_PK             CASCADE

1 row deleted.

  COUNT(A)
----------
         1
HELLODBA.COM>commit;
commit
*
ERROR at line 1:
ORA-02091: transaction rolled back
ORA-02292: integrity constraint (DEMO.T_C_FK) violated - child record found
USE_HASH_AGGREGATION
Usage: USE_HASH_AGGREGATION([<@Block>])
Description: Instructs the optimizer to use a hash algorithm to perform aggregation operations.
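For example, the hint is typically written like this (the table and column names are illustrative, matching the demo objects in the plan below):

```sql
-- Force hash aggregation (HASH GROUP BY) instead of SORT GROUP BY
select /*+ use_hash_aggregation */ status, count(*)
  from t_objects
 group by status;
```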
Session altered.
---------------------------------------------------------------------------------------
| Id  | Operation             | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                |    23 |   138 |   196   (8)| 00:00:02 |
|   1 |  HASH GROUP BY        |                |    23 |   138 |   196   (8)| 00:00:02 |
|   2 |   INDEX FAST FULL SCAN| T_OBJECTS_IDX8 | 72116 |   422K|   185   (2)| 00:00:02 |
---------------------------------------------------------------------------------------

NO_USE_HASH_AGGREGATION
Usage: NO_USE_HASH_AGGREGATION([<@Block>])
Description: Prevents the optimizer from using a hash algorithm to perform aggregation operations.
---------------------------------------------------------------------------------------
| Id  | Operation             | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                |    23 |   138 |   196   (8)| 00:00:02 |
|   1 |  SORT GROUP BY        |                |    23 |   138 |   196   (8)| 00:00:02 |
|   2 |   INDEX FAST FULL SCAN| T_OBJECTS_IDX8 | 72116 |   422K|   185   (2)| 00:00:02 |
---------------------------------------------------------------------------------------

BYPASS_RECURSIVE_CHECK
Usage: BYPASS_RECURSIVE_CHECK
Description: Unknown. It might instruct the parser not to do recursive checking. It can be observed in the internal statements generated by Materialized View refresh.
Demo:
Session altered.
BYPASS_UJVC
Usage: BYPASS_UJVC
Description: Unknown. It might instruct the parser not to check the unique constraint for the join view. It can be observed in the internal statements generated by Materialized View refresh.
We got the statement below from the trace file generated in the previous demo.
---------------------------------------------------------------------------------------------
| Id  | Operation                   | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                |     1 |   241 |     5   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T_TABLES       |     1 |   241 |     5   (0)| 00:00:01 |
|*  2 |   DOMAIN INDEX              | T_TABLES_DIX03 |       |       |     4   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

   2 - access("CTXSYS"."CONTAINS"("OWNER",'aaa',1)>0)
       filter("STATUS"='VALID')
NO_DOMAIN_INDEX_FILTER
Usage: NO_DOMAIN_INDEX_FILTER([<@Block>] <Table> [(<Index>)]) or NO_DOMAIN_INDEX_FILTER([<@Block>] <Table> [(<Indexed Columns>)])
Description: Prevents the optimizer from pushing the filter to a Composite Domain Index.
---------------------------------------------------------------------------------------------
| Id  | Operation                   | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                |     1 |   241 |     5   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS BY INDEX ROWID| T_TABLES       |     1 |   241 |     5   (0)| 00:00:01 |
|*  2 |   DOMAIN INDEX              | T_TABLES_DIX03 |       |       |     4   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

   1 - filter("STATUS"='VALID')
   2 - access("CTXSYS"."CONTAINS"("OWNER",'aaa',1)>0)
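The hinted statement would look something like this sketch (the table, index, and predicate values are illustrative, matching the plan above):

```sql
-- Keep the STATUS filter out of the composite domain index,
-- so it is applied at the table level instead
select /*+ no_domain_index_filter(t) */ *
  from t_tables t
 where contains(owner, 'aaa', 1) > 0
   and status = 'VALID';
```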
DOMAIN_INDEX_SORT
Usage: DOMAIN_INDEX_SORT
Description: Instructs the optimizer to push sorting columns to the Composite Domain Index.
---------------------------------------------------------------------------------------------
| Id  | Operation                   | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                |     1 |   241 |     5  (20)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T_TABLES       |     1 |   241 |     5  (20)| 00:00:01 |
|*  2 |   DOMAIN INDEX              | T_TABLES_DIX02 |       |       |     4   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

   2 - access("CTXSYS"."CONTAINS"("TABLESPACE_NAME",'aaa',1)>0)
NO_DOMAIN_INDEX_SORT
Usage: NO_DOMAIN_INDEX_SORT
Description: Prevents the optimizer from pushing sorting columns to the Composite Domain Index.
----------------------------------------------------------------------------------------------
| Id  | Operation                    | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                |     1 |   241 |     5  (20)| 00:00:01 |
|   1 |  SORT ORDER BY               |                |     1 |   241 |     5  (20)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T_TABLES       |     1 |   241 |     4   (0)| 00:00:01 |
|*  3 |    DOMAIN INDEX              | T_TABLES_DIX02 |       |       |     4   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

   3 - access("CTXSYS"."CONTAINS"("TABLESPACE_NAME",'aaa',1)>0)
DST_UPGRADE_INSERT_CONV
Usage: DST_UPGRADE_INSERT_CONV
Description: With this hint, Oracle adds an internal function (ORA_DST_CONVERT(INTERNAL_FUNCTION())) to convert columns defined as TIMESTAMP WITH TIME ZONE when using the DBMS_DST package to upgrade the time zone of the database.
NO_DST_UPGRADE_INSERT_CONV
Usage: NO_DST_UPGRADE_INSERT_CONV
Description: With this hint, Oracle does not add the internal function (ORA_DST_CONVERT(INTERNAL_FUNCTION())) to convert columns defined as TIMESTAMP WITH TIME ZONE when using the DBMS_DST package to upgrade the time zone of the database.
STREAMS
Usage: STREAMS
Description: Unknown. It might instruct the SQL execution engine to transfer the data in a stream.
DEREF_NO_REWRITE
Usage: DEREF_NO_REWRITE(<@Block>)
Description: Unknown. It might prevent the optimizer from rewriting a Materialized View created with the BUILD DEFERRED option.
MV_MERGE
Usage: MV_MERGE
Description: Unknown. It might be used for CUBE.
EXPR_CORR_CHECK
Usage: EXPR_CORR_CHECK
Description: Unknown. It might instruct the parser to do reference checking when analyzing an Expression Filter.
INCLUDE_VERSION
Usage: INCLUDE_VERSION
Description: Unknown. It can be observed in the internal statements generated by Advanced Replication. It might be used to keep compatibility when replicating data among databases of different versions.
VECTOR_READ
Usage: VECTOR_READ
Description: Unknown. It might be used for Vector Filter in hash join.
VECTOR_READ_TRACE
Usage: VECTOR_READ_TRACE
Description: Unknown. It might be used for Vector Filter in hash join.
USE_WEAK_NAME_RESL
Usage: USE_WEAK_NAME_RESL
Description: Unknown. It might instruct the parser to use the internal name instead of the user-defined name to resolve the resource location. It can be observed in the internal statements generated by statistics gathering and Expression Filter.
Session altered.
... ...
"DEMO"."T_NT_B"
REF_CASCADE_CURSOR
Usage: REF_CASCADE_CURSOR
Description: Unknown. It might be used to prevent the commit of an internal recursive transaction. It can be observed in the internal statements generated by maintenance of tables with nested objects.
Refer to the demo of NO_PARTIAL_COMMIT.
NO_REF_CASCADE
Usage: NO_REF_CASCADE
Description: Unknown. It might prevent the internal recursive statement from using the cascade cursor.
SQLLDR
Usage: SQLLDR
Description: Unknown. It might be used in the internal statements generated by SQL*Loader.
SYS_RID_ORDER
Usage: SYS_RID_ORDER
Description: Unknown. It might be used in the internal statements generated by
maintenance of Materialized View.
OVERFLOW_NOMOVE
Usage: OVERFLOW_NOMOVE
Description: Unknown. It might prevent Oracle from moving the data of another segment when overflow occurs due to partition splitting.
LOCAL_INDEXES
Usage: LOCAL_INDEXES
Description: Unknown
MERGE_CONST_ON
Usage: MERGE_CONST_ON
Description: Unknown
QUEUE_CURR
Usage: QUEUE_CURR
Description: Unknown. It might be used for Advanced Queue.
CACHE_CB
Usage: CACHE_CB([<@Block>] <Table>)
Description: Unknown. It might be used for Advanced Queue.
Tracing the process of DBMS_AQ.DEQUEUE (with delivery_mode PERSISTENT), we got the statement below.