Wednesday, February 23, 2011

Script to Report Index Fragmentation

Execution Environment:
SQL, SQL*Plus

Access Privileges:
Requires SELECT on DBA_IND_COLUMNS, DBA_INDEXES and INDEX_STATS.

Usage:
sqlplus user/

Instructions:
Copy the script to a file and execute it from SQL*Plus.


PROOFREAD THIS SCRIPT BEFORE USING IT! Due to differences in the way text
editors, e-mail packages, and operating systems handle text formatting (spaces,
tabs, and carriage returns), this script may not be in an executable state
when you first receive it. Check over the script to ensure that errors of
this type are corrected.


Description

There are three scripts available to report various information about indexes.


Script 1: Index Fragmentation

TFSIFRAG.SQL provides information critical in determining whether an
index is a candidate for rebuilding. An index is a candidate for
rebuilding when a relatively high number of index leaf row deletes have
occurred.

Please note: This report does not indicate the actual index "balance."
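
Once the report shows a high ratio of deleted leaf rows to leaf rows in use, the index can
be rebuilt. This is only a sketch, using the index name from the sample output below; the
ONLINE keyword is optional:

SQL> alter index scott.s_emp_userid_uk rebuild online;

Index altered.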


Script 2: Index Statistics

TFSISTAT.SQL prints a variety of statistics about the given index.
Statistics include B*tree height, the number of distinct and repeated
keys, the number of branch rows and blocks, the number of leaf rows
and blocks and information about space utilization.

Please note: You will receive an "ORA-01476: divisor is equal to
zero" for an index on a table into which no rows have been inserted.


Script 3: Index Keys for a Table

TFSIKEYS.SQL prints uniqueness, name and column information of indexes for the
given table and owner. The arguments are not case-sensitive, but may not
include wildcards. The script will behave unreliably for an index not owned
by the owner of its base table.


References


Script

==========
Script #1:
==========

SET ECHO off
REM NAME: TFSIFRAG.SQL
REM USAGE:"@path/tfsifrag schema_name index_name"
REM ------------------------------------------------------------------------
REM REQUIREMENTS:
REM SELECT on INDEX_STATS
REM ------------------------------------------------------------------------
REM PURPOSE:
REM Reports index fragmentation statistics
REM ------------------------------------------------------------------------
REM EXAMPLE:
REM Index Fragmentation Statistic
REM
REM index name S_EMP_USERID_UK
REM leaf rows deleted 0
REM leaf rows in use 25
REM index badness 0.000
REM
REM ------------------------------------------------------------------------
REM Main text of script follows:
set verify off
def ownr = &&1
def name = &&2

ttitle -
center 'Index Fragmentation Statistic' skip 2

set heading off

col name newline
col lf_blk_rows newline
col del_lf_rows newline
col ibadness newline

validate index &ownr..&name;

select
'index name '||name,
'leaf rows deleted '||to_char(del_lf_rows,'999,999,990') del_lf_rows,
'leaf rows in use '||to_char(lf_rows-del_lf_rows,'999,999,990') lf_blk_rows,
'index badness '||to_char(del_lf_rows/(lf_rows+0.00001),'999,990.999') ibadness
from
index_stats
/

undef ownr
undef name
set verify on


==============
Sample Output:
==============

Index Fragmentation Statistic


index name S_EMP_USERID_UK
leaf rows deleted 0
leaf rows in use 25
index badness 0.000




==========
Script #2:
==========

SET ECHO off
REM NAME: TFSISTAT.SQL
REM USAGE:"@path/tfsistat schema_name index_name"
REM ------------------------------------------------------------------------
REM REQUIREMENTS:
REM SELECT on INDEX_STATS
REM ------------------------------------------------------------------------
REM PURPOSE:
REM Report index statistics.
REM ------------------------------------------------------------------------
REM EXAMPLE:
REM Index Statistics
REM
REM S_EMP_USERID_UK
REM ----------------------------------------------------------
REM height 1
REM blocks 5
REM del_lf_rows 0
REM del_lf_rows_len 0
REM distinct_keys 25
REM most_repeated_key 1
REM btree_space 1,876
REM used_space 447
REM pct_used 24
REM rows_per_key 1
REM blks_gets_per_access 2
REM lf_rows 25 br_rows 0
REM lf_blks 1 br_blks 0
REM lf_rows_len 447 br_rows_len 0
REM lf_blk_len 1,876 br_blk_len 0
REM
REM ------------------------------------------------------------------------
REM Main text of script follows:
set verify off
def ownr = &&1
def name = &&2

ttitle -
center 'Index Statistics' skip 2

set heading off

col name newline
col headsep newline
col height newline
col blocks newline
col lf_rows newline
col lf_blks newline
col lf_rows_len newline
col lf_blk_len newline
col br_rows newline
col br_blks newline
col br_rows_len newline
col br_blk_len newline
col del_lf_rows newline
col del_lf_rows_len newline
col distinct_keys newline
col most_repeated_key newline
col btree_space newline
col used_space newline
col pct_used newline
col rows_per_key newline
col blks_gets_per_access newline

validate index &ownr..&name;

select
name,
'----------------------------------------------------------' headsep,
'height '||to_char(height, '999,999,990') height,
'blocks '||to_char(blocks, '999,999,990') blocks,
'del_lf_rows '||to_char(del_lf_rows,'999,999,990') del_lf_rows,
'del_lf_rows_len '||to_char(del_lf_rows_len,'999,999,990') del_lf_rows_len,
'distinct_keys '||to_char(distinct_keys,'999,999,990') distinct_keys,
'most_repeated_key '||to_char(most_repeated_key,'999,999,990') most_repeated_key,
'btree_space '||to_char(btree_space,'999,999,990') btree_space,
'used_space '||to_char(used_space,'999,999,990') used_space,
'pct_used '||to_char(pct_used,'990') pct_used,
'rows_per_key '||to_char(rows_per_key,'999,999,990') rows_per_key,
'blks_gets_per_access '||to_char(blks_gets_per_access,'999,999,990') blks_gets_per_access,
'lf_rows '||to_char(lf_rows, '999,999,990')||' '||
'br_rows '||to_char(br_rows, '999,999,990') br_rows,
'lf_blks '||to_char(lf_blks, '999,999,990')||' '||
'br_blks '||to_char(br_blks, '999,999,990') br_blks,
'lf_rows_len '||to_char(lf_rows_len,'999,999,990')||' '||
'br_rows_len '||to_char(br_rows_len,'999,999,990') br_rows_len,
'lf_blk_len '||to_char(lf_blk_len, '999,999,990')||' '||
'br_blk_len '||to_char(br_blk_len, '999,999,990') br_blk_len
from
index_stats
/

undef ownr
undef name
set verify on


==============
Sample Output:
==============

Index Statistics
S_EMP_USERID_UK
----------------------------------------------------------
height 1
blocks 5
del_lf_rows 0
del_lf_rows_len 0
distinct_keys 25
most_repeated_key 1
btree_space 1,876
used_space 447
pct_used 24
rows_per_key 1
blks_gets_per_access 2
lf_rows 25
br_rows 0
lf_blks 1
br_blks 0
lf_rows_len 447
br_rows_len 0
lf_blk_len 1,876
br_blk_len 0




==========
Script #3:
==========

SET ECHO off
REM NAME: TFSIKEYS.SQL
REM USAGE:"@path/tfsikeys idx_owner table_name"
REM ------------------------------------------------------------------------
REM REQUIREMENTS:
REM SELECT on DBA_IND_COLUMNS and DBA_INDEXES
REM ------------------------------------------------------------------------
REM PURPOSE:
REM Shows the index keys for a particular table.
REM ------------------------------------------------------------------------
REM EXAMPLE:
REM Index Keys Summary
REM
REM Uniqueness Index Name Column Name
REM ---------- ---------------------------------------- ------------------
REM UNIQUE SCOTT.S_EMP_ID_PK ID
REM
REM UNIQUE SCOTT.S_EMP_USERID_UK USERID
REM
REM ------------------------------------------------------------------------
REM Main text of script follows:
set verify off
def ixowner = &&1
def tabname = &&2

ttitle -
center 'Index Keys Summary' skip 2

col uniq format a10 heading 'Uniqueness' justify c trunc
col indname format a40 heading 'Index Name' justify c trunc
col colname format a25 heading 'Column Name' justify c trunc

break -
on indname skip 1 -
on uniq

select
ind.uniqueness uniq,
ind.owner||'.'||col.index_name indname,
col.column_name colname
from
dba_ind_columns col,
dba_indexes ind
where
ind.owner = upper('&ixowner')
and
ind.table_name = upper('&tabname')
and
col.index_owner = ind.owner
and
col.index_name = ind.index_name
order by
col.index_name,
col.column_position
/

undef ixowner
undef tabname
set verify on


==============
Sample Output:
==============


Index Keys Summary


Uniqueness Index Name Column Name
---------- ---------------------------------------- ----------------------
UNIQUE SCOTT.S_EMP_ID_PK ID

UNIQUE SCOTT.S_EMP_USERID_UK USERID



Disclaimer :

EXCEPT WHERE EXPRESSLY PROVIDED OTHERWISE, THE INFORMATION AND SOFTWARE ARE
PROVIDED ON AN "AS IS" AND "AS AVAILABLE" BASIS. ORACLE EXPRESSLY DISCLAIMS
ALL WARRANTIES OF ANY KIND, WHETHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE AND NON-INFRINGEMENT. ORACLE MAKES NO WARRANTY THAT: (A) THE RESULTS
THAT MAY BE OBTAINED FROM THE USE OF THE SOFTWARE WILL BE ACCURATE OR
RELIABLE; OR (B) THE INFORMATION, OR OTHER MATERIAL OBTAINED WILL MEET YOUR
EXPECTATIONS. ANY CONTENT, MATERIALS, INFORMATION OR SOFTWARE DOWNLOADED OR
OTHERWISE OBTAINED IS DONE AT YOUR OWN DISCRETION AND RISK. ORACLE SHALL HAVE
NO RESPONSIBILITY FOR ANY DAMAGE TO YOUR COMPUTER SYSTEM OR LOSS OF DATA THAT
RESULTS FROM THE DOWNLOAD OF ANY CONTENT, MATERIALS, INFORMATION OR SOFTWARE.

ORACLE RESERVES THE RIGHT TO MAKE CHANGES OR UPDATES TO THE SOFTWARE AT ANY
TIME WITHOUT NOTICE.


Limitation of Liability :

IN NO EVENT SHALL ORACLE BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL OR CONSEQUENTIAL DAMAGES, OR DAMAGES FOR LOSS OF PROFITS, REVENUE,
DATA OR USE, INCURRED BY YOU OR ANY THIRD PARTY, WHETHER IN AN ACTION IN
CONTRACT OR TORT, ARISING FROM YOUR ACCESS TO, OR USE OF, THE SOFTWARE.

SOME JURISDICTIONS DO NOT ALLOW THE LIMITATION OR EXCLUSION OF LIABILITY.
ACCORDINGLY, SOME OF THE ABOVE LIMITATIONS MAY NOT APPLY TO YOU.

Script to Report Table Fragmentation

Note: There are three scripts, run in sequence, to capture table fragmentation; each script has its own responsibility.

First script: TFSLDTFR.SQL gathers table fragmentation characteristics and
inserts them into the newly created TFRAG table for subsequent reporting.


Second script: TFSTFRAG.SQL displays summary table fragmentation information.

Third script: TFSDTFRG.SQL produces a detailed report of table fragmentation
characteristics based on the data in the TFRAG table.



*********************First Script *****************************

The first script, TFSLDTFR.SQL, gathers table fragmentation characteristics
and inserts them into the newly created TFRAG table for subsequent reporting.

TFSLDTFR performs a single "analyze table 't' compute statistics" command.
You may want to change the compute to estimate for speed and usability.
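
For example, the ANALYZE inside the script could be changed to an estimate (the 10 percent
sample below is only an illustrative figure); on newer releases DBMS_STATS is the preferred
interface:

analyze table &towner..&tname estimate statistics sample 10 percent
/
rem or, using the supplied package:
rem exec dbms_stats.gather_table_stats('&towner', '&tname', estimate_percent => 10)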

Currently the following characteristics are gathered:

- Owner of table.
- Name of table
- Number of data blocks with rows
- Number of table extents
- Number of chained rows
- Number of blocks that have ever contained a row (high water mark)

Scripts two and three display table fragmentation information. The
information is queried from the TFRAG table which is created during
script one.




******************* Script *********************************


=============
First Script:
=============

SET ECHO off
REM NAME: TFSLDTFR.SQL
REM USAGE:"@path/tfsldtr table_owner table_name"
REM ------------------------------------------------------------------------
REM REQUIREMENTS:
REM ANALYZE on table, SELECT on DBA_TABLES, DBA_SEGMENTS, DBA_EXTENTS
REM ------------------------------------------------------------------------
REM AUTHOR:
REM Craig A. Shallahamer, Oracle US
REM ------------------------------------------------------------------------
REM PURPOSE:
REM Load the tfrag table with a given table's fragmentation stats.
REM ------------------------------------------------------------------------
REM EXAMPLE:
REM N/A
REM ------------------------------------------------------------------------
REM DISCLAIMER:
REM This script is provided for educational purposes only. It is NOT
REM supported by Oracle World Wide Technical Support.
REM The script has been tested and appears to work as intended.
REM You should always run new scripts on a test instance initially.
REM ------------------------------------------------------------------------
REM Main text of script follows:

set feedback on
set echo on
set verify off

def towner=&1
def tname=&2

rem *******************************************************************
rem * Goal: Analyze table to gather statistics
rem *******************************************************************
rem Specifically we are looking for:
rem - blocks that contain rows (dba_tables.blocks)
rem - blocks ABOVE the hwm, i.e. empty blocks (dba_tables.empty_blocks)

analyze table &towner..&tname compute statistics
/
col val1 new_value blks_w_rows noprint
col val2 new_value blks_above noprint
select blocks val1,
empty_blocks val2
from dba_tables
where owner = upper('&towner') and
table_name = upper('&tname')
/
rem *******************************************************************
rem * Goal: Get the number of blocks allocated to the segment
rem *******************************************************************
rem Specifically we are looking for:
rem - allocated blocks dba_segments.blocks

col val1 new_value alloc_blocks noprint
select blocks val1
from dba_segments
where owner = upper('&towner') and
segment_name = upper('&tname')
/

rem *******************************************************************
rem * Goal: Calculate the HWM
rem *******************************************************************
rem Specifically we are looking for:
rem HWM = dba_segments.blocks - dba_tables.empty_blocks - 1
rem HWM = allocated blocks - blocks above the hwm - 1
col val1 new_value hwm noprint
select &alloc_blocks-&blks_above-1 val1
from dual
/

rem *******************************************************************
rem * Goal: Get the Number of Fragmented Rows or Chained Rows (cr)
rem *******************************************************************
col val1 new_value cr noprint
select chain_cnt val1
from dba_tables
where owner = upper('&towner') and
table_name = upper('&tname')
/

rem ***********************************************************
rem * Goal : Determine the Segment Fragmentation (sf)
rem ***********************************************************
col val1 new_val sf noprint
select count(*) val1
from dba_extents
where owner = upper('&towner') and
segment_name = upper('&tname')
/
rem ***********************************************************
rem ***********************************************************
rem * Load the TFRAG table with the just gathered information.
rem ***********************************************************
rem ***********************************************************
rem *
rem * Drop and re-create the tfrag table (the DROP fails harmlessly on the first run).
rem *
drop table tfrag;

create table tfrag
(
owner char(30),
name char(30),
hwm number,
blks_w_rows number,
avg_row_size number,
possible_bytes_per_block number,
no_frag_rows number,
no_extents number
)
/
create unique index tfrag_u1 on tfrag (owner,name)
/
rem *
rem * Delete and insert the new stats.
rem *
delete
from tfrag
where owner='&towner' and
name='&tname'
/
insert into tfrag values
('&towner','&tname',&hwm,&blks_w_rows,0,0,&cr,&sf)
/
commit;

set echo off
set verify on



*************************Second Script*********************************


SET ECHO off
REM NAME: TFSTFRAG.SQL
REM USAGE:"@path/tfstfrag"
REM ------------------------------------------------------------------------
REM REQUIREMENTS:
REM SELECT on TFRAG
REM ------------------------------------------------------------------------
REM AUTHOR:
REM Craig A. Shallahamer, Oracle USA
REM ------------------------------------------------------------------------
REM PURPOSE:
REM This script displays summary table fragmentation information. The
REM information is queried from the tfrag table which is loaded via the
REM ldtfrag script. Once the ldtfrag script has been run for a given
REM table, this report displays the following information:
REM
REM - Table owner
REM - Table name
REM - Segment fragmentation (number of extents)
REM - Number of table rows
REM - Table block fragmentation (1.0 bad, 0.0 good)
REM - Row fragmentation (chains)
REM ------------------------------------------------------------------------
REM EXAMPLE:
REM Table Fragmentation Characteristics
REM
REM Owner Table Name Exts Omega1 Chains
REM -------- ---------------------------------------- ---- ------ -------
REM scott s_emp 1 0.000 0
REM ------------------------------------------------------------------------
REM DISCLAIMER:
REM This script is provided for educational purposes only. It is NOT
REM supported by Oracle World Wide Technical Support.
REM The script has been tested and appears to work as intended.
REM You should always run new scripts on a test instance initially.
REM ------------------------------------------------------------------------
REM Main text of script follows:

col towner heading 'Owner' format a8 trunc
col tname heading 'Table Name' format a40 trunc
col exts heading 'Exts' format 999 trunc
col omega1 heading 'Omega1' format 0.999 trunc
col chains heading 'Chains' format 99,990 trunc

ttitle -
center 'Table Fragmentation Characteristics' skip 2

select owner towner,
name tname,
no_extents exts,
(hwm - blks_w_rows)/(hwm + 0.0001) omega1,
no_frag_rows chains
from tfrag
order by 1,2
/



===========================================
Sample Output from the tfstfrag.sql script:
===========================================


Table Fragmentation Characteristics


Owner Table Name Exts Omega1 Chains
-------- ---------------------------------------- ---- ------ -------
scott    s_emp                                       1  0.000       0



*******************Third Script ************************************

SET ECHO off
REM NAME: TFSDTFRG.SQL
REM USAGE:"@path/tfsdtfrg"
REM ------------------------------------------------------------------------
REM REQUIREMENTS:
REM SELECT on TFRAG table created by TFSLDTFR.SQL
REM ------------------------------------------------------------------------
REM AUTHOR:
REM Craig A. Shallahamer, Oracle US
REM ------------------------------------------------------------------------
REM PURPOSE:
REM Detailed report of table fragmentation characteristics based on the
REM data in the tfrag table.
REM ------------------------------------------------------------------------
REM EXAMPLE:
REM Detailed Table Fragmentation Characteristics
REM
REM
REM Table Owner : scott
REM Name : s_emp
REM Extents : 1
REM High water mark : 1
REM Blocks with rows : 1
REM Block frag: Omega1 : 0
REM Migrated rows : 0
REM
REM ------------------------------------------------------------------------
REM DISCLAIMER:
REM This script is provided for educational purposes only. It is NOT
REM supported by Oracle World Wide Technical Support.
REM The script has been tested and appears to work as intended.
REM You should always run new scripts on a test instance initially.
REM ------------------------------------------------------------------------
REM Main text of script follows:

col towner format a70
col tname format a70
col exts format 999
col omega1 format 90.9999
col chains format 99,990
col rpb format 999
col hwm format 9,999,999
col bwr format 9,999,999

ttitle -
center 'Detailed Table Fragmentation Characteristics' skip 2

set heading off

select 'Table Owner : '||owner towner,
' Name : '||name tname,
'Extents : '||no_extents exts,
'High water mark : '||hwm hwm,
'Blocks with rows : '||blks_w_rows bwr,
'Block frag: Omega1 : '||(hwm - blks_w_rows)/(hwm + 0.0001) omega1,
'Migrated rows : '||no_frag_rows chains
from tfrag
order by 1,2
/

set heading on


===========================================
Sample Output from the tfsdtfrg.sql script:
===========================================


Detailed Table Fragmentation Characteristics


Table Owner : scott
Name : s_emp
Extents : 1
High water mark : 1
Blocks with rows : 1
Block frag: Omega1 : 0
Migrated rows : 0

DBAs must be aware that unused indexes:

SCOPE :

DBAs must be aware that unused indexes:

* consume storage space

* degrade performance by adding unnecessary overhead during DML operations


How to Identify Unused Indexes in Order to Remove Them:
=======================================================
The examples below work if the logged-in user is the owner of the index.

1. Set the index under MONITORING:

SQL> alter index I_EMP monitoring usage;
Index altered.

SQL> select * from v$object_usage;

INDEX_NAME TABLE_NAME MON USED START_MONITORING END_MONITORING
--------------- --------------- --- ---- ------------------- -------------------
I_EMP EMP YES NO 03/14/2001 09:17:19


2. Check whether the monitored indexes are used through the USED column of the
V$OBJECT_USAGE view:

SQL> select sal from emp where ename='SMITH';

SAL
----------
800


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP'
2 1 INDEX (RANGE SCAN) OF 'I_EMP' (NON-UNIQUE)


The explain plan indicates the index is used.


SQL> select * from v$object_usage;

INDEX_NAME TABLE_NAME MON USED START_MONITORING END_MONITORING
--------------- --------------- --- ---- ------------------- -------------------
I_EMP EMP YES YES 03/14/2001 09:17:19


If the index was not used, the DBA could drop this unused index.


3. To stop monitoring an index, use the following SQL statement:

SQL> alter index i_emp nomonitoring usage;
Index altered.

SQL> select * from v$object_usage;

INDEX_NAME TABLE_NAME MON USED START_MONITORING END_MONITORING
--------------- --------------- --- ---- ------------------- -------------------
I_EMP EMP NO YES 03/14/2001 09:17:19 03/14/2001 09:55:24


As soon as you turn monitoring on again for the index, both columns
MONITORING and USED are reset.


SQL> alter index i_emp monitoring usage;
Index altered.

SQL> select * from v$object_usage;

INDEX_NAME TABLE_NAME MON USED START_MONITORING END_MONITORING
--------------- --------------- --- ---- ------------------- -------------------
I_EMP EMP YES NO 03/14/2001 09:57:27

Sunday, February 13, 2011

The Three Sets of Data Dictionary Views: USER_, ALL_, DBA_

Oracle doesn’t allow you to access the internal data dictionary tables directly.
It creates views on the base dictionary tables and creates public synonyms for these views so users can access them.

There are three sets of data dictionary views (USER, ALL, and DBA), and the corresponding views in each set cover the same kinds of objects and expose similar columns.


The views in each category are prefixed by the keywords USER, ALL, or DBA.


Each of these sets of views shows only the information the user is granted privileges
to access, as follows:

*** USER: The USER views show a user only those objects that the user owns. These views are useful to users, especially developers, for viewing the owner’s objects, grants, and so on.


*** ALL: The ALL views show you information about objects for which you have been granted privileges. The views with the prefix ALL include information on the user’s objects, and all other objects on which privileges have been granted, directly or through a role.


*** DBA: The DBA views are the most powerful in their range. Users who have been assigned the DBA role can access information about any object or any user in the database.

The DBA-prefixed dictionary views are the ones you use to monitor and administer the database.
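
A quick way to see the difference in scope is to run the same count against each set of views
(querying DBA_TABLES requires the appropriate privileges, for example the DBA role):

SQL> select count(*) from user_tables;   -- tables you own
SQL> select count(*) from all_tables;    -- tables you can access
SQL> select count(*) from dba_tables;    -- all tables in the database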

Who is using temporary tablespace :

The script below reports the allocated, free, and used space for each tablespace. Note that it reads DBA_DATA_FILES and DBA_FREE_SPACE, so tempfile-based temporary tablespaces are not listed.

set pagesize 1000
select a.tablespace_name, trunc(b.allocated)Allocated, trunc(a.free) Free, trunc(b.allocated-a.free) USED,
(100-trunc(((b.allocated-a.free)*100)/b.allocated)) Percentfree from
(select tablespace_name, sum(bytes)/1024/1024 free from sys.dba_free_space group by tablespace_name) a,
(select sum(bytes)/1024/1024 allocated, tablespace_name from dba_data_files group by tablespace_name ) b
where a.tablespace_name = b.tablespace_name
union
select 'Total', trunc( sum(b.allocated)), trunc(sum(a.free)), trunc(sum((b.allocated-a.free))),
00000 from
(select tablespace_name, sum(bytes)/1024/1024 free from sys.dba_free_space group by tablespace_name) a,
(select sum(bytes)/1024/1024 allocated, tablespace_name from dba_data_files group by tablespace_name ) b
where a.tablespace_name = b.tablespace_name
order by 2;


**************Output given below *******************************

TABLESPACE_NAME ALLOCATED FREE USED PERCENTFREE
------------------------------ ---------- ---------- ---------- -----------
USERS 5 1 3 35
EXAMPLE 100 22 77 23
TEST_TS 100 99 0 100
UNDOTBS1 160 136 23 86
SYSAUX 330 10 319 4
SYSTEM 490 4 485 1
Total 1185 275 909 0

7 rows selected.
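
The script above covers space in the permanent tablespaces. To see which sessions are
currently consuming temporary space, a query along the following lines can be used; this is
a sketch assuming the V$TEMPSEG_USAGE view (named V$SORT_USAGE in older releases):

select s.sid, s.serial#, s.username, u.tablespace, u.segtype, u.blocks
from   v$session s, v$tempseg_usage u
where  s.saddr = u.session_addr
order by u.blocks desc;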


***************

Note :

From 10g onwards you can connect to a database without a tnsnames.ora entry by using the easy connect syntax:

sqlplus username/pwd@//hostname:1521/sidname

Size of Table , Tablespace used and free space

Size of the table in Bytes ==>>


To find the number of bytes used by and allocated to a table:==>>>

sql > analyze table test compute statistics;


sql > select num_rows * avg_row_len "Bytes Used" from dba_tables where table_name = 'TEST';


Bytes Used
-----------
560

sql > select bytes "Bytes Allocated" from dba_segments where segment_name = 'TEST';


Bytes Allocated
----------
524288

The result of the query shows that the test table is using 560 bytes of the 524,288 bytes allocated to it.



**** you can use USER_TABLES and USER_SEGMENTS too, if you don't have DBA privileges ******
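
For instance, the same allocated-bytes check can be run by the table owner without the DBA
views (a sketch, using the same table name as above):

sql > select bytes "Bytes Allocated" from user_segments where segment_name = 'TEST';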



************** Size of Tablespace used and Free space *****************

Size of all Tablespace in your database in GB:==>>

SYS@MOB1BT> select tablespace_name,sum(bytes)/(1024*1024*1024) "GB" from dba_data_files group by tablespace_name order by 1;


TABLESPACE_NAME GB
------------------------------ ----------
AUDIT1 1.953125
GENUSER .48828125
HUGE_DATA 7.20000458
LARGE_DATA 7.20000458
LARGE_INDX 2.734375
SYSAUX 4.8828125
SYSTEM .48828125
TOOLS .48828125
UNDOTBS .48828125
USERS 3.6000061
USER_INDX 4.1015625

11 rows selected.


Size of all Tablespace in your database free space in GB:==>>


SYS@MOB1BT> select tablespace_name,sum(bytes)/(1024*1024*1024) from dba_free_space group by tablespace_name order by 1;

TABLESPACE_NAME SUM(BYTES)/(1024*1024*1024)
------------------------------ ---------------------------
AUDIT1 1.95269775
GENUSER .488098145
HUGE_DATA 7.19992065
LARGE_DATA 7.19992065
LARGE_INDX 2.73117065
SYSAUX 4.64660645
SYSTEM .116516113
TOOLS .488220215
UNDOTBS .424743652
USERS .228088379
USER_INDX 3.72009277

11 rows selected.

Wednesday, February 9, 2011

RMAN Incremental Backups to Refresh a Standby Database

You can create an incremental backup of the target database containing changes to the database since the creation of the duplicate or the previous synchronization.
You can apply the incremental backup to the standby database.


Note: This technique cannot be used to update a duplicate database.

RMAN enables you to synchronize a standby database with a primary database by creating an incremental backup at the source database that contains all changed blocks since the duplicate was created or last refreshed. You then apply the incremental backup to the standby database, which updates it with all changes.

This capability facilitates the temporary conversion of a physical standby database into a reporting database, as described in Oracle Data Guard Concepts and Administration. In particular, this capability makes it possible to reverse the effects of converting the standby into a reporting database.

After the standby database has been used for reporting or testing, Flashback Database can reverse any changes resulting from that work, returning the database to its contents when it was still a standby. An incremental backup created with BACKUP INCREMENTAL... FROM SCN can be used to refresh the standby with changes at the primary since the conversion, and then managed recovery can resume. The effect is to return the reporting database to its role as standby.




Using BACKUP INCREMENTAL... FROM SCN

The incremental backup is created at the source database by means of the BACKUP INCREMENTAL FROM SCN=n form of the BACKUP command. For example:

BACKUP DEVICE TYPE SBT INCREMENTAL FROM SCN 750923 DATABASE;
BACKUP INCREMENTAL FROM SCN 750923 DATABASE;
BACKUP DEVICE TYPE DISK INCREMENTAL FROM SCN 750983 DATABASE
FORMAT '/tmp/incr_standby_%U';


RMAN uses the selected SCN as the basis for this incremental backup. For all files being backed up, RMAN includes all data blocks that were changed at SCNs greater than or equal to the FROM SCN in the incremental backup.



Note:

* RMAN does not consider the incremental backup as part of a backup strategy at the source database. The backup is not suitable for use in a normal RECOVER DATABASE operation at the source database.


*
The backup sets produced by this command are written to ?/dbs by default, even if the flash recovery area or some other backup destination is defined as the default for disk backups.


* You must create this incremental backup on disk for it to be useful. When you move the incremental backup to the standby, you must catalog it at the standby as described in "Step 3: Catalog the Incremental Backup Files at the Standby Database". Backups on tape cannot be cataloged.



Refreshing a Standby Database With INCREMENTAL FROM SCN Backups: Example

This example shows the steps required to update a standby database using incremental backups. The assumption is that you have already activated the standby, performed your tests or other operations at the standby, and then used Flashback Database to undo the effects of those changes. The task here is to refresh the standby with the latest changes to the primary, so that it can resume its role as a standby database.
Step 1: Create the Incremental Backup

Create the needed incremental backup at the source database, using BACKUP with the INCREMENTAL FROM SCN clause.

Assume that the incremental backup to be used in updating the duplicate database is to be created on disk, with the filenames for backup pieces determined by the format /tmp/incr_for_standby/bkup_%U.

RMAN> BACKUP DEVICE TYPE DISK INCREMENTAL FROM SCN 750983 DATABASE
FORMAT '/tmp/incr_for_standby/bkup_%U';


Step 2: Make the Incremental Backup Accessible at the Standby Database

Make the backup pieces containing the incremental backup available in some directory accessible on the system containing the standby database. For this example, assume that the destination directory is called /standbydisk1/incrback/ and ensure that it contains nothing besides the incremental backups from Step 1.


Step 3: Catalog the Incremental Backup Files at the Standby Database

Use the RMAN CATALOG command to register the backup sets in the RMAN repository at the duplicate. With an RMAN client connected to the standby database and the recovery catalog (if you use one at the standby), mount the standby and run the following command:

RMAN> CATALOG START WITH '/standbydisk1/incrback/';


The backups are now available for use in recovery of the standby.


Step 4: Apply the Incremental Backup to the Standby Database

Use the RMAN RECOVER command with the NOREDO option to apply the incremental backup to the standby database. All changed blocks captured in the incremental backup are updated at the standby database, bringing it up to date with the primary database. With an RMAN client connected to the standby database, run the following command:

RMAN> RECOVER DATABASE NOREDO;


You can now resume managed recovery at the standby. Any redo logs required at the standby with changes since those contained in the incremental are automatically requested from the primary and applied.
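
For example, managed recovery is typically restarted on the standby with a statement like
the following (shown as a sketch; USING CURRENT LOGFILE can be added for real-time apply
if standby redo logs are configured):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;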

Tuesday, February 8, 2011

RMAN FAQ

Questions and Answers


RMAN Restore :
RMAN Recovery :
RMAN Duplicate & Cloning :
RMAN Maintenance :
RMAN and ASM :
RMAN and RAC :
RMAN and Dataguard :
RMAN and Logical Standby database :
RMAN and Third party MML :
General RMAN Information :

What is RMAN and How to configure it ?
Why to use RMAN ?
How RMAN works ?
What O/S and Oracle user privileges are required to use RMAN ?
RMAN terminology

Q.What is RMAN and How to configure it ?
A.RMAN is an Oracle Database client that performs backup and recovery tasks on your databases and automates administration of your backup strategies. It greatly simplifies the DBA's job by managing the backup, restore, and recovery of the production database's files.

This tool integrates with sessions running on an Oracle database to perform a range of backup and recovery activities, including maintaining an RMAN repository of historical data about backups. There is no additional installation required for this tool; it is installed by default with the Oracle database installation. The RMAN environment consists of the utilities and databases that play a role in backing up your data. You can access RMAN through the command line or through Oracle Enterprise Manager.
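
For example, connecting to a local target database with operating system authentication, or
to a remote target plus a recovery catalog, looks like this (the host, service names and
passwords below are placeholders):

$ rman target /
$ rman target sys/password@orcl catalog rman/rman@catdb
RMAN> show all;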

Q.Why to use RMAN ?
A.RMAN gives you access to several backup and recovery techniques and features not available with user-managed backup and recovery. The most noteworthy are the following:

-- Automatic specification of files to include in a backup : Establishes the name and locations of all files to be backed up.

-- Maintain backup repository : Backups are recorded in the control file, which is the main repository of RMAN metadata. Additionally, you can store this metadata in a recovery catalog, a schema in a separate database.

-- Incremental backups : An incremental backup stores only blocks changed since a previous backup. Thus, they provide more compact backups and faster recovery, thereby reducing the need to apply redo during datafile media recovery (see the example after this list).

-- Unused block compression : In unused block compression, RMAN can skip data blocks that have never been used.

-- Block media recovery : You can repair a datafile with only a small number of corrupt data blocks without taking it offline or restoring it from backup.

-- Binary compression : A binary compression mechanism integrated into Oracle Database reduces the size of backups.

-- Encrypted backups : RMAN uses backup encryption capabilities integrated into Oracle Database to store backup sets in an encrypted format.

-- Corrupt block detection : RMAN checks for the block corruption before taking its backup.
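
A minimal illustration of the incremental backup feature mentioned above (level 0 is the
baseline; level 1 stores only blocks changed since then):

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;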



Q.How RMAN works ?
A.RMAN backup and recovery operations for a target database are managed by the RMAN client. RMAN uses the target database control file to gather metadata about the target database and to store information about its own operations. The RMAN client itself does not perform backup, restore, or recovery operations. When you connect the RMAN client to a target database, RMAN allocates server sessions on the target instance and directs them to perform the operations. The work of backup and recovery is performed by server sessions running on the target database. A channel establishes a connection from the RMAN client to a target or auxiliary database instance by starting a server session on the instance. The channel reads data into memory, processes it, and writes it to the output device.

When you take a database backup using RMAN, you need to connect to the target database using the RMAN client. The RMAN client can use Oracle Net to connect to a target database, so it can be located on any host that is connected to the target host through Oracle Net. For a backup you need to allocate explicit or implicit channels to the target database. An RMAN channel represents one stream of data to a device, and corresponds to one database server session. This session dynamically collects file information from the target database control file before taking the backup or while restoring.

For example, if you issue 'BACKUP DATABASE' from RMAN, it will first get all the datafile information from the controlfile. Then it will divide the datafiles among the allocated channels (roughly equal amounts of work, based on datafile size). The backup then happens in two steps. In the first step the channel reads all the blocks of the entire datafile to find the formatted blocks to back up. Note: RMAN does not back up unformatted blocks. In the second step it backs up the formatted blocks. This is a major advantage of RMAN, as it backs up only the required blocks. For example, if a 100 MB datafile contains only 10 MB of useful data and the remaining 90 MB is free, RMAN will back up only those 10 MB.
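
As a sketch of the above, a backup with two explicitly allocated disk channels looks like
this; RMAN splits the datafiles between the two server sessions:

RMAN> RUN {
        ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
        ALLOCATE CHANNEL c2 DEVICE TYPE DISK;
        BACKUP DATABASE;
      }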

Q.What O/S and Oracle user privileges are required to use RMAN ?
A.RMAN always connects to the target or auxiliary database using the SYSDBA privilege. In fact the SYSDBA keyword is implied and cannot be explicitly specified. Its connections to a database are specified and authenticated in the same way as SQL*Plus connections to a database.

The O/S user should be part of the DBA group. For remote connections, password file authentication is needed; the target database should have the initialization parameter REMOTE_LOGIN_PASSWORDFILE set to EXCLUSIVE or SHARED.
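
For example, the parameter can be checked from SQL*Plus and a password file created with
the orapwd utility (the file name and password below are placeholders):

SQL> show parameter remote_login_passwordfile

$ orapwd file=$ORACLE_HOME/dbs/orapwORCL password=secret entries=5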

Q.RMAN terminology :

A target database : An Oracle database to which RMAN is connected with the TARGET keyword. A target database is a database on which RMAN is performing backup and recovery operations. RMAN always maintains metadata about its operations on a database in the control file of the database.

A recovery Catalog : A separate database schema used to record RMAN activity against one or more target databases. A recovery catalog preserves RMAN repository metadata if the control file is lost, making it much easier to restore and recover following the loss of the control file. The database may overwrite older records in the control file, but RMAN maintains records forever in the catalog unless deleted by the user.

Backup sets : RMAN can store backup data in a logical structure called a backup set, which is the smallest unit of an RMAN backup. One backup set contains one or more datafiles, a section of a datafile, or archivelogs.

Backup Piece : A backup set contains one or more binary files in an RMAN-specific format. This file is known as a backup piece. Each backup piece is a single output file. The size of a backup piece can be restricted; if the size is not restricted, the backup set will comprise one backup piece. Backup piece size should be restricted to no larger than the maximum file size that your filesystem will support.

Image copies : An image copy is a copy of a single file (datafile, archivelog, or controlfile). It is very similar to an O/S copy of the file. It is not a backupset or a backup piece. No compression is performed.

Snapshot Controlfile : When RMAN needs to resynchronize from a read-consistent version of the control file, it creates a temporary snapshot control file. The default name for the snapshot control file is port-specific.

Database Incarnation : Whenever you perform incomplete recovery or perform recovery using a backup control file, you must reset the online redo logs when you open the database. The new version of the reset database is called a new incarnation. The reset database command directs RMAN to create a new database incarnation record in the recovery catalog. This new incarnation record indicates the current incarnation.
RMAN Configuration :
What is RMAN Configuration and how to Configure it ?
How to check RMAN Configuration ?
How to Reset the default Configuration ?


Q.What is RMAN Configuration and how to Configure it ?
A.The RMAN backup and recovery environment is preconfigured for each target database. The configuration is persistent and applies to all subsequent operations on this target database, even if you exit and restart RMAN. RMAN configured settings can specify backup devices, configure connections to backup devices (channels), set policies affecting backup strategy, choose the encryption algorithm, set the snapshot controlfile location, and more.

A number of default configuration settings are in place when you log in to RMAN. You can customize them as per your requirement. At any time you can check the current settings by using the SHOW ALL command. The CONFIGURE command is used to create persistent settings in the RMAN environment, which apply to all subsequent operations, even if you exit and restart RMAN. For details of the configuration kindly refer to Note <<305565.1>>
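
A few typical CONFIGURE commands are shown below as an illustration (the values are only
examples):

RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/backup/%U';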


Q.How to check RMAN Configuration ?
RMAN>Show all;

Q.How to Reset to default Configuration ?

A.To reset the default configuration settings, connect to the target database from SQL*Plus and run:

SQL> connect @target_database;
SQL> execute dbms_backup_restore.resetConfig;
RMAN Catalog Database

What is Catalog database and How to Configure it ?
How Many catalog database I can have ?
Is it mandatory to use a catalog database ?
What is the advantage of catalog database ?
What is the difference between catalog database and catalog schema ?
Catalog database compatibility matrix with the target database ?
What happen if catalog database lost ?

Q. What is Catalog database and How to Configure it ?
A. This is a separate database which contains the catalog schema. You can use the target database itself as the catalog database, but that is not recommended at all. For more details on the recovery catalog kindly refer to Note 452529.1 : Recovery catalog for RMAN backup
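
As a rough sketch, setting up a recovery catalog involves creating a catalog owner in the
catalog database, creating the catalog, and registering the target (the user name, password,
tablespace and connect strings below are placeholders):

SQL> create user rman identified by rman default tablespace rman_ts quota unlimited on rman_ts;
SQL> grant recovery_catalog_owner to rman;

$ rman catalog rman/rman@catdb
RMAN> CREATE CATALOG;

$ rman target / catalog rman/rman@catdb
RMAN> REGISTER DATABASE;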

Q. How Many catalog database I can have ?
A. You can have multiple catalog databases for the same target database, but at a time you can connect to only one catalog database via RMAN. It is not recommended to have multiple catalog databases.

Q. Is it mandatory to use a catalog database ?
A. No, it is optional.

Q. What is the advantage of catalog database ?
A. The catalog database is a secondary store of backup metadata. It is very useful if you lose the current controlfile, as all the backup information is there in the catalog schema. Second, older backup information is aged out of the controlfile depending on CONTROL_FILE_RECORD_KEEP_TIME, whereas the RMAN catalog database maintains the full history. Kindly refer to note <<397269.1>> for more details on the relation between the retention policy and CONTROL_FILE_RECORD_KEEP_TIME.

Q. What is the difference between catalog database & catalog schema ?
A. Catalog database is like any other database which contains the RMAN catalog user's schema.

Q. Catalog database compatibility matrix with the target database ?
A. refer Note 73431.1 : RMAN Compatibility Matrix

Q. What happen if catalog database lost ?
A. Since the catalog database is optional, there is no direct effect from losing it. Create a new catalog database and register the target database with the newly created catalog. All the backup information from the target database's current controlfile will be updated to the catalog schema. If any backup information has already aged out of the target database controlfile, you need to manually catalog those backup pieces.
RMAN backup :

What are the database files that RMAN can backup ?
What are the database files that RMAN can not backup ?
Can I have archivelogs and datafile backup in a single backupset ?
Can I have datafiles and controlfile backup in a single backup set?
Can I regulate the size of backup piece and backupset ?
What is the difference between backup set backup and Image copy backup ?
What is RMAN consistent backup and Inconsistent backup ?
Can I take RMAN backup when the database is Down ?
Do I need to place the database in begin backup mode while taking RMAN inconsistent backup ?
Can I compress RMAN backups ?
Can I encrypt RMAN backups ?

Q. What are the database files that RMAN can backup ?
A. RMAN can back up the controlfile, datafiles, archive logs, standby database controlfile, and spfile.

Q. What are the database files that RMAN can not backup ?
A. RMAN cannot back up the pfile, online redo logs, network configuration files, password files, external tables, or the contents of the Oracle home files

Q. Can I have archivelogs and datafile backup in a single backupset ?
A. No, we cannot put datafiles and archive logs in the same backupset.
Q. Can I have datafiles and controlfile backup in a single backup set ?
A. Yes. If controlfile autobackup is not ON, then RMAN takes a backup of the controlfile along with datafile 1 whenever you back up the database or the SYSTEM tablespace. Kindly refer ( *** create a note for controlfile auto backup )

Q. Can I regulate the size of backup piece and backupset ?
A. Yes ! You can set max size of the backupset as well as the backup piece. By default one RMAN channel creates a single backupset with one backup piece in it. You can use the MAXPIECESIZE channel parameter to set limits on the size of backup pieces. You can also use the MAXSETSIZE parameter on the BACKUP and CONFIGURE commands to set a
limit for the size of backup sets.
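
For example (the sizes are placeholders):

RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 2G;
RMAN> CONFIGURE MAXSETSIZE TO 10G;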

Q. What is the difference between backup set backup and Image copy backup ?
A : A backup set is an RMAN-specific proprietary format, whereas an image copy is a bit-for-bit copy of a file. By default, RMAN creates backup sets.

Q. What is RMAN consistent backup and inconsistent backup ?
A. A consistent backup occurs when the database is in a consistent state. That means a backup of the database taken after a SHUTDOWN IMMEDIATE, SHUTDOWN NORMAL, or SHUTDOWN TRANSACTIONAL. If the database is shut down with the ABORT option then it is not a consistent backup.
A backup taken when the database is up and running is called an inconsistent backup. When a database is restored from an inconsistent backup, Oracle must perform media recovery before the database can be
opened, applying any pending changes from the redo logs. You cannot take an inconsistent backup when the database is in NOARCHIVELOG mode.

Q. Can I take RMAN backup when the database is Down ?
A. No. You can take an RMAN backup only when the target database is open or mounted. That is because RMAN keeps the backup metadata in the controlfile, and the controlfile is accessible only in open or mount mode.

Q. Do I need to place the database in begin backup mode while taking RMAN inconsistent backup ?
A. RMAN does not require extra logging or backup mode because it knows the format of data blocks. RMAN is guaranteed not to back up fractured blocks. No extra redo is generated during an RMAN backup.

Q. Can I compress RMAN backups ?
A. RMAN supports binary compression of backup sets. The supported algorithms are BZIP2 (default) and ZLIB. Kindly refer to Note 427581.1 for more details about the different types of compression. It is not recommended to compress RMAN backups using any other OS or third-party utility.

Note : RMAN compressed backup with BZIP2 provides great compression but is CPU intensive. Using ZLIB compression requires the Oracle Database 11g Advanced Compression Option and is only supported with an 11g database. The feature is not backward compatible with 10g databases.
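
For example, compression can be requested per backup or made the default for disk backups
(the ZLIB algorithm line applies only to 11g, as noted above):

RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;
RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;
RMAN> CONFIGURE COMPRESSION ALGORITHM 'ZLIB';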

Q. Can I encrypt RMAN backup ?
A. RMAN supports backup encryption for backup sets. You can use wallet-based transparent encryption, password-based encryption, or both. You can use the CONFIGURE ENCRYPTION command to configure persistent transparent encryption. Use the SET ENCRYPTION command at the RMAN session level to specify password-based encryption.
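
For example (the password is a placeholder):

RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON;
RMAN> SET ENCRYPTION ON IDENTIFIED BY mypassword ONLY;
RMAN> BACKUP DATABASE;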

8.13 How to specify backup destination ?

9. Can RMAN take backup to Tape ?

Yes, you can use RMAN for tape backup, but RMAN cannot write directly to tape. You need to have third-party Media Management Software installed. Oracle has published an API specification to which Media Management Vendors who are members of Oracle's Backup Solutions Partner program have access. Media Management Vendors (MMVs) then write an interface library which the Oracle server uses to write and read to
and from tape.

9.1 Where can I get the list of supported Third party Media Management Software for tape backup ?

RMAN should not be used with a Media Manager until the MMV has certified that their software works with
RMAN. Either contact your Media Management Vendor, or check the RMAN home page for updates on which MMVs have certified their products on which platforms:

http://www.oracle.com/technology/deploy/availability/htdocs/bsp.htm

Starting from Oracle 10g R2, Oracle has its own media management software for database backup to tape, called Oracle Secure Backup (OSB).

9.2 How RMAN Interact with Media manager ?

Before performing backup or restore to a media manager, you must allocate one or more channels or configure default channels for use with the media manager to handle the communication with the media manager. RMAN does not issue specific commands to load, label, or unload tapes. When backing up, RMAN gives the media manager a stream of bytes and associates a unique name with this stream. When RMAN needs to restore the backup, it asks the media manager to retrieve the byte stream. All details of how and where that stream is stored are handled entirely by the media manager.

9.3 What is Proxy copy backup to tape ?

Proxy copy is functionality, supported by a few media managers, in which they handle the entire data movement between datafiles and the backup devices. Such products may use technologies such as high-speed connections between storage and media subsystems to reduce load on the primary database server. RMAN provides a list of files requiring backup or restore to the media manager, which in turn makes all decisions regarding how and when to move the data.

9.4 What is Oracle Secure backup ?

Oracle Secure Backup is a media manager supplied by Oracle that provides reliable and secure data protection through file system backup to tape. All major tape drives and tape libraries in SAN, Gigabit Ethernet, and SCSI environments are supported.

Friday, February 4, 2011

SQL Tunning Tips

SET ECHO off
REM NAME: TFSXPLAN.SQL
REM USAGE:"@path/tfsxplan"
REM ------------------------------------------------------------------------
REM REQUIREMENTS:
REM SELECT on the desired explain plan table (PLAN_TABLE)
REM ------------------------------------------------------------------------
REM PURPOSE:
REM This script provides formatted output from an EXPLAIN_PLAN table
REM ------------------------------------------------------------------------
REM EXAMPLE:
REM Query Plan
REM -------------------------------------------------------------------
REM SELECT STATEMENT Cost = 1
REM TABLE ACCESS FULL S_EMP
REM
REM ------------------------------------------------------------------------
REM Main text of script follows:

column operation format a16
column options format a15
column object_name format a20
column id format 99
select lpad(' ',2*(level-1))||operation||' '||options||' '||object_name||' '
||decode(id,0,'Cost = '||position) "Query Plan"
from plan_table
start with id = 0
connect by prior id = parent_id
/



==============
Sample Output:
==============

Query Plan
-------------------------------------------------------------------
SELECT STATEMENT Cost = 1
TABLE ACCESS FULL S_EMP
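
The script reads PLAN_TABLE, so the plan must be generated first. Assuming the standard plan
table exists (it can be created with $ORACLE_HOME/rdbms/admin/utlxplan.sql), a typical run
looks like this:

SQL> delete from plan_table;
SQL> explain plan for
  2  select * from s_emp;

Explained.

SQL> @tfsxplan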



===>>>>>





In this chapter, you will do the following:

Learn about SQL tuning
See SQL tuning tips that you can use to shorten the length of time your queries take to execute
Learn about the Oracle optimizer
See how to compare the cost of performing queries
Examine optimizer hints
Learn about some additional tuning tools



Introducing SQL Tuning : ==>>

One of the main strengths of SQL is that you don’t have to tell the database exactly how to obtain
the data requested.

You simply run a query specifying the information you want, and the database
software figures out the best way to get it.

Sometimes, you can improve the performance of your
SQL statements by “tuning” them. In the following sections, you’ll see tuning tips that can make
your queries run faster; later, you’ll see more advanced tuning techniques.
Use a WHERE Clause to Filter Rows

Many novices retrieve all the rows from a table when they only want one row (or a few rows). This
is very wasteful. A better approach is to add a WHERE clause to a query. That way, you restrict the
rows retrieved to just those actually needed.


For example, say you want the details for customer #1 and #2. The following query retrieves
all the rows from the customers table in the store schema (wasteful):

-- BAD (retrieves all rows from the customers table) ==>

SELECT *
FROM customers;

CUSTOMER_ID FIRST_NAME LAST_NAME DOB PHONE
----------- ---------- ---------- --------- ------------
1 John Brown 01-JAN-65 800-555-1211
2 Cynthia Green 05-FEB-68 800-555-1212
3 Steve White 16-MAR-71 800-555-1213
4 Gail Black 800-555-1214
5 Doreen Blue 20-MAY-70


The next query adds a WHERE clause to the previous example to just get customer #1 and #2:

-- GOOD (uses a WHERE clause to limit the rows retrieved) ==>

SELECT *
FROM customers
WHERE customer_id IN (1, 2);


CUSTOMER_ID FIRST_NAME LAST_NAME DOB PHONE
----------- ---------- ---------- --------- ------------
1 John Brown 01-JAN-65 800-555-1211
2 Cynthia Green 05-FEB-68 800-555-1212


You should avoid using functions in the WHERE clause, as that increases execution time.
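
For example, wrapping the filter column in a function can stop the optimizer from using a
normal index on that column (this sketch assumes an index exists on last_name):

-- BAD (function applied to the filter column)
SELECT *
FROM customers
WHERE UPPER(last_name) = 'BROWN';

-- BETTER (no function on the column, so an index on last_name can be used)
SELECT *
FROM customers
WHERE last_name = 'Brown';
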
Use Table Joins Rather than Multiple Queries.

If you need information from multiple related tables, you should use join conditions rather than
multiple queries.

In the following bad example, two queries are used to get the product name and
the product type name for product #1 (using two queries is wasteful).

The first query gets the name and product_type_id column values from the products table for product #1.
The second query then uses that product_type_id to get the name column from the product_types table.

-- BAD (two separate queries when one would work) ===>>

SELECT name, product_type_id
FROM products
WHERE product_id = 1;
NAME PRODUCT_TYPE_ID
------------------------------ ---------------
Modern Science 1


SELECT name
FROM product_types
WHERE product_type_id = 1;

NAME
----------
Book


Instead of using the two queries, you should write one query that uses a join between the
products and product_types tables. The following good query shows this:

-- GOOD (one query with a join) ==>>

SELECT p.name, pt.name
FROM products p, product_types pt
WHERE p.product_type_id = pt.product_type_id
AND p.product_id = 1;

NAME NAME
------------------------------ ----------
Modern Science Book



This query results in the same product name and product type name being retrieved as in the
first example, but the results are obtained using one query. One query is generally more efficient
than two.


You should choose the join order in your query so that you join fewer rows to tables later in
the join order.

For example, say you were joining three related tables named tab1, tab2, and
tab3. Assume tab1 contains 1,000 rows, tab2 100 rows, and tab3 10 rows. You should join
tab1 with tab2 first, followed by tab2 and tab3.

Also, avoid joining complex views in your queries, because doing so causes the queries for
the views to be run first, followed by your actual query. Instead, write your query using the tables
rather than the views.

Use Fully Qualified Column References When Performing Joins

Always include table aliases in your queries and use the alias for each column in your query (this
is known as “fully qualifying” your column references). That way, the database doesn’t have to
search for each column in the tables used in your query.

The following bad example uses the aliases p and pt for the products and
product_types tables, respectively, but the query doesn’t fully qualify the description and price
columns:

-- BAD (description and price columns not fully qualified) ==>>

SELECT p.name, pt.name, description, price
FROM products p, product_types pt
WHERE p.product_type_id = pt.product_type_id
AND p.product_id = 1;


NAME NAME DESCRIPTION PRICE
-------------------------------------------------- ----------
Modern Science Book A description of modern science 19.95


This example works, but the database has to search both the products and product_types
tables for the description and price columns; that’s because there’s no alias that tells the
database which table those columns are in.

The extra time spent by the database having to do the search is wasted time.

The following good example includes the table alias p to fully qualify the description and
price columns:

-- GOOD (all columns are fully qualified) ===>>

SELECT p.name, pt.name, p.description, p.price
FROM products p, product_types pt
WHERE p.product_type_id = pt.product_type_id
AND p.product_id = 1;


NAME NAME DESCRIPTION PRICE
-------------------------------------------------- ----------
Modern Science Book A description of modern science 19.95


Because all references to columns include a table alias, the database doesn’t have to waste
time searching the tables for the columns, and execution time is reduced.


Use CASE Expressions Rather than Multiple Queries
Use CASE expressions rather than multiple queries when you need to perform many calculations
on the same rows in a table. The following bad example uses multiple queries to count the
number of products within various price ranges:


-- BAD (three separate queries when one CASE statement would work)
SELECT COUNT(*)
FROM products
WHERE price < 13;

COUNT(*)
----------
2

SELECT COUNT(*)
FROM products
WHERE price BETWEEN 13 AND 15;

COUNT(*)
----------
5


SELECT COUNT(*)
FROM products
WHERE price > 15;


COUNT(*)
----------
5


Rather than using three queries, you should write one query that uses CASE expressions. This
is shown in the following good example:



-- GOOD (one query with a CASE expression) ===>>>


SELECT
COUNT(CASE WHEN price < 13 THEN 1 ELSE null END) low,
COUNT(CASE WHEN price BETWEEN 13 AND 15 THEN 1 ELSE null END) med,
COUNT(CASE WHEN price > 15 THEN 1 ELSE null END) high
FROM products;


LOW MED HIGH
---------- ---------- ----------
2 5 5

Notice that the count of products priced below $13 is labeled low, the count of products priced
between $13 and $15 is labeled med, and the count of products priced above $15 is labeled high.



NOTE: You can, of course, use overlapping ranges and different functions in
your CASE expressions.
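For instance, the following sketch (my own variation, not taken from the example output above)
uses overlapping price ranges and the AVG function instead of COUNT:

-- AVG() ignores the NULLs produced when a CASE condition is false,
-- so each column averages only the rows in its (overlapping) range
SELECT
COUNT(CASE WHEN price < 15 THEN 1 END) cnt_under_15,
AVG(CASE WHEN price < 15 THEN price END) avg_under_15,
AVG(CASE WHEN price >= 13 THEN price END) avg_13_and_over
FROM products;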

Add Indexes to Tables

When looking for a particular topic in a book, you can either scan the whole book or use the index
to find the location.

An index for a database table is similar in concept to a book index, except
that database indexes are used to find specific rows in a table. The downside of indexes is that
when a row is added to the table, additional time is required to update the index for the new row.
Generally, you should create an index on a column when you are retrieving a small number
of rows from a table containing many rows.


A good rule of thumb is to create an index when a query retrieves 10 percent or less of the total rows in a table.

This means the column for the index should contain a wide range of values. A good candidate
for indexing would be a column containing a unique value for each row (for example, a social
security number).
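For example, here is a sketch of creating an index on such a column (the ssn column and the
index name are made up for illustration and are not part of the example schema):

CREATE INDEX i_customers_ssn ON customers (ssn);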

A poor candidate for indexing would be a column that contains only a small
range of values (for example, N, S, E, W or 1, 2, 3, 4, 5, 6).

An Oracle database automatically
creates an index for the primary key of a table and for columns included in a unique constraint.
In addition, if your database is accessed using a lot of hierarchical queries (that is, a query
containing a CONNECT BY), you should add indexes to the columns referenced in the START
WITH and CONNECT BY clauses.
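A minimal sketch, assuming a hierarchical employees table with employee_id and manager_id
columns (an assumption for illustration; your table and column names will differ):

-- Index the column referenced in the CONNECT BY condition
CREATE INDEX i_employees_manager_id ON employees (manager_id);

-- A hierarchical query that can benefit from the index
SELECT employee_id, manager_id
FROM employees
START WITH manager_id IS NULL
CONNECT BY PRIOR employee_id = manager_id;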


Finally, for a column that contains a small range of values and is frequently used in the WHERE
clause of queries, you should consider adding a bitmap index to that column. Bitmap indexes are
typically used in data warehouses, which are databases containing very large amounts of data.
The data in a data warehouse is typically read using many queries, but the data is not modified
by many concurrent transactions.
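For example, here is a sketch of a bitmap index on the low-cardinality product_type_id column
of the products table (the index name is made up for illustration):

CREATE BITMAP INDEX b_products_product_type_id ON products (product_type_id);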


Normally, a database administrator is responsible for creating indexes. However, as an
application developer, you’ll be able to provide the DBA with feedback on which columns are
good candidates for indexing, because you may know more about the application than the DBA.
Chapter 10 covers indexes in depth, and you should re-read the section on indexes if necessary.


Use WHERE Rather than HAVING ==>

You use the WHERE clause to filter rows; you use the HAVING clause to filter groups of rows.

Because the HAVING clause filters groups of rows after they have been grouped together (which
takes some time to do), you should first filter rows using a WHERE clause whenever possible.

That way, you avoid the time taken to group the filtered rows together in the first place.

The following bad query retrieves the product_type_id and average price for products
whose product_type_id is 1 or 2.

To do this, the query performs the following:

■ It uses the GROUP BY clause to group rows into blocks with the same product_type_id.

■ It uses the HAVING clause to filter the returned results to those groups that have a
product_type_id of 1 or 2 (this is bad, because a WHERE clause would work).


-- BAD (uses HAVING rather than WHERE) ===>>>


SELECT product_type_id, AVG(price)
FROM products
GROUP BY product_type_id
HAVING product_type_id IN (1, 2);

PRODUCT_TYPE_ID AVG(PRICE)
--------------- ----------
1 24.975
2 26.22


The following good query rewrites the previous example to use WHERE rather than HAVING to
first filter the rows to those whose product_type_id is 1 or 2:


-- GOOD (uses WHERE rather than HAVING) ==>>>


SELECT product_type_id, AVG(price)
FROM products
WHERE product_type_id IN (1, 2)
GROUP BY product_type_id;



PRODUCT_TYPE_ID AVG(PRICE)
--------------- ----------
1 24.975
2 26.22



Use UNION ALL Rather than UNION ===>>>>>

You use UNION ALL to get all the rows retrieved by two queries, including duplicate rows;

you use UNION to get all non-duplicate rows retrieved by the queries. Because UNION removes
duplicate rows (which takes some time to do), you should use UNION ALL whenever possible.
The following bad query uses UNION (bad because UNION ALL would work) to get the rows
from the products and more_products tables; notice that all non-duplicate rows from
products and more_products are retrieved:


-- BAD (uses UNION rather than UNION ALL) ===>>

SELECT product_id, product_type_id, name
FROM products
UNION
SELECT prd_id, prd_type_id, name
FROM more_products;


PRODUCT_ID PRODUCT_TYPE_ID NAME
---------- --------------- -------------------
1 1 Modern Science
2 1 Chemistry
3 2 Supernova
3 Supernova
4 2 Lunar Landing
4 2 Tank War
5 2 Submarine
5 2 Z Files
6 2 2412: The Return
7 3 Space Force 9
8 3 From Another Planet
9 4 Classical Music
10 4 Pop 3
11 4 Creative Yell
12 My Front Line


The following good query rewrites the previous example to use UNION ALL; notice that all
the rows from products and more_products are retrieved, including duplicates:


-- GOOD (uses UNION ALL rather than UNION) ==>>

SELECT product_id, product_type_id, name
FROM products
UNION ALL
SELECT prd_id, prd_type_id, name
FROM more_products;


PRODUCT_ID PRODUCT_TYPE_ID NAME
---------- --------------- ------------------------------
1 1 Modern Science
2 1 Chemistry
3 2 Supernova
4 2 Tank War
5 2 Z Files
6 2 2412: The Return
7 3 Space Force 9
8 3 From Another Planet
9 4 Classical Music
10 4 Pop 3
11 4 Creative Yell
12 My Front Line
1 1 Modern Science
2 1 Chemistry
3 Supernova
4 2 Lunar Landing
5 2 Submarine



Use EXISTS Rather than IN ===>>>


You use IN to check if a value is contained in a list. You use EXISTS to check for the existence of
rows returned by a subquery. EXISTS is different from IN: EXISTS just checks for the existence
of rows, whereas IN checks actual values. EXISTS typically offers better performance than IN
with subqueries. Therefore, you should use EXISTS rather than IN whenever possible.


You should refer back to the section entitled "Using EXISTS and NOT EXISTS with a Correlated
Subquery" for details on when to use EXISTS with a correlated subquery
(an important point to remember from that section is how null values are handled).


The following bad query uses IN (bad because EXISTS would work) to retrieve products that
have been purchased:


-- BAD (uses IN rather than EXISTS) ===>>

SELECT product_id, name
FROM products
WHERE product_id IN
(SELECT product_id
FROM purchases);


PRODUCT_ID NAME
---------- -----------------------------
1 Modern Science
2 Chemistry
3 Supernova


The following good query rewrites the previous example to use EXISTS:

-- GOOD (uses EXISTS rather than IN) ==>>


SELECT product_id, name
FROM products outer
WHERE EXISTS
(SELECT 1
FROM purchases inner
WHERE inner.product_id = outer.product_id);


PRODUCT_ID NAME
---------- -----------------------------
1 Modern Science
2 Chemistry
3 Supernova



Use EXISTS Rather than DISTINCT ===>>>

You can suppress the display of duplicate rows using DISTINCT. You use EXISTS to check for the
existence of rows returned by a subquery. Whenever possible, you should use EXISTS rather than
DISTINCT, because DISTINCT sorts the retrieved rows before suppressing the duplicate rows.
The following bad query uses DISTINCT (bad because EXISTS would work) to retrieve
products that have been purchased:


-- BAD (uses DISTINCT when EXISTS would work) ==>>


SELECT DISTINCT pr.product_id, pr.name
FROM products pr, purchases pu
WHERE pr.product_id = pu.product_id;


PRODUCT_ID NAME
---------- -----------------------------
1 Modern Science
2 Chemistry
3 Supernova



The following good query rewrites the previous example to use EXISTS rather than DISTINCT:

-- GOOD (uses EXISTS rather than DISTINCT) ==>>>


SELECT product_id, name
FROM products outer
WHERE EXISTS
(SELECT 1
FROM purchases inner
WHERE inner.product_id = outer.product_id);


PRODUCT_ID NAME
---------- -----------------------------
1 Modern Science
2 Chemistry
3 Supernova


Use GROUPING SETS Rather than CUBE ===>>


The GROUPING SETS clause typically offers better performance than CUBE. Therefore, you
should use GROUPING SETS rather than CUBE wherever possible. This is fully covered in the
section entitled “Using the GROUPING SETS Clause”.
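As a reminder of the difference, here is a sketch that assumes the purchases table (with
product_id, customer_id, and quantity columns) used elsewhere in the book; CUBE computes
every possible grouping, whereas GROUPING SETS computes only the groupings you list:

-- CUBE: groups by (product_id, customer_id), (product_id), (customer_id), and ()
SELECT product_id, customer_id, SUM(quantity)
FROM purchases
GROUP BY CUBE (product_id, customer_id);

-- GROUPING SETS: computes only the two groupings listed, which is less work
SELECT product_id, customer_id, SUM(quantity)
FROM purchases
GROUP BY GROUPING SETS (product_id, customer_id);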


Use Bind Variables ===>>


The Oracle database software caches SQL statements; a cached SQL statement is reused if an
identical statement is submitted to the database. When an SQL statement is reused, the execution
time is reduced. However, the SQL statement must be absolutely identical in order for it to be
reused.

This means that

■ All characters in the SQL statement must be the same.
■ All letters in the SQL statement must be in the same case.
■ All spaces in the SQL statement must be the same.


If you need to supply different column values in a statement, you can use bind variables
instead of literal column values. You’ll see examples that clarify these ideas next.
Non-Identical SQL Statements

In this section, you’ll see some non-identical SQL statements. The following non-identical queries
retrieve products #1 and #2:


SELECT * FROM products WHERE product_id = 1;
SELECT * FROM products WHERE product_id = 2;


These queries are not identical, because the value 1 is used in the first statement, but the
value 2 is used in the second.


The following non-identical queries have spaces in different positions:

SELECT * FROM products WHERE product_id = 1;
SELECT  *  FROM  products  WHERE  product_id = 1;

The following non-identical queries use a different case for some of the characters:

select * from products where product_id = 1;
SELECT * FROM products WHERE product_id = 1;

Now that you’ve seen some non-identical statements, let’s take a look at identical SQL
statements that use bind variables.

Identical SQL Statements That Use Bind Variables
You can ensure that a statement is identical by using bind variables to represent column values.
You create a bind variable using the SQL*Plus VARIABLE command. For example, the following
command creates a variable named v_product_id of type NUMBER:
VARIABLE v_product_id NUMBER

NOTE: You can use the types shown in Table A-1 of the appendix to define
the type of a bind variable.

You reference a bind variable in an SQL or PL/SQL statement using a colon followed by the
variable name (such as :v_product_id).

For example, the following PL/SQL block sets v_product_id to 1:

BEGIN
:v_product_id := 1;
END;
/

The following query uses v_product_id to set the product_id column value in the
WHERE clause; because v_product_id was set to 1 in the previous PL/SQL block, the query
retrieves the details of product #1:


SELECT * FROM products WHERE product_id = :v_product_id;


PRODUCT_ID PRODUCT_TYPE_ID NAME           DESCRIPTION                     PRICE
---------- --------------- -------------- ------------------------------- ------
         1               1 Modern Science A description of modern science  19.95


The next example sets v_product_id to 2 and repeats the query:

BEGIN
:v_product_id := 2;
END;
/


SELECT * FROM products WHERE product_id = :v_product_id;


PRODUCT_ID PRODUCT_TYPE_ID NAME           DESCRIPTION                     PRICE
---------- --------------- -------------- ------------------------------- ------
         2               1 Chemistry      Introduction to Chemistry           30


Because the query used in this example is identical to the previous query, the cached query is
reused and there’s an improvement in performance.

TIP ==>

You should typically use bind variables if you’re performing the same
query many times. Also, in the example, the bind variables are session
specific and need to be reset if the session is lost.

Listing and Printing Bind Variables
You list bind variables in SQL*Plus using the VARIABLE command. For example:

VARIABLE
variable v_product_id
datatype NUMBER

You display the value of a bind variable in SQL*Plus using the PRINT command.
For example:

PRINT v_product_id
V_PRODUCT_ID
-------------
2

Using a Bind Variable to Store a Value Returned by a PL/SQL Function
You can also use a bind variable to store returned values from a PL/SQL function. The following
example creates a bind variable named v_average_product_price and stores the result
returned by the function average_product_price() (this function was described in Chapter 11
and calculates the average product price for the supplied product_type_id):

VARIABLE v_average_product_price NUMBER
BEGIN
:v_average_product_price := average_product_price(1);
END;
/


PRINT v_average_product_price

V_AVERAGE_PRODUCT_PRICE
------------------------
24.975


Using a Bind Variable to Store Rows from a REFCURSOR
You can also use a bind variable to store returned values from a REFCURSOR (a REFCURSOR is a
pointer to a list of rows). The following example creates a bind variable named
v_products_refcursor and stores the result returned by the function
product_package.get_products_ref_cursor() (this function was introduced in Chapter 11;
it returns a pointer to the rows in the products table):

VARIABLE v_products_refcursor REFCURSOR
BEGIN
:v_products_refcursor := product_package.get_products_ref_cursor();
END;
/


PRINT v_products_refcursor

PRODUCT_ID NAME PRICE
---------- ------------------------------ ----------
1 Modern Science 19.95
2 Chemistry 30
3 Supernova 25.99
4 Tank War 13.95
5 Z Files 49.99
6 2412: The Return 14.95
7 Space Force 9 13.49
8 From Another Planet 12.99
9 Classical Music 10.99
10 Pop 3 15.99
11 Creative Yell 14.99
12 My Front Line 13.49



Comparing the Cost of Performing Queries ==>>

The Oracle database software uses a subsystem known as the optimizer to generate the most
efficient path to access the data stored in the tables. The path generated by the optimizer is
known as an execution plan. Oracle Database 10g and above automatically gathers statistics
about the data in your tables and indexes in order to generate the best execution plan (this is
known as cost-based optimization).

Comparing the execution plans generated by the optimizer allows you to judge the relative
cost of one SQL statement versus another. You can use the results to improve your SQL statements.
In this section, you’ll learn how to view and interpret a couple of example execution plans.


NOTE ==>

Database versions prior to Oracle Database 10g don’t automatically
gather statistics, and the optimizer automatically defaults to rule-based
optimization. Rule-based optimization uses syntactic rules to generate
the execution plan. Cost-based optimization is typically better than
rule-based optimization because the former uses actual information
gathered from the data in the tables and indexes. If you’re using
Oracle Database 9i or below, you can gather statistics yourself (you’ll
learn how to do that later in the section “Gathering Table Statistics”).

Examining Execution Plans


The optimizer generates an execution plan for an SQL statement. You can examine the execution
plan using the SQL*Plus EXPLAIN PLAN command. The EXPLAIN PLAN command populates
a table named plan_table with the SQL statement’s execution plan (plan_table is often
referred to as the “plan table”). You may then examine that execution plan by querying the plan
table. The first thing you must do is check if the plan table currently exists in the database.

Checking if the Plan Table Currently Exists in the Database
To check if the plan table currently exists in the database, you should connect to the database as
the store user and run the following DESCRIBE command:

SQL> DESCRIBE plan_table
Name Null? Type
----------------------------------------- -------- --------------
STATEMENT_ID VARCHAR2(30)
PLAN_ID NUMBER
TIMESTAMP DATE
REMARKS VARCHAR2(4000)
OPERATION VARCHAR2(30)
OPTIONS VARCHAR2(255)
OBJECT_NODE VARCHAR2(128)
OBJECT_OWNER VARCHAR2(30)
OBJECT_NAME VARCHAR2(30)
OBJECT_ALIAS VARCHAR2(65)
OBJECT_INSTANCE NUMBER(38)
OBJECT_TYPE VARCHAR2(30)
OPTIMIZER VARCHAR2(255)
SEARCH_COLUMNS NUMBER
ID NUMBER(38)
PARENT_ID NUMBER(38)
DEPTH NUMBER(38)
POSITION NUMBER(38)
COST NUMBER(38)
CARDINALITY NUMBER(38)
BYTES NUMBER(38)
OTHER_TAG VARCHAR2(255)
PARTITION_START VARCHAR2(255)
PARTITION_STOP VARCHAR2(255)
PARTITION_ID NUMBER(38)
OTHER LONG
OTHER_XML CLOB
DISTRIBUTION VARCHAR2(30)
CPU_COST NUMBER(38)
IO_COST NUMBER(38)
TEMP_SPACE NUMBER(38)
ACCESS_PREDICATES VARCHAR2(4000)
FILTER_PREDICATES VARCHAR2(4000)
PROJECTION VARCHAR2(4000)
TIME NUMBER(38)
QBLOCK_NAME VARCHAR2(30)




If you get a table description similar to these results, you have the plan table already. If you
get an error, then you need to create the plan table.

Creating the Plan Table

If you don’t have the plan table, you must create it. To do this, you run the SQL*Plus script
utlxplan.sql (on my Windows computer, the script is located in the directory
E:\oracle_11g\product\11.1.0\db_1\RDBMS\ADMIN). The following example shows the command
to run the utlxplan.sql script:

SQL> @ E:\oracle_11g\product\11.1.0\db_1\RDBMS\ADMIN\utlxplan.sql

NOTE: You'll need to replace the directory path with the path for your
environment.

The most important columns in the plan table are described in the table that follows the next
section.

Creating a Central Plan Table

If necessary, a database administrator can create one central plan table. That way,
individual users don’t have to create their own plan tables. To do this, a database
administrator performs the following steps:

1. Creates the plan table in a schema of their choice by running the utlxplan.sql
script.

2. Creates a public synonym for the plan table.

3. Grants access on the plan table to the public role.

Here is an example of these steps:

@ E:\oracle_11g\product\11.1.0\db_1\RDBMS\ADMIN\utlxplan.sql


CREATE PUBLIC SYNONYM plan_table FOR plan_table;
GRANT SELECT, INSERT, UPDATE, DELETE ON plan_table TO PUBLIC;


Column         Description

statement_id   Name you assign to the execution plan.

operation      Database operation performed, which can be
               ■ Scanning a table
               ■ Scanning an index
               ■ Accessing rows from a table by using an index
               ■ Joining two tables together
               ■ Sorting a row set
               For example, the operation for accessing a table is TABLE ACCESS.

options        Name of the option used in the operation. For example, the option for a
               complete scan is FULL.

object_name    Name of the database object referenced in the operation.

object_type    Attribute of the object. For example, a unique index has the attribute
               UNIQUE.

id             Number assigned to this operation in the execution plan.

parent_id      Parent number for the current step in the execution plan. The parent_id
               value relates to an id value from a parent step.

position       Processing order for steps that have the same parent_id.

cost           Estimate of units of work for the operation. Cost-based optimization uses
               disk I/O, CPU usage, and memory usage as units of work. Therefore, the
               cost is an estimate of the number of disk I/Os and the amount of CPU
               and memory used in performing an operation.



Generating an Execution Plan:

Once you have a plan table, you can use the EXPLAIN PLAN command to generate an execution
plan for an SQL statement. The syntax for the EXPLAIN PLAN command is as follows:


EXPLAIN PLAN SET STATEMENT_ID = statement_id FOR sql_statement;


where

■ statement_id is the name you want to call the execution plan. This can be any
alphanumeric text.

■ sql_statement is the SQL statement you want to generate an execution plan for.

The following example generates the execution plan for a query that retrieves all rows from
the customers table (notice that the statement_id is set to 'CUSTOMERS'):


SQL> EXPLAIN PLAN SET STATEMENT_ID = 'CUSTOMERS' FOR SELECT customer_id, first_name, last_name FROM customers;
Explained.


After the command completes, you may examine the execution plan stored in the plan table.
You’ll see how to do that next.


NOTE ===>

The query in the EXPLAIN PLAN statement doesn't return rows
from the customers table. The EXPLAIN PLAN statement simply
generates the execution plan that would be used if the query was run.

Querying the Plan Table


For querying the plan table, I have provided an SQL*Plus script named explain_plan.sql in
the SQL directory. The script prompts you for the statement_id and then displays the execution
plan for that statement.
The explain_plan.sql script is as follows:


-- Displays the execution plan for the specified statement_id
UNDEFINE v_statement_id;

SELECT
id ||
DECODE(id, 0, '', LPAD(' ', 2*(level - 1))) || ' ' ||
operation || ' ' ||
options || ' ' ||
object_name || ' ' ||
object_type || ' ' ||
DECODE(cost, NULL, '', 'Cost = ' || position)
AS execution_plan
FROM plan_table
CONNECT BY PRIOR id = parent_id
AND statement_id = '&&v_statement_id'
START WITH id = 0
AND statement_id = '&v_statement_id';




An execution plan is organized into a hierarchy of database operations similar to a tree; the
details of these operations are stored in the plan table. The operation with an id of 0 is the root
of the hierarchy, and all the other operations in the plan stem from this root. The query in the
script retrieves the details of the operations, starting with the root operation and then navigating
the tree from the root.

The following example shows how to run the explain_plan.sql script to retrieve the
'CUSTOMERS' plan created earlier:

SQL> @ c:\sql_book\sql\explain_plan.sql
Enter value for v_statement_id: CUSTOMERS
old 12: statement_id = '&&v_statement_id'
new 12: statement_id = 'CUSTOMERS'
old 14: statement_id = '&v_statement_id'
new 14: statement_id = 'CUSTOMERS'



EXECUTION_PLAN
----------------------------------------------
0 SELECT STATEMENT Cost = 3
1 TABLE ACCESS FULL CUSTOMERS TABLE Cost = 1





The operations shown in the EXECUTION_PLAN column are executed in the following order:

■ The most deeply indented operation is executed first, followed by any parent operations
above it.

■ For operations with the same indentation, the topmost operation is executed first,
followed by any parent operations above it.

Each operation feeds its results back up the chain to its immediate parent operation, and the
parent operation is then executed. In the EXECUTION_PLAN column, the operation ID is shown
on the far left. In the example execution plan, operation 1 is run first, with the results of that
operation being passed to operation 0. The following execution plan illustrates the ordering for a
more complex case:



0 SELECT STATEMENT Cost = 6
1 MERGE JOIN Cost = 1
2 TABLE ACCESS BY INDEX ROWID PRODUCT_TYPES TABLE Cost = 1
3 INDEX FULL SCAN PRODUCT_TYPES_PK INDEX (UNIQUE) Cost = 1
4 SORT JOIN Cost = 2
5 TABLE ACCESS FULL PRODUCTS TABLE Cost = 1



The order in which the operations are executed in this example is 3, 2, 5, 4, 1, and 0.
Now that you've seen the order in which operations are executed, it's time to move on to what
the operations actually do. The execution plan for the 'CUSTOMERS' query was


0 SELECT STATEMENT Cost = 3
1 TABLE ACCESS FULL CUSTOMERS TABLE Cost = 1

Operation 1 is run first, with the results of that operation being passed to operation 0.
Operation 1 involves a full table scan—indicated by the string TABLE ACCESS FULL—on the
customers table. Here's the original command used to generate the 'CUSTOMERS' execution plan:

EXPLAIN PLAN SET STATEMENT_ID = 'CUSTOMERS' FOR
SELECT customer_id, first_name, last_name FROM customers;



A full table scan is performed because the SELECT statement specifies that all the rows from
the customers table are to be retrieved.
The total cost of the query is three work units, as indicated in the cost part shown to the right
of operation 0 in the execution plan (0 SELECT STATEMENT Cost = 3). A work unit is the
amount of processing the software has to do to perform a given operation. The higher the cost,
the more work the database software has to do to complete the SQL statement.
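Incidentally, the explain_plan.sql script is not the only way to display a plan; Oracle also
supplies the DBMS_XPLAN package, which formats the contents of the plan table for you. A
minimal sketch, assuming the default plan_table and the 'CUSTOMERS' statement_id used above:

-- Format the 'CUSTOMERS' plan stored in plan_table using DBMS_XPLAN
SELECT *
FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE', 'CUSTOMERS', 'TYPICAL'));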



NOTE ==>

If you’re using a version of the database prior to Oracle Database 10g,
then the output for the overall statement cost may be blank. That’s
because earlier database versions don’t automatically collect table
statistics. In order to gather statistics, you have to use the ANALYZE
command. You’ll learn how to do that later in the section “Gathering
Table Statistics.”

Execution Plans Involving Table Joins
Execution plans for queries with table joins are more complex. The following example generates
the execution plan for a query that joins the products and product_types tables:

EXPLAIN PLAN SET STATEMENT_ID = 'PRODUCTS' FOR
SELECT p.name, pt.name
FROM products p, product_types pt
WHERE p.product_type_id = pt.product_type_id;


The execution plan for this query is shown in the following example:

@ c:\sql_book\sql\explain_plan.sql
Enter value for v_statement_id: PRODUCTS
EXECUTION_PLAN
----------------------------------------------------------------
0 SELECT STATEMENT Cost = 6
1 MERGE JOIN Cost = 1
2 TABLE ACCESS BY INDEX ROWID PRODUCT_TYPES TABLE Cost = 1
3 INDEX FULL SCAN PRODUCT_TYPES_PK INDEX (UNIQUE) Cost = 1
4 SORT JOIN Cost = 2
5 TABLE ACCESS FULL PRODUCTS TABLE Cost = 1




NOTE ===>>>

If you run the example, you may get a slightly different execution
plan depending on the version of the database you are using and
on the settings of the parameters in the database’s init.ora
configuration file.


The previous execution plan is more complex, and you can see the hierarchical relationships
between the various operations. The execution order of the operations is 3, 2, 5, 4, 1, and 0. Table
16-2 describes each operation in the order they are performed.

Gathering Table Statistics
If you're using a version of the database prior to Oracle Database 10g (such as 9i), then you'll
have to gather table statistics yourself using the ANALYZE command. By default, if no statistics are
available then rule-based optimization is used. Rule-based optimization isn’t usually as good as
cost-based optimization.


The following examples use the ANALYZE command to gather statistics for the products
and product_types tables:


ANALYZE TABLE products COMPUTE STATISTICS;
ANALYZE TABLE product_types COMPUTE STATISTICS;


Once the statistics have been gathered, cost-based optimization will be used rather than
rule-based optimization.
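As an aside, Oracle's documentation recommends the DBMS_STATS package over ANALYZE for
gathering optimizer statistics; a minimal sketch follows (the STORE schema name is an assumption
based on the example schema and may differ in your environment):

-- Gather statistics with DBMS_STATS instead of ANALYZE
BEGIN
DBMS_STATS.GATHER_TABLE_STATS(ownname => 'STORE', tabname => 'PRODUCTS');
DBMS_STATS.GATHER_TABLE_STATS(ownname => 'STORE', tabname => 'PRODUCT_TYPES');
END;
/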


Comparing Execution Plans
By comparing the total cost shown in the execution plan for different SQL statements, you can
determine the value of tuning your SQL. In this section, you’ll see how to compare two execution
plans and see the benefit of using EXISTS rather than DISTINCT (a tip I gave earlier). The following

example generates an execution plan for a query that uses EXISTS:


EXPLAIN PLAN SET STATEMENT_ID = 'EXISTS_QUERY' FOR
SELECT product_id, name
FROM products outer
WHERE EXISTS
(SELECT 1
FROM purchases inner
WHERE inner.product_id = outer.product_id);


Operation ID   Description

3              Full scan of the index product_types_pk (which is a unique index)
               to obtain the addresses of the rows in the product_types table. The
               addresses are in the form of ROWID values, which are passed to operation 2.

2              Access the rows in the product_types table using the list of ROWID
               values passed from operation 3. The rows are passed to operation 1.

5              Access the rows in the products table. The rows are passed to operation 4.

4              Sort the rows passed from operation 5. The sorted rows are passed to
               operation 1.

1              Merge the rows passed from operations 2 and 4. The merged rows are
               passed to operation 0.

0              Return the rows from operation 1 to the user. The total cost of the
               query is 6 work units.


TABLE 16-2 Execution Plan Operations ==>>

The execution plan for this query is shown in the following example:
@ c:\sql_book\sql\explain_plan.sql
Enter value for v_statement_id: EXISTS_QUERY
EXECUTION_PLAN
------------------------------------------------------------
0 SELECT STATEMENT Cost = 4
1 MERGE JOIN SEMI Cost = 1
2 TABLE ACCESS BY INDEX ROWID PRODUCTS TABLE Cost = 1
3 INDEX FULL SCAN PRODUCTS_PK INDEX (UNIQUE) Cost = 1
4 SORT UNIQUE Cost = 2
5 INDEX FULL SCAN PURCHASES_PK INDEX (UNIQUE) Cost = 1



As you can see, the total cost of the query is 4 work units. The next example generates an
execution plan for a query that uses DISTINCT:

EXPLAIN PLAN SET STATEMENT_ID = 'DISTINCT_QUERY' FOR
SELECT DISTINCT pr.product_id, pr.name
FROM products pr, purchases pu
WHERE pr.product_id = pu.product_id;


The execution plan for this query is shown in the following example:
@ c:\sql_book\sql\explain_plan.sql
Enter value for v_statement_id: DISTINCT_QUERY
EXECUTION_PLAN
--------------------------------------------------------------
0 SELECT STATEMENT Cost = 5
1 HASH UNIQUE Cost = 1
2 MERGE JOIN Cost = 1
3 TABLE ACCESS BY INDEX ROWID PRODUCTS TABLE Cost = 1
4 INDEX FULL SCAN PRODUCTS_PK INDEX (UNIQUE) Cost = 1
5 SORT JOIN Cost = 2
6 INDEX FULL SCAN PURCHASES_PK INDEX (UNIQUE) Cost = 1


The cost for the query is 5 work units. This query is more costly than the earlier query that
used EXISTS (that query had a cost of only 4 work units). These results show that, for this query,
EXISTS is cheaper than DISTINCT.


Passing Hints to the Optimizer
You can pass hints to the optimizer. A hint is an optimizer directive that influences the optimizer’s
choice of execution plan. The correct hint may improve the performance of an SQL statement.
You can check the effectiveness of a hint by comparing the cost in the execution plan of an SQL
statement with and without the hint.


In this section, you’ll see an example query that uses one of the more useful hints: the
FIRST_ROWS(n) hint. The FIRST_ROWS(n) hint tells the optimizer to generate an execution
plan that will minimize the time taken to return the first n rows in a query. This hint can be useful
when you don’t want to wait around too long before getting some rows back from your query, but
you still want to see all the rows.


The following example generates an execution plan for a query that uses FIRST_ROWS(2);
notice that the hint is placed within the strings /*+ and */:

EXPLAIN PLAN SET STATEMENT_ID = 'HINT' FOR
SELECT /*+ FIRST_ROWS(2) */ p.name, pt.name
FROM products p, product_types pt
WHERE p.product_type_id = pt.product_type_id;

CAUTION ==>

Your hint must use the exact syntax shown; otherwise, the hint
will be ignored. The syntax is: /*+ followed by one space, the
hint, followed by one space, and */.

The execution plan for this query is shown in the following example; notice that the cost is
4 work units:


@ c:\sql_book\sql\explain_plan.sql
Enter value for v_statement_id: HINT
EXECUTION_PLAN
------------------------------------------------------------------
0 SELECT STATEMENT Cost = 4
1 NESTED LOOPS
2 NESTED LOOPS Cost = 1
3 TABLE ACCESS FULL PRODUCTS TABLE Cost = 1
4 INDEX UNIQUE SCAN PRODUCT_TYPES_PK INDEX (UNIQUE) Cost = 2
5 TABLE ACCESS BY INDEX ROWID PRODUCT_TYPES TABLE Cost = 2


The next example generates an execution plan for the same query without the hint:

EXPLAIN PLAN SET STATEMENT_ID = 'NO_HINT' FOR
SELECT p.name, pt.name
FROM products p, product_types pt
WHERE p.product_type_id = pt.product_type_id;

The execution plan for the query is shown in the following example; notice the cost is 6 work
units (higher than the query with the hint):


@ c:\sql_book\sql\explain_plan.sql
Enter value for v_statement_id: NO_HINT
EXECUTION_PLAN
----------------------------------------------------------------
0 SELECT STATEMENT Cost = 6
1 MERGE JOIN Cost = 1
2 TABLE ACCESS BY INDEX ROWID PRODUCT_TYPES TABLE Cost = 1
3 INDEX FULL SCAN PRODUCT_TYPES_PK INDEX (UNIQUE) Cost = 1
4 SORT JOIN Cost = 2
5 TABLE ACCESS FULL PRODUCTS TABLE Cost = 1



These results show that the inclusion of the hint reduces the cost of running the query by
2 work units.
There are many hints that you can use, and this section has merely given you a taste of the
subject.
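For instance, another commonly used hint is the INDEX hint, which asks the optimizer to use a
specific index; here is a sketch using the products table and its primary key index products_pk
(the index name appears in the execution plans shown earlier):

-- Ask the optimizer to access products through its primary key index
SELECT /*+ INDEX(p products_pk) */ p.product_id, p.name
FROM products p
WHERE p.product_id = 1;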


Additional Tuning Tools
In this final section, I’ll mention some other tuning tools. Full coverage of these tools is beyond
the scope of this book. You can read the Oracle Database Performance Tuning Guide, published
by Oracle Corporation, for full details of the tools mentioned in this section and for a comprehensive
list of hints.


Oracle Enterprise Manager Diagnostics Pack
The Oracle Enterprise Manager Diagnostics Pack captures operating system, middle tier, and
application performance data, as well as database performance data. The Diagnostics Pack
analyzes this performance data and displays the results graphically. A database administrator can
also configure the Diagnostics Pack to alert them immediately of performance problems via e-mail
or page. Oracle Enterprise Manager also includes software guides to help resolve performance
problems.


Automatic Database Diagnostic Monitor ==>>

The Automatic Database Diagnostic Monitor (ADDM) is a self-diagnostic module built into the
Oracle database software. ADDM enables a database administrator to monitor the database for
performance problems by analyzing system performance over a long period of time. The database
administrator can view the performance information generated by ADDM in Oracle Enterprise
Manager. When ADDM finds performance problems, it will suggest solutions for corrective
action. Some example ADDM suggestions include

■ Hardware changes (for example, adding CPUs to the database server)
■ Database configuration changes (for example, changing the database initialization
parameter settings)
■ Application changes (for example, using the cache option for sequences or using bind
variables)
■ Use of other advisors (for example, running the SQL Tuning Advisor and SQL Access
Advisor on the SQL statements that consume the most database resources)

You'll learn about the SQL Tuning Advisor and SQL Access Advisor next.



SQL Tuning Advisor ==>>
The SQL Tuning Advisor allows a developer or database administrator to tune an SQL statement
using the following items:

■ The text of the SQL statement
■ The SQL identifier of the statement (obtained from the V$SQL_PLAN view, which is one
of the views available to a database administrator)
■ The range of snapshot identifiers
■ The SQL Tuning Set name



An SQL Tuning Set is a set of SQL statements with their associated execution plans and execution
statistics. SQL Tuning Sets are analyzed to generate SQL Profiles that help the optimizer to choose
the optimal execution plan. SQL Profiles contain collections of information that enable optimization
of the execution plan.


SQL Access Advisor ==>>

The SQL Access Advisor provides a developer or database administrator with performance advice
on materialized views, indexes, and materialized view logs. The SQL Access Advisor examines
space usage and query performance and recommends the most cost-effective configuration of new and existing materialized views and indexes.