Tuesday, July 27, 2010

Upgrade Notes from 10g RAC to 11g


Here is how I went about upgrading a 2-node 10g RAC cluster to 11g.

Here are the basic steps I followed:

Download, mount, and unzip the 11g Database and Clusterware Software on the original installation node.

I ran the Clusterware preupdate.sh script on both nodes, since I don't have a shared clusterware installation:

[root@rac2 ~]# /u03/clusterware/upgrade/preupdate.sh -crshome /u03/app/oracle/product/crs/10.2 -crsuser oracle -shutdown


This unlocks and stops the 10g CRS Software in preparation for the installation. The new software must be installed into the same directory as the existing CRS home.
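
Before running the installer, it's worth confirming the stack really is down on each node; a quick sanity check, using crsctl from the existing 10.2 CRS home:

# with the stack stopped, crsctl should report that it cannot contact the daemons
[root@rac1 ~]# /u03/app/oracle/product/crs/10.2/bin/crsctl check crs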

Next, install the 11g Clusterware Software:

[oracle@rac1 upgrade]$ echo $ORA_CRS_HOME
/u03/app/oracle/product/crs/10.2
[oracle@rac1 upgrade]$ export DISPLAY=192.168.1.4:0.0
[oracle@rac1 upgrade]$ /u03/clusterware/runInstaller


Follow the Universal Installer prompts to install on both nodes:

Next
> Next (ORA_CRS_HOME should already be selected)
> Next (select all applicable nodes)
> Next (Product Checks)
> Install (after verifying)

When prompted, run the rootupgrade script as root on both nodes:

[root@rac1 ~]# /u03/app/oracle/product/crs/10.2/install/rootupgrade


Here is the output from the first node:

Checking to see if Oracle CRS stack is already up...

copying ONS config file to 11.1 CRS home
/bin/cp: `/u03/app/oracle/product/crs/10.2/opmn/conf/ons.config' and `/u03/app/oracle/product/crs/10.2/opmn/conf/ons.config' are the same file
/u03/app/oracle/product/crs/10.2/opmn/conf/ons.config was copied successfully to
/u03/app/oracle/product/crs/10.2/opmn/conf/ons.config
WARNING: directory '/u03/app/oracle/product/crs' is not owned by root
WARNING: directory '/u03/app/oracle/product' is not owned by root
WARNING: directory '/u03/app/oracle' is not owned by root
WARNING: directory '/u03/app' is not owned by root
WARNING: directory '/u03' is not owned by root
Oracle Cluster Registry configuration upgraded successfully
Adding daemons to inittab
Attempting to start Oracle Clusterware stack
Waiting for Cluster Synchronization Services daemon to start
Waiting for Cluster Synchronization Services daemon to start
Waiting for Cluster Synchronization Services daemon to start
Waiting for Cluster Synchronization Services daemon to start
Cluster Synchronization Services daemon has started
Event Manager daemon has started
Cluster Ready Services daemon has started
Oracle CRS stack is running under init(1M)
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: rac1 rac1-priv rac1
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
CRS stack on this node, is successfully
upgraded to 11.1.0.6.0
Checking the existence of nodeapps on this node
Creating '/u03/app/oracle/product/crs/10.2/install/paramfile.crs'
with data used for CRS configuration
Setting CRS configuration values in
/u03/app/oracle/product/crs/10.2/install/paramfile.crs


Here is the output from the remote node:

[root@rac2 ~]# /u03/app/oracle/product/crs/10.2/install/rootupgrade
Checking to see if Oracle CRS stack is already up...

copying ONS config file to 11.1 CRS home
/bin/cp: `/u03/app/oracle/product/crs/10.2/opmn/conf/ons.config' and
`/u03/app/oracle/product/crs/10.2/opmn/conf/ons.config' are the same file
/u03/app/oracle/product/crs/10.2/opmn/conf/ons.config was copied successfully to
/u03/app/oracle/product/crs/10.2/opmn/conf/ons.config
WARNING: directory '/u03/app/oracle/product/crs' is not owned by root
WARNING: directory '/u03/app/oracle/product' is not owned by root
WARNING: directory '/u03/app/oracle' is not owned by root
WARNING: directory '/u03/app' is not owned by root
WARNING: directory '/u03' is not owned by root
Oracle Cluster Registry configuration upgraded successfully
Adding daemons to inittab
Attempting to start Oracle Clusterware stack
Waiting for Cluster Synchronization Services daemon to start
Waiting for Cluster Synchronization Services daemon to start
Waiting for Cluster Synchronization Services daemon to start
Waiting for Cluster Synchronization Services daemon to start
Cluster Synchronization Services daemon has started
Event Manager daemon has started
Cluster Ready Services daemon has started
Oracle CRS stack is running under init(1M)
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
CRS stack on this node, is successfully
upgraded to 11.1.0.6.0
Checking the existence of nodeapps on this node
Creating '/u03/app/oracle/product/crs/10.2/install/paramfile.crs' with data used for CRS configuration
Setting CRS configuration values in /u03/app/oracle/product/crs/10.2/install/paramfile.crs


After running the root scripts, press 'OK' in the pop-up window. The Installer will then run the Cluster Verification Utility. Once that completes successfully, press Exit.
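
If you want to repeat that verification by hand, cluvfy can run the same post-installation check; a sketch, assuming the standard cluvfy stage syntax against both nodes:

[oracle@rac1 ~]$ /u03/app/oracle/product/crs/10.2/bin/cluvfy stage -post crsinst -n rac1,rac2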

Now install the Database Software onto both nodes; afterwards, I will upgrade the database.

First, add a new entry to the oratab for your 11g Installation:

echo "11g:/u03/app/oracle/product/db/11g:N" >> /etc/oratab
[oracle@rac1 ~]$ export ORACLE_SID=11g
[oracle@rac1 ~]$ . oraenv
[oracle@rac1 ~]$ /u03/database/runInstaller


Follow the Universal Installer prompts to install on both nodes:

Next
> Choose Edition (Enterprise, in my case), Next
> Specify the Name and the path of the new $ORACLE_HOME, Next
> Specify the nodes for installation, Next
> Prerequisites, Next
> Select 'No' for Upgrade Database, Next
> Choose Install Software Only, Next
> Assign O/S groups to Operations, Next
> Install

Run the root.sh script on both nodes:

[root@rac1 crs] # /u03/app/oracle/product/db/11g/root.sh

Running Oracle 11g root.sh script...

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u03/app/oracle/product/db/11g

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.


After running the root scripts, press 'OK' in the pop-up window. The installation is now complete; press Exit.

Next, I migrate the listeners that run out of the 10.2.0.3 home to the new 11g $ORACLE_HOME - do this on both nodes:

[oracle@rac1 10.2]$ export ORACLE_SID=cubs1
[oracle@rac1 10.2]$ . oraenv
[oracle@rac1 10.2]$ cd $ORACLE_HOME/network/admin
[oracle@rac1 admin]$ ls -lart
total 52
-rw-r--r-- 1 oracle dba 172 Dec 26 2003 shrept.lst
drwxr-x--- 2 oracle dba 4096 Sep 21 19:40 samples
drwxr-x--- 11 oracle dba 4096 Sep 21 19:44 ..
-rw-r--r-- 1 oracle dba 574 Sep 25 15:54 listener.ora
-rw-r--r-- 1 oracle dba 230 Sep 28 15:11 ldap.ora
-rw-r--r-- 1 oracle dba 16776 Sep 28 15:29 sqlnet.log
-rw-r--r-- 1 oracle dba 3786 Sep 28 16:51 tnsnames.ora
-rw-r--r-- 1 oracle dba 218 Sep 28 16:52 sqlnet.ora
drwxr-x--- 3 oracle dba 4096 Sep 28 16:52 .
[oracle@rac1 admin]$ lsnrctl stop LISTENER_RAC1
[oracle@rac1 admin]$ mv listener.ora /u03/app/oracle/product/db/11g/network/admin/.
[oracle@rac1 admin]$ cp tnsnames.ora sqlnet.ora ldap.ora /u03/app/oracle/product/db/11g/network/admin/.
[oracle@rac1 admin]$ ps -ef | grep tns | grep -v grep | awk '{print $2}' | xargs kill
[oracle@rac1 admin]$ export ORACLE_SID=11g
[oracle@rac1 admin]$ . oraenv
[oracle@rac1 admin]$ lsnrctl start LISTENER_RAC1



Now that the listener has been moved to the highest version $ORACLE_HOME on the machine, I am ready to upgrade a 10.2.0.3 clustered database to 11g (11.1.0.6.0).

First, I will run the Pre-Upgrade Information Tool:

[oracle@rac1 admin]$ cp /u03/app/oracle/product/db/11g/rdbms/admin/utlu111i.sql /tmp
[oracle@rac1 admin]$ cd /tmp
[oracle@rac1 tmp]$ export ORACLE_SID=cubs1
[oracle@rac1 tmp]$ . oraenv
[oracle@rac1 tmp]$ sqlplus "/ as sysdba"

SQL*Plus: Release 10.2.0.3.0 - Production on Sun Sep 30 17:23:09 2007

Copyright (c) 1982, 2006, Oracle. All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

SQL> spool upgrade_info.log
SQL> @/tmp/utlu111i.sql
SQL> spool off;



After examining the upgrade log, I have to perform the following tasks:


* Increase the sga_target parameter to at least 336MB

* Replace the deprecated *_dest parameters with the new all-inclusive diagnostic_dest

* Patch - via opatch - the 10.2.0.3 databases and $ORACLE_HOMEs to use version 4 of the time zone file; this is patch number 5632264. You can verify correct application via the v$timezone_file view. Remember that during the one-off patch, as well as during the upgrade to 11g, the RAC database must have the cluster_database parameter set to FALSE so that the database can be mounted exclusive, as sketched below.
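
Here is a minimal sketch of that sequence, assuming the patch has been staged under /tmp/5632264 (a hypothetical path):

[oracle@rac1 ~]$ sqlplus "/ as sysdba"
SQL> -- mounting exclusive requires cluster_database=FALSE on all instances
SQL> alter system set cluster_database=FALSE scope=spfile sid='*';
SQL> exit
[oracle@rac1 ~]$ # stop all instances, then patch each 10.2.0.3 home via opatch
[oracle@rac1 ~]$ cd /tmp/5632264
[oracle@rac1 5632264]$ $ORACLE_HOME/OPatch/opatch apply
[oracle@rac1 5632264]$ sqlplus "/ as sysdba"
SQL> -- after restarting, confirm the time zone file version
SQL> select * from v$timezone_file;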

The next step is to migrate the password file and the modified init.ora.
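
The /tmp/initcubs.ora referenced below is simply the running 10.2 parameter file dumped to text and edited per the pre-upgrade findings (sga_target, diagnostic_dest, and so on); a minimal sketch, assuming the 10.2 instance is still up:

[oracle@rac1 tmp]$ export ORACLE_SID=cubs1
[oracle@rac1 tmp]$ . oraenv
[oracle@rac1 tmp]$ sqlplus "/ as sysdba"
SQL> -- dump the current spfile to an editable text pfile
SQL> create pfile='/tmp/initcubs.ora' from spfile;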

On the first node:

[oracle@rac1 tmp]$ export ORACLE_SID=11g
[oracle@rac1 tmp]$ . oraenv
[oracle@rac1 tmp]$ echo $ORACLE_HOME
/u03/app/oracle/product/db/11g
[oracle@rac1 tmp]$ export TNS_ADMIN=$ORACLE_HOME/network/admin
[oracle@rac1 tmp]$ export ORACLE_SID=cubs1
[oracle@rac1 tmp]$ echo $PATH
/u03/app/oracle/product/db/11g/bin:/usr/sbin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/home/oracle/bin
[oracle@rac1 tmp]$ cd $ORACLE_HOME/dbs
[oracle@rac1 dbs]$ sqlplus "/ as sysdba"

SQL*Plus: Release 11.1.0.6.0 - Production on Mon Oct 1 15:09:34 2007

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Connected to an idle instance.

SQL> create spfile='/u02/app/oradata2/spfile/spfilecubs.ora' from pfile='/tmp/initcubs.ora';

File created.

SQL> exit
Disconnected
[oracle@rac1 dbs]$ ln -s /u02/app/oradata/dbs/orapwcubs orapwcubs1
[oracle@rac1 dbs]$ ln -s /u02/app/oradata2/spfile/spfilecubs.ora spfilecubs1.ora


On the second node:

[oracle@rac2 dbs]$ export ORACLE_SID=11g
[oracle@rac2 dbs]$ . oraenv
[oracle@rac2 dbs]$ cd $ORACLE_HOME/dbs
[oracle@rac2 dbs]$ ln -s /u02/app/oradata/dbs/orapwcubs orapwcubs2
[oracle@rac2 dbs]$ ln -s /u02/app/oradata2/spfile/spfilecubs.ora spfilecubs2.ora


Next, upgrade the database by running the appropriate scripts; on the primary node:

[oracle@rac1 dbs]$ sqlplus "/ as sysdba"

SQL*Plus: Release 11.1.0.6.0 - Production on Mon Oct 1 15:16:49 2007

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup upgrade;
ORACLE instance started.

Total System Global Area 351522816 bytes
Fixed Size 1299876 bytes
Variable Size 155191900 bytes
Database Buffers 188743680 bytes
Redo Buffers 6287360 bytes
Database mounted.
Database opened.

SQL> @$ORACLE_HOME/rdbms/admin/catupgrd.sql


Once the upgrade completes, the database will automatically be shut down by the catupgrd.sql script. Start the database back up, run the post-upgrade scripts that don't require exclusive access to the database, and verify that there are no invalid objects (see the check after the script run below):

[oracle@rac1 dbs]$ sqlplus "/ as sysdba"

SQL*Plus: Release 11.1.0.6.0 - Production on Mon Oct 1 16:15:08 2007

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup nomount;
ORACLE instance started.

Total System Global Area 351522816 bytes
Fixed Size 1299876 bytes
Variable Size 184552028 bytes
Database Buffers 159383552 bytes
Redo Buffers 6287360 bytes
SQL> alter system set cluster_database=TRUE scope=spfile;

System altered.

SQL> shutdown immediate;
ORA-01507: database not mounted


ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 351522816 bytes
Fixed Size 1299876 bytes
Variable Size 184552028 bytes
Database Buffers 159383552 bytes
Redo Buffers 6287360 bytes
Database mounted.
Database opened.

SQL> @$ORACLE_HOME/rdbms/admin/utlu111s.sql
SQL> @$ORACLE_HOME/rdbms/admin/catuppst.sql
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql
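
To confirm utlrp.sql left nothing invalid behind, a simple check from the same session:

SQL> select count(*) from dba_objects where status = 'INVALID';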


Update the oratab on both nodes and re-source the environment via oraenv.

On the first node:

[oracle@rac1 dbs]$ vi /etc/oratab
[oracle@rac1 dbs]$ grep cubs1 /etc/oratab
cubs1:/u03/app/oracle/product/db/11g:N
[oracle@rac1 dbs]$ . oraenv
The Oracle base for ORACLE_HOME=/u03/app/oracle/product/db/11g is /u03/app/oracle
[oracle@rac1 dbs]$ echo $PATH
/u03/app/oracle/product/db/11g/bin:/usr/sbin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/home/oracle/bin


On the second node:

[oracle@rac2 dbs]$ vi /etc/oratab
[oracle@rac2 dbs]$ grep cubs2 /etc/oratab
cubs2:/u03/app/oracle/product/db/11g:N
[oracle@rac2 dbs]$ . oraenv
The Oracle base for ORACLE_HOME=/u03/app/oracle/product/db/11g is /u03/app/oracle
[oracle@rac2 dbs]$ echo $PATH
/u03/app/oracle/product/db/11g/bin:/usr/sbin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/home/oracle/bin


Update the Oracle Cluster Registry (OCR) to reflect the new 11g home of the upgraded database, including the previously migrated listeners:

[oracle@rac1 bin]$ export ORACLE_SID=10.2.0.3
[oracle@rac1 bin]$ . oraenv
The Oracle base for ORACLE_HOME=/u03/app/oracle/product/db/10.2 is /u03/app/oracle
[oracle@rac1 bin]$ srvctl remove database -d cubs
Remove the database cubs? (y/[n]) y
[oracle@rac1 bin]$ export ORACLE_SID=11g
[oracle@rac1 bin]$ . oraenv
[oracle@rac1 bin]$ srvctl add database -d cubs -o /u03/app/oracle/product/db/11g
[oracle@rac1 bin]$ srvctl add instance -d cubs -i cubs1 -n rac1
[oracle@rac1 bin]$ srvctl add instance -d cubs -i cubs2 -n rac2
[oracle@rac1 bin]$ srvctl modify listener -n rac1 -l LISTENER_RAC1 -o /u03/app/oracle/product/db/11g
[oracle@rac1 bin]$ srvctl modify listener -n rac2 -l LISTENER_RAC2 -o /u03/app/oracle/product/db/11g
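
To confirm the re-registration took, the configuration can be read back with srvctl config:

[oracle@rac1 bin]$ srvctl config database -d cubs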


Next, I will raise the compatible parameter of the database and cycle the database to verify; note that once compatible has been raised, it cannot be lowered again:

[oracle@rac1 bin]$ export ORACLE_SID=cubs1
[oracle@rac1 bin]$ . oraenv
The Oracle base for ORACLE_HOME=/u03/app/oracle/product/db/11g is /u03/app/oracle
[oracle@rac1 bin]$ sqlplus "/ as sysdba"

SQL*Plus: Release 11.1.0.6.0 - Production on Mon Oct 1 16:51:17 2007

Copyright (c) 1982, 2007, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> alter system set compatible='11.0.0' scope=spfile;

System altered.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
[oracle@rac1 bin]$ srvctl stop database -d cubs
[oracle@rac1 bin]$ srvctl start database -d cubs
[oracle@rac1 ~]$ sqlplus "/ as sysdba"

SQL*Plus: Release 11.1.0.6.0 - Production on Mon Oct 1 18:03:20 2007

Copyright (c) 1982, 2007, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> show parameter compatible

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
compatible string 11.0.0


If applicable, upgrade your EM Agents to reflect the software changes (both nodes):

[oracle@rac2 ~]$ export ORACLE_SID=agent10g
[oracle@rac2 ~]$ . oraenv
The Oracle base for ORACLE_HOME=/u03/app/oracle/product/agent10g is /u03/app/oracle
[oracle@rac2 ~]$ agentca -d

Stopping the agent using /u03/app/oracle/product/agent10g/bin/emctl stop agent
Oracle Enterprise Manager 10g Release 3 Grid Control 10.2.0.3.0.
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
Stopping agent ... stopped.
Running agentca using /u03/app/oracle/product/agent10g/oui/bin/runConfig.sh ORACLE_HOME=/u03/app/oracle/product/agent10g ACTION=Configure MODE=Perform RESPONSE_FILE=/u03/app/oracle/product/agent10g/response_file RERUN=TRUE INV_PTR_LOC=/etc/oraInst.loc COMPONENT_XML={oracle.sysman.top.agent.10_2_0_1_0.xml}
Perform - mode is starting for action: Configure


Perform - mode finished for action: Configure

You can see the log file: /u03/app/oracle/product/agent10g/cfgtoollogs/oui/configActions2007-10-01_06-05-43-PM.log


One of my last steps is to take a backup; therefore, I will need to upgrade my RMAN catalog and/or create a new one. I have opted to create a new one, so that the other databases that I back up are not disturbed (the upgrade-in-place alternative is sketched after the user creation below). First, I create a new RMAN database user for my repository database:

DROP USER RMAN_11g CASCADE;
CREATE USER RMAN_11g
IDENTIFIED BY "rman"
DEFAULT TABLESPACE USERS
TEMPORARY TABLESPACE TEMP
PROFILE DEFAULT
ACCOUNT UNLOCK;
-- 1 Role for RMAN
GRANT RECOVERY_CATALOG_OWNER TO RMAN_11g;
ALTER USER RMAN_11g DEFAULT ROLE ALL;
-- 1 System Privilege for RMAN
GRANT CREATE SESSION TO RMAN_11g;
-- 1 Tablespace Quota for RMAN
ALTER USER RMAN_11G QUOTA UNLIMITED ON USERS;
DROP USER RMAN_11g CASCADE
Error at line 1
ORA-01918: user 'RMAN_11G' does not exist

User created.
Grant complete.
User altered.
Grant complete.
User altered.
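
For the record, had I upgraded the existing catalog in place instead, the flow would have been roughly the following sketch, assuming the existing catalog owner connects as rman/rman@rman (hypothetical credentials); RMAN requires the UPGRADE CATALOG command to be entered a second time to confirm:

[oracle@rac2 ~]$ rman catalog=rman/rman@rman
RMAN> upgrade catalog;
RMAN> upgrade catalog;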


Now that I have an RMAN catalog owner on the repository database, it is time for me to create a new catalog, register my 11g RAC database and add back my customizations:

[oracle@rac2 ~]$ export ORACLE_SID=cubs2
[oracle@rac2 ~]$ . oraenv
The Oracle base for ORACLE_HOME=/u03/app/oracle/product/db/11g is /u03/app/oracle
[oracle@rac2 ~]$ rman target=/ catalog=rman_11g/rman@rman

Recovery Manager: Release 11.1.0.6.0 - Production on Tue Oct 2 13:39:46 2007

Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: CUBS (DBID=2121269038)
connected to recovery catalog database

RMAN> create catalog;

recovery catalog created

RMAN> register database;

database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

RMAN> @/home/oracle/bin/backup_scripts.rman

RMAN> replace script 'cubs_full_backup' {
2> sql "alter session set optimizer_mode=RULE";
3> allocate channel ch device type disk format '/u03/app/oracle/orabackup/cubs_%U.rman';
4> backup full database plus archivelog delete input;
5> }
replaced script cubs_full_backup

RMAN>
RMAN> replace script 'cleanup_catalog' {
2> sql "alter session set optimizer_mode=RULE";
3> allocate channel ch device type disk;
4> crosscheck backup;
5> delete noprompt expired backup;
6> }
replaced script cleanup_catalog

RMAN> **end-of-file**



Afterwards, I perform a backup using one of my stored scripts:

RMAN> run { execute script cubs_full_backup; }

executing script: cubs_full_backup

sql statement: alter session set optimizer_mode=RULE

allocated channel: ch
channel ch: SID=103 instance=cubs2 device type=DISK


Starting backup at 02-OCT-07
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=2 sequence=108 RECID=241 STAMP=634918386
channel ORA_DISK_1: starting piece 1 at 02-OCT-07
channel ORA_DISK_1: finished piece 1 at 02-OCT-07
piece handle=/u03/1qitg5fm_1_1 tag=TAG20071002T141310 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u02/app/oradata/cubs/arch/cubs_2_108_633910001.arc RECID=241 STAMP=634918386
Finished backup at 02-OCT-07

Starting backup at 02-OCT-07
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u02/app/oradata/cubs/system01.dbf
input datafile file number=00003 name=/u02/app/oradata/cubs/sysaux01.dbf
input datafile file number=00002 name=/u02/app/oradata/cubs/undotbs01.dbf
input datafile file number=00005 name=/u02/app/oradata/cubs/undotbs02.dbf
input datafile file number=00004 name=/u02/app/oradata/cubs/users01.dbf
channel ORA_DISK_1: starting piece 1 at 02-OCT-07
channel ORA_DISK_1: finished piece 1 at 02-OCT-07
piece handle=/u03/1ritg5fo_1_1 tag=TAG20071002T141311 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:55
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 02-OCT-07
channel ORA_DISK_1: finished piece 1 at 02-OCT-07
piece handle=/u03/1sitg5hf_1_1 tag=TAG20071002T141311 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 02-OCT-07

Starting backup at 02-OCT-07
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=2 sequence=109 RECID=242 STAMP=634918451
channel ORA_DISK_1: starting piece 1 at 02-OCT-07
channel ORA_DISK_1: finished piece 1 at 02-OCT-07
piece handle=/u03/1titg5hl_1_1 tag=TAG20071002T141413 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u02/app/oradata/cubs/arch/cubs_2_109_633910001.arc RECID=242 STAMP=634918451
Finished backup at 02-OCT-07


On my test machine, I have a lot of products installed and simply export the TNS_ADMIN variable instead of doing a lot of symbolic linking and/or network file maintenance. In this situation, I run the following command for any database CRS resource that I still want to run out of the old 10.2.0.3 home (for more information see Metalink Note 360575.1):

srvctl setenv database -d rman -t TNS_ADMIN=/u03/app/oracle/product/db/11g/network/admin
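
You can verify that the setting stuck with the matching getenv call:

[oracle@rac1 ~]$ srvctl getenv database -d rman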
