Oracle's Sun Database Machine X2-2 Setup / Configuration Best Practices
Applies to:
Oracle Exadata Storage Server Software - Version: 11.2.1.2.0 to 11.2.2.1.1 - Release: 11.2 to 11.2
The information in this document applies to any platform.
Purpose
The goal of this document is to present the best practices for the deployment of Sun Oracle Database Machine X2-2 in the area of Setup and Configuration.
Scope and Application
This document is intended for a general audience working on the Sun Oracle Database Machine X2-2.
The Primary and standby databases should NOT reside on the same IB fabric
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | N/A | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
Risk:
If the primary and standby reside on the same IB fabric, both primary and standby systems can be unavailable due to a bug causing an IB fabric failure.
Action / Repair: The primary and at least one viable standby database must not reside on the same inter-racked Exadata Database Machine. The communication between the primary and standby Exadata Database Machines must use GigE or 10GigE. The trade-off is lower network bandwidth. The higher network bandwidth is desirable for standby database instantiation (which should only be done the first time), but that requirement is eliminated for post-failover operations when flashback database is enabled.
Use the hostname and domain name in lower case
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | N/A | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
Using lowercase will avoid known deployment time issues.
Risk:
OneCommand deployment will fail in step 16 if this is not done. This will abort the installation with:
"ERROR: unable to locate the file to check for the string 'Configure Oracle Grid Infrastructure for the Cluster, a ... succeeded' # Step 16 #"
Action / Repair:
As a best practice, use lower case for the hostnames and domain names.
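As a quick pre-deployment sanity check, a one-liner such as the following (a minimal sketch, not part of OneCommand) can flag uppercase characters in the fully qualified hostname on a server:
hostname -f | grep -q '[A-Z]' && echo "WARNING: hostname or domain name contains uppercase characters"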
Verify Hardware and Firmware on Database and Storage Servers (CheckHWnFWProfile)
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | N/A | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
The Oracle Exadata Database Machine is tightly integrated, and verifying the hardware and firmware before the Oracle Exadata Database Machine is placed into or returned to production status can avoid problems related to the hardware or firmware modifications.
The impact for these verification steps is minimal.
Risk:
If the hardware and firmware are not validated, inconsistencies between database and storage servers can lead to problems and outages.
Action / Repair:
To verify the hardware and firmware configuration execute the following command as the root userid:
/opt/oracle.SupportTools/CheckHWnFWProfile
The output will contain a line similar to the following:
[SUCCESS] The hardware and firmware profile matches one of the supported profile
If any result other than "SUCCESS" is returned, investigate and correct the condition. NOTE: CheckHWnFWProfile is also executed at each boot of the storage and database servers.
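To run the same check across every database and storage server in one pass, a hedged sketch using dcli and the standard onecommand all_group host list (adjust the group file path if your deployment differs) is:
dcli -g /opt/oracle.SupportTools/onecommand/all_group -l root /opt/oracle.SupportTools/CheckHWnFWProfile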
Verify Software on Storage Servers (CheckSWProfile.sh)
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | N/A | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
Verifying the software configuration after initial deployment, upgrades, or patching and before the Oracle Exadata Database Machine is placed into or returned to production status can avoid problems related to the software modifications.
The overhead for these verification steps is minimal.
Risk:
If the software is not validated, inconsistencies between database and storage servers can lead to problems and outages.
Action / Repair:
To verify the storage server software configuration execute the following command as the root userid:
/opt/oracle.SupportTools/CheckSWProfile.sh -c
The output will be similar to:
[INFO] SUCCESS: Meets requirements of operating platform and installed software for
[INFO] below listed releases and patches of Exadata and of corresponding Database.
[INFO] Check does NOT verify correctness of configuration for the installed software.
[ExadataAndDatabaseReleases] Exadata: 11.2.2.1.0 OracleDatabase: 11.2.0.2+Patches
If any result other than "SUCCESS" is returned, investigate and correct the condition.
Verify Software on InfiniBand Switches (CheckSWProfile.sh)
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | N/A | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
Verifying the software configuration after initial deployment, upgrades, or patching and before the Oracle Exadata Database Machine is placed into or returned to production status can avoid problems related to the software modifications.
The overhead for these verification steps is minimal.
Risk:
If the software is not validated, problems may occur when the machine is utilized.
Action / Repair:
The commands required to verify the InfiniBand switches software configuration vary slightly by the physical configuration of the Oracle Exadata Database Machine. The key difference is whether or not the physical configuration includes a designated spine switch.
To verify the InfiniBand switches software configuration for an X2-8, a full rack Oracle Exadata Database Machine X2-2, or a late production model half rack Oracle Exadata Database Machine X2-2 with a designated spine switch properly configured per the "Oracle Exadata Database Machine Owner's Guide 11g Release 2 (11.2) E13874-15", with "sm_priority = 8" and the name "RanDomsw-ib1", execute the following command as the "root" userid on one of the database servers:
/opt/oracle.SupportTools/CheckSWProfile.sh -I IS_SPINERanDomsw-ib1,RanDomsw-ib3,RanDomsw-ib2
Where "RanDomsw-ib1, RanDomsw-ib3, and RanDomsw-ib2" are the switch names returned by the "ibswitches" command.
NOTE: There is no space between the "IS_SPINE" qualifier and the name of the designated spine switch.
The output will be similar to:
Checking if switch RanDomsw-ib1 is pingable...
Checking if switch RanDomsw-ib3 is pingable...
Checking if switch RanDomsw-ib2 is pingable...
Use the default password for all switches? (y/n) [n]: y
[INFO] SUCCESS Switch RanDomsw-ib1 has correct software and firmware version: SWVer: 1.3.3-2
[INFO] SUCCESS Switch RanDomsw-ib1 has correct opensm configuration: controlled_handover=TRUE polling_retry_number=5 routing_engine=ftree sminfo_polling_timeout=1000 sm_priority=8
[INFO] SUCCESS Switch RanDomsw-ib3 has correct software and firmware version: SWVer: 1.3.3-2
[INFO] SUCCESS Switch RanDomsw-ib3 has correct opensm configuration: controlled_handover=TRUE polling_retry_number=5 routing_engine=ftree sminfo_polling_timeout=1000 sm_priority=5
[INFO] SUCCESS Switch RanDomsw-ib2 has correct software and firmware version: SWVer: 1.3.3-2
[INFO] SUCCESS Switch RanDomsw-ib2 has correct opensm configuration: controlled_handover=TRUE polling_retry_number=5 routing_engine=ftree sminfo_polling_timeout=1000 sm_priority=5
[INFO] SUCCESS All switches have correct software and firmware version: SWVer: 1.3.3-2
[INFO] SUCCESS All switches have correct opensm configuration: controlled_handover=TRUE polling_retry_number=5 routing_engine=ftree sminfo_polling_timeout=1000 sm_priority=5 for non spine and 8 for spine switch
To verify the InfiniBand switches software configuration for an early production model half rack Oracle Exadata Database Machine X2-2 (which may not have shipped with a designated spine switch), or a quarter rack Oracle Exadata Database Machine X2-2 properly configured per the "Oracle Exadata Database Machine Owner's Guide 11g Release 2 (11.2) E13874-15", execute the following command as the "root" userid on one of the database servers:
/opt/oracle.SupportTools/CheckSWProfile.sh -I RanDomsw-ib3,RanDomsw-ib2
Where "RanDomsw-ib3 and RanDomsw-ib2" are the switch names returned by the "ibswitches" command.
The output will be similar to the output for the first command, but there will be no references to a spine switch and all switches will have "sm_priority" of 5.
In either case, the expected output is "SUCCESS". If anything else is returned, investigate and correct the condition.
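If you need the switch names for the -I argument, a minimal sketch that lists them (assuming the quoted string on each line of the ibswitches output is the switch name) is:
ibswitches | sed -n 's/.*"\(.*\)".*/\1/p'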
Verify InfiniBand Cable Connection Quality
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | N/A | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
InfiniBand cables require proper connections for optimal efficiency. Verifying the InfiniBand cable connection quality helps to ensure that the InfiniBand network operates at optimal efficiency.
There is minimal impact to verify InfiniBand cable connection quality.
Risk:
InfiniBand cables that are not properly connected may negotiate to a lower speed, work intermittently, or fail.
Action / Repair:
Execute the following command on all database and storage servers:
for ib_cable in `ls /sys/class/net | grep ^ib`; do printf "$ib_cable: "; cat /sys/class/net/$ib_cable/carrier; done
The output should look similar to:
ib0: 1
ib1: 1
If anything other than "1" is reported, investigate that cable connection.
NOTE: Storage servers should report 2 connections. X2-2 (4170) and X2-2 database servers should report 2 connections. X2-8 database servers should report 8 connections.
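A minimal sketch to count the active InfiniBand links on the local server (compare the count against the NOTE above) is:
cat /sys/class/net/ib*/carrier | grep -c '^1$'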
Verify Ethernet Cable Connection Quality
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | N/A | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
Ethernet cables require proper connections for optimal efficiency. Verifying the Ethernet cable connection quality helps to ensure that the Ethernet network operates at optimal efficiency.
There is minimal impact to verify the Ethernet cable connection quality.
Risk:
Ethernet cables that are not properly connected may negotiate to a lower speed, work intermittently, or fail.
Action / Repair:
Execute the following command as the root userid on all database and storage servers:
for cable in `ls /sys/class/net | grep ^eth`; do printf "$cable: "; cat /sys/class/net/$cable/carrier; done
The output should look similar to:
eth0: 1
eth1: cat: /sys/class/net/eth1/carrier: Invalid argument
eth2: cat: /sys/class/net/eth2/carrier: Invalid argument
eth3: cat: /sys/class/net/eth3/carrier: Invalid argument
eth4: 1
eth5: 1
"Invalid argument" usually indicates the device has not been configured and is not in use. If a device reports "0", investigate that cable connection.
NOTE: Within machine types, the output of this command will vary by customer depending on how the customer chooses to configure the available ethernet cards.
Verify InfiniBand Fabric Topology (verify-topology)
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | N/A | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
Verifying that the InfiniBand network is configured with the correct topology for an Oracle Exadata Database Machine helps to ensure that the InfiniBand network operates at maximum efficiency.
Risk:
An incorrect InfiniBand topology will cause the InfiniBand network to operate at degraded efficiency, intermittently, or fail to operate.
Action / Repair:
Execute the verify-topology command as shown below:
/opt/oracle.SupportTools/ibdiagtools/verify-topology -t fattree
The output will be similar to:
[DB Machine InfiniBand Cabling Topology Verification Tool]
Is every external switch connected to every internal switch.......... [SUCCESS]
Are any external switches connected to each other.................... [SUCCESS]
Are any hosts connected to spine switch............................... [SUCCESS]
Check if all hosts have 2 CAs to different switches................... [SUCCESS]
Leaf switch check: cardinality and even distribution.................. [SUCCESS]
If anything other than "SUCCESS" is reported, investigate and correct the condition.
Verify No InfiniBand Network Errors (ibqueryerrors)
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | N/A | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
Verifying that there are no high, persistent InfiniBand network error counters helps to maintain the InfiniBand network at peak efficiency.
The impact of verifying there are no InfiniBand network errors is minimal.
Risk:
Without verifying the InfiniBand network error counters, there is a risk that a component will degrade the InfiniBand network performance, yet may not be sending an alert or error condition.
Action / Repair:
Use the command shown below on one of the database or storage servers:
ibqueryerrors.pl -rR -s RcvSwRelayErrors,XmtDiscards,XmtWait
There should be no errors reported. The InfiniBand counters are cumulative, and the errors may have occurred at any time in the past. If there are errors, it is recommended to clear the InfiniBand counters with ibclearcounters, let the system run for a few minutes under load, and then re-execute the ibqueryerrors command. Any links reporting persistent errors (especially RcvErrors or SymbolErrors) may indicate a bad or loose cable or port.
Some counters (eg RcvErrors, SymbolErrors) can increment when nodes are rebooted. Small values for these counters which are less than the "LinkDowned" counter are generally not a problem. The "LinkDowned" counter indicates the number of times the port has gone down (usually for valid reasons, eg reboot) and is not usually an error indicator by itself.
If there are persistent, high InfiniBand network error counters, investigate and correct the condition.
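A hedged sketch of the clear-and-recheck sequence described above (the 10-minute wait is only an illustration; let the system run under a representative load) is:
ibclearcounters
sleep 600
ibqueryerrors.pl -rR -s RcvSwRelayErrors,XmtDiscards,XmtWait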
Verify There Are No Storage Server Memory (ECC) Errors
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | N/A | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
Memory modules that have corrected memory errors (ECC) can show degraded performance, IPMI driver timeouts, and BMC error messages in the /var/log/messages file.
Correcting the condition restores optimal performance.
The impact of checking for memory ECC errors is slight. Correction will likely require a firmware upgrade and reboot, or hardware repair downtime.
Risk:
If not corrected, the faulty memory will lead to performance degradation and other errors.
Action / Repair:
To check for memory ECC errors, run the following command as the root userid on the storage server:
ipmitool sel list | grep ECC | cut -f1 -d: | sort -u
If any errors are reported, take the following actions in order:
- Upgrade to the latest BIOS, as it addresses a potential cause.
- Reseat the DIMMs.
- Open an SR for hardware replacement.
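To run the same ECC check across all storage servers in one pass, a hedged sketch using dcli with a cell host list (the cell_group file name is an assumption; use whatever group file lists your storage servers) is:
dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root "ipmitool sel list | grep ECC | cut -f1 -d: | sort -u"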
Verify Database Server Disk Controller Configuration
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | N/A | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
For X2-2, there are 4 disk drives in a database server controlled by an LSI MegaRAID SAS 9261-8i disk controller. The disks are configured RAID-5 with 3 disks in the RAID set and 1 disk as a hot spare. There is 1 virtual drive created across the RAID set. Verifying the status of the database server RAID devices helps to avoid a possible performance impact, or an outage.
For X2-8, there are 8 disk drives in a database server controlled by an LSI MegaRAID SAS 9261-8i disk controller. The disks are configured RAID-5 with 7 disks in the RAID set and 1 disk as a hot spare. There is 1 virtual drive created across the RAID set. Verifying the status of the database server RAID devices helps to avoid a possible performance impact, or an outage.
The impact of validating the RAID devices is minimal. The impact of corrective actions will vary depending on the specific issue uncovered, and may range from simple reconfiguration to an outage.
Risk:
Not verifying the RAID devices increases the chance of a performance degradation or an outage.
Action / Repair:
To verify the database server disk controller configuration, use the following command:
/opt/MegaRAID/MegaCli/MegaCli64 AdpAllInfo -aALL | grep "Device Present" -A 8
For X2-2, the output will be similar to:
Device Present
================
Virtual Drives    : 1
  Degraded        : 0
  Offline         : 0
Physical Devices  : 5
  Disks           : 4
  Critical Disks  : 0
  Failed Disks    : 0
The expected output is 1 virtual drive, none degraded or offline, 5 physical devices (controller + 4 disks), 4 disks, and no critical or failed disks.
For X2-8, the output will be similar to:
Device Present
================
Virtual Drives    : 1
  Degraded        : 0
  Offline         : 0
Physical Devices  : 11
  Disks           : 8
  Critical Disks  : 0
  Failed Disks    : 0
The expected output is 1 virtual drive, none degraded or offline, 11 physical devices (1 controller + 8 disks + 2 SAS2 expansion ports), 8 disks, and no critical or failed disks.
On X2-8, there is a SAS2 expander on each NEM, which takes in the 8 ports from the Niwot REM and expands them out to both the 8 physical drive slots through the midplane and the 2 external SAS2 expansion ports on each NEM. See the output from the MegaRAID FW event log.
If the reported output differs, investigate and correct the condition.
Verify Database Server Virtual Drive Configuration
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | N/A | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
For X2-2, there are 4 disk drives in a database server controlled by an LSI MegaRAID SAS 9261-8i disk controller. The disks are configured RAID-5 with 3 disks in the RAID set and 1 disk as a hot spare. There is 1 virtual drive created across the RAID set. Verifying the status of the database server RAID devices helps to avoid a possible performance impact, or an outage.
For X2-8, there are 8 disk drives in a database server controlled by an LSI MegaRAID SAS 9261-8i disk controller. The disks are configured RAID-5 with 7 disks in the RAID set and 1 disk as a hot spare. There is 1 virtual drive created across the RAID set. Verifying the status of the database server RAID devices helps to avoid a possible performance impact, or an outage.
The impact of validating the virtual drives is minimal. The impact of corrective actions will vary depending on the specific issue uncovered, and may range from simple reconfiguration to an outage.
Risk:
Not verifying the virtual drives increases the chance of a performance degradation or an outage.
Action / Repair:
To verify the database server virtual drive configuration, use the following command:
/opt/MegaRAID/MegaCli/MegaCli64 CfgDsply -aALL | grep "Virtual Drive:"; /opt/MegaRAID/MegaCli/MegaCli64 CfgDsply -aALL | grep "Number Of Drives"; /opt/MegaRAID/MegaCli/MegaCli64 CfgDsply -aALL | grep "^State"
For X2-2, the output should be similar to:
Virtual Drive: 0 (Target Id: 0)
Number Of Drives: 3
State: Optimal
The expected result is that the virtual device has 3 drives and a state of optimal.
For X2-8, the output should be similar to:
Virtual Drive: 0 (Target Id: 0)
Number Of Drives: 7
State: Optimal
The expected result is that the virtual device has 7 drives and a state of optimal.
If the reported output differs, investigate and correct the condition.
NOTE: The virtual device number reported may vary depending upon configuration and version levels.
NOTE: If a bare metal restore procedure is performed on a database server without using the "dualboot=no" configuration, that database server may be left with three virtual devices for X2-2 and seven for X2-8. Please see My Oracle Support note 1323309.1 for additional information and correction instructions.
Verify Database Server Physical Drive Configuration
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | N/A | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
For X2-2, there are 4 disk drives in a database server controlled by an LSI MegaRAID SAS 9261-8i disk controller. The disks are configured RAID-5 with 3 disks in the RAID set and 1 disk as a hot spare. There is 1 virtual drive created across the RAID set. Verifying the status of the database server RAID devices helps to avoid a possible performance impact, or an outage.
For X2-8, there are 8 disk drives in a database server controlled by an LSI MegaRAID SAS 9261-8i disk controller. The disks are configured RAID-5 with 7 disks in the RAID set and 1 disk as a hot spare. There is 1 virtual drive created across the RAID set. Verifying the status of the database server RAID devices helps to avoid a possible performance impact, or an outage.
The impact of validating the physical drives is minimal. The impact of corrective actions will vary depending on the specific issue uncovered, and may range from simple reconfiguration to an outage.
Risk:
Not verifying the physical drives increases the chance of a performance degradation or an outage.
Action / Repair:
To verify the database server physical drive configuration, use the following command:
/opt/MegaRAID/MegaCli/MegaCli64 PDList -aALL | grep "Firmware state"
For X2-2, the output will be similar to:
Firmware state: Online, Spun Up
Firmware state: Online, Spun Up
Firmware state: Online, Spun Up
Firmware state: Hotspare, Spun down
There should be three lines of output showing a state of "Online, Spun Up", and one line showing a state of "Hotspare, Spun down". The ordering of the output lines is not significant and may vary based upon a given database server's physical drive replacement history.
For X2-8, the output will be similar to:
Firmware state: Online, Spun Up
Firmware state: Online, Spun Up
Firmware state: Online, Spun Up
Firmware state: Online, Spun Up
Firmware state: Online, Spun Up
Firmware state: Online, Spun Up
Firmware state: Online, Spun Up
Firmware state: Hotspare, Spun down
There should be seven lines of output showing a state of "Online, Spun Up", and one line showing a state of "Hotspare, Spun down". The ordering of the output lines is not significant and may vary based upon a given database server's physical drive replacement history.
If the reported output differs, investigate and correct the condition.
Verify InfiniBand is the Private Network for Oracle Clusterware Communication
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | N/A | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
The InfiniBand network in an Oracle Exadata Database Machine provides superior performance and throughput characteristics that allow Oracle Clusterware to operate at optimal efficiency.
The overhead for these verification steps is minimal.
Risk:
If the InfiniBand network is not used for Oracle Clusterware communication, performance will be sub-optimal.
Action / Repair:
The InfiniBand network is preconfigured on the storage servers. Perform the following on the database servers:
Verify the InfiniBand network is the private network used for Oracle Clusterware communication with the following command:
$GI_HOME/bin/oifcfg getif -type cluster_interconnect
For X2-2, the output should be similar to:
bondib0 192.168.8.0 global cluster_interconnect
For X2-8, the output should be similar to:
bondib0 192.168.8.0 global cluster_interconnect
bondib1 192.168.8.0 global cluster_interconnect
bondib2 192.168.8.0 global cluster_interconnect
bondib3 192.168.8.0 global cluster_interconnect
If the InfiniBand network is not the private network used for Oracle Clusterware communication, configure it following the instructions in MOS note 1073502.1, "How to Modify Private Network Interface in 11.2 Grid Infrastructure".
NOTE: It is important to ensure that your public interface is properly marked as public and not private. This can be checked with the oifcfg getif command. If it is inadvertently marked private, you can get errors such as "OS system dependent operation: bind failed with status" and "OS failure message: Cannot assign requested address". It can be corrected with a command like: oifcfg setif -global eth0/:public
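For illustration only (follow MOS note 1073502.1 for the full procedure, which includes Clusterware restarts), a sketch of the oifcfg commands involved, using the example subnets shown in this note, is:
$GI_HOME/bin/oifcfg getif
$GI_HOME/bin/oifcfg setif -global bondib0/192.168.8.0:cluster_interconnect
$GI_HOME/bin/oifcfg setif -global bondeth0/10.204.77.0:public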
Verify the Oracle RAC Databases use the RDS Protocol over InfiniBand Network.
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | N/A | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
The RDS protocol over InfiniBand provides superior performance because it avoids additional memory buffering operations when moving data from process memory to the network interface for IO operations. This includes both IO operations between the Oracle instance and the storage servers, as well as instance to instance block transfers Via Cache Fusion.
There is minimal impact to verify that the RDS protocol is in use. Implementing the RDS protocol over InfiniBand requires an outage to relink the Oracle software.
Risk:
If the Oracle RAC databases do not use RDS protocol over the InfiniBand network, IO operations will be sub-optimal.
Action / Repair:
To verify the RDS protocol is in use by a given Oracle instance, set the ORACLE_HOME and LD_LIBRARY_PATH variables properly for the instance and execute the following command as the oracle userid on each database server where the instance is running:
$ORACLE_HOME/bin/skgxpinfo
The output should be: rds
Note: For Oracle software versions below 11.2.0.2, the skgxpinfo command is not present. For 11.2.0.1, you can copy skgxpinfo to the proper path in your 11.2.0.1 environment from an available 11.2.0.2 environment and execute it against the 11.2.0.1 database home(s) using the provided command.
If the instance is not using the RDS protocol over InfiniBand, relink the Oracle binary using the following commands (with variables properly defined for each home being linked); a consolidated sketch follows the steps below:
Note: An alternative check (regardless of Oracle software version) is to scan each instance's alert log (it must contain a startup sequence) for the following line:
Cluster communication is configured to use the following interface(s) for this instance
192.168.20.21
cluster interconnect IPC version: Oracle RDS/IP (generic)
- (as oracle) Shut down any processes using the Oracle binary
- If and only if relinking the Grid Infrastructure home, then (as root) GRID_HOME/crs/install/rootcrs.pl -unlock
- (as oracle) cd $ORACLE_HOME/rdbms/lib
- (as oracle) make -f ins_rdbms.mk ipc_rds ioracle
- If and only if relinking the Grid Infrastructure home, then (as root) GRID_HOME/crs/install/rootcrs.pl -patch
Note: Avoid using the relink all command due to various issues. Use the make commands provided.
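A consolidated sketch of the sequence above for a database home (not the Grid Infrastructure home), assuming ORACLE_HOME is already set for the home being relinked and all processes using it have been stopped:
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk ipc_rds ioracle
$ORACLE_HOME/bin/skgxpinfo    # should now print: rds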
Configure Storage Server alerts to be sent via e-mail
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | N/A | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
Oracle Exadata Storage Servers can send various levels of alerts and clear messages via email or snmp, or both. Sending these messages via email at a minimum helps to ensure that a problem is detected and corrected.
There is little impact to storage server operation to send these messages via email.
Risk:
If the storage servers are not configured to send alerts and clear messages via email at a minimum, there is an increased risk of a problem not being detected in a timely manner.
Action / Repair:
Use the following cellcli command to validate the email configuration by sending a test email:
alter cell validate mail;
The output will be similar to: Cell slcc09cel01 successfully altered
If the output is not successful, configure the storage server to send email alerts using the following cellcli command (tailored to your environment):
ALTER CELL smtpServer='mailserver.maildomain.com', -
smtpFromAddr='firstname.lastname@maildomain.com', -
smtpToAddr='firstname.lastname@maildomain.com', -
smtpFrom='Exadata cell', -
smtpPort='', -
smtpUseSSL='TRUE', -
notificationPolicy='critical,warning,clear', -
notificationMethod='mail';
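After configuring the cells, the validation can be repeated across every storage server in one pass; a hedged sketch using dcli with a cell host list (the cell_group file name is an assumption) is:
dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root "cellcli -e alter cell validate mail"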
NOTE: The recommended best practice to monitor an Oracle Exadata Database Machine is with Oracle Enterprise Manager (OEM) and the suite of OEM plugins developed for the Oracle Exadata Database Machine. Please reference My Oracle Support (MOS) note 1110675.1 for details.
Configure NTP and Timezone on the InfiniBand Switches
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | N/A | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
Synchronized timestamps are important to switch operation and message logging, both within an InfiniBand switch and between the InfiniBand switches. There is little impact to correctly configure the switches.
Risk:
If the InfiniBand switches are not correctly configured, there is a risk of improper operation and disjoint message timestamping.
Action / Repair:
The InfiniBand switches should be properly configured during the initial deployment process. If for some reason they were not, please consult the “Configuring Sun Datacenter InfiniBand Switch 36 Switch” section of the “Oracle® Exadata Database Machine Owner's Guide, 11g Release 2 (11.2)”.
Verify NUMA Configuration
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | N/A | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
X2-2 Database servers in Oracle Exadata Database Machine by default are booted with operating system NUMA support enabled. Commands that manipulate large files without using direct I/O on ext3 file systems will cause low memory conditions on the NUMA node (Xeon 5500 processor) currently running the process.
By turning NUMA off, a potential local node low memory condition and subsequent performance drop is avoided.
X2-8 Database servers should have NUMA on
The impact of turning NUMA off is minimal.
Risk:
Once local node memory is depleted, system performance as a whole will be severely impacted.
Action / Repair:
Follow the instructions in MOS Note 1053332.1 to turn NUMA off in the kernel for database servers.
NOTE: NUMA is configured to be off in the storage servers and should not be changed.
Set “mpt_cmd_retry_count=10″ in /etc/modprobe.conf on Storage Servers
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | N/A | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
If a flash card DOM fails, and a storage server is rebooted, the startup sequence can hang for a very long time starting the udev service.
By setting “mpt_cmd_retry_count=10″, the potential delay at boot time is avoided.
The impact of setting “mpt_cmd_retry_count=10″ is minimal, and it will take effect with the next reboot.
Risk:
If rebooted with a failed flash card DOM, a storage server may hang for an extended period.
Action / Repair:
Add the following line to the /etc/modprobe.conf file and reboot the storage server:
options mptsas mpt_cmd_retry_count=10
NOTE: This configuration will be implemented in an upcoming Exadata image patch.
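A hedged sketch to confirm the setting is present on every storage server (again assuming a cell_group host list file for dcli) is:
dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root "grep mpt_cmd_retry_count /etc/modprobe.conf"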
Configure Storage Server Flash Memory as Exadata Smart Flash Cache
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | N/A | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
For the vast majority of situations, maximum performance is achieved by configuring the storage server flash memory as cache, allowing the Exadata software to determine the content of the cache.
The impact of configuring storage server flash memory as cache at initial deployment is minimal. If there are already grid disks configured in the flash memory, consideration must be given as to the relocation of the data when converting the flash memory back to cache.
Risk:
Not configuring the storage server flash memory as cache may result in a degradation of overall performance.
Action / Repair:
To confirm all storage server flash memory is configured as smart flash cache, execute the command shown below:
cellcli -e "list flashcache detail" | grep size
The output will be similar to: size: 365.25G
The expected result is 365.25G. If the size is less than that, it is an indication that some or all of the storage server flash memory has been configured as grid disks. Investigate and correct the condition.
Verify database server disk controllers use writeback cache
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | N/A | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
Database servers use an internal RAID controller with a battery-backed cache to host local filesystems. For maximum performance when writing I/O to local disks, the battery-backed cache should be in “WriteBack” mode.
The impact of configuring the battery-backed cache in “WriteBack” mode is minimal.
Risk:
Not configuring the battery-backed cache in “WriteBack” mode will result in degraded performance when writing I/O to the local database server disks, and there is a risk of data corruption if the node panics.
Action / Repair:
To verify that the disk controller battery-backed cache is in “WriteBack” mode, run the following command on all database servers:
/opt/MegaRAID/MegaCli/MegaCli64 -CfgDsply -a0 | grep -i writethrough
There should be no output returned. If the battery-backed cache is not in “WriteBack” mode, run these commands on the effected server to place the battery-backed cache into “WriteBack” mode:
/opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp WB -Lall -a0
/opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp NoCachedBadBBU -Lall -a0
/opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp NORA -Lall -a0
/opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp Direct -Lall -a0
NOTE: No settings should be modified on Exadata storage cells. The mode described above applies only to database servers in an Exadata database machine.
Verify that “Disk Cache Policy” is set to “Disabled”
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | 06/13/11 | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
Benefit / Impact:
“Disk Cache Policy” is set to “Disabled” by default at imaging time and should not be changed, because the cache created by setting “Disk Cache Policy” to “Enabled” is not battery backed. It is possible that a replacement drive has the disk cache policy enabled, so it's a good idea to check this setting after replacing a drive.
The impact of verifying that “Disk Cache Policy” is set to “Disabled” is minimal. The impact of suddenly losing power with “Disk Cache Policy” set to anything other than “Disabled” will vary according to each specific case, and cannot be estimated here.
Risk:
If the “Disk Cache Policy” is not “Disabled”, there is a risk of data loss in the event of a sudden power loss because the cache created by “Disk Cache Policy” is not backed up by a battery.
Action / Repair:
To verify that “Disk Cache Policy” is set to “Disabled” on all servers, use the following command as the “root” userid on the first database server in the cluster:
dcli -g /opt/oracle.SupportTools/onecommand/all_group -l root /opt/MegaRAID/MegaCli/MegaCli64 -LdPdInfo -aALL | grep -i 'Disk Cache Policy'
The output will be similar to:
randomdb01: Disk Cache Policy : Disabled
randomdb01: Disk Cache Policy : Disabled
randomdb01: Disk Cache Policy : Disabled
randomcel03: Disk Cache Policy : Disabled
randomcel03: Disk Cache Policy : Disabled
randomcel03: Disk Cache Policy : Disabled
randomcel03: Disk Cache Policy : Disabled
randomcel03: Disk Cache Policy : Disabled
If any of the results are other than “Disabled”, identify the LUN in question and reset the “Disk Cache Policy” to “Disabled” using the following command (where Lx = the LUN in question, for example: L2):
MegaCli64 -LDSetProp -DisDskCache -Lx -a0
Note: The “Disk Cache Policy” is completely separate from the disk controller caching mode of “WriteBack”. Do not
confuse the two. The cache created by “WriteBack” cache mode is battery-backed, the cache created by “Disk Cache Policy” is not!
Verify Master (Rack) Serial Number is Set
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | 03/02/11 | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
Benefit/Impact
Setting the Master Serial Number (MSN) (aka Rack Serial Number) assists Oracle Support Services to resolve entitlement issues which may arise. The MSN is listed on a label on the front and the rear of the chassis but is not electronically readable unless this value is set.
The impact to set the MSN is minimal.
Risk
Not having the MSN set for the system may hinder entitlement when opening Service Requests.
Action/Repair
Use the following command to verify that all the MSN's are set correctly and all match:
ipmitool sunoem cli "show /SP system_identifier"
The output should resemble one of the following: EV2: Sun Oracle Database Machine xxxxAKyyyy
X2-2: Exadata Database Machine X2-2 xxxxAKyyyy
X2-8: Exadata Database Machine X2-8 xxxAKyyyy
(MSN's almost always have 4 numbers, the letters 'AK' followed by 4 more numbers)
If none of the values are set, contact X64 Support.
If one is not set correctly, set it to match the others with the command:
ipmitool sunoem cli 'set /SP system_identifier="Exadata Database Machine X2-2 xxxxAKyyyy"'
Verify Management Network Interface (eth0) is on a Separate Subnet
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | 03/02/11 | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
It is a requirement that the management network be on a different non-overlapping sub-net than the InfiniBand network and the client access network. This is necessary for better network security, better client access bandwidths, and for Auto Service Request (ASR) to work correctly.
The management network comprises the eth0 network interface in the database and storage servers, the ILOM network interfaces of the database and storage servers, and the Ethernet management interfaces of the InfiniBand switches and PDUs.
Risk:
Having the management network on the same subnet as the client access network will reduce network security, potentially restrict the client access bandwidth to/from the Database Machine to a single 1GbE link, and will prevent ASR from working correctly.
Action/Repair:
To verify that the management network interface (eth0) is on a separate network from other network interfaces, execute the following command as the “root” userid on both storage and database servers:
grep -i network /etc/sysconfig/network-scripts/ifcfg* | cut -f5 -d"/" | grep -v "#"
The output will be similar to:
ifcfg-bondeth0:NETWORK=10.204.77.0
ifcfg-bondib0:NETWORK=192.168.76.0
ifcfg-eth0:NETWORK=10.204.78.0
ifcfg-lo:NETWORK=127.0.0.0
The expected result is that the network values are different. If they are not, investigate and correct the condition.
Verify RAID Controller Battery Condition
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | 03/02/11 | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
Benefit/Impact:
The RAID controller battery loses its ability to support cache over time. Verifying the battery charge and condition allows proactive battery replacement.
The impact of verifying the RAID controller battery condition is minimal.
Risk:
A failed RAID controller battery will put the RAID controller into WriteThrough mode which significantly impacts write I/O performance.
Action/Repair:
Execute the following command on all servers:
/opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -a0 | grep "Full Charge" -A5 | sort | grep Full -A1
The output will be similar to:
Full Charge Capacity: 1357 mAh
Max Error: 2 %
Proactive battery replacement should be performed within 60 days for any batteries that meet the following criteria:
1) “Full Charge Capacity” less than or equal to 800 mAh and “Max Error” less than 10%.
Immediately replace any batteries that meet either of the following criteria:
1) “Max Error” is 10% or greater (battery deemed unreliable regardless of “Full Charge Capacity” reading)
2) “Full Charge Capacity” less than 674 mAh regardless of “Max Error” reading
[NOTE: The complete reference guide for LSI disk controller batteries used in Exadata can be found in MOS 1329989.1 (INTERNAL ONLY)]
Verify RAID Controller Battery Temperature
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | 03/02/11 | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
Benefit/Impact:
Maintaining proper temperature ranges maximizes RAID controller battery life.
The impact of verifying RAID controller battery temperature is minimal.
Risk:
A reported temperature of 60C or higher causes the battery to suspend charging until the temperature drops and shortens the service life of the battery, causing it to fail prematurely and put the RAID controller into WriteThrough mode which significantly impacts write I/O performance.
Action/Repair:
To verify the RAID controller battery temperature, execute the following command on all servers:
/opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -a0 | grep BatteryType; /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -a0 | grep -i temper
The output will be similar to:
BatteryType: iBBU08
Temperature: 38 C
Temperature : OK
Over Temperature : No
If the battery temperature is equal to or greater than 55C, investigate and correct the environmental conditions.
NOTE: Replace Battery Module after 3 Year service life assuming the battery temperature has not exceeded 55C. If the temperature has exceeded 55C (battery temp shall not exceed 60C), replace the battery every 2 years.
[NOTE: The complete reference guide for LSI disk controller batteries used in Exadata can be found in MOS 1329989.1 (INTERNAL ONLY)]
Verify Electronic Storage Module (ESM) Lifetime is within Specification
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | 03/02/11 | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
Benefit/Impact:
The Flash 20 card supports ESM lifetime to enable proactive replacement before failure.
The impact of verifying that the ESM lifetime is within specification is minimal. Replacing an ESM requires a storage server outage. The database and application may remain available if the appropriate grid disks are properly inactivated before and activated after the storage server outage. Refer to MOS Note 1188080.1 and “Shutting Down Exadata Storage Server” in Chapter 7 of “Oracle® Exadata Database Machine Owner's Guide 11g Release 2 (11.2) E13874-14″ for additional details.
Risk:
Failure of the ESM will put the Flash 20 card in WriteThrough mode which has a high impact on performance.
Action/Repair:
To verify the ESM lifetime value, use the following command on the storage servers:
for RISER in RISER1/PCIE1 RISER1/PCIE4 RISER2/PCIE2 RISER2/PCIE5; do ipmitool sunoem cli "show /SYS/MB/$RISER/F20CARD/UPTIME"; done | grep value -A4
The output will be similar to:
value = 3382.350 Hours
upper_nonrecov_threshold = 17500.000 Hours
upper_critical_threshold = 17200.000 Hours
upper_noncritical_threshold = 16800.000 Hours
lower_noncritical_threshold = N/A
--
If the “value” reported exceeds the “upper_noncritical_threshold” reported, schedule a replacement of the relevant ESM. NOTE: There is a bug in ILOM firmware version 3.0.9.19.a which may report “Invalid target…” for “RISER1/PCIE4″. If that happens, consult your site maintenance records to verify the age of the ESM module.
Verify Proper ASM Disk Group Attributes
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | N/A | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
The components in the I/O stack are tightly integrated in Exadata. You must use the proper versions of software both on the storage servers and the database servers. Setting compatible attributes defines available functionality. Setting CELL.SMART_SCAN_CAPABLE enables the offloading of certain query work to the storage servers. Setting AU_SIZE maximizes available disk technology and throughput by reading 4MB of data before performing a disk seek to a new sector location.
There is minimal impact to verify and configure these settings.
Risk:
If these attributes are not set as directed, performance will be sub-optimal.
Action / Repair:
For the ASM disk group containing Oracle Exadata Storage Server grid disks, verify the attribute settings as follows (a query sketch follows this list):
- COMPATIBLE.ASM attribute is set to the Oracle ASM software version in use.
- COMPATIBLE.RDBMS attribute is set to the Oracle database software version in use.
- CELL.SMART_SCAN_CAPABLE attribute is TRUE.
- AU_SIZE attribute is 4M.
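A minimal query sketch to review these attributes from a database server, assuming an ASM environment (ORACLE_HOME and ORACLE_SID set for the local ASM instance), is:
sqlplus -S / as sysasm <<'EOF'
set linesize 200 pagesize 100
col diskgroup format a20
col name format a30
col value format a20
-- list the attributes named in this check for every disk group
select g.name diskgroup, a.name, a.value
  from v$asm_diskgroup g, v$asm_attribute a
 where g.group_number = a.group_number
   and a.name in ('compatible.asm','compatible.rdbms','cell.smart_scan_capable','au_size');
EOF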
Verify Initialization Parameters
Reference for parameter values for ASM instance
Reference for parameter values for OLTP database instance
Reference for parameter values for DW/BI database instance
Reference for parameter values for DBFS database instance
Verify Platform Configuration and Initialization Parameters for Consolidation
Platform Consolidation Considerations
Consolidation Parameters Reference Table
Critical, 08/02/11
Benefit / Impact: Experience and testing has shown that certain database initialization parameter settings should use the following formulas for platform consolidation. By using these formulas as recommended, known problems may be avoided and performance maximized.
The performance related settings provide guidance to maintain highest stability without sacrificing performance. Changing the default performance settings can be done after careful performance evaluation and clear understanding of the performance impact.
Risk: If the operating system and database parameters are not set as recommended, a variety of issues may be encountered that can lead to system and database instability.
Action / Repair: To verify the database initialization parameters, use the following guidance:
The following are important platform level considerations in a consolidated environment.
- Operating System Configuration Recommendations
- Hugepages – when set, should equal the sum of shared memory from all databases; see MOS note 401749.1 for precise computations and MOS note 361323.1 for a description of Hugepages. Hugepages is generally required if 'PageTables' in /proc/meminfo is > 2% of physical memory. (A sizing sketch follows this list.)
- Benefits: Memory savings. Prevents the paging and swapping that can occur when Hugepages is not configured.
- Tradeoffs: Hugepages must be set correctly and needs to be adjusted when another instance is added/dropped or when SGA sizes change.
- As of 11.2.0.2, to disable Hugepages for an instance, set the parameter “use_large_pages=false”.
- Note that as of the onecommand version that supports 11.2.0.2 BP9, Hugepages is automatically configured upon deployment. The vm.nr_hugepages value may need to be adjusted if an instance's memory parameters are changed after the initial deployment.
- Amount of locked memory – 75% of physical memory
- Number of Shared Memory Identifiers – set greater than the number of databases
- Size of Shared Memory Segments – OS setting for max size = 85% of physical memory
- Number of semaphores – the sum of processes cannot exceed the maximum number of semaphores. On Linux, the maximum can be obtained with cat /proc/sys/kernel/sem | awk '{print $2}'. The number of semaphores on the system should not be so high that maximizing the number of running Oracle processes causes performance problems.
- Number of semaphores in a semaphore set: The number of semaphores in a semaphore set must be at least as high as the largest value for the processes parameter in all databases. On Linux, the number of semaphore sets can be obtained with cat /proc/sys/kernel/sem | awk '{print $4}'
- Applications with similar SLA requirements are best suited to co-exist in a consolidated environment together. Do not mix mission critical applications with non mission critical applications in the same consolidated environment. Do not mix production and test/dev databases in the same environment.
- It is possible to “over-subscribe” an application's resource requirements in a consolidated environment as long as the other applications “under-subscribe” at that time. The exception to this is mission critical applications. Do not “over-subscribe” in a consolidated environment that contains mission critical applications. Oracle Resource Manager can be used to manage varying degrees of IO and CPU requirements within one database and across databases. Within one database, Oracle Resource Manager can also manage parallel query processing.
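A minimal sizing sketch for the Hugepages item above (it assumes all databases are up and that every shared memory segment reported by ipcs belongs to an Oracle instance; MOS note 401749.1 remains the authoritative computation):
HPG_SZ_KB=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)
SHM_BYTES=$(ipcs -m | awk '/^0x/ {sum += $5} END {print sum}')
echo "Suggested vm.nr_hugepages >= $(( (SHM_BYTES / 1024 + HPG_SZ_KB - 1) / HPG_SZ_KB ))"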
Consolidation Parameters Reference Table
Update 8/2/11: The performance-related recommendations provide guidance to maintain the highest stability without sacrificing performance. Changing these performance settings can be done after careful performance evaluation and a clear understanding of the performance impact.
This parameter consolidation health check table is a general reference for environments. This is not a hard prerequisite for a consolidated environment, rather a guideline used to establish the formulas, maximum values, and notes below. It should suffice for most customers, but if you do not qualify for this formula, the table below can be used as a reference solely for important parameters that must be considered. These values are per node.
Parameter | Formula | Max | Notes |
Sga_target / Pga_aggregate_target | OLTP: sum of all sga_target and pga_aggregate_target for all databases < 75% of physical memory. DW/BI: sum of sga_target + (pga_aggregate_target x 3) < 75% of physical memory | 75% of total memory | Check the aforementioned formula. Exceeding recommended memory usage can potentially cause performance problems. It is important to also ensure that the value computed from the formula is sufficient for the application using the associated database. The pga_aggregate_target setting does not enforce a maximum PGA usage. For some data warehouse and BI applications, 3 x the specified target has been observed. For OLTP applications, the spill over is much less. The 25% headroom provides insurance against any additional spill over and for non-SGA/PGA memory allocation. Process memory and non-memory allocations can add up to 1-5 MB/process in some cases. Monitoring application and system memory utilization is required to ensure there is sufficient memory throughout your workload/business cycles. Oracle recommends at least 5% memory free at all times. (DBM Machine Type / Memory Available / Oracle Memory Target: DBM V2 / 72 GB / 54 GB; X2-2 / 96 GB / 60.8 GB can be expanded to) |
Cpu_count | For mission critical applications: sum of cpu_count of all databases <= 75% x total CPUs. Alternatively: for light-weight CPU usage applications, sum(CPU_COUNT) <= 3 x CPUs, and for CPU intensive applications, sum(CPU_COUNT) <= total CPUs | Refer to the formulas in the previous column | Rules of thumb: 1. Leverage CPU_COUNT and instance caging for platform consolidation (e.g. managing multiple databases within the Exadata DBM); they are particularly helpful in preventing processes and jobs from over-consuming target CPU resources. 2. Most light-weight applications are idle and consume < 3 CPUs. 3. Large reporting/DW/BI and some OLTP applications ("CPU intensive applications") can easily consume all the CPU, so they need to be bounded with instance caging and resource management. 4. For consolidating mission critical applications, it is recommended not to over-subscribe CPU resources, to maximize stability and performance consistency. (Exadata DBM / # Cores / # CPUs: DBM V2 / 8 cores / 16 CPUs; X2-2 / 12 cores / 24 CPUs; X2-8 / 64 cores / 128 CPUs) |
resource_manager_plan | NA | NA | Ensure this is enabled. A good starting value is 'default_plan' | |
processes | Sum of processes of all databases < max | Number of semaphores on the system | Check formula. Alert if > max Alert if # Active Processes > 4 X CPUs Sum (all processes for all instances) < 21K | |
Parallel parameters | Sum of parallel parameters for all databases should not exceed the recommendation for a single database | Parallel_max_servers defined for a single database | Check formula. | |
Db_recovery_file_dest_size | Sum of Db_recovery_file_dest_size <= Fast Recovery Area | Size of Usable Fast Recovery Area | Check formula; Usable FRA space subtracts the space consumed by other files such as online log files in the case of RECO being the only high redundancy diskgroups |
Configure the Number of Mounts before a File System check on the Database Servers
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
N/A | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
Ensure Temporary Tablespace is correctly defined
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
N/A | X2-2(4170), X2-2, X2-8 | Linux | 11.2.x + | 11.2.x + |
- A BigFile Tablespace
- Located in DATA or RECO, whichever one is not HIGH redundancy
- Sized 32GB Initially
- Configured with AutoExtend on at 4GB
- Configured with a Max Size defined to limit out of control growth (a creation sketch follows this list).
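A creation sketch following these guidelines (the tablespace name, the +DATA disk group, and the 256G maximum are illustrative placeholders only):
sqlplus -S / as sysdba <<'EOF'
CREATE BIGFILE TEMPORARY TABLESPACE temp_new
  TEMPFILE '+DATA' SIZE 32G
  AUTOEXTEND ON NEXT 4G MAXSIZE 256G;
EOF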
Enable portmap service if app requires it
By default, the portmap service is not enabled on the database nodes, and it is required for things such as NFS. If needed, enable and start it using the following with dcli across the required nodes:
chkconfig --level 345 portmap on
service portmap start
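A hedged sketch of doing this across all database servers at once with dcli (the dbs_group file name is an assumption; use your database server host list):
dcli -g /opt/oracle.SupportTools/onecommand/dbs_group -l root "chkconfig --level 345 portmap on; service portmap start"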
Enable proper services on database nodes to use NFS
In addition to the portmap service previously explained, the nfslock service must also be enabled and running to use NFS on database nodes. Below is a working example showing the errors that will be encountered with various utilities if it is not set up correctly. MOS Note 359515.1 can also be referenced.
SQL> create tablespace nfs_test_on_nfs datafile '/shared/dscbbg02/users/user/nfs_test/nfs_test_on_nfs.dbf' size 16M;
create tablespace nfs_test_on_nfs datafile '/shared/dscbbg02/users/user/nfs_test/nfs_test_on_nfs.dbf' size 16M
*
ERROR at line 1:
ORA-01119: error in creating database file
'/shared/dscbbg02/users/user/nfs_test/nfs_test_on_nfs.dbf'
ORA-27086: unable to lock file – already in use
Linux-x86_64 Error: 37: No locks available
Additional information: 10
Elapsed: 00:00:30.08
SQL> create tablespace nfs_test datafile '+D/user/datafile/nfs_test.dbf' size 16M;
Tablespace created.
SQL> create table nfs_test(n not null) tablespace nfs_test as select rownum from dual connect by rownum < 1e5 + 1;
Table created.
SQL> alter tablespace nfs_test read only;
Tablespace altered.
SQL> create directory nfs_test as '/shared/dscbbg02/users/user/nfs_test';
Directory created.
SQL> create table nfs_test_x organization external(type oracle_datapump default directory nfs_test location('nfs_test.dp')) as select * from nfs_test;
create table nfs_test_x organization external(type oracle_datapump default directory nfs_test location('nfs_test.dp')) as select * from nfs_test
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEPOPULATE callout
ORA-31641: unable to create dump file
“/shared/dscbbg02/users/user/nfs_test/nfs_test.dp”
ORA-27086: unable to lock file – already in use
Linux-x86_64 Error: 37: No locks available
Additional information: 10
Elapsed: 00:00:31.17
$ expdp userid=scott/tiger parfile=nfs_test.par
Export: Release 11.2.0.1.0 – Production on Wed Jun 2 10:44:51 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file “/shared/dscbbg02/users/user/nfs_test/nfs_test.dmp”
ORA-27086: unable to lock file – already in use
Linux-x86_64 Error: 37: No locks available
Additional information: 10
RMAN works:
$ rman target=/
Recovery Manager: Release 11.2.0.1.0 - Production on Wed Jun 2 10:46:40 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
connected to target database: USER (DBID=3710096878)
RMAN> backup as copy datafile '+D/user/datafile/nfs_test.dbf' format '/shared/dscbbg02/users/user/nfs_test/nfs_test.dbf';
Starting backup at 20100602104700
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=204 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00007 name=+D/user/datafile/nfs_test.dbf
output file name=/shared/dscbbg02/users/user/nfs_test/nfs_test.dbf tag=TAG
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 20100602104702
The solution is to ensure that the nfslock service (aka rpc.statd) is running:
# service nfslock status
rpc.statd (pid 10795) is running...
To make the change persistent across reboots, also enable the service via chkconfig (a sketch follows below).
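A minimal sketch, run as root on each database node (or pushed to all nodes with dcli as in the portmap example earlier):
# chkconfig nfslock on
# service nfslock start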
Be Careful when Combining the InfiniBand Network across Clusters and Database Machines
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
N/A | X2-2 (4170), X2-2, X2-8 | Linux | 11.2.x+ | 11.2.x+ |
The cell name, cell disk name, grid disk name, ASM diskgroup name, and ASM failgroup name should be unique to help avoid accidental damage during maintenance operations. For example, do not use the diskgroup name DATA on both database machines; name them DATA_DM01 and DATA_DM02 instead. A quick way to spot-check naming across the inter-racked machines is sketched below.
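A hedged spot-check: list the cell names across all cells and the diskgroup names from each cluster and compare the output by hand. The combined cell group file is only an example; build it from the cell lists of both machines.
# dcli -g /tmp/combined_cell_group -l celladmin cellcli -e "list cell attributes name"
SQL> select name from v$asm_diskgroup order by name;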
IP Addresses
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
N/A | X2-2 (4170), X2-2, X2-8 | Linux | 11.2.x+ | 11.2.x+ |
Ensure that any additional equipment ordered from Oracle is marked for an Oracle Exadata Database Machine and that the hardware engineer uses the correct multi-rack cabling when the physical InfiniBand network is modified.
After the hardware engineer has modified the network, ensure that the network is working correctly by running verify-topology and infinicheck. Infinicheck generates load on the system and should not be run while there is an active workload on the system. Note: infinicheck needs an input file of all IP addresses on the network.
For example, create a temporary file in /tmp that contains the cells for both database machines, pass this file to the infinicheck command using the -c option, and also pass the -b option. A sketch for building the combined cell IP file follows the commands below.
#cd /opt/oracle.SupportTools/ibdiagtools
#./verify-topology -t fattree
#./infinicheck -c /tmp/combined_cellip.ora -b
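One way to build /tmp/combined_cellip.ora beforehand, assuming the standard cellip.ora location on the database nodes (dm02db01 is only an example hostname for a database node in the other rack):
# cat /etc/oracle/cell/network-config/cellip.ora > /tmp/combined_cellip.ora
# ssh dm02db01 cat /etc/oracle/cell/network-config/cellip.ora >> /tmp/combined_cellip.ora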
CELLIP.ORA
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
N/A | X2-2 (4170), X2-2, X2-8 | Linux | 11.2.x+ | 11.2.x+ |
Set fast_start_mttr_target=300 to optimize run time performance of writes
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
N/A | X2-2 (4170), X2-2, X2-8 | Linux | 11.2.x+ | 11.2.x+ |
Considerations for direct writes in a data warehouse type of application: even though direct path operations do not use the buffer cache, fast_start_mttr_target is very effective at controlling crash recovery time because it ensures adequate checkpointing for the few buffers that are resident (for example, undo segment headers). fast_start_mttr_target should be set to the desired RTO (Recovery Time Objective) while still maintaining performance SLAs. A sketch for setting and checking it follows below.
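A minimal sketch for setting the parameter and then observing the effect; v$instance_recovery reports the target and current estimated recovery times:
SQL> alter system set fast_start_mttr_target=300 scope=both sid='*';
SQL> select target_mttr, estimated_mttr from v$instance_recovery;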
Enable auditd on database servers
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
N/A | X2-2 (4170), X2-2, X2-8 | Linux | 11.2.x+ | 11.2.x+ |
The best practice is to run the auditd service whenever kernel-level auditing is configured at boot time by setting audit=1 on the kernel line in /boot/grub/grub.conf, as shown here:
title Trying_LABEL_DBSYS
    root (hd0,0)
    kernel /vmlinuz-2.6.18-194.3.1.0.2.el5 root=LABEL=DBSYS ro bootarea=dbsys loglevel=7 panic=60 debug rhgb audit=1 numa=off console=ttyS0,115200n8 console=tty1 crashkernel=128M@16M
    initrd /initrd-2.6.18-194.3.1.0.2.el5.img
To configure auditd to be enabled, run the following commands as root on each database server:
chkconfig auditd on
chkconfig --list auditd
auditd          0:off   1:off   2:on    3:on    4:on    5:on    6:off
service auditd start
service auditd status
auditd (pid 32582) is running...
Manage ASM Audit File Directory Growth with cron
Priority | Added | Machine Type | OS Type | Exadata Version | Oracle Version |
Critical | N/A | X2-2 (4170), X2-2, X2-8 | Linux | 11.2.x+ | 11.2.x+ |
The audit file destination directories for an ASM instance can grow to contain a very large number of files if they are not regularly maintained. Use the Linux cron(8) utility and the find(1) command to manage the number of files in the audit file destination directories.
The impact of using cron(8) and find(1) to manage the number of files in the audit file destination directories is minimal.
Risk:
Having a very large number of files can cause the file system to run out of free disk space or inodes, or can cause Oracle to run very slowly due to file system directory scaling limits, which can have the appearance that the ASM instance is hanging on startup.
Action / Repair:
Refer to MOS Note 1298957.1.
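The MOS note contains the authoritative procedure; as a hedged illustration only, a cron entry of the following shape is what it amounts to. The audit path, retention period, and schedule below are placeholders and must match your environment:
# crontab entry for the Grid Infrastructure owner or root (example only)
0 2 * * 0 /usr/bin/find /u01/app/11.2.0/grid/rdbms/audit -maxdepth 1 -name '*.aud' -mtime +30 -delete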
Updating database node OEL packages to match the cell
MOS Note 1284070.1 provides a working example of updating the database host OEL packages to match those on the cell.
Verify ASM Instance Database Initialization Parameters
Critical, 06/23/11
Benefit / Impact:
Experience and testing has shown that certain ASM initialization parameters should be set at specific values. These are the best practice values set at deployment time. By setting these ASM initialization parameters as recommended, known problems may be avoided and performance maximized. The parameters are specific to the ASM instances. Unless otherwise specified, the value is for both X2-2 and X2-8 Database Machines. The impact of setting these parameters is minimal.
Risk:
If the ASM initialization parameters are not set as recommended, a variety of issues may be encountered, depending upon which initialization parameter is not set as recommended, and the actual set value.
Action / Repair:
To verify the ASM initialization parameters, compare the values in your environment against the table below (* = default value); a query sketch for spot-checking follows the table:
Parameter | Recommended Value | Priority | Notes |
cluster_interconnects | Bondib0 IP address for X2-2; colon-delimited Bondib* IP addresses for X2-8 | A | This is used to avoid the Clusterware HAIP address. For an X2-8, the IP addresses are colon delimited |
asm_power_limit | 4 | A | This is Exadata default to mitigate application performance impact during ASM rebalance. Please evaluate application performance impact before using a higher ASM_POWER_LIMIT. |
memory_target | 1025M | A | This avoids issues with 11.2.0.1 to 11.2.0.2 upgrade. This is the default setting for Exadata. |
processes | For < 10 instances per node: 50 * (# of DB instances per node + 1). For >= 10 instances per node: {50 * MIN(# of DB instances per node + 1, 11)} + {10 * MAX(# of DB instances per node - 10, 0)}. This formula accommodates the consolidation case where there are many instances per node. | A | This avoids issues observed when ASM hits the maximum number of processes. NOTE: "instances" means non-ASM instances |
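A query sketch for spot-checking these values; connect to the +ASM instance as sysasm and compare against the table above:
SQL> select name, value from v$parameter
  2  where name in ('cluster_interconnects','asm_power_limit','memory_target','processes')
  3  order by name;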
Verify Common Instance Database Initialization Parameters
Critical, 08/02/11
Benefit / Impact:
Experience and testing has shown that certain database initialization parameters should be set at specific values. These are the best practice values set at deployment time. By setting these database initialization parameters as recommended, known problems may be avoided and performance maximized. The parameters are common to all database instances. The impact of setting these parameters is minimal. The performance related settings provide guidance to maintain highest stability without sacrificing performance. Changing the default performance settings can be done after careful performance evaluation and a clear understanding of the performance impact.
Risk:
If the database initialization parameters are not set as recommended, a variety of issues may be encountered, depending upon which initialization parameter is not set as recommended, and the actual set value.
Action / Repair:
To verify the database initialization parameters, compare the values in your environment against the table below (* = default value); a query sketch for spot-checking a subset of these parameters follows the table:
Parameter | Recommended Value | Priority | Notes |
cluster_interconnects | Bondib0 IP address for X2-2; colon-delimited Bondib* IP addresses for X2-8 | A | This is used to avoid the Clusterware HAIP address. For an X2-8, the 4 IP addresses are colon delimited |
compatible | 11.2.0.2 | A | Need this for new RDBMS and ASM features |
processes | Set per doc definition [Deployment database uses 1024] | 2 | Customers should set this per doc definition |
log_buffer | 134217728 | A | Check this is not less than 128M. Ensures adequate buffer space for new LGWR transport |
db_block_checking | False * | 2 | For higher data corruption detection and prevention, enable this setting but performance impacts vary per workload. Evaluate performance impact. Refer to MOS 1302539.1. |
db_block_checksum | Typical * | A | Aids in block corruption detection. Enable for primary and standby databases. Refer to MOS 1302539.1. |
db_lost_write_protect | Typical | A | This is important for data block lost write detection and repair. Enable for primary and standby databases. Refer to MOS 1265884.1 and 1302539.1. Refer to section on how to address ORA-752 on the standby database. |
control_files | Check to ensure the control file is in a high redundancy disk group and there are two members (copies) of the controlfile. | A | A high redundancy diskgroup optimizes availability. 1. The control file should be in a high redundancy disk group. 2. Two controlfile members are recommended. If there is one high redundancy disk group, create both controlfile members in the high redundancy disk group. Otherwise, multiplex the controlfile members across multiple ASM disk groups. [Modified 8/16/11] |
audit_trail | Db | 2 | Security optimization |
audit_sys_operations | True | 2 | Security optimization |
diagnostics_dest | ORACLE_BASE | 2 | Customers should set this per doc definition |
db_recovery_file_dest | RECO diskgroup | A | Check to ensure diskgroup is different from db_file_create_dest |
db_recovery_file_dest_size | RECO diskgroup size | A | Check to ensure the size is <= 90% of the RECO diskgroup size |
db_block_size | 8192 | 2 | Check that db_block_size=8192. An 8192-byte blocksize is generally recommended for Oracle applications unless a different block size is proven more efficient. |
_lm_rcvr_hang_allow_time | 140 | A | This parameter protects from corner case timeouts lower in the stack and prevents instance evictions |
_kill_diagnostics_timeout | 140 | A | This parameter protects from corner case timeouts lower in the stack and prevents instance evictions |
global_names | True | A | Security optimization |
_file_size_increase_increment | 2143289344 | A | This ensures adequately sized RMAN backup allocations |
os_authent_prefix | "" | A | Security optimization. NOTE: this is set to a null value, not literally two double quotes. |
sql92_security | True | A | Security optimization |
fast_start_mttr_target | 300 | 2 | Check that it is set and not less than 300. Relaxing aggressive checkpointing prevents outliers and improves performance |
parallel_adaptive_multi_user | False | A | Performance impact: PQ degree will be reduced for some queries especially with concurrent workloads. |
parallel_execution_message_size | 16384 * | A | Improves PQ performance |
parallel_threads_per_cpu | 1 | A | Check that this value is 1. This setting accounts for hyper-threading |
log_archive_dest_n | LOCATION=USE_DB_RECOVERY_FILE_DEST | A | Do NOT set this to a specific diskgroup, since fast recovery area automatic space management is ignored unless "USE_DB_RECOVERY_FILE_DEST" is explicitly used. This is not the same as setting it to the equivalent diskgroup name from the db_recovery_file_dest parameter |
filesystemio_options | Setall | 2 | Important to get both async and direct IO for performance |
db_create_online_log_dest_n | Check for high redundancy diskgroup | A | A high redundancy diskgroup optimizes availability. If a high redundancy disk group is available, use the first high redundancy ASM disk group for all your Online Redo Logs or Standby Redo Logs. Use only one log member to minimize performance impact. If a high redundancy disk group is not available, multiplex redo log members across the DATA and RECO ASM disk groups for additional protection. |
open_cursors | Set per doc definition [Deployment database uses 1000] | A | Check to ensure this is at least 300 |
use_large_pages | ONLY | A | This ensures the entire SGA is stored in hugepages. Benefits: memory savings and reduced paging and swapping. Prerequisites: the operating system hugepages setting needs to be correctly configured, and it needs to be adjusted whenever a database instance is added or dropped or whenever the SGA sizes change. Refer to MOS 401749.1 and 361323.1 to configure HugePages. |
_enable_NUMA_support | FALSE * for X2-2; TRUE * for X2-8 | A | Enable NUMA support on X2-8 only |
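A query sketch for spot-checking a subset of the common parameters across all instances; extend the IN list as needed to cover the rest of the table:
SQL> select inst_id, name, value from gv$parameter
  2  where name in ('compatible','log_buffer','db_block_checksum','db_lost_write_protect','fast_start_mttr_target','use_large_pages')
  3  order by name, inst_id;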
Verify OLTP Instance Database Initialization Parameters
Note that, except for the case of OLTP applications using an X2-8, all parameters referenced here are for a single database. The OLTP-on-X2-8 case is different because it is unlikely a customer will put a single OLTP database on an X2-8. For more detail on platform and parameter configuration for consolidation of any application type, refer to the consolidation parameter reference page.
Critical, 06/23/11
Benefit / Impact:
Experience and testing has shown that certain database initialization parameters should be set at specific values. These are the best practice values set at deployment time. By setting these database initialization parameters as recommended, known problems may be avoided and performance maximized. The parameters are specific to OLTP database instances. Unless otherwise specified, the value is for both X2-2 and X2-8 Database Machines. The impact of setting these parameters is minimal. The performance related settings provide guidance to maintain highest stability without sacrificing performance. Changing the default performance settings can be done after careful performance evaluation and a clear understanding of the performance impact.
Risk:
If the database initialization parameters are not set as recommended, a variety of issues may be encountered, depending upon which initialization parameter is not set as recommended, and the actual set value.
Action / Repair:
To verify the database initialization parameters, compare the values in your environment against the table below (* = default value):
Parameter | Recommended Value | Priority | Notes |
parallel_max_servers | 240 for X2-2 1280 for X2-8 | 2 | Check to ensure not more than the recommended value. Setting this higher than this recommended value can deplete memory and impact performance.* |
parallel_min_servers | 0 | A | Check that it is 0. For OLTP, we do not want to waste resources that will not be used. |
sga_target | 24G for X2-2 128G for X2-8 | A | Check to ensure not higher than the recommended value.* For X2-2, the recommended value is for a single database. For X2-8, this number is based on a small set of databases (< 5) |
pga_aggregate_target | 16G for X2-2 64G for X2-8 | A | Check to ensure not higher than the recommended value.* For X2-2, the recommended value is for a single database. For X2-8, this number is based on a small set of databases (< 5) |
_kgl_cluster_lock_read_mostly | True | A | This improves cursor cache performance for OLTP applications and will be the default in 11.2.0.3 |
Verify DW/BI Instance Database Initialization Parameters
Critical, 06/23/11
Benefit / Impact:
Experience and testing has shown that certain database initialization parameters should be set at specific values. These are the best practice values set at deployment time. By setting these database initialization parameters as recommended, known problems may be avoided and performance maximized. The parameters are specific to DW/BI database instances. Unless otherwise specified, the value is for both X2-2 and X2-8 Database Machines. The impact of setting these parameters is minimal. The performance related settings provide guidance to maintain highest stability without sacrificing performance. Changing the default performance settings can be done after careful performance evaluation and a clear understanding of the performance impact.
Risk:
If the database initialization parameters are not set as recommended, a variety of issues may be encountered, depending upon which initialization parameter is not set as recommended, and the actual set value.
Action / Repair:
To verify the database initialization parameters, compare the values in your environment against the table below (* = default value):
Parameter | Recommended Value | Priority | Notes |
parallel_max_servers | 240 for X2-2 1280 for X2-8 | A | Check to ensure not more than the recommended value. Setting this higher than this recommended value can deplete memory and impact performance. |
parallel_min_servers | 96 for X2-2 512 for X2-8 | A | Reduces the overhead of unnecessarily allocating and deallocating parallel servers |
parallel_degree_policy | Manual | 2 | Evaluate workload management before deploying; otherwise set to manual by default. |
parallel_degree_limit | 16 for X2-2 24 for X2-8 | 2 | Check that this is less than parallel_servers_target. |
parallel_servers_target | 128 for X2-2 512 for X2-8 | 2 | Check to ensure not higher than parallel_max_servers. Setting this higher than this recommended value can deplete memory and impact performance. |
sga_target | 16G for X2-2 128G for X2-8 | A | Check to ensure not higher than the recommended value. * Note these values are for a single database and Exadata's default settings. |
pga_aggregate_target | 16G for X2-2 256G for X2-8 | A | Check to ensure not higher than the recommended value. Note these values are for a single database and Exadata's default settings. |
Verify DBFS Instance Database Initialization Parameters
Critical, 05/23/11
Benefit / Impact:
Experience and testing has shown that certain database initialization parameters should be set at specific values. These are the best practice values set at deployment time. By setting these database initialization parameters as recommended, known problems may be avoided and performance maximized. The parameters are specific to the DBFS database instances. Unless otherwise specified, the value is for both X2-2 and X2-8 Database Machines. The impact of setting these parameters is minimal.
Risk:
If the database initialization parameters are not set as recommended, a variety of issues may be encountered, depending upon which initialization parameter is not set as recommended, and the actual set value.
Action / Repair:
To verify the database initialization parameters, compare the values in your environment against the table below (* = default value):
Parameter | Recommended Value | Priority | Notes |
parallel_max_servers | 2 | 2 | Check to ensure not more than 2. Setting this higher than this recommended value can deplete memory and impact performance. |
parallel_min_servers | 0 | A | Check that it is 0. For DBFS, we do not want to waste resources that will not be used. |
sga_target | 1.5G | A | Check to ensure not higher than the recommended value. * Note these values are for a single database |
pga_aggregate_target | 6.5G | A | Check to ensure not higher than the recommended value. Note these values are for a single database |
db_recovery_file_dest | DBFS_DG diskgroup | A | Check to ensure DBFS_DG * Note this overrides the common database instance parameter reference because of a different requirement for DBFS |
db_recovery_file_dest_size | 10% of DBFS_DG size | A | Check to ensure the size is 10% of the DBFS_DG diskgroup size; no archiving for the data staging use case. * Note this overrides the common database instance parameter reference because of a different requirement for DBFS |