
Oracle Troubleshooting, Configuration and Tuning

Client connections fail with ORA-12545 after a fresh RAC install

A while back a friend of mine finished installing a RAC cluster, and client connections to the database failed with the following error:

ORA-12545: Connect failed because target host or object does not exist.

I will not go over the RAC listener mechanics here. ORA-12545 is almost always related to the HOST setting inside the connect descriptor in tnsnames.ora. On a freshly installed RAC the error usually appears because a node's hostname was used as the HOST in the connect string, whereas clients, for continuity and redundancy of the business, should match the connect string against the VIP address or the VIP hostname; the mismatch between the two produces ORA-12545.

It is also possible that the VIPs are configured but the client cannot reach the VIP of one of the servers, which likewise causes ORA-12545: with server-side load balancing enabled, the listener hands new connection requests to the least loaded node, and if that node happens to be unreachable from the client, the connection fails with ORA-12545.

From here the troubleshooting approach is clear. When you hit this error on RAC, first check whether LOCAL_LISTENER and REMOTE_LISTENER on each instance point at a wrong IP address or hostname; on a freshly installed RAC these often resolve to localhost. Once the cause is understood, the fix follows: set each instance's LOCAL_LISTENER to its VIP and have the clients resolve through the VIPs. It is also worth reviewing the overall RAC configuration and whether TAF and LOAD_BALANCE are enabled.

This situation typically shows up right after a RAC installation. A client-side SQL*Net trace at level 16 shows entries like the following:

[05-APR-2014 11:32:55] nttbnd2addr: looking up IP addr for host: myhost.oracle.com

[05-APR-2014 11:32:55] nttbnd2addr: *** hostname lookup failure! ***
[05-APR-2014 11:32:55] nttbnd2addr: exit
[05-APR-2014 11:32:55] nserror: nsres: id=0, op=77, ns=12545, ns2=12560; nt[0]=515, nt[1]=145,
nt[2]=0; ora[0]=0, ora[1]=0, ora[2]=0
[05-APR-2014 11:32:55] nsmfr: 207 bytes at 0xf2a18
[05-APR-2014 11:32:55] nsmfr: 140 bytes at 0xef078
[05-APR-2014 11:32:55] nladtrm: entry
[05-APR-2014 11:32:55] nladtrm: exit
[05-APR-2014 11:32:55] nioqper: error from nscall
[05-APR-2014 11:32:55] nioqper: nr err code: 0
[05-APR-2014 11:32:55] nioqper: ns main err code: 12545
[05-APR-2014 11:32:55] nioqper: ns (2) err code: 12560
[05-APR-2014 11:32:55] nioqper: nt main err code: 515
[05-APR-2014 11:32:55] nioqper: nt (2) err code: 145
[05-APR-2014 11:32:55] nioqper: nt OS err code: 0
[05-APR-2014 11:32:55] niomapnserror: entry
[05-APR-2014 11:32:55] niqme: entry
[05-APR-2014 11:32:55] niqme: reporting NS-12545 error as ORA-12545
[05-APR-2014 11:32:55] niqme: exit
[05-APR-2014 11:32:55] niomapnserror: returning error 12545
[05-APR-2014 11:32:55] niomapnserror: exit
[05-APR-2014 11:32:55] niotns: Couldn't connect, returning 12545
...

The correct LOCAL_LISTENER configuration looks like this:

alter system set local_listener='(ADDRESS =(PROTOCOL=TCP)(HOST=vip_host1)(PORT=1521))' scope=both sid='luda1';
alter system set local_listener='(ADDRESS =(PROTOCOL=TCP)(HOST=vip_host2)(PORT=1521))' scope=both sid='luda2';
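Before and after the change, the registration parameters can be checked from any instance. A diagnostic sketch (the instance names follow the example above):

```sql
SQL> show parameter local_listener
SQL> show parameter remote_listener

-- or across all instances at once:
select inst_id, name, value
  from gv$parameter
 where name in ('local_listener', 'remote_listener')
 order by inst_id, name;
```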

On the client side, resolve through the VIPs:

LUDA =
  (DESCRIPTION =
     (ADDRESS = (PROTOCOL = TCP)(HOST = vip_host1)(PORT = 1521))
     (ADDRESS = (PROTOCOL = TCP)(HOST = vip_host2)(PORT = 1521))
     (LOAD_BALANCE=YES)
      (CONNECT_DATA=
        (SERVER=DEDICATED)
         (SERVICE_NAME=LUDA)
      )
  )
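Since the fix should also cover TAF, a client-side variant of the same alias with basic TAF enabled could look like the fragment below (a sketch; the alias name LUDA_TAF and the RETRIES/DELAY values are illustrative only):

```
LUDA_TAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vip_host1)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = vip_host2)(PORT = 1521))
    (LOAD_BALANCE = YES)
    (FAILOVER = ON)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = LUDA)
      (FAILOVER_MODE =
        (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 30)(DELAY = 5)
      )
    )
  )
```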

OUI-0094 when installing 10g alongside 11g on the same server

After installing an 11g database, installing Oracle 10g under a different OS user on the same server failed with OUI-0094. The error occurs because the oraInst.loc inventory pointer file already records the 11g installation. Knowing the cause, the fix follows naturally:

Note: all steps below are performed as the OS user installing Oracle 10g.

1. Create a new oraInst.loc file in $ORACLE_HOME and set its contents as follows:

inventory_loc=$ORACLE_HOME/oraInventory   (must be given as an absolute path)
inst_group=oinstall

2. After creating $ORACLE_HOME/oraInst.loc, launch the OUI as follows:

./runInstaller -invPtrLoc $ORACLE_HOME/oraInst.loc

The -invPtrLoc flag is used to locate the oraInst.loc file.

3. After the steps above, the Oracle 10g installation proceeds normally.
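The two steps above can be sketched as a small script. This is only an illustration: a temporary directory stands in for the real $ORACLE_HOME, and the runInstaller invocation is left as a comment.

```shell
# Step 1: write the inventory pointer file into the (stand-in) ORACLE_HOME.
ORACLE_HOME=$(mktemp -d)

# inventory_loc must be an absolute path; $ORACLE_HOME expands to one here.
cat > "$ORACLE_HOME/oraInst.loc" <<EOF
inventory_loc=$ORACLE_HOME/oraInventory
inst_group=oinstall
EOF

cat "$ORACLE_HOME/oraInst.loc"

# Step 2 (not executed here): point OUI at the new file:
#   ./runInstaller -invPtrLoc $ORACLE_HOME/oraInst.loc
```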

Steps to change the RAC private network configuration, and version-specific caveats

Network information (interface, subnet, and role of each interface) for Oracle Clusterware is managed by ‘oifcfg’, but the actual IP address of each interface is not; ‘oifcfg’ cannot update IP address information. ‘oifcfg getif’ can be used to find out the interfaces currently configured in OCR:

% $CRS_HOME/bin/oifcfg getif
eth0 10.2.156.0 global public
eth1 192.168.0.0 global cluster_interconnect

On Unix/Linux systems, interface names are generally assigned by the OS, and the standard names vary by platform. For Windows systems, see the additional notes below. The example above shows that interface eth0 is currently used for the public network with subnet 10.2.156.0, and eth1 for the cluster_interconnect (private) network with subnet 192.168.0.0.
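When scripting sanity checks around this, the interconnect subnet can be extracted from the `oifcfg getif` output. A sketch, using a captured sample in place of the live command:

```shell
# Sample output of `oifcfg getif` (stand-in for running the live command).
getif_output='eth0 10.2.156.0 global public
eth1 192.168.0.0 global cluster_interconnect'

# Field 4 is the role; field 2 is the subnet stored in OCR.
ic_subnet=$(printf '%s\n' "$getif_output" | awk '$4 == "cluster_interconnect" {print $2}')
echo "interconnect subnet: $ic_subnet"
```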

The ‘public’ network is for database client communication (the VIP uses the same network, though it is stored in OCR as a separate entry), whereas the ‘cluster_interconnect’ network is for RDBMS/ASM cache fusion. Starting with 11gR2, the cluster_interconnect is also used for clusterware heartbeats. This is a significant change compared with prior releases, which used the private nodename specified at installation time for clusterware heartbeats.

If the subnet or interface name of the ‘cluster_interconnect’ interface is incorrect, it needs to be changed as the crs/grid user.

Case I. Changing private hostname

In pre-11.2 Oracle Clusterware, the private hostname is recorded in OCR and cannot be updated. Generally the private hostname does not need to change; its associated IP can be changed. The only ways to change the private hostname are deleting/adding nodes, or reinstalling Oracle Clusterware.

In 11.2 Grid Infrastructure, the private hostname is no longer recorded in OCR and there is no dependency on it; it can be changed freely in /etc/hosts.

Case II. Changing private IP only without changing network interface, subnet and netmask

For example, the private IP is changed from 192.168.1.10 to 192.168.1.21; the network interface name and subnet remain the same.

Simply shut down the Oracle Clusterware stack on the node where the change is required, make the IP modification at the OS layer (e.g. /etc/hosts, OS network configuration) for the private network, then restart the Oracle Clusterware stack to complete the task.

Case III. Changing private network MTU only

For example, the private network MTU is changed from 1500 to 9000 (enabling jumbo frames) while the network interface name and subnet remain the same.

1. Shutdown the Oracle Clusterware stack on all nodes
2. Make the required MTU change at the OS network layer; ensure the private network is available with the desired MTU size and that a ping with that packet size works between all cluster nodes
3. Restart the Oracle Clusterware stack on all nodes
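For step 2, one common way to prove the new MTU works end to end is to ping with the don't-fragment bit set and a payload that exactly fills the frame (payload = MTU minus 20-byte IP header minus 8-byte ICMP header). A sketch, assuming Linux ping syntax and a hypothetical peer name node2-priv:

```shell
# Payload size that exactly fills a frame of the target MTU:
# MTU minus 20 bytes of IP header and 8 bytes of ICMP header.
mtu=9000
payload=$((mtu - 28))

# The actual check (printed rather than executed here; node2-priv is hypothetical):
echo "ping -M do -s $payload -c 2 node2-priv"
```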

Case IV. Changing private network interface name, subnet or netmask

Note: when the netmask is changed but the subnet ID stays the same (for example, the netmask changes from 255.255.0.0 to 255.255.255.0 with private IPs like 192.168.0.x, so the subnet ID remains 192.168.0.0 and the interface name is unchanged), follow the same procedure as in Case II.
When the netmask changes, the associated subnet ID usually changes with it. Oracle stores only the network interface name and subnet ID in OCR, not the netmask. The oifcfg command is used for such a change; oifcfg commands only need to be run on one cluster node, not on all of them.
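The subnet ID that oifcfg stores is simply the bitwise AND of an interface's IP address and its netmask, which is why a netmask change often (but not always) changes the subnet ID. A pure-bash sketch:

```shell
# Subnet ID = IP AND netmask, octet by octet.
subnet_id() {
  local IFS=. ip mask out i
  read -r -a ip <<< "$1"
  read -r -a mask <<< "$2"
  out=()
  for i in 0 1 2 3; do out[i]=$(( ip[i] & mask[i] )); done
  echo "${out[0]}.${out[1]}.${out[2]}.${out[3]}"
}

subnet_id 192.168.0.10 255.255.0.0     # subnet ID: 192.168.0.0
subnet_id 192.168.0.10 255.255.255.0   # still 192.168.0.0, so Case II applies
subnet_id 192.168.1.10 255.255.255.0   # 192.168.1.0: subnet ID changed
```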

A. For pre-11gR2 Oracle Clusterware

1. Use oifcfg to add the new private network information and then delete the old private network information:

% $ORA_CRS_HOME/bin/oifcfg setif -global <if_name>/<subnet>:cluster_interconnect
% $ORA_CRS_HOME/bin/oifcfg delif -global <if_name>[/<subnet>]

For example:
% $ORA_CRS_HOME/bin/oifcfg setif -global eth3/192.168.2.0:cluster_interconnect
% $ORA_CRS_HOME/bin/oifcfg delif -global eth1/192.168.1.0

To verify the change
% $ORA_CRS_HOME/bin/oifcfg getif
eth0 10.2.166.0 global public
eth3 192.168.2.0 global cluster_interconnect

2. Shutdown Oracle Clusterware stack

As root user: # crsctl stop crs

3. Make the required network change at the OS level; the /etc/hosts file should be modified on all nodes to reflect the change.
Ensure the new network is available on all cluster nodes:

% ping <private hostname>
% ifconfig -a (on Unix/Linux)
or
% ipconfig /all (on Windows)

4. restart the Oracle Clusterware stack

As root user: # crsctl start crs

Note: If running OCFS2 on Linux, one may also need to change the private IP address that OCFS2 is using to communicate with other nodes. For more information, please refer to Note 604958.1

B. For 11gR2 and higher

As of 11.2 Grid Infrastructure, the private network configuration is not only stored in OCR but also in the gpnp profile. If the private network is not available or its definition is incorrect, the CRSD process will not start and any subsequent changes to the OCR will be impossible. Therefore care needs to be taken when making modifications to the configuration of the private network. It is important to perform the changes in the correct order. Please also note that manual modification of gpnp profile is not supported.

Please take a backup of profile.xml on all cluster nodes before proceeding, as grid user:

$ cd $GRID_HOME/gpnp/<hostname>/profiles/peer/
$ cp -p profile.xml profile.xml.bk

1. Ensure Oracle Clusterware is running on ALL cluster nodes in the cluster

2. As grid user:

Get the existing information. For example:
$ oifcfg getif
eth1 100.17.10.0 global public
eth0 192.168.0.0 global cluster_interconnect

Add the new cluster_interconnect information:

$ oifcfg setif -global <if_name>/<subnet>:cluster_interconnect

For example:
a. add a new interface bond0 with the same subnet
$ oifcfg setif -global bond0/192.168.0.0:cluster_interconnect

b. add a new subnet with the same interface name, or a new interface name
$ oifcfg setif -global eth0/192.65.0.0:cluster_interconnect
or
$ oifcfg setif -global eth3/192.168.1.96:cluster_interconnect

1. This can be done with the -global option even if the interface is not yet available, but it cannot be done with the -node option if the interface is not available, as that will lead to a node eviction.

2. If the interface is available on the server, the subnet address can be identified with:

$ oifcfg iflist

It lists the network interfaces and their subnet addresses. This command can be run even if Oracle Clusterware is not running. Please note the subnet address might not be in the form x.y.z.0; it can be x.y.z.24, x.y.z.64, x.y.z.128, etc. For example:
$ oifcfg iflist
lan1 18.1.2.0
lan2 10.2.3.64 << this is the private network subnet address associated with the private network IP 10.2.3.86

3. If this is for adding a second private network, not replacing the existing private network, please ensure the MTU size of both interfaces is the same, otherwise instance startup will report errors such as:

ORA-27504: IPC error creating OSD context
ORA-27300: OS system dependent operation:if MTU failed with status: 0
ORA-27301: OS failure message: Error 0
ORA-27302: failure occurred at: skgxpcini2
ORA-27303: additional information: requested interface lan1:801 has a different MTU (1500) than lan3:801 (9000), which is not supported. Check output from ifconfig command

Verify the change:

$ oifcfg getif

3. Shutdown Oracle Clusterware on all nodes and disable the Oracle Clusterware as root user:

# crsctl stop crs
# crsctl disable crs

4. Make the network configuration change at the OS level as required, and ensure the new interface is available on all nodes after the change.

$ ifconfig -a
$ ping <private hostname>

5. Enable Oracle Clusterware and restart Oracle Clusterware on all nodes as root user:

# crsctl enable crs
# crsctl start crs

6. Remove the old interface if required:

$ oifcfg delif -global <if_name>[/<subnet>]
eg:
$ oifcfg delif -global eth0/192.168.0.0

Something to note for 11gR2

1. If the underlying network configuration has been changed but oifcfg has not been run to record the same change, then upon Oracle Clusterware restart CRSD will not be able to start.

The crsd.log will show:

2010-01-30 09:22:47.234: [ default][2926461424] CRS Daemon Starting
..
2010-01-30 09:22:47.273: [ GPnP][2926461424]clsgpnp_Init: [at clsgpnp0.c:837] GPnP client pid=7153, tl=3, f=0
2010-01-30 09:22:47.282: [ OCRAPI][2926461424]clsu_get_private_ip_addresses: no ip addresses found.
2010-01-30 09:22:47.282: [GIPCXCPT][2926461424] gipcShutdownF: skipping shutdown, count 2, from [ clsinet.c : 1732], ret gipcretSuccess (0)
2010-01-30 09:22:47.283: [GIPCXCPT][2926461424] gipcShutdownF: skipping shutdown, count 1, from [ clsgpnp0.c : 1021], ret gipcretSuccess (0)
[ OCRAPI][2926461424]a_init_clsss: failed to call clsu_get_private_ip_addr (7)
2010-01-30 09:22:47.285: [ OCRAPI][2926461424]a_init:13!: Clusterware init unsuccessful : [44]
2010-01-30 09:22:47.285: [ CRSOCR][2926461424] OCR context init failure. Error: PROC-44: Error in network address and interface operations Network address and interface operations error [7]
2010-01-30 09:22:47.285: [ CRSD][2926461424][PANIC] CRSD exiting: Could not init OCR, code: 44
2010-01-30 09:22:47.285: [ CRSD][2926461424] Done.

The above errors indicate a mismatch between the OS setting (oifcfg iflist) and the gpnp profile setting (profile.xml).

Workaround: restore the OS network configuration to its original state and start Oracle Clusterware, then follow the steps above to make the change again.

If the underlying network has not been changed, but oifcfg setif has been run with a wrong subnet address or interface name, same issue will happen.

2. If any one node is down in the cluster, oifcfg command will fail with error:

$ oifcfg setif -global bond0/192.168.0.0:cluster_interconnect
PRIF-26: Error in update the profiles in the cluster

Workaround: start Oracle Clusterware on the node where it is not running and ensure it is up on all cluster nodes. If the node is down for an OS-level reason, remove the node from the cluster before performing the private network change.

3. If a user other than Grid Infrastructure owner issues above command, it will fail with same error:

$ oifcfg setif -global bond0/192.168.0.0:cluster_interconnect
PRIF-26: Error in update the profiles in the cluster

Workaround: log in as the Grid Infrastructure owner to run the command.

4. From 11.2.0.2 onwards, if you attempt to delete the last private interface (cluster_interconnect) without first adding a new one, the following error occurs:

PRIF-31: Failed to delete the specified network interface because it is the last private interface

Workaround: add the new private interface first, before deleting the old private interface.

5. If Oracle Clusterware is down on the node, the following error is expected:

$ oifcfg getif
PRIF-10: failed to initialize the cluster registry

Workaround: start the Oracle Clusterware on the node.

Notes for Windows Systems

The syntax for changing the interfaces on Windows RAC clusters is the same as on Unix/Linux, but the interface names will be slightly different. On Windows systems, the default names assigned to the interfaces are generally of the form:

Local Area Connection
Local Area Connection 1
Local Area Connection 2

If an interface name contains spaces, the name must be enclosed in quotes. Also be aware that names are case sensitive. For example, on Windows, to set the cluster_interconnect:

C:\oracle\product\10.2.0\crs\bin\oifcfg setif -global "Local Area Connection 1"/192.168.1.0:cluster_interconnect
However, it is best practice on Windows to rename the interfaces to be more meaningful, such as renaming them to ‘ocwpublic’ and ‘ocwprivate’. If interface names are renamed after Oracle Clusterware is installed, then you will need to run ‘oifcfg’ to add the new interface and delete the old one, as described above.

You can view the available interface names on each node by running the command:

oifcfg iflist -p -n

This command must be run on each node to verify that the interface names are defined the same.

Ramifications of Changing Interface Names Using oifcfg

For the Private interface, the database will use the interface stored in the OCR and defined as a 'cluster_interconnect' for cache fusion traffic. The cluster_interconnect information is available at startup in the alert log, after the parameter listing - for example:

For pre 11.2.0.2:
Cluster communication is configured to use the following interface(s) for this instance
192.168.1.1

For 11.2.0.2+: (HAIP address will show in alert log instead of private IP)
Cluster communication is configured to use the following interface(s) for this instance
169.254.86.97
If this is incorrect, the instance must be restarted once the OCR entry is corrected. This applies to ASM instances and database instances alike. On Windows systems, after shutting down the instance, it is also required to stop/restart the OracleService (or OracleASMService) before the OCR will be re-read.
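Besides the alert log, the interconnect actually in use can be queried at runtime; a sketch using the V$CLUSTER_INTERCONNECTS view:

```sql
-- Which interconnect(s) this instance picked up, and where the choice came from
select name, ip_address, is_public, source
  from v$cluster_interconnects;

-- Across all instances (on 11.2.0.2+ with HAIP these are 169.254.x.x addresses)
select inst_id, name, ip_address
  from gv$cluster_interconnects;
```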

Oifcfg Usage

To see the full options of oifcfg, simply type:

$ $ORA_CRS_HOME/bin/oifcfg


Applying PSU 11.2.0.3.8 on 11.2.0.3 hits the bug "Clusterware home location does not exist"

11g PSUs are normally applied with the 'auto' method of opatch, but on 11.2.0.3 on the AIX platform I was unlucky enough to hit bug 16835171, which breaks the auto apply with the following errors:

Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
Either does not exist or is not readable
Make sure the file exists and it has read and execute access
Clusterware home location does not exist

The workaround is to apply the patch manually, which is rather tedious. The official note reads:

Solution
At the time of this writing, the bug is still being worked by Development.
The workaround is to apply the patch manually.
The manual instructions are part of the patch readme.

For the manual PSU apply procedure, see:
http://www.ludatou.com/index.php/archives/1069