
When a user is granted connect and resource, the user also ends up with the UNLIMITED TABLESPACE system privilege (it comes along with resource).
If that user is then granted DBA and the DBA role is later revoked, the revoke also takes away UNLIMITED TABLESPACE.
I ran into this recently: the business user could no longer allocate space in its tablespaces, which caused a fairly serious incident, so I am recording it here.

Test as follows:
1. Create a user, grant connect, resource and dba, and verify
SQL> create user test identified by test;

User created.

SQL> grant connect,resource to test;

Grant succeeded.

SQL>
SQL> select * from dba_role_privs where grantee='TEST';

GRANTEE                        GRANTED_ROLE                   ADM DEF
------------------------------ ------------------------------ --- ---
TEST                           RESOURCE                       NO  YES
TEST                           CONNECT                        NO  YES

SQL> select * from dba_sys_privs where grantee='TEST';

GRANTEE                        PRIVILEGE                                ADM
------------------------------ ---------------------------------------- ---
TEST                           UNLIMITED TABLESPACE                     NO

SQL> grant dba to test;

Grant succeeded.

SQL> select * from dba_role_privs where grantee='TEST';

GRANTEE                        GRANTED_ROLE                   ADM DEF
------------------------------ ------------------------------ --- ---
TEST                           RESOURCE                       NO  YES
TEST                           DBA                            NO  YES
TEST                           CONNECT                        NO  YES

SQL> select * from dba_sys_privs where grantee='TEST';

GRANTEE                        PRIVILEGE                                ADM
------------------------------ ---------------------------------------- ---
TEST                           UNLIMITED TABLESPACE                     NO

2. Revoke the dba role and check privileges
SQL> revoke dba from test;

Revoke succeeded.

SQL> select * from dba_role_privs where grantee='TEST';

GRANTEE                        GRANTED_ROLE                   ADM DEF
------------------------------ ------------------------------ --- ---
TEST                           RESOURCE                       NO  YES
TEST                           CONNECT                        NO  YES

SQL> select * from dba_sys_privs where grantee='TEST';

no rows selected

SQL> grant connect,resource to test;

Grant succeeded.

SQL> select * from dba_role_privs where grantee='TEST';

GRANTEE                        GRANTED_ROLE                   ADM DEF
------------------------------ ------------------------------ --- ---
TEST                           RESOURCE                       NO  YES
TEST                           CONNECT                        NO  YES

SQL> select * from dba_sys_privs where grantee='TEST';

GRANTEE                        PRIVILEGE                                ADM
------------------------------ ---------------------------------------- ---
TEST                           UNLIMITED TABLESPACE                     NO
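
As the test shows, simply re-granting resource brings UNLIMITED TABLESPACE back. Alternatively, the privilege can be granted directly, or an explicit per-tablespace quota can be set instead (a minimal sketch; USERS is a hypothetical tablespace name):

SQL> grant unlimited tablespace to test;
SQL> alter user test quota unlimited on users;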

Revoking the dba role from a user holding connect, resource and dba removes UNLIMITED TABLESPACE and can bring the business down

At the OS level, it is recommended to monitor whether memory usage is growing steadily and whether the error is triggered once usage reaches a certain threshold:
svmon -G -i 2 2
ipcs -a

Sort processes by memory usage to find which ones are consuming the most:
su -
# ps avx | head -1; ps avx | grep -v PID | sort -rn +6 > ps_avx.output

For those processes, examine why they are using so much memory:
svmon -P <PID>
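
For example, to feed the top memory consumer from the ps listing straight into svmon (a sketch mirroring the sort syntax used above, and assuming AIX ps avx output with the PID in the first column):

su -
# PID=$(ps avx | grep -v PID | sort -rn +6 | head -1 | awk '{print $1}')
# svmon -P $PID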

Related documents:
AIX: Determining Oracle Memory Usage On AIX [ID 123754.1]
Diagnosing Oracle Memory on AIX using SVMON ( Doc ID 166491.1 )
AIX: Database performance gets slower the longer the database is running [ID 316533.1]

Troubleshooting approach for high memory usage on AIX systems running Oracle databases

Ran into a case on 11gR2 RAC where WRH$_ACTIVE_SESSION_HISTORY was not purged automatically and SYSAUX grew excessively. Unlike the cases usually described online, the snapshot information looked fine: the settings returned by select a.snap_interval,a.retention,a.topnsql from dba_hist_wr_control a; had not changed, the partition-to-snapshot mapping covered only the last 7 days, and the baseline information was fine as well.

Only one partition of WRH$_ACTIVE_SESSION_HISTORY, WRH$_ACTIVE_SES_MXDB_MXSN, was abnormal at 6 GB, and no snapshot information could be found for that partition.
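
Before applying the MOS note below, it is worth confirming what is actually occupying SYSAUX; a quick check against the standard views:

SQL> select occupant_name, space_usage_kbytes from v$sysaux_occupants order by space_usage_kbytes desc;
SQL> select segment_name, partition_name, bytes/1024/1024/1024 gb from dba_segments
     where segment_name='WRH$_ACTIVE_SESSION_HISTORY' order by bytes desc;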

The problem was solved with the method from the MOS note WRH$_ACTIVE_SESSION_HISTORY Does Not Get Purged Based Upon the Retention Policy (Doc ID 387914.1): alter session set "_swrf_test_action" = 72;

The solution from that MOS note is attached below:

Cause
Oracle decides what rows need to be purged based on the retention policy. There is a special mechanism which is used in the case of the large AWR tables where we store the snapshot data in partitions. One method of purging data from these tables is by removing partitions that only contain rows that have exceeded the retention criteria. During the nightly purge task, we only drop the partition if all the data in the partition has expired. If the partition contains at least one row which, according to the retention policy, shouldn't be removed, then the partition won't be dropped and as such the table will contain old data.

If partition splits do not occur (for whatever reason), then we can end up with a situation where we have to wait for the latest entries to expire before the partition that they sit in can be removed. This can mean that some of the older entries can be retained significantly past their expiry date. The result of this is that the data is not purged as expected.

Solution
A potential solution to this issue is to manually split the partitions of the partitioned AWR objects so that there is more chance of the split partitions being purged. You will still have to wait for all the rows in the new partitions to reach their retention time, but with split partitions there is more chance of this happening. You can manually split the partitions using the following undocumented command:

alter session set "_swrf_test_action" = 72;
This performs a single split of all the AWR partitions.

Check the partition details for the offending table before the split:
SELECT owner,
       segment_name,
       partition_name,
       segment_type,
       bytes/1024/1024/1024 size_gb
FROM   dba_segments
WHERE  segment_name='WRH$_ACTIVE_SESSION_HISTORY';
Split the partitions so that there is more chance of the smaller partition being purged:
alter session set "_swrf_test_action" = 72;
NOTE: This command will split partitions for ALL partitioned AWR objects. It also initiates a single split; it does not need to be disabled and will need to be repeated if multiple splits are required.

Check the partition details for the offending table after the split:
SELECT owner,
       segment_name,
       partition_name,
       segment_type,
       bytes/1024/1024/1024 size_gb
FROM   dba_segments
WHERE  segment_name='WRH$_ACTIVE_SESSION_HISTORY';

With smaller partitions it is expected that some will be automatically removed when the retention period of all the rows within each partition is reached.

As an alternative, you could purge data based upon a snapshot range. Depending on the snapshots chosen, this may remove data that has not yet reached the retention limit, so it may not be suitable for all cases. The following script prints the min and max snap_id in each partition.

set serveroutput on
declare
  CURSOR cur_part IS
    SELECT partition_name FROM dba_tab_partitions
    WHERE table_name = 'WRH$_ACTIVE_SESSION_HISTORY';

  query1 varchar2(200);
  query2 varchar2(200);

  TYPE partrec IS RECORD (snapid number, dbid number);
  TYPE partlist IS TABLE OF partrec;

  Outlist partlist;
begin
  dbms_output.put_line('PARTITION NAME SNAP_ID DBID');
  dbms_output.put_line('--------------------------- ------- ----------');

  for part in cur_part loop
    query1 := 'select min(snap_id), dbid from sys.WRH$_ACTIVE_SESSION_HISTORY partition ('||part.partition_name||') group by dbid';
    execute immediate query1 bulk collect into OutList;

    if OutList.count > 0 then
      for i in OutList.first..OutList.last loop
        dbms_output.put_line(part.partition_name||' Min '||OutList(i).snapid||' '||OutList(i).dbid);
      end loop;
    end if;

    query2 := 'select max(snap_id), dbid from sys.WRH$_ACTIVE_SESSION_HISTORY partition ('||part.partition_name||') group by dbid';
    execute immediate query2 bulk collect into OutList;

    if OutList.count > 0 then
      for i in OutList.first..OutList.last loop
        dbms_output.put_line(part.partition_name||' Max '||OutList(i).snapid||' '||OutList(i).dbid);
        dbms_output.put_line('---');
      end loop;
    end if;

  end loop;
end;
/

Once you have split the partitions and identified a partition with a range of snap_ids that can be deleted, you can free up the space by dropping a snapshot range that matches the high and low snap_ids of the partition:

DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE(
   low_snap_id   IN NUMBER,
   high_snap_id  IN NUMBER,
   dbid          IN NUMBER DEFAULT NULL);
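
For example, if the min/max listing above shows a partition spanning snap_ids 3000 to 3200 (hypothetical values) that are all past retention, that range can be dropped with:

SQL> exec DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE(low_snap_id => 3000, high_snap_id => 3200);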

11gR2 RAC: WRH$_ACTIVE_SESSION_HISTORY not purged automatically, causing excessive SYSAUX growth

————————–
Change the private IP to an IP in a different subnet.
----------------- Before:
192.168.58.1 bys1-priv.bys.com bys1-priv
192.168.58.2 bys2-priv.bys.com bys2-priv
----------------- After:
192.168.59.1 bys1-priv.bys.com bys1-priv
192.168.59.2 bys2-priv.bys.com bys2-priv
Reference MOS note: How to Modify Private Network Information in Oracle Clusterware (Doc ID 283684.1)
Starting with 11.2 Grid Infrastructure, the private network configuration is stored in both the OCR and the gpnp profile. If the private network is unavailable or its definition is incorrect, the CRSD process will not start, and no further changes can be made to the OCR. Note that manually modifying the gpnp profile is not supported.
-------- Contents
1. Back up the gpnp profile
2. Verify the cluster is running and check OS-level NIC information
3. Check and modify the private network configuration (single node only)
4. Stop CRS (on both nodes)
5. Change the IP at the OS level, update /etc/hosts (on both nodes), and verify connectivity
6. Restart the cluster
7. Remove the old private network definition and verify
8. Check cluster status
---------------------------
###########################
---------------------------
Detailed steps (only node 1 is shown; node 2 follows the same steps):

1. Back up the gpnp profile
$ cd $GRID_HOME/gpnp/<hostname>/profiles/peer/

[grid@bys1 ~]$ cd /u01/11.2.0/grid/gpnp/bys1/profiles/peer/
[grid@bys1 peer]$ ls
pending.xml profile.old profile_orig.xml profile.xml
[grid@bys1 peer]$ cp profile.xml profile.xmlbak

[grid@bys2 ~]$ cd /u01/11.2.0/grid/gpnp/bys2/profiles/peer/
[grid@bys2 peer]$ cp profile.xml profile.xmlbak

2. Verify the cluster is running and check OS-level NIC information
[grid@bys2 peer]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE bys1
ONLINE ONLINE bys2
ora.LISTENER.lsnr
ONLINE ONLINE bys1
ONLINE ONLINE bys2
ora.asm
ONLINE ONLINE bys1 Started
ONLINE ONLINE bys2 Started
ora.gsd
OFFLINE OFFLINE bys1
OFFLINE OFFLINE bys2
ora.net1.network
ONLINE ONLINE bys1
ONLINE ONLINE bys2
ora.ons
ONLINE ONLINE bys1
ONLINE ONLINE bys2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE bys1
ora.bys1.vip
1 ONLINE ONLINE bys1
ora.bys2.vip
1 ONLINE ONLINE bys2
ora.bysrac.db
1 ONLINE ONLINE bys1 Open
2 OFFLINE OFFLINE Instance Shutdown
ora.cvu
1 ONLINE ONLINE bys2
ora.oc4j
1 ONLINE ONLINE bys2
ora.scan1.vip
1 ONLINE ONLINE bys1

###############
3. Check and modify the private network configuration (single node only)

[grid@bys1 peer]$ oifcfg getif
eth0 192.168.57.0 global public
eth1 192.168.58.0 global cluster_interconnect
[grid@bys1 peer]$ oifcfg iflist
eth0 192.168.57.0
eth1 192.168.58.0
eth1 169.254.0.0

[grid@bys1 peer]$ oifcfg setif -global eth1/192.168.59.0:cluster_interconnect
[grid@bys1 peer]$ oifcfg getif
eth0 192.168.57.0 global public
eth1 192.168.58.0 global cluster_interconnect
eth1 192.168.59.0 global cluster_interconnect

##############
4. Stop CRS (on both nodes)
[root@bys1 ~]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'bys1'
………………
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'bys1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@bys1 ~]# crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.

###############
5. Change the IP at the OS level, update /etc/hosts (on both nodes), and verify connectivity

----------------- Before:
192.168.58.1 bys1-priv.bys.com bys1-priv
192.168.58.2 bys2-priv.bys.com bys2-priv
----------------- After:
192.168.59.1 bys1-priv.bys.com bys1-priv
192.168.59.2 bys2-priv.bys.com bys2-priv
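
On a RHEL-style system (an assumption; adapt to your platform), the new address goes into the interface configuration file, after which the network service is restarted. A sketch for node 1 with the values used here:

[root@bys1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.59.1
NETMASK=255.255.255.0
ONBOOT=yes
[root@bys1 ~]# service network restart

Then verify with ifconfig and ping as below.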

[root@bys2 network-scripts]# ifconfig

eth1 Link encap:Ethernet HWaddr 08:00:27:4B:EE:4E
inet addr:192.168.59.2 Bcast:192.168.59.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe4b:ee4e/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:13462 errors:0 dropped:0 overruns:0 frame:0
TX packets:15323 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:6890589 (6.5 MiB) TX bytes:11488864 (10.9 MiB)
[root@bys2 network-scripts]# ping bys1-priv
PING bys1-priv.bys.com (192.168.59.1) 56(84) bytes of data.
64 bytes from bys1-priv.bys.com (192.168.59.1): icmp_seq=1 ttl=64 time=2.85 ms
###############
6. Restart the cluster

[root@bys2 ~]# crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
[root@bys2 ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

###############
7. Remove the old private network definition and verify
[grid@bys1 ~]$ oifcfg getif
eth0 192.168.57.0 global public
eth1 192.168.58.0 global cluster_interconnect
eth1 192.168.59.0 global cluster_interconnect
[grid@bys1 ~]$ oifcfg delif -global eth1/192.168.58.0:cluster_interconnect
[grid@bys1 ~]$ oifcfg getif
eth0 192.168.57.0 global public
eth1 192.168.59.0 global cluster_interconnect

###############
8. Check cluster status
[grid@bys1 ~]$ gpnptool get
Warning: some command line parameters were defaulted. Resulting command line:
/u01/11.2.0/grid/bin/gpnptool.bin get -o-

<?xml version="1.0" encoding="UTF-8"?><gpnp:GPnP-Profile Version="1.0" xmlns="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:gpnp="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:orcl="http://www.oracle.com/gpnp/2005/11/gpnp-profile" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.grid-pnp.org/2005/11/gpnp-profile gpnp-profile.xsd" ProfileSequence="8" ClusterUId="405460e2b8c24fd8bf9acebf33654b8b" ClusterName="bysrac" PALocation=""><gpnp:Network-Profile><gpnp:HostNetwork id="gen" HostName="*"><gpnp:Network id="net1" IP="192.168.57.0" Adapter="eth0" Use="public"/><gpnp:Network id="net4" Adapter="eth1" IP="192.168.59.0" Use="cluster_interconnect"/></gpnp:HostNetwork></gpnp:Network-Profile><orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/><orcl:ASM-Profile id="asm" DiscoveryString="/dev/asm*" SPFile="+DATA/bysrac/asmparameterfile/registry.253.927488691"/><ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:SignedInfo><ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><ds:Reference URI=""><ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <InclusiveNamespaces xmlns="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="gpnp orcl xsi"/></ds:Transform></ds:Transforms><ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>TQTgFHyy0z8ROUI6tfleTxsQwdY=</ds:DigestValue></ds:Reference></ds:SignedInfo><ds:SignatureValue>F3N8CS7gBNJXMfMHZuP/FckLtynkybtNFq3TbHuU9yrZEuDuzMA1EtMmId7W7YPDS1wBZ6Qrh4hKMDuXfWTSR6xHZQ9iFM6mC6vHTa13+7AGYopoat5iXnGd050jj/w/VMhiYUQuP5g5O28SUu6lHRlhzpnPZLUkyhKvmhRdpIM=</ds:SignatureValue></ds:Signature></gpnp:GPnP-Profile>
Success.

[grid@bys1 ~]$ ifconfig
eth1 Link encap:Ethernet HWaddr 08:00:27:64:B0:1C
inet addr:192.168.59.1 Bcast:192.168.59.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe64:b01c/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:21137 errors:0 dropped:0 overruns:0 frame:0
TX packets:18486 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:14069238 (13.4 MiB) TX bytes:11326935 (10.8 MiB)

eth1:1 Link encap:Ethernet HWaddr 08:00:27:64:B0:1C
inet addr:169.254.167.252 Bcast:169.254.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

 

11gR2 RAC IP-change series (4): changing the private IP to a different subnet

1. NTP not configured, and the /etc/ntp.conf file removed
2016-12-10 22:39:17.459: [ CTSS][156329728]ctss_main: The Cluster Time Synchronization Service is started with option [reboot].
2016-12-10 22:39:17.459: [ CTSS][156329728]ctss_scls_init: SCLs Context is 0x19bd3b0
[ clsdmt][149878528]Listening to (ADDRESS=(PROTOCOL=ipc)(KEY=bys1DBG_CTSSD))
2016-12-10 22:39:17.469: [ clsdmt][149878528]PID for the Process [2460], connkey 11
2016-12-10 22:39:17.470: [ clsdmt][149878528]Creating PID [2460] file for home /u01/11.2.0/grid host bys1 bin ctss to /u01/11.2.0/gri
d/ctss/init/
2016-12-10 22:39:17.470: [ clsdmt][149878528]Writing PID [2460] to the file [/u01/11.2.0/grid/ctss/init/bys1.pid]
2016-12-10 22:39:18.193: [ CTSS][149878528]ctss_checkcb: clsdm requested check alive. checkcb_data{mode[0x40], offset[0 ms]}, lengt
h=[8].
2016-12-10 22:39:18.206: [ CTSS][156329728]ctss_css_init: CSS Context is 0x19ce210
2016-12-10 22:39:18.206: [ CTSS][156329728]ctss_init: CTSS production mode
2016-12-10 22:39:18.206: [ CTSS][156329728]ctss_init: Env var CTSS_REBOOT is undefined or contains non-boolean value. Ignoring CTSS_REBOOT.
---- The above shows CTSSD starting as the cluster starts, along with its PID information.
2016-12-10 22:39:18.206: [ CTSS][156329728]sclsctss_gvss2: NTP default pid file not found
2016-12-10 22:39:18.206: [ CTSS][156329728]sclsctss_gvss8: Return [0] and NTP status [1].
2016-12-10 22:39:18.206: [ CTSS][156329728]ctss_check_vendor_sw: Vendor time sync software is not detected. status [1].
2016-12-10 22:39:18.206: [ GIPC][156329728] gipcCheckInitialization: possible incompatible non-threaded init from [prom.c : 694], original from [clsss.c : 5358]
------- No NTP or other vendor time-sync software was detected, so CTSSD runs in ACTIVE mode (inferred by contrast with the log below, where ntp.conf is present).
2016-12-10 22:39:18.210: [ default][156329728]clsvactversion:4: Retrieving Active Version from local storage.
2016-12-10 22:39:18.211: [ CTSS][156329728]clsctss_r_av4: Current active version [11.2.0.4.0] [186647552].
2016-12-10 22:39:18.213: [ CRSCCL][156329728]clsCclInit called by process 2460: groupname=CTSSGROUP commOptions=0 clusterType=0
2016-12-10 22:39:18.213: [ CRSCCL][156329728]Software version: 11.2.0.4.0.
2016-12-10 22:39:18.214: [ OCRMSG][156329728]prom_waitconnect: CONN NOT ESTABLISHED (0,29,1,2)

##########################################
2. NTP not configured, but the /etc/ntp.conf file exists

2016-12-10 23:03:38.020: [ CTSS][3066242816]ctss_main: The Cluster Time Synchronization Service is started with option [reboot].
2016-12-10 23:03:38.020: [ CTSS][3066242816]ctss_scls_init: SCLs Context is 0x11363b0
[ clsdmt][3059791616]Listening to (ADDRESS=(PROTOCOL=ipc)(KEY=bys1DBG_CTSSD))
2016-12-10 23:03:38.026: [ clsdmt][3059791616]PID for the Process [10797], connkey 11
2016-12-10 23:03:38.026: [ clsdmt][3059791616]Creating PID [10797] file for home /u01/11.2.0/grid host bys1 bin ctss to /u01/11.2.0/grid/ctss/init/
2016-12-10 23:03:38.027: [ clsdmt][3059791616]Writing PID [10797] to the file [/u01/11.2.0/grid/ctss/init/bys1.pid]
2016-12-10 23:03:38.915: [ CTSS][3059791616]ctss_checkcb: clsdm requested check alive. checkcb_data{mode[0x40], offset[0 ms]}, length=[8].
2016-12-10 23:03:38.918: [ CTSS][3066242816]ctss_css_init: CSS Context is 0x1147210
2016-12-10 23:03:38.918: [ CTSS][3066242816]ctss_init: CTSS production mode
2016-12-10 23:03:38.919: [ CTSS][3066242816]ctss_init: Env var CTSS_REBOOT is undefined or contains non-boolean value. Ignoring CTSS_REBOOT.

---- The above shows CTSSD starting as the cluster starts, along with its PID information.

2016-12-10 23:03:38.919: [ CTSS][3066242816]sclsctss_gvss1: NTP default config file found ---- the NTP config file was detected
2016-12-10 23:03:38.919: [ CTSS][3066242816]sclsctss_gvss8: Return [0] and NTP status [2].
2016-12-10 23:03:38.919: [ CTSS][3066242816]ctss_check_vendor_sw: Vendor time sync software is detected. status [2].
2016-12-10 23:03:38.919: [ CTSS][3066242816]ctss_check_vendor_sw: Ctssd is switching to observer role ---- CTSSD switches to the observer role

2016-12-10 23:03:38.920: [ GIPC][3066242816] gipcCheckInitialization: possible incompatible non-threaded init from [prom.c : 694], original from [clsss.c : 5358]
2016-12-10 23:03:38.921: [ default][3066242816]clsvactversion:4: Retrieving Active Version from local storage.
2016-12-10 23:03:38.923: [ CTSS][3066242816]clsctss_r_av4: Current active version [11.2.0.4.0] [186647552].
2016-12-10 23:03:38.924: [ CRSCCL][3066242816]clsCclInit called by process 10797: groupname=CTSSGROUP commOptions=0 clusterType=0
2016-12-10 23:03:38.925: [ CRSCCL][3066242816]Software version: 11.2.0.4.0.
2016-12-10 23:03:38.925: [ OCRMSG][3066242816]prom_waitconnect: CONN NOT ESTABLISHED (0,29,1,2)
2016-12-10 23:03:38.926: [ OCRMSG][3066242816]GIPC error [29] msg [gipcretConnectionRefused]
2016-12-10 23:03:38.926: [ OCRMSG][3066242816]prom_connect: error while waiting for connection complete [24]
2016-12-10 23:03:38.926: [ CRSCCL][3066242816]Failed to init OCR to get active version. PROC-32: Cluster Ready Services on the local node is not running Messaging error [gipcretConnectionRefused] [29]Checking active version in OLR.
2016-12-10 23:03:38.928: [ default][3066242816]clsvactversion:4: Retrieving Active Version from local storage.
2016-12-10 23:03:38.930: [ CRSCCL][3066242816]Active version: 11.2.0.4.0.
2016-12-10 23:03:38.931: [ CRSCCL][3066242816]USING GIPC ============
2016-12-10 23:03:38.931: [ CRSCCL][3066242816]clsCclGipcListen: Attempting to listen on gipcha://bys1:CTSSGROUP_1.
2016-12-10 23:03:38.932: [GIPCHGEN][3066242816] gipchaInternalRegister: Initializing HA GIPC
2016-12-10 23:03:38.932: [GIPCHGEN][3066242816] gipchaNodeCreate: adding new node 0x1271b40 { host '', haName '981d-9f80-3574-cb76', srcLuid 2b8bd76d-00000000, dstLuid 00000000-00000000 numInf 0, contigSeq 0, lastAck 0, lastValidAck 0, sendSeq [0 : 0], createTime 1227934, sentRegister 0, localMonitor 0, flags 0x1 }
2016-12-10 23:03:38.932: [GIPCHTHR][3051460352] gipchaWorkerThread: starting worker thread hctx 0x125d7e0 [0000000000000010] { gipchaContext : host 'bys1', name '981d-9f80-3574-cb76', luid '2b8bd76d-00000000', numNode 0, numInf 0, usrFlags 0x0, flags 0xc000 }
2016-12-10 23:03:38.933: [GIPCHDEM][3049359104] gipchaDaemonThread: starting daemon thread hctx 0x125d7e0 [0000000000000010] { gipchaContext : host 'bys1', name '981d-9f80-3574-cb76', luid '2b8bd76d-00000000', numNode 0, numInf 0, usrFlags 0x0, flags 0xc000 }
2016-12-10 23:03:38.958: [GIPCHGEN][3049359104] gipchaNodeAddInterface: adding interface information for inf 0x7faaa400c0c0 { host '', haName '981d-9f80-3574-cb76', local (nil), ip '192.168.59.1', subnet '192.168.59.0', mask '255.255.255.0', mac '08-00-27-64-b0-1c', ifname 'eth1', numRef 0, numFail 0, idxBoot 0, flags 0x1 }
2016-12-10 23:03:39.160: [GIPCXCPT][3066242816] gipchaInternalResolve: failed to resolve ret gipcretKeyNotFound (36), host 'bys1', port 'CTSSGROUP_1', hctx 0x125d7e0 [0000000000000010] { gipchaContext : host 'bys1', name '981d-9f80-3574-cb76', luid '2b8bd76d-00000000', numNode 0, numInf 1, usrFlags 0x0, flags 0x5 }, ret gipcretKeyNotFound (36)
2016-12-10 23:03:39.160: [GIPCHGEN][3066242816] gipchaResolveF [gipcmodGipcResolve : gipcmodGipc.c : 809]: EXCEPTION[ ret gipcretKeyNotFound (36) ] failed to resolve ctx 0x125d7e0 [0000000000000010] { gipchaContext : host 'bys1', name '981d-9f80-3574-cb76', luid '2b8bd76d-00000000', numNode 0, numInf 1, usrFlags 0x0, flags 0x5 }, host 'bys1', port 'CTSSGROUP_1', flags 0x0
2016-12-10 23:03:39.161: [GIPCHTHR][3051460352] gipchaWorkerCreateInterface: created local interface for node 'bys1', haName '981d-9f80-3574-cb76', inf 'udp://192.168.59.1:63850'
2016-12-10 23:03:39.162: [ CRSCCL][3066242816]gipcListen() Listening on gipcha://bys1:CTSSGROUP_1
2016-12-10 23:03:39.164: [ CRSCCL][3047257856]CSS Group Registration complete.

2016-12-10 23:03:39.164: [ CRSCCL][3047257856]cclGetMemberData called
2016-12-10 23:03:39.165: [ CRSCCL][3047257856]Member (1, 1228164, bys1:11.2.0.4.0) @ found.

2016-12-10 23:03:39.166: [ CRSCCL][3047257856]Obtained first membership map.

2016-12-10 23:03:39.166: [ CRSCCL][3047257856]Dumping member data ——————
2016-12-10 23:03:39.166: [ CRSCCL][3047257856]Member (1, 1228164) on node bys1 port=.
2016-12-10 23:03:39.167: [ CRSCCL][3047257856]Done ——————
2016-12-10 23:03:39.167: [ CTSS][3066242816]ctss_ccl_init4: clsCclInitWithCtx() finished. The local nodenum is [1].
2016-12-10 23:03:39.167: [ CTSS][3066242816]ctss_ccl_init6: Retrieved grpmap.
2016-12-10 23:03:39.168: [ CTSS][3066242816]ctss_ccl_init99: Successfully initialize CCL
2016-12-10 23:03:39.168: [ CTSS][3066242816]ctss_init: Spawn completed. Waiting for threads to join
2016-12-10 23:03:39.168: [ CTSS][3047257856]ctsselect_msccb1: Receive membership change event. Inc num[1] New master [1] members count[1]
2016-12-10 23:03:39.168: [ CTSS][3047257856]ctsselect_msccb9: The local node [1] is the CTSS master
2016-12-10 23:03:39.168: [ CRSCCL][3047257856]clsCclGetPriMemData: memDataSize[16] is too small. Requires [256]. Returns [14]
2016-12-10 23:03:39.168: [ CTSS][3047257856]ctsselect_gpd1: Size of pridata for node [1] is [256]. Passed [16].
2016-12-10 23:03:39.169: [ CTSS][3047257856](:ctss_e_rmmsr_2_1:: Failed to retrieve peer member data [14]. Need to alloc bigger buffer [16].
2016-12-10 23:03:39.169: [ CTSS][3047257856](:ctss_e_rmmsr_4:): Pri data for member [1]. {Version [1] Node [-1] SW version [186647552] Mode [0x62]}
2016-12-10 23:03:39.169: [ CTSS][3047257856](:ctss_e_rmmsr_5:): Detected vendor time sync sw on peer [1].
2016-12-10 23:03:39.169: [ CTSS][3047257856](:ctss_e_rmmsr_9:): Return [0]
2016-12-10 23:03:39.169: [ CRSCCL][3047257856]Waiting for reconfigs
2016-12-10 23:03:39.170: [ CRSCCL][3047257856]clsCclGetPriMemberData: Detected pridata change for node[1]. Retrieving it to the cache.
[ CTSS][3038471936]ctsselect_msm: Slave Monitor thread started
2016-12-10 23:03:39.170: [ CTSS][3038471936]ctsselect_msm: CTSS mode is [0xee]
[ CTSS][3040573184]clsctsselect_mm: Master Monitor thread started
2016-12-10 23:03:39.172: [ CTSS][3036370688]ctsselect_vermon5: Successfully registered with [crs_version]
2016-12-10 23:03:39.172: [ CTSS][3036370688]ctsselect_vermon7: Expecting clssgsevGRPPRIV event. Ignoring 1 event.
2016-12-10 23:03:39.172: [ CRSCCL][3044775680]cclCommunicationHandler started.
2016-12-10 23:03:39.923: [ CTSS][3059791616]ctss_checkcb: clsdm requested check alive. checkcb_data{mode[0xee], offset[0 ms]}, length=[8].
2016-12-10 23:03:39.927: [ CTSS][3059791616]ctss_checkcb: clsdm requested check alive. checkcb_data{mode[0xee], offset[0 ms]}, length=[8].
2016-12-10 23:03:40.921: [ CTSS][3059791616]ctss_checkcb: clsdm requested check alive. checkcb_data{mode[0xee], offset[0 ms]}, length=[8].
2016-12-10 23:03:40.924: [ CTSS][3059791616]ctss_checkcb: clsdm requested check alive. checkcb_data{mode[0xee], offset[0 ms]}, length=[8].
2016-12-10 23:03:40.931: [ CTSS][3059791616]ctss_checkcb: clsdm requested check alive. checkcb_data{mode[0xee], offset[0 ms]}, length=[8].
2016-12-10 23:03:40.934: [ CTSS][3059791616]ctss_checkcb: clsdm requested check alive. checkcb_data{mode[0xee], offset[0 ms]}, length=[8].
2016-12-10 23:04:04.174: [ CTSS][3036370688]ctsselect_vermon7: Expecting clssgsevGRPPRIV event. Ignoring 1 event.
2016-12-10 23:04:04.174: [ CTSS][3036370688]ctsselect_vermon8: Received clssgsevGRPPRIV event.
2016-12-10 23:04:04.175: [ CTSS][3036370688]ctsselect_vermon10_1: Retrieved av_data from grp pridata. Upgrade state [11].
2016-12-10 23:04:04.175: [ CTSS][3036370688]ctsselect_vermon11: Retrieved Active Version [186647552].
2016-12-10 23:04:04.175: [ CTSS][3036370688]ctsselect_vermon12: Active version[186647552] didn’t change.
2016-12-10 23:04:09.171: [ CTSS][3040573184]sclsctss_gvss1: NTP default config file found
2016-12-10 23:04:09.171: [ CTSS][3040573184]sclsctss_gvss8: Return [0] and NTP status [2].
2016-12-10 23:04:09.171: [ CTSS][3040573184]ctss_check_vendor_sw: Vendor time sync software is detected. status [2].
2016-12-10 23:04:09.926: [ CTSS][3059791616]ctss_checkcb: clsdm requested check alive. checkcb_data{mode[0xee], offset[0 ms]}, length=[8].
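
Besides reading ctssd.log, the current mode can also be checked directly with crsctl (the exact message wording may vary by version; shown here as an assumption):

[grid@bys1 ~]$ crsctl check ctss
CRS-4700: The Cluster Time Synchronization Service is in Observer mode.

With no NTP configuration present, the same command reports Active mode together with the current time offset instead.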

11gR2: using ntp.conf to observe in ctssd.log whether CTSSD runs in ACTIVE or observer mode