
Before 12.2, if you wanted query rewrite to return real-time data, the materialized view had to be refreshed ON COMMIT. But ON COMMIT refresh carries many restrictions, so in practice MVs were usually refreshed ON DEMAND instead (whether by complete or by fast refresh).

With ON DEMAND refresh you typically schedule a job to refresh the MV periodically. Between one job run and the next, any changes to the base table are not reflected in queries answered by the MV. Real-time materialized views solve exactly this problem: you get up-to-date results without having to refresh the MV frequently.
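As a sketch of that pre-12.2 pattern (the job name and interval here are illustrative assumptions, not part of the original test), a scheduled fast refresh might look like:

```sql
-- Illustrative only: fast-refresh an ON DEMAND MV every 10 minutes.
-- Job name and interval are assumptions for this example.
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'REFRESH_MV_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''MV_OLD'', method => ''F''); END;',
    repeat_interval => 'FREQ=MINUTELY; INTERVAL=10',
    enabled         => TRUE);
END;
/
```

Any base-table change committed between two runs stays invisible to rewritten queries until the next run, which is the gap the real-time MV closes.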

 

The following test is for reference only:

 

SQL> create table t1 (x not null primary key, y not null) as
2 select rownum x, mod(rownum, 10) y from dual connect by level <= 1000000;

Table created.

SQL> create materialized view log on t1 with rowid (x, y) including new values;

Materialized view log created.

SQL>
SQL> create materialized view mv_old
2 refresh fast on demand
3 enable on query computation
4 enable query rewrite
5 as
6 select y , count(*) c1
7 from t1
8 group by y;

Materialized view created.

The key clause when creating the MV is ENABLE ON QUERY COMPUTATION.

SQL> create table t2 (x not null primary key, y not null) as
2 select rownum x, mod(rownum, 10) y from dual connect by level <= 1000000;

Table created.

SQL> create materialized view log on t2 with rowid (x, y) including new values;

Materialized view log created.

SQL>
SQL> create materialized view mv_new
2 refresh fast on demand
3 enable on query computation
4 enable query rewrite
5 as
6 select y , count(*) c1
7 from t2
8 group by y;

Materialized view created.

SQL> show parameter rewrite

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
query_rewrite_enabled                string      TRUE
query_rewrite_integrity              string      enforced

SQL> set autotrace on explain stat

SQL> select mview_name,staleness,on_query_computation from user_mviews;

MVIEW_NAME                               STALENESS           O
---------------------------------------- ------------------- -
MV_OLD                                   FRESH               N
MV_NEW                                   FRESH               Y

SQL>
SQL> select count(*) from MLOG$_T1;

COUNT(*)
----------
0

SQL> select count(*) from MLOG$_T2;

COUNT(*)
----------
0

SQL>

SQL> insert into t1
2 select 1000000+rownum, mod(rownum, 3) from dual connect by level <= 999;

999 rows created.

SQL>
SQL> insert into t2
2 select 1000000+rownum, mod(rownum, 3) from dual connect by level <= 999;

999 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> select mview_name,staleness,on_query_computation from user_mviews;

MVIEW_NAME                               STALENESS           O
---------------------------------------- ------------------- -
MV_OLD                                   NEEDS_COMPILE       N
MV_NEW                                   NEEDS_COMPILE       Y

SQL>
SQL> select count(*) from MLOG$_T1;

COUNT(*)
----------
999

SQL> select count(*) from MLOG$_T2;

COUNT(*)
----------
999

SQL>

Because the data is stale and query_rewrite_integrity has not been set to stale_tolerated, the conventional MV is not used for query rewrite:
SQL> select y as y_new_parse1, count(*) from t1
2 group by y;

Y_NEW_PARSE1 COUNT(*)
------------ ----------
1 100333
6 100000
2 100333
4 100000
5 100000
8 100000
3 100000
7 100000
9 100000
0 100333

10 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 136660032

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |    10 |    30 |   515   (4)| 00:00:01 |
|   1 |  HASH GROUP BY     |      |    10 |    30 |   515   (4)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T1   |  1000K|  2929K|   498   (1)| 00:00:01 |
---------------------------------------------------------------------------

Statistics
----------------------------------------------------------
1975 recursive calls
30 db block gets
4167 consistent gets
1786 physical reads
5440 redo size
754 bytes sent via SQL*Net to client
608 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
131 sorts (memory)
0 sorts (disk)
10 rows processed

SQL>

The real-time MV was used for query rewrite, and the query returned the latest, real-time data:

SQL> select y as y_new_parse1, count(*) from t2
2 group by y;

Y_NEW_PARSE1 COUNT(*)
------------ ----------
6 100000
4 100000
5 100000
8 100000
3 100000
7 100000
9 100000
1 100333
2 100333
0 100333

10 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 542978159

------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                |    12 |   312 |    22  (14)| 00:00:01 |
|   1 |  VIEW                               |                |    12 |   312 |    22  (14)| 00:00:01 |
|   2 |   UNION-ALL                         |                |       |       |            |          |
|*  3 |    VIEW                             | VW_FOJ_0       |    10 |   290 |     9  (12)| 00:00:01 |
|*  4 |     HASH JOIN FULL OUTER            |                |    10 |   240 |     9  (12)| 00:00:01 |
|   5 |      VIEW                           |                |     1 |     7 |     6  (17)| 00:00:01 |
|   6 |       HASH GROUP BY                 |                |     1 |    22 |     6  (17)| 00:00:01 |
|*  7 |        TABLE ACCESS FULL            | MLOG$_T2       |   999 | 21978 |     5   (0)| 00:00:01 |
|   8 |      VIEW                           |                |    10 |   170 |     3   (0)| 00:00:01 |
|   9 |       MAT_VIEW ACCESS FULL          | MV_NEW         |    10 |    60 |     3   (0)| 00:00:01 |
|  10 |    VIEW                             |                |     2 |    52 |    13  (16)| 00:00:01 |
|  11 |     UNION-ALL                       |                |       |       |            |          |
|* 12 |      FILTER                         |                |       |       |            |          |
|  13 |       NESTED LOOPS OUTER            |                |     1 |    32 |     6  (17)| 00:00:01 |
|  14 |        VIEW                         |                |     1 |    26 |     6  (17)| 00:00:01 |
|* 15 |         FILTER                      |                |       |       |            |          |
|  16 |          HASH GROUP BY              |                |     1 |    22 |     6  (17)| 00:00:01 |
|* 17 |           TABLE ACCESS FULL         | MLOG$_T2       |   999 | 21978 |     5   (0)| 00:00:01 |
|* 18 |        INDEX UNIQUE SCAN            | I_SNAP$_MV_NEW |     1 |     6 |     0   (0)| 00:00:01 |
|  19 |       NESTED LOOPS                  |                |     1 |    35 |     7  (15)| 00:00:01 |
|  20 |        VIEW                         |                |     1 |    29 |     6  (17)| 00:00:01 |
|  21 |         HASH GROUP BY               |                |     1 |    22 |     6  (17)| 00:00:01 |
|* 22 |          TABLE ACCESS FULL          | MLOG$_T2       |   999 | 21978 |     5   (0)| 00:00:01 |
|* 23 |       MAT_VIEW ACCESS BY INDEX ROWID| MV_NEW         |     1 |     6 |     1   (0)| 00:00:01 |
|* 24 |        INDEX UNIQUE SCAN            | I_SNAP$_MV_NEW |     1 |       |     0   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("AV$0"."OJ_MARK" IS NULL)
   4 - access(SYS_OP_MAP_NONNULL("SNA$0"."Y")=SYS_OP_MAP_NONNULL("AV$0"."GB0"))
   7 - filter("MAS$"."SNAPTIME$$">TO_DATE(' 2017-07-12 14:35:01', 'syyyy-mm-dd hh24:mi:ss'))
  12 - filter(CASE WHEN ROWID IS NOT NULL THEN 1 ELSE NULL END IS NULL)
  15 - filter(SUM(1)>0)
  17 - filter("MAS$"."SNAPTIME$$">TO_DATE(' 2017-07-12 14:35:01', 'syyyy-mm-dd hh24:mi:ss'))
  18 - access("MV_NEW"."SYS_NC00003$"(+)=SYS_OP_MAP_NONNULL("AV$0"."GB0"))
  22 - filter("MAS$"."SNAPTIME$$">TO_DATE(' 2017-07-12 14:35:01', 'syyyy-mm-dd hh24:mi:ss'))
  23 - filter("MV_NEW"."C1"+"AV$0"."D0">0)
  24 - access(SYS_OP_MAP_NONNULL("Y")=SYS_OP_MAP_NONNULL("AV$0"."GB0"))

Note
-----
– dynamic statistics used: dynamic sampling (level=2)
– this is an adaptive plan

Statistics
----------------------------------------------------------
906 recursive calls
64 db block gets
1232 consistent gets
14 physical reads
10548 redo size
744 bytes sent via SQL*Net to client
608 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
64 sorts (memory)
0 sorts (disk)
10 rows processed

The comparison between the two plans is quite clear.

A real-time MV takes the existing, already stale materialized view, combines it with the MV log, and computes the delta on the fly to return real-time data. You get up-to-date results while avoiding frequent MV refreshes.
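As a rough sketch of how this is used day to day (the FRESH_MV hint is a 12.2 feature; treat the exact statements below as illustrative):

```sql
-- Querying the MV directly returns the stored (possibly stale) rows:
SELECT y, c1 FROM mv_new;

-- With the FRESH_MV hint, Oracle applies the MV log delta on the fly,
-- so the MV itself answers with real-time data:
SELECT /*+ FRESH_MV */ y, c1 FROM mv_new;

-- Once the MLOG$ backlog grows large, the on-query computation gets more
-- expensive; an occasional conventional fast refresh keeps it cheap:
EXEC DBMS_MVIEW.REFRESH('MV_NEW', method => 'F');
```

Note how the rewritten plan above grows with the MLOG$_T2 row count, which is why an occasional refresh is still worthwhile.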

Applications of real-time materialized views in 12c and later

On February 15, 2018, Oracle published a new alert on MOS, "Recommended patches and actions for Oracle databases versions 12.1.0.1, 11.2.0.3 and earlier - before June 2019 (Doc ID 2361478.1)", strongly recommending that all DBAs patch Oracle databases on versions 11.1.0.7, 11.2.0.3 and 12.1.0.1 to the patchset/PSU levels mentioned below before June 2019, to address potential future interoperability problems with database links.

Oracle only calls this a potential problem; the key point is that on June 23, 2019 the default SCN compatibility of higher-version databases (11.2.0.4/12.1.0.2/12.2 and above) will change automatically.

2.1 SCN basics

The System Change Number (SCN) is a logical, internal timestamp used by Oracle Database. SCNs order the events that occur within the database, which is required to satisfy the ACID properties of transactions.

 

The database uses SCNs to query and track changes. For example, if a transaction updates a row, the database records the SCN at which the update occurred. Other modifications in the same transaction typically carry the same SCN. When the transaction commits, the database records an SCN for the commit. The SCN increases with each committed transaction.

 

 

2.2 SCN limit

SCNs increase monotonically, and the number of SCNs an Oracle database can use has a very large upper bound, known as the Reasonable SCN Limit (RSL). It caps the SCN so that it cannot grow without bound and reach the absolute maximum SCN. The value is computed roughly as: (current time - January 1, 1988) * 24 * 3600 * 16384, i.e. 16K per second (this rate corresponds to SCN compatibility version 1).

Because this upper limit exists, it is important that no Oracle database ever runs out of available SCNs. Oracle Database uses this time-based rationing system to ensure that this cannot happen.

Rationing SCNs over time in this way allows any Oracle database running 12.2.0.1 or earlier more than 500 years of data processing. From 12.2.0.1 onward, with database compatibility set to 12.2, Oracle allows nearly 3 million years of data processing even if SCNs are consumed at 96K per second (compatibility version 3).
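The RSL formula above can be evaluated directly from SQL. This is only a sketch of the approximation given in the text, using the 16384-per-second rate of compatibility version 1:

```sql
-- Approximate Reasonable SCN Limit at compatibility version 1:
-- seconds elapsed since 1988-01-01 times 16K SCNs per second.
SELECT ROUND((SYSDATE - DATE '1988-01-01') * 24 * 3600 * 16384) AS approx_rsl
FROM dual;
```

Comparing this value with the database's current SCN gives a feel for how much headroom remains.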

 

2.3 SCN headroom

The difference between the current SCN in use and the "not to exceed" upper limit is called the SCN headroom.

 

However, Oracle has determined that some software bugs could cause a database to attempt to exceed the current maximum SCN value (or go beyond the guaranteed range).

 

Generally, if the database does attempt to exceed the current maximum SCN, the transaction that triggered the event is cancelled by the database and the application sees an error. One second later the limit increases, so the application usually continues with only a minor hiccup in processing. In some very rare cases, however, the database does need to shut down to preserve its integrity. In no case is data lost or corrupted.

 

2.4 SCN compatibility

The SCN compatibility level caps both the SCN growth rate and the SCN RSL.

 

2.5 Auto-rollover 

Auto-rollover is a job-like scheduled task that changes the SCN compatibility level at a predefined time.

If auto-rollover is disabled, the SCN compatibility level is not adjusted automatically on 2019-06-23 and the original limit is kept. With auto-rollover disabled, the SCN compatibility can still be changed manually (provided the patch has been applied).

 

 


3 How this automatic SCN compatibility change is triggered

Higher database versions introduced the SCN compatibility setting, and the feature was built with time-based triggers.

In the Oracle database kernel, an auto-rollover mechanism was introduced.

In other words, Oracle set a trigger time for each SCN compatibility level. As time passes, the levels roll over automatically, and users end up on a new SCN compatibility level without being aware of it.

After June 23, 2019, once a higher-version database (as defined above) exists in the environment, its higher permitted SCN rate means it may be running at a comparatively high SCN, while unpatched databases remain at a lower SCN level. Because of how database links work (both ends must synchronize their SCNs), if synchronizing would push a database beyond its permitted SCN rate or its current maximum SCN limit, the connection cannot be established: the dblink is rejected and the business is interrupted.

 

 


4 Checking the SCN compatibility level and the rollover date

Databases earlier than 11.2.0.3.9 that have not applied the dblink patch do not have the DBMS_SCN package.

 

On 11g, connect to the database with sqlplus and run the following statements:

set serveroutput on
declare
  rsl              number;
  headroom_in_scn  number;
  headroom_in_sec  number;
  cur_scn_compat   number;
  next_compat      number;
  max_scn_compat   number;
  auto_rollover_ts date;
  is_enabled       boolean;
begin
  dbms_scn.getcurrentscnparams(rsl, headroom_in_scn, headroom_in_sec, cur_scn_compat, max_scn_compat);
  dbms_scn.getscnautorolloverparams(auto_rollover_ts, next_compat, is_enabled);
  dbms_output.put_line('RSL='||rsl);
  dbms_output.put_line('headroom_in_scn='||headroom_in_scn);
  dbms_output.put_line('headroom_in_sec='||headroom_in_sec);
  dbms_output.put_line('CUR_SCN_COMPAT='||cur_scn_compat);
  dbms_output.put_line('NEXT_COMPAT='||next_compat);
  dbms_output.put_line('MAX_SCN_COMPAT='||max_scn_compat);
  dbms_output.put_line('auto_rollover_ts='||to_char(auto_rollover_ts,'YYYY-MM-DD'));
  if (is_enabled) then
    dbms_output.put_line('Auto_rollover is enabled!');
  else
    dbms_output.put_line('Auto_rollover is disabled!');
  end if;
end;
/

Sample output:

 

Here auto-rollover is enabled and will automatically change the SCN compatibility level to 3 on June 23, 2019.
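To relate the reported headroom to the SCN actually in use, you can also check the current SCN with standard views and functions (shown here as a sketch):

```sql
-- Current SCN of the database:
SELECT current_scn FROM v$database;

-- The same value via DBMS_FLASHBACK:
SELECT dbms_flashback.get_system_change_number FROM dual;
```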

5 How to decide whether to apply the patches and adjust the policy

5.1 If the following condition holds, you need not worry about the automatic SCN compatibility change or the related patches

All databases that exchange data over database links are either all on lower versions (earlier than 11.1.0.7/11.2.0.3/12.1.0.1) or all on higher versions (11.2.0.4/12.1.0.2/12.2 and above), and no future changes in versions or cross-version dblink data transfer are expected.

 

5.2 For databases on versions 11.1.0.7, 11.2.0.3 and 12.1.0.1

 

These patches raise the database's maximum permitted SCN rate from 16K/s to 96K/s, taking effect by default on June 24, 2019. The higher SCN rate lets the database sustain transaction rates many times higher than earlier versions.

 

If the patch is installed before June 23, 2019, you can disable the auto-rollover feature to prevent the SCN compatibility from changing automatically to 3. The command is:

Connect to the database with sqlplus:

exec dbms_scn.DisableAutoRollover;

For higher-version databases (as defined above) installed after June 23, 2019, set the SCN compatibility to 1. The command is:

Connect to the database with sqlplus:

ALTER DATABASE SET SCN COMPATIBILITY 1;

 

5.3 For database versions Oracle does not mention above, such as 10.2 or older

Fully assess any future scenarios in which the business might transfer data over database links, and consider upgrading the database, or replacing the dblink with JDBC or another data transfer mechanism.

For 10g databases, install the 10.2.0.5.171017 PSU together with patch 14121009.

 

 

5.4 After installing the patches, record SCN anomalies in the alert log

We recommend tuning the SCN headroom alert threshold so that dblink SCN propagation shows up in the alert log as much as possible. Our current suggestion is to set _external_scn_logging_threshold_seconds to around 600 seconds, so that low SCN headroom caused by dblink propagation raises an alert-log warning.

alter system set "_external_scn_logging_threshold_seconds"=600 scope=spfile;

 

Reference: "Recommended patches and actions for Oracle databases versions 12.1.0.1, 11.2.0.3 and earlier - before June 2019 (Doc ID 2361478.1)"

Recommendations for handling the SCN compatibility change in higher Oracle versions

Disabling real-time statistics gathering

Real-time Statistics

This Oracle Database 19c new feature is available on certain Oracle Database platforms. Check the Oracle Database Licensing Guide for more information.

CHALLENGES TO MAINTAINING ACCURATE OPTIMIZER STATISTICS

As mentioned above, stale statistics can result in sub-optimal SQL execution plans, and keeping them accurate in highly volatile systems can be challenging. High-frequency statistics gathering helps to resolve this, but a more ideal solution would be to maintain statistics as changes to the data in the database are made.

REAL-TIME STATISTICS

Real-time statistics extends statistics gathering to the conventional DML operations INSERT, UPDATE and MERGE. When these DML operations are executed on the data in the database, the most essential optimizer statistics are maintained in real time. This applies to both individual-row and bulk operations.

Real-time statistics augment those collected by the automatic statistics gathering job, high-frequency statistics gathering, or manual gathering via the DBMS_STATS API. An accurate picture of the data in the database is therefore maintained at all times, which results in more optimal SQL execution plans.

Real-time statistics are managed automatically, and no intervention from the database administrator is required. Developers may choose to disable online statistics gathering for individual SQL statements using the NO_GATHER_OPTIMIZER_STATISTICS hint.

Example:

SELECT /*+ NO_GATHER_OPTIMIZER_STATISTICS */ …

If you already have a well-established statistics gathering procedure, or if for some other reason you want to disable automatic statistics gathering for your main application schema, consider leaving it on for the dictionary tables. You can do so by changing the AUTOSTATS_TARGET preference from AUTO to ORACLE using the DBMS_STATS.SET_GLOBAL_PREFS procedure.

 

exec dbms_stats.set_global_prefs('autostats_target','oracle')

 

SQL> begin
dbms_stats.set_global_prefs('autostats_target','oracle');
end;
/

参考:
https://docs.oracle.com/en/database/oracle/oracle-database/19/tgsql/optimizer-statistics-concepts.html#GUID-769E609D-0312-43A7-9581-3F3EACF10BA9
<<19c New Feature:Real-Time Statistics (文档 ID 2552657.1)>>

 

We recommend disabling this feature.
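One commonly cited way to disable the feature instance-wide is via the two underscore parameters below. They are undocumented hidden parameters (discussed in the MOS note cited below), so treat this as a sketch and validate any change to hidden parameters with Oracle Support first:

```sql
-- Stop gathering real-time statistics during conventional DML:
ALTER SYSTEM SET "_optimizer_gather_stats_on_conventional_dml" = FALSE;

-- Stop the optimizer from using already-gathered real-time statistics:
ALTER SYSTEM SET "_optimizer_use_stats_on_conventional_dml" = FALSE;
```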

Oracle 19c Real-Time Statistics: disabling real-time statistics

Quoted from: Doc ID 1577660.1

Upgrade compatibility matrix for upgrading to 19c

Minimum versions that can be upgraded directly to Oracle Database 19c

Source               Target
18 (all versions)    19c
12.2.0.1             19c
12.1.0.2             19c
11.2.0.4             19c

Database releases/versions not listed above do not support direct upgrade to 19c.
In that case, first upgrade to an intermediate Oracle Database version, which can then be upgraded directly to 19c.

Upgrade compatibility matrix for upgrading to 18c

Note: for Exadata, database 18.1 has been released; for on-premise databases, 18.3 has been released.

Minimum versions that can be upgraded directly to Oracle Database 18.1

Source                Target
12.2.0.1              18c
12.1.0.1 / 12.1.0.2   18c
11.2.0.3 / 11.2.0.4   18c

Direct upgrade to 18c is not supported from versions other than those listed above.
Therefore, first upgrade to an intermediate Oracle Database version, which can then be upgraded directly to 18c.

Upgrade compatibility matrix for upgrading to 12.2.x

Minimum versions that can be upgraded directly to Oracle 12c Release 2

Source                Target
11.2.0.3 / 11.2.0.4   12.2.x
12.1.0.1 / 12.1.0.2   12.2.x

The following versions require an indirect upgrade:

Source --> Upgrade path --> Target
11.2.0.2 --> 11.2.0.4/12.1.0.2 --> 12.2.x
11.2.0.1 --> 11.2.0.4 --> 12.2.x
11.1.0.7 --> 11.2.0.4/12.1.0.2 --> 12.2.x
11.1.0.6 --> 11.2.0.4 --> 12.2.x
10.2.0.5 --> 11.2.0.4/12.1.0.2 --> 12.2.x
10.2.0.2/10.2.0.3/10.2.0.4 --> 11.2.0.4 --> 12.2.x
10.2.0.1 --> 10.2.0.5 --> 11.2.0.4/12.1.0.2 --> 12.2.x
10.1.0.5 --> 11.2.0.4 --> 12.2.x
10.1.0.4 (or lower) --> 10.2.0.5 --> 11.2.0.4/12.1.0.2 --> 12.2.x
9.2.0.8 --> 11.2.0.4 --> 12.2.x
9.2.0.7 (or lower) --> 9.2.0.8 --> 11.2.0.4 --> 12.2.x

Upgrade compatibility matrix for upgrading to 12.1.x

Minimum versions that can be upgraded directly to Oracle 12c Release 1

Source                 Target
10.2.0.5               12.1.x
11.1.0.7               12.1.x
11.2.0.2 (or higher)   12.1.x

The following versions require an indirect upgrade:

Source --> Upgrade path --> Target
11.2.0.1 --> 11.2.0.2 or higher --> 12.1.x
11.1.0.6 --> 11.1.0.7 or 11.2.0.2 or higher --> 12.1.x
10.2.0.4 (or lower) --> 10.2.0.5 or a later direct-upgrade version --> 12.1.x
10.1.0.5 (or lower) --> 10.2.0.5 or a later direct-upgrade version --> 12.1.x
9.2.0.8 (or lower) --> 9.2.0.8 --> 11.2.0.2 or higher --> 12.1.x

Upgrade compatibility matrix for upgrading to 11.2.x

Minimum versions that can be upgraded directly to Oracle 11g Release 2

Source                 Target
9.2.0.8 (or higher)    11.2.x
10.1.0.5 (or higher)   11.2.x
10.2.0.2 (or higher)   11.2.x
11.1.0.6 (or higher)   11.2.x

The following versions require an indirect upgrade:

Source --> Upgrade path --> Target
7.3.3 (or lower) --> 7.3.4.0 --> 9.2.0.8 --> 11.2.x
8.0.5 (or lower) --> 8.0.6.x --> 9.2.0.8 --> 11.2.x
8.1.7 (or lower) --> 8.1.7.4 --> 10.2.0.4 --> 11.2.x
9.0.1.3 (or lower) --> 9.0.1.4 --> 10.2.0.4 --> 11.2.x
9.2.0.7 (or lower) --> 9.2.0.8 --> 11.2.x

For more information, see the following link:
http://docs.oracle.com/cd/E11882_01/server.112/e23633/preup.htm#UPGRD12358

(The link above redirects to Oracle Technology Network; an OTN username and password are required for access.)

Upgrade compatibility matrix for upgrading to 11.1.x

Minimum versions that can be upgraded directly to Oracle 11g Release 1

Source                 Target
9.2.0.4 (or higher)    11.1.x
10.1.0.2 (or higher)   11.1.x
10.2.0.1 (or higher)   11.1.x

The following versions require an indirect upgrade:

Source --> Upgrade path --> Target
7.3.3 (or lower) --> 7.3.4.0 --> 9.2.0.8 --> 11.1.x
8.0.5 (or lower) --> 8.0.6.x --> 9.2.0.8 --> 11.1.x
8.1.7 (or lower) --> 8.1.7.4 --> 9.2.0.8 --> 11.1.x
9.0.1.3 (or lower) --> 9.0.1.4 --> 9.2.0.8 --> 11.1.x
9.2.0.3 (or lower) --> 9.2.0.4.0 --> 11.1.x

For more information, see the following link:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28300/preup.htm#CEGEIBHC

(The link above redirects to Oracle Technology Network; an OTN username and password are required for access.)

Upgrade compatibility matrix for upgrading to 10.2.x

Minimum versions that can be upgraded directly to 10.2.x

Source                 Target
8.1.7.4 (or higher)    10.2.x
9.0.1.4 (or higher)    10.2.x
9.2.0.4 (or higher)    10.2.x
10.1.0.2 (or higher)   10.2.x

The following versions require an indirect upgrade:

Source --> Upgrade path --> Target
7.3.3 (or lower) --> 7.3.4 --> 8.1.7 --> 8.1.7.4 --> 10.2.x
7.3.4 (or lower) --> 8.1.7 --> 8.1.7.4 --> 10.2.x
8.0.n (or lower) --> 8.1.7 --> 8.1.7.4 --> 10.2.x
8.1.n (or lower) --> 8.1.7 --> 8.1.7.4 --> 10.2.x

For more information, see the following link:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14238/preup.htm#CEGEIBHC

Upgrade compatibility matrix for upgrading to 10.1.x

Minimum versions that can be upgraded directly to 10.1.x

Source               Target
8.0.6 (or higher)    10.1.x
8.1.7 (or higher)    10.1.x
9.0.1 (or higher)    10.1.x
9.2.0 (or higher)    10.1.x

The following versions require an indirect upgrade:

Source --> Upgrade path --> Target
7.3.4 (or lower) --> 8.0.6 --> 10.1.x
8.0.5 (or lower) --> 8.0.6 --> 10.1.x
8.1.6 (or lower) --> 8.1.7 --> 10.1.x

For more information, see the following link:
http://download.oracle.com/docs/cd/B14117_01/server.101/b10763/preup.htm#CEGEIBHC

Upgrade compatibility matrix for upgrading to 9.2.x

Minimum versions that can be upgraded directly to 9.2.x

Source               Target
7.3.4 (or higher)    9.2.x
8.0.6 (or higher)    9.2.x
8.1.7 (or higher)    9.2.x
9.0.1 (or higher)    9.2.x

The following versions require an indirect upgrade:

Source --> Upgrade path --> Target
7.3.3 (or lower) --> 7.3.4 --> 9.2.x
8.0.5 (or lower) --> 8.0.6 --> 9.2.x
8.1.6 (or lower) --> 8.1.7 --> 9.2.x

For more information, see the following link:
http://download.oracle.com/docs/cd/B10501_01/server.920/a96530/migprep.htm#1006863

Downgrades

Downgrade compatibility matrix for downgrading from 19c

Source        Can downgrade to
Non-CDB 19c   18c, 12.2.0.1, 12.1.0.2, 12.1.0.1, 11.2.0.4
PDB/CDB 19c   18c, 12.2.0.1, 12.1.0.2

For more information, see the 19c Downgrade link.

 

Downgrade compatibility matrix for downgrading from 18c

Source        Can downgrade to
Non-CDB 18c   12.2.0.1, 12.1.0.2, 12.1.0.1, 11.2.0.4
PDB/CDB 18c   12.2.0.1, 12.1.0.2

For more information, see the 18c Downgrade link.

Downgrade compatibility matrix for downgrading from 12.2

Source         Can downgrade to
Non-CDB 12.2   12.1.0.2, 12.1.0.1, 11.2.0.4, 11.2.0.3
PDB/CDB 12.2   12.1.0.2

For more information, see the 12.2 Downgrade link.

Downgrade compatibility matrix for downgrading from 12.1.x

Source   Can downgrade to
12.1.x   11.1.0.7
12.1.x   11.2.0.2.0 (or higher)

For more information, see the following link:
http://docs.oracle.com/cd/E16655_01/server.121/e17642/downgrade.htm#i1010267

Note:
You cannot downgrade to 10.2.0.5 because the minimum compatible setting for Oracle Database 12c is 11.0.
You cannot downgrade a database that was upgraded from Oracle Database Express Edition.

Downgrade compatibility matrix for downgrading from 11.2.x

Source   Can downgrade to
11.2.x   10.1.0.5.0 (or higher)
11.2.x   10.2.0.2.0 (or higher)
11.2.x   11.1.0.6.0 (or higher)

For more information, see the following link:
http://docs.oracle.com/cd/E11882_01/server.112/e23633/downgrade.htm#UPGRD00710

Downgrade compatibility matrix for downgrading from 11.1.x

Source   Can downgrade to
11.1.x   10.1.0.5.0 (or higher)
11.1.x   10.2.0.3.0 (or higher)

For more information, see the following link:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28300/downgrade.htm#i1010243

Downgrade compatibility matrix for downgrading from 10.2.x

Source   Can downgrade to
10.2.x   9.2.0.6.0 (or higher)
10.2.x   10.1.0.4.0 (or higher)

For more information, see the following link:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14238/downgrade.htm#i1010243

Downgrade compatibility matrix for downgrading from 10.1.x

Source   Can downgrade to
10.1.x   9.2.0.4.0 (or higher)

For more information, see the following link:
http://download.oracle.com/docs/cd/B14117_01/server.101/b10763/downgrade.htm#i1010243

Downgrade compatibility matrix for downgrading from 9.2.x

Source   Can downgrade to
9.2.x    9.0.1.3.0 (or higher)
9.2.x    8.1.7.3.0 (or higher)

For more information, see the following link:
http://download.oracle.com/docs/cd/B10501_01/server.920/a96530/downgrad.htm#1008177

Notes
1: If you applied a patch set after the upgrade, you cannot downgrade.

For example: if you upgraded from 9.2.0.6 to 10.2.0.1 and then applied the 10.2.0.3 patch set, you cannot downgrade from 10.2.0.3 to 9.2.0.6. (Only if you upgraded 9.2.0.6 directly to 10.2.0.3 can you downgrade from 10.2.0.3 to 9.2.0.6.)

2: You can only downgrade to the same version your database was upgraded from.
For example: you can downgrade from 11.1.0.6 to 10.2.0.3 only if the database was upgraded from 10.2.0.3 to 11.1.0.6. If you created a fresh 11.1.0.6 database, you cannot downgrade it to 10.2.0.3. Likewise, if you upgraded directly from 9.2.0.8 to 11.1.0.6, you cannot downgrade to 10.2.0.3, because your database did not come from 10.2.0.3.

3: If the COMPATIBLE parameter is set to a version higher than the pre-upgrade database version, you cannot downgrade.
For example: if you upgraded from 10.2.0.3 to 11.1.0.6 and then set COMPATIBLE to 11.1.0.6, you cannot downgrade. Likewise, if you upgraded directly from 9.2.0.8 to 11.1.0.6 and then set COMPATIBLE to 10.2.0.1, you cannot downgrade.

4: You cannot upgrade the database binaries of one release in place to another release.
For example: you cannot upgrade the 10.1.0.2.0 binaries to 10.2.0.1.0. Each release must be installed in its own separate Oracle home; you cannot complete an upgrade by installing 10.2.0.x into the Oracle home of any other release.

Version differences when upgrading to Oracle 19c

1 Error 1: ocrconfig.bin "does not exists or is not readable"
1.1 Symptom
In a 19c RAC environment, while upgrading the RU from 19.3 to 19.6 following the 19.6 RU readme, opatchauto reported the following error:

[root@luda software]# export PATH=$PATH:/u01/app/19.3.0/grid/OPatch
[root@luda software]# opatchauto apply /u01/software/30501910

OPatchauto session is initiated at Thu Jan 29 21:18:26 2020

System initialization log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-03-12_09-18-36PM.log.

Session log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/opatchauto2020-03-12_09-21-01PM.log
The id for this session is 5Z4B

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.3.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19.3.0/db_1
Patch applicability verified successfully on home /u01/app/oracle/product/19.3.0/db_1

Patch applicability verified successfully on home /u01/app/19.3.0/grid

Verifying SQL patch applicability on home /u01/app/oracle/product/19.3.0/db_1
SQL patch applicability verified successfully on home /u01/app/oracle/product/19.3.0/db_1

Preparing to bring down database service on home /u01/app/oracle/product/19.3.0/db_1
Successfully prepared home /u01/app/oracle/product/19.3.0/db_1 to bring down database service

Bringing down CRS service on home /u01/app/19.3.0/grid
CRS service brought down successfully on home /u01/app/19.3.0/grid

Performing prepatch operation on home /u01/app/oracle/product/19.3.0/db_1
Perpatch operation completed successfully on home /u01/app/oracle/product/19.3.0/db_1

Start applying binary patch on home /u01/app/oracle/product/19.3.0/db_1
Binary patch applied successfully on home /u01/app/oracle/product/19.3.0/db_1

Performing postpatch operation on home /u01/app/oracle/product/19.3.0/db_1
Postpatch operation completed successfully on home /u01/app/oracle/product/19.3.0/db_1

Start applying binary patch on home /u01/app/19.3.0/grid
Failed while applying binary patches on home /u01/app/19.3.0/grid

Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : rac1->/u01/app/19.3.0/grid Type[crs]
Details: [
—————————Patching Failed———————————
Command execution failed during patching in home: /u01/app/19.3.0/grid, host: rac1.
Command failed: /u01/app/19.3.0/grid/OPatch/opatchauto apply /u01/software/30501910 -oh /u01/app/19.3.0/grid -target_type cluster -binary -invPtrLoc /u01/app/19.3.0/grid/oraInst.loc -jre /u01/app/19.3.0/grid/OPatch/jre -persistresult /u01/app/19.3.0/grid/OPatch/auto/dbsessioninfo/sessionresult_rac1_crs.ser -analyzedresult /u01/app/19.3.0/grid/OPatch/auto/dbsessioninfo/sessionresult_analyze_rac1_crs.ser
Command failure output:
==Following patches FAILED in apply:

Patch: /u01/software/30501910/30489227
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-12_21-45-54PM_1.log
Reason: Failed during Analysis: CheckNApplyReport Failed, [ Prerequisite Status: FAILED, Prerequisite output:
The details are:

Prerequisite check “CheckApplicable” failed.]

After fixing the cause of failure Run opatchauto resume

]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Thu Jan 29 21:46:04 2020
Time taken to complete the session 27 minutes, 39 seconds

opatchauto failed with error code 42
[root@luda software]#
Check the error log:

[Jan 29, 2020 10:36:54 PM] [INFO] Prereq checkPatchApplicableOnCurrentPlatform Passed for patch : 30489227
[Jan 29, 2020 10:36:57 PM] [WARNING] Action file /u01/app/19.3.0/grid/jlib/srvmasm.jar is in the jar list, OOP should be lanched
[Jan 29, 2020 10:36:58 PM] [INFO] Patch 30489227:
Copy Action: Source File "/u01/software/30501910/30489227/files/bin/ocrcheck.bin" does not exists or is not readable
'oracle.has.crs, 19.0.0.0.0': Cannot copy file from 'ocrcheck.bin' to '/u01/app/19.3.0/grid/bin/ocrcheck.bin'
Copy Action: Source File "/u01/software/30501910/30489227/files/bin/ocrconfig.bin" does not exists or is not readable
'oracle.has.crs, 19.0.0.0.0': Cannot copy file from 'ocrconfig.bin' to '/u01/app/19.3.0/grid/bin/ocrconfig.bin'
[Jan 29, 2020 10:36:58 PM] [INFO] Prerequisite check "CheckApplicable" failed.
The details are:

Patch 30489227:
Copy Action: Source File "/u01/software/30501910/30489227/files/bin/ocrcheck.bin" does not exists or is not readable
'oracle.has.crs, 19.0.0.0.0': Cannot copy file from 'ocrcheck.bin' to '/u01/app/19.3.0/grid/bin/ocrcheck.bin'
Copy Action: Source File "/u01/software/30501910/30489227/files/bin/ocrconfig.bin" does not exists or is not readable
'oracle.has.crs, 19.0.0.0.0': Cannot copy file from 'ocrconfig.bin' to '/u01/app/19.3.0/grid/bin/ocrconfig.bin'
[Jan 29, 2020 10:36:58 PM] [SEVERE] OUI-67073:UtilSession failed:
Prerequisite check "CheckApplicable" failed.
[Jan 29, 2020 10:36:58 PM] [INFO] Finishing UtilSession at Thu Jan 29 22:36:58 CST 2020
[Jan 29, 2020 10:36:58 PM] [INFO] Log file location: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-12_22-36-43PM_1.log
[Jan 29, 2020 10:36:58 PM] [INFO] Stack Description: java.lang.RuntimeException:
Prerequisite check "CheckApplicable" failed.
at oracle.opatch.OPatchSessionHelper.runApplyPrereqs(OPatchSessionHelper.java:6548)
at oracle.opatch.opatchutil.NApply.legacy_process(NApply.java:1002)
at oracle.opatch.opatchutil.NApply.legacy_process(NApply.java:370)
at oracle.opatch.opatchutil.NApply.process(NApply.java:352)
at oracle.opatch.opatchutil.OUSession.napply(OUSession.java:1123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at oracle.opatch.UtilSession.process(UtilSession.java:355)
at oracle.opatch.OPatchSession.main(OPatchSession.java:3985)
at oracle.opatch.OPatchSDK.NApply(OPatchSDK.java:1127)
at oracle.opatch.opatchsdk.OPatchTarget.NApplyReport(OPatchTarget.java:3964)
at oracle.opatch.opatchsdk.OPatchTarget.NApplyReportForAllPrereqs(OPatchTarget.java:4013)
at oracle.opatchauto.core.binary.action.AnalyzeReportGenerator.analyzePatch(AnalyzeReportGenerator.java:186)
at oracle.opatchauto.core.binary.action.AnalyzeReportGenerator.execute(AnalyzeReportGenerator.java:148)
at oracle.opatchauto.core.binary.action.LegacyPatchAction.execute(LegacyPatchAction.java:46)
at oracle.opatchauto.core.binary.OPatchAutoBinary.patchWithoutAnalyze(OPatchAutoBinary.java:519)
at oracle.opatchauto.core.binary.OPatchAutoBinary.applyWithoutAnalyze(OPatchAutoBinary.java:406)
at oracle.opatchauto.core.OPatchAutoCore.runOPatchAutoBinary(OPatchAutoCore.java:192)
at oracle.opatchauto.core.OPatchAutoCore.main(OPatchAutoCore.java:75)
Caused by: java.lang.RuntimeException:
Prerequisite check "CheckApplicable" failed.
… 21 more
Caused by: oracle.opatch.PrereqFailedException:
Prerequisite check "CheckApplicable" failed.
… 21 more
[Jan 29, 2020 10:36:58 PM] [INFO] EXITING METHOD: NApplyReport(OPatchPatch[] patches,OPatchNApplyOptions options)
[grid@luda OPatch]$
1.2 Solution
The error here is:
"/u01/software/30501910/30489227/files/bin/ocrconfig.bin" does not exists or is not readable
This looks like a permissions problem, so fix the ownership of the RU patch manually:

[root@luda software]# chown grid:oinstall 30501910 -R
Running it again produced a different error.

2 Error 2: Copy failed from '…/files/bin/cssdagent' to '…/bin/cssdagent'…
2.1 Symptom
Running opatchauto again, it still failed, as follows:

2020-03-13 08:36:48,374 INFO [313] com.oracle.glcm.patch.auto.db.integration.controller.action.OPatchAutoBinaryAction – Opatchcore binary error message=
2020-03-13 08:36:48,375 INFO [313] com.oracle.glcm.patch.auto.db.integration.controller.action.OPatchAutoBinaryAction – Reading session result from /u01/app/19.3.0/grid/opatchautocfg/db/sessioninfo/sessionresult_rac1_crs.ser
2020-03-13 08:36:48,380 INFO [313] com.oracle.cie.common.util.reporting.CommonReporter – Reporting console output : Message{id=’null’, message=’Failed while applying binary patches on home /u01/app/19.3.0/grid
‘}
2020-03-13 08:36:48,381 SEVERE [311] com.oracle.glcm.patch.auto.action.PatchActionExecutor – Failed to execute patch action [com.oracle.glcm.patch.auto.db.integration.controller.action.OPatchAutoBinaryAction] on patch target [rac1->/u01/app/19.3.0/grid Type[crs]].
2020-03-13 08:36:48,382 INFO [311] com.oracle.cie.common.util.reporting.CommonReporter – Reporting console output : Message{id=’null’, message=’Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : rac1->/u01/app/19.3.0/grid Type[crs]
Details: [
—————————Patching Failed———————————
Command execution failed during patching in home: /u01/app/19.3.0/grid, host: rac1.
Command failed: /u01/app/19.3.0/grid/OPatch/opatchauto apply /u01/software/30501910 -oh /u01/app/19.3.0/grid -target_type cluster -binary -invPtrLoc /u01/app/19.3.0/grid/oraInst.loc -jre /u01/app/19.3.0/grid/OPatch/jre -persistresult /u01/app/19.3.0/grid/opatchautocfg/db/sessioninfo/sessionresult_rac1_crs.ser -analyzedresult /u01/app/19.3.0/grid/opatchautocfg/db/sessioninfo/sessionresult_analyze_rac1_crs.ser
Command failure output:
==Following patches FAILED in apply:

Patch: /u01/software/30501910/30489227
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-13_08-31-51AM_1.log
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: ApplySession failed in system modification phase... 'ApplySession::apply failed: Copy failed from '/u01/software/30501910/30489227/files/bin/crsd.bin' to '/u01/app/19.3.0/grid/bin/crsd.bin'...
Copy failed from '/u01/software/30501910/30489227/files/bin/cssdagent' to '/u01/app/19.3.0/grid/bin/cssdagent'...
Copy failed from '/u01/software/30501910/30489227/files/bin/cssdmonitor' to '/u01/app/19.3.0/grid/bin/cssdmonitor'...
Copy fa ...

After fixing the cause of failure Run opatchauto resume

]’}
2020-03-13 08:36:48,407 SEVERE [41] com.oracle.cie.wizard.internal.engine.WizardControllerEngine – Wizard error cause
com.oracle.cie.wizard.tasks.TaskExecutionException: OPATCHAUTO-68128: Patch action execution failed.
OPATCHAUTO-68128: Failed to execute patch actions for goal offline:binary-patching
OPATCHAUTO-68128: Check the log for more information.
at com.oracle.glcm.patch.auto.wizard.silent.tasks.PatchActionTask.execute(PatchActionTask.java:106)
at com.oracle.cie.wizard.internal.cont.SilentTaskContainer$TaskRunner.run(SilentTaskContainer.java:102)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.oracle.glcm.patch.auto.OPatchAutoException: OPATCHAUTO-68067: Patch action execution failed.
OPATCHAUTO-68067: Failed to execute patch action [com.oracle.glcm.patch.auto.db.integration.controller.action.OPatchAutoBinaryAction
Patch Target : rac1->/u01/app/19.3.0/grid Type[crs]
Details: [
—————————Patching Failed———————————
Command execution failed during patching in home: /u01/app/19.3.0/grid, host: rac1.
Command failed: /u01/app/19.3.0/grid/OPatch/opatchauto apply /u01/software/30501910 -oh /u01/app/19.3.0/grid -target_type cluster -binary -invPtrLoc /u01/app/19.3.0/grid/oraInst.loc -jre /u01/app/19.3.0/grid/OPatch/jre -persistresult /u01/app/19.3.0/grid/opatchautocfg/db/sessioninfo/sessionresult_rac1_crs.ser -analyzedresult /u01/app/19.3.0/grid/opatchautocfg/db/sessioninfo/sessionresult_analyze_rac1_crs.ser
Command failure output:
==Following patches FAILED in apply:

Patch: /u01/software/30501910/30489227
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-13_08-31-51AM_1.log
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: ApplySession failed in system modification phase... 'ApplySession::apply failed: Copy failed from '/u01/software/30501910/30489227/files/bin/crsd.bin' to '/u01/app/19.3.0/grid/bin/crsd.bin'...
Copy failed from '/u01/software/30501910/30489227/files/bin/cssdagent' to '/u01/app/19.3.0/grid/bin/cssdagent'...
Copy failed from '/u01/software/30501910/30489227/files/bin/cssdmonitor' to '/u01/app/19.3.0/grid/bin/cssdmonitor'...
Copy fa ...

After fixing the cause of failure Run opatchauto resume

]]. Failures:
OPATCHAUTO-68067: Check the details to determine the cause of the failure.
at com.oracle.glcm.patch.auto.action.PatchActionExecutor.execute(PatchActionExecutor.java:172)
at com.oracle.glcm.patch.auto.wizard.silent.tasks.PatchActionTask.execute(PatchActionTask.java:102)
… 2 more
2020-03-13 08:36:48,516 INFO [1] com.oracle.glcm.patch.auto.db.integration.model.productsupport.DBBaseProductSupport – Space available after session: 44935 MB
2020-03-13 08:36:49,875 SEVERE [1] com.oracle.glcm.patch.auto.OPatchAuto – OPatchAuto failed.
com.oracle.glcm.patch.auto.OPatchAutoException: OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
at com.oracle.glcm.patch.auto.OrchestrationEngineImpl.orchestrate(OrchestrationEngineImpl.java:40)
at com.oracle.glcm.patch.auto.OPatchAuto.orchestrate(OPatchAuto.java:858)
at com.oracle.glcm.patch.auto.OPatchAuto.orchestrate(OPatchAuto.java:398)
at com.oracle.glcm.patch.auto.OPatchAuto.orchestrate(OPatchAuto.java:344)
at com.oracle.glcm.patch.auto.OPatchAuto.main(OPatchAuto.java:212)
2020-03-13 08:36:49,875 INFO [1] com.oracle.cie.common.util.reporting.CommonReporter – Reporting console output : Message{id=’null’, message=’OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.’}
2020-03-13 08:36:49,875 INFO [1] com.oracle.cie.common.util.reporting.CommonReporter – Reporting console output : Message{id=’null’, message=’OPatchAuto failed.’}
This time the error changed:

Copy failed from ‘/u01/software/30501910/30489227/files/bin/cssdagent’ to ‘/u01/app/19.3.0/grid/bin/cssdagent’…

2.2 Solution
A search on MOS turned up a note describing a very similar error:
opatch report “ERROR: Prereq checkApplicable failed.” when Applying Grid Infrastructure patch (Doc ID 1417268.1)

The MOS note lists several possible causes for this error; the one closest to my situation is this:

D. The patch is not unzipped as grid user, often it is unzipped as root user
ls -l will show the files are owned by root user.
The solution is to unzip the patch as grid user into an empty directory outside of GRID_HOME, then retry the patch apply.
Although my RU patch was not under GRID_HOME, it had also been unzipped as root. I removed the original extraction, unzipped the patch again as the grid user, and retried the apply, only to hit the same error:
[grid@luda tmp]$ unzip -d /tmp p30501910_190000_Linux-x86-64-GI.zip
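A minimal sketch of the re-extraction check, using the paths from the log above (adjust to your own staging area):

```shell
# Remove the root-owned extraction first
rm -rf /tmp/30501910
# Unzip as the grid user into an empty directory outside GRID_HOME
su - grid -c 'unzip -q -d /tmp /tmp/p30501910_190000_Linux-x86-64-GI.zip'
# Verify: any file NOT owned by grid indicates a bad extraction
find /tmp/30501910 ! -user grid -ls    # should print nothing
```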

[Mar 13, 2020 9:18:08 AM] [INFO] Copying file to “/u01/app/19.3.0/grid/srvm/lib/sprraw.o”
[Mar 13, 2020 9:18:08 AM] [INFO] The following actions have failed:
[Mar 13, 2020 9:18:08 AM] [WARNING] OUI-67124:Copy failed from ‘/tmp/30501910/30489227/files/bin/crsd.bin’ to ‘/u01/app/19.3.0/grid/bin/crsd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/cssdagent’ to ‘/u01/app/19.3.0/grid/bin/cssdagent’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/cssdmonitor’ to ‘/u01/app/19.3.0/grid/bin/cssdmonitor’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/evmd.bin’ to ‘/u01/app/19.3.0/grid/bin/evmd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/evmlogger.bin’ to ‘/u01/app/19.3.0/grid/bin/evmlogger.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/gipcd.bin’ to ‘/u01/app/19.3.0/grid/bin/gipcd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/gpnpd.bin’ to ‘/u01/app/19.3.0/grid/bin/gpnpd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/mdnsd.bin’ to ‘/u01/app/19.3.0/grid/bin/mdnsd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/ocssd.bin’ to ‘/u01/app/19.3.0/grid/bin/ocssd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/octssd.bin’ to ‘/u01/app/19.3.0/grid/bin/octssd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/ohasd.bin’ to ‘/u01/app/19.3.0/grid/bin/ohasd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/oraagent.bin’ to ‘/u01/app/19.3.0/grid/bin/oraagent.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/orarootagent.bin’ to ‘/u01/app/19.3.0/grid/bin/orarootagent.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/osysmond.bin’ to ‘/u01/app/19.3.0/grid/bin/osysmond.bin’…
[Mar 13, 2020 9:18:08 AM] [INFO] Do you want to proceed? [y|n]
[Mar 13, 2020 9:18:08 AM] [INFO] N (auto-answered by -silent)
[Mar 13, 2020 9:18:08 AM] [INFO] User Responded with: N
[Mar 13, 2020 9:18:08 AM] [WARNING] OUI-67124:ApplySession failed in system modification phase… ‘ApplySession::apply failed: Copy failed from ‘/tmp/30501910/30489227/f
iles/bin/crsd.bin’ to ‘/u01/app/19.3.0/grid/bin/crsd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/cssdagent’ to ‘/u01/app/19.3.0/grid/bin/cssdagent’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/cssdmonitor’ to ‘/u01/app/19.3.0/grid/bin/cssdmonitor’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/evmd.bin’ to ‘/u01/app/19.3.0/grid/bin/evmd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/evmlogger.bin’ to ‘/u01/app/19.3.0/grid/bin/evmlogger.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/gipcd.bin’ to ‘/u01/app/19.3.0/grid/bin/gipcd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/gpnpd.bin’ to ‘/u01/app/19.3.0/grid/bin/gpnpd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/mdnsd.bin’ to ‘/u01/app/19.3.0/grid/bin/mdnsd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/ocssd.bin’ to ‘/u01/app/19.3.0/grid/bin/ocssd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/octssd.bin’ to ‘/u01/app/19.3.0/grid/bin/octssd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/ohasd.bin’ to ‘/u01/app/19.3.0/grid/bin/ohasd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/oraagent.bin’ to ‘/u01/app/19.3.0/grid/bin/oraagent.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/orarootagent.bin’ to ‘/u01/app/19.3.0/grid/bin/orarootagent.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/osysmond.bin’ to ‘/u01/app/19.3.0/grid/bin/osysmond.bin’…

[Mar 13, 2020 9:18:08 AM] [INFO] Restoring “/u01/app/19.3.0/grid” to the state prior to running NApply…
[Mar 13, 2020 9:18:08 AM] [INFO] Restoring files: copy recurse from /u01/app/19.3.0/grid/.patch_storage/NApply/2020-03-13_09-15-07AM/backup to /u01/app/19.3.0/grid

#### Stack trace of processes holding locks ####

Time: 2020-03-13_09-14-13AM
Command: oracle/opatchauto/core/OPatchAutoCore apply /tmp/30501910 -oh /u01/app/19.3.0/grid -target_type cluster -binary -invPtrLoc /u01/app/19.3.0/grid/oraInst.loc -per
sistresult /u01/app/19.3.0/grid/opatchautocfg/db/sessioninfo/sessionresult_rac1_crs.ser -analyzedresult /u01/app/19.3.0/grid/opatchautocfg/db/sessioninfo/sessionresult_a
nalyze_rac1_crs.ser -customLogDir /u01/app/19.3.0/grid/cfgtoollogs
Lock File Name: /u01/app/oraInventory/locks/_u01_app_19.3.0_grid_writer.lock
StackTrace:
———–
java.lang.Throwable
at oracle.sysman.oii.oiit.OiitLockHeartbeat.writeStackTrace(OiitLockHeartbeat.java:193)
at oracle.sysman.oii.oiit.OiitLockHeartbeat.(OiitLockHeartbeat.java:173)
at oracle.sysman.oii.oiit.OiitTargetLocker.getWriterLock(OiitTargetLocker.java:346)
at oracle.sysman.oii.oiit.OiitTargetLocker.getWriterLock(OiitTargetLocker.java:238)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.acquireLocks(OiicStandardInventorySession.java:564)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initAreaControl(OiicStandardInventorySession.java:359)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:332)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:294)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:243)
at oracle.sysman.oui.patch.impl.HomeOperationsImpl.initialize(HomeOperationsImpl.java:107)
at oracle.glcm.opatch.common.api.install.HomeOperationsShell.initialize(HomeOperationsShell.java:117)
at oracle.opatch.ipm.IPMRWServices.addPatchCUP(IPMRWServices.java:134)
at oracle.opatch.ipm.IPMRWServices.add(IPMRWServices.java:146)
at oracle.opatch.ApplySession.apply(ApplySession.java:899)
at oracle.opatch.ApplySession.processLocal(ApplySession.java:4098)
at oracle.opatch.ApplySession.process(ApplySession.java:5080)
at oracle.opatch.ApplySession.process(ApplySession.java:4942)
at oracle.opatch.OPatchACL.processApply(OPatchACL.java:310)
at oracle.opatch.opatchutil.NApply.legacy_process(NApply.java:1429)
at oracle.opatch.opatchutil.NApply.legacy_process(NApply.java:370)
at oracle.opatch.opatchutil.NApply.process(NApply.java:352)
at oracle.opatch.opatchutil.OUSession.napply(OUSession.java:1123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at oracle.opatch.UtilSession.process(UtilSession.java:355)
at oracle.opatch.OPatchSession.main(OPatchSession.java:3985)
at oracle.opatch.OPatchSDK.NApply(OPatchSDK.java:1127)
at oracle.opatch.opatchsdk.OPatchTarget.NApply(OPatchTarget.java:4169)
at oracle.opatchauto.core.binary.action.LegacyPatchAction.execute(LegacyPatchAction.java:76)
at oracle.opatchauto.core.binary.OPatchAutoBinary.patchWithoutAnalyze(OPatchAutoBinary.java:519)
at oracle.opatchauto.core.binary.OPatchAutoBinary.applyWithoutAnalyze(OPatchAutoBinary.java:406)
at oracle.opatchauto.core.OPatchAutoCore.runOPatchAutoBinary(OPatchAutoCore.java:192)
at oracle.opatchauto.core.OPatchAutoCore.main(OPatchAutoCore.java:75)
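The stack trace above was captured while opatch held the central-inventory writer lock. If an earlier session died abnormally, that lock can go stale; a hedged check before retrying (lock path taken verbatim from the log):

```shell
# Is any opatch/opatchauto process still alive? ([o] keeps grep from matching itself)
ps -ef | grep -i '[o]patch'
# If nothing is running, the writer lock is stale and may be removed
ls -l /u01/app/oraInventory/locks/
rm -f /u01/app/oraInventory/locks/_u01_app_19.3.0_grid_writer.lock
```

Only remove the lock once you are certain no patching session is active; deleting it under a live session risks corrupting the central inventory.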
Digging further, this may be a bug:
Bug 13575478 : PATCH APLLICABLE/CONFLICT CHECK FAILED WITH ‘OPATCH AUTO’
A. Expected behaviour if GRID_HOME has not been unlocked

If GI home has not been unlocked with “rootcrs.pl -unlock”, checkapplicable will fail as many files are still owned by root user, this is expected behaviour. The solution is to use “opatch auto” or follow the patch readme step-by-step so the GI home gets unlocked first.
The remedy described there conflicts with how opatchauto works, so I ran an analyze first. It reported that a patch was already applied, yet opatch lspatches showed nothing, so I rolled back the 19.6 RU:

[root@luda ~]# /u01/app/19.3.0/grid/OPatch/opatchauto apply /tmp/30501910 -analyze
[root@luda ~]# /u01/app/19.3.0/grid/OPatch/opatchauto rollback /tmp/30501910
Running the apply again produced a different failure, error 3.

3 Error 3: "CheckActiveFilesAndExecutables"
3.1 Symptom
The error message here is quite clear:

Following active executables are used by opatch process :
/u01/app/oracle/product/19.3.0/db_1/lib/libclntsh.so.19.1
/u01/app/oracle/product/19.3.0/db_1/lib/libsqlplus.so
[Mar 13, 2020 3:13:26 PM] [INFO] Prerequisite check “CheckActiveFilesAndExecutables” failed.
The details are:

Following active executables are not used by opatch process :

Following active executables are used by opatch process :
/u01/app/oracle/product/19.3.0/db_1/lib/libclntsh.so.19.1
/u01/app/oracle/product/19.3.0/db_1/lib/libsqlplus.so
[Mar 13, 2020 3:13:26 PM] [SEVERE] OUI-67073:UtilSession failed: Prerequisite check “CheckActiveFilesAndExecutables” failed.
[Mar 13, 2020 3:13:26 PM] [INFO] Finishing UtilSession at Fri Mar 13 15:13:26 CST 2020
[Mar 13, 2020 3:13:26 PM] [INFO] Log file location: /u01/app/oracle/product/19.3.0/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-13_15-09-58PM_1.log
[root@luda ~]#
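One way to identify the sessions holding the flagged libraries, using the paths from the log above (a sketch; fuser and lsof availability varies by install):

```shell
ORACLE_HOME=/u01/app/oracle/product/19.3.0/db_1
# List the PIDs that keep the shared libraries mapped
fuser -v "$ORACLE_HOME/lib/libclntsh.so.19.1" "$ORACLE_HOME/lib/libsqlplus.so"
# lsof shows the same holders with process names
lsof "$ORACLE_HOME/lib/libsqlplus.so"
# Exit those sqlplus sessions (or kill the PIDs) before retrying opatchauto
```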
3.2 Solution
Active sqlplus sessions were holding these libraries open. First run a rollback:
[root@luda ~]# /u01/app/19.3.0/grid/OPatch/opatchauto rollback /tmp/30501910

OPatchauto session is initiated at Fri Mar 13 15:18:58 2020

System initialization log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-03-13_03-19-01PM.log.

Session log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/opatchauto2020-03-13_03-21-09PM.log
The id for this session is U2H9

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.3.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19.3.0/db_1
Patch applicability verified successfully on home /u01/app/19.3.0/grid

Patch applicability verified successfully on home /u01/app/oracle/product/19.3.0/db_1

Verifying SQL patch applicability on home /u01/app/oracle/product/19.3.0/db_1
SQL patch applicability verified successfully on home /u01/app/oracle/product/19.3.0/db_1

Preparing to bring down database service on home /u01/app/oracle/product/19.3.0/db_1
Successfully prepared home /u01/app/oracle/product/19.3.0/db_1 to bring down database service

Performing prepatch operation on home /u01/app/oracle/product/19.3.0/db_1
Prepatch operation completed successfully on home /u01/app/oracle/product/19.3.0/db_1

Start rolling back binary patch on home /u01/app/oracle/product/19.3.0/db_1
Binary patch rolled back successfully on home /u01/app/oracle/product/19.3.0/db_1

Performing postpatch operation on home /u01/app/oracle/product/19.3.0/db_1
Postpatch operation completed successfully on home /u01/app/oracle/product/19.3.0/db_1

Preparing home /u01/app/oracle/product/19.3.0/db_1 after database service restarted
No step execution required………

Trying to roll back SQL patch on home /u01/app/oracle/product/19.3.0/db_1
SQL patch rolled back successfully on home /u01/app/oracle/product/19.3.0/db_1

OPatchAuto successful.

——————————–Summary——————————–

Patching is completed successfully. Please find the summary as follows:

Host:rac1
CRS Home:/u01/app/19.3.0/grid
Version:19.0.0.0.0
Summary:

==Following patches were SKIPPED:

Patch: /tmp/30501910/30489227
Reason: This Patch does not exist in the home, it cannot be rolled back.

Patch: /tmp/30501910/30489632
Reason: This Patch does not exist in the home, it cannot be rolled back.

Patch: /tmp/30501910/30655595
Reason: This Patch does not exist in the home, it cannot be rolled back.

Patch: /tmp/30501910/30557433
Reason: This Patch does not exist in the home, it cannot be rolled back.

Host:rac1
RAC Home:/u01/app/oracle/product/19.3.0/db_1
Version:19.0.0.0.0
Summary:

==Following patches were SKIPPED:

Patch: /tmp/30501910/30489632
Reason: This patch is not applicable to this specified target type – “rac_database”

Patch: /tmp/30501910/30655595
Reason: This patch is not applicable to this specified target type – “rac_database”

Patch: /tmp/30501910/30557433
Reason: Patch /tmp/30501910/30557433 is not applied as part of bundle patch 30501910

==Following patches were SUCCESSFULLY rolled back:

Patch: /tmp/30501910/30489227
Log: /u01/app/oracle/product/19.3.0/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-13_15-23-36PM_1.log

Patching session reported following warning(s):
_________________________________________________

[WARNING] The database instance ‘cndba1’ from ‘/u01/app/oracle/product/19.3.0/db_1′, in host’rac1’ is not running. SQL changes, if any, will not be rolled back.
To roll back. the SQL changes, bring up the database instance and run the command manually from any one node (run as oracle).
Refer to the readme to get the correct steps for applying the sql changes.

OPatchauto session completed at Fri Mar 13 15:24:58 2020
Time taken to complete the session 6 minutes, 1 second
[root@luda ~]#
Applying again still failed with the same error.

4 Error 4: OUI-67073:UtilSession failed: ApplySession failed in system modification phase (do not try this workaround)
4.1 Symptom
This error is the same as error 2. It is in fact the root problem blocking the 19c RU apply, but I could not find a convincing explanation for it on MOS. The approach that does work is to patch GI and DB separately, using the non-rolling method.

————————————

[Mar 13, 2020 3:54:57 PM] [INFO] Removed patch “30489227” with UPI + “23305624” from OUI inventory memory..
[Mar 13, 2020 3:54:57 PM] [INFO] Stack Description: java.lang.RuntimeException: OUI session not initialized
at oracle.sysman.oui.patch.impl.HomeOperationsImpl.saveInventory(HomeOperationsImpl.java:372)
at oracle.glcm.opatch.common.api.install.HomeOperationsShell.saveInventory(HomeOperationsShell.java:204)
at oracle.opatch.ipm.IPMRWServices.saveInstallInventory(IPMRWServices.java:854)
at oracle.opatch.OPatchSession.restorePatchesInventory(OPatchSession.java:1434)
at oracle.opatch.MergedPatchObject.restoreOH(MergedPatchObject.java:1234)
at oracle.opatch.opatchutil.NApply.legacy_process(NApply.java:1465)
at oracle.opatch.opatchutil.NApply.legacy_process(NApply.java:370)
at oracle.opatch.opatchutil.NApply.process(NApply.java:352)
at oracle.opatch.opatchutil.OUSession.napply(OUSession.java:1123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at oracle.opatch.UtilSession.process(UtilSession.java:355)
at oracle.opatch.OPatchSession.main(OPatchSession.java:3985)
at oracle.opatch.OPatchSDK.NApply(OPatchSDK.java:1127)
at oracle.opatch.opatchsdk.OPatchTarget.NApply(OPatchTarget.java:4169)
at oracle.opatchauto.core.binary.action.LegacyPatchAction.execute(LegacyPatchAction.java:76)
at oracle.opatchauto.core.binary.OPatchAutoBinary.patchWithoutAnalyze(OPatchAutoBinary.java:519)
at oracle.opatchauto.core.binary.OPatchAutoBinary.applyWithoutAnalyze(OPatchAutoBinary.java:406)
at oracle.opatchauto.core.OPatchAutoCore.runOPatchAutoBinary(OPatchAutoCore.java:192)
at oracle.opatchauto.core.OPatchAutoCore.main(OPatchAutoCore.java:75)
[Mar 13, 2020 3:54:57 PM] [SEVERE] OUI-67115:OPatch failed to restore OH ‘/u01/app/19.3.0/grid’. Consult OPatch document to restore the home manually before proceeding.
[Mar 13, 2020 3:54:57 PM] [WARNING] OUI-67124:
NApply was not able to restore the home. Please invoke the following scripts:
– restore.[sh,bat]
– make.txt (Unix only)
to restore the ORACLE_HOME. They are located under
“/u01/app/19.3.0/grid/.patch_storage/NApply/2020-03-13_15-52-41PM”
[Mar 13, 2020 3:54:58 PM] [SEVERE] OUI-67073:UtilSession failed: ApplySession failed in system modification phase… ‘ApplySession::apply failed: Copy failed from ‘/tmp/30501910/30489227/files/bin/crsd.bin’ to ‘/u01/app/19.3.0/grid/bin/crsd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/cssdagent’ to ‘/u01/app/19.3.0/grid/bin/cssdagent’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/cssdmonitor’ to ‘/u01/app/19.3.0/grid/bin/cssdmonitor’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/evmd.bin’ to ‘/u01/app/19.3.0/grid/bin/evmd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/evmlogger.bin’ to ‘/u01/app/19.3.0/grid/bin/evmlogger.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/gipcd.bin’ to ‘/u01/app/19.3.0/grid/bin/gipcd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/gpnpd.bin’ to ‘/u01/app/19.3.0/grid/bin/gpnpd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/mdnsd.bin’ to ‘/u01/app/19.3.0/grid/bin/mdnsd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/ocssd.bin’ to ‘/u01/app/19.3.0/grid/bin/ocssd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/octssd.bin’ to ‘/u01/app/19.3.0/grid/bin/octssd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/ohasd.bin’ to ‘/u01/app/19.3.0/grid/bin/ohasd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/oraagent.bin’ to ‘/u01/app/19.3.0/grid/bin/oraagent.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/orarootagent.bin’ to ‘/u01/app/19.3.0/grid/bin/orarootagent.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/osysmond.bin’ to ‘/u01/app/19.3.0/grid/bin/osysmond.bin’…
4.2 Solution
As the log suggests, run the restore script:

[root@luda 2020-03-13_15-52-41PM]# ls
backup make.txt patchlist.txt restore.sh
[root@luda 2020-03-13_15-52-41PM]# ./restore.sh
This script is going to restore the Oracle Home to the previous state.
It does not perform any of the following:
– Running init/pre/post scripts
– Oracle binary re-link
– Customized steps performed manually by user
Please use this script with supervision from Oracle Technical Support.
About to modify Oracle Home( /u01/app/19.3.0/grid )
Do you want to proceed? [Y/N]
y
User responded with : Y
Restore script completed.
[root@luda 2020-03-13_15-52-41PM]#
4.3 Analysis, part 1
Since all the files that failed to copy were CRS binaries, when the RU run reached the step that stops CRS, I stopped CRS manually myself:

[root@luda 2020-03-13_16-45-11PM]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on ‘rac1’
CRS-2673: Attempting to stop ‘ora.crsd’ on ‘rac1’
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server ‘rac1’
CRS-2673: Attempting to stop ‘ora.chad’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.cndba.db’ on ‘rac1’
CRS-2677: Stop of ‘ora.cndba.db’ on ‘rac1’ succeeded
CRS-33673: Attempting to stop resource group ‘ora.asmgroup’ on server ‘rac1’
CRS-2673: Attempting to stop ‘ora.OCR.dg’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.DATA.dg’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.MGMT.dg’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.LISTENER.lsnr’ on ‘rac1’
CRS-2677: Stop of ‘ora.DATA.dg’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.MGMT.dg’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.OCR.dg’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.asm’ on ‘rac1’
CRS-2677: Stop of ‘ora.LISTENER.lsnr’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.rac1.vip’ on ‘rac1’
CRS-2677: Stop of ‘ora.rac1.vip’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.asm’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.ASMNET1LSNR_ASM.lsnr’ on ‘rac1’
CRS-2677: Stop of ‘ora.chad’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.ASMNET1LSNR_ASM.lsnr’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.asmnet1.asmnetwork’ on ‘rac1’
CRS-2677: Stop of ‘ora.asmnet1.asmnetwork’ on ‘rac1’ succeeded
CRS-33677: Stop of resource group ‘ora.asmgroup’ on server ‘rac1’ succeeded.
CRS-2672: Attempting to start ‘ora.rac1.vip’ on ‘rac2’
CRS-2676: Start of ‘ora.rac1.vip’ on ‘rac2’ succeeded
CRS-2673: Attempting to stop ‘ora.ons’ on ‘rac1’
CRS-2677: Stop of ‘ora.ons’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.net1.network’ on ‘rac1’
CRS-2677: Stop of ‘ora.net1.network’ on ‘rac1’ succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on ‘rac1’ has completed
CRS-2677: Stop of ‘ora.crsd’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.storage’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.crf’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.drivers.acfs’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.mdnsd’ on ‘rac1’
CRS-2677: Stop of ‘ora.crf’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.drivers.acfs’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.storage’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.asm’ on ‘rac1’
CRS-2677: Stop of ‘ora.mdnsd’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.asm’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.cluster_interconnect.haip’ on ‘rac1’
CRS-2677: Stop of ‘ora.cluster_interconnect.haip’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.ctssd’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.evmd’ on ‘rac1’
CRS-2677: Stop of ‘ora.ctssd’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.evmd’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.cssd’ on ‘rac1’
CRS-2677: Stop of ‘ora.cssd’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.driver.afd’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.gipcd’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.gpnpd’ on ‘rac1’
CRS-2677: Stop of ‘ora.driver.afd’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.gpnpd’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.gipcd’ on ‘rac1’ succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on ‘rac1’ has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@luda 2020-03-13_16-45-11PM]#
4.4 After stopping CRS, the RU applied successfully
[root@luda tmp]# /u01/app/19.3.0/grid/OPatch/opatchauto apply /tmp/30501910

OPatchauto session is initiated at Fri Mar 13 17:17:26 2020

System initialization log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-03-13_05-17-50PM.log.

Session log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/opatchauto2020-03-13_05-20-03PM.log
The id for this session is FPKL

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.3.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19.3.0/db_1
Patch applicability verified successfully on home /u01/app/oracle/product/19.3.0/db_1

Patch applicability verified successfully on home /u01/app/19.3.0/grid

Verifying SQL patch applicability on home /u01/app/oracle/product/19.3.0/db_1
SQL patch applicability verified successfully on home /u01/app/oracle/product/19.3.0/db_1

Preparing to bring down database service on home /u01/app/oracle/product/19.3.0/db_1
Successfully prepared home /u01/app/oracle/product/19.3.0/db_1 to bring down database service

Bringing down CRS service on home /u01/app/19.3.0/grid
CRS service brought down successfully on home /u01/app/19.3.0/grid

Start applying binary patch on home /u01/app/19.3.0/grid
Binary patch applied successfully on home /u01/app/19.3.0/grid

Starting CRS service on home /u01/app/19.3.0/grid
CRS service started successfully on home /u01/app/19.3.0/grid

Preparing home /u01/app/oracle/product/19.3.0/db_1 after database service restarted
No step execution required………

OPatchAuto successful.

——————————–Summary——————————–

Patching is completed successfully. Please find the summary as follows:

Host:rac1
RAC Home:/u01/app/oracle/product/19.3.0/db_1
Version:19.0.0.0.0
Summary:

==Following patches were SKIPPED:

Patch: /tmp/30501910/30489632
Reason: This patch is not applicable to this specified target type – “rac_database”

Patch: /tmp/30501910/30655595
Reason: This patch is not applicable to this specified target type – “rac_database”

Patch: /tmp/30501910/30489227
Reason: This patch is already been applied, so not going to apply again.

Patch: /tmp/30501910/30557433
Reason: This patch is already been applied, so not going to apply again.

Host:rac1
CRS Home:/u01/app/19.3.0/grid
Version:19.0.0.0.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /tmp/30501910/30489227
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-13_17-23-27PM_1.log

Patch: /tmp/30501910/30489632
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-13_17-23-27PM_1.log

Patch: /tmp/30501910/30557433
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-13_17-23-27PM_1.log

Patch: /tmp/30501910/30655595
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-13_17-23-27PM_1.log

OPatchauto session completed at Fri Mar 13 17:45:20 2020
Time taken to complete the session 27 minutes, 56 seconds
[root@luda tmp]#
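With the session complete, it is worth confirming the patch level both in each home's inventory and cluster-wide; a hedged check (run each opatch command as that home's owner):

```shell
# Inventory view of each home
su - grid   -c '/u01/app/19.3.0/grid/OPatch/opatch lspatches'
su - oracle -c '/u01/app/oracle/product/19.3.0/db_1/OPatch/opatch lspatches'
# Cluster-wide software version and patch level as seen by CRS
crsctl query crs activeversion -f
```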
4.5 Remaining issues
Manually stopping CRS during the RU lets the patch apply succeed, but it leaves many permission problems behind, and CRS then fails to start:

[root@luda lib]# crsctl start crs
CRS-41053: checking Oracle Grid Infrastructure for file permission issues
PRVG-2031 : Owner of file “/u01/app/grid/cfgtoollogs” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVH-0124 : Path “/etc/oracle/maps” with permissions “rwxr-xr-x” does not have write permissions for the file’s group and others on node “rac1”.
PRVH-0100 : Restricted deletion flag is not set for path “/etc/oracle/maps” on node “rac1”.
PRVG-2031 : Owner of file “/u01/app/grid” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/admin” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVH-0111 : Path “/etc/init.d/ohasd” with permissions “rwxr-x—” does not have read permissions for others on node “rac1”.
PRVH-0113 : Path “/etc/init.d/ohasd” with permissions “rwxr-x—” does not have execute permissions for others on node “rac1”.
PRVG-2031 : Owner of file “/etc/oracleafd.conf” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2032 : Group of file “/etc/oracleafd.conf” did not match the expected value on node “rac1”. [Expected = “oinstall(54321)” ; Found = “asmadmin(54329)”]
PRVH-0124 : Path “/var/tmp/.oracle” with permissions “rwxr-xr-x” does not have write permissions for the file’s group and others on node “rac1”.
PRVH-0100 : Restricted deletion flag is not set for path “/var/tmp/.oracle” on node “rac1”.
PRVG-2031 : Owner of file “/u01/app/grid/diag/ofm” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/lsnrctl” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/netcman” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/audit” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/checkpoints” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/crsdata/rac1/olr/rac1_19.olr” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVH-0100 : Restricted deletion flag is not set for path “/u01/app/grid/crsdata/rac1/shm” on node “rac1”.
PRVG-2031 : Owner of file “/u01/app/grid/crsdata/rac1/shm” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/crsdata/rac1/cvu” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/crsdata/rac1/olr” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/crsdata” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/crsdata/rac1/ocr” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/kfod” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/asmtool” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/crs” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/dps” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/em” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file "/u01/app/grid/diag/diagtool" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/grid/diag/gsm" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/grid/diag/ios" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/grid/diag/rdbms" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/grid/diag/apx" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/grid/diag/tnslsnr" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/grid/diag/asmcmd" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/grid/diag/clients" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/grid/diag/asm" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/grid/diag/afdboot" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-11960 : Set user ID bit is not set for file "/u01/app/19.3.0/grid/bin/jssu" on node "rac1".
PRVH-0147 : Set group ID bit is not set for file "/u01/app/19.3.0/grid/bin/extproc" on node "rac1".
PRVG-11960 : Set user ID bit is not set for file "/u01/app/19.3.0/grid/bin/extjob" on node "rac1".
PRVG-11960 : Set user ID bit is not set for file "/u01/app/19.3.0/grid/bin/oradism" on node "rac1".
PRVG-11960 : Set user ID bit is not set for file "/u01/app/19.3.0/grid/bin/oracle" on node "rac1".
PRVH-0147 : Set group ID bit is not set for file "/u01/app/19.3.0/grid/bin/oracle" on node "rac1".
PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/HASLoad.pm" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2032 : Group of file "/u01/app/19.3.0/grid/crs/install/cmdllroot.sh" did not match the expected value on node "rac1". [Expected = "oinstall(54321)" ; Found = "root(0)"]
PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/crsconfig_params.sbs" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/crsconvert.pm" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/install_gi.excl" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/paramfile.crs" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/oracle-ohasd.service" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/oracle-ohasd.conf" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/installRemove.excl" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/install.incl" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/install.excl" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/inittab" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/crstfa.pm" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/crsconvtoext.pm" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/crsconfig_addparams.sbs" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/CLSR.pm" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/dropdb.pl" did not match the expected value on node "rac1". [Expected = "root(0)|root(0)" ; Found = "grid(54322)"]
PRVH-0109 : Path "/u01/app/19.3.0/grid/lib/libacfs19.so" with permissions "rwxr-xr-x" does not have write permissions for the file's group on node "rac1".
PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/tfa_setup" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/roothas.sh" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/rootcrs.sh" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVH-0109 : Path "/u01/app/19.3.0/grid/crs/install/rhpdata" with permissions "rwxr-xr-x" does not have write permissions for the file's group on node "rac1".
PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/rhpdata" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/post_gimr_ugdg.pl" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/perlhasgen.pm" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "grid(54322)"]
PRVH-0109 : Path "/u01/app/19.3.0/grid/lib/libedtn19.a" with permissions "rwxr-xr-x" does not have write permissions for the file's group on node "rac1".
PRVH-0109 : Path "/u01/app/19.3.0/grid/lib/libskgxpcompat.so" with permissions "rwxr-xr-x" does not have write permissions for the file's group on node "rac1".
CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.
[root@luda lib]#
Fix the ownership and permissions manually, one item at a time, following the error messages:
[root@luda grid]# chown root:oinstall admin -R
[root@luda grid]# pwd
/u01/app/grid
[root@luda grid]# chown root:oinstall * -R
[root@luda grid]# cd /u01/app/19.3.0/grid/crs/install/
[root@luda install]# ll
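Since every PRVG-2031 line names both the file and the expected owner, the per-file chown commands can also be generated from the saved check output instead of being typed one by one. A sketch, assuming the output above was captured to a file named cluvfy.log (a hypothetical name); review the generated commands before running them as root:

```shell
# Turn each
#   PRVG-2031 : Owner of file "/path" ... [Expected = "root(0)" ; ...]
# line into a "chown root /path" command. Splitting on the double
# quote puts the path in field 2 and the expected owner in field 6.
awk -F'"' '/PRVG-2031/ {
    file = $2; owner = $6
    sub(/\(.*/, "", owner)          # strip the "(0)" uid suffix
    printf "chown %s %s\n", owner, file
}' cluvfy.log
```

Piping the result through `sh` (or into a script) applies all the ownership fixes at once.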

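The PRVG-11960 and PRVH-0147 findings are about the setuid/setgid mode bits rather than ownership, so chown alone does not clear them. As a reminder of how those bits show up in the numeric mode (demonstrated on a scratch file, not on the real $GRID_HOME binaries; take the correct target modes for oracle, jssu, extjob, oradism and extproc from a healthy installation of the same version):

```shell
# Demonstrate the special permission bits cluvfy checks for,
# using a throwaway file instead of the real grid binaries.
tmp=$(mktemp)

chmod 4750 "$tmp"            # leading 4 = setuid (the PRVG-11960 bit)
stat -c '%a' "$tmp"          # prints 4750

chmod 6751 "$tmp"            # leading 6 = setuid + setgid together
stat -c '%a' "$tmp"          # prints 6751

rm -f "$tmp"
```

After restoring both the ownership and the mode bits, the CRS stack start can be retried.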
Oracle 19c RAC environment upgrade to the 19.6 RU: OPatch prerequisite check "CheckApplicable" failed