
Disabling real-time statistics gathering

Real-time Statistics
This Oracle Database 19c new feature is available on certain Oracle Database platforms. Check the Oracle Database Licensing Guide for more
information.
CHALLENGES TO MAINTAINING ACCURATE OPTIMIZER STATISTICS
As mentioned above, stale statistics can result in sub-optimal SQL execution plans, and keeping them accurate in highly volatile
systems can be challenging. High-frequency statistics gathering helps to resolve this, but a more ideal solution would be to maintain
statistics as changes are made to the data in the database.

REAL-TIME STATISTICS
Real-time statistics extend statistics gathering to the conventional DML operations INSERT, UPDATE, and MERGE. When
these DML operations are executed on the data in the database, the most essential optimizer statistics are maintained in real time. This
applies to both individual-row and bulk operations.

Real-time statistics augment those collected by the automatic statistics gathering job, by high-frequency statistics gathering, or manually
through the DBMS_STATS API. An accurate picture of the data in the database is therefore maintained at all times, which
results in better SQL execution plans.

Real-time statistics are managed automatically, and no intervention from the database administrator is required. Developers may
choose to disable online statistics gathering for individual SQL statements using the NO_GATHER_OPTIMIZER_STATISTICS hint.

Example:

SELECT /*+ NO_GATHER_OPTIMIZER_STATISTICS */ …
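Whether real-time statistics have actually been collected for a table can be checked from the statistics views. As a sketch (assuming an Oracle 19c database on a platform where the feature is enabled), statistics maintained by conventional DML are flagged in the NOTES column:

```sql
-- Tables whose statistics were refreshed by conventional DML carry the
-- STATS_ON_CONVENTIONAL_DML marker in the NOTES column (19c and later)
SELECT table_name, num_rows, notes
FROM   user_tab_statistics
WHERE  notes = 'STATS_ON_CONVENTIONAL_DML';
```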

If you already have a well-established statistics gathering procedure, or if for some other reason you want to disable automatic statistics
gathering for your main application schema, consider leaving it on for the dictionary tables. You can do so by changing the value of
the AUTOSTATS_TARGET preference to ORACLE instead of AUTO using the DBMS_STATS.SET_GLOBAL_PREFS procedure.

 

exec dbms_stats.set_global_prefs('autostats_target','oracle')

 

SQL> begin
  dbms_stats.set_global_prefs('autostats_target','oracle');
end;
/

References:
https://docs.oracle.com/en/database/oracle/oracle-database/19/tgsql/optimizer-statistics-concepts.html#GUID-769E609D-0312-43A7-9581-3F3EACF10BA9
19c New Feature: Real-Time Statistics (Doc ID 2552657.1)

 

It is recommended to disable this feature.
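If you follow that recommendation and disable the feature instance-wide, MOS note 2552657.1 describes two underscore parameters that control the gathering and use of real-time statistics. Treat the following as a sketch only, and confirm with Oracle Support before changing hidden parameters in production:

```sql
-- Stop gathering real-time statistics on conventional DML
-- (hidden parameter; confirm with Oracle Support before setting)
ALTER SYSTEM SET "_optimizer_gather_stats_on_conventional_dml" = FALSE;
-- Stop the optimizer from using real-time statistics already gathered
ALTER SYSTEM SET "_optimizer_use_stats_on_conventional_dml" = FALSE;
```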

Oracle 19c Real-Time Statistics: disabling real-time statistics

Quoted from: Doc ID 1577660.1

Upgrade compatibility matrix for upgrading to 19c

Minimum database versions that can be upgraded directly to Oracle Database 19c:

Source database          Target database
18 (all releases)        19c
12.2.0.1                 19c
12.1.0.2                 19c
11.2.0.4                 19c

Direct upgrade to 19c is not supported from database releases/versions not listed above.
In that case, first upgrade to an intermediate Oracle Database version, which can then be upgraded directly to 19c.
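Before choosing between a direct and an indirect path, confirm the exact source release. A minimal check, which works on the releases in the matrices above:

```sql
-- Exact release of the source database; compare against the matrix above
SELECT version FROM v$instance;
-- From 18c onward the full five-digit release is also available:
-- SELECT version_full FROM v$instance;
```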

Upgrade compatibility matrix for upgrading to 18c

Note: for Exadata, 18.1 has been released; for on-premises databases, 18.3 has been released.

Minimum database versions that can be upgraded directly to Oracle Database 18.1:

Source database          Target database
12.2.0.1                 18c
12.1.0.1 / 12.1.0.2      18c
11.2.0.3 / 11.2.0.4      18c

Direct upgrade to 18c is not supported from database versions other than those listed above.
In that case, first upgrade to an intermediate Oracle Database version, which can then be upgraded directly to 18c.

Upgrade compatibility matrix for upgrading to 12.2.x

Minimum database versions that can be upgraded directly to Oracle 12c Release 2:

Source database                  Target database
11.2.0.3 / 11.2.0.4              12.2.x
12.1.0.1 / 12.1.0.2              12.2.x

The following database versions require an indirect upgrade:

Source database                  Upgrade path                              Target database
11.2.0.2                         -> 11.2.0.4/12.1.0.2 ->                   12.2.x
11.2.0.1                         -> 11.2.0.4 ->                            12.2.x
11.1.0.7                         -> 11.2.0.4/12.1.0.2 ->                   12.2.x
11.1.0.6                         -> 11.2.0.4 ->                            12.2.x
10.2.0.5                         -> 11.2.0.4/12.1.0.2 ->                   12.2.x
10.2.0.2/10.2.0.3/10.2.0.4       -> 11.2.0.4 ->                            12.2.x
10.2.0.1                         -> 10.2.0.5 -> 11.2.0.4/12.1.0.2 ->       12.2.x
10.1.0.5                         -> 11.2.0.4 ->                            12.2.x
10.1.0.4 (or lower)              -> 10.2.0.5 -> 11.2.0.4/12.1.0.2 ->       12.2.x
9.2.0.8                          -> 11.2.0.4 ->                            12.2.x
9.2.0.7 (or lower)               -> 9.2.0.8 -> 11.2.0.4 ->                 12.2.x

Upgrade compatibility matrix for upgrading to 12.1.x

Minimum database versions that can be upgraded directly to Oracle 12c Release 1:

Source database          Target database
10.2.0.5                 12.1.x
11.1.0.7                 12.1.x
11.2.0.2 (or higher)     12.1.x

The following database versions require an indirect upgrade:

Source database          Upgrade path                                        Target database
11.2.0.1                 -> 11.2.0.2 or higher ->                            12.1.x
11.1.0.6                 -> 11.1.0.7 or 11.2.0.2 or higher ->                12.1.x
10.2.0.4 (or lower)      -> 10.2.0.5 or a later direct-upgrade version ->    12.1.x
10.1.0.5 (or lower)      -> 10.2.0.5 or a later direct-upgrade version ->    12.1.x
9.2.0.8 (or lower)       -> 9.2.0.8 -> 11.2.0.2 or higher ->                 12.1.x

Upgrade compatibility matrix for upgrading to 11.2.x

Minimum database versions that can be upgraded directly to Oracle 11g Release 2:

Source database          Target database
9.2.0.8 (or higher)      11.2.x
10.1.0.5 (or higher)     11.2.x
10.2.0.2 (or higher)     11.2.x
11.1.0.6 (or higher)     11.2.x

The following database versions require an indirect upgrade:

Source database          Upgrade path                    Target database
7.3.3 (or lower)         -> 7.3.4.0 -> 9.2.0.8 ->        11.2.x
8.0.5 (or lower)         -> 8.0.6.x -> 9.2.0.8 ->        11.2.x
8.1.7 (or lower)         -> 8.1.7.4 -> 10.2.0.4 ->       11.2.x
9.0.1.3 (or lower)       -> 9.0.1.4 -> 10.2.0.4 ->       11.2.x
9.2.0.7 (or lower)       -> 9.2.0.8 ->                   11.2.x

For more information, see:
http://docs.oracle.com/cd/E11882_01/server.112/e23633/preup.htm#UPGRD12358

(The link above redirects to Oracle Technology Network; an OTN user name and password are required for access.)

Upgrade compatibility matrix for upgrading to 11.1.x

Minimum database versions that can be upgraded directly to Oracle 11g Release 1:

Source database          Target database
9.2.0.4 (or higher)      11.1.x
10.1.0.2 (or higher)     11.1.x
10.2.0.1 (or higher)     11.1.x

The following database versions require an indirect upgrade:

Source database          Upgrade path                    Target database
7.3.3 (or lower)         -> 7.3.4.0 -> 9.2.0.8 ->        11.1.x
8.0.5 (or lower)         -> 8.0.6.x -> 9.2.0.8 ->        11.1.x
8.1.7 (or lower)         -> 8.1.7.4 -> 9.2.0.8 ->        11.1.x
9.0.1.3 (or lower)       -> 9.0.1.4 -> 9.2.0.8 ->        11.1.x
9.2.0.3 (or lower)       -> 9.2.0.4.0 ->                 11.1.x

For more information, see:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28300/preup.htm#CEGEIBHC

(The link above redirects to Oracle Technology Network; an OTN user name and password are required for access.)

Upgrade compatibility matrix for upgrading to 10.2.x

Minimum database versions that can be upgraded directly to 10.2.x:

Source database          Target database
8.1.7.4 (or higher)      10.2.x
9.0.1.4 (or higher)      10.2.x
9.2.0.4 (or higher)      10.2.x
10.1.0.2 (or higher)     10.2.x

The following database versions require an indirect upgrade:

Source database          Upgrade path                        Target database
7.3.3 (or lower)         -> 7.3.4 -> 8.1.7 -> 8.1.7.4 ->     10.2.x
7.3.4 (or lower)         -> 8.1.7 -> 8.1.7.4 ->              10.2.x
8.0.n (or lower)         -> 8.1.7 -> 8.1.7.4 ->              10.2.x
8.1.n (or lower)         -> 8.1.7 -> 8.1.7.4 ->              10.2.x

For more information, see:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14238/preup.htm#CEGEIBHC

Upgrade compatibility matrix for upgrading to 10.1.x

Minimum database versions that can be upgraded directly to 10.1.x:

Source database          Target database
8.0.6 (or higher)        10.1.x
8.1.7 (or higher)        10.1.x
9.0.1 (or higher)        10.1.x
9.2.0 (or higher)        10.1.x

The following database versions require an indirect upgrade:

Source database          Upgrade path       Target database
7.3.4 (or lower)         -> 8.0.6 ->        10.1.x
8.0.5 (or lower)         -> 8.0.6 ->        10.1.x
8.1.6 (or lower)         -> 8.1.7 ->        10.1.x

For more information, see:
http://download.oracle.com/docs/cd/B14117_01/server.101/b10763/preup.htm#CEGEIBHC

Upgrade compatibility matrix for upgrading to 9.2.x

Minimum database versions that can be upgraded directly to 9.2.x:

Source database          Target database
7.3.4 (or higher)        9.2.x
8.0.6 (or higher)        9.2.x
8.1.7 (or higher)        9.2.x
9.0.1 (or higher)        9.2.x

The following database versions require an indirect upgrade:

Source database          Upgrade path       Target database
7.3.3 (or lower)         -> 7.3.4 ->        9.2.x
8.0.5 (or lower)         -> 8.0.6 ->        9.2.x
8.1.6 (or lower)         -> 8.1.7 ->        9.2.x

For more information, see:
http://download.oracle.com/docs/cd/B10501_01/server.920/a96530/migprep.htm#1006863

Downgrade

Downgrade compatibility matrix for downgrading from 19c:

Source database          Can downgrade to
Non-CDB 19c              18c, 12.2.0.1, 12.1.0.2, 12.1.0.1, 11.2.0.4
PDB/CDB 19c              18c, 12.2.0.1, 12.1.0.2

For more information, see the 19c Downgrade link.

 

Downgrade compatibility matrix for downgrading from 18c:

Source database          Can downgrade to
Non-CDB 18c              12.2.0.1, 12.1.0.2, 12.1.0.1, 11.2.0.4
PDB/CDB 18c              12.2.0.1, 12.1.0.2

For more information, see the 18c Downgrade link.

Downgrade compatibility matrix for downgrading from 12.2:

Source database          Can downgrade to
Non-CDB 12.2             12.1.0.2, 12.1.0.1, 11.2.0.4, 11.2.0.3
PDB/CDB 12.2             12.1.0.2

For more information, see the 12.2 Downgrade link.

Downgrade compatibility matrix for downgrading from 12.1.x:

Source database          Can downgrade to
12.1.x                   11.1.0.7
12.1.x                   11.2.0.2.0 (or higher)

For more information, see:
http://docs.oracle.com/cd/E16655_01/server.121/e17642/downgrade.htm#i1010267

Note:
You cannot downgrade to 10.2.0.5 because the minimum compatible setting for Oracle Database 12c is 11.0.
You cannot downgrade a database that was upgraded from Oracle Database Express Edition.

Downgrade compatibility matrix for downgrading from 11.2.x:

Source database          Can downgrade to
11.2.x                   10.1.0.5.0 (or higher)
11.2.x                   10.2.0.2.0 (or higher)
11.2.x                   11.1.0.6.0 (or higher)

For more information, see:
http://docs.oracle.com/cd/E11882_01/server.112/e23633/downgrade.htm#UPGRD00710

Downgrade compatibility matrix for downgrading from 11.1.x:

Source database          Can downgrade to
11.1.x                   10.1.0.5.0 (or higher)
11.1.x                   10.2.0.3.0 (or higher)

For more information, see:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28300/downgrade.htm#i1010243

Downgrade compatibility matrix for downgrading from 10.2.x:

Source database          Can downgrade to
10.2.x                   9.2.0.6.0 (or higher)
10.2.x                   10.1.0.4.0 (or higher)

For more information, see:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14238/downgrade.htm#i1010243

Downgrade compatibility matrix for downgrading from 10.1.x:

Source database          Can downgrade to
10.1.x                   9.2.0.4.0 (or higher)

For more information, see:
http://download.oracle.com/docs/cd/B14117_01/server.101/b10763/downgrade.htm#i1010243

Downgrade compatibility matrix for downgrading from 9.2.x:

Source database          Can downgrade to
9.2.x                    9.0.1.3.0 (or higher)
9.2.x                    8.1.7.3.0 (or higher)

For more information, see:
http://download.oracle.com/docs/cd/B10501_01/server.920/a96530/downgrad.htm#1008177

Notes
1: If you applied a patch set after the upgrade, you cannot downgrade.

For example, if you upgraded from 9.2.0.6 to 10.2.0.1 and then applied the 10.2.0.3 patch set, you cannot downgrade from 10.2.0.3 to 9.2.0.6. (You can downgrade from 10.2.0.3 to 9.2.0.6 only if you upgraded directly from 9.2.0.6 to 10.2.0.3.)

2: You can downgrade only to the same version your database was upgraded from.
For example, you can downgrade from 11.1.0.6 to 10.2.0.3 only if your database was upgraded from 10.2.0.3 to 11.1.0.6. If you created a new 11.1.0.6 database, you cannot downgrade it to 10.2.0.3. Likewise, if you upgraded directly from 9.2.0.8 to 11.1.0.6, you cannot downgrade to 10.2.0.3, because the database was not upgraded from 10.2.0.3.

3: You cannot downgrade if the database parameter COMPATIBLE is set to a version higher than the version you upgraded from.
For example, if you upgraded from 10.2.0.3 to 11.1.0.6 and then set COMPATIBLE to 11.1.0.6, you cannot downgrade. Similarly, if you upgraded directly from 9.2.0.8 to 11.1.0.6 and then set COMPATIBLE to 10.2.0.1, you cannot downgrade either.

4: You cannot upgrade the database binaries of one release into another release's Oracle home.
For example, you cannot upgrade the 10.1.0.2.0 binaries to 10.2.0.1.0. Each release must be installed in its own separate Oracle home; you cannot install 10.2.0.x into the Oracle home of any other release to perform the upgrade.
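Notes 2 and 3 can be verified before a downgrade is attempted. As a sketch, the upgrade history and the current COMPATIBLE setting can both be read from the dictionary:

```sql
-- What this database was upgraded from (note 2)
SELECT action_time, action, version
FROM   dba_registry_history
ORDER  BY action_time;

-- Current COMPATIBLE setting (note 3)
SELECT value FROM v$parameter WHERE name = 'compatible';
```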

Version differences when upgrading to Oracle 19c

1 Error 1: ocrconfig.bin "does not exists or is not readable"
1.1 Symptom
In a 19c RAC environment, while upgrading the RU from 19.3 to 19.6 following the steps in the 19.6 RU readme, opatchauto failed with an error:

[root@luda software]# export PATH=$PATH:/u01/app/19.3.0/grid/OPatch
[root@luda software]# opatchauto apply /u01/software/30501910

OPatchauto session is initiated at Thu Jan 29 21:18:26 2020

System initialization log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-03-12_09-18-36PM.log.

Session log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/opatchauto2020-03-12_09-21-01PM.log
The id for this session is 5Z4B

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.3.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19.3.0/db_1
Patch applicability verified successfully on home /u01/app/oracle/product/19.3.0/db_1

Patch applicability verified successfully on home /u01/app/19.3.0/grid

Verifying SQL patch applicability on home /u01/app/oracle/product/19.3.0/db_1
SQL patch applicability verified successfully on home /u01/app/oracle/product/19.3.0/db_1

Preparing to bring down database service on home /u01/app/oracle/product/19.3.0/db_1
Successfully prepared home /u01/app/oracle/product/19.3.0/db_1 to bring down database service

Bringing down CRS service on home /u01/app/19.3.0/grid
CRS service brought down successfully on home /u01/app/19.3.0/grid

Performing prepatch operation on home /u01/app/oracle/product/19.3.0/db_1
Perpatch operation completed successfully on home /u01/app/oracle/product/19.3.0/db_1

Start applying binary patch on home /u01/app/oracle/product/19.3.0/db_1
Binary patch applied successfully on home /u01/app/oracle/product/19.3.0/db_1

Performing postpatch operation on home /u01/app/oracle/product/19.3.0/db_1
Postpatch operation completed successfully on home /u01/app/oracle/product/19.3.0/db_1

Start applying binary patch on home /u01/app/19.3.0/grid
Failed while applying binary patches on home /u01/app/19.3.0/grid

Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : rac1->/u01/app/19.3.0/grid Type[crs]
Details: [
—————————Patching Failed———————————
Command execution failed during patching in home: /u01/app/19.3.0/grid, host: rac1.
Command failed: /u01/app/19.3.0/grid/OPatch/opatchauto apply /u01/software/30501910 -oh /u01/app/19.3.0/grid -target_type cluster -binary -invPtrLoc /u01/app/19.3.0/grid/oraInst.loc -jre /u01/app/19.3.0/grid/OPatch/jre -persistresult /u01/app/19.3.0/grid/OPatch/auto/dbsessioninfo/sessionresult_rac1_crs.ser -analyzedresult /u01/app/19.3.0/grid/OPatch/auto/dbsessioninfo/sessionresult_analyze_rac1_crs.ser
Command failure output:
==Following patches FAILED in apply:

Patch: /u01/software/30501910/30489227
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-12_21-45-54PM_1.log
Reason: Failed during Analysis: CheckNApplyReport Failed, [ Prerequisite Status: FAILED, Prerequisite output:
The details are:

Prerequisite check “CheckApplicable” failed.]

After fixing the cause of failure Run opatchauto resume

]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Thu Jan 29 21:46:04 2020
Time taken to complete the session 27 minutes, 39 seconds

opatchauto failed with error code 42
[root@luda software]#
Checking the error log:

[Jan 29, 2020 10:36:54 PM] [INFO] Prereq checkPatchApplicableOnCurrentPlatform Passed for patch : 30489227
[Jan 29, 2020 10:36:57 PM] [WARNING]Action file /u01/app/19.3.0/grid/jlib/srvmasm.jar is in the jar list,OOP should be lanched
[Jan 29, 2020 10:36:58 PM] [INFO] Patch 30489227:
Copy Action: Source File "/u01/software/30501910/30489227/files/bin/ocrcheck.bin" does not exists or is not readable
'oracle.has.crs, 19.0.0.0.0': Cannot copy file from 'ocrcheck.bin' to '/u01/app/19.3.0/grid/bin/ocrcheck.bin'
Copy Action: Source File "/u01/software/30501910/30489227/files/bin/ocrconfig.bin" does not exists or is not readable
'oracle.has.crs, 19.0.0.0.0': Cannot copy file from 'ocrconfig.bin' to '/u01/app/19.3.0/grid/bin/ocrconfig.bin'
[Jan 29, 2020 10:36:58 PM] [INFO] Prerequisite check "CheckApplicable" failed.
The details are:

Patch 30489227:
Copy Action: Source File "/u01/software/30501910/30489227/files/bin/ocrcheck.bin" does not exists or is not readable
'oracle.has.crs, 19.0.0.0.0': Cannot copy file from 'ocrcheck.bin' to '/u01/app/19.3.0/grid/bin/ocrcheck.bin'
Copy Action: Source File "/u01/software/30501910/30489227/files/bin/ocrconfig.bin" does not exists or is not readable
'oracle.has.crs, 19.0.0.0.0': Cannot copy file from 'ocrconfig.bin' to '/u01/app/19.3.0/grid/bin/ocrconfig.bin'
[Jan 29, 2020 10:36:58 PM] [SEVERE] OUI-67073:UtilSession failed:
Prerequisite check “CheckApplicable” failed.
[Jan 29, 2020 10:36:58 PM] [INFO] Finishing UtilSession at Thu Jan 29 22:36:58 CST 2020
[Jan 29, 2020 10:36:58 PM] [INFO] Log file location: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-12_22-36-43PM
_1.log
[Jan 29, 2020 10:36:58 PM] [INFO] Stack Description: java.lang.RuntimeException:
Prerequisite check “CheckApplicable” failed.
at oracle.opatch.OPatchSessionHelper.runApplyPrereqs(OPatchSessionHelper.java:6548)
at oracle.opatch.opatchutil.NApply.legacy_process(NApply.java:1002)
at oracle.opatch.opatchutil.NApply.legacy_process(NApply.java:370)
at oracle.opatch.opatchutil.NApply.process(NApply.java:352)
at oracle.opatch.opatchutil.OUSession.napply(OUSession.java:1123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at oracle.opatch.UtilSession.process(UtilSession.java:355)
at oracle.opatch.OPatchSession.main(OPatchSession.java:3985)
at oracle.opatch.OPatchSDK.NApply(OPatchSDK.java:1127)
at oracle.opatch.opatchsdk.OPatchTarget.NApplyReport(OPatchTarget.java:3964)
at oracle.opatch.opatchsdk.OPatchTarget.NApplyReportForAllPrereqs(OPatchTarget.java:4013)
at oracle.opatchauto.core.binary.action.AnalyzeReportGenerator.analyzePatch(AnalyzeReportGenerator
.java:186)
at oracle.opatchauto.core.binary.action.AnalyzeReportGenerator.execute(AnalyzeReportGenerator.java
:148)
at oracle.opatchauto.core.binary.action.LegacyPatchAction.execute(LegacyPatchAction.java:46)
at oracle.opatchauto.core.binary.OPatchAutoBinary.patchWithoutAnalyze(OPatchAutoBinary.java:519)
at oracle.opatchauto.core.binary.OPatchAutoBinary.applyWithoutAnalyze(OPatchAutoBinary.java:406)
at oracle.opatchauto.core.OPatchAutoCore.runOPatchAutoBinary(OPatchAutoCore.java:192)
at oracle.opatchauto.core.OPatchAutoCore.main(OPatchAutoCore.java:75)
Caused by: java.lang.RuntimeException:
Prerequisite check “CheckApplicable” failed.
… 21 more
Caused by: oracle.opatch.PrereqFailedException:
Prerequisite check “CheckApplicable” failed.
… 21 more
[Jan 29, 2020 10:36:58 PM] [INFO] EXITING METHOD: NApplyReport(OPatchPatch[] patches,OPatchNApplyOptions options)
[grid@luda OPatch]$
1.2 Solution
The error here is:
"/u01/software/30501910/30489227/files/bin/ocrconfig.bin" does not exists or is not readable
This looks like a permission problem, so manually fix the ownership of the extracted RU patch:

[root@luda software]# chown grid:oinstall 30501910 -R
Running opatchauto again produced a different error.

2 Error 2: Copy failed from '…/files/bin/cssdagent' to '…/bin/cssdagent'…
2.1 Symptom
Running opatchauto again still failed, as follows:

2020-03-13 08:36:48,374 INFO [313] com.oracle.glcm.patch.auto.db.integration.controller.action.OPatchAutoBinaryAction – Opatchcore binary error message=
2020-03-13 08:36:48,375 INFO [313] com.oracle.glcm.patch.auto.db.integration.controller.action.OPatchAutoBinaryAction – Reading session result from /u01/app/19.3.0/grid/opatchautocfg/db/sessioninfo/sessionresult_rac1_crs.ser
2020-03-13 08:36:48,380 INFO [313] com.oracle.cie.common.util.reporting.CommonReporter – Reporting console output : Message{id=’null’, message=’Failed while applying binary patches on home /u01/app/19.3.0/grid
‘}
2020-03-13 08:36:48,381 SEVERE [311] com.oracle.glcm.patch.auto.action.PatchActionExecutor – Failed to execute patch action [com.oracle.glcm.patch.auto.db.integration.controller.action.OPatchAutoBinaryAction] on patch target [rac1->/u01/app/19.3.0/grid Type[crs]].
2020-03-13 08:36:48,382 INFO [311] com.oracle.cie.common.util.reporting.CommonReporter – Reporting console output : Message{id=’null’, message=’Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : rac1->/u01/app/19.3.0/grid Type[crs]
Details: [
—————————Patching Failed———————————
Command execution failed during patching in home: /u01/app/19.3.0/grid, host: rac1.
Command failed: /u01/app/19.3.0/grid/OPatch/opatchauto apply /u01/software/30501910 -oh /u01/app/19.3.0/grid -target_type cluster -binary -invPtrLoc /u01/app/19.3.0/grid/oraInst.loc -jre /u01/app/19.3.0/grid/OPatch/jre -persistresult /u01/app/19.3.0/grid/opatchautocfg/db/sessioninfo/sessionresult_rac1_crs.ser -analyzedresult /u01/app/19.3.0/grid/opatchautocfg/db/sessioninfo/sessionresult_analyze_rac1_crs.ser
Command failure output:
==Following patches FAILED in apply:

Patch: /u01/software/30501910/30489227
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-13_08-31-51AM_1.log
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: ApplySession failed in system modification phase… ‘ApplySession::apply failed: Copy failed from ‘/u01/software/30501910/30489227/files/bin/crsd.bin’ to ‘/u01/app/19.3.0/grid/bin/crsd.bin’…
Copy failed from ‘/u01/software/30501910/30489227/files/bin/cssdagent’ to ‘/u01/app/19.3.0/grid/bin/cssdagent’…
Copy failed from ‘/u01/software/30501910/30489227/files/bin/cssdmonitor’ to ‘/u01/app/19.3.0/grid/bin/cssdmonitor’…
Copy fa …

After fixing the cause of failure Run opatchauto resume

]’}
2020-03-13 08:36:48,407 SEVERE [41] com.oracle.cie.wizard.internal.engine.WizardControllerEngine – Wizard error cause
com.oracle.cie.wizard.tasks.TaskExecutionException: OPATCHAUTO-68128: Patch action execution failed.
OPATCHAUTO-68128: Failed to execute patch actions for goal offline:binary-patching
OPATCHAUTO-68128: Check the log for more information.
at com.oracle.glcm.patch.auto.wizard.silent.tasks.PatchActionTask.execute(PatchActionTask.java:106)
at com.oracle.cie.wizard.internal.cont.SilentTaskContainer$TaskRunner.run(SilentTaskContainer.java:102)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.oracle.glcm.patch.auto.OPatchAutoException: OPATCHAUTO-68067: Patch action execution failed.
OPATCHAUTO-68067: Failed to execute patch action [com.oracle.glcm.patch.auto.db.integration.controller.action.OPatchAutoBinaryAction
Patch Target : rac1->/u01/app/19.3.0/grid Type[crs]
Details: [
—————————Patching Failed———————————
Command execution failed during patching in home: /u01/app/19.3.0/grid, host: rac1.
Command failed: /u01/app/19.3.0/grid/OPatch/opatchauto apply /u01/software/30501910 -oh /u01/app/19.3.0/grid -target_type cluster -binary -invPtrLoc /u01/app/19.3.0/grid/oraInst.loc -jre /u01/app/19.3.0/grid/OPatch/jre -persistresult /u01/app/19.3.0/grid/opatchautocfg/db/sessioninfo/sessionresult_rac1_crs.ser -analyzedresult /u01/app/19.3.0/grid/opatchautocfg/db/sessioninfo/sessionresult_analyze_rac1_crs.ser
Command failure output:
==Following patches FAILED in apply:

Patch: /u01/software/30501910/30489227
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-13_08-31-51AM_1.log
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: ApplySession failed in system modification phase… ‘ApplySession::apply failed: Copy failed from ‘/u01/software/30501910/30489227/files/bin/crsd.bin’ to ‘/u01/app/19.3.0/grid/bin/crsd.bin’…
Copy failed from ‘/u01/software/30501910/30489227/files/bin/cssdagent’ to ‘/u01/app/19.3.0/grid/bin/cssdagent’…
Copy failed from ‘/u01/software/30501910/30489227/files/bin/cssdmonitor’ to ‘/u01/app/19.3.0/grid/bin/cssdmonitor’…
Copy fa …

After fixing the cause of failure Run opatchauto resume

]]. Failures:
OPATCHAUTO-68067: Check the details to determine the cause of the failure.
at com.oracle.glcm.patch.auto.action.PatchActionExecutor.execute(PatchActionExecutor.java:172)
at com.oracle.glcm.patch.auto.wizard.silent.tasks.PatchActionTask.execute(PatchActionTask.java:102)
… 2 more
2020-03-13 08:36:48,516 INFO [1] com.oracle.glcm.patch.auto.db.integration.model.productsupport.DBBaseProductSupport – Space available after session: 44935 MB
2020-03-13 08:36:49,875 SEVERE [1] com.oracle.glcm.patch.auto.OPatchAuto – OPatchAuto failed.
com.oracle.glcm.patch.auto.OPatchAutoException: OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
at com.oracle.glcm.patch.auto.OrchestrationEngineImpl.orchestrate(OrchestrationEngineImpl.java:40)
at com.oracle.glcm.patch.auto.OPatchAuto.orchestrate(OPatchAuto.java:858)
at com.oracle.glcm.patch.auto.OPatchAuto.orchestrate(OPatchAuto.java:398)
at com.oracle.glcm.patch.auto.OPatchAuto.orchestrate(OPatchAuto.java:344)
at com.oracle.glcm.patch.auto.OPatchAuto.main(OPatchAuto.java:212)
2020-03-13 08:36:49,875 INFO [1] com.oracle.cie.common.util.reporting.CommonReporter – Reporting console output : Message{id=’null’, message=’OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.’}
2020-03-13 08:36:49,875 INFO [1] com.oracle.cie.common.util.reporting.CommonReporter – Reporting console output : Message{id=’null’, message=’OPatchAuto failed.’}
This time the error changed:

Copy failed from '/u01/software/30501910/30489227/files/bin/cssdagent' to '/u01/app/19.3.0/grid/bin/cssdagent'…

2.2 Solution
A search on MOS turned up an article that closely matches this error:
opatch report "ERROR: Prereq checkApplicable failed." when Applying Grid Infrastructure patch (Doc ID 1417268.1)

The MOS note lists several possible causes for this error; the one closest to my situation is this:

D. The patch is not unzipped as grid user, often it is unzipped as root user
ls -l will show the files are owned by root user.
The solution is to unzip the patch as grid user into an empty directory outside of GRID_HOME, then retry the patch apply.
Although my RU patch was not under the GRID_HOME directory, it had also been unzipped as the root user. I removed the previously extracted files, re-extracted the patch as the grid user, and applied the patch again, but hit the same error:
[grid@luda tmp]$ unzip -d /tmp p30501910_190000_Linux-x86-64-GI.zip

[Mar 13, 2020 9:18:08 AM] [INFO] Copying file to “/u01/app/19.3.0/grid/srvm/lib/sprraw.o”
[Mar 13, 2020 9:18:08 AM] [INFO] The following actions have failed:
[Mar 13, 2020 9:18:08 AM] [WARNING] OUI-67124:Copy failed from ‘/tmp/30501910/30489227/files/bin/crsd.bin’ to ‘/u01/app/19.3.0/grid/bin/crsd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/cssdagent’ to ‘/u01/app/19.3.0/grid/bin/cssdagent’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/cssdmonitor’ to ‘/u01/app/19.3.0/grid/bin/cssdmonitor’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/evmd.bin’ to ‘/u01/app/19.3.0/grid/bin/evmd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/evmlogger.bin’ to ‘/u01/app/19.3.0/grid/bin/evmlogger.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/gipcd.bin’ to ‘/u01/app/19.3.0/grid/bin/gipcd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/gpnpd.bin’ to ‘/u01/app/19.3.0/grid/bin/gpnpd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/mdnsd.bin’ to ‘/u01/app/19.3.0/grid/bin/mdnsd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/ocssd.bin’ to ‘/u01/app/19.3.0/grid/bin/ocssd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/octssd.bin’ to ‘/u01/app/19.3.0/grid/bin/octssd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/ohasd.bin’ to ‘/u01/app/19.3.0/grid/bin/ohasd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/oraagent.bin’ to ‘/u01/app/19.3.0/grid/bin/oraagent.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/orarootagent.bin’ to ‘/u01/app/19.3.0/grid/bin/orarootagent.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/osysmond.bin’ to ‘/u01/app/19.3.0/grid/bin/osysmond.bin’…
[Mar 13, 2020 9:18:08 AM] [INFO] Do you want to proceed? [y|n]
[Mar 13, 2020 9:18:08 AM] [INFO] N (auto-answered by -silent)
[Mar 13, 2020 9:18:08 AM] [INFO] User Responded with: N
[Mar 13, 2020 9:18:08 AM] [WARNING] OUI-67124:ApplySession failed in system modification phase… ‘ApplySession::apply failed: Copy failed from ‘/tmp/30501910/30489227/f
iles/bin/crsd.bin’ to ‘/u01/app/19.3.0/grid/bin/crsd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/cssdagent’ to ‘/u01/app/19.3.0/grid/bin/cssdagent’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/cssdmonitor’ to ‘/u01/app/19.3.0/grid/bin/cssdmonitor’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/evmd.bin’ to ‘/u01/app/19.3.0/grid/bin/evmd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/evmlogger.bin’ to ‘/u01/app/19.3.0/grid/bin/evmlogger.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/gipcd.bin’ to ‘/u01/app/19.3.0/grid/bin/gipcd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/gpnpd.bin’ to ‘/u01/app/19.3.0/grid/bin/gpnpd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/mdnsd.bin’ to ‘/u01/app/19.3.0/grid/bin/mdnsd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/ocssd.bin’ to ‘/u01/app/19.3.0/grid/bin/ocssd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/octssd.bin’ to ‘/u01/app/19.3.0/grid/bin/octssd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/ohasd.bin’ to ‘/u01/app/19.3.0/grid/bin/ohasd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/oraagent.bin’ to ‘/u01/app/19.3.0/grid/bin/oraagent.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/orarootagent.bin’ to ‘/u01/app/19.3.0/grid/bin/orarootagent.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/osysmond.bin’ to ‘/u01/app/19.3.0/grid/bin/osysmond.bin’…

[Mar 13, 2020 9:18:08 AM] [INFO] Restoring “/u01/app/19.3.0/grid” to the state prior to running NApply…
[Mar 13, 2020 9:18:08 AM] [INFO] Restoring files: copy recurse from /u01/app/19.3.0/grid/.patch_storage/NApply/2020-03-13_09-15-07AM/backup to /u01/app/19.3.0/grid

#### Stack trace of processes holding locks ####

Time: 2020-03-13_09-14-13AM
Command: oracle/opatchauto/core/OPatchAutoCore apply /tmp/30501910 -oh /u01/app/19.3.0/grid -target_type cluster -binary -invPtrLoc /u01/app/19.3.0/grid/oraInst.loc -per
sistresult /u01/app/19.3.0/grid/opatchautocfg/db/sessioninfo/sessionresult_rac1_crs.ser -analyzedresult /u01/app/19.3.0/grid/opatchautocfg/db/sessioninfo/sessionresult_a
nalyze_rac1_crs.ser -customLogDir /u01/app/19.3.0/grid/cfgtoollogs
Lock File Name: /u01/app/oraInventory/locks/_u01_app_19.3.0_grid_writer.lock
StackTrace:
———–
java.lang.Throwable
at oracle.sysman.oii.oiit.OiitLockHeartbeat.writeStackTrace(OiitLockHeartbeat.java:193)
at oracle.sysman.oii.oiit.OiitLockHeartbeat.(OiitLockHeartbeat.java:173)
at oracle.sysman.oii.oiit.OiitTargetLocker.getWriterLock(OiitTargetLocker.java:346)
at oracle.sysman.oii.oiit.OiitTargetLocker.getWriterLock(OiitTargetLocker.java:238)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.acquireLocks(OiicStandardInventorySession.java:564)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initAreaControl(OiicStandardInventorySession.java:359)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:332)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:294)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:243)
at oracle.sysman.oui.patch.impl.HomeOperationsImpl.initialize(HomeOperationsImpl.java:107)
at oracle.glcm.opatch.common.api.install.HomeOperationsShell.initialize(HomeOperationsShell.java:117)
at oracle.opatch.ipm.IPMRWServices.addPatchCUP(IPMRWServices.java:134)
at oracle.opatch.ipm.IPMRWServices.add(IPMRWServices.java:146)
at oracle.opatch.ApplySession.apply(ApplySession.java:899)
at oracle.opatch.ApplySession.processLocal(ApplySession.java:4098)
at oracle.opatch.ApplySession.process(ApplySession.java:5080)
at oracle.opatch.ApplySession.process(ApplySession.java:4942)
at oracle.opatch.OPatchACL.processApply(OPatchACL.java:310)
at oracle.opatch.opatchutil.NApply.legacy_process(NApply.java:1429)
at oracle.opatch.opatchutil.NApply.legacy_process(NApply.java:370)
at oracle.opatch.opatchutil.NApply.process(NApply.java:352)
at oracle.opatch.opatchutil.OUSession.napply(OUSession.java:1123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at oracle.opatch.UtilSession.process(UtilSession.java:355)
at oracle.opatch.OPatchSession.main(OPatchSession.java:3985)
at oracle.opatch.OPatchSDK.NApply(OPatchSDK.java:1127)
at oracle.opatch.opatchsdk.OPatchTarget.NApply(OPatchTarget.java:4169)
at oracle.opatchauto.core.binary.action.LegacyPatchAction.execute(LegacyPatchAction.java:76)
at oracle.opatchauto.core.binary.OPatchAutoBinary.patchWithoutAnalyze(OPatchAutoBinary.java:519)
at oracle.opatchauto.core.binary.OPatchAutoBinary.applyWithoutAnalyze(OPatchAutoBinary.java:406)
at oracle.opatchauto.core.OPatchAutoCore.runOPatchAutoBinary(OPatchAutoCore.java:192)
at oracle.opatchauto.core.OPatchAutoCore.main(OPatchAutoCore.java:75)
Digging further, this may be a bug:
Bug 13575478 : PATCH APLLICABLE/CONFLICT CHECK FAILED WITH 'OPATCH AUTO'
A. Expected behaviour if GRID_HOME has not been unlocked

If GI home has not been unlocked with "rootcrs.pl -unlock", checkapplicable will fail as many files are still owned by root user, this is expected behaviour. The solution is to use "opatch auto" or follow the patch readme step-by-step so the GI home gets unlocked first.
This advice differs somewhat from the opatchauto flow, so I ran an analyze first; it reported that a patch had already been applied, even though opatch lspatches showed nothing, so I rolled back the 19.6 RU:

[root@luda ~]# /u01/app/19.3.0/grid/OPatch/opatchauto apply /tmp/30501910 -analyze
[root@luda ~]# /u01/app/19.3.0/grid/OPatch/opatchauto rollback /tmp/30501910
Running it again produced yet another error, error 3.

3 Error 3: "CheckActiveFilesAndExecutables"
3.1 Symptom
The error message here is very clear:

Following active executables are used by opatch process :
/u01/app/oracle/product/19.3.0/db_1/lib/libclntsh.so.19.1
/u01/app/oracle/product/19.3.0/db_1/lib/libsqlplus.so
[Mar 13, 2020 3:13:26 PM] [INFO] Prerequisite check “CheckActiveFilesAndExecutables” failed.
The details are:

Following active executables are not used by opatch process :

Following active executables are used by opatch process :
/u01/app/oracle/product/19.3.0/db_1/lib/libclntsh.so.19.1
/u01/app/oracle/product/19.3.0/db_1/lib/libsqlplus.so
[Mar 13, 2020 3:13:26 PM] [SEVERE] OUI-67073:UtilSession failed: Prerequisite check “CheckActiveFilesAndExecutables” failed.
[Mar 13, 2020 3:13:26 PM] [INFO] Finishing UtilSession at Fri Mar 13 15:13:26 CST 2020
[Mar 13, 2020 3:13:26 PM] [INFO] Log file location: /u01/app/oracle/product/19.3.0/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-13_15-09-58PM_1.log
[root@luda ~]#
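Before retrying, it helps to find out which processes still have the reported libraries mapped. Below is a minimal sketch using `fuser`; the paths are taken from the error log above, and `fuser` availability and output format may vary by platform:

```shell
# Libraries reported by the CheckActiveFilesAndExecutables prerequisite check
libs="/u01/app/oracle/product/19.3.0/db_1/lib/libclntsh.so.19.1
/u01/app/oracle/product/19.3.0/db_1/lib/libsqlplus.so"

busy=0
for f in $libs; do
  if [ -e "$f" ]; then
    # fuser lists the PIDs of processes that currently map the file
    fuser -v "$f" && busy=1
  else
    echo "not present on this host: $f"
  fi
done
echo "busy=$busy"
```

Close any sqlplus or client sessions the output identifies, then rerun opatchauto.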
3.2 Solution
An active sqlplus session was still open. First run a rollback:
[root@luda ~]# /u01/app/19.3.0/grid/OPatch/opatchauto rollback /tmp/30501910

OPatchauto session is initiated at Fri Mar 13 15:18:58 2020

System initialization log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-03-13_03-19-01PM.log.

Session log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/opatchauto2020-03-13_03-21-09PM.log
The id for this session is U2H9

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.3.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19.3.0/db_1
Patch applicability verified successfully on home /u01/app/19.3.0/grid

Patch applicability verified successfully on home /u01/app/oracle/product/19.3.0/db_1

Verifying SQL patch applicability on home /u01/app/oracle/product/19.3.0/db_1
SQL patch applicability verified successfully on home /u01/app/oracle/product/19.3.0/db_1

Preparing to bring down database service on home /u01/app/oracle/product/19.3.0/db_1
Successfully prepared home /u01/app/oracle/product/19.3.0/db_1 to bring down database service

Performing prepatch operation on home /u01/app/oracle/product/19.3.0/db_1
Prepatch operation completed successfully on home /u01/app/oracle/product/19.3.0/db_1

Start rolling back binary patch on home /u01/app/oracle/product/19.3.0/db_1
Binary patch rolled back successfully on home /u01/app/oracle/product/19.3.0/db_1

Performing postpatch operation on home /u01/app/oracle/product/19.3.0/db_1
Postpatch operation completed successfully on home /u01/app/oracle/product/19.3.0/db_1

Preparing home /u01/app/oracle/product/19.3.0/db_1 after database service restarted
No step execution required………

Trying to roll back SQL patch on home /u01/app/oracle/product/19.3.0/db_1
SQL patch rolled back successfully on home /u01/app/oracle/product/19.3.0/db_1

OPatchAuto successful.

——————————–Summary——————————–

Patching is completed successfully. Please find the summary as follows:

Host:rac1
CRS Home:/u01/app/19.3.0/grid
Version:19.0.0.0.0
Summary:

==Following patches were SKIPPED:

Patch: /tmp/30501910/30489227
Reason: This Patch does not exist in the home, it cannot be rolled back.

Patch: /tmp/30501910/30489632
Reason: This Patch does not exist in the home, it cannot be rolled back.

Patch: /tmp/30501910/30655595
Reason: This Patch does not exist in the home, it cannot be rolled back.

Patch: /tmp/30501910/30557433
Reason: This Patch does not exist in the home, it cannot be rolled back.

Host:rac1
RAC Home:/u01/app/oracle/product/19.3.0/db_1
Version:19.0.0.0.0
Summary:

==Following patches were SKIPPED:

Patch: /tmp/30501910/30489632
Reason: This patch is not applicable to this specified target type – “rac_database”

Patch: /tmp/30501910/30655595
Reason: This patch is not applicable to this specified target type – “rac_database”

Patch: /tmp/30501910/30557433
Reason: Patch /tmp/30501910/30557433 is not applied as part of bundle patch 30501910

==Following patches were SUCCESSFULLY rolled back:

Patch: /tmp/30501910/30489227
Log: /u01/app/oracle/product/19.3.0/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-13_15-23-36PM_1.log

Patching session reported following warning(s):
_________________________________________________

[WARNING] The database instance ‘cndba1’ from ‘/u01/app/oracle/product/19.3.0/db_1′, in host’rac1’ is not running. SQL changes, if any, will not be rolled back.
To roll back. the SQL changes, bring up the database instance and run the command manually from any one node (run as oracle).
Refer to the readme to get the correct steps for applying the sql changes.

OPatchauto session completed at Fri Mar 13 15:24:58 2020
Time taken to complete the session 6 minutes, 1 second
[root@luda ~]#
Applying again still failed.

4 Error 4: OUI-67073:UtilSession failed: ApplySession failed in system modification phase (do not attempt this workaround)
4.1 Symptom
This error is the same as error 2. It is in fact the root cause of the 19c RU patching failures, but I found no reasonable explanation on MOS. The workable approach for now is to patch the GI and DB homes with the RU separately, in non-rolling fashion.

————————————

[Mar 13, 2020 3:54:57 PM] [INFO] Removed patch “30489227” with UPI + “23305624” from OUI inventory memory..
[Mar 13, 2020 3:54:57 PM] [INFO] Stack Description: java.lang.RuntimeException: OUI session not initialized
at oracle.sysman.oui.patch.impl.HomeOperationsImpl.saveInventory(HomeOperationsImpl.java:372)
at oracle.glcm.opatch.common.api.install.HomeOperationsShell.saveInventory(HomeOperationsShell.java:204)
at oracle.opatch.ipm.IPMRWServices.saveInstallInventory(IPMRWServices.java:854)
at oracle.opatch.OPatchSession.restorePatchesInventory(OPatchSession.java:1434)
at oracle.opatch.MergedPatchObject.restoreOH(MergedPatchObject.java:1234)
at oracle.opatch.opatchutil.NApply.legacy_process(NApply.java:1465)
at oracle.opatch.opatchutil.NApply.legacy_process(NApply.java:370)
at oracle.opatch.opatchutil.NApply.process(NApply.java:352)
at oracle.opatch.opatchutil.OUSession.napply(OUSession.java:1123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at oracle.opatch.UtilSession.process(UtilSession.java:355)
at oracle.opatch.OPatchSession.main(OPatchSession.java:3985)
at oracle.opatch.OPatchSDK.NApply(OPatchSDK.java:1127)
at oracle.opatch.opatchsdk.OPatchTarget.NApply(OPatchTarget.java:4169)
at oracle.opatchauto.core.binary.action.LegacyPatchAction.execute(LegacyPatchAction.java:76)
at oracle.opatchauto.core.binary.OPatchAutoBinary.patchWithoutAnalyze(OPatchAutoBinary.java:519)
at oracle.opatchauto.core.binary.OPatchAutoBinary.applyWithoutAnalyze(OPatchAutoBinary.java:406)
at oracle.opatchauto.core.OPatchAutoCore.runOPatchAutoBinary(OPatchAutoCore.java:192)
at oracle.opatchauto.core.OPatchAutoCore.main(OPatchAutoCore.java:75)
[Mar 13, 2020 3:54:57 PM] [SEVERE] OUI-67115:OPatch failed to restore OH ‘/u01/app/19.3.0/grid’. Consult OPatch document to restore the home manually before proceeding.
[Mar 13, 2020 3:54:57 PM] [WARNING] OUI-67124:
NApply was not able to restore the home. Please invoke the following scripts:
– restore.[sh,bat]
– make.txt (Unix only)
to restore the ORACLE_HOME. They are located under
“/u01/app/19.3.0/grid/.patch_storage/NApply/2020-03-13_15-52-41PM”
[Mar 13, 2020 3:54:58 PM] [SEVERE] OUI-67073:UtilSession failed: ApplySession failed in system modification phase… ‘ApplySession::apply failed: Copy failed from ‘/tmp/30501910/30489227/files/bin/crsd.bin’ to ‘/u01/app/19.3.0/grid/bin/crsd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/cssdagent’ to ‘/u01/app/19.3.0/grid/bin/cssdagent’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/cssdmonitor’ to ‘/u01/app/19.3.0/grid/bin/cssdmonitor’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/evmd.bin’ to ‘/u01/app/19.3.0/grid/bin/evmd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/evmlogger.bin’ to ‘/u01/app/19.3.0/grid/bin/evmlogger.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/gipcd.bin’ to ‘/u01/app/19.3.0/grid/bin/gipcd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/gpnpd.bin’ to ‘/u01/app/19.3.0/grid/bin/gpnpd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/mdnsd.bin’ to ‘/u01/app/19.3.0/grid/bin/mdnsd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/ocssd.bin’ to ‘/u01/app/19.3.0/grid/bin/ocssd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/octssd.bin’ to ‘/u01/app/19.3.0/grid/bin/octssd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/ohasd.bin’ to ‘/u01/app/19.3.0/grid/bin/ohasd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/oraagent.bin’ to ‘/u01/app/19.3.0/grid/bin/oraagent.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/orarootagent.bin’ to ‘/u01/app/19.3.0/grid/bin/orarootagent.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/osysmond.bin’ to ‘/u01/app/19.3.0/grid/bin/osysmond.bin’…
4.2 Solution
As the log suggests, run the restore script:

[root@luda 2020-03-13_15-52-41PM]# ls
backup make.txt patchlist.txt restore.sh
[root@luda 2020-03-13_15-52-41PM]# ./restore.sh
This script is going to restore the Oracle Home to the previous state.
It does not perform any of the following:
– Running init/pre/post scripts
– Oracle binary re-link
– Customized steps performed manually by user
Please use this script with supervision from Oracle Technical Support.
About to modify Oracle Home( /u01/app/19.3.0/grid )
Do you want to proceed? [Y/N]
y
User responded with : Y
Restore script completed.
[root@luda 2020-03-13_15-52-41PM]#
4.3 Analysis, part 1
Because the files that failed to copy all belonged to CRS, when the RU reached the stop-CRS step I stopped CRS manually:

[root@luda 2020-03-13_16-45-11PM]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on ‘rac1’
CRS-2673: Attempting to stop ‘ora.crsd’ on ‘rac1’
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server ‘rac1’
CRS-2673: Attempting to stop ‘ora.chad’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.cndba.db’ on ‘rac1’
CRS-2677: Stop of ‘ora.cndba.db’ on ‘rac1’ succeeded
CRS-33673: Attempting to stop resource group ‘ora.asmgroup’ on server ‘rac1’
CRS-2673: Attempting to stop ‘ora.OCR.dg’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.DATA.dg’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.MGMT.dg’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.LISTENER.lsnr’ on ‘rac1’
CRS-2677: Stop of ‘ora.DATA.dg’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.MGMT.dg’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.OCR.dg’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.asm’ on ‘rac1’
CRS-2677: Stop of ‘ora.LISTENER.lsnr’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.rac1.vip’ on ‘rac1’
CRS-2677: Stop of ‘ora.rac1.vip’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.asm’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.ASMNET1LSNR_ASM.lsnr’ on ‘rac1’
CRS-2677: Stop of ‘ora.chad’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.ASMNET1LSNR_ASM.lsnr’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.asmnet1.asmnetwork’ on ‘rac1’
CRS-2677: Stop of ‘ora.asmnet1.asmnetwork’ on ‘rac1’ succeeded
CRS-33677: Stop of resource group ‘ora.asmgroup’ on server ‘rac1’ succeeded.
CRS-2672: Attempting to start ‘ora.rac1.vip’ on ‘rac2’
CRS-2676: Start of ‘ora.rac1.vip’ on ‘rac2’ succeeded
CRS-2673: Attempting to stop ‘ora.ons’ on ‘rac1’
CRS-2677: Stop of ‘ora.ons’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.net1.network’ on ‘rac1’
CRS-2677: Stop of ‘ora.net1.network’ on ‘rac1’ succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on ‘rac1’ has completed
CRS-2677: Stop of ‘ora.crsd’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.storage’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.crf’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.drivers.acfs’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.mdnsd’ on ‘rac1’
CRS-2677: Stop of ‘ora.crf’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.drivers.acfs’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.storage’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.asm’ on ‘rac1’
CRS-2677: Stop of ‘ora.mdnsd’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.asm’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.cluster_interconnect.haip’ on ‘rac1’
CRS-2677: Stop of ‘ora.cluster_interconnect.haip’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.ctssd’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.evmd’ on ‘rac1’
CRS-2677: Stop of ‘ora.ctssd’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.evmd’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.cssd’ on ‘rac1’
CRS-2677: Stop of ‘ora.cssd’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.driver.afd’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.gipcd’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.gpnpd’ on ‘rac1’
CRS-2677: Stop of ‘ora.driver.afd’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.gpnpd’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.gipcd’ on ‘rac1’ succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on ‘rac1’ has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@luda 2020-03-13_16-45-11PM]#
4.4 After stopping CRS, the RU applied successfully
[root@luda tmp]# /u01/app/19.3.0/grid/OPatch/opatchauto apply /tmp/30501910

OPatchauto session is initiated at Fri Mar 13 17:17:26 2020

System initialization log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-03-13_05-17-50PM.log.

Session log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/opatchauto2020-03-13_05-20-03PM.log
The id for this session is FPKL

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.3.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19.3.0/db_1
Patch applicability verified successfully on home /u01/app/oracle/product/19.3.0/db_1

Patch applicability verified successfully on home /u01/app/19.3.0/grid

Verifying SQL patch applicability on home /u01/app/oracle/product/19.3.0/db_1
SQL patch applicability verified successfully on home /u01/app/oracle/product/19.3.0/db_1

Preparing to bring down database service on home /u01/app/oracle/product/19.3.0/db_1
Successfully prepared home /u01/app/oracle/product/19.3.0/db_1 to bring down database service

Bringing down CRS service on home /u01/app/19.3.0/grid
CRS service brought down successfully on home /u01/app/19.3.0/grid

Start applying binary patch on home /u01/app/19.3.0/grid
Binary patch applied successfully on home /u01/app/19.3.0/grid

Starting CRS service on home /u01/app/19.3.0/grid
CRS service started successfully on home /u01/app/19.3.0/grid

Preparing home /u01/app/oracle/product/19.3.0/db_1 after database service restarted
No step execution required………

OPatchAuto successful.

——————————–Summary——————————–

Patching is completed successfully. Please find the summary as follows:

Host:rac1
RAC Home:/u01/app/oracle/product/19.3.0/db_1
Version:19.0.0.0.0
Summary:

==Following patches were SKIPPED:

Patch: /tmp/30501910/30489632
Reason: This patch is not applicable to this specified target type – “rac_database”

Patch: /tmp/30501910/30655595
Reason: This patch is not applicable to this specified target type – “rac_database”

Patch: /tmp/30501910/30489227
Reason: This patch is already been applied, so not going to apply again.

Patch: /tmp/30501910/30557433
Reason: This patch is already been applied, so not going to apply again.

Host:rac1
CRS Home:/u01/app/19.3.0/grid
Version:19.0.0.0.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /tmp/30501910/30489227
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-13_17-23-27PM_1.log

Patch: /tmp/30501910/30489632
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-13_17-23-27PM_1.log

Patch: /tmp/30501910/30557433
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-13_17-23-27PM_1.log

Patch: /tmp/30501910/30655595
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-13_17-23-27PM_1.log

OPatchauto session completed at Fri Mar 13 17:45:20 2020
Time taken to complete the session 27 minutes, 56 seconds
[root@luda tmp]#
4.5 Remaining issues
Stopping CRS manually during the RU apply lets the patch succeed, but it leaves many permission problems behind and CRS then fails to start:

[root@luda lib]# crsctl start crs
CRS-41053: checking Oracle Grid Infrastructure for file permission issues
PRVG-2031 : Owner of file “/u01/app/grid/cfgtoollogs” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVH-0124 : Path “/etc/oracle/maps” with permissions “rwxr-xr-x” does not have write permissions for the file’s group and others on node “rac1”.
PRVH-0100 : Restricted deletion flag is not set for path “/etc/oracle/maps” on node “rac1”.
PRVG-2031 : Owner of file “/u01/app/grid” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/admin” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVH-0111 : Path “/etc/init.d/ohasd” with permissions “rwxr-x—” does not have read permissions for others on node “rac1”.
PRVH-0113 : Path “/etc/init.d/ohasd” with permissions “rwxr-x—” does not have execute permissions for others on node “rac1”.
PRVG-2031 : Owner of file “/etc/oracleafd.conf” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2032 : Group of file “/etc/oracleafd.conf” did not match the expected value on node “rac1”. [Expected = “oinstall(54321)” ; Found = “asmadmin(54329)”]
PRVH-0124 : Path “/var/tmp/.oracle” with permissions “rwxr-xr-x” does not have write permissions for the file’s group and others on node “rac1”.
PRVH-0100 : Restricted deletion flag is not set for path “/var/tmp/.oracle” on node “rac1”.
PRVG-2031 : Owner of file “/u01/app/grid/diag/ofm” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/lsnrctl” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/netcman” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/audit” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/checkpoints” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/crsdata/rac1/olr/rac1_19.olr” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVH-0100 : Restricted deletion flag is not set for path “/u01/app/grid/crsdata/rac1/shm” on node “rac1”.
PRVG-2031 : Owner of file “/u01/app/grid/crsdata/rac1/shm” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/crsdata/rac1/cvu” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/crsdata/rac1/olr” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/crsdata” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/crsdata/rac1/ocr” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/kfod” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/asmtool” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/crs” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/dps” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/em” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/diagtool” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/gsm” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/ios” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/rdbms” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/apx” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/tnslsnr” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/asmcmd” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/clients” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/asm” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/afdboot” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-11960 : Set user ID bit is not set for file “/u01/app/19.3.0/grid/bin/jssu” on node “rac1”.
PRVH-0147 : Set group ID bit is not set for file “/u01/app/19.3.0/grid/bin/extproc” on node “rac1”.
PRVG-11960 : Set user ID bit is not set for file “/u01/app/19.3.0/grid/bin/extjob” on node “rac1”.
PRVG-11960 : Set user ID bit is not set for file “/u01/app/19.3.0/grid/bin/oradism” on node “rac1”.
PRVG-11960 : Set user ID bit is not set for file “/u01/app/19.3.0/grid/bin/oracle” on node “rac1”.
PRVH-0147 : Set group ID bit is not set for file “/u01/app/19.3.0/grid/bin/oracle” on node “rac1”.
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/HASLoad.pm” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2032 : Group of file “/u01/app/19.3.0/grid/crs/install/cmdllroot.sh” did not match the expected value on node “rac1”. [Expected = “oinstall(54321)” ; Found = “root(0)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/crsconfig_params.sbs” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/crsconvert.pm” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/install_gi.excl” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/paramfile.crs” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/oracle-ohasd.service” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/oracle-ohasd.conf” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/installRemove.excl” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/install.incl” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/install.excl” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/inittab” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/crstfa.pm” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/crsconvtoext.pm” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/crsconfig_addparams.sbs” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/CLSR.pm” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/dropdb.pl” did not match the expected value on node “rac1”. [Expected = “root(0)|root(0)” ; Found = “grid(54322)”]
PRVH-0109 : Path “/u01/app/19.3.0/grid/lib/libacfs19.so” with permissions “rwxr-xr-x” does not have write permissions for the file’s group on node “rac1”.
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/tfa_setup” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/roothas.sh” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/rootcrs.sh” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVH-0109 : Path “/u01/app/19.3.0/grid/crs/install/rhpdata” with permissions “rwxr-xr-x” does not have write permissions for the file’s group on node “rac1”.
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/rhpdata” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/post_gimr_ugdg.pl” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/perlhasgen.pm” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVH-0109 : Path “/u01/app/19.3.0/grid/lib/libedtn19.a” with permissions “rwxr-xr-x” does not have write permissions for the file’s group on node “rac1”.
PRVH-0109 : Path “/u01/app/19.3.0/grid/lib/libskgxpcompat.so” with permissions “rwxr-xr-x” does not have write permissions for the file’s group on node “rac1”.
CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.
[root@luda lib]#
Fix the permissions manually, one by one, following the error messages:
[root@luda grid]# chown root:oinstall admin -R
[root@luda grid]# pwd
/u01/app/grid
[root@luda grid]# chown root:oinstall * -R
[root@luda grid]# cd /u01/app/19.3.0/grid/crs/install/
[root@luda install]# ll

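Checking owners by eye is error-prone; a small helper can list every mismatch first. This is a hypothetical sketch (the `check_owner` function is illustrative, and GNU `stat -c` is assumed); it is demonstrated against a scratch directory, but in practice you would point it at /u01/app/grid and the GI home:

```shell
# Hypothetical helper: flag entries whose owner differs from the expected one,
# mirroring the PRVG-2031 "Owner of file ... did not match" checks above.
check_owner() {
  dir="$1"; expected="$2"
  find "$dir" -mindepth 1 -maxdepth 1 | while read -r f; do
    owner="$(stat -c '%U' "$f")"
    [ "$owner" = "$expected" ] || echo "MISMATCH: $f owned by $owner, expected $expected"
  done
}

# Demonstrated on a scratch directory standing in for /u01/app/grid
scratch="$(mktemp -d)"
touch "$scratch/demo"
check_owner "$scratch" "$(id -un)"   # prints nothing: owner matches
check_owner "$scratch" "root"        # flags the file when not running as root
rm -rf "$scratch"
```

Once the mismatches are listed, the same `chown root:oinstall ... -R` commands shown above fix the real directories.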
Oracle 19c RAC environment: 19.6 RU upgrade fails with OPatch Prerequisite check "CheckApplicable" failed

An Oracle Data Guard far sync instance is a remote Oracle Data Guard destination that accepts redo from the primary database and then ships that redo to other members of the Oracle Data Guard configuration. A far sync instance manages a control file, receives redo into standby redo logs (SRLs), and archives those SRLs to local archived redo logs, but that is where the similarity with standbys ends. A far sync instance does not have user data files, cannot be opened for access, cannot run redo apply, and can never function in the primary role or be converted to any type of standby database.

Active Data Guard Far Sync is a new feature in Oracle 12c (also called Far Sync Standby). It works by deploying a far sync instance at a location relatively close to the primary database: the primary ships redo synchronously to the far sync instance, which then ships the redo asynchronously to the terminal standby database. This guarantees zero data loss while reducing the load on the primary. A far sync instance has only a password file, an init parameter file, and a control file, with no data files, so it cannot be opened for access.

A far sync configuration is transparent to Data Guard role transitions: the switchover/failover commands are the same as before 12c.

2 Lab: Far Sync installation and configuration
Creating a far sync instance is similar to creating a physical standby, except that a far sync instance has no data files, so there is nothing to copy or restore. Once created, the far sync instance runs in maximum availability mode by default, and redo is shipped synchronously in real time.

2.1 Primary database configuration
2.1.1 Create the far sync instance control file
SQL> ALTER DATABASE CREATE FAR SYNC INSTANCE CONTROLFILE AS ‘/tmp/control01.ctl’;
Database altered.
Copy the far sync control file generated above to the far sync host.

SQL> !scp /tmp/control01.ctl 192.168.1.173://u01/app/oracle/oradata/cndba_far/

[Expect-le@ www.cndba.cn]$ pwd
/u01/app/oracle/oradata/cndba_far
2.1.2 Update the configuration to point to the far sync instance
SQL> alter system set LOG_ARCHIVE_CONFIG =’DG_CONFIG=(cndba_p,cndba_far_sync,cndba_s)’ scope=both;
System altered.

SQL> alter system set LOG_ARCHIVE_DEST_2=’SERVICE=cndba_far_sync ASYNC NOAFFIRM
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=cndba_far_sync’ scope=both;
System altered.
2.2 Standby database configuration
SQL> alter system set LOG_ARCHIVE_CONFIG =’DG_CONFIG=(cndba_p,cndba_far_sync,cndba_s)’ scope=both;
System altered.

SQL> alter system set LOG_ARCHIVE_DEST_2=’SERVICE=cndba_p ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=cndba_p’ scope=both;
System altered.
2.3 Far sync instance configuration
2.3.1 Edit the PFILE
cndba_s is the physical standby; cndba_far_sync is the DB_UNIQUE_NAME of the far sync instance. If the log file paths need to change, also set DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT.

DB_UNIQUE_NAME=cndba_far_sync
CONTROL_FILES=’/u01/app/oracle/oradata/cndba_far/control01.ctl’
FAL_SERVER=cndba_p
LOG_ARCHIVE_CONFIG=’DG_CONFIG=(cndba_p,cndba_far_sync,cndba_s)’
LOG_ARCHIVE_DEST_1=’LOCATION=/u01/archivelog VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=cndba_far_sync’
LOG_ARCHIVE_DEST_2=’SERVICE=cndba_s ASYNC
VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=cndba_s’
2.3.2 Start the far sync instance in mount mode
SQL> create spfile from pfile=’/u01/app/oracle/product/12.1.0.2/db_1/dbs/initcndba_far.ora’;
File created.

SQL> startup mount;
ORACLE instance started.
Total System Global Area 2348810240 bytes
Fixed Size 2927048 bytes
Variable Size 1409287736 bytes
Database Buffers 922746880 bytes
Redo Buffers 13848576 bytes
Database mounted.
2.3.3 Check the far sync instance status
SQL> select protection_mode,database_role,protection_level,open_mode from v$database;
PROTECTION_MODE DATABASE_ROLE PROTECTION_LEVEL OPEN_MODE
——————– —————- ——————– ——————–
MAXIMUM PERFORMANCE FAR SYNC MAXIMUM PERFORMANCE MOUNTED
2.3.4 Create standby redo logs (optional, but recommended)
Syntax:

ALTER DATABASE ADD STANDBY LOGFILE GROUP 4(‘/u01/app/oracle/oradata/cndba_far/standbyredo11.log’) SIZE 52428800;
2.4 Add TNS entries on the primary, the standby, and the far sync host
CNDBA_FAR_SYNC =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.173)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = cndba)
)
)
2.5 Verify the configuration
SQL> select * from V$DATAGUARD_CONFIG;
DB_UNIQUE_NAME PARENT_DBUN DEST_ROLE CURRENT_SCN CON_ID
—————————— —————————— —————– ———– ———-
cndba_p NONE PRIMARY DATABASE 2154617 0
cndba_far_sync cndba_p FAR SYNC INSTANCE 2151372 0
cndba_s cndba_far_sync PHYSICAL STANDBY 2151372 0
The output above shows the configuration is correct:

cndba_p -> cndba_far_sync -> cndba_s

2.6 Test redo transport
-- Check the current log sequence number

Primary:

SQL> select max(sequence#) from v$archived_log;
MAX(SEQUENCE#)
————–
56
Far sync instance:

SQL> select max(sequence#) from v$archived_log;
MAX(SEQUENCE#)
————–
56
Standby:
SQL> select max(sequence#) from v$archived_log;
MAX(SEQUENCE#)
————–
56
-- Switch the log manually

SQL> alter system switch logfile;
System altered.
-- Check the sequence number again on the standby and the far sync instance
SQL> select max(sequence#) from v$archived_log;
MAX(SEQUENCE#)
————–
57
This completes the Far Sync setup.

3 Summary
1. A single far sync instance is a single point of failure, so configure two or more. Only one is active by default; another takes over automatically only when the active one fails.

2. On the primary, configure a spare LOG_ARCHIVE_DEST_3 pointing directly to the standby. When LOG_ARCHIVE_DEST_2 becomes unavailable, LOG_ARCHIVE_DEST_3 is activated and ships redo straight to the standby.

3. Far Sync can greatly reduce the load on the primary, especially with one primary and multiple standbys.

4. Switchover/failover is transparent to Far Sync; no special configuration is needed, switch as usual.
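Point 2 is typically implemented with the ALTERNATE destination attribute. A sketch, assuming the database names used in this article and keeping the LOG_ARCHIVE_DEST_2 string configured earlier; verify the exact attribute syntax against your release's documentation:

```sql
-- On the primary: add ALTERNATE to the far sync destination, and define
-- LOG_ARCHIVE_DEST_3 as a direct async route to the terminal standby,
-- held in ALTERNATE state until DEST_2 fails.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=cndba_far_sync ASYNC NOAFFIRM
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=cndba_far_sync
  ALTERNATE=LOG_ARCHIVE_DEST_3' SCOPE=BOTH;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_3='SERVICE=cndba_s ASYNC
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=cndba_s' SCOPE=BOTH;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_3='ALTERNATE' SCOPE=BOTH;
```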

Using Active Data Guard Far Sync in a Data Guard configuration (12c, 18c, 19c)

1 Overview

In previous releases, when creating a Data Guard configuration using the SQL command line, the default configuration was to apply redo from archived log files on the standby database. In Oracle Database 12c Release 1 (12.1), the default configuration is to use real-time apply so that redo is applied directly from the standby redo log file.

In earlier releases, MRP applied redo from archived log files by default. Starting with 12c, real-time apply is the default: redo is applied directly from the standby redo log files.

Recovery time is shortened at failover given that there is no backlog of redo waiting to be applied to the standby database if a failover is required. An active Data Guard user also sees more current data. This enhancement eliminates additional manual configuration (and the requirement that the administrator be aware of the default setting) that was required in past releases. It also makes the default SQL*Plus configuration identical to the default configuration used by the Data Guard broker.

2 Lab

2.1 Primary and standby status

Primary

SQL> select protection_mode,database_role,protection_level,open_mode from v$database;
PROTECTION_MODE      DATABASE_ROLE    PROTECTION_LEVEL	   OPEN_MODE
-------------------- ---------------- -------------------- --------------------
MAXIMUM AVAILABILITY PRIMARY	      MAXIMUM AVAILABILITY READ WRITE

Standby

SQL> select protection_mode,database_role,protection_level,open_mode from v$database;
PROTECTION_MODE      DATABASE_ROLE    PROTECTION_LEVEL	   OPEN_MODE
-------------------- ---------------- -------------------- --------------------
MAXIMUM AVAILABILITY PHYSICAL STANDBY MAXIMUM AVAILABILITY READ ONLY WITH APPLY

2.2 Start MRP

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
or
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING ARCHIVED LOGFILE DISCONNECT;

If the standby has standby redo logs configured and the database is in archivelog mode, either statement above enables real-time redo apply by default.
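To confirm that real-time apply (rather than archive-log apply) is in effect, one check is to query v$archive_dest_status on the standby. A sketch, based on my understanding of that view's columns:

```sql
-- Run on the standby: RECOVERY_MODE reads MANAGED REAL TIME APPLY when
-- redo is being applied directly from the standby redo logs.
SELECT dest_id, recovery_mode
FROM   v$archive_dest_status
WHERE  recovery_mode <> 'IDLE';
```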

2.3 Create a table on the primary

SQL> create table cndba as select * from dba_users;
Table created.
 
SQL> select count(*) from cndba;
COUNT(*)
----------
36

Note: do not switch the log; check whether the standby stays in sync in real time.

2.4 Query the data on the standby

SQL> select count(*) from cndba;
  COUNT(*)
----------
36

The data has come across, so real-time apply is indeed the default.

Real-Time Apply Default For Data Guard (12c,18c,19c)