
12c/18c Upgrade Guide (Page 2)

Disabling real-time statistics gathering

Real-time Statistics
This Oracle Database 19c new feature is available on certain Oracle Database platforms. Check the Oracle Database Licensing Guide for more
information.
CHALLENGES TO MAINTAINING ACCURATE OPTIMIZER STATISTICS
As mentioned above, stale statistics can result in sub-optimal SQL execution plans, and keeping them accurate in highly volatile
systems can be challenging. High-frequency statistics gathering helps to resolve this, but an even better solution is to maintain
statistics as changes to the data in the database are made.

REAL-TIME STATISTICS
Real-time statistics extend statistics gathering techniques to the conventional DML operations INSERT, UPDATE and MERGE. When
these DML operations are executed on the data in the database, the most essential optimizer statistics are maintained in real time. This
applies to both individual-row and bulk operations.

Real-time statistics augment those collected by the automatic statistics gathering job, high-frequency stats gathering or those gathered
manually using the DBMS_STATS API. An accurate picture of the data in the database is therefore maintained at all times, which
results in more optimal SQL execution plans.

Real-time statistics are managed automatically, and no intervention from the database administrator is required. Developers may
choose to disable online statistics gathering for individual SQL statements using the NO_GATHER_OPTIMIZER_STATISTICS hint.

Example:

SELECT /*+ NO_GATHER_OPTIMIZER_STATISTICS */ …

If you already have a well-established statistics gathering procedure, or if for some other reason you want to disable automatic statistics
gathering for your main application schema, consider leaving it on for the dictionary tables. You can do so by changing the value of the
AUTOSTATS_TARGET preference to ORACLE instead of AUTO using the DBMS_STATS.SET_GLOBAL_PREFS procedure.

 

exec dbms_stats.set_global_prefs('autostats_target','oracle')

 

SQL> begin
  dbms_stats.set_global_prefs('autostats_target','oracle');
end;
/

References:
https://docs.oracle.com/en/database/oracle/oracle-database/19/tgsql/optimizer-statistics-concepts.html#GUID-769E609D-0312-43A7-9581-3F3EACF10BA9
19c New Feature: Real-Time Statistics (Doc ID 2552657.1)

 

It is recommended to disable this feature.

Oracle 19c Real-Time Statistics: disabling real-time statistics
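For reference, a minimal sketch of the two usual ways to switch real-time statistics off, assuming the underscore parameters commonly cited in MOS note 2552657.1 (they are hidden parameters, so confirm them with Oracle Support before changing them in production; the table name below is illustrative):

-- per statement, using the hint from the example above
SQL> select /*+ NO_GATHER_OPTIMIZER_STATISTICS */ count(*) from sales;

-- system-wide, via the hidden parameters referenced in the MOS note
SQL> alter system set "_optimizer_gather_stats_on_conventional_dml"=false;
SQL> alter system set "_optimizer_use_stats_on_conventional_dml"=false;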

1 Error 1: "ocrconfig.bin" does not exists or is not readable
1.1 Symptom
In a 19c RAC environment, while upgrading the RU from 19.3 to 19.6 according to the 19.6 RU readme, opatchauto failed with the following error:

[root@luda software]# export PATH=$PATH:/u01/app/19.3.0/grid/OPatch
[root@luda software]# opatchauto apply /u01/software/30501910

OPatchauto session is initiated at Thu Jan 29 21:18:26 2020

System initialization log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-03-12_09-18-36PM.log.

Session log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/opatchauto2020-03-12_09-21-01PM.log
The id for this session is 5Z4B

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.3.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19.3.0/db_1
Patch applicability verified successfully on home /u01/app/oracle/product/19.3.0/db_1

Patch applicability verified successfully on home /u01/app/19.3.0/grid

Verifying SQL patch applicability on home /u01/app/oracle/product/19.3.0/db_1
SQL patch applicability verified successfully on home /u01/app/oracle/product/19.3.0/db_1

Preparing to bring down database service on home /u01/app/oracle/product/19.3.0/db_1
Successfully prepared home /u01/app/oracle/product/19.3.0/db_1 to bring down database service

Bringing down CRS service on home /u01/app/19.3.0/grid
CRS service brought down successfully on home /u01/app/19.3.0/grid

Performing prepatch operation on home /u01/app/oracle/product/19.3.0/db_1
Perpatch operation completed successfully on home /u01/app/oracle/product/19.3.0/db_1

Start applying binary patch on home /u01/app/oracle/product/19.3.0/db_1
Binary patch applied successfully on home /u01/app/oracle/product/19.3.0/db_1

Performing postpatch operation on home /u01/app/oracle/product/19.3.0/db_1
Postpatch operation completed successfully on home /u01/app/oracle/product/19.3.0/db_1

Start applying binary patch on home /u01/app/19.3.0/grid
Failed while applying binary patches on home /u01/app/19.3.0/grid

Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : rac1->/u01/app/19.3.0/grid Type[crs]
Details: [
—————————Patching Failed———————————
Command execution failed during patching in home: /u01/app/19.3.0/grid, host: rac1.
Command failed: /u01/app/19.3.0/grid/OPatch/opatchauto apply /u01/software/30501910 -oh /u01/app/19.3.0/grid -target_type cluster -binary -invPtrLoc /u01/app/19.3.0/grid/oraInst.loc -jre /u01/app/19.3.0/grid/OPatch/jre -persistresult /u01/app/19.3.0/grid/OPatch/auto/dbsessioninfo/sessionresult_rac1_crs.ser -analyzedresult /u01/app/19.3.0/grid/OPatch/auto/dbsessioninfo/sessionresult_analyze_rac1_crs.ser
Command failure output:
==Following patches FAILED in apply:

Patch: /u01/software/30501910/30489227
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-12_21-45-54PM_1.log
Reason: Failed during Analysis: CheckNApplyReport Failed, [ Prerequisite Status: FAILED, Prerequisite output:
The details are:

Prerequisite check “CheckApplicable” failed.]

After fixing the cause of failure Run opatchauto resume

]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Thu Jan 29 21:46:04 2020
Time taken to complete the session 27 minutes, 39 seconds

opatchauto failed with error code 42
[root@luda software]#
Check the error log:

[Jan 29, 2020 10:36:54 PM] [INFO] Prereq checkPatchApplicableOnCurrentPlatform Passed for patch : 30489227
[Jan 29, 2020 10:36:57 PM] [WARNING]Action file /u01/app/19.3.0/grid/jlib/srvmasm.jar is in the jar list,OOP should be lanched
[Jan 29, 2020 10:36:58 PM] [INFO] Patch 30489227:
Copy Action: Source File “/u01/software/30501910/30489227/files/bin/ocrcheck.bin” does not exists or i
s not readable
‘oracle.has.crs, 19.0.0.0.0’: Cannot copy file from ‘ocrcheck.bin’ to ‘/u01/app/19.3.0/grid/bin/ocrche
ck.bin’
Copy Action: Source File “/u01/software/30501910/30489227/files/bin/ocrconfig.bin” does not exists or
is not readable
‘oracle.has.crs, 19.0.0.0.0’: Cannot copy file from ‘ocrconfig.bin’ to ‘/u01/app/19.3.0/grid/bin/ocrco
nfig.bin’
[Jan 29, 2020 10:36:58 PM] [INFO] Prerequisite check “CheckApplicable” failed.
The details are:

Patch 30489227:
Copy Action: Source File “/u01/software/30501910/30489227/files/bin/ocrcheck.bin” does not exists or i
s not readable
‘oracle.has.crs, 19.0.0.0.0’: Cannot copy file from ‘ocrcheck.bin’ to ‘/u01/app/19.3.0/grid/bin/ocrche
ck.bin’
Copy Action: Source File “/u01/software/30501910/30489227/files/bin/ocrconfig.bin” does not exists or
is not readable
‘oracle.has.crs, 19.0.0.0.0’: Cannot copy file from ‘ocrconfig.bin’ to ‘/u01/app/19.3.0/grid/bin/ocrco
nfig.bin’
[Jan 29, 2020 10:36:58 PM] [SEVERE] OUI-67073:UtilSession failed:
Prerequisite check “CheckApplicable” failed.
[Jan 29, 2020 10:36:58 PM] [INFO] Finishing UtilSession at Thu Jan 29 22:36:58 CST 2020
[Jan 29, 2020 10:36:58 PM] [INFO] Log file location: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-12_22-36-43PM
_1.log
[Jan 29, 2020 10:36:58 PM] [INFO] Stack Description: java.lang.RuntimeException:
Prerequisite check “CheckApplicable” failed.
at oracle.opatch.OPatchSessionHelper.runApplyPrereqs(OPatchSessionHelper.java:6548)
at oracle.opatch.opatchutil.NApply.legacy_process(NApply.java:1002)
at oracle.opatch.opatchutil.NApply.legacy_process(NApply.java:370)
at oracle.opatch.opatchutil.NApply.process(NApply.java:352)
at oracle.opatch.opatchutil.OUSession.napply(OUSession.java:1123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at oracle.opatch.UtilSession.process(UtilSession.java:355)
at oracle.opatch.OPatchSession.main(OPatchSession.java:3985)
at oracle.opatch.OPatchSDK.NApply(OPatchSDK.java:1127)
at oracle.opatch.opatchsdk.OPatchTarget.NApplyReport(OPatchTarget.java:3964)
at oracle.opatch.opatchsdk.OPatchTarget.NApplyReportForAllPrereqs(OPatchTarget.java:4013)
at oracle.opatchauto.core.binary.action.AnalyzeReportGenerator.analyzePatch(AnalyzeReportGenerator
.java:186)
at oracle.opatchauto.core.binary.action.AnalyzeReportGenerator.execute(AnalyzeReportGenerator.java
:148)
at oracle.opatchauto.core.binary.action.LegacyPatchAction.execute(LegacyPatchAction.java:46)
at oracle.opatchauto.core.binary.OPatchAutoBinary.patchWithoutAnalyze(OPatchAutoBinary.java:519)
at oracle.opatchauto.core.binary.OPatchAutoBinary.applyWithoutAnalyze(OPatchAutoBinary.java:406)
at oracle.opatchauto.core.OPatchAutoCore.runOPatchAutoBinary(OPatchAutoCore.java:192)
at oracle.opatchauto.core.OPatchAutoCore.main(OPatchAutoCore.java:75)
Caused by: java.lang.RuntimeException:
Prerequisite check “CheckApplicable” failed.
… 21 more
Caused by: oracle.opatch.PrereqFailedException:
Prerequisite check “CheckApplicable” failed.
… 21 more
[Jan 29, 2020 10:36:58 PM] [INFO] EXITING METHOD: NApplyReport(OPatchPatch[] patches,OPatchNApplyOptions options)
[grid@luda OPatch]$
1.2 Solution
The error here is:
"/u01/software/30501910/30489227/files/bin/ocrconfig.bin" does not exists or is not readable
This looks like a permissions problem, so the ownership of the RU patch files was changed manually:

[root@luda software]# chown grid:oinstall 30501910 -R
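Before re-running opatchauto, a quick sanity check (not from the original note, just an illustrative command) is to confirm that no file in the unzipped patch is still owned by another user:

[root@luda software]# find /u01/software/30501910 ! -user grid -ls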
Running opatchauto again produced a different error.

2 Error 2: Copy failed from '…/files/bin/cssdagent' to '…/bin/cssdagent'…
2.1 Symptom
Running it again still failed, as follows:

2020-03-13 08:36:48,374 INFO [313] com.oracle.glcm.patch.auto.db.integration.controller.action.OPatchAutoBinaryAction – Opatchcore binary error message=
2020-03-13 08:36:48,375 INFO [313] com.oracle.glcm.patch.auto.db.integration.controller.action.OPatchAutoBinaryAction – Reading session result from /u01/app/19.3.0/grid/opatchautocfg/db/sessioninfo/sessionresult_rac1_crs.ser
2020-03-13 08:36:48,380 INFO [313] com.oracle.cie.common.util.reporting.CommonReporter – Reporting console output : Message{id=’null’, message=’Failed while applying binary patches on home /u01/app/19.3.0/grid
‘}
2020-03-13 08:36:48,381 SEVERE [311] com.oracle.glcm.patch.auto.action.PatchActionExecutor – Failed to execute patch action [com.oracle.glcm.patch.auto.db.integration.controller.action.OPatchAutoBinaryAction] on patch target [rac1->/u01/app/19.3.0/grid Type[crs]].
2020-03-13 08:36:48,382 INFO [311] com.oracle.cie.common.util.reporting.CommonReporter – Reporting console output : Message{id=’null’, message=’Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : rac1->/u01/app/19.3.0/grid Type[crs]
Details: [
—————————Patching Failed———————————
Command execution failed during patching in home: /u01/app/19.3.0/grid, host: rac1.
Command failed: /u01/app/19.3.0/grid/OPatch/opatchauto apply /u01/software/30501910 -oh /u01/app/19.3.0/grid -target_type cluster -binary -invPtrLoc /u01/app/19.3.0/grid/oraInst.loc -jre /u01/app/19.3.0/grid/OPatch/jre -persistresult /u01/app/19.3.0/grid/opatchautocfg/db/sessioninfo/sessionresult_rac1_crs.ser -analyzedresult /u01/app/19.3.0/grid/opatchautocfg/db/sessioninfo/sessionresult_analyze_rac1_crs.ser
Command failure output:
==Following patches FAILED in apply:

Patch: /u01/software/30501910/30489227
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-13_08-31-51AM_1.log
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: ApplySession failed in system modification phase… ‘ApplySession::apply failed: Copy failed from ‘/u01/software/30501910/30489227/files/bin/crsd.bin’ to ‘/u01/app/19.3.0/grid/bin/crsd.bin’…
Copy failed from ‘/u01/software/30501910/30489227/files/bin/cssdagent’ to ‘/u01/app/19.3.0/grid/bin/cssdagent’…
Copy failed from ‘/u01/software/30501910/30489227/files/bin/cssdmonitor’ to ‘/u01/app/19.3.0/grid/bin/cssdmonitor’…
Copy fa …

After fixing the cause of failure Run opatchauto resume

]’}
2020-03-13 08:36:48,407 SEVERE [41] com.oracle.cie.wizard.internal.engine.WizardControllerEngine – Wizard error cause
com.oracle.cie.wizard.tasks.TaskExecutionException: OPATCHAUTO-68128: Patch action execution failed.
OPATCHAUTO-68128: Failed to execute patch actions for goal offline:binary-patching
OPATCHAUTO-68128: Check the log for more information.
at com.oracle.glcm.patch.auto.wizard.silent.tasks.PatchActionTask.execute(PatchActionTask.java:106)
at com.oracle.cie.wizard.internal.cont.SilentTaskContainer$TaskRunner.run(SilentTaskContainer.java:102)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.oracle.glcm.patch.auto.OPatchAutoException: OPATCHAUTO-68067: Patch action execution failed.
OPATCHAUTO-68067: Failed to execute patch action [com.oracle.glcm.patch.auto.db.integration.controller.action.OPatchAutoBinaryAction
Patch Target : rac1->/u01/app/19.3.0/grid Type[crs]
Details: [
—————————Patching Failed———————————
Command execution failed during patching in home: /u01/app/19.3.0/grid, host: rac1.
Command failed: /u01/app/19.3.0/grid/OPatch/opatchauto apply /u01/software/30501910 -oh /u01/app/19.3.0/grid -target_type cluster -binary -invPtrLoc /u01/app/19.3.0/grid/oraInst.loc -jre /u01/app/19.3.0/grid/OPatch/jre -persistresult /u01/app/19.3.0/grid/opatchautocfg/db/sessioninfo/sessionresult_rac1_crs.ser -analyzedresult /u01/app/19.3.0/grid/opatchautocfg/db/sessioninfo/sessionresult_analyze_rac1_crs.ser
Command failure output:
==Following patches FAILED in apply:

Patch: /u01/software/30501910/30489227
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-13_08-31-51AM_1.log
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: ApplySession failed in system modification phase… ‘ApplySession::apply failed: Copy failed from ‘/u01/software/30501910/30489227/files/bin/crsd.bin’ to ‘/u01/app/19.3.0/grid/bin/crsd.bin’…
Copy failed from ‘/u01/software/30501910/30489227/files/bin/cssdagent’ to ‘/u01/app/19.3.0/grid/bin/cssdagent’…
Copy failed from ‘/u01/software/30501910/30489227/files/bin/cssdmonitor’ to ‘/u01/app/19.3.0/grid/bin/cssdmonitor’…
Copy fa …

After fixing the cause of failure Run opatchauto resume

]]. Failures:
OPATCHAUTO-68067: Check the details to determine the cause of the failure.
at com.oracle.glcm.patch.auto.action.PatchActionExecutor.execute(PatchActionExecutor.java:172)
at com.oracle.glcm.patch.auto.wizard.silent.tasks.PatchActionTask.execute(PatchActionTask.java:102)
… 2 more
2020-03-13 08:36:48,516 INFO [1] com.oracle.glcm.patch.auto.db.integration.model.productsupport.DBBaseProductSupport – Space available after session: 44935 MB
2020-03-13 08:36:49,875 SEVERE [1] com.oracle.glcm.patch.auto.OPatchAuto – OPatchAuto failed.
com.oracle.glcm.patch.auto.OPatchAutoException: OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
at com.oracle.glcm.patch.auto.OrchestrationEngineImpl.orchestrate(OrchestrationEngineImpl.java:40)
at com.oracle.glcm.patch.auto.OPatchAuto.orchestrate(OPatchAuto.java:858)
at com.oracle.glcm.patch.auto.OPatchAuto.orchestrate(OPatchAuto.java:398)
at com.oracle.glcm.patch.auto.OPatchAuto.orchestrate(OPatchAuto.java:344)
at com.oracle.glcm.patch.auto.OPatchAuto.main(OPatchAuto.java:212)
2020-03-13 08:36:49,875 INFO [1] com.oracle.cie.common.util.reporting.CommonReporter – Reporting console output : Message{id=’null’, message=’OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.’}
2020-03-13 08:36:49,875 INFO [1] com.oracle.cie.common.util.reporting.CommonReporter – Reporting console output : Message{id=’null’, message=’OPatchAuto failed.’}
This time the error changed:

Copy failed from ‘/u01/software/30501910/30489227/files/bin/cssdagent’ to ‘/u01/app/19.3.0/grid/bin/cssdagent’…

2.2 Solution
A search on MOS turned up a note describing a very similar error:
opatch report “ERROR: Prereq checkApplicable failed.” when Applying Grid Infrastructure patch (Doc ID 1417268.1)

The MOS note lists several causes that can trigger this error; the one closest to my case is this:

D. The patch is not unzipped as grid user, often it is unzipped as root user
ls -l will show the files are owned by root user.
The solution is to unzip the patch as grid user into an empty directory outside of GRID_HOME, then retry the patch apply.
Although my RU patch was not under GRID_HOME, it had been unzipped as the root user. I deleted the extracted directory, unzipped it again as the grid user, and retried the patch, but hit the same error:
[grid@luda tmp]$ unzip -d /tmp p30501910_190000_Linux-x86-64-GI.zip

[Mar 13, 2020 9:18:08 AM] [INFO] Copying file to “/u01/app/19.3.0/grid/srvm/lib/sprraw.o”
[Mar 13, 2020 9:18:08 AM] [INFO] The following actions have failed:
[Mar 13, 2020 9:18:08 AM] [WARNING] OUI-67124:Copy failed from ‘/tmp/30501910/30489227/files/bin/crsd.bin’ to ‘/u01/app/19.3.0/grid/bin/crsd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/cssdagent’ to ‘/u01/app/19.3.0/grid/bin/cssdagent’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/cssdmonitor’ to ‘/u01/app/19.3.0/grid/bin/cssdmonitor’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/evmd.bin’ to ‘/u01/app/19.3.0/grid/bin/evmd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/evmlogger.bin’ to ‘/u01/app/19.3.0/grid/bin/evmlogger.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/gipcd.bin’ to ‘/u01/app/19.3.0/grid/bin/gipcd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/gpnpd.bin’ to ‘/u01/app/19.3.0/grid/bin/gpnpd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/mdnsd.bin’ to ‘/u01/app/19.3.0/grid/bin/mdnsd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/ocssd.bin’ to ‘/u01/app/19.3.0/grid/bin/ocssd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/octssd.bin’ to ‘/u01/app/19.3.0/grid/bin/octssd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/ohasd.bin’ to ‘/u01/app/19.3.0/grid/bin/ohasd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/oraagent.bin’ to ‘/u01/app/19.3.0/grid/bin/oraagent.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/orarootagent.bin’ to ‘/u01/app/19.3.0/grid/bin/orarootagent.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/osysmond.bin’ to ‘/u01/app/19.3.0/grid/bin/osysmond.bin’…
[Mar 13, 2020 9:18:08 AM] [INFO] Do you want to proceed? [y|n]
[Mar 13, 2020 9:18:08 AM] [INFO] N (auto-answered by -silent)
[Mar 13, 2020 9:18:08 AM] [INFO] User Responded with: N
[Mar 13, 2020 9:18:08 AM] [WARNING] OUI-67124:ApplySession failed in system modification phase… ‘ApplySession::apply failed: Copy failed from ‘/tmp/30501910/30489227/f
iles/bin/crsd.bin’ to ‘/u01/app/19.3.0/grid/bin/crsd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/cssdagent’ to ‘/u01/app/19.3.0/grid/bin/cssdagent’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/cssdmonitor’ to ‘/u01/app/19.3.0/grid/bin/cssdmonitor’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/evmd.bin’ to ‘/u01/app/19.3.0/grid/bin/evmd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/evmlogger.bin’ to ‘/u01/app/19.3.0/grid/bin/evmlogger.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/gipcd.bin’ to ‘/u01/app/19.3.0/grid/bin/gipcd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/gpnpd.bin’ to ‘/u01/app/19.3.0/grid/bin/gpnpd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/mdnsd.bin’ to ‘/u01/app/19.3.0/grid/bin/mdnsd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/ocssd.bin’ to ‘/u01/app/19.3.0/grid/bin/ocssd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/octssd.bin’ to ‘/u01/app/19.3.0/grid/bin/octssd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/ohasd.bin’ to ‘/u01/app/19.3.0/grid/bin/ohasd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/oraagent.bin’ to ‘/u01/app/19.3.0/grid/bin/oraagent.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/orarootagent.bin’ to ‘/u01/app/19.3.0/grid/bin/orarootagent.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/osysmond.bin’ to ‘/u01/app/19.3.0/grid/bin/osysmond.bin’…

[Mar 13, 2020 9:18:08 AM] [INFO] Restoring “/u01/app/19.3.0/grid” to the state prior to running NApply…
[Mar 13, 2020 9:18:08 AM] [INFO] Restoring files: copy recurse from /u01/app/19.3.0/grid/.patch_storage/NApply/2020-03-13_09-15-07AM/backup to /u01/app/19.3.0/grid

#### Stack trace of processes holding locks ####

Time: 2020-03-13_09-14-13AM
Command: oracle/opatchauto/core/OPatchAutoCore apply /tmp/30501910 -oh /u01/app/19.3.0/grid -target_type cluster -binary -invPtrLoc /u01/app/19.3.0/grid/oraInst.loc -per
sistresult /u01/app/19.3.0/grid/opatchautocfg/db/sessioninfo/sessionresult_rac1_crs.ser -analyzedresult /u01/app/19.3.0/grid/opatchautocfg/db/sessioninfo/sessionresult_a
nalyze_rac1_crs.ser -customLogDir /u01/app/19.3.0/grid/cfgtoollogs
Lock File Name: /u01/app/oraInventory/locks/_u01_app_19.3.0_grid_writer.lock
StackTrace:
———–
java.lang.Throwable
at oracle.sysman.oii.oiit.OiitLockHeartbeat.writeStackTrace(OiitLockHeartbeat.java:193)
at oracle.sysman.oii.oiit.OiitLockHeartbeat.(OiitLockHeartbeat.java:173)
at oracle.sysman.oii.oiit.OiitTargetLocker.getWriterLock(OiitTargetLocker.java:346)
at oracle.sysman.oii.oiit.OiitTargetLocker.getWriterLock(OiitTargetLocker.java:238)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.acquireLocks(OiicStandardInventorySession.java:564)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initAreaControl(OiicStandardInventorySession.java:359)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:332)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:294)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:243)
at oracle.sysman.oui.patch.impl.HomeOperationsImpl.initialize(HomeOperationsImpl.java:107)
at oracle.glcm.opatch.common.api.install.HomeOperationsShell.initialize(HomeOperationsShell.java:117)
at oracle.opatch.ipm.IPMRWServices.addPatchCUP(IPMRWServices.java:134)
at oracle.opatch.ipm.IPMRWServices.add(IPMRWServices.java:146)
at oracle.opatch.ApplySession.apply(ApplySession.java:899)
at oracle.opatch.ApplySession.processLocal(ApplySession.java:4098)
at oracle.opatch.ApplySession.process(ApplySession.java:5080)
at oracle.opatch.ApplySession.process(ApplySession.java:4942)
at oracle.opatch.OPatchACL.processApply(OPatchACL.java:310)
at oracle.opatch.opatchutil.NApply.legacy_process(NApply.java:1429)
at oracle.opatch.opatchutil.NApply.legacy_process(NApply.java:370)
at oracle.opatch.opatchutil.NApply.process(NApply.java:352)
at oracle.opatch.opatchutil.OUSession.napply(OUSession.java:1123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at oracle.opatch.UtilSession.process(UtilSession.java:355)
at oracle.opatch.OPatchSession.main(OPatchSession.java:3985)
at oracle.opatch.OPatchSDK.NApply(OPatchSDK.java:1127)
at oracle.opatch.opatchsdk.OPatchTarget.NApply(OPatchTarget.java:4169)
at oracle.opatchauto.core.binary.action.LegacyPatchAction.execute(LegacyPatchAction.java:76)
at oracle.opatchauto.core.binary.OPatchAutoBinary.patchWithoutAnalyze(OPatchAutoBinary.java:519)
at oracle.opatchauto.core.binary.OPatchAutoBinary.applyWithoutAnalyze(OPatchAutoBinary.java:406)
at oracle.opatchauto.core.OPatchAutoCore.runOPatchAutoBinary(OPatchAutoCore.java:192)
at oracle.opatchauto.core.OPatchAutoCore.main(OPatchAutoCore.java:75)
Digging further, this could be a bug:
Bug 13575478 : PATCH APLLICABLE/CONFLICT CHECK FAILED WITH ‘OPATCH AUTO’
A. Expected behaviour if GRID_HOME has not been unlocked

If GI home has not been unlocked with “rootcrs.pl -unlock”, checkapplicable will fail as many files are still owned by root user, this is expected behaviour. The solution is to use “opatch auto” or follow the patch readme step-by-step so the GI home gets unlocked first.
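For reference, the manual (non-opatchauto) flow the note alludes to unlocks the GI home before patching. This is only a rough sketch; the exact commands and the list of patch sub-directories must be taken from the RU readme:

[root@luda ~]# /u01/app/19.3.0/grid/crs/install/rootcrs.sh -prepatch
[grid@luda ~]$ /u01/app/19.3.0/grid/OPatch/opatch apply -oh /u01/app/19.3.0/grid -local /tmp/30501910/30489227
[root@luda ~]# /u01/app/19.3.0/grid/crs/install/rootcrs.sh -postpatch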
The method described there differs from opatchauto, so I ran an analyze first. It reported that a patch had already been applied, yet opatch lspatches did not show it, so I rolled back the 19.6 RU:

[root@luda ~]# /u01/app/19.3.0/grid/OPatch/opatchauto apply /tmp/30501910 -analyze
[root@luda ~]# /u01/app/19.3.0/grid/OPatch/opatchauto rollback /tmp/30501910
Running it again produced yet another error, Error 3.

3 Error 3: "CheckActiveFilesAndExecutables"
3.1 Symptom
The error message here is quite clear:

Following active executables are used by opatch process :
/u01/app/oracle/product/19.3.0/db_1/lib/libclntsh.so.19.1
/u01/app/oracle/product/19.3.0/db_1/lib/libsqlplus.so
[Mar 13, 2020 3:13:26 PM] [INFO] Prerequisite check “CheckActiveFilesAndExecutables” failed.
The details are:

Following active executables are not used by opatch process :

Following active executables are used by opatch process :
/u01/app/oracle/product/19.3.0/db_1/lib/libclntsh.so.19.1
/u01/app/oracle/product/19.3.0/db_1/lib/libsqlplus.so
[Mar 13, 2020 3:13:26 PM] [SEVERE] OUI-67073:UtilSession failed: Prerequisite check “CheckActiveFilesAndExecutables” failed.
[Mar 13, 2020 3:13:26 PM] [INFO] Finishing UtilSession at Fri Mar 13 15:13:26 CST 2020
[Mar 13, 2020 3:13:26 PM] [INFO] Log file location: /u01/app/oracle/product/19.3.0/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-13_15-09-58PM_1.log
[root@luda ~]#
3.2 Solution
There were active sqlplus sessions holding these libraries open; they had to be closed before retrying.
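An illustrative way (not from the original post) to see which processes keep the reported libraries busy:

[oracle@luda ~]$ fuser -v /u01/app/oracle/product/19.3.0/db_1/lib/libclntsh.so.19.1 /u01/app/oracle/product/19.3.0/db_1/lib/libsqlplus.so

After exiting (or killing) the reported sqlplus sessions, a rollback was run first: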
[root@luda ~]# /u01/app/19.3.0/grid/OPatch/opatchauto rollback /tmp/30501910

OPatchauto session is initiated at Fri Mar 13 15:18:58 2020

System initialization log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-03-13_03-19-01PM.log.

Session log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/opatchauto2020-03-13_03-21-09PM.log
The id for this session is U2H9

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.3.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19.3.0/db_1
Patch applicability verified successfully on home /u01/app/19.3.0/grid

Patch applicability verified successfully on home /u01/app/oracle/product/19.3.0/db_1

Verifying SQL patch applicability on home /u01/app/oracle/product/19.3.0/db_1
SQL patch applicability verified successfully on home /u01/app/oracle/product/19.3.0/db_1

Preparing to bring down database service on home /u01/app/oracle/product/19.3.0/db_1
Successfully prepared home /u01/app/oracle/product/19.3.0/db_1 to bring down database service

Performing prepatch operation on home /u01/app/oracle/product/19.3.0/db_1
Perpatch operation completed successfully on home /u01/app/oracle/product/19.3.0/db_1

Start rolling back binary patch on home /u01/app/oracle/product/19.3.0/db_1
Binary patch rolled back successfully on home /u01/app/oracle/product/19.3.0/db_1

Performing postpatch operation on home /u01/app/oracle/product/19.3.0/db_1
Postpatch operation completed successfully on home /u01/app/oracle/product/19.3.0/db_1

Preparing home /u01/app/oracle/product/19.3.0/db_1 after database service restarted
No step execution required………

Trying to roll back SQL patch on home /u01/app/oracle/product/19.3.0/db_1
SQL patch rolled back successfully on home /u01/app/oracle/product/19.3.0/db_1

OPatchAuto successful.

——————————–Summary——————————–

Patching is completed successfully. Please find the summary as follows:

Host:rac1
CRS Home:/u01/app/19.3.0/grid
Version:19.0.0.0.0
Summary:

==Following patches were SKIPPED:

Patch: /tmp/30501910/30489227
Reason: This Patch does not exist in the home, it cannot be rolled back.

Patch: /tmp/30501910/30489632
Reason: This Patch does not exist in the home, it cannot be rolled back.

Patch: /tmp/30501910/30655595
Reason: This Patch does not exist in the home, it cannot be rolled back.

Patch: /tmp/30501910/30557433
Reason: This Patch does not exist in the home, it cannot be rolled back.

Host:rac1
RAC Home:/u01/app/oracle/product/19.3.0/db_1
Version:19.0.0.0.0
Summary:

==Following patches were SKIPPED:

Patch: /tmp/30501910/30489632
Reason: This patch is not applicable to this specified target type – “rac_database”

Patch: /tmp/30501910/30655595
Reason: This patch is not applicable to this specified target type – “rac_database”

Patch: /tmp/30501910/30557433
Reason: Patch /tmp/30501910/30557433 is not applied as part of bundle patch 30501910

==Following patches were SUCCESSFULLY rolled back:

Patch: /tmp/30501910/30489227
Log: /u01/app/oracle/product/19.3.0/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-13_15-23-36PM_1.log

Patching session reported following warning(s):
_________________________________________________

[WARNING] The database instance ‘cndba1’ from ‘/u01/app/oracle/product/19.3.0/db_1′, in host’rac1’ is not running. SQL changes, if any, will not be rolled back.
To roll back. the SQL changes, bring up the database instance and run the command manually from any one node (run as oracle).
Refer to the readme to get the correct steps for applying the sql changes.

OPatchauto session completed at Fri Mar 13 15:24:58 2020
Time taken to complete the session 6 minutes, 1 second
[root@luda ~]#
Applying again still failed.

4 Error 4: OUI-67073:UtilSession failed: ApplySession failed in system modification phase (do not attempt this workaround)
4.1 Symptom
This error is the same as Error 2. It is in fact the root cause that blocks applying the RU on 19c, but no reasonable explanation was found on MOS. The feasible approach at this point is to use the non-rolling method and apply the RU to the GI and DB homes separately.
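A sketch of that per-home approach, reusing the -oh syntax that already appears in the opatchauto logs above; always follow the RU readme for the exact sequence and any extra options it calls for (such as -nonrolling):

[root@luda ~]# /u01/app/19.3.0/grid/OPatch/opatchauto apply /tmp/30501910 -oh /u01/app/19.3.0/grid
[root@luda ~]# /u01/app/19.3.0/grid/OPatch/opatchauto apply /tmp/30501910 -oh /u01/app/oracle/product/19.3.0/db_1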

————————————

[Mar 13, 2020 3:54:57 PM] [INFO] Removed patch “30489227” with UPI + “23305624” from OUI inventory memory..
[Mar 13, 2020 3:54:57 PM] [INFO] Stack Description: java.lang.RuntimeException: OUI session not initialized
at oracle.sysman.oui.patch.impl.HomeOperationsImpl.saveInventory(HomeOperationsImpl.java:372)
at oracle.glcm.opatch.common.api.install.HomeOperationsShell.saveInventory(HomeOperationsShell.java:204)
at oracle.opatch.ipm.IPMRWServices.saveInstallInventory(IPMRWServices.java:854)
at oracle.opatch.OPatchSession.restorePatchesInventory(OPatchSession.java:1434)
at oracle.opatch.MergedPatchObject.restoreOH(MergedPatchObject.java:1234)
at oracle.opatch.opatchutil.NApply.legacy_process(NApply.java:1465)
at oracle.opatch.opatchutil.NApply.legacy_process(NApply.java:370)
at oracle.opatch.opatchutil.NApply.process(NApply.java:352)
at oracle.opatch.opatchutil.OUSession.napply(OUSession.java:1123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at oracle.opatch.UtilSession.process(UtilSession.java:355)
at oracle.opatch.OPatchSession.main(OPatchSession.java:3985)
at oracle.opatch.OPatchSDK.NApply(OPatchSDK.java:1127)
at oracle.opatch.opatchsdk.OPatchTarget.NApply(OPatchTarget.java:4169)
at oracle.opatchauto.core.binary.action.LegacyPatchAction.execute(LegacyPatchAction.java:76)
at oracle.opatchauto.core.binary.OPatchAutoBinary.patchWithoutAnalyze(OPatchAutoBinary.java:519)
at oracle.opatchauto.core.binary.OPatchAutoBinary.applyWithoutAnalyze(OPatchAutoBinary.java:406)
at oracle.opatchauto.core.OPatchAutoCore.runOPatchAutoBinary(OPatchAutoCore.java:192)
at oracle.opatchauto.core.OPatchAutoCore.main(OPatchAutoCore.java:75)
[Mar 13, 2020 3:54:57 PM] [SEVERE] OUI-67115:OPatch failed to restore OH ‘/u01/app/19.3.0/grid’. Consult OPatch document to restore the home manually before proceeding.
[Mar 13, 2020 3:54:57 PM] [WARNING] OUI-67124:
NApply was not able to restore the home. Please invoke the following scripts:
– restore.[sh,bat]
– make.txt (Unix only)
to restore the ORACLE_HOME. They are located under
“/u01/app/19.3.0/grid/.patch_storage/NApply/2020-03-13_15-52-41PM”
[Mar 13, 2020 3:54:58 PM] [SEVERE] OUI-67073:UtilSession failed: ApplySession failed in system modification phase… ‘ApplySession::apply failed: Copy failed from ‘/tmp/30501910/30489227/files/bin/crsd.bin’ to ‘/u01/app/19.3.0/grid/bin/crsd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/cssdagent’ to ‘/u01/app/19.3.0/grid/bin/cssdagent’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/cssdmonitor’ to ‘/u01/app/19.3.0/grid/bin/cssdmonitor’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/evmd.bin’ to ‘/u01/app/19.3.0/grid/bin/evmd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/evmlogger.bin’ to ‘/u01/app/19.3.0/grid/bin/evmlogger.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/gipcd.bin’ to ‘/u01/app/19.3.0/grid/bin/gipcd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/gpnpd.bin’ to ‘/u01/app/19.3.0/grid/bin/gpnpd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/mdnsd.bin’ to ‘/u01/app/19.3.0/grid/bin/mdnsd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/ocssd.bin’ to ‘/u01/app/19.3.0/grid/bin/ocssd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/octssd.bin’ to ‘/u01/app/19.3.0/grid/bin/octssd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/ohasd.bin’ to ‘/u01/app/19.3.0/grid/bin/ohasd.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/oraagent.bin’ to ‘/u01/app/19.3.0/grid/bin/oraagent.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/orarootagent.bin’ to ‘/u01/app/19.3.0/grid/bin/orarootagent.bin’…
Copy failed from ‘/tmp/30501910/30489227/files/bin/osysmond.bin’ to ‘/u01/app/19.3.0/grid/bin/osysmond.bin’…
4.2 Solution
As the log suggests, run the restore script:

[root@luda 2020-03-13_15-52-41PM]# ls
backup make.txt patchlist.txt restore.sh
[root@luda 2020-03-13_15-52-41PM]# ./restore.sh
This script is going to restore the Oracle Home to the previous state.
It does not perform any of the following:
– Running init/pre/post scripts
– Oracle binary re-link
– Customized steps performed manually by user
Please use this script with supervision from Oracle Technical Support.
About to modify Oracle Home( /u01/app/19.3.0/grid )
Do you want to proceed? [Y/N]
y
User responded with : Y
Restore script completed.
[root@luda 2020-03-13_15-52-41PM]#
4.3 Analysis, part 1
Because the files that could not be copied were all CRS binaries, I manually stopped CRS when the RU apply reached the step that brings CRS down:

[root@luda 2020-03-13_16-45-11PM]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on ‘rac1’
CRS-2673: Attempting to stop ‘ora.crsd’ on ‘rac1’
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server ‘rac1’
CRS-2673: Attempting to stop ‘ora.chad’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.cndba.db’ on ‘rac1’
CRS-2677: Stop of ‘ora.cndba.db’ on ‘rac1’ succeeded
CRS-33673: Attempting to stop resource group ‘ora.asmgroup’ on server ‘rac1’
CRS-2673: Attempting to stop ‘ora.OCR.dg’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.DATA.dg’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.MGMT.dg’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.LISTENER.lsnr’ on ‘rac1’
CRS-2677: Stop of ‘ora.DATA.dg’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.MGMT.dg’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.OCR.dg’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.asm’ on ‘rac1’
CRS-2677: Stop of ‘ora.LISTENER.lsnr’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.rac1.vip’ on ‘rac1’
CRS-2677: Stop of ‘ora.rac1.vip’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.asm’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.ASMNET1LSNR_ASM.lsnr’ on ‘rac1’
CRS-2677: Stop of ‘ora.chad’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.ASMNET1LSNR_ASM.lsnr’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.asmnet1.asmnetwork’ on ‘rac1’
CRS-2677: Stop of ‘ora.asmnet1.asmnetwork’ on ‘rac1’ succeeded
CRS-33677: Stop of resource group ‘ora.asmgroup’ on server ‘rac1’ succeeded.
CRS-2672: Attempting to start ‘ora.rac1.vip’ on ‘rac2’
CRS-2676: Start of ‘ora.rac1.vip’ on ‘rac2’ succeeded
CRS-2673: Attempting to stop ‘ora.ons’ on ‘rac1’
CRS-2677: Stop of ‘ora.ons’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.net1.network’ on ‘rac1’
CRS-2677: Stop of ‘ora.net1.network’ on ‘rac1’ succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on ‘rac1’ has completed
CRS-2677: Stop of ‘ora.crsd’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.storage’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.crf’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.drivers.acfs’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.mdnsd’ on ‘rac1’
CRS-2677: Stop of ‘ora.crf’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.drivers.acfs’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.storage’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.asm’ on ‘rac1’
CRS-2677: Stop of ‘ora.mdnsd’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.asm’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.cluster_interconnect.haip’ on ‘rac1’
CRS-2677: Stop of ‘ora.cluster_interconnect.haip’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.ctssd’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.evmd’ on ‘rac1’
CRS-2677: Stop of ‘ora.ctssd’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.evmd’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.cssd’ on ‘rac1’
CRS-2677: Stop of ‘ora.cssd’ on ‘rac1’ succeeded
CRS-2673: Attempting to stop ‘ora.driver.afd’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.gipcd’ on ‘rac1’
CRS-2673: Attempting to stop ‘ora.gpnpd’ on ‘rac1’
CRS-2677: Stop of ‘ora.driver.afd’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.gpnpd’ on ‘rac1’ succeeded
CRS-2677: Stop of ‘ora.gipcd’ on ‘rac1’ succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on ‘rac1’ has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@luda 2020-03-13_16-45-11PM]#
4.4 After stopping CRS, the RU applied successfully
[root@luda tmp]# /u01/app/19.3.0/grid/OPatch/opatchauto apply /tmp/30501910

OPatchauto session is initiated at Fri Mar 13 17:17:26 2020

System initialization log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-03-13_05-17-50PM.log.

Session log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/opatchauto2020-03-13_05-20-03PM.log
The id for this session is FPKL

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.3.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19.3.0/db_1
Patch applicability verified successfully on home /u01/app/oracle/product/19.3.0/db_1

Patch applicability verified successfully on home /u01/app/19.3.0/grid

Verifying SQL patch applicability on home /u01/app/oracle/product/19.3.0/db_1
SQL patch applicability verified successfully on home /u01/app/oracle/product/19.3.0/db_1

Preparing to bring down database service on home /u01/app/oracle/product/19.3.0/db_1
Successfully prepared home /u01/app/oracle/product/19.3.0/db_1 to bring down database service

Bringing down CRS service on home /u01/app/19.3.0/grid
CRS service brought down successfully on home /u01/app/19.3.0/grid

Start applying binary patch on home /u01/app/19.3.0/grid
Binary patch applied successfully on home /u01/app/19.3.0/grid

Starting CRS service on home /u01/app/19.3.0/grid
CRS service started successfully on home /u01/app/19.3.0/grid

Preparing home /u01/app/oracle/product/19.3.0/db_1 after database service restarted
No step execution required………

OPatchAuto successful.

——————————–Summary——————————–

Patching is completed successfully. Please find the summary as follows:

Host:rac1
RAC Home:/u01/app/oracle/product/19.3.0/db_1
Version:19.0.0.0.0
Summary:

==Following patches were SKIPPED:

Patch: /tmp/30501910/30489632
Reason: This patch is not applicable to this specified target type – “rac_database”

Patch: /tmp/30501910/30655595
Reason: This patch is not applicable to this specified target type – “rac_database”

Patch: /tmp/30501910/30489227
Reason: This patch is already been applied, so not going to apply again.

Patch: /tmp/30501910/30557433
Reason: This patch is already been applied, so not going to apply again.

Host:rac1
CRS Home:/u01/app/19.3.0/grid
Version:19.0.0.0.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /tmp/30501910/30489227
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-13_17-23-27PM_1.log

Patch: /tmp/30501910/30489632
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-13_17-23-27PM_1.log

Patch: /tmp/30501910/30557433
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-13_17-23-27PM_1.log

Patch: /tmp/30501910/30655595
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-13_17-23-27PM_1.log

OPatchauto session completed at Fri Mar 13 17:45:20 2020
Time taken to complete the session 27 minutes, 56 seconds
[root@luda tmp]#
4.5 Remaining issues
Manually stopping CRS during the RU apply does let the RU go through, but it leaves behind many permission problems, and CRS fails to start:

[root@luda lib]# crsctl start crs
CRS-41053: checking Oracle Grid Infrastructure for file permission issues
PRVG-2031 : Owner of file “/u01/app/grid/cfgtoollogs” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVH-0124 : Path “/etc/oracle/maps” with permissions “rwxr-xr-x” does not have write permissions for the file’s group and others on node “rac1”.
PRVH-0100 : Restricted deletion flag is not set for path “/etc/oracle/maps” on node “rac1”.
PRVG-2031 : Owner of file “/u01/app/grid” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/admin” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVH-0111 : Path “/etc/init.d/ohasd” with permissions “rwxr-x—” does not have read permissions for others on node “rac1”.
PRVH-0113 : Path “/etc/init.d/ohasd” with permissions “rwxr-x—” does not have execute permissions for others on node “rac1”.
PRVG-2031 : Owner of file “/etc/oracleafd.conf” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2032 : Group of file “/etc/oracleafd.conf” did not match the expected value on node “rac1”. [Expected = “oinstall(54321)” ; Found = “asmadmin(54329)”]
PRVH-0124 : Path “/var/tmp/.oracle” with permissions “rwxr-xr-x” does not have write permissions for the file’s group and others on node “rac1”.
PRVH-0100 : Restricted deletion flag is not set for path “/var/tmp/.oracle” on node “rac1”.
PRVG-2031 : Owner of file “/u01/app/grid/diag/ofm” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/lsnrctl” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/netcman” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/audit” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/checkpoints” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/crsdata/rac1/olr/rac1_19.olr” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVH-0100 : Restricted deletion flag is not set for path “/u01/app/grid/crsdata/rac1/shm” on node “rac1”.
PRVG-2031 : Owner of file “/u01/app/grid/crsdata/rac1/shm” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/crsdata/rac1/cvu” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/crsdata/rac1/olr” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/crsdata” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/crsdata/rac1/ocr” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/kfod” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/asmtool” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/crs” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/dps” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/em” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/diagtool” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/gsm” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/ios” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/rdbms” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/apx” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/tnslsnr” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/asmcmd” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/clients” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/asm” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/grid/diag/afdboot” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-11960 : Set user ID bit is not set for file “/u01/app/19.3.0/grid/bin/jssu” on node “rac1”.
PRVH-0147 : Set group ID bit is not set for file “/u01/app/19.3.0/grid/bin/extproc” on node “rac1”.
PRVG-11960 : Set user ID bit is not set for file “/u01/app/19.3.0/grid/bin/extjob” on node “rac1”.
PRVG-11960 : Set user ID bit is not set for file “/u01/app/19.3.0/grid/bin/oradism” on node “rac1”.
PRVG-11960 : Set user ID bit is not set for file “/u01/app/19.3.0/grid/bin/oracle” on node “rac1”.
PRVH-0147 : Set group ID bit is not set for file “/u01/app/19.3.0/grid/bin/oracle” on node “rac1”.
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/HASLoad.pm” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2032 : Group of file “/u01/app/19.3.0/grid/crs/install/cmdllroot.sh” did not match the expected value on node “rac1”. [Expected = “oinstall(54321)” ; Found = “root(0)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/crsconfig_params.sbs” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/crsconvert.pm” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/install_gi.excl” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/paramfile.crs” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/oracle-ohasd.service” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/oracle-ohasd.conf” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/installRemove.excl” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/install.incl” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/install.excl” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/inittab” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/crstfa.pm” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/crsconvtoext.pm” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/crsconfig_addparams.sbs” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/CLSR.pm” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/dropdb.pl” did not match the expected value on node “rac1”. [Expected = “root(0)|root(0)” ; Found = “grid(54322)”]
PRVH-0109 : Path “/u01/app/19.3.0/grid/lib/libacfs19.so” with permissions “rwxr-xr-x” does not have write permissions for the file’s group on node “rac1”.
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/tfa_setup” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/roothas.sh” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/rootcrs.sh” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVH-0109 : Path “/u01/app/19.3.0/grid/crs/install/rhpdata” with permissions “rwxr-xr-x” does not have write permissions for the file’s group on node “rac1”.
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/rhpdata” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/post_gimr_ugdg.pl” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVG-2031 : Owner of file “/u01/app/19.3.0/grid/crs/install/perlhasgen.pm” did not match the expected value on node “rac1”. [Expected = “root(0)” ; Found = “grid(54322)”]
PRVH-0109 : Path “/u01/app/19.3.0/grid/lib/libedtn19.a” with permissions “rwxr-xr-x” does not have write permissions for the file’s group on node “rac1”.
PRVH-0109 : Path “/u01/app/19.3.0/grid/lib/libskgxpcompat.so” with permissions “rwxr-xr-x” does not have write permissions for the file’s group on node “rac1”.
CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.
[root@luda lib]#
Fix the ownership and permissions manually, one item at a time, according to the error messages:
[root@luda grid]# chown root:oinstall admin -R
[root@luda grid]# pwd
/u01/app/grid
[root@luda grid]# chown root:oinstall * -R
[root@luda grid]# cd /u01/app/19.3.0/grid/crs/install/
[root@luda install]# ll
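After the ownership fixes, re-running the CVU software check is an easy way (illustrative, not from the original post) to see whether anything is still off before retrying crsctl start crs:

[grid@luda ~]$ cluvfy comp software -n rac1 -verbose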

Oracle 19c RAC: upgrading to the 19.6 RU fails with OPatch Prerequisite check "CheckApplicable" failed

Oracle 19c New Features at a Glance
Availability
General
Simplified management of database parameters in the DG Broker
Dynamically change the Fast-Start Failover (FSFO) target
Observe-only mode for Broker FSFO
Flash back the standby when the primary is flashed back
Propagate restore points from the primary to the standby
In-Memory column store support for DG multi-instance redo apply
DML redirection in ADG
Recovery catalog support for PDBs
Periodic flashback log purging for more predictable FRA sizing
New DG parameters for tuning automatic outage resolution
Finer-grained supplemental logging
Sharding
Propagation of parameter settings across shards
Multiple PDB shards in the same CDB
Multiple table families for system-managed sharding
Multi-shard queries on a standby shard catalog database
Generation of unique sequence numbers across shards
Big Data and Data Warehousing
General
Improved SQL diagnostics and repair
Automatic indexing
Bitmap-based COUNT DISTINCT SQL functions
Performance improvements for big data and In-Memory external tables
Automatic resolution of SQL plan regressions
Real-time statistics
High-frequency optimizer statistics gathering
Hybrid partitioned tables
Database Overall
Automated installation, configuration, and patching
Duplicate a database with DBCA in silent mode
Clone a remote PDB with DBCA in silent mode
Relocate a PDB to another CDB with DBCA in silent mode
Simplified image-based Oracle Database Client installation
Automatic execution of root scripts during Oracle Database installation
Dry-run validation of Oracle Clusterware upgrades
Automated upgrade, migration, and utilities
Data Pump import can exclude the encryption clause
Data Pump allows tablespaces to stay read-only during TTS import
Test mode for transportable tablespaces in Data Pump
Data Pump resource usage restrictions
General
Data Pump command-line parameter ENABLE_SECURE_ROLES
Data Pump Import supports wildcard dump file names for URL-based dump files maintained in object stores
Data Pump command-line parameter CREDENTIAL allows Import from object stores
Performance
General
SQL Quarantine
Automatically enabled Resource Manager for the In-Memory column store
Wait on In-Memory population
Memoptimized Rowstore – Fast Ingest
Automatic Database Diagnostic Monitor (ADDM) support for PDBs
Real-time SQL monitoring
Workload capture and replay in a PDB
RAC and Grid
General
Parity Protected Files
Automated PDB relocation
Automated Transaction Draining for Oracle Grid Infrastructure Upgrades
Patching and upgrade support for Oracle Restart
Zero-downtime Oracle Grid Infrastructure patching
Security
General
New FLUSH PASSWORDFILE_METADATA_CACHE clause in the ALTER SYSTEM command
Automatic file renaming for transparent online encryption conversion in non-OMF mode
Key Management of Encrypted Oracle-Managed Tablespaces in Transparent Data Encryption
Support for additional algorithms for offline tablespace encryption
Support for Host Name-Based Partial DN Matching for Host Certificates
Privilege Analysis Now Available in Oracle Database Enterprise Edition
Support for Oracle Native Encryption and SSL Authentication for Different Users Concurrently
Ability to grant or revoke administrative privileges to and from schema-only accounts
Automatic Support for Both SASL and Non-SASL Active Directory Connections
Unified audit top-level statements
Passwords removed from Oracle Database accounts
Signature-Based Security for LOB Locators
New EVENT_TIMESTAMP_UTC Column in the UNIFIED_AUDIT_TRAIL View
New PDB_GUID Audit Record Field for SYSLOG and the Windows Event Viewer
Database Vault Operations Control for Infrastructure Database Administrators
Database Vault Command Rule Support for Unified Audit Policies
Availability
General
Simplified management of database parameters in the DG Broker
All Data Guard-related database parameters can now be managed and set with either the ALTER SYSTEM command or the EDIT DATABASE … SET PARAMETER command in DGMGRL. The ALL keyword can be used to change a parameter on every database in the DG configuration at once, instead of modifying each database individually.

Dynamically change the Fast-Start Failover (FSFO) target
Previously, a DBA had to disable Fast-Start Failover (FSFO) in order to change the FSFO target standby database. Starting with 19c, the FSFO target can be changed dynamically to another standby database in the target list without disabling FSFO first.

Observe-only mode for Broker FSFO
When configuring the Broker's FSFO feature, a DBA can now put it into observe-only mode to create a test mode that shows when a failover or other interaction would have occurred during normal production processing. This lets users tune FSFO properties more precisely and discover which situations in their environment would trigger an automatic failover, making it easier to justify using automatic failover to reduce failover recovery time.

This configuration allows users to test an automatic failover setup without any actual impact on the production database. It improves on the existing failover validation already present in the Broker and helps users understand the FSFO automatic failover process more easily.

Flash back the standby when the primary is flashed back
Flashback Database moves the entire database back to an older point in time and opens it with RESETLOGS. In Data Guard, if the primary database is flashed back, the standby is no longer synchronized with it. In earlier releases, bringing the standby to the same point in time as the primary required a manual flashback of the standby database. 19c introduces a new parameter that lets the standby flash back automatically when Flashback Database is performed on the primary.

Automatically flashing back the standby when the primary is flashed back reduces time, effort, and human error, leading to faster resynchronization and a shorter recovery time objective (RTO).

Propagate restore points from the primary to the standby
Previously, normal or guaranteed restore points were defined on the primary database to allow fast point-in-time recovery in case of logical corruption. However, a restore point is stored in the control file and is not propagated to the standby; after a failover, when the standby becomes the primary, the restore point information is lost. This new feature ensures that restore points are propagated from the primary to the standby, so they remain available even after a failover.

In-Memory column store support for DG multi-instance redo apply
Previously, multi-instance redo apply and the In-Memory column store could not be enabled at the same time. Starting with 19c, they can be used together.

DML redirection in ADG
ADG DML redirection allows DML to be executed on an ADG standby. When the DML is issued, the operation is passed to the associated primary database and executed there, and the redo for the transaction is then applied to the standby. In short, occasional DML issued against the standby is transparently run on the primary on the application's behalf.
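A minimal sketch of enabling it, using the documented ADG_REDIRECT_DML parameter and session-level syntax (table and column names are illustrative):

-- standby-wide:
SQL> alter system set adg_redirect_dml=true;
-- or per session, on the standby:
SQL> alter session enable adg_redirect_dml;
-- DML on the standby is then executed on the primary and its redo applied back:
SQL> update sales set status = 'SHIPPED' where order_id = 100;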

Recovery catalog support for PDBs
A pluggable database (PDB) is now supported as a target database, and virtual private catalog (VPC) users can be used for finer-grained control over the permissions to perform backup and restore operations at the PDB level. Metadata views are also restricted, so a VPC user can only see data it has been granted access to. In earlier releases, connecting to a recovery catalog was not supported when the target database was a PDB.

Oracle 19c provides full backup and recovery flexibility for container database (CDB) and PDB-level backup and restore, including recovery catalog support.

Periodic flashback log purging for more predictable FRA sizing
Many databases use the fast recovery area (FRA), usually sized with the DB_RECOVERY_FILE_DEST_SIZE initialization parameter. Previously, flashback logs were not purged when FRA space was needed, which created space pressure in the FRA; in many cases the only remedy was to turn flashback logging off and back on. In 19c, flashback space usage becomes predictable from a storage management perspective, because flashback no longer consumes space beyond what the retention target requires. The feature also lets users control space pressure by adjusting the flashback log retention time.

The FRA is critical to a database because it stores backups, online redo logs, archived redo logs, and flashback logs. When the FRA fills up, normal database operation is affected, with serious consequences.

New DG parameters for tuning automatic outage resolution
Data Guard runs several processes on the primary and standby that handle redo transport and archiving and communicate with each other over the network. In certain failure situations, such as network hangs, disconnects and disk I/O problems, these processes can hang, delaying redo transport and gap resolution. Data Guard has an internal mechanism to detect these hung processes and terminate them so that normal outage resolution can proceed. In Oracle 19c, the DBA can tune the waiting period of this detection cycle using two new parameters, DATA_GUARD_MAX_IO_TIME and DATA_GUARD_MAX_LONGIO_TIME, adjusting a specific DG configuration to the network and disk I/O behavior of the environment.
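Both are regular initialization parameters specified in seconds; a sketch with arbitrary example values (check the 19c reference for the defaults and for whether your environment allows the change to be made online):

SQL> alter system set data_guard_max_io_time=60;
SQL> alter system set data_guard_max_longio_time=120;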

Finer-grained supplemental logging
Supplemental logging was designed and implemented for logical standby or full database replication requirements, which adds unnecessary overhead in environments where only a subset of tables is replicated. Fine-grained supplemental logging gives partial-replication users a way to disable supplemental logging for uninteresting tables, so that even when supplemental logging is enabled at the database or schema level, those tables incur no supplemental logging overhead.

When only some tables in the database require supplemental logging, for example in a GoldenGate partial replication configuration, this feature can significantly reduce the overhead in terms of resource usage and redo generation.

Sharding
Propagation of parameter settings across shards
Before 19c, the DBA had to modify parameter values shard by shard. Starting with 19c, the change only needs to be made once on the shard catalog database.

Multiple PDB shards in the same CDB
Previously, only one PDB in a given CDB was supported as a shard. Some restrictions still apply; for example, different PDBs in the same CDB must be shards of different sharded databases.

Multiple table families for system-managed sharding
Previously, a sharded database supported only one table family, regardless of the sharding method.

Multi-shard queries on a standby shard catalog database
Previously, multi-shard queries were supported only on the primary shard catalog database.

Generation of unique sequence numbers across shards
Previously, sequence uniqueness across all shards had to be guaranteed manually. Starting with 19c, Oracle handles this.

Big Data and Data Warehousing
General
Improved SQL diagnostics and repair
SQL diagnostic and repair tools such as the SQL Test Case Builder and the SQL Repair Advisor have been enhanced to provide better diagnosis and repair capabilities for problematic SQL statements.

Automatic indexing
The automatic indexing feature automates index management tasks, such as creating, rebuilding, and dropping indexes in an Oracle database based on changes in the application workload.
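A minimal sketch of switching it on and reviewing its work with the DBMS_AUTO_INDEX package (the schema name is illustrative; availability of the feature depends on platform and licensing):

SQL> exec dbms_auto_index.configure('AUTO_INDEX_MODE','IMPLEMENT');
SQL> exec dbms_auto_index.configure('AUTO_INDEX_SCHEMA','SALES', true);
SQL> select dbms_auto_index.report_activity() from dual;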

Bitmap-based COUNT DISTINCT SQL functions
Approximate COUNT DISTINCT, introduced in 12c for quickly estimating the number of distinct values in a column, continues to be enhanced in 19c with bitmap-based functions that greatly improve performance and accuracy.

Performance improvements for big data and In-Memory external tables
In-Memory external tables add support for the ORACLE_HIVE and ORACLE_BIGDATA drivers, parallel query, RAC, Data Guard, and on-demand population.

Automatic resolution of SQL plan regressions
SQL Plan Management searches the AWR for SQL statements. Prioritizing them by highest load, it looks for alternative plans in all available sources and adds better-performing plans to the SQL plan baseline. Oracle Database also provides a plan comparison facility and improved hint reporting.

Real-time statistics
Oracle automatically maintains optimizer statistics while DML is being executed.

High-frequency optimizer statistics gathering
Users can have statistics for selected objects gathered at a higher frequency, which helps the optimizer produce more accurate execution plans.
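A hedged sketch of the DBMS_STATS global preferences behind the high-frequency task (preference names as recalled from the 19c documentation; verify before use):
SQL> exec DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_STATUS','ON');
SQL> exec DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_MAX_RUN_TIME','600');  -- seconds per run
SQL> exec DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_INTERVAL','300');      -- run every 5 minutes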

Hybrid partitioned tables
The hybrid partitioned tables feature extends Oracle partitioning by allowing partitions to reside both in Oracle Database segments and in external files and sources. This feature significantly enhances the partitioning functionality for Big Data SQL, where large portions of a table can reside in external partitions.
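A small sketch of a hybrid partitioned table, assuming an existing directory object SALES_DIR and a CSV file; all names are illustrative:
SQL> CREATE TABLE sales_hybrid (
       sale_year NUMBER,
       amount    NUMBER
     )
     EXTERNAL PARTITION ATTRIBUTES (
       TYPE ORACLE_LOADER
       DEFAULT DIRECTORY sales_dir
       ACCESS PARAMETERS (FIELDS TERMINATED BY ',')
     )
     PARTITION BY RANGE (sale_year) (
       PARTITION p_hist VALUES LESS THAN (2018) EXTERNAL LOCATION ('sales_2017.csv'),
       PARTITION p_cur  VALUES LESS THAN (2020)
     );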

Database-wide new features
Automated installation, configuration and patching
Duplicating a database with DBCA in silent mode
A database can be duplicated with the createDuplicateDB command of DBCA.

Cloning a remote PDB with DBCA in silent mode
A PDB can be cloned with the createFromRemotePDB command of DBCA.

Relocating a PDB to another CDB with DBCA in silent mode
A PDB can be relocated to another CDB with the relocatePDB command of DBCA. Hedged examples of all three commands follow below.
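Hedged examples of the three silent-mode commands (the flag names are illustrative and may differ between releases; check dbca -help for the exact options):
$ dbca -silent -createDuplicateDB -gdbName orcl.example.com -sid orcldup \
       -primaryDBConnectionString srchost:1521/orcl.example.com -sysPassword ********
$ dbca -silent -createPluggableDatabase -createFromRemotePDB -pdbName pdb2 \
       -sourceDB cdb2 -remotePDBName pdb1 -remoteDBConnString srchost:1521/cdb1
$ dbca -silent -relocatePDB -pdbName pdb1 -sourceDB cdb2 \
       -remotePDBName pdb1 -remoteDBConnString srchost:1521/cdb1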

Simplified image-based Oracle Database Client installation
Starting with Oracle Database 19c, the Oracle Database Client software is available as an image file for download and installation. You extract the image into the directory where you want the Oracle home to be and then run the runInstaller script to start the Oracle Database Client installation. The client binaries also continue to be provided in the traditional (non-image) format.

Root scripts can be run automatically during Oracle Database installation
Starting with Oracle Database 19c, the database installer (setup wizard) provides options to set up permissions so that the root configuration scripts are run automatically, as required, during a database installation. You can still choose to run the root configuration scripts manually.

Dry-run validation for Oracle Clusterware upgrades
Starting with 19c, the upgrade requirements can be validated in a dry run (a simulated upgrade) before performing the actual upgrade.

Automated upgrade, migration and tools
Data Pump import can omit the ENCRYPTION clause
The new OMIT_ENCRYPTION_CLAUSE parameter allows the encryption clause to be omitted for objects that have encrypted columns.

Data Pump allows tablespaces to stay read-only during a TTS import
During a transportable tablespace operation, the tablespace can now be kept read-only, and therefore available for queries, on both the source and the target database, instead of the earlier behavior where only the source stayed read-only and the tablespace could not be used on the target.

Test mode for transportable tablespaces in Data Pump
The transportable tablespace test mode performs a metadata-only export test using transportable tablespaces or full transportable export/import. It also removes the requirement for the source database tablespaces to be in read-only mode.

DBAs can now determine more easily how long an export will take, and discover unforeseen issues that the closure check does not report.
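A hedged expdp sketch, assuming the TTS_CLOSURE_CHECK=TEST_MODE value that 19c uses for this test mode (directory, dump file and tablespace names are illustrative):
$ expdp system DIRECTORY=dpump_dir DUMPFILE=tts_test.dmp LOGFILE=tts_test.log \
        TRANSPORT_TABLESPACES=users TTS_CLOSURE_CHECK=TEST_MODE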

Data Pump supports resource usage limits
Resource usage during Data Pump export and import can be limited through two new parameters: MAX_DATAPUMP_JOBS_PER_PDB and MAX_DATAPUMP_PARALLEL_PER_JOB.
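Both limits are initialization parameters and can be set system-wide or per PDB; the values below are only illustrative:
SQL> ALTER SYSTEM SET MAX_DATAPUMP_JOBS_PER_PDB = 4 SCOPE=BOTH;
SQL> ALTER SYSTEM SET MAX_DATAPUMP_PARALLEL_PER_JOB = 8 SCOPE=BOTH;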

General new features
Data Pump command-line parameter ENABLE_SECURE_ROLES
By default, Data Pump no longer enables password-protected secure roles. Starting with 19c, you must explicitly enable password-protected roles for an individual export or import job. A new command-line parameter, ENABLE_SECURE_ROLES=YES|NO, can be used to explicitly enable or disable these types of roles for an individual export or import job.

Data Pump Import supports wildcard dump file names for URL-based dump files maintained in object stores
Data Pump command-line parameter CREDENTIAL allows Import from object stores
Performance
General new features
SQL quarantine
SQL statements that are terminated by Oracle Resource Manager because they consume excessive CPU and I/O resources can be quarantined automatically. The execution plans associated with the terminated SQL statements are quarantined to prevent them from being executed again.
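A hedged PL/SQL sketch of creating a quarantine configuration by hand with the DBMS_SQLQ package (the SQL_ID and threshold are illustrative; check the procedure and parameter names against the 19c PL/SQL Packages reference):
SQL> declare
  l_name varchar2(128);
begin
  -- create a quarantine configuration for one SQL_ID
  l_name := dbms_sqlq.create_quarantine_by_sql_id(sql_id => '8vy3b2zqxk7mn');
  -- quarantine the plan once it exceeds 10 seconds of CPU time
  dbms_sqlq.alter_quarantine(quarantine_name => l_name,
                             parameter_name  => 'CPU_TIME',
                             parameter_value => '10');
end;
/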

Resource Manager automatically enabled for Database In-Memory
When INMEMORY_SIZE is greater than 0, Resource Manager is enabled automatically.

Waiting for In-Memory population
The new DBMS_INMEMORY_ADMIN.POPULATE_WAIT function blocks until the In-Memory objects of the specified priority have been populated up to the specified percentage.

The new function ensures that the specified In-Memory objects have been populated before applications are allowed to access them. For example, a database may contain many in-memory tables with various priority settings. While in a restricted session, you can use the POPULATE_WAIT function to make sure every In-Memory table is fully populated. Afterwards you can disable the restricted session, so that applications only ever query the In-Memory representation of the tables.
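A hedged PL/SQL sketch; POPULATE_WAIT is a function returning a status code, and the parameter names below are as recalled from the DBMS_INMEMORY_ADMIN documentation:
SQL> declare
  l_status number;
begin
  -- block for up to 10 minutes until all HIGH-priority objects are 100% populated
  l_status := dbms_inmemory_admin.populate_wait(priority => 'HIGH', percentage => 100, timeout => 600);
  dbms_output.put_line('populate_wait status: ' || l_status);
end;
/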

Memoptimized Rowstore – Fast Ingest
Automatic Database Diagnostic Monitor (ADDM)支持PDB
Real-time SQL monitoring
Workload capture and replay in a PDB
Previously, workload capture and replay could only be performed at the CDB root level. Starting with 19c, they are supported at the PDB level.

RAC and Grid Infrastructure
General new features
Parity Protected Files
The REDUNDANCY file type property specifies the redundancy of a file group. The PARITY value specifies single parity for redundancy. Parity protection is intended for write-once files, such as archive logs and backup sets.

Traditional two-way or three-way ASM mirroring consumes a large amount of space when used for the files associated with database backup operations. Backup files are write-once files, and this feature allows them to be protected by parity instead of conventional mirroring, which saves a significant amount of space.

Automated PDB relocation
In Oracle Grid Infrastructure, Fleet Patching and Provisioning can be used to automate the relocation of a PDB from one CDB to another.

Automated Transaction Draining for Oracle Grid Infrastructure Upgrades
Oracle Restart patching and upgrading
Fleet Patching and Provisioning can be used to patch and upgrade Oracle Restart. In previous releases, Oracle Restart environments required the user to perform patching and upgrade operations, which often involved manual intervention. Fleet Patching and Provisioning automates these procedures.

Zero-downtime patching for Oracle Grid Infrastructure
Security
General new features
New FLUSH PASSWORDFILE_METADATA_CACHE clause for the ALTER SYSTEM command
The new FLUSH PASSWORDFILE_METADATA_CACHE clause of the ALTER SYSTEM command refreshes the metadata cache with the latest details of the database password file. The refreshed details can then be retrieved by querying the V$PASSWORDFILE_INFO view.

This is useful when the database password file name or location has been changed and the metadata cache needs to be refreshed with the details of the updated password file.
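A minimal example using the statement and view named above:
SQL> ALTER SYSTEM FLUSH PASSWORDFILE_METADATA_CACHE;
SQL> SELECT * FROM V$PASSWORDFILE_INFO;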

Automatic renaming support for transparent online encryption conversion in non-OMF mode
Starting with 19c, for Transparent Data Encryption online conversion in non-OMF mode, the ADMINISTER KEY MANAGEMENT SQL statement no longer requires the FILE_NAME_CONVERT clause. The data files keep their original names.

Key Management of Encrypted Oracle-Managed Tablespaces in Transparent Data Encryption
Support for additional algorithms for offline tablespace encryption
Support for Host Name-Based Partial DN Matching for Host Certificates
Privilege Analysis Now Available in Oracle Database Enterprise Edition
Support for Oracle Native Encryption and SSL Authentication for Different Users Concurrently
Ability to grant or revoke administrative privileges to and from schema-only accounts
Automatic Support for Both SASL and Non-SASL Active Directory Connections
Unified auditing of top-level statements
Passwords removed from Oracle Database accounts
Signature-Based Security for LOB Locators
New EVENT_TIMESTAMP_UTC Column in the UNIFIED_AUDIT_TRAIL View
New PDB_GUID Audit Record Field for SYSLOG and the Windows Event Viewer
Database Vault Operations Control for Infrastructure Database Administrators
Database Vault Command Rule Support for Unified Audit Policies

https://docs.oracle.com/en/database/oracle/oracle-database/19/newft/new-features.html#GUID-06A15128-1172-48E5-8493-CD670B9E57DC

Oracle 19c new features at a glance

(1) Recommended initialization parameter settings

(1.1) sec_case_sensitive_logon
Changing this is not recommended in 12.1.0.1. The default value is TRUE. If it is set to FALSE, the instance reports ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance at startup, but the setting still takes effect, i.e. logging on with a case-insensitive password is possible.

(1.2) SQLNET.ALLOWED_LOGON_VERSION_SERVER
The default value in 12c is 11, so a connection from, say, a 10.2.0.5 JDBC driver fails with ORA-28040. The problem can be worked around by setting SQLNET.ALLOWED_LOGON_VERSION_SERVER=8, but since JDBC drivers older than 11.1 are no longer supported by Oracle, upgrading the JDBC driver is the recommended solution.
For the mapping between jar files and JDBC driver versions, see Doc ID 401934.1: the 10.2 JDBC driver uses ojdbc14.jar, the 11.2 driver uses ojdbc6.jar, and the 12.1.0 driver uses ojdbc7.jar.

SQLNET.ALLOWED_LOGON_VERSION_SERVER can take the values { 8 | 10 | 11 | 12 | 12a }, with the following meanings:
12a for Oracle Database 12c authentication protocols (strongest protection)
12 for the critical patch updates CPUOct2012 and later Oracle Database 11g authentication protocols (recommended)
11 for Oracle Database 11g authentication protocols (default)
10 for Oracle Database 10g authentication protocols
8 for Oracle9i authentication protocol
The default value is 11; the recommended value is 12 (provided you have no clients older than 10.2.0.5).
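For example, in sqlnet.ora on the database server (typically $ORACLE_HOME/network/admin/sqlnet.ora):
SQLNET.ALLOWED_LOGON_VERSION_SERVER=12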

(1.3) AUDIT_TRAIL
The possible values are { none | os | db [, extended] | xml [, extended] }; the default is db or none.
This relates to the 12c Unified Auditing feature. Whether unified auditing is enabled can be checked with: select VALUE from V$OPTION where PARAMETER='Unified Auditing';

Unified Auditing runs in mixed mode by default.
Recommendations:
If you did not use auditing before, you can set it to none.
If you used auditing before, setting it to db is recommended.
For more information, see http://tinyurl.com/UnifiedAuditing

(1.4) DEFERRED_SEGMENT_CREATION
The default has been TRUE since 11.2; setting it to FALSE in 12c is recommended.

(1.5) JOB_QUEUE_PROCESSES
The default has been 1000 since 11.1; setting it to the number of CPU cores is recommended.

(1.6) _DATAFILE_WRITE_ERRORS_CRASH_INSTANCE
The default value is TRUE: any write I/O error on a datafile crashes the database.
Before 11.2.0.2 the default was FALSE, i.e. the affected (non-SYSTEM) datafile was simply taken offline rather than crashing the instance. From 11.2.0.2 onwards the default is TRUE.
Recommendation: be aware of this behavior change introduced in 11.2.0.2.

(1.7) MAX_STRING_SIZE
This is a new parameter in 12c. The default value is STANDARD.
It controls the maximum length of the VARCHAR2, NVARCHAR2 and RAW types. With STANDARD, the behavior is the same as before 12c: VARCHAR2 and NVARCHAR2 are limited to 4000 bytes and RAW to 2000 bytes.
When changed to EXTENDED, the 32k strings feature is enabled and VARCHAR2, NVARCHAR2 and RAW can be up to 32767 bytes long.
How to change it:
1. startup upgrade
2. ALTER SYSTEM SET MAX_STRING_SIZE = EXTENDED;
3. Run @?/rdbms/admin/utl32k.sql
Note that this is a one-way change: once converted it cannot be changed back (other than with flashback).

Also note that natively created 32k strings are stored out-of-line as BLOBs (still BasicFile LOBs), whereas columns converted to 32k strings are stored in-line with row chaining,
so LOB efficiency should be considered before moving to 32k strings.
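A consolidated sketch of the conversion for a non-CDB instance (in a multitenant environment the same steps have to be run per container; test on a clone first because the change is one-way):
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP UPGRADE
SQL> ALTER SYSTEM SET MAX_STRING_SIZE = EXTENDED;
SQL> @?/rdbms/admin/utl32k.sql
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP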

(2) Other 12c parameters:

(2.1) _OPTIMIZER_AGGR_GROUPBY_ELIM
Values: { TRUE | FALSE }; the default is TRUE.
Recommendation: FALSE - Wrong Results with GROUP BY Clause in Nested Query (Doc ID 19567916.8)

(2.2) INMEMORY_FORCE
Values: { DEFAULT | OFF }; the default is DEFAULT.
Explanation: In-Memory optimization
Recommendation: OFF, unless you have an Oracle Database In-Memory license

(2.3) OPTIMIZER_DYNAMIC_SAMPLING
Controls the level of dynamic sampling used when statistics are missing.
Values: { 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 }; the default is 2.
Explanation:
a. 0: off
b. 2: check no more than 64 blocks - statistics are generated during parse
c. 11: new in 12c; the number of blocks to sample is chosen automatically by the system, and the sampling results are kept in the statistics repository for later reuse.
Recommendation: see the SQL Tuning Guide in the online documentation; the best practice is to set it at the session level.

(2.4) AWR Lite Snapshots
See Doc ID 1993045.1; LITE mode is recommended for manual snapshots.
-- the default is BESTFIT
_AWR_SNAPSHOT_LEVEL = BASIC | LITE | TYPICAL | ALL | BESTFIT
-- for manual snapshots, LITE is recommended
SQL> exec dbms_workload_repository.create_snapshot('LITE');

(2.5) _OPTIMIZER_COST_BASED_TRANSFORMATION
The default has been ON since 10.2, but in releases before 11.2.0.3 setting it to OFF was recommended.

(2.6) SESSION_CACHED_CURSORS
The default is 50; a setting of 200 is suggested, then tune it following Doc ID 208857.1.
A value that is too large can lead to shared pool fragmentation.

(2.7) _MEMORY_IMM_MODE_WITHOUT_AUTOSGA
The default is TRUE; set it to FALSE if you want to disable the behavior of "stealing" buffer cache granules for the shared pool in immediate mode.

(2.8) OPTIMIZER_USE_PENDING_STATISTICS
The default is FALSE; it can be set to TRUE at the session level to test the effect of newly gathered but not yet published statistics on SQL.

(2.9) OPTIMIZER_USE_INVISIBLE_INDEXES
Controls whether the optimizer can use invisible indexes; the default is FALSE.
CREATE INDEX idx_ename ON emp(ename) INVISIBLE;
ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES=TRUE;

(2.10) Other recommended values:
a. _optimizer_adaptive_plans=FALSE (needs evaluation; see Adaptive Query Optimization, Doc ID 2031605.1)
b. _optimizer_unnest_scalar_sq=FALSE (Bug 19894622 - ORA-600 [kkqcsfixfro:1 -- frooutj] error occur in 12c (Doc ID 19894622.8))
c. _rowsets_enabled=FALSE (Bug 22173980: WRONG RESULTS WHEN "_ROWSETS_ENABLED" = TRUE)
d. _optimizer_reduce_groupby_key=FALSE (Bug 20634449 - Wrong results from OUTER JOIN with a bind variable and a GROUP BY clause in 12.1.0.2)
e. _kks_obsolete_dump_threshold=0 or 8 (Huge Trace Files Created Containing "----- Cursor Obsoletion Dump sql_id=%s -----" (Doc ID 1955319.1); the parameter ranges from 0 to 8 and defaults to 1. 0 means never dump; 8 means dump only after the parent cursor has been obsoleted 8 times.)

(4) Finding patch information:

This used to be queried through dba_registry_history; the following methods are now available:
SQL> exec dbms_qopatch.get_sqlpatch_status;

The following statement shows the OPatch inventory (including its location):
SQL> select xmltransform(DBMS_QOPATCH.GET_OPATCH_LSINVENTORY, DBMS_QOPATCH.GET_OPATCH_XSLT) from dual;

To check whether a specific patch is installed:
SQL> select xmltransform(DBMS_QOPATCH.IS_PATCH_INSTALLED('19303936'), DBMS_QOPATCH.GET_OPATCH_XSLT) from dual;

To list the status of all patches:
SQL> select xmltransform(DBMS_QOPATCH.GET_OPATCH_LIST, DBMS_QOPATCH.GET_OPATCH_XSLT) from dual;

(5) Catalog upgrade changes:

RMAN Catalog Upgrade:
SQL> @$ORACLE_HOME/rdbms/admin/dbmsrmansys.sql    <<<< this step must be run first (it did not exist before), then run UPGRADE CATALOG
$ rman CATALOG my_catalog_owner@catdb
recovery catalog database Password:
RMAN> UPGRADE CATALOG;
RMAN> UPGRADE CATALOG;    <<<< entered a second time to confirm
RMAN> EXIT;

(6) Incremental statistics enhancements (incremental statistics combined with partition exchange)

-- First set the staleness rule to USE_STALE_PERCENT: a "changed" partition will not get new statistics unless at least the stale percentage of its rows has changed.
SQL> exec DBMS_STATS.SET_DATABASE_PREFS('INCREMENTAL_STALENESS','USE_STALE_PERCENT');
-- Set the stale percentage to 12% (the default is 10%)
SQL> exec DBMS_STATS.SET_DATABASE_PREFS('STALE_PERCENT','12');

(7) Reviewing optimizer statistics operations:

-- Report statistics operations for the whole database:
SQL> variable mystatrep2 clob;
SQL> set long 1000000
SQL> begin
2 :mystatrep2 := DBMS_STATS.REPORT_STATS_OPERATIONS(since=>SYSTIMESTAMP-
3 1, until=>SYSTIMESTAMP, detail_level=>'TYPICAL', format=>'TEXT');
4 end;
5 /

PL/SQL procedure successfully completed.

SQL>
SQL> set serverout on
SQL> set long 999999999
SQL> set line 10000
SQL> set pages 10000
SQL> col mystatrep2 for a200
SQL> print mystatrep2

MYSTATREP2
——————————————————————————————————————————————————————————————————–
——————————————————————————————————————————————————————————————————-
| Operation Id | Operation | Target | Start Time | End Time | Status | Total Tasks | Successful Tasks | Failed Tasks | Active Tasks |
——————————————————————————————————————————————————————————————————-
| 366 | export_stats_for_dp | TEST | 13-JUL-16 02.12.59.130000 PM +08:00 | 13-JUL-16 02.12.59.203000 PM +08:00 | COMPLETED | 0 | 0 | 0 | 0 |
——————————————————————————————————————————————————————————————————-
| 365 | export_stats_for_dp | TEST | 13-JUL-16 02.08.18.857000 PM +08:00 | 13-JUL-16 02.08.18.880000 PM +08:00 | COMPLETED | 0 | 0 | 0 | 0 |
——————————————————————————————————————————————————————————————————-
| 364 | export_stats_for_dp | TEST | 13-JUL-16 02.04.13.425000 PM +08:00 | 13-JUL-16 02.04.13.480000 PM +08:00 | COMPLETED | 0 | 0 | 0 | 0 |
——————————————————————————————————————————————————————————————————-
| 363 | export_stats_for_dp | TEST | 13-JUL-16 01.57.50.812000 PM +08:00 | 13-JUL-16 01.57.50.846000 PM +08:00 | COMPLETED | 0 | 0 | 0 | 0 |
——————————————————————————————————————————————————————————————————-
| 362 | export_stats_for_dp | TEST | 13-JUL-16 01.56.34.637000 PM +08:00 | 13-JUL-16 01.56.34.667000 PM +08:00 | COMPLETED | 0 | 0 | 0 | 0 |
——————————————————————————————————————————————————————————————————-
| 361 | import_stats_for_dp | TEST | 13-JUL-16 01.55.50.018000 PM +08:00 | 13-JUL-16 01.55.50.294000 PM +08:00 | COMPLETED | 0 | 0 | 0 | 0 |
——————————————————————————————————————————————————————————————————-
| 360 | export_stats_for_dp | TEST | 13-JUL-16 01.53.54.840000 PM +08:00 | 13-JUL-16 01.53.54.879000 PM +08:00 | COMPLETED | 0 | 0 | 0 | 0 |
——————————————————————————————————————————————————————————————————-
| 359 | import_stats_for_dp | TEST | 13-JUL-16 01.52.17.822000 PM +08:00 | 13-JUL-16 01.52.18.059000 PM +08:00 | COMPLETED | 0 | 0 | 0 | 0 |
——————————————————————————————————————————————————————————————————-
| 358 | import_stats_for_dp | TEST | 13-JUL-16 01.46.44.289000 PM +08:00 | 13-JUL-16 01.46.44.611000 PM +08:00 | COMPLETED | 0 | 0 | 0 | 0 |
——————————————————————————————————————————————————————————————————-
| 357 | export_stats_for_dp | TEST | 13-JUL-16 01.43.12.810000 PM +08:00 | 13-JUL-16 01.43.13.543000 PM +08:00 | COMPLETED | 0 | 0 | 0 | 0 |
——————————————————————————————————————————————————————————————————-
| 356 | import_stats_for_dp | TEST | 13-JUL-16 01.40.03.349000 PM +08:00 | 13-JUL-16 01.40.03.650000 PM +08:00 | COMPLETED | 0 | 0 | 0 | 0 |
——————————————————————————————————————————————————————————————————-
| 355 | import_stats_for_dp | TEST | 13-JUL-16 01.37.36.601000 PM +08:00 | 13-JUL-16 01.37.36.843000 PM +08:00 | COMPLETED | 0 | 0 | 0 | 0 |
——————————————————————————————————————————————————————————————————-
| 354 | import_stats_for_dp | TEST | 13-JUL-16 01.36.32.072000 PM +08:00 | 13-JUL-16 01.36.32.321000 PM +08:00 | COMPLETED | 0 | 0 | 0 | 0 |
——————————————————————————————————————————————————————————————————-
| 353 | import_stats_for_dp | TEST | 13-JUL-16 01.34.56.514000 PM +08:00 | 13-JUL-16 01.34.56.790000 PM +08:00 | COMPLETED | 0 | 0 | 0 | 0 |
——————————————————————————————————————————————————————————————————-
| 352 | import_stats_for_dp | TEST | 13-JUL-16 12.01.25.756000 PM +08:00 | 13-JUL-16 12.01.26.022000 PM +08:00 | COMPLETED | 0 | 0 | 0 | 0 |
——————————————————————————————————————————————————————————————————-
| 351 | import_stats_for_dp | TEST | 13-JUL-16 11.45.08.131000 AM +08:00 | 13-JUL-16 11.45.08.703000 AM +08:00 | COMPLETED | 0 | 0 | 0 | 0 |
——————————————————————————————————————————————————————————————————-
| 350 | export_stats_for_dp | TEST | 13-JUL-16 11.42.21.767000 AM +08:00 | 13-JUL-16 11.42.24.557000 AM +08:00 | COMPLETED | 0 | 0 | 0 | 0 |
——————————————————————————————————————————————————————————————————-

SQL>
-- Report statistics gathering for a specific schema:
SQL> set serverout on
SQL> set long 999999999
SQL> set line 10000
SQL> set pages 10000
SQL> col my_report for a200
SQL> variable my_report clob;
SQL> BEGIN
2 :my_report := DBMS_STATS.REPORT_GATHER_SCHEMA_STATS(ownname => 'TEST',
3 detail_level => 'TYPICAL', format => 'TEXT');
4 END;
5 /

PL/SQL procedure successfully completed.

SQL>
SQL> print my_report

MY_REPORT
——————————————————————————————————————————————————————————————————–
——————————————————————————————————————————————————————————————————-
| Operation Id | Operation | Target | Start Time | End Time | Status | Total Tasks | Successful Tasks | Failed Tasks | Active Tasks |
——————————————————————————————————————————————————————————————————-
| 368 | gather_schema_stats (reporting mode) | TEST | | | | 1 | | | |
——————————————————————————————————————————————————————————————————-
——————————————————————————————————————————————————————————————————-
| |
| ————————————————————————————————————————————————————————————————— |
| T A S K S |
| ————————————————————————————————————————————————————————————————— |
| ——————————————————————————————————————————————————————————————— |
| | Target | Type | Start Time | End Time | Status | |
| ——————————————————————————————————————————————————————————————— |
| | TEST.T1 | TABLE | | | | |
| ——————————————————————————————————————————————————————————————— |
| |
| |
——————————————————————————————————————————————————————————————————-

(8) DBMS_ROLLING

Data Guard Simple Rolling Upgrade
Semi-automation of Transient Logical Standby Rolling Upgrade
Works with Data Guard Broker
Procedure DBMS_ROLLING
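A hedged outline of the package flow (the database name is illustrative; the actual software upgrade of the transient logical standby happens between START_PLAN and SWITCHOVER):
SQL> exec DBMS_ROLLING.INIT_PLAN(future_primary => 'STBY');
SQL> exec DBMS_ROLLING.BUILD_PLAN;
SQL> exec DBMS_ROLLING.START_PLAN;
-- upgrade the standby to the new release here, then:
SQL> exec DBMS_ROLLING.SWITCHOVER;
SQL> exec DBMS_ROLLING.FINISH_PLAN;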

(9) Real-Time ADDM:

- The MMON process collects the data every 3 seconds, without taking any locks or latches
- MMON slave processes create the reports and store them in the AWR, where they can be queried through DBA_HIST_REPORTS
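A hedged query against the view mentioned above (column names from memory; describe DBA_HIST_REPORTS first if they differ):
SQL> select report_id, component_name, report_name
     from dba_hist_reports
     order by report_id desc;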

Reference: Mike Dietrich, Upgrade, Migrate & Consolidate to Oracle Database 12c / oracleblog

Aspects and parameters to be aware of when upgrading to 12c/18c and later releases (based on Upgrade, Migrate & Consolidate to Oracle Database 12c)

Oracle support periods for 12.2.0.1 and 18c

The support period of an Oracle Database release is usually confirmed through MOS note 742060.1:

MOS Note 742060.1: Release Schedule of Current Database Releases

However, that note currently says nothing about the support period of Oracle 18c, as shown below:

Clarification: Support Periods for Oracle 12.2.0.1 and 18c

 

On July 23, 2018, Oracle Database 18c 18.3.0 was released on-premises. As of that date, the support dates for the previous release, Oracle Database 12.2.0.1, were fixed (see below): the end of error correction (patching) for 12.2.0.1 was set to July 23, 2020. According to Mike Dietrich, the same will happen with Oracle 18c: once Oracle 19c becomes available on-premises, the patching end date for Oracle 18c will be determined and announced, at least two years from that specific date rather than from the date Oracle 18c itself was released.

 

Clarification: support periods for Oracle 12.2.0.1 and 18c

 

It is worth mentioning that more information about support periods is available in the Oracle Lifetime Support policy:

http://www.oracle.com/us/support/library/lifetime-support-technology-069183.pdf