
Differences Between the Enterprise, Standard, and Personal Editions of Oracle Database

Oracle can actually be quite affordable; in many situations Standard Edition is good enough. The information below compares the Enterprise, Standard, and Personal editions of Oracle Database.

Compared with Enterprise Edition, Standard Edition mainly lacks the following features:
1. Oracle Data Guard is not supported (so customers who want high availability cannot choose Standard Edition).
2. Some online operations, such as online index maintenance and online table redefinition, are not supported.
3. Certain backup and recovery operations are restricted; for example, block-level media recovery, parallel backup and recovery, and duplexed backup sets are not supported.
4. Flashback features: Flashback Table, Flashback Database, and Flashback Transaction Query are all unsupported in Standard Edition.
5. VPD (Virtual Private Database) is not supported.
6. FGA (fine-grained auditing) is not supported.
7. Partitioning is not supported.
8. Data compression, such as table compression and partition compression, is not supported.
9. Bitmapped indexes and bitmapped join indexes are not supported (so Standard Edition is not an option for data warehouse systems).
10. Export of transportable tablespaces is not supported (note: only the export side is unsupported), while import of transportable tablespaces, including cross-platform import, is supported. In other words, if you choose Standard Edition, simply moving the database to another platform becomes rather laborious.
11. Parallel operations are not supported, including parallel query, parallel DML, parallel index build, and parallel Data Pump export and import.
12. Streams is not supported, removing one more viable high-availability option.
13. The multi-master mode of Advanced Replication is not supported (yet another high-availability option lost), although replication based on materialized views is still supported.
14. Connection Manager is not supported.
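A quick, hedged way to confirm which edition a given database is actually running is to read the version banner (the banner text in the comments below is illustrative):

-- Enterprise Edition shows e.g. "Oracle Database 11g Enterprise Edition Release ..."
-- Standard Edition banners omit the word "Enterprise"
SELECT banner FROM v$version WHERE banner LIKE 'Oracle%';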

Below are the officially documented differences between the Enterprise and Standard editions for each release:

Differences between Enterprise, Standard and Personal Edition

Starting point: Note 465455.1

Content of this note:

Server Version   Note #           Abstract
====================================================================

8.1 NOTE.112591.1 Differences Between Enterprise, Standard and Personal Editions on Oracle 8.1

9.0 NOTE.161556.1 Differences between Oracle9i Personal, Standard and Enterprise on NT/2000

9.2 NOTE.269040.1 Differences Between Enterprise, Standard and Personal Editions on Oracle 9.2

10.1 NOTE.271886.1 Differences Between Different Editions of Oracle Database 10G

10.2 NOTE.465465.1 Differences Between Enterprise, Standard and Personal Editions on Oracle 10.2

11.1 Note.465460.1 Differences Between Enterprise, Standard and Personal Editions on Oracle 11.1

11.2 Note.1084132.1 Differences Between Enterprise, Standard and Personal Editions on Oracle 11.2

The 11.2 note is reproduced here:


APPLIES TO:

Oracle Database - Enterprise Edition - Version 11.2.0.1.0 to 11.2.0.3 [Release 11.2]
Information in this document applies to any platform.
***Checked for relevance on 25-Mar-2015***

DETAILS

Feature/Option SE1 SE EE Notes
High Availability
Oracle Fail Safe Y Y Y Windows only
Oracle RAC One Node N N Y Extra cost option
Oracle Data Guard—Redo Apply N N Y
Oracle Data Guard—SQL Apply N N Y
Oracle Data Guard—Snapshot Standby N N Y
Oracle Active Data Guard N N Y Extra cost option
Rolling Upgrades—Patch Set, Database, and Operating System N N Y
Online index rebuild N N Y
Online index-organized table organization N N Y ALTER TABLE MOVE ONLINE operations
Online table redefinition N N Y Using the DBMS_REDEFINITION package
Duplexed backup sets N N Y
Block change tracking for fast incremental backup N N Y
Unused block compression in backups N N Y
Block-level media recovery N N Y
Lost Write Protection N N Y
Automatic Block Repair N N Y Requires Active Data Guard option
Parallel backup and recovery N N Y
Tablespace point-in-time recovery N N Y
Trial recovery N N Y
Fast-start fault recovery N N Y
Flashback Table N N Y
Flashback Database N N Y
Flashback Transaction N N Y
Flashback Transaction Query N N Y
Oracle Total Recall N N Y Extra cost option
Scalability
Oracle Real Application Clusters N Y Y Extra cost with EE, included with SE
Automatic Workload Management N Y Y Requires Oracle Real Application Clusters
Performance
Client Side Query Cache N N Y
Query Results Cache N N Y
PL/SQL Function Result Cache N N Y
In-Memory Database Cache N N Y Extra cost option
Database Smart Flash Cache N N Y Solaris and Oracle Enterprise Linux only
Support for Oracle Exadata Storage Server Software N N Y
Security
Advanced Security Option N N Y Extra cost option
Oracle Label Security N N Y Extra cost option
Virtual Private Database N N Y
Fine-grained auditing N N Y
Oracle Database Vault N N Y Extra cost option
Secure External Password Store N N Y
Development Platform
SQLJ Y Y Y Requires Oracle Programmer
Oracle Developer Tools for Visual Studio .NET Y Y Y Windows only
Microsoft Distributed Transaction Coordinator support Y Y Y Windows only
Active Directory integration Y Y Y Windows only
Native .NET Data Provider—ODP.NET Y Y Y Windows only
.NET Stored Procedures Y Y Y Windows only
Manageability
Oracle Change Management Pack N N Y Extra cost option
Oracle Configuration Management Pack N N Y Extra cost option
Oracle Diagnostic Pack N N Y Extra cost option
Oracle Tuning Pack N N Y Extra cost option, also requires the Diagnostic Pack
Oracle Provisioning and Patch Automation Pack N N Y Extra cost option
Oracle Real Application Testing N N Y Extra cost option
Database Resource Manager N N Y
Instance Caging N N Y
SQL Plan Management N N Y
VLDB, Data Warehousing, Business Intelligence
Oracle Partitioning N N Y Extra cost option
Oracle OLAP N N Y Extra cost option
Oracle Data Mining N N Y Extra cost option
Oracle Data Profiling and Quality N N Y Extra cost option
Oracle Data Watch and Repair Connector N N Y Extra cost option
Oracle Advanced Compression N N Y Extra cost option
Basic Table Compression N N Y
Bitmapped index, bitmapped join index, and bitmap plan conversions N N Y
Parallel query/DML N N Y
Parallel statistics gathering N N Y
Parallel index build/scans N N Y
Parallel Data Pump Export/Import N N Y
In-memory Parallel Execution N N Y
Parallel Statement Queuing N N Y
Transportable tablespaces, including cross-platform N N Y Import of transportable tablespaces supported into SE, SE1, and EE
Summary management—Materialized View Query Rewrite N N Y
Asynchronous Change Data Capture N N Y
Integration
Basic Replication Y Y Y SE1/SE: read-only, updateable materialized view
Advanced Replication N N Y Multi-master replication
Oracle Streams Y Y Y SE1/SE: no capture from redo
Database Gateways Y Y Y Separate product license
Messaging Gateway N N Y
Networking
Oracle Connection Manager N N Y Available via a custom install of the Oracle Database client, usually installed on a separate machine. See "Oracle Connection Manager" for more information
Infiniband Support N N Y
Content Management
Oracle Spatial N N Y Extra cost option
Semantic Technologies (RDF/OWL) N N Y Requires Oracle Spatial and the Oracle Partitioning option

Recommendations for _datafile_write_errors_crash_instance

Before 11.2.0.2 this parameter defaulted to FALSE; from 11.2.0.2 onward it defaults to TRUE. It controls whether, on an I/O error, the database offlines the datafile that hit the error or crashes the instance outright. When set to TRUE, the database goes down immediately on an I/O error, reporting errors such as ORA-63999. I ran into exactly this a while back. When such errors appear, the root cause is generally to be found in the I/O transport layer, between the storage and the system network; setting this parameter to FALSE or TRUE only controls how widely the business impact spreads. So when you hit an I/O error, adjusting this parameter treats the symptom, not the disease; the root cause still has to be chased through each layer of the I/O path. Consider setting it to FALSE to limit the blast radius of an I/O error.
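As a hedged sketch, explicitly reverting to the pre-11.2.0.2 behaviour would look like this; underscore parameters should only be changed under the guidance of Oracle Support, and the double quotes are required because the name starts with an underscore:

-- Offline the affected datafile instead of crashing the instance on a write error
ALTER SYSTEM SET "_datafile_write_errors_crash_instance" = FALSE SCOPE = SPFILE;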

By the time you are weighing this parameter you have usually already hit I/O errors, so at that point you should also think about containing the block corruption that I/O errors can introduce, and consider the following parameters:

DB_ULTRA_SAFE
DB_BLOCK_CHECKING
DB_LOST_WRITE_PROTECT
DB_BLOCK_CHECKSUM

I recommend setting db_ultra_safe to DATA_ONLY. It slightly increases the load on the database host, but it strengthens data validation so problems are caught early. Setting this parameter automatically adjusts the corresponding three other parameters:

DB_BLOCK_CHECKING will be set to MEDIUM (currently FALSE).
DB_LOST_WRITE_PROTECT will be set to TYPICAL (currently TYPICAL).
DB_BLOCK_CHECKSUM will be set to FULL (currently NONE).
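A minimal sketch of applying this recommendation, assuming a restart window is available (DB_ULTRA_SAFE is a static parameter, so it cannot be changed in memory):

ALTER SYSTEM SET db_ultra_safe = DATA_ONLY SCOPE = SPFILE;
-- after the restart, confirm the three derived parameters:
SHOW PARAMETER db_block_checking       -- expected: MEDIUM
SHOW PARAMETER db_lost_write_protect   -- expected: TYPICAL
SHOW PARAMETER db_block_checksum       -- expected: FULL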

Below is one overseas engineer's advice on this parameter:

Param '_datafile_write_errors_crash_instance': TRUE or FALSE?

Since 11.2.0.2 there's a new parameter, "_datafile_write_errors_crash_instance", to prevent the instance from crashing when a write error on a datafile occurs. But should I use this or not? The official text of this parameter:

This fix introduces a notable change in behaviour in that
from 11.2.0.2 onwards an I/O write error to a datafile will
now crash the instance.

Before this fix I/O errors to datafiles not in the system tablespace
offline the respective datafiles when the database is in archivelog mode.
This behavior is not always desirable. Some customers would prefer
that the instance crash due to a datafile write error.

This fix introduces a new hidden parameter to control if the instance
should crash on a write error or not:
_datafile_write_errors_crash_instance

With this fix:
If _datafile_write_errors_crash_instance = TRUE (default) then
any write to a datafile which fails due to an IO error causes
an instance crash.

If _datafile_write_errors_crash_instance = FALSE then the behaviour
reverts to the previous behaviour (before this fix) such that
a write error to a datafile offlines the file (provided the DB is
in archivelog mode and the file is not in SYSTEM tablespace in
which case the instance is aborted)
When you ask Oracle for advice, you get the following answer:

20+ years ago a feature was added to Oracle to offline a datafile when there was an error writing a dirty buffer to it and it was not part of the system tablespace. At that time it made sense to do this since neither RAC nor even OPS was implemented and storage arrays did not exist. Then the most likely cause of an I/O error was a problem with the direct attached disk drive holding the datafile. By offlining the datafile the database might be able to continue running. Customers assumed that a disk failure would require restoring a backup and doing a media recovery, so taking the file offline might improve availability. High availability was not expected.
Today almost all customers use highly available storage arrays accessible from multiple hosts. Now most I/O errors are either transient or are local to the host that encounters them. Real disk failures are hidden by the storage array redundancy. Customers expect a disk failure to have no effect on the operation of the database.

Unfortunately the code to offline a datafile on an I/O error is still there. The effect is that an error on one node in a cluster offlines the datafile and usually takes down the entire application on all nodes or even crashes all instances if the problem is with an undo tablespace. For example dismounting a file system on one node in a cluster makes that node get I/O errors for the files on that file system. This makes a mistake on one node take down the entire cluster.

Offlining a datafile on a write error is even a problem with single instance. Most I/O errors today will go away if the database is restarted on another machine or if the current machine is rebooted. However, if the I/O error took a datafile offline, then the administrator must do a media recovery to make the application function again. This is an unusual procedure that takes a while.

If the database instances do not crash it takes longer for the administrator to find out that the application is not working even though the database appears to be up and running. This is a problem with both RAC and single instance.

Question: One concern is that a failed datafile write to a non-critical tablespace will bring down the database when it occurs in the only open instance.

It is true that there may be some situations where taking the file offline would be better. On the other hand there are cases where crashing in single instance is better because rebooting the server or restarting the instance will bring it up sooner with no need for manual intervention. Since we have to choose without knowing much about the system we have to base our choice on the odds of the failure being one case or the other. Twenty years ago a datafile was on one disk, almost all I/O errors were disk failures and a disk failure always meant doing media recovery. In that situation taking the datafile offline was clearly the right thing to do, even if the tablespace was critical to the application – it was going to need media recovery in any case.

Today systems are much different.

– Storage arrays and mirroring mean that disk failures almost never require media recovery. I/O write errors usually stop happening when the system is reinitialized.
– Many customers have mechanisms like CRS to automatically restart the database, possibly on a different node.
Now it is much more likely that restarting the instance will resolve the problem without doing any media recovery, and it will happen automatically. The chance that the application can continue running with the offline datafile has always been slight, but when media recovery was going to be required anyway there was no harm in trying to offline the file. Now there is a lot of harm in offlining the file since it prevents automatic recovery and requires an administrator to perform tasks he is unfamiliar with. Today crashing the instance has a better chance of getting the application running sooner.
So Oracle advises leaving the parameter at its default (=TRUE) and using the new feature. The system is then better able to recover without needing intervention from a DBA. But if you have different experiences, feel free to comment on this post.

Some Thoughts on Performance Optimization

1. Optimization is goal-driven; the ultimate purpose is to reach a goal, not to optimize for its own sake. Whether the goal is reasonable determines whether an optimization project succeeds or fails. When I first started tuning for customers, I would adjust everything that could possibly be adjusted in one go, even when some adjustments improved system performance by less than 0.1%. A production system has a great many uncertainties, and many parameter changes are double-edged swords; if you cannot predict the impact of a change, that change carries risk and should be weighed carefully before it is implemented. In the optimization projects I do now, I work from the customer's optimization goal, analyze on that basis, and draw up a plan. The results usually exceed the customer's expectations, but I will not do anything on a production system that I am not sure of. Even icing on the cake deserves a second thought about whether it is worth it, because done badly, icing on the cake turns into gilding the lily.
2. One plus one is not necessarily greater than one. During optimization, grab the principal contradiction and solve the main problem rather than grabbing at everything at once. Many adjustments interact with one another, some are even mutually exclusive, and an ill-considered adjustment can make things worse.
3. What the customer needs is optimization of the system, not merely of the database. In the customer's goal, what they see is a system, not an isolated DB. Ten years ago I might have said, "That's an OS problem, go ask the vendor." Today I tell the customer, "Don't worry, what we do is system optimization."
4. Work with the application vendor in an appropriate way. Many optimization projects deliver far less than they could because they fail to cooperate effectively with the application vendor, and the application is what is most tightly bound to system performance. If the application vendor does not cooperate well, the optimization project will struggle at every step. If you tell the vendor, "This SQL of yours is too expensive, it needs to be changed," you will probably get strong pushback. If you say, "This SQL is too expensive; I have prepared a few options for you, the first is...", you are much more likely to get a good outcome. You are the optimization expert: finding a few TOP SQL is work that does not require an expert; the expert's duty is not just to find the TOP SQL but to make the TOP SQL go away.
5. Do not put your faith in so-called optimization rules. In reality there are no rigid rules constraining you; any method that achieves the goal is fair game. A veteran DBA may tell a junior DBA what is right and what is wrong, but right and wrong are relative. Once you deeply understand Oracle and system optimization, you become like the inner-strength masters in Jin Yong's novels: the best technique is no technique.

I deeply agree with these points of Laobai's!

Fixing Execution Plans with SPM, and a Bug Encountered Along the Way

Two existing SQL statements joining multiple tables kept choosing the wrong execution plan. Table-level statistics and the index design were already up to date (the statistics side of this story is messy and I will not go into it); what was throwing off the plans was the combination of six bind variables in the SQL and histograms on the columns. In this exercise I neither dropped the histograms nor dealt with the bind-variable effects (bind variables are not recommended for star joins). Given the delicacy of the production environment, I ultimately chose to pin the execution plans of these two statements with a SQL profile and SPM. I first pinned them with a SQL profile, but after forcing the SQL to re-parse it did not take effect, so I switched to pinning with SPM.

Take one of the statements, sql_id bwwnw7r1gzhdf, as the example. Below is the SQL report collected for the corresponding one-hour window. plan_hash_value 711942702 is the correct plan; the report shows that this SQL chose a wrong plan, and also that the SQL has multiple plans. How to judge whether a plan is correct is out of scope here.
 

SQL ID: bwwnw7r1gzhdf

#  Plan Hash Value  Total Elapsed Time (ms)  Executions  1st Capture Snap ID  Last Capture Snap ID
1  3052678239       13,512,877               10          25060                25060
2  3392573872       0                        0           25060                25060
3  4134955434       0                        0           25060                25060
4  1564064893       0                        0           25060                25060
5  2504448979       0                        0           25060                25060
6  147966509        0                        0           25060                25060
7  711942702        0                        0           25060                25060

 
Using the coe_xfr_sql_profile.sql script, plan 711942702 was pinned for sql_id bwwnw7r1gzhdf; the generated SQL profile was named coe_bwwnw7r1gzhdf_711942702.
(For this part, see:
1. Using Sqltxplain to create a 'SQL Profile' to consistently reproduce a good plan (Doc ID 1487302.1)
2. Automatic SQL Tuning and SQL Profiles (Doc ID 271196.1)
3. Correcting Optimizer Cost Estimates to Encourage Good Execution Plans Using the COE XFR SQL Profile Script (Doc ID 1955195.1))

After the SQL was re-parsed, the SQL_PROFILE column of the v$sql view showed no sign of the profile taking effect. The reason is that coe_xfr_sql_profile.sql generates the profile with its activation flag set to false by default, so the profile created this way never takes effect and the execution plan seen in monitoring did not change (I dropped these profiles on site).

 

SQL>  select name,created,status from dba_sql_profiles;

NAME                           CREATED                        STATUS
------------------------------ ------------------------------ --------
coe_bwwnw7r1gzhdf_711942702    26-JUN-15 02.09.30.000000 PM   ENABLED
coe_g87an0j5djjpm_334801256    26-JUN-15 11.30.25.000000 AM   ENABLED

SQL>  select SQL_ID, SQL_PROFILE,PLAN_HASH_VALUE from V$SQL where SQL_ID='bwwnw7r1gzhdf' and sql_profile is not null;

no rows

SQL>  select sql_profile,EXECUTIONS,PLAN_HASH_VALUE,parse_calls,ELAPSED_TIME/1000000,
ELAPSED_TIME/1000000/EXECUTIONS,LAST_LOAD_TIME,ROWS_PROCESSED
from v$sql where EXECUTIONS>0 and sql_id='bwwnw7r1gzhdf' order by LAST_LOAD_TIME desc;
...

So each profile was disabled and then dropped:

=====disable profile==============
BEGIN
DBMS_SQLTUNE.ALTER_SQL_PROFILE(
name => 'coe_bwwnw7r1gzhdf_711942702',
attribute_name => 'STATUS',
value => 'DISABLED');
END;
/

BEGIN
DBMS_SQLTUNE.ALTER_SQL_PROFILE(
name => 'coe_g87an0j5djjpm_334801256',
attribute_name => 'STATUS',
value => 'DISABLED');
END;
/

=====drop profile=================
begin
DBMS_SQLTUNE.DROP_SQL_PROFILE(name => 'coe_bwwnw7r1gzhdf_711942702');
end;
/

begin
DBMS_SQLTUNE.DROP_SQL_PROFILE(name => 'coe_g87an0j5djjpm_334801256');
end;
/

Since the correct execution plan already existed in the cursor cache, a baseline was created directly with DBMS_SPM, and the baseline's ENABLED, ACCEPTED, and FIXED attributes were all set to YES through the DBMS_SPM package.

For this part, see:
Plan Stability Features (Including SQL Plan Management (SPM)) (Doc ID 1359841.1)

Create a baseline for the SQL:

variable cnt number;
execute :cnt :=DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(SQL_ID => 'bwwnw7r1gzhdf', PLAN_HASH_VALUE => 711942702) ;

Verify that the baseline has been created:

SQL> set linesize 200
SQL> Select Sql_Handle, Plan_Name, Origin, Enabled, Accepted,Fixed,Optimizer_Cost,Sql_Text
From Dba_Sql_Plan_Baselines
Where Sql_Text Like '%FROM P1EDBADM.MES_PROCESSOPERATIONSPEC%' Order By Last_Modified;


SQL_HANDLE                     PLAN_NAME                      ORIGIN         ENA ACC FIX OPTIMIZER_COST SQL_TEXT
------------------------------ ------------------------------ -------------- --- --- --- -------------- --------------------------------------------------------------------------------
SQL_995463d3d1edd710           SQL_PLAN_9kp33ug8yvpsh4af503b5 MANUAL-LOAD    YES YES NO              69 SELECT D.LOTNAME LOT, D.PRODUCTNAME GLASS, TO_CHAR(D.CREATETIME, 'YYYY-MM-DD HH2

Change the baseline's FIXED attribute to YES:

variable cnt number;
execute :cnt :=DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(SQL_ID => 'bwwnw7r1gzhdf', PLAN_HASH_VALUE => 711942702,fixed => 'yes') ;
Verify that the change is complete:
SQL> set linesize 200
SQL> Select Sql_Handle, Plan_Name, Origin, Enabled, Accepted,Fixed,Optimizer_Cost,Sql_Text
  2  From Dba_Sql_Plan_Baselines
  3  Where Sql_Text Like '%FROM P1EDBADM.MES_PROCESSOPERATIONSPEC%' Order By Last_Modified;

SQL_HANDLE                     PLAN_NAME                      ORIGIN         ENA ACC FIX OPTIMIZER_COST SQL_TEXT
------------------------------ ------------------------------ -------------- --- --- --- -------------- --------------------------------------------------------------------------------
SQL_995463d3d1edd710           SQL_PLAN_9kp33ug8yvpsh4af503b5 MANUAL-LOAD    YES YES YES            574 SELECT D.LOTNAME LOT, D.PRODUCTNAME GLASS, TO_CHAR(D.CREATETIME, 'YYYY-MM-DD HH2

Finally, verify that it is in effect:

SQL> Select Sql_Handle, Plan_Name, Origin, Enabled, Accepted,Fixed,Optimizer_Cost,Sql_Text
  2  From Dba_Sql_Plan_Baselines
  3  Where Sql_Text Like '%FROM P1EDBADM.MES_PROCESSOPERATIONSPEC%' Order By Last_Modified;

SQL_HANDLE                     PLAN_NAME                      ORIGIN         ENA ACC FIX OPTIMIZER_COST SQL_TEXT
------------------------------ ------------------------------ -------------- --- --- --- -------------- --------------------------------------------------------------------------------
SQL_995463d3d1edd710           SQL_PLAN_9kp33ug8yvpsh4af503b5 MANUAL-LOAD    YES YES YES            574 SELECT D.LOTNAME LOT, D.PRODUCTNAME GLASS, TO_CHAR(D.CREATETIME, 'YYYY-MM-DD HH2
SQL_2e1c8025edb165b3           SQL_PLAN_2w7404rqv2tdm56eb6fa8 MANUAL-LOAD    YES YES YES            311 SELECT 1 " ", D.LOTNAME LOT, D.PRODUCTNAME GLASS, TO_CHAR(MAX(H.EVENTTIME), 'YYY

SPM is governed mainly by two parameters: one makes baselines take effect (optimizer_use_sql_plan_baselines; the prerequisite is that the ACCEPTED attribute is YES, otherwise unaccepted plans cause interference), and one captures SQL statements to generate baselines (optimizer_capture_sql_plan_baselines). In my databases I generally leave capture off but leave baseline use on.
Syntax to enable:

alter system set optimizer_use_sql_plan_baselines=true scope=both;
alter system set optimizer_capture_sql_plan_baselines=true scope=both;

Syntax to disable:

alter system set optimizer_use_sql_plan_baselines=false scope=both;
alter system set optimizer_capture_sql_plan_baselines=false scope=both;

With capture enabled, some 11g versions can hit this bug:
Bug 9910484 - SQL Plan Management Capture uses excessive space in SYSAUX (Doc ID 9910484.8)
This bug makes the SYSAUX tablespace balloon, mainly in SQLOBJ$DATA; in my case it grew from 2 GB to 4 GB within a day. After capture was turned off, the growth stopped.
After deleting the unneeded baselines, the SYSAUX space can be reclaimed with a shrink; for details see
Reducing the Space Usage of the SQL Management Base in the SYSAUX Tablespace (Doc ID 1499542.1)
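As a hedged sketch of the cleanup, you can check how much SYSAUX space the SQL Management Base occupies and drop a baseline that is no longer needed (the sql_handle below is the one from the DBA_SQL_PLAN_BASELINES output above):

-- Current SYSAUX footprint of the SQL Management Base
SELECT occupant_name, space_usage_kbytes
FROM v$sysaux_occupants
WHERE occupant_name = 'SQL_MANAGEMENT_BASE';

-- Drop an unneeded baseline; DROP_SQL_PLAN_BASELINE is a function, so bind the return value
variable cnt number;
execute :cnt := DBMS_SPM.DROP_SQL_PLAN_BASELINE(sql_handle => 'SQL_995463d3d1edd710');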

(Repost) Dumping Oracle Blocks

Dumping Oracle Blocks

 

 

Brian Peasland, Raytheon at EROS Data Center

 

 

 

 

Introduction

 

In a few cases, it is desirable to dump the contents of an Oracle data block. These blocks may comprise a table, an index, or even the control file. While we can query the contents of a table, we may want to see what is happening to a particular block of that table. We might want to see not only the data in the block that any query can return, but also see what’s going on behind the scenes in the block. This paper will show you how to dump the contents of a block in a table, index and control file. This paper will also show you how to interpret some of the results of these dumps.

 

 

Why Dump Blocks?

 

So why are we doing this? For the most part, it is just idle curiosity. DBAs are inquisitive folks by nature. Oracle Corp. has released just enough information on database internals to tantalize us without giving away all the secrets. And we’d like to see what’s going on behind the scenes. So for most cases, we are dumping blocks just for fun. In other cases, we are dumping blocks to actually find out some meaningful information. But in the end, it is up to you.

 

 

Trace File Information

 

All of the examples in this paper will generate trace files. Those trace files will be present in USER_DUMP_DEST for you to view. To determine the trace file generated, use a query similar to the following:

 

ORA9I SQL> select pa.value || '/' || i.instance_name || '_ora_'

2         || pr.spid || '.trc' as trace_file

3 from v$session s, v$process pr, v$parameter pa, v$instance i

4 where s.username = user and s.paddr = pr.addr

5* and pa.name='user_dump_dest';

 

TRACE_FILE

————————————————————————

/edcsns14/pkg/oracle/admin/ora9i/udump/ora9i_ora_25199.trc

 

This query shows the full path and filename of the generated trace file for my session. This is the text file we look in to see the results of our dump.

 

 

All trace files contain the same basic information at the beginning of the file.

 

Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production

With the Partitioning option

JServer Release 9.0.1.0.0 - Production

ORACLE_HOME = /edcsns14/pkg/oracle/9.0.1

System name:   SunOS

Node name:     edcsns14

Release:       5.7

Version:       Generic_106541-11

Machine:       sun4u

Instance name: ora9i

Redo thread mounted by this instance: 1

Oracle process number: 11

Unix process pid: 653, image: oracle@edcsns14 (TNS V1-V3)

 

*** 2002-03-27 17:06:06.573

*** SESSION ID:(12.4240) 2002-03-27 17:06:06.535

 

Output similar to above is shown in each trace file. This output shows the database version, some platform specific information such as host name and OS level, the database instance name, the processes identifiers (Oracle and Unix) for the session that generated the trace file, and the date and time the file was generated. We’ll skip this introductory information in examining our trace files.

 

 

Dumping A Table Block

 

To dump a block belonging to a table, you’ll need to know the file number and block number of that block. If you already know the file number and block, then you are all set. But for our example, we’ll want to figure that out.

 

ORA9I SQL> select file_id,block_id,bytes,blocks

2 from dba_extents

3 where owner='PEASLAND' and segment_name='EMP';

 

FILE_ID   BLOCK_ID           BYTES   BLOCKS

——- ———- —————- ——–

3         9           65,536       8

 

Here, I’ve queried the data dictionary to find out which file my EMP table resides in. The EMP table is in file# 3, starting at block# 9, and is eight blocks long. This query will return one row for each extent of the object. So this object is comprised of only one extent. We can verify which tablespace file# 3 belongs to with the following query:

 

ORA9I SQL> select tablespace_name,file_name from dba_data_files

2 where relative_fno = 3;

 

TABLESPACE_NAME               FILE_NAME

—————————— —————————————–

USER_DATA                     /edcsns14/oradata3/ora9i/user_data01.dbf

 

As I had expected, my table is in the USER_DATA tablespace.

 

Now that we know which file and blocks hold our table, let’s dump a sample block of the table. This is done as follows:

 

ORA9I SQL> alter system dump datafile 3 block 10;

 

System altered.

 

You can dump a range of blocks with the following command:

 

ORA9I SQL> alter system dump datafile 3 block min 10 block max 12;

 

System altered.

 

Let’s now look at the contents of dumping one block.

 

Start dump data blocks tsn: 3 file#: 3 minblk 10 maxblk 10

buffer tsn: 3 rdba: 0x00c0000a (3/10)

scn: 0x0000.00046911 seq: 0x02 flg: 0x04 tail: 0x69110602

frmt: 0x02 chkval: 0x579d type: 0x06=trans data

Block header dump: 0x00c0000a

Object id on Block? Y

seg/obj: 0x6d9c csc: 0x00.46911 itc: 2 flg: O typ: 1 - DATA

fsl: 0 fnx: 0x0 ver: 0x01

Itl           Xid                 Uba         Flag Lck       Scn/Fsc

0x01   xid: 0x0005.02f.0000010c   uba: 0x00806f10.00ca.28 C---   0 scn 0x0000.00046900

0x02   xid: 0x0003.01c.00000101   uba: 0x00800033.0099.04 C---   0 scn 0x0000.00046906

 

This is the beginning of the data block dump. The first line tells us that we are dumping file#3, starting at block# 10 (minblk), and finishing with block# 10 (maxblk). Had we dumped more than one data block, these values would represent a range. The relative data block address (rdba) is 0x00c0000a. For more information on the rdba, refer to a later section in this paper. At the end of this line, we can see in parentheses that the rdba corresponds to file# 3, block# 10 (3/10).

 

The third line describes the SCN of the data block. In our case, the SCN is 0x0000.00046911. The tail of the data block is composed of the last two bytes of the SCN (6911) appended with the type (06) and the sequence (02). If the decomposition of the tail does not match these three values, then the system knows that the block is inconsistent and needs to be recovered. While this tail value shows up at the beginning of the block dump, it is physically stored at the end of the data block.

 

The block type shows up on the fourth line. Some of the valid types correspond to the following table:

 

Type   Meaning             

0x02   undo block

0x06 table or index data block

0x0e undo segment header

0x10   data segment header block

0x17   bitmapped data segment header

 

 

The “Object id on Block?” line tells us whether or not this object is in SYS.OBJ$. Since Oracle 6, this should always be “Y”. If you look at the next line, the seg/obj value tells us the segment’s object id (in hex). In our example, this is 0x6d9c. Hex ‘6D9C’ is ‘28060’ in decimal. We can verify that this is our table with the following query:

 

ORA9I SQL> select owner,object_name from dba_objects

2 where object_id=28060;

 

OWNER     OBJECT_NAME

———- ——————————

PEASLAND   EMP

 

As we had hoped, this is our table.

 

The csc value is the Cleanout System Change number. This value tells us when block cleanout was performed on this block. Hopefully, it matches the SCN of the data block. The itc value is the Interested Transaction List Count. In our case, there are two transactions interested in this block. Those interested transactions appear at the end of our example. We can see the transaction id (Xid) of those two transactions. Those transaction ids correspond to rollback segments that are used to process our transactions.

 

The flag (flg) is either "-" or "O", used to indicate if this block is on a freelist. If the block is on a freelist, the flag will be "O". If it is not on a freelist, then the flag will be "-". Our block in question is on the freelist.

 

Well, that was quite a lot of information and we haven’t really looked at too much of the dump. Let’s look at the next section of the data block dump.

 

data_block_dump

===============

tsiz: 0x1fa0

hsiz: 0x2e

pbl: 0x024d015c

bdba: 0x00c0000a

flag=-------------

ntab=1

nrow=14

frre=9

fsbo=0x2e

fseo=0x1b18

avsp=0x1d8a

tosp=0x1d8a

0xe:pti[0]     nrow=14 offs=0

0x12:pri[0]     offs=0x1c30

0x14:pri[1]     offs=0x1f4f

0x16:pri[2]     offs=0x1f24

0x18:pri[3]     offs=0x1efb

0x1a:pri[4]     offs=0x1ece

0x1c:pri[5]     offs=0x1ea5

0x1e:pri[6]     offs=0x1e7c

0x20:pri[7]     offs=0x1e54

0x22:pri[8]     offs=0x1e2e

0x24:pri[9]     sfll=13

0x26:pri[10]   offs=0x1ca4

0x28:pri[11]   offs=0x1cf1

0x2a:pri[12]   offs=0x1b18

0x2c:pri[13]   sfll=-1

 

The tsiz value shows us the amount of available room in the block for data. Here, we get ‘1fa0’ which translates to 8,096 bytes of useable room. The rest of our 8,192 byte block is used for overhead such as the block header.

 

The ntab value shows us how many tables are stored in this block. Unless this block belongs to a cluster, this value will be ‘1’. The nrow value tells us how many rows of data are stored in this block. Our data block has 14 rows of data.

 

Starting at address '0xe', we get a directory to each row. We can see that the first row (index entry zero) starts at offset address to the block '0x1c30'. Each of the block's rows follows from here. This way, a row can be found really quickly. Remember that a ROWID is basically a pointer to a unique row. In Oracle 8+, the ROWID is of the form O.F.B.R (or objectno, relativefno, blockno, rowno). So when the system quickly points to a particular block in a particular file, the row number points to a slot in this directory. The directory then points to a specific location in the block. This is the start of that row.
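As an aside, you do not have to decode the O.F.B.R components by eye: the DBMS_ROWID package exposes them directly. A minimal sketch, assuming the same EMP sample table used in this paper:

-- Decompose each row's ROWID into object#, relative file#, block#, and row#
select dbms_rowid.rowid_object(rowid)       as object_id,
       dbms_rowid.rowid_relative_fno(rowid) as rel_file#,
       dbms_rowid.rowid_block_number(rowid) as block#,
       dbms_rowid.rowid_row_number(rowid)   as row#
from emp
where ename = 'SMITH';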

 

Now that we have a roadmap to our data block, let’s look at the remainder of the trace file to see the actual rows of data in the block.

 

block_row_dump:

tab 0, row 0, @0x1c30

tl: 39 fb: --H-FL-- lb: 0x0 cc: 8

col 0: [ 3] c2 4a 46

col 1: [ 5] 53 4d 49 54 48

col 2: [ 5] 43 4c 45 52 4b

col 3: [ 3] c2 50 03

col 4: [ 7] 77 b4 0c 11 01 01 01

col 5: [ 3] c2 09 19

col 6: *NULL*

col 7: [ 2] c1 15

 

The actual row data starts with the phrase “block_row_dump:”. Then a row of data is given. I’ve only shown one row of data here, as the rest is similar. We can see that this row belongs to table ‘0’ (tab) of our cluster. Since there is no cluster in our example, we do not have more than one table so this value will be zero. We can also see that this is row ‘0’ and the address of that row is given. This address should correspond to our roadmap noted above.

 

The ‘tl’ value gives us the total number of bytes for this row, including any overhead. We can see that this row occupies 39 bytes. The ‘cc’ value gives us a column count. We have eight columns in this row. This can easily be verified by doing a DESCRIBE on the table and counting the columns, or by querying USER_TAB_COLUMNS.

 

The ‘fb’ value gives us flags about the row. ‘H’ means that we have the head of the row. ‘F’ means that we have the first piece of the row. ‘L’ means we also have the last piece of the row. Since this is the first and last piece of the row, the row is not chained. Since this is also the head of the row, the row has not been migrated.

 

The rest of the information for the row is the data for each column. For instance, in column 1, we have the following ASCII character codes, “53 4d 49 54 48”. A quick look at an ASCII conversion chart will tell us that these characters are “SMITH”. If you are familiar with the sample EMP table, you will know that SMITH is one of our employees. Notice that column 6 is NULL. Column 4 is the HIREDATE column. This is a DATE datatype. From this block, you can easily verify that the DATE datatype requires seven bytes of storage. Column 0 contains a number. The three bytes here are the representation of that number.
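If converting byte codes by hand gets tedious, the DUMP function shows the same internal representation from plain SQL. A minimal sketch against the sample EMP table (the second argument, 16, requests hexadecimal byte values):

-- Compare the output with the col 1 and col 4 bytes in the block dump above
select ename, dump(ename, 16) as ename_bytes,
       hiredate, dump(hiredate, 16) as hiredate_bytes
from emp
where ename = 'SMITH';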

 

 

Dumping The Extent Header

 

In the previous section, we dump a block in the middle of the table. The first block in the segment contains some interesting information as well. If you recall, our EMP table was in file#3, starting at block# 9. We previously dumped a block in the middle of the table. Let’s now dump the first block of this table.

 

ORA9I SQL> alter system dump datafile 3 block 9;

 

System altered.

 

We will now look at the contents of our trace file.

 

Start dump data blocks tsn: 3 file#: 3 minblk 9 maxblk 9

buffer tsn: 3 rdba: 0x00c00009 (3/9)

scn: 0x0000.000467ee seq: 0x01 flg: 0x04 tail: 0x67ee1001

frmt: 0x02 chkval: 0x3d71 type: 0x10=DATA SEGMENT HEADER - UNLIMITED

Extent Control Header

—————————————————————–

Extent Header:: spare1: 0     spare2: 0     #extents: 1     #blocks: 7

last map 0x00000000 #maps: 0     offset: 4128

Highwater:: 0x00c0000b ext#: 0     blk#: 1     ext size: 7

#blocks in seg. hdr’s freelists: 1

#blocks below: 1

mapblk 0x00000000 offset: 0

Unlocked

Map Header:: next 0x00000000 #extents: 1   obj#: 28060 flag: 0x40000000

Extent Map

—————————————————————–

0x00c0000a length: 7

 

We’ve already seen the first four lines in the previous section. One thing to notice is that the type is 0x10 for a data segment header. After this, things look a little different. We are now given information about this particular extent. We can see that this particular segment is comprised of only one extent (#extents) and that it has seven data blocks (#blocks). If you remember from the previous section, this segment is really composed of eight blocks. But the first block is for the segment header, so it is not counted here.

 

From this segment header, we can get information about the High Watermark (HWM). If we look at the line starting with ‘Highwater::’, we can see that the HWM is in rdba 0x00c0000b. This is in the 0th extent of our segment (ext#), in the 1st block (blk#). Remember that we count from zero so the HWM is really in the first extent, second block. Additionally, two lines below this one, we can see that there is only 1 block below the HWM (#blocks below). We can also see that there is only one block on the segment header freelist (#blocks in seg. hdr’s freelists). Let’s look at a more complex example from a table that is using more than one block.

 

Extent Header:: spare1: 0     spare2: 0     #extents: 48     #blocks: 383

last map 0x00000000 #maps: 0      offset: 4128

Highwater:: 0x00c01d8c ext#: 47     blk#: 3     ext size: 8

#blocks in seg. hdr’s freelists: 0

#blocks below: 378

 

Here, we can see a much bigger table. From this information, we can tell that the table is made of 48 extents, totaling 383 blocks. The HWM is in the 47th extent in block 3. There are 378 blocks below the HWM. This means that we have five blocks above the HWM. Before the DBMS_SPACE package became available, this method of dumping the segment header was the only way to get information on the HWM. Now that the DBMS_SPACE package is available, this method has become basically obsolete. But it is still an interesting exercise.
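For comparison, here is a hedged sketch of retrieving the same HWM information with DBMS_SPACE.UNUSED_SPACE, assuming the PEASLAND.EMP table from earlier:

set serveroutput on
declare
  l_total_blocks  number;  l_total_bytes  number;
  l_unused_blocks number;  l_unused_bytes number;
  l_file_id       number;  l_block_id     number;
  l_last_block    number;
begin
  dbms_space.unused_space(
    segment_owner             => 'PEASLAND',
    segment_name              => 'EMP',
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_file_id,
    last_used_extent_block_id => l_block_id,
    last_used_block           => l_last_block);
  dbms_output.put_line('Blocks below HWM: ' || (l_total_blocks - l_unused_blocks));
  dbms_output.put_line('Blocks above HWM: ' || l_unused_blocks);
end;
/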

 

 

 

Dumping An Index

 

For this section of the paper, we are only concerning ourselves with B-tree indexes. This section does not cover dumping blocks of bitmap indexes or IOTs.

 

The data blocks that make up an index are different than table data blocks. A B-tree index has branch blocks and leaf blocks. We will first perform a tree dump of the entire B-tree index. We will examine various blocks of this tree dump. To start the tree dump, we need to know the OBJECT_ID of the index. This is done with a simple SQL statement:

 

ORA9I SQL> select object_id from user_objects

2 where object_name='DB_OBJ_OBJID_IDX';

 

OBJECT_ID

———-

28046

 

We now have the OBJECT_ID of our index. Let’s generate a dump of this index.

 

ORA9I SQL> alter session set events 'immediate trace name treedump level 28046';

 

Session altered.

 

With the ALTER SESSION SET EVENTS command above, we have forced the system to generate a tree dump of the B-tree index. Note that the level number is the OBJECT_ID of the index in question. We have now generated a trace file that we can read to get detailed information of our B-tree index. Let’s examine the trace file a section at a time to understand its contents.

 

----- begin tree dump

branch: 0xc01d92 12590482 (0: nrow: 9, level: 2)

branch: 0xc01ed7 12590807 (-1: nrow: 323, level: 1)

leaf: 0xc01d93 12590483 (-1: nrow: 42 rrow: 42)

 

This is the beginning of our tree dump. The first branch block is the root of the tree. In our example, this block is hex address c01d92, which translates to decimal address 12590482. These addresses help us traverse from block to block. Notice that the level of this root block is level 2. If you start counting at zero, then it should be obvious that this index B-tree has a height of three.

 

Below the root block is another branch block at address 0xc01ed7. You can verify that this block is one below the root by looking at the level of this block, in this case, ‘1’. The next line shows us the first leaf block of the B-tree index. Leaf blocks are always at level 0 so there is no need to put that information in the dump. What follows next is a dump of that leaf block.

 

Leaf block dump

===============

header address 2349359196=0x8c08605c

kdxcolev 0

KDXCOLEV Flags = - - -

kdxcolok 0

kdxcoopc 0x80: opcode=0: iot flags=--- is converted=Y

kdxconco 4

kdxcosdc 0

kdxconro 42

kdxcofbo 120=0x78

kdxcofeo 7080=0x1ba8

kdxcoavs 6960

kdxlespl 0

kdxlende 0

kdxlenxt 12590484=0xc01d94

kdxleprv 0=0x0

kdxledsz 0

kdxlebksz 8032

 

Of interest here is the kdxlenxt and kdxleprv values. These denote the addresses of the next leaf block and previous leaf block respectively. These pointers are important for INDEX RANGE SCANS of data. The next leaf block is at address 0xc01d94 while the previous leaf block is address 0x0. This block is the first leaf block looking at the tree from left to right. Therefore, there is no previous leaf block. The address 0x0 signifies no previous block. Likewise, the rightmost leaf block will have no next leaf block and will have this dummy address. Continuing with the leaf block dump is the data in the leaf block.

 

row#0[8009] flag: -----, lock: 0

col 0; len 2; (2): c1 03

col 1; len 6; (6): 43 5f 4f 42 4a 23

col 2; len 3; (3): 53 59 53

col 3; len 6; (6): 00 c0 1c 4a 00 9b

row#1[7986] flag: -----, lock: 0

col 0; len 2; (2): c1 03

col 1; len 6; (6): 43 5f 4f 42 4a 23

col 2; len 3; (3): 53 59 53

col 3; len 6; (6): 00 c0 1d 07 00 9b

 

I’ve only included the first two rows of data in the leaf block. You can see that (counting from zero) these are the first two rows. If you page through the dump file, you’ll see each and every row in the leaf block. There are many more leaf blocks, but they look very similar.

 

Of particular interest are the three columns for each row. This index is a composite index on three columns of the table. The index columns are listed in order. You can see the hexadecimal values of the ASCII codes for the data in those columns. This is where you can see the actual data stored in the leaf rows. Unfortunately, you’ll have to convert these ASCII codes by hand.

 

After this leaf block, the rest of the leaf block dumps off that first branch block are shown. With many, many leaf blocks per branch block, it can generate quite a lengthy dump file. If you page forward far enough, you’ll see the end of the leaf block dump and the start of the next branch block.

 

----- end of leaf block dump -----

branch: 0xc01fef 12591087 (0: nrow: 279, level: 1)

leaf: 0xc01ed6 12590806 (-1: nrow: 25 rrow: 25)

 

Here is the start of the second branch block off the root node. I know that this branch block is one off the root block since its level is one less. How do I know that this branch block is the second branch block from left to right? Look at the first number inside the parentheses. For this block, that number is ‘0’. The leftmost branch block is ‘-1’. You can go back in this paper to verify this is true for the first branch block. Then start counting forward. So ‘0’ is really the second branch block.

 

Why start counting at '-1' when normally we start counting at '0'? It all has to do with how keys are stored in the branch blocks. The first key is not stored because it is assumed to be less than the second key. Since the second key starts at '0', the first branch is one less than zero.

 

Let’s look at one more piece of information. The NROW indicator shows the number of rows in that block. For leaf blocks, this is the number of table rows pointed to by that leaf block. For branch blocks, this is the number of blocks underneath that branch block in the tree. Leaf blocks also contain the RROW indicator. RROW is the number of rows after all current transactions have been committed. In our examples, these two indicators are the same since there are no active transactions against that data.

 

 

 

Block Addresses

 

In our previous work, we saw some block addresses (often called the rdba). For instance, we saw the following when dumping a leaf block:

 

----- begin tree dump

branch: 0xc01d92 12590482 (0: nrow: 9, level: 2)

 

We know the hexadecimal and decimal equivalent of this block address. How can we determine where this is in the Oracle database? We’ll have to use the DBMS_UTILITY package to do that. We’ll use a procedure in this package to convert the decimal Data Block Address (DBA) to our familiar file number, block number.

 

ORA9I SQL> select dbms_utility.data_block_address_file(12590482) as file#,

2 dbms_utility.data_block_address_block(12590482) as block from dual;

 

FILE#     BLOCK

———- ———-

3       7570

 

Here, we can see that this branch block belongs to datafile 3, block number 7570. Quick queries on the data dictionary can reveal exactly which object this belongs to, but we already know that.
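The conversion also works in the opposite direction, via DBMS_UTILITY.MAKE_DATA_BLOCK_ADDRESS; a quick sketch:

-- Rebuild the decimal rdba from file# 3, block# 7570
select dbms_utility.make_data_block_address(3, 7570) as rdba from dual;
-- returns 12590482, the decimal address seen in the tree dump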

 

 

Control File Dumps

 

This section will take you beyond the simple BACKUP CONTROL FILE TO TRACE. If you have never viewed a control file backup to trace, I suggest you start there first. Most of the information in the control files is accessible from various V$ views. Those V$ views, in effect, just query the control file by accessing X$ tables whose sole purpose is to peek into the current contents of the control file. What is presented here is just a fun, interesting way to see the same information.

 

ORA9I SQL> alter session set events 'immediate trace name controlf level 1';

 

Session altered.

 

Here, we generated a dump of the control file, level 1. A level 1 dump of the control file dumps only the control file header. Let’s take a look at the information in the control file header.

 

DUMP OF CONTROL FILES, Seq # 1602 = 0x642

FILE HEADER:

Software vsn=150994944=0x9000000, Compatibility Vsn=134217728=0x8000000

Db Id=2254477402=0x8660985a, Db Name=’ORA9I’

Control Seq=1602=0x642, File size=194=0xc2

File Number=0, Blksiz=8192, File Type=1 CONTROL

*** END OF DUMP ***

 

That’s all there is to it! It is just a few short lines. From this dump, we can see interesting information such as the unique database identifier (2254477402), the database name (ORA9I), and the database block size (8192).
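The same identifiers are, of course, available without any dump at all; a minimal cross-check:

select dbid, name from v$database;
select value from v$parameter where name = 'db_block_size';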

 

A level 2 dump shows the file header, the database info record, and the checkpoint progress record. We’ve already seen the file header, so let’s look at the database info record.

 

************************************************************************

DATABASE ENTRY

************************************************************************

(blkno = 0x1, size = 192, max = 1, in-use = 1, last-recid= 0)

DF Version: creation=0x9000000 compatible=0x8000000, Date 02/26/2002 13:43:54

DB Name “ORA9I”

Database flags = 0x00404000

Controlfile Creation Timestamp 02/26/2002 13:43:55

Incmplt recovery scn: 0x0000.00000000

Resetlogs scn: 0x0000.00000001 Resetlogs Timestamp 02/26/2002 13:43:54

Prior resetlogs scn: 0x0000.00000000 Prior resetlogs Timestamp 01/01/1988 00:00:00

Redo Version: creation=0x9000000 compatable=0x9000000

#Data files = 4, #Online files = 4

Database checkpoint: Thread=1 scn: 0x0000.0005f8d8

Threads: #Enabled=1, #Open=1, Head=1, Tail=1

enabled threads: 01000000 00000000 00000000 00000000 00000000 00000000

00000000 00000000

Max log members = 5, Max data members = 1

Arch list: Head=0, Tail=0, Force scn: 0x0000.0005eae3 scn: 0x0000.00000000

Controlfile Checkpointed at scn: 0x0000.0005fa9b 03/27/2002 15:11:39

thread:0 rba:(0x0.0.0)

enabled threads: 00000000 00000000 00000000 00000000 00000000 00000000

00000000 00000000

 

There is a lot of information here. From here, we can again see that the database and control files were initially created on 02/26/2002 13:43. We can see that the last time the database was opened with the RESETLOGS option was at the time the database was created. Further down, we can see that we have four datafiles and that all four datafiles are online. Furthermore, we can see that we can have a maximum of five log file members. The control file was last checkpointed at SCN 0x0000.0005fa9b, at time 03/27/2002 15:11:39.

 

A level 3 dump also includes information on all records, including data files, log files, tempfiles, and tablespaces. We’ll look at a few of these examples.

 

***************************************************************************

LOG FILE RECORDS

***************************************************************************

(blkno = 0x5, size = 72, max = 5, in-use = 3, last-recid= 3)

LOG FILE #1:

(name #1) /edcsns14/oradata3/ora9i/redo01.log

Thread 1 redo log links: forward: 2 backward: 0

siz: 0x2800 seq: 0x00000202 hws: 0x1 bsz: 512 nab: 0xffffffff flg: 0x8 dup: 1

Archive links: fwrd: 0 back: 0 Prev scn: 0x0000.0005f1fc

Low scn: 0x0000.0005f8d8 03/27/2002 14:25:58

Next scn: 0xffff.ffffffff 01/01/1988 00:00:00

LOG FILE #2:

(name #2) /edcsns14/oradata3/ora9i/redo02.log

Thread 1 redo log links: forward: 3 backward: 1

siz: 0x2800 seq: 0x00000200 hws: 0x2 bsz: 512 nab: 0x2801 flg: 0x0 dup: 1

Archive links: fwrd: 0 back: 0 Prev scn: 0x0000.0005e23e

Low scn: 0x0000.0005eae3 03/27/2002 14:24:03

Next scn: 0x0000.0005f1fc 03/27/2002 14:25:01

LOG FILE #3:

(name #3) /edcsns14/oradata3/ora9i/redo03.log

Thread 1 redo log links: forward: 0 backward: 2

siz: 0x2800 seq: 0x00000201 hws: 0x2 bsz: 512 nab: 0x2801 flg: 0x0 dup: 1

Archive links: fwrd: 0 back: 0 Prev scn: 0x0000.0005eae3

Low scn: 0x0000.0005f1fc 03/27/2002 14:25:01

Next scn: 0x0000.0005f8d8 03/27/2002 14:25:58

 

This section shows us information about the online redo logs. This is how the control file knows about each log file (3 of them in our case). We can also see the low and high SCN of each online redo log file. Notice that in our case, the high SCN for log file #1 is 0xffff.ffffffff. This means that there is no high SCN yet because this is the active online redo log! Similar information is presented for the other sections.
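As an aside, these log file records line up with what the V$LOG and V$LOGFILE views report; a minimal sketch for cross-checking:

select l.group#, l.sequence#, l.first_change#, l.status, f.member
from v$log l, v$logfile f
where f.group# = l.group#
order by l.group#;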

 

There are more levels to the control file dump. Level 4 shows all of the above information and adds the four most recent records for circular reuse record types. Level 5 adds the same information but doubles the number of most recent records for circular reuse record types. Each additional level doubles it again.

 

The information found in control file dumps is pretty self-explanatory. So I won’t go into much more detail here. Try dumping a control file and examining the contents yourself.

 

 

Conclusion

 

Hopefully, you’ve had some fun peering down to the internal operations of different files that make up an Oracle database. We’ve seen how to dump table data blocks, indexes, and control files. We’ve also seen how to make sense of some of the information presented to us in our dump. While you may not find many practical uses for these techniques, it is a good exercise.