12C Oracle ASM Filter Driver
http://www.jydba.net/index.php/archives/2540 (Fri, 31 Aug 2018)

Oracle ASM Filter Driver (Oracle ASMFD) simplifies the configuration and management of disk devices by eliminating the need to rebind them for Oracle ASM after each system restart. Oracle ASMFD is a kernel module that sits in the I/O path of the Oracle ASM disks. Oracle ASM uses the filter driver to validate write I/O requests to Oracle ASM disks; Oracle ASMFD rejects any invalid I/O request, which prevents accidental overwrites from corrupting the disks and files in a disk group. For example, the filter driver filters out all non-Oracle I/O operations that could inadvertently overwrite a disk. Starting with Oracle 12.2, Oracle ASMFD cannot be installed on a system where Oracle ASMLIB is installed; to install and configure Oracle ASMFD you must first deinstall Oracle ASMLIB. The 12.2 ASMFD does not support extended partition tables.
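Because 12.2 ASMFD cannot coexist with ASMLIB, it is worth checking for ASMLIB packages before starting. A minimal sketch (the `asmlib_installed` helper is ours, not from this post; it assumes the usual `oracleasm*` RPM package names):

```shell
# Sketch: detect Oracle ASMLIB packages before configuring ASMFD.
# asmlib_installed reads an `rpm -qa`-style package listing from its
# argument (one package per line) and succeeds if any oracleasm* package
# is present.
asmlib_installed() {
  printf '%s\n' "$1" | grep -q '^oracleasm'
}

pkgs=$(rpm -qa 2>/dev/null || true)
if asmlib_installed "$pkgs"; then
  echo "ASMLIB packages found: deinstall them before configuring ASMFD"
else
  echo "no ASMLIB packages found"
fi
```

On a system with ASMLIB present, the `oracleasm-support` and `oracleasmlib` packages would show up in the listing and must be removed first.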

Configuring Oracle ASM Filter Driver
Oracle ASM Filter Driver (Oracle ASMFD) can be configured persistently for disk devices either during Oracle Grid Infrastructure installation or after it.

Configuring Oracle ASM Filter Driver during Oracle Grid Infrastructure installation
During an Oracle Grid Infrastructure installation you can choose to have Oracle ASM Filter Driver installed and configured automatically. If udev is not in use on the system, you can prepare the disks for Oracle ASMFD before installing Oracle Grid Infrastructure with the steps below. These steps must be performed after the Oracle Grid Infrastructure software has been unpacked into the Grid home, but before ASMFD is configured.

1. To configure shared disks for Oracle ASM Filter Driver, as the root user set the environment variable $ORACLE_HOME to the Grid home and $ORACLE_BASE to a temporary directory:

# export ORACLE_HOME=/u01/app/oracle/12.2.0/grid
# export ORACLE_BASE=/tmp

Setting ORACLE_BASE to a temporary directory prevents diagnostic and trace files from being created in the Grid home before Oracle Grid Infrastructure is installed. Make sure the commands below are run from the $ORACLE_HOME/bin directory.

2. Use the ASMCMD afd_label command to prepare the disks for Oracle ASM Filter Driver:

# asmcmd afd_label DATA1 /dev/disk1a --init

3. Use the ASMCMD afd_lslbl command to verify that the disk has been labeled for use by Oracle ASMFD:

# asmcmd afd_lslbl /dev/disk1a

Check a single disk:

[root@cs1 bin]# ./asmcmd afd_lslbl /dev/asmdisk01
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
CRS2                                  /dev/asmdisk01

List all disks that have been labeled for Oracle ASMFD:

[grid@jytest1 ~]$ asmcmd afd_lslbl 
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
CRS1                                  /dev/asmdisk02
CRS2                                  /dev/asmdisk01
DATA1                                 /dev/asmdisk03
DATA2                                 /dev/asmdisk04
FRA1                                  /dev/asmdisk07
TEST1                                 /dev/asmdisk05
TEST2                                 /dev/asmdisk06

4. After the disks have been prepared for Oracle ASMFD, unset the ORACLE_BASE variable:

# unset ORACLE_BASE

5. Run the installer script (gridSetup.sh) to install Oracle Grid Infrastructure with the Oracle ASM Filter Driver configuration enabled.

Configuring Oracle ASM Filter Driver after Oracle Grid Infrastructure installation
If Oracle ASMFD configuration was not enabled during the Grid Infrastructure installation, the Oracle ASM devices can be configured for Oracle ASMFD manually.

To configure Oracle ASM for an Oracle Grid Infrastructure Clusterware environment, proceed as follows:
1. As the Oracle Grid Infrastructure user, update the Oracle ASM disk discovery string so that Oracle ASMFD can discover the disks.
First check the current Oracle ASM disk discovery string, then update it:

[grid@cs1 ~]$ asmcmd dsget
parameter:/dev/sd*, /dev/asm* 
profile:/dev/sd*,/dev/asm* 

Add 'AFD:*' to the disk discovery string:

[grid@cs1 ~]$ asmcmd dsset '/dev/sd*','/dev/asm*','AFD:*'
[grid@cs1 ~]$ asmcmd dsget
parameter:/dev/sd*, /dev/asm*, AFD:*
profile:/dev/sd*,/dev/asm*,AFD:*

2. As the Oracle Grid Infrastructure user, get the list of nodes and node roles in the cluster:

[grid@cs1 ~]$ olsnodes -a
cs1     Hub
cs2     Hub

3. Perform the following steps on each Hub and Leaf node, in either rolling or non-rolling mode.
3.1 As root, stop Oracle Grid Infrastructure:

# $ORACLE_HOME/bin/crsctl stop crs

If the command returns an error, force Oracle Grid Infrastructure to stop:

# $ORACLE_HOME/bin/crsctl stop crs -f

3.2 As root, configure Oracle ASMFD on the node:

# $ORACLE_HOME/bin/asmcmd afd_configure

3.3 As the Oracle Grid Infrastructure user, verify the state of Oracle ASMFD:

[grid@cs2 ~]$ asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'cs2.jy.net'

3.4 As root, start the Oracle Clusterware stack:

# $ORACLE_HOME/bin/crsctl start crs

3.5 As the Oracle Grid Infrastructure user, set the Oracle ASM disk discovery string back to the original value retrieved in step 1:

[grid@cs1 ~]$ asmcmd dsset '/dev/sd*','/dev/asm*'

Migrating a disk group that does not contain OCR or voting files to Oracle ASMFD
1. Perform the following steps as the Oracle Grid Infrastructure user.

2. List the existing disk groups:

[grid@cs2 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     40960     1544                0            1544              0             Y  CRS/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960      860                0             860              0             N  DATA/
MOUNTED  NORMAL  N         512             512   4096  4194304     40960    40704                0           20352              0             N  DN/

3. List the disks of the relevant disk group:

[grid@cs2 ~]$ asmcmd lsdsk -G DN
Path
/dev/asmdisk03
/dev/asmdisk05

The query below shows that the LABEL column for /dev/asmdisk03 and /dev/asmdisk05 is empty:

SQL> select group_number,disk_number,name,label,path from v$asm_disk;

GROUP_NUMBER DISK_NUMBER NAME                           LABEL                                              PATH
------------ ----------- ------------------------------ -------------------------------------------------- --------------------------------------------------
           0           0                                CRS2                                               /dev/asmdisk01
           0           1                                CRS1                                               /dev/asmdisk02
           0           2                                DATA1                                              /dev/asmdisk04
           3           0 DN_0000                                                                           /dev/asmdisk03
           3           1 DN_0001                                                                           /dev/asmdisk05
           1           0 CRS1                           CRS1                                               AFD:CRS1
           2           0 DATA1                          DATA1                                              AFD:DATA1
           1           1 CRS2                           CRS2                                               AFD:CRS2

4. Check whether Oracle ASM is active:

[grid@cs2 ~]$ srvctl status asm
ASM is running on cs1,cs2

5. On all nodes, stop the databases and dismount the disk group:

[grid@cs2 ~]$ srvctl stop diskgroup -diskgroup DN -f

6. On each Hub node, run the following commands to label all the existing disks in the disk group:

[grid@cs2 ~]$ asmcmd afd_label DN1 /dev/asmdisk03 --migrate
[grid@cs2 ~]$ asmcmd afd_label DN2 /dev/asmdisk05 --migrate

7. Scan the disks on all Hub nodes:

[grid@cs1 ~]$ asmcmd afd_scan
[grid@cs2 ~]$ asmcmd afd_scan

8. Start the databases and mount the disk group on all nodes:

[grid@cs2 ~]$ srvctl start diskgroup -diskgroup DN

The query below now shows DN1 and DN2 in the LABEL column for /dev/asmdisk03 and /dev/asmdisk05:

SQL> select group_number,disk_number,name,label,path from v$asm_disk;

GROUP_NUMBER DISK_NUMBER NAME                           LABEL                                              PATH
------------ ----------- ------------------------------ -------------------------------------------------- --------------------------------------------------
           0           0                                CRS2                                               /dev/asmdisk01
           0           1                                DN2                                                /dev/asmdisk05
           0           2                                DN1                                                /dev/asmdisk03
           0           3                                CRS1                                               /dev/asmdisk02
           0           4                                DATA1                                              /dev/asmdisk04
           1           1 CRS2                           CRS2                                               AFD:CRS2
           2           0 DATA1                          DATA1                                              AFD:DATA1
           1           0 CRS1                           CRS1                                               AFD:CRS1
           3           0 DN_0000                        DN1                                                AFD:DN1
           3           1 DN_0001                        DN2                                                AFD:DN2

The original udev rules file can now be removed; do this on all nodes. After the server is next restarted, AFD takes over completely.

[root@cs1 bin]# cd /etc/udev/rules.d/
[root@cs1 rules.d]# ls -lrt
total 16
-rw-r--r--. 1 root root  709 Mar  6  2015 70-persistent-ipoib.rules
-rw-r--r--  1 root root 1416 Mar  9 12:23 99-my-asmdevices.rules
-rw-r--r--  1 root root  224 Mar  9 15:52 53-afd.rules
-rw-r--r--  1 root root  190 Mar  9 15:54 55-usm.rules

[root@cs1 rules.d]# mv 99-my-asmdevices.rules 99-my-asmdevices.rules.bak

[root@cs1 rules.d]# cat 53-afd.rules
#
# AFD devices
KERNEL=="oracleafd/.*", OWNER="grid", GROUP="asmadmin", MODE="0775"
KERNEL=="oracleafd/*", OWNER="grid", GROUP="asmadmin", MODE="0775"
KERNEL=="oracleafd/disks/*", OWNER="grid", GROUP="asmadmin", MODE="0664"


[root@cs1 rules.d]# ls -l /dev/oracleafd/disks
total 20
-rwxrwx--- 1 grid oinstall 15 Aug 30 14:30 CRS1
-rwxrwx--- 1 grid oinstall 15 Aug 30 14:30 CRS2
-rwxrwx--- 1 grid oinstall 15 Aug 30 14:30 DATA1
-rwxrwx--- 1 grid oinstall 15 Aug 30 17:42 DN1
-rwxrwx--- 1 grid oinstall 15 Aug 30 17:42 DN2

[root@cs2 bin]# cd /etc/udev/rules.d/
[root@cs2 rules.d]# ls -lrt
total 16
-rw-r--r--. 1 root root  709 Mar  6  2015 70-persistent-ipoib.rules
-rw-r--r--  1 root root 1416 Mar  9 12:23 99-my-asmdevices.rules
-rw-r--r--  1 root root  224 Mar  9 15:52 53-afd.rules
-rw-r--r--  1 root root  190 Mar  9 15:54 55-usm.rules

[root@cs2 rules.d]# mv 99-my-asmdevices.rules 99-my-asmdevices.rules.bak

[root@cs2 rules.d]# cat 53-afd.rules
#
# AFD devices
KERNEL=="oracleafd/.*", OWNER="grid", GROUP="asmadmin", MODE="0775"
KERNEL=="oracleafd/*", OWNER="grid", GROUP="asmadmin", MODE="0775"
KERNEL=="oracleafd/disks/*", OWNER="grid", GROUP="asmadmin", MODE="0664"


[root@cs2 rules.d]# ls -l /dev/oracleafd/disks
total 20
-rwxrwx--- 1 grid oinstall 15 Aug 30 14:30 CRS1
-rwxrwx--- 1 grid oinstall 15 Aug 30 14:30 CRS2
-rwxrwx--- 1 grid oinstall 15 Aug 30 14:30 DATA1
-rwxrwx--- 1 grid oinstall 15 Aug 30 17:42 DN1
-rwxrwx--- 1 grid oinstall 15 Aug 30 17:42 DN2

In fact, AFD itself also relies on udev.

Migrating a disk group that contains OCR or voting files to Oracle ASMFD
1. As root, list the disk groups that contain the OCR and voting files:

[root@cs1 ~]# cd /u01/app/product/12.2.0/crs/bin
[root@cs1 bin]# sh ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         :       +CRS
         Device/File Name         :      +DATA
[root@cs1 bin]# sh crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   750a78e1ae984fcdbfb4dbf44d337a77 (/dev/asmdisk02) [CRS]
Located 1 voting disk(s).

2. As the Oracle Grid Infrastructure user, list the disks that belong to the disk group:

[grid@cs2 ~]$ asmcmd lsdsk -G CRS
Path
/dev/asmdisk01
/dev/asmdisk02

3. As root, stop the databases and Oracle Clusterware on all nodes:

[root@cs1 bin]# ./crsctl stop cluster -all
CRS-2673: Attempting to stop 'ora.crsd' on 'cs2'
CRS-2673: Attempting to stop 'ora.crsd' on 'cs1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'cs2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'cs1'
CRS-2673: Attempting to stop 'ora.cs.db' on 'cs1'
CRS-2673: Attempting to stop 'ora.qosmserver' on 'cs2'
CRS-2673: Attempting to stop 'ora.cs.db' on 'cs2'
CRS-2673: Attempting to stop 'ora.chad' on 'cs2'
CRS-2673: Attempting to stop 'ora.gns' on 'cs2'
CRS-2677: Stop of 'ora.gns' on 'cs2' succeeded
CRS-2677: Stop of 'ora.cs.db' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'cs2'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'cs2'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'cs2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'cs2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'cs2'
CRS-2673: Attempting to stop 'ora.cvu' on 'cs2'
CRS-2673: Attempting to stop 'ora.gns.vip' on 'cs2'
CRS-2677: Stop of 'ora.cs.db' on 'cs1' succeeded
CRS-2677: Stop of 'ora.CRS.dg' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'cs1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'cs1'
CRS-2677: Stop of 'ora.DATA.dg' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'cs2'
CRS-2677: Stop of 'ora.asm' on 'cs2' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'cs2'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'cs1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'cs2' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'cs2' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'cs2' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.cs2.vip' on 'cs2'
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'cs2'
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'cs2'
CRS-2677: Stop of 'ora.gns.vip' on 'cs2' succeeded
CRS-2677: Stop of 'ora.scan3.vip' on 'cs2' succeeded
CRS-2677: Stop of 'ora.cs2.vip' on 'cs2' succeeded
CRS-2677: Stop of 'ora.qosmserver' on 'cs2' succeeded
CRS-2677: Stop of 'ora.scan2.vip' on 'cs2' succeeded
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.chad' on 'cs1'
CRS-2677: Stop of 'ora.cvu' on 'cs2' succeeded
CRS-2677: Stop of 'ora.chad' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'cs2'
CRS-2677: Stop of 'ora.ons' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'cs2'
CRS-2677: Stop of 'ora.net1.network' on 'cs2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'cs2' has completed
CRS-2677: Stop of 'ora.crsd' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'cs2'
CRS-2673: Attempting to stop 'ora.evmd' on 'cs2'
CRS-2673: Attempting to stop 'ora.storage' on 'cs2'
CRS-2677: Stop of 'ora.storage' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'cs2'
CRS-2677: Stop of 'ora.ctssd' on 'cs2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'cs2' succeeded
CRS-2677: Stop of 'ora.chad' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.mgmtdb' on 'cs1'
CRS-2677: Stop of 'ora.mgmtdb' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.MGMTLSNR' on 'cs1'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'cs1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'cs1'
CRS-2677: Stop of 'ora.CRS.dg' on 'cs1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'cs1'
CRS-2677: Stop of 'ora.MGMTLSNR' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.cs1.vip' on 'cs1'
CRS-2677: Stop of 'ora.cs1.vip' on 'cs1' succeeded
CRS-2677: Stop of 'ora.asm' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'cs2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'cs2'
CRS-2677: Stop of 'ora.cssd' on 'cs2' succeeded
CRS-2677: Stop of 'ora.asm' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'cs1'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'cs1'
CRS-2677: Stop of 'ora.ons' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'cs1'
CRS-2677: Stop of 'ora.net1.network' on 'cs1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'cs1' has completed
CRS-2677: Stop of 'ora.crsd' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'cs1'
CRS-2673: Attempting to stop 'ora.evmd' on 'cs1'
CRS-2673: Attempting to stop 'ora.storage' on 'cs1'
CRS-2677: Stop of 'ora.storage' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'cs1'
CRS-2677: Stop of 'ora.ctssd' on 'cs1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'cs1' succeeded
CRS-2677: Stop of 'ora.asm' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'cs1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'cs1'
CRS-2677: Stop of 'ora.cssd' on 'cs1' succeeded

4. As the Oracle Grid Infrastructure user, run the following commands on each Hub node to label the disks in the disk group:

[grid@cs2 ~]$ asmcmd afd_label DN1 /dev/asmdisk03
[grid@cs2 ~]$ asmcmd afd_label DN2 /dev/asmdisk05

5. As the Oracle Grid Infrastructure user, rescan the disks on each Hub node:

[grid@cs1 ~]$ asmcmd afd_scan
[grid@cs2 ~]$ asmcmd afd_scan

6. As root, start the Oracle Clusterware stack on all nodes, which mounts the disk groups containing the OCR and voting files and starts the databases:

[root@cs1 bin]# ./crsctl start cluster -all
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'cs1'
CRS-2672: Attempting to start 'ora.evmd' on 'cs1'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'cs2'
CRS-2672: Attempting to start 'ora.evmd' on 'cs2'
CRS-2676: Start of 'ora.cssdmonitor' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'cs1'
CRS-2672: Attempting to start 'ora.diskmon' on 'cs1'
CRS-2676: Start of 'ora.diskmon' on 'cs1' succeeded
CRS-2676: Start of 'ora.evmd' on 'cs1' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'cs2'
CRS-2672: Attempting to start 'ora.diskmon' on 'cs2'
CRS-2676: Start of 'ora.diskmon' on 'cs2' succeeded
CRS-2676: Start of 'ora.evmd' on 'cs2' succeeded
CRS-2676: Start of 'ora.cssd' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'cs1'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'cs1'
CRS-2676: Start of 'ora.cssd' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'cs2'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'cs2'
CRS-2676: Start of 'ora.ctssd' on 'cs1' succeeded
CRS-2676: Start of 'ora.ctssd' on 'cs2' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'cs1'
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'cs2'
CRS-2676: Start of 'ora.asm' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'cs2'
CRS-2676: Start of 'ora.asm' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'cs1'
CRS-2676: Start of 'ora.storage' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'cs1'
CRS-2676: Start of 'ora.crsd' on 'cs1' succeeded
CRS-2676: Start of 'ora.storage' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'cs2'
CRS-2676: Start of 'ora.crsd' on 'cs2' succeeded

Determining whether Oracle ASM Filter Driver is configured
You can determine whether Oracle ASMFD is configured by checking the AFD_STATE attribute of the SYS_ASMFD_PROPERTIES context in the Oracle ASM instance, or by running the ASMCMD afd_state command:

[grid@cs1 ~]$ asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'DISABLED' on host 'cs1.jy.net'

[grid@cs2 ~]$ asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'cs2.jy.net'

In the query below, an AFD_STATE value of NOT AVAILABLE would indicate that Oracle ASMFD is not configured; here it returns CONFIGURED:

SQL> select sys_context('SYS_ASMFD_PROPERTIES', 'AFD_STATE') from dual;

SYS_CONTEXT('SYS_ASMFD_PROPERTIES','AFD_STATE')
--------------------------------------------------------------------------- 
CONFIGURED

Setting the Oracle ASM Filter Driver AFD_DISKSTRING parameter
The AFD_DISKSTRING parameter specifies the Oracle ASMFD disk discovery string, which identifies the disks managed by Oracle ASMFD. You can set and display AFD_DISKSTRING with the ASMCMD afd_dsset and afd_dsget commands:

[grid@cs1 ~]$ asmcmd afd_dsset '/dev/sd*','/dev/asm*','AFD:*'

[grid@cs2 ~]$ asmcmd afd_dsset '/dev/sd*','/dev/asm*','AFD:*'

[grid@cs1 ~]$ asmcmd dsget
parameter:/dev/sd*, /dev/asm*, AFD:*
profile:/dev/sd*,/dev/asm*,AFD:*

[grid@cs2 ~]$ asmcmd dsget
parameter:/dev/sd*, /dev/asm*, AFD:*
profile:/dev/sd*,/dev/asm*,AFD:*

AFD_DISKSTRING can also be set with an ALTER SYSTEM statement. The labels that have been written to the disk headers allow the disks to be identified through the Oracle ASMFD disk discovery string:

SQL> ALTER SYSTEM AFD_DISKSTRING SET '/dev/sd*','/dev/asm*','AFD:*';
System altered.

SQL> SELECT SYS_CONTEXT('SYS_ASMFD_PROPERTIES', 'AFD_DISKSTRING') FROM DUAL;

SYS_CONTEXT('SYS_ASMFD_PROPERTIES','AFD_DISKSTRING')
----------------------------------------------------------------------------------- 
/dev/sd*,/dev/asm*,AFD:*

Setting the Oracle ASM ASM_DISKSTRING parameter for Oracle ASM Filter Driver disks
You can update the Oracle ASM disk discovery string to add or remove Oracle ASMFD label names in the ASM_DISKSTRING parameter, using either an ALTER SYSTEM statement or the asmcmd dsset command:

SQL> show parameter asm_diskstring

NAME                                 TYPE                   VALUE
------------------------------------ ---------------------- ------------------------------
asm_diskstring                       string                 /dev/sd*, /dev/asm*, AFD:*

[grid@cs1 ~]$ asmcmd dsset '/dev/sd*','/dev/asm*','AFD:*'
[grid@cs2 ~]$ asmcmd dsset '/dev/sd*','/dev/asm*','AFD:*'

Testing the filter function
First check whether filtering is enabled:

[grid@cs1 ~]$ asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'cs1.jy.net'

[grid@jytest1 ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
CRS1                        ENABLED   /dev/asmdisk02
CRS2                        ENABLED   /dev/asmdisk01
DATA1                       ENABLED   /dev/asmdisk03
DATA2                       ENABLED   /dev/asmdisk04
FRA1                        ENABLED   /dev/asmdisk07
TEST1                       ENABLED   /dev/asmdisk05
TEST2                       ENABLED   /dev/asmdisk06

The output above shows that filtering is enabled. To disable filtering, run asmcmd afd_filter -d:

[grid@cs1 ~]$ asmcmd afd_filter -d
[grid@cs1 ~]$ asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'jytest1.jydba.net'
[grid@cs1 ~]$ asmcmd afd_lsdsk
There are no labelled devices.

To enable filtering again, run asmcmd afd_filter -e:

[grid@jytest1 ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
CRS1                        ENABLED   /dev/asmdisk02
CRS2                        ENABLED   /dev/asmdisk01
DATA1                       ENABLED   /dev/asmdisk03
DATA2                       ENABLED   /dev/asmdisk04
FRA1                        ENABLED   /dev/asmdisk07
TEST1                       ENABLED   /dev/asmdisk05
TEST2                       ENABLED   /dev/asmdisk06

First read the disk header of AFD:TEST1 in disk group TEST with kfed to verify that it is intact:

[grid@jytest1 ~]$ kfed read AFD:TEST1
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  3275580027 ; 0x00c: 0xc33d627b
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:    ORCLDISKTEST1 ; 0x000: length=13
kfdhdb.driver.reserved[0]:   1414743380 ; 0x008: 0x54534554
kfdhdb.driver.reserved[1]:           49 ; 0x00c: 0x00000031
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                203424000 ; 0x020: 0x0c200100
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        2 ; 0x026: KFDGTP_NORMAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:                   TEST1 ; 0x028: length=5
kfdhdb.grpname:                    TEST ; 0x048: length=4
kfdhdb.fgname:                    TEST1 ; 0x068: length=5

Now try to zero the disk header directly with dd. The dd command itself returns no error, even though the writes are being filtered:

[root@cs1 ~]# dd if=/dev/zero of=/dev/asmdisk03 bs=1024 count=10000
10000+0 records in
10000+0 records out
10240000 bytes (10 MB) copied, 1.24936 s, 8.2 MB/s

Back up the first 1024 bytes of the disk and then clear them (note that a non-root user has no permission to read the device):

[root@jytest1 ~]# dd if=/dev/asmdisk05 of=asmdisk05_header bs=1024 count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.000282638 s, 3.6 MB/s
[root@jytest1 ~]# ls -lrt
 
-rw-r--r--  1 root root 1024 Aug 31 01:22 asmdisk05_header


[root@jytest1 ~]# dd if=/dev/zero of=/dev/asmdisk05 bs=1024 count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.000318516 s, 3.2 MB/s

Read the disk header of AFD:TEST1 in disk group TEST with kfed again to verify that it is still intact:

[grid@jytest1 ~]$ kfed read AFD:TEST1
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  3275580027 ; 0x00c: 0xc33d627b
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:    ORCLDISKTEST1 ; 0x000: length=13
kfdhdb.driver.reserved[0]:   1414743380 ; 0x008: 0x54534554
kfdhdb.driver.reserved[1]:           49 ; 0x00c: 0x00000031
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                203424000 ; 0x020: 0x0c200100
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        2 ; 0x026: KFDGTP_NORMAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:                   TEST1 ; 0x028: length=5
kfdhdb.grpname:                    TEST ; 0x048: length=4
kfdhdb.fgname:                    TEST1 ; 0x068: length=5

Test that disk group TEST can be dismounted and then mounted again; both succeed:

[grid@jytest1 ~]$ asmcmd umount TEST
[grid@jytest1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     40960      264                0             264              0             Y  CRS/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960    24732                0           24732              0             N  DATA/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960    18452                0           18452              0             N  FRA/
[grid@jytest1 ~]$ asmcmd mount TEST
[grid@jytest1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     40960      264                0             264              0             Y  CRS/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960    24732                0           24732              0             N  DATA/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960    18452                0           18452              0             N  FRA/
MOUNTED  NORMAL  N         512             512   4096  4194304     40960    11128                0            5564              0             N  TEST/

Now disable filtering for the disk /dev/asmdisk05 only:

[grid@jytest1 ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
CRS1                        ENABLED   /dev/asmdisk02
CRS2                        ENABLED   /dev/asmdisk01
DATA1                       ENABLED   /dev/asmdisk03
DATA2                       ENABLED   /dev/asmdisk04
FRA1                        ENABLED   /dev/asmdisk07
TEST1                       ENABLED   /dev/asmdisk05
TEST2                       ENABLED   /dev/asmdisk06
[grid@jytest1 ~]$ asmcmd afd_filter -d /dev/asmdisk05
[grid@jytest1 ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
CRS1                        ENABLED   /dev/asmdisk02
CRS2                        ENABLED   /dev/asmdisk01
DATA1                       ENABLED   /dev/asmdisk03
DATA2                       ENABLED   /dev/asmdisk04
FRA1                        ENABLED   /dev/asmdisk07
TEST1                      DISABLED   /dev/asmdisk05
TEST2                       ENABLED   /dev/asmdisk06
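A quick way to read one disk's Filtering state out of `asmcmd afd_lsdsk` output is to filter the table with awk. A small sketch (the `filter_state` helper is ours, applied here to a captured listing like the one above):

```shell
# filter_state LABEL: print the Filtering column for one label from
# afd_lsdsk-style output supplied on stdin. The first three lines of the
# listing (dashes, header, equals) are skipped.
filter_state() {
  awk -v l="$1" 'NR > 3 && $1 == l { print $2 }'
}

# Example against a captured listing:
lsdsk_output='--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
CRS1                        ENABLED   /dev/asmdisk02
TEST1                      DISABLED   /dev/asmdisk05'

printf '%s\n' "$lsdsk_output" | filter_state TEST1
```

On a live system you would pipe the real listing instead: `asmcmd afd_lsdsk | filter_state TEST1`.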

Wipe the first 1024 bytes of the disk again:

 
[root@jytest1 ~]# dd if=/dev/zero of=/dev/asmdisk05 bs=1024 count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.000318516 s, 3.2 MB/s

[grid@jytest1 ~]$ asmcmd umount TEST
[grid@jytest1 ~]$ asmcmd mount TEST
ORA-15032: not all alterations performed
ORA-15017: diskgroup "TEST" cannot be mounted
ORA-15040: diskgroup is incomplete (DBD ERROR: OCIStmtExecute)


[grid@jytest1 ~]$ kfed read AFD:TEST1
kfbh.endian:                          0 ; 0x000: 0x00
kfbh.hard:                            0 ; 0x001: 0x00
kfbh.type:                            0 ; 0x002: KFBTYP_INVALID
kfbh.datfmt:                          0 ; 0x003: 0x00
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       0 ; 0x008: file=0
kfbh.check:                           0 ; 0x00c: 0x00000000
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
000000000 00000000 00000000 00000000 00000000  [................]
  Repeat 255 times
KFED-00322: invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]

This shows that once filtering is disabled, the protection is gone.

Use the 1024-byte header backup taken earlier to restore the disk header:

[root@jytest1 ~]# dd if=asmdisk05_header of=/dev/asmdisk05 bs=1024 count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.000274822 s, 3.7 MB/s

[grid@jytest1 ~]$ kfed  read /dev/asmdisk05
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  1645917758 ; 0x00c: 0x621ab63e
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:    ORCLDISKTEST1 ; 0x000: length=13
kfdhdb.driver.reserved[0]:   1414743380 ; 0x008: 0x54534554
kfdhdb.driver.reserved[1]:           49 ; 0x00c: 0x00000031
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                203424000 ; 0x020: 0x0c200100
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        2 ; 0x026: KFDGTP_NORMAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:                   TEST1 ; 0x028: length=5
kfdhdb.grpname:                    TEST ; 0x048: length=4
kfdhdb.fgname:                    TEST1 ; 0x068: length=5
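The dd-based backup, wipe, and restore sequence used in this test can be exercised safely against an ordinary file instead of a real device. A sketch (the file names here are made up; on the real system the target is the ASM disk device, e.g. /dev/asmdisk05, and dd must run as root):

```shell
# Simulate a disk with a plain file, then back up, wipe, and restore its
# first 1024 bytes, mirroring the dd commands used above.
disk=fake_asmdisk.img
backup=asmdisk_header.bak

dd if=/dev/zero of="$disk" bs=1024 count=4 2>/dev/null          # stand-in "disk"
printf 'ORCLDISK_HEADER' | dd of="$disk" conv=notrunc 2>/dev/null  # fake header

dd if="$disk" of="$backup" bs=1024 count=1 2>/dev/null          # back up header
dd if=/dev/zero of="$disk" bs=1024 count=1 conv=notrunc 2>/dev/null  # wipe it
dd if="$backup" of="$disk" bs=1024 count=1 conv=notrunc 2>/dev/null  # restore

head -c 15 "$disk"   # the header content is back
```

Note the `conv=notrunc` flag: without it, dd truncates a regular file after writing, whereas writes to a block device never truncate.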

Mount the TEST disk group again:

[grid@jytest1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     40960      264                0             264              0             Y  CRS/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960    24732                0           24732              0             N  DATA/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960    18452                0           18452              0             N  FRA/
[grid@jytest1 ~]$ asmcmd mount TEST
[grid@jytest1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     40960      264                0             264              0             Y  CRS/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960    24732                0           24732              0             N  DATA/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960    18452                0           18452              0             N  FRA/
MOUNTED  NORMAL  N         512             512   4096  4194304     40960    11120                0            5560              0             N  TEST/
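In the lsdg output above, TEST is the only NORMAL-redundancy disk group, which is why its Usable_file_MB (5560) is half of its Free_MB (11120): two-way mirroring writes every extent twice. A minimal sketch of that arithmetic, with the column values hard-coded from the TEST row above:

```shell
# Usable file space for a NORMAL-redundancy disk group, computed from the
# Free_MB and Req_mir_free_MB columns of "asmcmd lsdg" (TEST row above).
free_mb=11120
req_mir_free_mb=0
usable_file_mb=$(( (free_mb - req_mir_free_mb) / 2 ))   # two-way mirroring
echo "$usable_file_mb"
```

For EXTERN redundancy there is no mirroring, so Usable_file_MB simply equals Free_MB, as the CRS, DATA, and FRA rows show.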

Setting, Clearing, and Scanning Oracle ASM Filter Driver Labels
Assign a label to a disk to place it under Oracle ASMFD management; once the label is set, the disk is managed by Oracle ASMFD. The ASMCMD afd_label, afd_unlabel, and afd_scan commands add, remove, and scan labels. Listing the labeled disks shows that /dev/asmdisk03 and /dev/asmdisk05 have not yet been labeled:
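To see at a glance which devices already carry a label, the afd_lsdsk listing can be filtered with standard text tools. The sketch below runs on a captured copy of the output shown here; it assumes the three-column format and the literal device paths of this example:

```shell
# Extract the labeled device paths from sample "asmcmd afd_lsdsk" output.
lsdsk_output='Label                     Filtering   Path
================================================================================
CRS1                        ENABLED   /dev/asmdisk02
CRS2                        ENABLED   /dev/asmdisk01
DATA1                       ENABLED   /dev/asmdisk04'

# Data rows carry the filtering state in column 2 and the path in column 3;
# the header and separator rows do not match the ENABLED test.
labeled_paths=$(printf '%s\n' "$lsdsk_output" | awk '$2 == "ENABLED" {print $3}')
echo "$labeled_paths"
```

Any candidate disk missing from this list (here /dev/asmdisk03 and /dev/asmdisk05) still needs an afd_label before ASMFD will manage it.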

[grid@cs1 ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
CRS1                        ENABLED   /dev/asmdisk02
CRS2                        ENABLED   /dev/asmdisk01
DATA1                       ENABLED   /dev/asmdisk04

SQL> select group_number,disk_number,name,label,path from v$asm_disk;

GROUP_NUMBER DISK_NUMBER NAME                           LABEL                                              PATH
------------ ----------- ------------------------------ -------------------------------------------------- --------------------------------------------------
           0           0                                CRS1                                               AFD:CRS1
           0           1                                                                                   /dev/asmdisk05
           0           2                                DATA1                                              AFD:DATA1
           0           3                                                                                   /dev/asmdisk03
           0           4                                CRS2                                               AFD:CRS2
           1           0 CRS1                           CRS1                                               /dev/asmdisk02
           1           1 CRS2                           CRS2                                               /dev/asmdisk01
           2           0 DATA1                          DATA1                                              /dev/asmdisk04

Set the labels:

[grid@cs2 ~]$ asmcmd afd_label DN1 /dev/asmdisk03
[grid@cs2 ~]$ asmcmd afd_label DN2 /dev/asmdisk05

Listing the labeled disks again shows that /dev/asmdisk03 and /dev/asmdisk05 are now labeled:

[grid@cs1 ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
CRS1                        ENABLED   /dev/asmdisk02
CRS2                        ENABLED   /dev/asmdisk01
DATA1                       ENABLED   /dev/asmdisk04
DN1                         ENABLED   /dev/asmdisk03
DN2                         ENABLED   /dev/asmdisk05

SQL> select group_number,disk_number,name,label,path from v$asm_disk;

GROUP_NUMBER DISK_NUMBER NAME                           LABEL                                                          PATH
------------ ----------- ------------------------------ -------------------------------------------------------------- --------------------------------------------------
           0           0                                CRS1                                                           AFD:CRS1
           0           1                                DN2                                                            /dev/asmdisk05
           0           2                                DN1                                                            AFD:DN1
           0           3                                DATA1                                                          AFD:DATA1
           0           4                                DN1                                                            /dev/asmdisk03
           0           6                                CRS2                                                           AFD:CRS2
           0           5                                DN2                                                            AFD:DN2
           1           1 CRS2                           CRS2                                                           /dev/asmdisk01
           1           0 CRS1                           CRS1                                                           /dev/asmdisk02
           2           0 DATA1                          DATA1                                                          /dev/asmdisk04

Clear the labels:

[grid@cs1 ~]$ asmcmd afd_unlabel 'DN1'
[grid@cs1 ~]$ asmcmd afd_unlabel 'DN2'

Note that a label cannot be cleared if the disk it marks has already been used to create a disk group, for example:

[grid@cs1 ~]$ asmcmd afd_unlabel 'TEST1'
disk AFD:TEST1 is already provisioned for ASM
No devices to be unlabeled.
ASMCMD-9514: ASM disk label clear operation failed.

Scan for labels:

[grid@cs1 ~]$ asmcmd afd_scan

Listing the labeled disks shows that the labels on /dev/asmdisk03 and /dev/asmdisk05 have been cleared:

[grid@cs1 ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
CRS1                        ENABLED   /dev/asmdisk02
CRS2                        ENABLED   /dev/asmdisk01
DATA1                       ENABLED   /dev/asmdisk04

SQL> select group_number,disk_number,name,label,path from v$asm_disk;

GROUP_NUMBER DISK_NUMBER NAME                           LABEL                                                          PATH
------------ ----------- ------------------------------ -------------------------------------------------------------- --------------------------------------------------
           0           0                                CRS1                                                           AFD:CRS1
           0           1                                                                                               /dev/asmdisk05
           0           2                                DATA1                                                          AFD:DATA1
           0           3                                                                                               /dev/asmdisk03
           0           4                                CRS2                                                           AFD:CRS2
           1           1 CRS2                           CRS2                                                           /dev/asmdisk01
           1           0 CRS1                           CRS1                                                           /dev/asmdisk02
           2           0 DATA1                          DATA1                                                          /dev/asmdisk04
Oracle ASM: Using the asmcmd cp Command to Perform Remote Copies
http://www.jydba.net/index.php/archives/2535 Fri, 17 Aug 2018 01:31:40 +0000
The syntax of the cp command is as follows:

cp src_file [--target target_type] [--service service_name] [--port port_num] [connect_str:]tgt_file

--target target_type specifies the target type of the instance that asmcmd must connect to in order to perform the copy. Valid options are ASM, IOS, or APX.
--service service_name specifies the Oracle ASM instance name when it is not the default, +ASM.
--port port_num specifies the listener port; the default is 1521.

connect_str specifies the connection string for connecting to a remote instance. It is not needed for a copy on the local instance. For a remote copy, the connection string must be supplied and a password prompt follows. Its format is:
user@host.SID
user, host, and SID are all required. The default port is 1521 and can be changed with the --port option. The connection privilege (SYSASM or SYSDBA) is determined by the --privilege option given when asmcmd was started.
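The connect string can be taken apart with plain shell parameter expansion. This sketch uses the sample connect string that appears later in this article; note that the SID is everything after the last dot:

```shell
# Decompose an asmcmd remote connect string of the form user@host.SID.
connect_str='sys@10.138.130.175.+ASM1'
user=${connect_str%%@*}    # part before the @        -> sys
rest=${connect_str#*@}     # 10.138.130.175.+ASM1
sid=${rest##*.}            # part after the last dot  -> +ASM1
host=${rest%.*}            # part before the last dot -> 10.138.130.175
echo "$user $host $sid"
```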

src_file is the name of the source file to be copied; it must be a fully qualified file name or an Oracle ASM alias. During an asmcmd copy, Oracle ASM creates an OMF file of the form:
diskgroup/db_unique_name/file_type/file_name.#.#
where db_unique_name is set to ASM and # is a number. During the copy, the cp command creates the directory structure at the destination and creates an alias for the OMF file actually created.
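This naming pattern can be seen in the file that cp creates later in this article, +DATA/ASM/DATAFILE/JY.bak.453.984396007, where the two trailing numbers are the ASM file number and incarnation. A sketch that splits such a name into its components:

```shell
# Split an OMF name of the form diskgroup/db_unique_name/file_type/name.#.#
asm_file='+DATA/ASM/DATAFILE/JY.bak.453.984396007'
diskgroup=$(echo "$asm_file" | cut -d/ -f1)        # +DATA
db_unique_name=$(echo "$asm_file" | cut -d/ -f2)   # ASM (cp sets this to ASM)
file_type=$(echo "$asm_file" | cut -d/ -f3)        # DATAFILE
base=${asm_file##*/}                               # JY.bak.453.984396007
file_number=$(echo "$base" | awk -F. '{print $(NF-1)}')   # second-to-last field
incarnation=$(echo "$base" | awk -F. '{print $NF}')       # last field
echo "$diskgroup $db_unique_name $file_type $file_number $incarnation"
```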

tgt_file is the name of the target file created by the copy operation, or an alias within an alias directory.

Note that the cp command cannot copy files between two remote instances; the local Oracle ASM instance must be either the source or the destination of the copy.

The cp command supports three kinds of copy operations:
1. From a disk group to the operating system
2. From one disk group to another disk group
3. From the operating system to a disk group

Note that some files, such as the OCR and the SPFILE, cannot be copied this way. To back up, copy, or move an Oracle ASM SPFILE, use the spbackup, spcopy, or spmove command. To copy an OCR backup file, the source must be a disk group.

If the files are stored in an Oracle ASM disk group, copies can cross endianness (little-endian and big-endian); Oracle ASM converts the file format automatically. Copies between non-Oracle ASM files and an Oracle ASM disk group across platforms of different endianness are also possible; run the appropriate conversion command on the file after the copy completes.

First, list all files in the +data/cs/datafile directory:

ASMCMD [+data/cs/datafile] > ls -lt
Type      Redund  Striped  Time             Sys  Name
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  N    jy01.dbf => +DATA/cs/DATAFILE/JY.331.976296525
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    USERS.275.970601909
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    UNDOTBS2.284.970602381
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    UNDOTBS1.274.970601905
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    TEST.326.976211663
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    SYSTEM.272.970601831
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    SYSAUX.273.970601881
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    JY.331.976296525
DATAFILE  UNPROT  COARSE   MAR 12 18:00:00  Y    USERS.261.970598319
DATAFILE  UNPROT  COARSE   MAR 12 18:00:00  Y    UNDOTBS1.260.970598319
DATAFILE  UNPROT  COARSE   MAR 12 18:00:00  Y    SYSTEM.258.970598233
DATAFILE  UNPROT  COARSE   MAR 12 18:00:00  Y    SYSAUX.259.970598293

Copy the file +data/cs/datafile/JY.331.976296525 from the disk group to the operating system:

ASMCMD [+] > cp +data/cs/datafile/JY.331.976296525 /home/grid/JY.bak
copying +data/cs/datafile/JY.331.976296525 -> /home/grid/JY.bak

Copy a file from the operating system into the disk group:

ASMCMD [+] > cp /home/grid/JY.bak +data/cs/datafile/JY.bak
copying /home/grid/JY.bak -> +data/cs/datafile/JY.bak

ASMCMD [+] > ls -lt  +data/cs/datafile/
Type      Redund  Striped  Time             Sys  Name
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  N    jy01.dbf => +DATA/cs/DATAFILE/JY.331.976296525
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    USERS.275.970601909
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    UNDOTBS2.284.970602381
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    UNDOTBS1.274.970601905
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    TEST.326.976211663
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    SYSTEM.272.970601831
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    SYSAUX.273.970601881
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  N    JY.bak => +DATA/ASM/DATAFILE/JY.bak.453.984396007
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    JY.331.976296525
DATAFILE  UNPROT  COARSE   MAR 12 18:00:00  Y    USERS.261.970598319
DATAFILE  UNPROT  COARSE   MAR 12 18:00:00  Y    UNDOTBS1.260.970598319
DATAFILE  UNPROT  COARSE   MAR 12 18:00:00  Y    SYSTEM.258.970598233
DATAFILE  UNPROT  COARSE   MAR 12 18:00:00  Y    SYSAUX.259.970598293

Copy the file +data/cs/datafile/JY.331.976296525 from the disk group to a disk group on a remote ASM instance:

ASMCMD [+] > cp +data/cs/datafile/JY.331.976296525 sys@10.138.130.175.+ASM1:+TEST/JY.bak
Enter password: ***********
copying +data/cs/datafile/JY.331.976296525 -> 10.138.130.175:+TEST/JY.bak

ASMCMD [+test] > ls -lt
Type      Redund  Striped  Time             Sys  Name
                                            N    rman_backup/
                                            N    arch/
                                            Y    JY/
                                            Y    DUP/
                                            Y    CS_DG/
                                            Y    ASM/
DATAFILE  MIRROR  COARSE   AUG 17 16:00:00  N    JY.bak => +TEST/ASM/DATAFILE/JY.bak.342.984413875

Copy the file +data/cs/datafile/JY.331.976296525 from the disk group to the operating system of the server hosting the remote ASM instance:

ASMCMD [+] > cp +data/cs/datafile/JY.331.976296525 sys@10.138.130.175.+ASM1:/home/grid/JY.bak
Enter password: ***********
copying +data/cs/datafile/JY.331.976296525 -> 10.138.130.175:/home/grid/JY.bak

[grid@jytest1 ~]$ ls -lrt
-rw-r----- 1 grid oinstall 104865792 Aug 17 16:21 JY.bak

Using the asmcmd cp command is more convenient than using dbms_file_transfer.

Oracle 12C Database File Mapping for Oracle ASM Files
http://www.jydba.net/index.php/archives/2531 Mon, 13 Aug 2018 09:05:32 +0000

To understand I/O performance, you need a detailed picture of the storage hierarchy in which your files are stored. Oracle provides a set of dynamic performance views that show how files map through intermediate logical volumes down to the actual physical devices. Using these views, you can locate the physical disk on which any data block of a given file resides. Oracle Database uses a background process named FMON to manage the mapping information, and provides the PL/SQL package dbms_storage_map to populate the mapping views. Database file mapping requires no third-party dynamic libraries when mapping Oracle ASM files, and Oracle Database supports mapping Oracle ASM files on all operating system platforms.

Enabling File Mapping for Oracle ASM Files
To enable file mapping, set the file_mapping initialization parameter to true. The database instance does not need to be shut down to do this; the parameter can be set with the following alter system statement:

SQL> alter system set file_mapping=true scope=both sid='*';

System altered.

Running the Appropriate dbms_storage_map Mapping Procedures
. In a cold-start scenario, the Oracle database has just started and no mapping operation has been invoked yet. Run the dbms_storage_map.map_all procedure to build the mapping information for the entire I/O subsystem associated with the database. For example, the following command builds the mapping information with 10000 events:

SQL> execute dbms_storage_map.map_all(10000);

PL/SQL procedure successfully completed.

. In a warm-start scenario, the Oracle database has already built the mapping information. You can choose to run the dbms_storage_map.map_save procedure to save the mapping information in the data dictionary. By default this procedure is called by dbms_storage_map.map_all, which forces all mapping information in the SGA to be flushed to disk. After restarting the database, use the dbms_storage_map.restore() procedure to restore the mapping information into the SGA. If needed, dbms_storage_map.map_all() can be run again to refresh the mapping information.

The mapping information generated by the dbms_storage_map package is captured in dynamic performance views: v$map_comp_list, v$map_element, v$map_ext_element, v$map_file, v$map_file_extent, v$map_file_io_stack, v$map_library, and v$map_subelement.
You can query v$map_file for file mapping information:

SQL> select file_map_idx, substr(file_name,1,45), file_type, file_structure from v$map_file;

FILE_MAP_IDX SUBSTR(FILE_NAME,1,45)                                                                     FILE_TYPE   FILE_STRU
------------ ------------------------------------------------------------------------------------------ ----------- ---------
           0 +DATA/CS/DATAFILE/system.272.970601831                                                     DATAFILE    ASMFILE
           1 +DATA/CS/DATAFILE/sysaux.273.970601881                                                     DATAFILE    ASMFILE
           2 +DATA/CS/DATAFILE/undotbs1.274.970601905                                                   DATAFILE    ASMFILE
           3 +DATA/CS/4700A987085B3DFAE05387E5E50A8C7B/DAT                                              DATAFILE    ASMFILE
           4 +DATA/CS/4700A987085B3DFAE05387E5E50A8C7B/DAT                                              DATAFILE    ASMFILE
           5 +DATA/CS/DATAFILE/users.275.970601909                                                      DATAFILE    ASMFILE
           6 +DATA/CS/4700A987085B3DFAE05387E5E50A8C7B/DAT                                              DATAFILE    ASMFILE
           7 +DATA/CS/DATAFILE/undotbs2.284.970602381                                                   DATAFILE    ASMFILE
           8 +DATA/CS/DATAFILE/test.326.976211663                                                       DATAFILE    ASMFILE
           9 +DATA/CS/DATAFILE/jy.331.976296525                                                         DATAFILE    ASMFILE
          10 +DATA/CS/6C61AD7B443C2CD2E053BE828A0A2A74/DAT                                              DATAFILE    ASMFILE
          11 +DATA/CS/6C61AD7B443C2CD2E053BE828A0A2A74/DAT                                              DATAFILE    ASMFILE
          12 +DATA/CS/6C61AD7B443C2CD2E053BE828A0A2A74/DAT                                              DATAFILE    ASMFILE
          13 +DATA/CS/ONLINELOG/group_2.277.970601985                                                   LOGFILE     ASMFILE
          14 +DATA/CS/ONLINELOG/group_1.278.970601985                                                   LOGFILE     ASMFILE
          15 +DATA/CS/ONLINELOG/group_3.285.970602759                                                   LOGFILE     ASMFILE
          16 +DATA/CS/ONLINELOG/group_4.286.970602761                                                   LOGFILE     ASMFILE
          17 +DATA/CS/ONLINELOG/redo05.log                                                              LOGFILE     ASMFILE
          18 +DATA/CS/ONLINELOG/redo06.log                                                              LOGFILE     ASMFILE
          19 +DATA/CS/ONLINELOG/redo07.log                                                              LOGFILE     ASMFILE
          20 +DATA/CS/ONLINELOG/redo08.log                                                              LOGFILE     ASMFILE
          21 +DATA/CS/ONLINELOG/redo09.log                                                              LOGFILE     ASMFILE
          22 +DATA/CS/ONLINELOG/redo10.log                                                              LOGFILE     ASMFILE
          23 +DATA/CS/TEMPFILE/temp.279.970602003                                                       TEMPFILE    ASMFILE
          24 +DATA/CS/67369AA1C9AA3E71E053BE828A0A8262/TEM                                              TEMPFILE    ASMFILE
          25 +DATA/CS/6C61AD7B443C2CD2E053BE828A0A2A74/TEM                                              TEMPFILE    ASMFILE
          26 +DATA/arch/1_222_970601983.dbf                                                             ARCHIVEFILE ASMFILE
          27 +DATA/arch/1_223_970601983.dbf                                                             ARCHIVEFILE ASMFILE
          28 +DATA/arch/2_277_970601983.dbf                                                             ARCHIVEFILE ASMFILE
          29 +DATA/arch/2_278_970601983.dbf                                                             ARCHIVEFILE ASMFILE
          30 +DATA/arch/2_279_970601983.dbf                                                             ARCHIVEFILE ASMFILE
          31 +DATA/CS/CONTROLFILE/current.276.970601979                                                 CONTROLFILE ASMFILE

31 rows selected.

You can control mapping operations with the procedures in the dbms_storage_map PL/SQL package. For example, the dbms_storage_map.map_object procedure builds mapping information for a database object identified by object name, owner, and type. After dbms_storage_map.map_object has run, you can query the map_object view for the mapping information:

SQL> execute dbms_storage_map.map_object('T1','C##TEST','TABLE');

PL/SQL procedure successfully completed.

SQL> select io.object_name o_name, io.object_owner o_owner, io.object_type o_type,
  2  mf.file_name, me.elem_name, io.depth,
  3  (sum(io.cu_size * (io.num_cu - decode(io.parity_period, 0, 0,
  4  trunc(io.num_cu / io.parity_period)))) / 2) o_size
  5  from map_object io, v$map_element me, v$map_file mf
  6  where io.object_name = 'T1'
  7  and io.object_owner = 'C##TEST'
  8  and io.object_type = 'TABLE'
  9  and me.elem_idx = io.elem_idx
 10  and mf.file_map_idx = io.file_map_idx
 11  group by io.elem_idx, io.file_map_idx, me.elem_name, mf.file_name, io.depth,
 12  io.object_name, io.object_owner, io.object_type
 13  order by io.depth;

O_NAME               O_OWNER              O_TYP FILE_NAME                                          ELEM_NAME                 DEPTH     O_SIZE
-------------------- -------------------- ----- -------------------------------------------------- -------------------- ---------- ----------
T1                   C##TEST              TABLE +DATA/CS/DATAFILE/users.275.970601909              +/dev/asmdisk04               0         64

Oracle Linux 7: Using syslog to Manage Oracle ASM Audit Files
http://www.jydba.net/index.php/archives/2528 Wed, 01 Aug 2018 07:16:01 +0000

If the audit file directory of an Oracle ASM instance is not maintained regularly, it will come to hold a large number of audit files. A large number of audit files can exhaust the file system's disk space or inodes, make Oracle run slowly because of file system scalability limits, and can even cause the Oracle ASM instance to hang at startup. This article shows how to use the Linux syslog facility to manage Oracle ASM audit records, so that the operating system's syslog replaces the separate audit_dump_dest directory as the destination for those records. The steps below must be performed on every node of a RAC environment.
1. Set the audit_syslog_level and audit_sys_operations parameters on the Oracle ASM instance

SQL> show parameter audit_sys_

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_sys_operations                 boolean     TRUE
audit_syslog_level                   string

SQL> alter system set AUDIT_SYSLOG_LEVEL='local0.info' scope=spfile sid='*';

System altered.

Because audit_sys_operations is enabled by default, it does not need to be changed.

2. Configure /etc/syslog.conf for Oracle ASM auditing
Make the following two changes to the syslog configuration file, /etc/syslog.conf or /etc/rsyslog.conf:
2.1 Add the following line to /etc/syslog.conf or /etc/rsyslog.conf:

local0.info   /var/log/oracle_asm_audit.log

2.2 Add local0.none to the /var/log/messages line of /etc/syslog.conf or /etc/rsyslog.conf. The modified configuration looks like this:

*.info;mail.none;authpriv.none;cron.none;local0.none   /var/log/messages
[root@cs1 ~]# vi /etc/rsyslog.conf
 
 ....(output omitted)....

# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none;local0.none    /var/log/messages
local0.info                                            /var/log/oracle_asm_audit.log


[root@cs2 ~]# vi /etc/rsyslog.conf
 ....(output omitted)....

# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none;local0.none    /var/log/messages
local0.info                                            /var/log/oracle_asm_audit.log

3. Configure logrotate to manage the syslog log file
The Linux logrotate utility is used to manage the size and number of the syslog log files for Oracle ASM auditing. Create the file /etc/logrotate.d/oracle_asm_audit and add the following content to it:

/var/log/oracle_asm_audit.log {
  weekly
  rotate 4
  compress
  copytruncate
  delaycompress
  notifempty
}
[root@cs1 ~]# cd /etc/logrotate.d/
[root@cs1 logrotate.d]# pwd
/etc/logrotate.d
[root@cs1 logrotate.d]# vi oracle_asm_audit
/var/log/oracle_asm_audit.log {
  weekly
  rotate 4
  compress
  copytruncate
  delaycompress
  notifempty
}

[root@cs2 ~]# cd /etc/logrotate.d/
[root@cs2 logrotate.d]# pwd
/etc/logrotate.d
[root@cs2 logrotate.d]# vi oracle_asm_audit
/var/log/oracle_asm_audit.log {
  weekly
  rotate 4
  compress
  copytruncate
  delaycompress
  notifempty
}

4. Restart the Oracle ASM instance and the rsyslog service
For these changes to take effect, the Oracle ASM instance and the rsyslog service must be restarted. Run crsctl stop cluster -all and crsctl start cluster -all on any one RAC node to restart the Oracle ASM instances; note that this also shuts down the database instances.

[root@cs1 bin]# /u01/app/product/12.2.0/crs/bin/crsctl stop cluster -all
CRS-2673: Attempting to stop 'ora.crsd' on 'cs1'
CRS-2673: Attempting to stop 'ora.crsd' on 'cs2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'cs2'
CRS-2673: Attempting to stop 'ora.chad' on 'cs2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'cs1'
CRS-2673: Attempting to stop 'ora.cs.db' on 'cs2'
CRS-2673: Attempting to stop 'ora.cs.db' on 'cs1'
CRS-2673: Attempting to stop 'ora.qosmserver' on 'cs1'
CRS-2673: Attempting to stop 'ora.gns' on 'cs1'
CRS-2677: Stop of 'ora.gns' on 'cs1' succeeded
CRS-2677: Stop of 'ora.cs.db' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'cs2'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'cs2'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'cs2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'cs2'
CRS-2677: Stop of 'ora.CRS.dg' on 'cs2' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'cs2'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.cs2.vip' on 'cs2'
CRS-2673: Attempting to stop 'ora.chad' on 'cs1'
CRS-2677: Stop of 'ora.chad' on 'cs2' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'cs2'
CRS-2677: Stop of 'ora.cs.db' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'cs1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'cs1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'cs1'
CRS-2673: Attempting to stop 'ora.cvu' on 'cs1'
CRS-2673: Attempting to stop 'ora.gns.vip' on 'cs1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'cs1' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'cs1'
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'cs1'
CRS-2677: Stop of 'ora.asm' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'cs2'
CRS-2677: Stop of 'ora.cs2.vip' on 'cs2' succeeded
CRS-2677: Stop of 'ora.gns.vip' on 'cs1' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'cs2' succeeded
CRS-2677: Stop of 'ora.scan3.vip' on 'cs1' succeeded
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'cs2'
CRS-2677: Stop of 'ora.scan2.vip' on 'cs1' succeeded
CRS-2677: Stop of 'ora.ons' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'cs2'
CRS-2677: Stop of 'ora.net1.network' on 'cs2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'cs2' has completed
CRS-2677: Stop of 'ora.chad' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.mgmtdb' on 'cs1'
CRS-2677: Stop of 'ora.crsd' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'cs2'
CRS-2673: Attempting to stop 'ora.evmd' on 'cs2'
CRS-2673: Attempting to stop 'ora.storage' on 'cs2'
CRS-2677: Stop of 'ora.cvu' on 'cs1' succeeded
CRS-2677: Stop of 'ora.storage' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'cs2'
CRS-2677: Stop of 'ora.ctssd' on 'cs2' succeeded
CRS-2677: Stop of 'ora.mgmtdb' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.MGMTLSNR' on 'cs1'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'cs1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'cs1'
CRS-2677: Stop of 'ora.CRS.dg' on 'cs1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'cs1'
CRS-2677: Stop of 'ora.evmd' on 'cs2' succeeded
CRS-2677: Stop of 'ora.qosmserver' on 'cs1' succeeded
CRS-2677: Stop of 'ora.MGMTLSNR' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.cs1.vip' on 'cs1'
CRS-2677: Stop of 'ora.cs1.vip' on 'cs1' succeeded
CRS-2677: Stop of 'ora.asm' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'cs2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'cs2'
CRS-2677: Stop of 'ora.cssd' on 'cs2' succeeded
CRS-2677: Stop of 'ora.asm' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'cs1'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'cs1'
CRS-2677: Stop of 'ora.ons' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'cs1'
CRS-2677: Stop of 'ora.net1.network' on 'cs1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'cs1' has completed
CRS-2677: Stop of 'ora.crsd' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'cs1'
CRS-2673: Attempting to stop 'ora.evmd' on 'cs1'
CRS-2673: Attempting to stop 'ora.storage' on 'cs1'
CRS-2677: Stop of 'ora.storage' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'cs1'
CRS-2677: Stop of 'ora.evmd' on 'cs1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'cs1' succeeded
CRS-2677: Stop of 'ora.asm' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'cs1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'cs1'
CRS-2677: Stop of 'ora.cssd' on 'cs1' succeeded


[root@cs1 bin]# /u01/app/product/12.2.0/crs/bin/crsctl start cluster -all
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'cs1'
CRS-2672: Attempting to start 'ora.evmd' on 'cs1'
CRS-2672: Attempting to start 'ora.evmd' on 'cs2'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'cs2'
CRS-2676: Start of 'ora.cssdmonitor' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'cs2'
CRS-2672: Attempting to start 'ora.diskmon' on 'cs2'
CRS-2676: Start of 'ora.cssdmonitor' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'cs1'
CRS-2672: Attempting to start 'ora.diskmon' on 'cs1'
CRS-2676: Start of 'ora.diskmon' on 'cs1' succeeded
CRS-2676: Start of 'ora.evmd' on 'cs1' succeeded
CRS-2676: Start of 'ora.diskmon' on 'cs2' succeeded
CRS-2676: Start of 'ora.evmd' on 'cs2' succeeded
CRS-2676: Start of 'ora.cssd' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'cs2'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'cs2'
CRS-2676: Start of 'ora.cssd' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'cs1'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'cs1'
CRS-2676: Start of 'ora.ctssd' on 'cs2' succeeded
CRS-2676: Start of 'ora.ctssd' on 'cs1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'cs1'
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'cs2'
CRS-2676: Start of 'ora.asm' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'cs2'
CRS-2676: Start of 'ora.asm' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'cs1'
CRS-2676: Start of 'ora.storage' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'cs1'
CRS-2676: Start of 'ora.crsd' on 'cs1' succeeded
CRS-2676: Start of 'ora.storage' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'cs2'
CRS-2676: Start of 'ora.crsd' on 'cs2' succeeded

Run the service rsyslog restart command to restart the rsyslog service:

[root@cs1 bin]# service rsyslog restart
Redirecting to /bin/systemctl restart  rsyslog.service
[root@cs1 bin]# service rsyslog status
Redirecting to /bin/systemctl status  rsyslog.service
rsyslog.service - System Logging Service
   Loaded: loaded (/usr/lib/systemd/system/rsyslog.service; enabled)
   Active: active (running) since Wed 2018-08-01 15:13:22 CST; 12s ago
 Main PID: 23011 (rsyslogd)
   CGroup: /system.slice/rsyslog.service
           └─23011 /usr/sbin/rsyslogd -n

Aug 01 15:13:22 cs1.jy.net systemd[1]: Started System Logging Service.

[root@cs2 logrotate.d]#  service rsyslog restart
Redirecting to /bin/systemctl restart  rsyslog.service
[root@cs2 logrotate.d]# service rsyslog status
Redirecting to /bin/systemctl status  rsyslog.service
rsyslog.service - System Logging Service
   Loaded: loaded (/usr/lib/systemd/system/rsyslog.service; enabled)
   Active: active (running) since Wed 2018-08-01 15:13:54 CST; 7s ago
 Main PID: 9809 (rsyslogd)
   CGroup: /system.slice/rsyslog.service
           └─9809 /usr/sbin/rsyslogd -n

Aug 01 15:13:54 cs2.jy.net systemd[1]: Started System Logging Service.

5. Verify that Oracle ASM audit records are written to /var/log/oracle_asm_audit.log

[root@cs1 bin]# tail -f /var/log/oracle_asm_audit.log
Aug  1 15:13:46 cs1 journal: Oracle Audit[23601]: LENGTH : '317' ACTION :[80] 'begin dbms_diskgroup.close(:handle); exception when others then   raise;   end;
Aug  1 15:13:48 cs1 journal: Oracle Audit[23610]: LENGTH : '244' ACTION :[7] 'CONNECT' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSDBA' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[0] '' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs1.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[3] '100'
Aug  1 15:13:50 cs1 journal: Oracle Audit[23654]: LENGTH : '244' ACTION :[7] 'CONNECT' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSDBA' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[0] '' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs1.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[3] '100'
Aug  1 15:13:50 cs1 journal: Oracle Audit[23654]: LENGTH : '494' ACTION :[257] 'select name_kfgrp, number_kfgrp, incarn_kfgrp, compat_kfgrp, dbcompat_kfgrp, state_kfgrp, flags32_kfgrp, type_kfgrp, refcnt_kfgrp, sector_kfgrp, blksize_kfgrp, ausize_kfgrp , totmb_kfgrp, freemb_kfgrp, coldmb_kfgrp, hotmb_kfgrp, minspc_kfgrp, usable_kfgrp, ' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSDBA' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[0] '' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs1.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[1] '3'
Aug  1 15:13:50 cs1 journal: Oracle Audit[23654]: LENGTH : '308' ACTION :[071] 'offline_kfgrp, lflags_kfgrp  , logical_sector_kfgrp  from x$kfgrp_stat
Aug  1 15:13:55 cs1 journal: Oracle Audit[23681]: LENGTH : '244' ACTION :[7] 'CONNECT' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSDBA' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[0] '' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs1.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[3] '100'
Aug  1 15:13:56 cs1 journal: Oracle Audit[23681]: LENGTH : '370' ACTION :[132] 'begin dbms_diskgroup.openpwfile(:NAME,:lblksize,:fsz,:handle,:pblksz,:fmode,:genfname);  exception when others then   raise;   end;
Aug  1 15:13:56 cs1 journal: Oracle Audit[23681]: LENGTH : '355' ACTION :[117] 'begin dbms_diskgroup.read(:handle,:offset,:length,:buffer,:reason,:mirr); exception when others then   raise;   end;
Aug  1 15:13:56 cs1 journal: Oracle Audit[23681]: LENGTH : '355' ACTION :[117] 'begin dbms_diskgroup.read(:handle,:offset,:length,:buffer,:reason,:mirr); exception when others then   raise;   end;
Aug  1 15:13:56 cs1 journal: Oracle Audit[23681]: LENGTH : '317' ACTION :[80] 'begin dbms_diskgroup.close(:handle); exception when others then   raise;   end;


[root@cs2 logrotate.d]# tail -f /var/log/oracle_asm_audit.log
Aug  1 15:14:46 cs2 journal: Oracle Audit[9928]: LENGTH : '299' ACTION :[51] 'BEGIN DBMS_SESSION.USE_DEFAULT_EDITION_ALWAYS; END;' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSRAC' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[10] '1386528187' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs2.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[2] '47'
Aug  1 15:14:46 cs2 journal: Oracle Audit[9928]: LENGTH : '287' ACTION :[39] 'ALTER SESSION SET "_notify_crs" = false' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSRAC' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[10] '1386528187' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs2.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[2] '42'
Aug  1 15:14:46 cs2 journal: Oracle Audit[9926]: LENGTH : '287' ACTION :[39] 'ALTER SESSION SET "_notify_crs" = false' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSRAC' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[10] '1386528187' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs2.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[2] '42'
Aug  1 15:14:47 cs2 journal: Oracle Audit[9928]: LENGTH : '292' ACTION :[45] 'SELECT value FROM v$parameter WHERE name = :1' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSRAC' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[10] '1386528187' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs2.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[1] '3'
Aug  1 15:14:47 cs2 journal: Oracle Audit[9928]: LENGTH : '292' ACTION :[45] 'SELECT value FROM v$parameter WHERE name = :1' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSRAC' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[10] '1386528187' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs2.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[1] '3'
Aug  1 15:14:47 cs2 journal: Oracle Audit[9928]: LENGTH : '292' ACTION :[45] 'SELECT value FROM v$parameter WHERE name = :1' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSRAC' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[10] '1386528187' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs2.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[1] '3'
Aug  1 15:14:47 cs2 journal: Oracle Audit[9928]: LENGTH : '292' ACTION :[45] 'SELECT value FROM v$parameter WHERE name = :1' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSRAC' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[10] '1386528187' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs2.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[1] '3'
Aug  1 15:15:01 cs2 journal: Oracle Audit[9944]: LENGTH : '244' ACTION :[7] 'CONNECT' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSDBA' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[0] '' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs2.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[3] '100'

可以看到Oracle ASM审计记录已经被记录到了/var/log/oracle_asm_audit.log文件中。
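审计记录写入/var/log/oracle_asm_audit.log之后,该文件本身也会持续增长。下面是一个示意性的logrotate配置片段(假设放在/etc/logrotate.d/oracle_asm_audit,轮转周期与保留份数仅为示例,请按实际需要调整),可用来控制该日志文件的大小:

```
/var/log/oracle_asm_audit.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    postrotate
        /bin/systemctl kill -s HUP rsyslog.service >/dev/null 2>&1 || true
    endscript
}
```

postrotate中向rsyslog发送HUP信号,使其在日志被轮转后重新打开文件句柄。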

]]>
http://www.jydba.net/index.php/archives/2528/feed 0
Oracle Linux 7使用cron来管理Oracle ASM审计文件目录的增长 http://www.jydba.net/index.php/archives/2526 http://www.jydba.net/index.php/archives/2526#respond Wed, 01 Aug 2018 07:12:38 +0000 http://www.jydba.net/?p=2526 使用cron来管理Oracle ASM审计文件目录的增长
如果不对Oracle ASM实例的审计文件目录进行定期维护,那么该目录中将会积累大量的审计文件。大量审计文件可能会耗尽文件系统的磁盘空间或inodes,或者由于文件系统扩展限制而造成Oracle运行缓慢,还可能造成Oracle ASM实例在启动时hang住。这里将介绍如何使用Linux的cron工具来管理Oracle ASM审计文件目录中的文件数量。

下面将介绍具体的操作,而且这些操作必须对于RAC环境中的每个节点执行。
1.识别Oracle ASM审计目录
这里有三个目录可能存在Oracle ASM的审计文件,三个目录都要加以控制,不要让其过度增长。其中两个缺省目录是基于Oracle ASM实例启动时环境变量的设置。为了判断系统上的缺省目录,以安装Grid Infrastructure软件的用户(grid)登录系统,设置环境变量以便能连接到Oracle ASM实例,然后运行echo命令。

[grid@cs1 ~]$ . /usr/local/bin/oraenv
ORACLE_SID = [+ASM1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/grid

[grid@cs1 ~]$ echo $ORACLE_HOME/rdbms/audit
/u01/app/product/12.2.0/crs/rdbms/audit

[grid@cs1 ~]$ echo $ORACLE_BASE/admin/$ORACLE_SID/adump
/u01/app/grid/admin/+ASM1/adump


[grid@cs2 ~]$ . /usr/local/bin/oraenv
ORACLE_SID = [+ASM2] ? 
The Oracle base remains unchanged with value /u01/app/grid

[grid@cs2 ~]$ echo $ORACLE_HOME/rdbms/audit
/u01/app/product/12.2.0/crs/rdbms/audit

[grid@cs2 ~]$ echo $ORACLE_BASE/admin/$ORACLE_SID/adump
/u01/app/grid/admin/+ASM2/adump

第三个Oracle ASM审计目录可以使用SQL*Plus登录Oracle ASM实例后进行查询

[grid@cs1 ~]$ sqlplus / as sysasm

SQL*Plus: Release 12.2.0.1.0 Production on Wed Aug 1 14:13:47 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select value from v$parameter where name = 'audit_file_dest';

VALUE
--------------------------------------------------------------------------------
/u01/app/product/12.2.0/crs/rdbms/audit

这里第三个目录与第一个目录是相同的。
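在配置定期清理之前,可以先统计每个审计目录中的文件数量,以评估增长情况。下面是一个示意脚本(目录路径取自上面的输出,请按实际环境调整):

```shell
#!/bin/sh
# 统计指定目录下(仅目录本身,不递归子目录)的.aud审计文件数量
count_aud() {
    find "$1" -maxdepth 1 -name '*.aud' 2>/dev/null | wc -l
}

for dir in /u01/app/product/12.2.0/crs/rdbms/audit \
           /u01/app/grid/admin/+ASM1/adump; do
    printf '%s: %s\n' "$dir" "$(count_aud "$dir")"
done
```

-maxdepth 1与后面crontab条目中的find用法一致,只匹配目录本身的文件。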

2.给Grid Infrastructure软件用户使用cron的权限
Oracle ASM的审计文件是由Grid Infrastructure软件用户(通常为oracle或grid)所创建的,移动或删除审计文件的命令必须由该用户来执行。在Oracle Linux中,如果/etc/cron.allow文件存在,只有登录名出现在该文件中的用户才可以使用crontab命令,root用户的登录名也必须出现在cron.allow文件中;如果/etc/cron.deny文件存在,并且用户的登录名列在其中,那么这些用户将不能执行crontab命令;如果只有/etc/cron.deny文件存在,那么任何名称没有出现在这个文件中的用户都可以使用crontab命令。在Oracle Linux 7.1中只有/etc/cron.deny文件,而且该文件中没有列出任何用户,也就是说所有用户都能执行crontab命令。

[root@cs1 etc]# cat cron.deny

[root@cs1 etc]# ls -lrt crontab
-rw-r--r--. 1 root root 451 Apr 29  2014 crontab

[root@cs1 etc]# chmod 777 crontab
[root@cs1 etc]# ls -lrt crontab
-rwxrwxrwx. 1 root root 451 Apr 29  2014 crontab

3.添加命令到crontab来管理审计文件
以Grid Infrastructure软件用户来向crontab文件增加命令

[grid@cs1 ~]$ crontab -e

0 6 * * sun /usr/bin/find /u01/app/product/12.2.0/crs/rdbms/audit /u01/app/grid/admin/+ASM1/adump /u01/app/product/12.2.0/crs/rdbms/audit -maxdepth 1 -name '*.aud' -mtime +30 -delete

这个crontab条目在每个星期日的上午6点执行find命令,find命令将从三个审计目录中找出保存时间超过30天的所有审计文件并将其删除。如果想要将审计文件保存更长的时间,那么可以让find命令将相关审计文件移动到备份目录中,例如:


0 6 * * sun /usr/bin/find /u01/app/product/12.2.0/crs/rdbms/audit /u01/app/grid/admin/+ASM1/adump /u01/app/product/12.2.0/crs/rdbms/audit -maxdepth 1 -name '*.aud' -mtime +30 -execdir /bin/mv {} /archived_audit_dir \;
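在把-delete或-execdir写入crontab之前,可以先用-print做一次演练,确认匹配到的文件正是想要清理的文件。下面是一个示意脚本(AUDIT_DIR为占位目录,请替换为实际的审计目录):

```shell
#!/bin/sh
# 演练:列出保存时间超过30天的.aud文件,但不做任何删除或移动操作
AUDIT_DIR=${AUDIT_DIR:-/tmp/asm_audit_demo}
find "$AUDIT_DIR" -maxdepth 1 -name '*.aud' -mtime +30 -print 2>/dev/null
```

确认输出无误后,再将-print换回-delete或-execdir正式生效。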

检查crontab

[grid@cs1 ~]$ crontab -l

0 6 * * sun /usr/bin/find /u01/app/product/12.2.0/crs/rdbms/audit /u01/app/grid/admin/+ASM1/adump /u01/app/product/12.2.0/crs/rdbms/audit -maxdepth 1 -name '*.aud' -mtime +30 -delete
]]>
http://www.jydba.net/index.php/archives/2526/feed 0
Oracle 12CR2 Oracle Restart – ASM Startup fails with PRCR-1079 http://www.jydba.net/index.php/archives/2239 http://www.jydba.net/index.php/archives/2239#respond Tue, 02 May 2017 08:23:31 +0000 http://www.jydba.net/?p=2239 操作系统为Oracle Linux 7.1,数据库版本为12.2.0.1。Oracle Restart在主机重启之后不能正常启动ASM实例。

[grid@jytest3 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  OFFLINE      jytest3                  STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       jytest3                  STABLE
ora.asm
               ONLINE  OFFLINE      jytest3                  STABLE
ora.ons
               OFFLINE OFFLINE      jytest3                  STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       jytest3                  STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.driver.afd
      1        ONLINE  ONLINE       jytest3                  STABLE
ora.evmd
      1        ONLINE  ONLINE       jytest3                  STABLE
ora.jy.db
      1        ONLINE  OFFLINE                               Instance Shutdown,ST
                                                             ABLE
--------------------------------------------------------------------------------

尝试手动启动ASM实例,提示无效的用户与密码

[grid@jytest3 ~]$ srvctl start asm -startoption MOUNT -f
PRKO-2002 : Invalid command line option: -f
[grid@jytest3 ~]$ srvctl start asm -startoption MOUNT 
PRCR-1079 : Failed to start resource ora.asm
CRS-5017: The resource action "ora.asm start" encountered the following error: 
ORA-01017: invalid username/password; logon denied
. For details refer to "(:CLSN00107:)" in "/u01/app/grid/diag/crs/jytest3/crs/trace/ohasd_oraagent_grid.trc".

CRS-2674: Start of 'ora.asm' on 'jytest3' failed
ORA-01017: invalid username/password; logon denied

如果尝试使用SQL*Plus来启动ASM实例,仍然提示无效的用户与密码

[grid@jytest3 ~]$ sqlplus / as sysasm

SQL*Plus: Release 12.2.0.1.0 Production on Tue May 2 20:30:15 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

ERROR:
ORA-01017: invalid username/password; logon denied

检查错误跟踪文件内容如下:

2017-05-02 21:47:39.181 :    AGFW:380081920: {0:0:169} Agent received the message: RESOURCE_START[ora.asm jytest3 1] ID 4098:1530
2017-05-02 21:47:39.181 :    AGFW:380081920: {0:0:169} Preparing START command for: ora.asm jytest3 1
2017-05-02 21:47:39.181 :    AGFW:380081920: {0:0:169} ora.asm jytest3 1 state changed from: OFFLINE to: STARTING
2017-05-02 21:47:39.181 :    AGFW:380081920: {0:0:169} RECYCLE_AGENT attribute not found
2017-05-02 21:47:39.182 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] (:CLSN00107:) clsn_agent::start {
2017-05-02 21:47:39.182 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::start 000 { 
2017-05-02 21:47:39.182 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::start stopConnection 020
2017-05-02 21:47:39.182 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::stopConnection
2017-05-02 21:47:39.182 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::broadcastEvent 000 entry { OHSid:/u01/app/grid/product/12.2.0/crs+ASM 

s_ohSidEventMapLock:0x139e930 action:2
2017-05-02 21:47:39.182 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::removeConnection connection count 0
2017-05-02 21:47:39.182 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::removeConnection freed 0
2017-05-02 21:47:39.182 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::stopConnection sid +ASM status  1
2017-05-02 21:47:39.182 : USRTHRD:388486912: {0:0:169} ConnectionPool::~ConnectionPool  destructor this:f4061e90 m_oracleHome:/u01/app/grid/product/12.2.0/crs, 

m_oracleSid:+ASM,  m_usrOraEnv:  m_pResState:0x7f10f4049000
2017-05-02 21:47:39.182 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] AsmAgent::refresh
2017-05-02 21:47:39.182 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] AsmAgent::refresh ORACLE_HOME = /u01/app/grid/product/12.2.0/crs
2017-05-02 21:47:39.182 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] clsnUtils::cmdIdIsStart CmdId:257
2017-05-02 21:47:39.182 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] clsnUtils::cmdIdIsStart CmdId:257
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] AsmAgent:getOracleSidAttrib 2 getResAttrib USR_ORA_INST_NAME oracleSid:+ASM
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] AsmAgent:getOracleSidAttrib oracleSid:+ASM
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] AsmAgent::refresh ORACLE_SID = +ASM
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::ConnectionPool 2 constructor this:406d700 

m_oracleHome:/u01/app/grid/product/12.2.0/crs, m_oracleSid:+ASM, m_usrOraEnv: m_instanceType:2 m_instanceVersion:12.2.0.1.0
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::ConnectionPool 2 constructor m_pResState:0x7f1104033540
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] AsmAgent::setOracleSidAttrib updating GEN_USR_ORA_INST_NAME to +ASM
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] clsnUtils::setResAttrib nonPerX current value GEN_USR_ORA_INST_NAME value +ASM
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] Utils::setResAttrib clsagfw_modify_attribute attr GEN_USR_ORA_INST_NAME value +ASM retCode 0
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] sModifyConfig entry resversion:12.2.0.1.0 compId:+ASM comment:StartOption[1] {
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] sInitFile entry pathname:/etc finename:oratab
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] sclsnInstAgent::sInitFile pathname:/etc 

backupPath:/u01/app/grid/product/12.2.0/crs/srvm/admin/ filename:oratab pConfigF:0x7f11040752d8
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::ConfigFile constructor name:oratab
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] CssLock::lock s_siha_mtex, got lock CLSN.oratab.jytest3
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::parse parseLine name: value: comment:
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::parse mmap name:+asm nameWithCase:+ASM value:/u01/app/grid/product/12.2.0/crs:N 

comment:
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::parse mmap name:jy nameWithCase:jy value:/u01/app/oracle/product/12.2.0/db:N 

comment:
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] CssLock::unlock s_siha_mtex, released lock CLSN.oratab.jytest3
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::setAltName this:0x7f11040d80e0 altName:+ASM
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::setAltValue altValue:/u01/app/grid/product/12.2.0/crs:N
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] sclsnInstAgent::sInitFile resType:ora.asm.type setAltName(compId):+ASM setAltValue

(oracleHome):/u01/app/grid/product/12.2.0/crs:N
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] clsnUtils::cmdIdIsStart CmdId:257
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] clsnUtils::cmdIdIsStart CmdId:257
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] AsmAgent:getOracleSidAttrib 2 getResAttrib USR_ORA_INST_NAME oracleSid:+ASM
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] AsmAgent:getOracleSidAttrib oracleSid:+ASM
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::getAltName this:0x7f11040d80e0
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile:getAltName altName:+asm
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] sclsnInstAgent::sInitFile dbname:+ASM altName:+asm
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::getComment name:+asm comment:
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::getAltName this:0x7f11040d80e0
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile:getAltName altName:+asm
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::getAltName this:0x7f11040d80e0
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile:getAltName altName:+asm
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::getComment name:+asm comment:
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] sclsnInstAgent::sInitFile exit dbname:+ASM startup comment: startup altName:+asm comment:
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::start pConfigF:40d80e0
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] CssLock::lock s_siha_mtex, got lock CLSN.oratab.jytest3
2017-05-02 21:47:39.185 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::parse parseLine name: value: comment:
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::parse mmap name:+asm nameWithCase:+ASM value:/u01/app/grid/product/12.2.0/crs:N 

comment:
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::parse mmap name:jy nameWithCase:jy value:/u01/app/oracle/product/12.2.0/db:N 

comment:
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] sclsnInstAgent::sCleanEntry - delete alt(sid) entry
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] clsnUtils::cmdIdIsStart CmdId:257
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] clsnUtils::cmdIdIsStart CmdId:257
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] AsmAgent:getOracleSidAttrib 2 getResAttrib USR_ORA_INST_NAME oracleSid:+ASM
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] AsmAgent:getOracleSidAttrib oracleSid:+ASM
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] sclsnInstAgent::sCleanEntry - key = +ASM
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::parse parseLine name: value: comment:
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::parse mmap name:+asm nameWithCase:+ASM value:/u01/app/grid/product/12.2.0/crs:N 

comment:
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::parse mmap name:jy nameWithCase:jy value:/u01/app/oracle/product/12.2.0/db:N 

comment:
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::updateInPlace file /etc/oratab is not modified
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] CssLock::unlock s_siha_mtex, released lock CLSN.oratab.jytest3
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] sclsnInstAgent::sCleanEntry - key[+ASM] is cleaned
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] sModifyConfig exit for compId:+ASM }
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] Utils::getResAttrib entry attribName:USR_ORA_OPI required:0 loglevel:1
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] Utils::getResAttrib: attribname USR_ORA_OPI value false len 5
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] Utils::getResAttrib attribname:USR_ORA_OPI value:false exit
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] Agent::valueOfAttribIs attrib: REASON compare value: user attribute value: user
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] Agent::valueOfAttribIs returns 1
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::start 100 getGenRestart
2017-05-02 21:47:39.187 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::getGenRestart exit GEN_RESTART:StartOption[1] }
2017-05-02 21:47:39.187 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::start 120 comment:StartOption[1]
2017-05-02 21:47:39.187 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] Agent::valueOfAttribIs attrib: REASON compare value: failure attribute value: user
2017-05-02 21:47:39.187 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] Agent::valueOfAttribIs returns 0
2017-05-02 21:47:39.187 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::checkState 030 new gimh oracleSid:+ASM
2017-05-02 21:47:39.187 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] Gimh::constructor ohome:/u01/app/grid/product/12.2.0/crs sid:+ASM
2017-05-02 21:47:39.187 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::resetConnection  s_statusOfConnectionMap:0x139ea20
2017-05-02 21:47:39.187 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::resetConnection sid +ASM status  2
2017-05-02 21:47:39.187 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] Gimh::check condition changes to (GIMH_NEXT_NUM) 0(Abnormal Termination) exists
2017-05-02 21:47:39.187 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] CLS_DRUID_REF(CLSN00006) AsmAgent::gimhChecks 100 failed gimh state 0
2017-05-02 21:47:39.188 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] (:CLSN00006:)InstAgent::checkState 110 return unplanned offline
2017-05-02 21:47:39.188 : USRTHRD:388486912: {0:0:169} Gimh::destructor gimh_dest_query_ctx rc=0
2017-05-02 21:47:39.188 : USRTHRD:388486912: {0:0:169} Gimh::destructor gimh_dest_inst_ctx rc=0
2017-05-02 21:47:39.188 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::stopConnection
2017-05-02 21:47:39.188 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::broadcastEvent 000 entry { OHSid:/u01/app/grid/product/12.2.0/crs+ASM 

s_ohSidEventMapLock:0x139e930 action:2
2017-05-02 21:47:39.188 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::removeConnection connection count 0
2017-05-02 21:47:39.188 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::removeConnection freed 0
2017-05-02 21:47:39.188 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::stopConnection sid +ASM status  1
2017-05-02 21:47:39.188 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::checkState 200 prev clsagfw_res_status 3 current clsagfw_res_status 1
2017-05-02 21:47:39.188 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::start 180 startOption:StartOption[1]
2017-05-02 21:47:39.189 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] clsnUtils::setResAttrib nonPerX current value GEN_RESTART value StartOption[1]
2017-05-02 21:47:39.189 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] Utils::setResAttrib clsagfw_modify_attribute attr GEN_RESTART value StartOption[1] retCode 0
2017-05-02 21:47:39.189 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::setGenRestart updating GEN_RESTART to StartOption[1] retcode:0 ohasd resource:1
2017-05-02 21:47:39.189 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::start 200 StartInstance with startoption:1 pfile:null 
2017-05-02 21:47:39.189 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::start 400 startInstance
2017-05-02 21:47:39.189 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::startInstance 000 { startOption:1
2017-05-02 21:47:39.189 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstConnection:InstConnection: init:040d5ef0 oracleHome:/u01/app/grid/product/12.2.0/crs 

oracleSid:+ASM instanceType:2 instanceVersion:12.2.0.1.0 
2017-05-02 21:47:39.189 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] clsnInstConnection::makeConnectStr UsrOraEnv  m_oracleHome /u01/app/grid/product/12.2.0/crs 

Crshome /u01/app/grid/product/12.2.0/crs
2017-05-02 21:47:39.189 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] clsnInstConnection::makeConnectStr = (DESCRIPTION=(ADDRESS=(PROTOCOL=beq)

(PROGRAM=/u01/app/grid/product/12.2.0/crs/bin/oracle)(ARGV0=oracle+ASM)(ENVS='ORACLE_HOME=/u01/app/grid/product/12.2.0/crs,ORACLE_SID=+ASM')(ARGS='(DESCRIPTION=

(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))')(CONNECT_DATA=(SID=+ASM))))
2017-05-02 21:47:39.191 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] Container:start oracle home /u01/app/grid/product/12.2.0/crs
2017-05-02 21:47:39.191 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::startInstance 020 connect logmode:8008
2017-05-02 21:47:39.191 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstConnection::connectInt 020 server not attached
2017-05-02 21:47:39.694 :  CLSDMC:357324544: command 0 failed with status 1
2017-05-02 21:47:39.695 :CLSDYNAM:357324544: [ora.evmd]{0:0:2} [check] DaemonAgent::check returned 0
2017-05-02 21:47:39.695 :CLSDYNAM:357324544: [ora.evmd]{0:0:2} [check] Deep check returned 1
2017-05-02 21:47:40.210 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ORA-01017: invalid username/password; logon denied

关于ASM启动报无效用户名与密码的问题在MOS上有一篇相关文档”ASM not Starting With: ORA-01017: invalid username/password; logon denied (Doc ID 1918617.1)”,说原因是因为修改了sqlnet.ora文件,正确设置应该如下:

SQLNET.AUTHENTICATION_SERVICES= (NTS)

NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)

但我这里并不是这种情况,从另一篇文档”Oracle Restart – ASM Startup fails with PRCR-1079 (Doc ID 1904942.1)”找到类似的错误信息,说原因是因为ASM资源的ACL(访问控制列表)的权限发生了改变。

检查asm资源的访问控制列表权限

[grid@jytest3 dbs]$  crsctl stat res ora.asm -p
NAME=ora.asm
TYPE=ora.asm.type
ACL=owner:grid:rwx,pgrp:asmdba:r-x,other::r--
ACTIONS=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
ASM_DISKSTRING=AFD:*
AUTO_START=restore
CHECK_INTERVAL=1
CHECK_TIMEOUT=30
CLEAN_TIMEOUT=60
CSS_CRITICAL=no
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle ASM resource
ENABLED=1
GEN_RESTART=StartOption[1]
GEN_USR_ORA_INST_NAME=+ASM
IGNORE_TARGET_ON_FAILURE=no
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NLS_LANG=
OFFLINE_CHECK_INTERVAL=0
OS_CRASH_THRESHOLD=0
OS_CRASH_UPTIME=0
PRESENCE=standard
PWFILE=+DATA/orapwasm
PWFILE_BACKUP=
REGISTERED_TYPE=srvctl
RESOURCE_GROUP=
RESTART_ATTEMPTS=5
RESTART_DELAY=0
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=
SPFILE=+DATA/ASM/ASMPARAMETERFILE/registry.253.938201547
START_CONCURRENCY=0
START_DEPENDENCIES=hard(ora.cssd) weak(ora.LISTENER.lsnr)
START_DEPENDENCIES_RTE_INTERNAL=
START_TIMEOUT=900
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(ora.cssd)
STOP_DEPENDENCIES_RTE_INTERNAL=
STOP_TIMEOUT=600
TARGET_DEFAULT=default
TYPE_VERSION=1.2
UPTIME_THRESHOLD=1d
USER_WORKLOAD=no
USR_ORA_ENV=
USR_ORA_INST_NAME=+ASM
USR_ORA_OPEN_MODE=mount  
USR_ORA_OPI=false
USR_ORA_STOP_MODE=immediate
WORKLOAD_CPU=0
WORKLOAD_CPU_CAP=0
WORKLOAD_MEMORY_MAX=0
WORKLOAD_MEMORY_TARGET=0

从上面的信息ACL=owner:grid:rwx,pgrp:asmdba:r-x,other::r--可知,组名(pgrp)变成了asmdba组,而正常情况下应该是grid用户的oinstall组。尝试修改:

[grid@jytest3 dbs]$ crsctl setperm resource ora.asm -g oinstall
CRS-4995:  The command 'Setperm  resource' is invalid in crsctl. Use srvctl for this command.

修改时出错了,在12.2.0.1中crsctl setperm被提示为无效的crsctl命令,建议使用srvctl命令,但srvctl并没有setperm命令,于是加上-unsupported参数再次执行修改:

[grid@jytest3 dbs]$ crsctl setperm resource ora.asm -g oinstall -unsupported
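修改之后可以再次检查ACL是否已经变回oinstall组。下面是一个从ACL属性行中提取pgrp组名的示意脚本(实际环境中输入应来自crsctl stat res ora.asm -p | grep '^ACL=',这里用一个样例字符串代替):

```shell
#!/bin/sh
# 从资源的ACL属性行中提取pgrp(属组)名称
acl='ACL=owner:grid:rwx,pgrp:oinstall:r-x,other::r--'   # 样例输入
pgrp=$(printf '%s\n' "$acl" | sed -n 's/.*pgrp:\([^:]*\):.*/\1/p')
echo "$pgrp"   # 输出: oinstall
```

若输出为oinstall,说明setperm修改已经生效。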

重启Oracle Restart,ASM实例正常启动

[grid@jytest3 lib]$ crsctl stop has
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'jytest3'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'jytest3'
CRS-2673: Attempting to stop 'ora.evmd' on 'jytest3'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'jytest3' succeeded
CRS-2677: Stop of 'ora.evmd' on 'jytest3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'jytest3'
CRS-2677: Stop of 'ora.cssd' on 'jytest3' succeeded
CRS-2673: Attempting to stop 'ora.driver.afd' on 'jytest3'
CRS-2677: Stop of 'ora.driver.afd' on 'jytest3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'jytest3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[grid@jytest3 ~]$ crsctl start has
CRS-4123: Oracle High Availability Services has been started.
[grid@jytest3 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       jytest3                  STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       jytest3                  STABLE
ora.asm
               ONLINE  ONLINE       jytest3                  Started,STABLE
ora.ons
               OFFLINE OFFLINE      jytest3                  STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       jytest3                  STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.driver.afd
      1        ONLINE  ONLINE       jytest3                  STABLE
ora.evmd
      1        ONLINE  ONLINE       jytest3                  STABLE
ora.jy.db
      1        ONLINE  ONLINE       jytest3                  Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             /db,STABLE
--------------------------------------------------------------------------------
]]>
http://www.jydba.net/index.php/archives/2239/feed 0
Oracle ASM Renaming Disks Groups http://www.jydba.net/index.php/archives/2053 http://www.jydba.net/index.php/archives/2053#respond Thu, 09 Feb 2017 02:57:35 +0000 http://www.jydba.net/?p=2053 renamedg工具可以用来改变一个磁盘组的名称。在对磁盘组执行renamedg之前,该磁盘组必须在所有节点上都处于dismount(卸载)状态。renamedg工具重命名磁盘组由两个阶段组成:
1.步骤一会生成一个配置文件供步骤二使用
2.步骤二会使用步骤一生成的配置文件来重命名磁盘组

renamedg的语法如下:

[grid@jyrac1 ~]$ renamedg -help
NOTE: No asm libraries found in the system

Parsing parameters..
phase                           Phase to execute, 
                                (phase=ONE|TWO|BOTH), default BOTH

dgname                          Diskgroup to be renamed

newdgname                       New name for the diskgroup

config                          intermediate config file

check                           just check-do not perform actual operation,
                                (check=TRUE/FALSE), default FALSE

confirm                         confirm before committing changes to disks,
                                (confirm=TRUE/FALSE), default FALSE

clean                           ignore errors,
                                (clean=TRUE/FALSE), default TRUE

asm_diskstring                  ASM Diskstring (asm_diskstring='discoverystring',
                                'discoverystring1' ...)

verbose                         verbose execution, 
                                (verbose=TRUE|FALSE), default FALSE

keep_voting_files               Voting file attribute, 
                                (keep_voting_files=TRUE|FALSE), default FALSE


phase=(ONE|TWO|BOTH):指定要执行的阶段,取值为one、two或both。这个参数是一个可选参数,缺省值为both。如果在第二阶段出现问题,那么可以使用第一阶段生成的配置文件来重新执行two(第二)阶段。

dgname=diskgroup:指定要被重新命名的磁盘组

newdgname=newdiskgroup:指定新磁盘组名

config=configfile:指定在第一阶段所生成的配置文件路径或在第二阶段所使用的配置文件路径。这个参数是一个可选参数。缺省的配置文件名为renamedg_config,并且它存储在执行renamedg命令的目录中。在有些平台上配置文件路径可能需要使用单引号。

asm_diskstring=discoverystring,discoverystring...: specifies the Oracle ASM discovery string. The asm_diskstring parameter must be specified if the Oracle ASM disks are not in the operating system's default location. On some platforms the value may need to be enclosed in single quotation marks; this is usually required when wildcard characters are specified.

clean=(true|false): specifies whether to tolerate errors that would otherwise be ignored. The default is true.

check=(true|false): specifies a boolean value used in the second phase. If true, the renamedg tool prints the list of changes that would be made to the disks; no write operations are performed. This parameter is optional and defaults to false.

confirm=(true|false): specifies a boolean value used in the second phase. If false, the renamedg tool prints the changes to be performed and seeks confirmation before actually making them. This parameter is optional and defaults to false. If the check parameter is set to true, this value is redundant.

verbose=(true|false): specifies verbose execution when verbose=true. The default is false.

keep_voting_files=(true|false): specifies whether voting files are kept in the renamed disk group. The default is false, which deletes the voting files from the renamed disk group.

The renamedg tool does not update cluster resources, nor does it update any file paths used by the database. Because of this, the original disk group resource is not automatically deleted after phase two completes. The status of the old disk group resource can be checked with the Oracle Clusterware Control (crsctl) command crsctl stat res -t, and it can then be deleted with the Server Control Utility (srvctl) command srvctl remove diskgroup.
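The overall workflow can be sketched as a shell dry run that only prints each step instead of executing it. The disk group names and the discovery string below are example values taken from this article; adapt them to your environment:

```shell
# Dry-run sketch of the rename workflow described above.
# OLD/NEW and the discovery string are example values, not universal defaults.
OLD=test
NEW=new_test
steps=$(cat <<EOF
-- on every node: alter diskgroup $OLD dismount;
renamedg phase=both dgname=$OLD newdgname=$NEW asm_diskstring='/dev/raw/raw*' verbose=true
-- on every node: alter diskgroup $NEW mount;
srvctl remove diskgroup -g $OLD
EOF
)
echo "$steps"
```

Printing the plan first makes it easy to confirm the dismount happens on all nodes before renamedg touches any disk header.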

The following two examples demonstrate how to use the renamedg tool.
1. Create a test disk group, then rename it to new_test using renamedg with the asm_diskstring and verbose parameters.

SQL> create diskgroup test normal redundancy disk '/dev/raw/raw12','/dev/raw/raw13';

Diskgroup created.

SQL> select group_number,name from v$asm_diskgroup;

GROUP_NUMBER NAME
------------ ------------------------------
           1 ACFS
           2 ARCHDG
           3 CRSDG
           4 DATADG
           5 TEST

SQL> select group_number,disk_number,name,path from v$asm_disk;

GROUP_NUMBER DISK_NUMBER NAME                           PATH
------------ ----------- ------------------------------ ------------------------------
           0           3                                /dev/raw/raw14
           1           2 ACFS_0002                      /dev/raw/raw7
           5           1 TEST_0001                      /dev/raw/raw13
           5           0 TEST_0000                      /dev/raw/raw12
           4           0 DATADG_0001                    /dev/raw/raw11
           1           0 ACFS_0000                      /dev/raw/raw5
           4           3 DATADG_0000                    /dev/raw/raw10
           2           1 ARCHDG_0001                    /dev/raw/raw9
           3           1 CRSDG_0001                     /dev/raw/raw8
           1           1 ACFS_0001                      /dev/raw/raw6
           2           0 ARCHDG_0000                    /dev/raw/raw2
           4           1 DATADG_0003                    /dev/raw/raw4
           4           2 DATADG_0002                    /dev/raw/raw3
           3           0 CRSDG_0000                     /dev/raw/raw1

14 rows selected.

Dismount the test disk group on both nodes

SQL> alter diskgroup test dismount;

Diskgroup altered.

SQL> alter diskgroup test dismount;

Diskgroup altered.

Rename the test disk group to new_test

[grid@jyrac1 ~]$ renamedg  phase=both dgname=test newdgname=new_test asm_diskstring='/dev/raw/raw*' verbose=true
NOTE: No asm libraries found in the system

Parsing parameters..

Parameters in effect:

         Old DG name       : TEST 
         New DG name          : NEW_TEST 
         Phases               :
                 Phase 1
                 Phase 2
         Discovery str        : (null) 
         Clean              : TRUE
         Raw only           : TRUE
renamedg operation: phase=both dgname=test newdgname=new_test asm_diskstring='/dev/raw/raw*' verbose=true
Executing phase 1
Discovering the group
Performing discovery with string:
Identified disk UFS:/dev/raw/raw12 with disk number:0 and timestamp (33048873 -682272768)
Identified disk UFS:/dev/raw/raw13 with disk number:1 and timestamp (33048873 -682272768)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:
Identified disk UFS:/dev/raw/raw12 with disk number:0 and timestamp (33048873 -682272768)
Identified disk UFS:/dev/raw/raw13 with disk number:1 and timestamp (33048873 -682272768)
Checking if the diskgroup is mounted or used by CSS 
Checking disk number:0
Checking disk number:1
Generating configuration file..
Completed phase 1
Executing phase 2
Looking for /dev/raw/raw12
Modifying the header
Looking for /dev/raw/raw13
Modifying the header
Completed phase 2
Terminating kgfd context 0x2b0307e120a0

Check the generated configuration file

[grid@jyrac1 ~]$ ls -lrt
total 58
-rw-r--r-- 1 grid oinstall       58 Feb  9 10:04 renamedg_config
[grid@jyrac1 ~]$ cat renamedg_config
/dev/raw/raw12 TEST NEW_TEST
/dev/raw/raw13 TEST NEW_TEST

Mount the renamed disk group new_test on both nodes

SQL> alter diskgroup new_test mount;

Diskgroup altered.


SQL> alter diskgroup new_test mount;

Diskgroup altered.


ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  1048576     15360     8763             5120            1821              0             N  ACFS/
MOUNTED  NORMAL  N         512   4096  1048576     10240     2152                0            1076              0             N  ARCHDG/
MOUNTED  EXTERN  N         512   4096  1048576     10240     9842                0            9842              0             Y  CRSDG/
MOUNTED  NORMAL  N         512   4096  1048576     20480    12419             5120            3649              0             N  DATADG/
MOUNTED  NORMAL  N         512   4096  1048576     10240    10054                0            5027              0             N  NEW_TEST/

Check the resource status. The original disk group test still exists but is offline; it must be removed with the srvctl remove diskgroup command

[grid@jyrac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFS.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.ARCHDG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.CRSDG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.DATADG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.NEW_TEST.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.TEST.dg
               OFFLINE OFFLINE      jyrac1                                       
               OFFLINE OFFLINE      jyrac2                                       
ora.asm
               ONLINE  ONLINE       jyrac1                   Started             
               ONLINE  ONLINE       jyrac2                   Started             
ora.gsd
               OFFLINE OFFLINE      jyrac1                                       
               OFFLINE OFFLINE      jyrac2                                       
ora.net1.network
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.ons
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.registry.acfs
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       jyrac1                                       
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       jyrac2                                       
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       jyrac2                                       
ora.cvu
      1        ONLINE  ONLINE       jyrac2                                       
ora.jyrac.db
      1        ONLINE  ONLINE       jyrac1                   Open                
      2        ONLINE  ONLINE       jyrac2                   Open                
ora.jyrac1.vip
      1        ONLINE  ONLINE       jyrac1                                       
ora.jyrac2.vip
      1        ONLINE  ONLINE       jyrac2                                       
ora.oc4j
      1        ONLINE  ONLINE       jyrac2                                       
ora.scan1.vip
      1        ONLINE  ONLINE       jyrac1                                       
ora.scan2.vip
      1        ONLINE  ONLINE       jyrac2                                       
ora.scan3.vip
      1        ONLINE  ONLINE       jyrac2                       

Remove the ora.TEST.dg resource (the test disk group) with the srvctl remove diskgroup command

[grid@jyrac1 ~]$ srvctl remove diskgroup -g test

Check the resource status again; the original disk group test is gone.

[grid@jyrac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFS.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.ARCHDG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.CRSDG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.DATADG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.NEW_TEST.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.asm
               ONLINE  ONLINE       jyrac1                   Started             
               ONLINE  ONLINE       jyrac2                   Started             
ora.gsd
               OFFLINE OFFLINE      jyrac1                                       
               OFFLINE OFFLINE      jyrac2                                       
ora.net1.network
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.ons
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.registry.acfs
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       jyrac1                                       
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       jyrac2                                       
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       jyrac2                                       
ora.cvu
      1        ONLINE  ONLINE       jyrac2                                       
ora.jyrac.db
      1        ONLINE  ONLINE       jyrac1                   Open                
      2        ONLINE  ONLINE       jyrac2                   Open                
ora.jyrac1.vip
      1        ONLINE  ONLINE       jyrac1                                       
ora.jyrac2.vip
      1        ONLINE  ONLINE       jyrac2                                       
ora.oc4j
      1        ONLINE  ONLINE       jyrac2                                       
ora.scan1.vip
      1        ONLINE  ONLINE       jyrac1                                       
ora.scan2.vip
      1        ONLINE  ONLINE       jyrac2                                       
ora.scan3.vip
      1        ONLINE  ONLINE       jyrac2           

2. The following example renames the new_test disk group back to test in two separate phases, using renamedg with the asm_diskstring and verbose parameters.

Generate only the configuration file needed by the phase-two operation

[grid@jyrac1 ~]$ renamedg phase=one dgname=new_test newdgname=test asm_diskstring='/dev/raw/raw*' config=/home/grid/new_test.conf verbose=true
NOTE: No asm libraries found in the system

Parsing parameters..

Parameters in effect:

         Old DG name       : NEW_TEST 
         New DG name          : TEST 
         Phases               :
                 Phase 1
         Discovery str        : /dev/raw/raw* 
         Clean              : TRUE
         Raw only           : TRUE
renamedg operation: phase=one dgname=new_test newdgname=test asm_diskstring=/dev/raw/raw* config=/home/grid/new_test.conf verbose=true
Executing phase 1
Discovering the group
Performing discovery with string:/dev/raw/raw*
Identified disk UFS:/dev/raw/raw12 with disk number:0 and timestamp (33048873 -682272768)
Identified disk UFS:/dev/raw/raw13 with disk number:1 and timestamp (33048873 -682272768)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:/dev/raw/raw*
Identified disk UFS:/dev/raw/raw12 with disk number:0 and timestamp (33048873 -682272768)
Identified disk UFS:/dev/raw/raw13 with disk number:1 and timestamp (33048873 -682272768)
Checking if the diskgroup is mounted or used by CSS 
Checking disk number:0
KFNDG-00405: file not found; arguments: [NEW_TEST]

The KFNDG-00405 error occurred because the new_test disk group was not dismounted on all nodes before running renamedg

Dismount the new_test disk group on both nodes

SQL> alter diskgroup new_test dismount;

Diskgroup altered.


SQL> alter diskgroup new_test dismount;

Diskgroup altered.

Terminating kgfd context 0x2b34496f80a0
[grid@jyrac1 ~]$ renamedg phase=one dgname=new_test newdgname=test asm_diskstring='/dev/raw/raw*' config=/home/grid/new_test.conf verbose=true
NOTE: No asm libraries found in the system

Parsing parameters..

Parameters in effect:

         Old DG name       : NEW_TEST 
         New DG name          : TEST 
         Phases               :
                 Phase 1
         Discovery str        : /dev/raw/raw* 
         Clean              : TRUE
         Raw only           : TRUE
renamedg operation: phase=one dgname=new_test newdgname=test asm_diskstring=/dev/raw/raw* config=/home/grid/new_test.conf verbose=true
Executing phase 1
Discovering the group
Performing discovery with string:/dev/raw/raw*
Identified disk UFS:/dev/raw/raw12 with disk number:0 and timestamp (33048873 -682272768)
Identified disk UFS:/dev/raw/raw13 with disk number:1 and timestamp (33048873 -682272768)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:/dev/raw/raw*
Identified disk UFS:/dev/raw/raw12 with disk number:0 and timestamp (33048873 -682272768)
Identified disk UFS:/dev/raw/raw13 with disk number:1 and timestamp (33048873 -682272768)
Checking if the diskgroup is mounted or used by CSS 
Checking disk number:0
Checking disk number:1
Generating configuration file..
Completed phase 1
Terminating kgfd context 0x2b1c202a80a0

Run the second phase of the disk group rename

[grid@jyrac1 ~]$ renamedg phase=two dgname=new_test newdgname=test config=/home/grid/new_test.conf verbose=true
NOTE: No asm libraries found in the system

Parsing parameters..

Parameters in effect:

         Old DG name       : NEW_TEST 
         New DG name          : TEST 
         Phases               :
                 Phase 2
         Discovery str        : (null) 
         Clean              : TRUE
         Raw only           : TRUE
renamedg operation: phase=two dgname=new_test newdgname=test config=/home/grid/new_test.conf verbose=true
Executing phase 2
Looking for /dev/raw/raw12
Modifying the header
Looking for /dev/raw/raw13
Modifying the header
Completed phase 2
Terminating kgfd context 0x2b8da14950a0

Verify that the new_test disk group was successfully renamed to test

SQL> select group_number,name from v$asm_diskgroup;

GROUP_NUMBER NAME
------------ ------------------------------
           1 ACFS
           2 ARCHDG
           3 CRSDG
           4 DATADG
           0 TEST

Mount the test disk group on both nodes

SQL> alter diskgroup test mount;

Diskgroup altered.


SQL> alter diskgroup test mount;

Diskgroup altered.

ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  1048576     15360     8763             5120            1821              0             N  ACFS/
MOUNTED  NORMAL  N         512   4096  1048576     10240     1958                0             979              0             N  ARCHDG/
MOUNTED  EXTERN  N         512   4096  1048576     10240     9842                0            9842              0             Y  CRSDG/
MOUNTED  NORMAL  N         512   4096  1048576     20480    12419             5120            3649              0             N  DATADG/
MOUNTED  NORMAL  N         512   4096  1048576     10240    10054                0            5027              0             N  TEST/

Check the resource status. The original disk group new_test still exists but is offline, while the renamed disk group test is online

[grid@jyrac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFS.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.ARCHDG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.CRSDG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.DATADG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.NEW_TEST.dg
               OFFLINE OFFLINE      jyrac1                                       
               OFFLINE OFFLINE      jyrac2                                       
ora.TEST.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.asm
               ONLINE  ONLINE       jyrac1                   Started             
               ONLINE  ONLINE       jyrac2                   Started             
ora.gsd
               OFFLINE OFFLINE      jyrac1                                       
               OFFLINE OFFLINE      jyrac2                                       
ora.net1.network
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.ons
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.registry.acfs
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       jyrac1                                       
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       jyrac2                                       
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       jyrac2                                       
ora.cvu
      1        ONLINE  ONLINE       jyrac2                                       
ora.jyrac.db
      1        ONLINE  ONLINE       jyrac1                   Open                
      2        ONLINE  ONLINE       jyrac2                   Open                
ora.jyrac1.vip
      1        ONLINE  ONLINE       jyrac1                                       
ora.jyrac2.vip
      1        ONLINE  ONLINE       jyrac2                                       
ora.oc4j
      1        ONLINE  ONLINE       jyrac2                                       
ora.scan1.vip
      1        ONLINE  ONLINE       jyrac1                                       
ora.scan2.vip
      1        ONLINE  ONLINE       jyrac2                                       
ora.scan3.vip
      1        ONLINE  ONLINE       jyrac2                  

Remove the original new_test disk group resource

[grid@jyrac1 ~]$ srvctl remove diskgroup -g new_test

Check the resource status again; the original disk group new_test is gone

[grid@jyrac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFS.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.ARCHDG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.CRSDG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.DATADG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.TEST.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.asm
               ONLINE  ONLINE       jyrac1                   Started             
               ONLINE  ONLINE       jyrac2                   Started             
ora.gsd
               OFFLINE OFFLINE      jyrac1                                       
               OFFLINE OFFLINE      jyrac2                                       
ora.net1.network
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.ons
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.registry.acfs
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       jyrac1                                       
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       jyrac2                                       
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       jyrac2                                       
ora.cvu
      1        ONLINE  ONLINE       jyrac2                                       
ora.jyrac.db
      1        ONLINE  ONLINE       jyrac1                   Open                
      2        ONLINE  ONLINE       jyrac2                   Open                
ora.jyrac1.vip
      1        ONLINE  ONLINE       jyrac1                                       
ora.jyrac2.vip
      1        ONLINE  ONLINE       jyrac2                                       
ora.oc4j
      1        ONLINE  ONLINE       jyrac2                                       
ora.scan1.vip
      1        ONLINE  ONLINE       jyrac1                                       
ora.scan2.vip
      1        ONLINE  ONLINE       jyrac2                                       
ora.scan3.vip
      1        ONLINE  ONLINE       jyrac2                        
]]>
http://www.jydba.net/index.php/archives/2053/feed 0
Oracle Find block in ASM http://www.jydba.net/index.php/archives/2051 http://www.jydba.net/index.php/archives/2051#respond Fri, 03 Feb 2017 00:31:32 +0000 http://www.jydba.net/?p=2051 To make it easier to locate and extract Oracle data file blocks from ASM, you can create a Perl script, find_block.pl, that automates the process given just a file name and a block number. find_block.pl is a Perl script that generates dd or kfed commands. It can be used with all ASM versions on Linux and Unix, in both single-instance ASM and RAC environments.

This script must be run as the ASM/Grid Infrastructure user. In a RAC environment it can be run on any node. Before running the script, set up the ASM environment and make sure ORACLE_SID, ORACLE_HOME, LD_LIBRARY_PATH, and so on are set correctly. For ASM 10g and 11gR1, also set the PERL5LIB environment variable, for example:

export PERL5LIB=$ORACLE_HOME/perl/lib/5.8.3:$ORACLE_HOME/perl/lib/site_perl

An example of running the script:

$ORACLE_HOME/perl/bin/perl find_block.pl filename block

Here filename is the name of the file containing the block to extract. For a data file, the name can be obtained by running select name from v$datafile in the database instance. block is the number of the block to extract from ASM.

The output looks similar to:

dd if=[ASM disk path] ... of=block_N.dd

or, on Exadata:

kfed read dev=[ASM disk path] ... > block_N.txt

If the disk group uses external redundancy, the script generates a single command. For a file in a normal redundancy disk group it generates two commands, and for a file in a high redundancy disk group it generates three.
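That mapping from redundancy type to command count can be sketched as a small shell helper (an illustration only, not part of find_block.pl itself):

```shell
# One extraction command is emitted per mirrored copy of the block.
copies_for() {
  case "$1" in
    EXTERN*) echo 1 ;;   # external redundancy: single copy
    NORMAL)  echo 2 ;;   # normal redundancy: two mirrored copies
    HIGH)    echo 3 ;;   # high redundancy: three mirrored copies
  esac
}
copies_for NORMAL   # prints 2
```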

The following example uses the find_block.pl script to extract a data block from ASM in an Oracle 10g RAC environment

SQL> select name from v$tablespace;

NAME
------------------------------
SYSTEM
UNDOTBS1
SYSAUX
USERS
TEMP
EXAMPLE
UNDOTBS2
YB
TEST

9 rows selected.

SQL> create table t1 (name varchar2(16)) tablespace TEST;

Table created.

SQL> insert into t1 values ('CAT');

1 row created.

SQL> insert into t1 values ('DOG');

1 row created.

SQL> commit;

Commit complete.

SQL> select rowid,name from t1;

ROWID              NAME
------------------ ----------------
AAAN8qAAIAAAAAUAAA CAT
AAAN8qAAIAAAAAUAAB DOG

SQL> select dbms_rowid.rowid_block_number('AAAN8qAAIAAAAAUAAA') "block" from dual;

     block
----------
        20
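The block number returned by dbms_rowid can also be recovered by hand: an extended ROWID is 18 base-64 characters, encoding the data object number (6 characters), relative file number (3), block number (6), and row number (3). A sketch that decodes the block portion of the ROWID above:

```shell
# Decode the block number from an extended ROWID.
# Layout: OOOOOO FFF BBBBBB RRR, each character a base-64 digit
# drawn from A-Z, a-z, 0-9, +, /.
rowid=AAAN8qAAIAAAAAUAAA
alphabet=ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/
block=0
for c in $(echo "$rowid" | cut -c10-15 | sed 's/./& /g'); do
  rest=${alphabet%%$c*}          # prefix before $c gives its base-64 value
  block=$(( block * 64 + ${#rest} ))
done
echo $block   # 20
```

This matches the value returned by dbms_rowid.rowid_block_number in the query above.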

SQL> select t.name "Tablespace", f.name "Datafile" from v$tablespace t, v$datafile f where t.ts#=f.ts# and t.name='TEST';

Tablespace                     Datafile
------------------------------ --------------------------------------------------
TEST                           +DATADG/test/datafile/test.269.930512093

Switch to the ASM environment, set PERL5LIB, and run the script

[oracle@jyrac3 bin]$ export ORACLE_SID=+ASM1
[oracle@jyrac3 bin]$ export PERL5LIB=$ORACLE_HOME/perl/lib/5.8.3:$ORACLE_HOME/perl/lib/site_perl

[oracle@jyrac3 bin]$ $ORACLE_HOME/perl/bin/perl find_block.pl +DATADG/test/datafile/test.269.930512093 20
dd if=/dev/raw/raw3 bs=8192 count=1 skip=266260 of=block_20.dd
dd if=/dev/raw/raw4 bs=8192 count=1 skip=266260 of=block_20.dd

The output above shows that the file is in a normal redundancy disk group, so the script generated two dd commands. Run one of them:

[root@jyrac3 ~]# dd if=/dev/raw/raw3 bs=8192 count=1 skip=266260 of=block_20.dd
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.00608323 seconds, 1.3 MB/s

Now inspect the contents of the block_20.dd file with the od utility; the rows inserted into the table are visible:

[root@jyrac3 ~]# od -c block_20.dd | tail -3
0017740   S   O   R   T   =   '   B   I   N   A   R   Y   '  \b   , 001
0017760 001 003   D   O   G   , 001 001 003   C   A   T 001 006  \r 203
0020000

DOG and CAT are both visible.

Example with ASM version 12.1.0.1 in Exadata
On Exadata, the dd command cannot be used to extract data blocks because the disks are not visible to the database server. To get to the database blocks, use the kfed tool instead; find_block.pl generates kfed commands accordingly.


$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on [date]

SQL> alter pluggable database BR_PDB open;

Pluggable database altered.

SQL> show pdbs

CON_ID CON_NAME OPEN MODE   RESTRICTED
------ -------- ----------- ----------
       2 PDB$SEED READ ONLY   NO
...
       5 BR_PDB   READ WRITE  NO

SQL>

$ sqlplus bane/welcome1@BR_PDB

SQL*Plus: Release 12.1.0.1.0 Production on [date]

SQL> create table TAB1 (n number, name varchar2(16)) tablespace USERS;

Table created.

SQL> insert into TAB1 values (1, 'CAT');

1 row created.

SQL> insert into TAB1 values (2, 'DOG');

1 row created.

SQL> commit;

Commit complete.

SQL> select t.name "Tablespace", f.name "Datafile"
from v$tablespace t, v$datafile f
where t.ts#=f.ts# and t.name='USERS';

Tablespace Datafile
---------- ---------------------------------------------
USERS      +DATA/CDB/054.../DATAFILE/users.588.860861901

SQL> select ROWID, NAME from TAB1;

ROWID              NAME
------------------ ----
AAAWYEABfAAAACDAAA CAT
AAAWYEABfAAAACDAAB DOG

SQL> select DBMS_ROWID.ROWID_BLOCK_NUMBER('AAAWYEABfAAAACDAAA') "Block number" from dual;

Block number
------------
       131

SQL>

Switch to the ASM environment and run the script

$ $ORACLE_HOME/perl/bin/perl find_block.pl +DATA/CDB/0548068A10AB14DEE053E273BB0A46D1/DATAFILE/users.588.860861901 131
kfed read dev=o/192.168.1.9/DATA_CD_03_exacelmel05 ausz=4194304 aunum=16212 blksz=8192 blknum=131 | grep -iv ^kf > block_131.txt
kfed read dev=o/192.168.1.11/DATA_CD_09_exacelmel07 ausz=4194304 aunum=16267 blksz=8192 blknum=131 | grep -iv ^kf > block_131.txt

Note that find_block.pl generated two commands because the data file is in a normal redundancy disk group. Run one of them:

$ kfed read dev=o/192.168.1.9/DATA_CD_03_exacelmel05 ausz=4194304 aunum=16212 blksz=8192 blknum=131 | grep -iv ^kf > block_131.txt
$

Inspect the contents of the block_131 file; DOG and CAT are visible

$ more block_131.txt
...
FD5106080 00000000 00000000 ...  [................]
      Repeat 501 times
FD5107FE0 00000000 00000000 ...  [........,......D]
FD5107FF0 012C474F 02C10202 ...  [OG,......CAT..,-]
$

Find any block
The find_block.pl script can extract blocks from any file stored in ASM. Run the following command to extract an arbitrary block from a control file

$ $ORACLE_HOME/perl/bin/perl find_block.pl +DATA/CDB/CONTROLFILE/current.289.843047837 5
kfed read dev=o/192.168.1.9/DATA_CD_10_exacelmel05 ausz=4194304 aunum=73 blksz=16384 blknum=5 | grep -iv ^kf > block_5.txt
kfed read dev=o/192.168.1.11/DATA_CD_01_exacelmel07 ausz=4194304 aunum=66 blksz=16384 blknum=5 | grep -iv ^kf > block_5.txt
kfed read dev=o/192.168.1.10/DATA_CD_04_exacelmel06 ausz=4194304 aunum=78 blksz=16384 blknum=5 | grep -iv ^kf > block_5.txt
$

The script reports the correct control file block size (16K) and generates three different commands. Although the DATA disk group is normal redundancy, the control file is high redundancy (control files default to high redundancy in ASM).

Summary:
find_block.pl is a Perl script that generates the dd or kfed commands needed to extract blocks from files stored in ASM. In most cases we want to extract blocks from data files, but the script can also extract blocks from control files, redo logs, or any other file.

If the file is in an external redundancy disk group, the script generates a single command that extracts the block from one ASM disk.

If the file is in a normal redundancy disk group, the script generates two commands, which extract identical copies of the block from two different ASM disks.

If the file is in a high redundancy disk group, the script generates three commands.

]]>
http://www.jydba.net/index.php/archives/2051/feed 0
Oracle ASM REQUIRED_MIRROR_FREE_MB http://www.jydba.net/index.php/archives/2049 http://www.jydba.net/index.php/archives/2049#respond Fri, 03 Feb 2017 00:28:03 +0000 http://www.jydba.net/?p=2049 REQUIRED_MIRROR_FREE_MB and USABLE_FILE_MB are two very interesting columns in the V$ASM_DISKGROUP[_STAT] views. There are many questions about these two columns and how they are calculated.

How much space can I use
ASM will not stop you from using all of the available space in an external redundancy disk group, half of the total space in a normal redundancy disk group, or a third of the space in a high redundancy disk group. But if you fill a disk group close to capacity, it may not have enough room to grow or add any file, and in the event of a disk failure there will be no space to restore redundancy for some of the data until the failed disk is replaced and the rebalance completes.
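Those redundancy ratios can be sketched with simple arithmetic; the total size below is an arbitrary example figure, not taken from any disk group in this article:

```shell
# Raw capacity available to files per redundancy level, before any
# REQUIRED_MIRROR_FREE_MB reservation is taken into account.
total_mb=30720                     # example disk group size (assumption)
external_mb=$total_mb              # external redundancy: all of it
normal_mb=$(( total_mb / 2 ))      # normal redundancy: half
high_mb=$(( total_mb / 3 ))        # high redundancy: a third
echo "$external_mb $normal_mb $high_mb"   # 30720 15360 10240
```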

11gr2 ASM in Exadata
In Exadata with ASM 11gR2, required_mirror_free_mb is reported as the size of the largest failure group in the disk group. The following example from an 11.2.0.4 ASM instance on Exadata illustrates this.

[grid@exadb01 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on [date]

SQL> select NAME, GROUP_NUMBER from v$asm_diskgroup_stat;

NAME      GROUP_NUMBER
--------- ------------
DATA                 1
DBFS_DG              2
RECO                 3

SQL>

Let's look at disk group DBFS_DG. Normally every failure group in DBFS_DG has 10 disks; a few disks were dropped here to demonstrate that REQUIRED_MIRROR_FREE_MB reports the size of the largest failure group.

SQL> select FAILGROUP, count(NAME) "Disks", sum(TOTAL_MB) "MB"
from v$asm_disk_stat
where GROUP_NUMBER=2
group by FAILGROUP
order by 3;

FAILGROUP       Disks         MB
---------- ---------- ----------
EXACELL04           7     180096
EXACELL01           8     205824
EXACELL02           9     231552
EXACELL03          10     257280

SQL>

The total space in the largest failure group is 257280 MB.

Now compare that with the REQUIRED_MIRROR_FREE_MB reported for the disk group:

SQL> select NAME, TOTAL_MB, FREE_MB, REQUIRED_MIRROR_FREE_MB, USABLE_FILE_MB
from v$asm_diskgroup_stat
where GROUP_NUMBER=2;

NAME         TOTAL_MB    FREE_MB REQUIRED_MIRROR_FREE_MB USABLE_FILE_MB
---------- ---------- ---------- ----------------------- --------------
DBFS_DG        874752     801420                  257280         272070

ASM calculates USABLE_FILE_MB with the following formula:

USABLE_FILE_MB=(FREE_MB-REQUIRED_MIRROR_FREE_MB)/2
              =(801420-257280)/2=544140/2=272070 
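The arithmetic is easy to check (values taken from the query output above; the helper function is hypothetical):

```python
def usable_file_mb(free_mb, required_mirror_free_mb):
    # For a normal redundancy disk group:
    # USABLE_FILE_MB = (FREE_MB - REQUIRED_MIRROR_FREE_MB) / 2
    return (free_mb - required_mirror_free_mb) // 2

# DBFS_DG figures from the 11.2.0.4 example above
print(usable_file_mb(801420, 257280))  # -> 272070
```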

Exadata with ASM version 12cR1
On Exadata with ASM 12cR1, REQUIRED_MIRROR_FREE_MB is reported as the size of the largest disk in the disk group.

[grid@exadb03 ~]$ sqlplus / as sysasm

SQL*Plus: Release 12.1.0.2.0 Production on [date]

SQL> select NAME, GROUP_NUMBER from v$asm_diskgroup_stat;

NAME     GROUP_NUMBER
-------- ------------
DATA                1
DBFS_DG             2
RECO                3

SQL> select FAILGROUP, count(NAME) "Disks", sum(TOTAL_MB) "MB"
from v$asm_disk_stat
where GROUP_NUMBER=2
group by FAILGROUP
order by 3;

FAILGROUP       Disks         MB
---------- ---------- ----------
EXACELL05           8     238592
EXACELL07           9     268416
EXACELL06          10     298240

The total space in the largest failure group is 298240 MB, but this time REQUIRED_MIRROR_FREE_MB shows 29824 MB:

SQL> select NAME, TOTAL_MB, FREE_MB, REQUIRED_MIRROR_FREE_MB, USABLE_FILE_MB
from v$asm_diskgroup_stat
where GROUP_NUMBER=2;

NAME         TOTAL_MB    FREE_MB REQUIRED_MIRROR_FREE_MB USABLE_FILE_MB
---------- ---------- ---------- ----------------------- --------------
DBFS_DG        805248     781764                   29824         375970

Now check the size of the largest disk in the disk group:

SQL> select max(TOTAL_MB) from v$asm_disk_stat where GROUP_NUMBER=2;

MAX(TOTAL_MB)
-------------
        29824

ASM calculates USABLE_FILE_MB as follows:

USABLE_FILE_MB = (FREE_MB - REQUIRED_MIRROR_FREE_MB) / 2
               =(781764-29824)/2=375970
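Putting the 12cR1 behaviour together (a sketch using the figures above; DBFS_DG here has 27 disks of 29824 MB each, and the helper names are hypothetical):

```python
def required_mirror_free_mb_12c(disk_sizes_mb):
    # On Exadata with ASM 12cR1 this is the size of the largest disk,
    # not the size of the largest failure group.
    return max(disk_sizes_mb)

def usable_file_mb(free_mb, required_mb):
    # Same normal redundancy formula as in 11.2
    return (free_mb - required_mb) // 2

# DBFS_DG has 8 + 9 + 10 = 27 disks, each 29824 MB (total 805248 MB)
disks = [29824] * 27
req = required_mirror_free_mb_12c(disks)
print(req, usable_file_mb(781764, req))  # -> 29824 375970
```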

小结:
REQUIRED_MIRROR_FREE_MB and USABLE_FILE_MB help DBAs and storage administrators plan disk group capacity and redundancy. They are purely informational; ASM does not enforce them. On Exadata with ASM 12cR1, REQUIRED_MIRROR_FREE_MB is simply the size of the largest disk in the disk group. This design reflects practical experience: the typical failure is a single disk, not an entire storage cell.

]]>
http://www.jydba.net/index.php/archives/2049/feed 0
Oracle ASM Disk Group Attributes http://www.jydba.net/index.php/archives/2047 http://www.jydba.net/index.php/archives/2047#respond Fri, 03 Feb 2017 00:21:01 +0000 http://www.jydba.net/?p=2047 Disk group attributes were introduced in ASM 11.1. They belong to individual disk groups, not to the ASM instance. Some attributes can be set only at disk group creation time, some only after the disk group has been created, and some can be set at any time.

ACCESS_CONTROL.ENABLED
This attribute determines whether ASM File Access Control is enabled for a disk group. It can be set to TRUE or FALSE (the default). If set to TRUE, access to ASM files is subject to access control; if FALSE, any user can access the files in the disk group. All other operations are independent of this attribute. It can only be set when altering a disk group.

ACCESS_CONTROL.UMASK
This attribute determines which permissions are masked out when an ASM file is created, for the owner, the group, and others. It applies to all files in the disk group. The value is a combination of three digits {0|2|6} {0|2|6} {0|2|6}; the default is 066. A '0' masks nothing, a '2' masks write permission, and a '6' masks both read and write permissions. ACCESS_CONTROL.ENABLED must be set to TRUE before ACCESS_CONTROL.UMASK can be set.
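The digit semantics can be sketched in a few lines (an illustration only; the function and names are hypothetical, not part of any ASM API):

```python
# What each umask digit removes from the full read+write permission set
MASK = {"0": set(), "2": {"write"}, "6": {"read", "write"}}

def effective_perms(umask="066"):
    """Permissions left for (owner, group, others) after applying the umask."""
    full = {"read", "write"}
    return [sorted(full - MASK[digit]) for digit in umask]

# Default 066: owner keeps read+write, group and others lose both
print(effective_perms("066"))  # -> [['read', 'write'], [], []]
```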

AU_SIZE
This attribute controls the allocation unit (AU) size and can be set only at disk group creation. Each disk group can have a different AU size.

CELL.SMART_SCAN_CAPABLE[Exadata]
This attribute is used on Exadata when creating a disk group on grid disks in the storage cells. It enables smart scan on objects stored in the disk group.

COMPATIBLE.ASM
The COMPATIBLE.ASM attribute of a disk group determines the minimum software version an ASM instance must run to use the disk group. The setting also affects the format of the ASM metadata structures. The default value of COMPATIBLE.ASM is 10.1 when the disk group is created with the CREATE DISKGROUP statement, the ASMCMD mkdg command, or the EM Create Disk Group page. When the disk group is created with ASMCA, the default is 11.2 in ASM 11gR2 and 12.1 in ASM 12c.

COMPATIBLE.RDBMS
This attribute determines the minimum value of the COMPATIBLE parameter for any database instance that uses the disk group. Before advancing COMPATIBLE.RDBMS, make sure the COMPATIBLE parameter of every database that accesses the disk group is already at or above the new COMPATIBLE.RDBMS value.

COMPATIBLE.ADVM
This attribute determines whether ASM volumes can be created in the disk group. It must be set to 11.2 or higher. Before it can be set, COMPATIBLE.ASM must be 11.2 or higher and the ADVM volume driver must be loaded in a supported environment. By default, COMPATIBLE.ADVM is empty.

CONTENT.CHECK[12c]
This attribute enables or disables content checking when a rebalance is performed on the disk group. It can be set to TRUE or FALSE. Content checking includes Hardware Assisted Resilient Data (HARD) checks on user data, validation of file types from the file directory, and comparison of file directory information against the mirror copies. When set to TRUE, content checking is performed for all rebalance operations; it can also serve as a disk scrubbing facility.

CONTENT.TYPE[11.2.0.3,Exadata]
This attribute identifies the disk group type, which can be DATA, RECOVERY, or SYSTEM, and determines the distance to the nearest disk/failure group partners. The default is DATA, which specifies a distance of 1; RECOVERY specifies a distance of 3 and SYSTEM a distance of 5. A distance of 1 means ASM considers all adjacent disks as candidate partners; a distance of 3 means every third disk is considered a partner, and a distance of 5 every fifth disk.

The attribute can be set when creating or altering a disk group. If CONTENT.TYPE is set or changed with ALTER DISKGROUP, the new configuration does not take effect until a disk group rebalance is explicitly run.

CONTENT.TYPE is only valid for normal and high redundancy disk groups. COMPATIBLE.ASM must be set to 11.2.0.3 or higher to enable it.
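The distance values above can be captured in a small lookup; the sketch below is a loose illustration of the "every Nth disk" idea, not ASM's actual partner selection algorithm (the function name is hypothetical):

```python
# Partnering distance implied by each CONTENT.TYPE value
PARTNER_DISTANCE = {"DATA": 1, "RECOVERY": 3, "SYSTEM": 5}

def partner_candidates(disk_count, content_type="DATA"):
    """Disks considered for partnership with disk 0: every Nth disk,
    where N is the distance implied by CONTENT.TYPE."""
    step = PARTNER_DISTANCE[content_type]
    return list(range(step, disk_count, step))

print(partner_candidates(12, "DATA"))      # every adjacent disk
print(partner_candidates(12, "RECOVERY"))  # every 3rd disk
print(partner_candidates(12, "SYSTEM"))    # every 5th disk
```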

DISK_REPAIR_TIME
The DISK_REPAIR_TIME attribute determines how long ASM keeps a disk offline before dropping it. It relates to the fast mirror resync feature and requires COMPATIBLE.ASM to be set to 11.1 or higher. It can be set only when altering a disk group.

FAILGROUP_REPAIR_TIME[12c]
This attribute specifies the default repair time for failure groups in the disk group. The failure group repair time is used when ASM determines that an entire failure group has failed. The default is 24 hours. If a repair time is specified for individual disks, for example with an ALTER DISKGROUP OFFLINE DISK ... DROP AFTER statement, that disk repair time overrides the failure group repair time.

This attribute can only be set when altering a normal or high redundancy disk group.

IDP.BOUNDARY and IDP.TYPE[Exadata]
These two attributes are used to configure Exadata storage and relate to the Intelligent Data Placement feature.

PHYS_META_REPLICATED[12c]
This attribute tracks the replication status of a disk group. When a disk group's COMPATIBLE.ASM is advanced to 12.1 or higher, the physical metadata of each disk is replicated. This metadata includes the disk header, the free space table blocks, and the allocation table blocks. Replication is performed online and asynchronously. The value is set to true once the physical metadata of every disk in the disk group has been replicated.

This attribute is only defined for a disk group when COMPATIBLE.ASM is set to 12.1 or higher. It is read-only and cannot be set or modified by users; its value is either true or false.

SECTOR_SIZE
This attribute specifies the sector size of the disks in the disk group and can be set only at disk group creation. The value of SECTOR_SIZE can be 512, 4096, or 4K (provided the disks support those values). The default is platform dependent. COMPATIBLE.ASM and COMPATIBLE.RDBMS must be set to 11.2 or higher to use a non-default sector size. ACFS does not support 4 KB sector drives.

STORAGE.TYPE
This attribute specifies the type of disks in the disk group. Valid values are exadata, pillar, zfssa, and other. If set to exadata, pillar, or zfssa, all disks in the disk group must be of that type. If set to other, the disks in the disk group can be of any type.

If the STORAGE.TYPE disk group attribute is set to pillar or zfssa, Hybrid Columnar Compression (HCC) can be enabled for objects stored in the disk group. Exadata supports HCC natively.

Note: ZFS storage must be provisioned through Direct NFS (dNFS), and Pillar Axiom storage must be provisioned through SCSI or Fibre Channel interfaces.

To set STORAGE.TYPE, the COMPATIBLE.ASM and COMPATIBLE.RDBMS disk group attributes must be set to 11.2.0.3 or higher. For maximum support with ZFS storage, set COMPATIBLE.ASM and COMPATIBLE.RDBMS to 11.2.0.4 or higher.

STORAGE.TYPE can be set when creating or altering a disk group. It cannot be set while clients are connected to the disk group; for example, it cannot be set while the disk group contains an ADVM volume.

The attribute is not visible in the V$ASM_ATTRIBUTE view or in ASMCMD lsattr output until it has been set.

THIN_PROVISIONED[12c]
This attribute enables or disables discarding unused storage space after a disk group rebalance completes. It can be set to true or false (the default).

Storage vendor products that support thin provisioning can then reclaim and reuse the discarded storage space more efficiently.

APPLIANCE.MODE[11.2.0.4,Exadata]
The APPLIANCE.MODE attribute improves disk rebalance completion time when dropping one or more ASM disks, which means redundancy is restored faster after a disk failure. The attribute is enabled automatically when creating a new disk group on Exadata; existing disk groups must set it explicitly with the ALTER DISKGROUP command. This feature is known as fixed partnering.

The attribute can be enabled on a disk group when all of the following conditions are met:
the COMPATIBLE.ASM disk group attribute is set to 11.2.0.4 or higher; the CELL.SMART_SCAN_CAPABLE attribute is set to TRUE; all disks in the disk group are of the same type (for example, all hard disks or all flash disks); all disks in the disk group are the same size; all failure groups in the disk group have an equal number of disks; and no disk in the disk group is offline.

Minimum software: Oracle Exadata Storage Server Software release 11.2.3.3, running Oracle Database 11g Release 2 (11.2) release 11.2.0.4.

Note: this feature is not available in Oracle Database version 12.1.0.1.

Hidden disk group attributes

_REBALANCE_COMPACT
This attribute relates to the compacting phase of rebalance. It can be TRUE (the default) or FALSE. Setting it to FALSE disables the compacting phase of disk group rebalance.

_EXTENT_COUNTS
The _EXTENT_COUNTS attribute, related to variable extent sizes, determines the points at which the extent size increases. Its value is "20000 20000 2147483647", meaning that the first 20000 extents are 1 AU in size, the next 20000 extents have the size given by the second value of the _EXTENT_SIZES attribute, and the remaining extents have the size given by its third value.

_EXTENT_SIZES
This attribute is the second hidden parameter related to variable extent sizes; it determines the extent size progression, expressed in numbers of AUs.

In ASM 11.1 the value was "1 8 64". In ASM 11.2 and later it is "1 4 16".
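Together, _EXTENT_COUNTS and _EXTENT_SIZES determine the size of any given extent; for the 11.2 defaults this can be sketched as follows (an illustrative helper, not ASM code):

```python
EXTENT_COUNTS = [20000, 20000, 2147483647]  # _extent_counts tiers (11.2)
EXTENT_SIZES = [1, 4, 16]                   # _extent_sizes in AUs (11.2)

def extent_size_in_aus(extent_number):
    """Size in AUs of a 0-based extent number, walking the tiers."""
    remaining = extent_number
    for count, size in zip(EXTENT_COUNTS, EXTENT_SIZES):
        if remaining < count:
            return size
        remaining -= count
    return EXTENT_SIZES[-1]

print(extent_size_in_aus(0), extent_size_in_aus(20000), extent_size_in_aus(40000))
# -> 1 4 16
```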

The V$ASM_ATTRIBUTE view and the ASMCMD lsattr command
Disk group attributes can be viewed through the V$ASM_ATTRIBUTE view and the asmcmd lsattr command.

Here is one way to show the attributes of disk group DATADG:

[grid@jyrac1 ~]$ asmcmd lsattr -G DATADG -l
Name                     Value       
access_control.enabled   FALSE       
access_control.umask     066         
au_size                  1048576     
cell.smart_scan_capable  FALSE       
compatible.asm           11.2.0.0.0  
compatible.rdbms         11.2.0.0.0  
disk_repair_time         3.6h        
sector_size              512         

Disk group attributes can be modified with the ALTER DISKGROUP SET ATTRIBUTE statement, the ASMCMD setattr command, and ASMCA. Here is an example of changing DISK_REPAIR_TIME with the ASMCMD setattr command:

[grid@jyrac1 ~]$ asmcmd setattr -G DATADG disk_repair_time '4.5 H'

Check the new value:

[grid@jyrac1 ~]$ asmcmd lsattr -G DATADG -l disk_repair_time
Name              Value  
disk_repair_time  4.5 H  

小结:
Disk group attributes, introduced in ASM 11.1, are a fine-grained way to tune the capabilities of an individual disk group. Some attributes are Exadata-specific and some are only available in particular versions. Most disk group attributes can be viewed in the V$ASM_ATTRIBUTE view.


]]>
http://www.jydba.net/index.php/archives/2047/feed 0
Oracle ASM spfile in a disk group http://www.jydba.net/index.php/archives/2045 http://www.jydba.net/index.php/archives/2045#respond Mon, 16 Jan 2017 00:33:47 +0000 http://www.jydba.net/?p=2045 Starting with ASM 11.2, the ASM spfile can be stored in an ASM disk group. In fact, on a fresh ASM installation, OUI stores the ASM spfile in a disk group. This is true for both Oracle Restart (single-instance) and RAC environments. The first disk group created during the installation is the default location for the spfile, but that is not mandatory; the spfile can still be kept on a file system, such as the $ORACLE_HOME/dbs directory.

New ASMCMD functionality
To support this feature, new ASMCMD commands were introduced to back up, copy, and move the ASM spfile. They are:
.spbackup: backs up an ASM spfile to a backup file. The backup file is not of a special file type and is not identified as an spfile.

.spcopy: copies an ASM spfile from the source location to a destination location.

.spmove: moves an ASM spfile from the source location to a destination location and automatically updates the GPnP profile.

The SQL commands CREATE PFILE FROM SPFILE and CREATE SPFILE FROM PFILE still work with an ASM spfile stored in a disk group.

ASM spfile stored in a disk group
In my environment, the ASM spfile is stored in disk group CRSDG:

[grid@jyrac1 trace]$ asmcmd find --type ASMPARAMETERFILE +CRSDG "*"
+CRSDG/jyrac-cluster/asmparameterfile/REGISTRY.253.928747387

The output shows that the ASM spfile lives in a specific directory and that its ASM file number is 253. The ASM spfile is stored in the disk group as a registry file, and its ASM metadata file number is always 253.

The same can be seen from SQL*Plus:

SQL> show parameter spfile

NAME                                 TYPE                   VALUE
------------------------------------ ---------------------- ------------------------------
spfile                               string                 +CRSDG/jyrac-cluster/asmparame
                                                            terfile/REGISTRY.253.928747387

Backing up the ASM spfile:

[grid@jyrac1 trace]$ asmcmd spbackup +CRSDG/jyrac-cluster/asmparameterfile/REGISTRY.253.928747387 /home/grid/asmspfile.backup

Looking at the contents of the backup file:

[grid@jyrac1 ~]$ strings asmspfile.backup
+ASM1.__oracle_base='/u01/app/grid'#ORACLE_BASE set from in memory value
+ASM2.asm_diskgroups='ARCHDG','DATADG'#Manual Dismount
+ASM1.asm_diskgroups='ARCHDG','DATADG','ACFS'#Manual Mount
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/grid'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'

This is a copy of the ASM spfile, containing the parameters and their associated comments.

ASM spfile search order
The ASM instance needs to read its spfile at startup, but if the disk group holding the spfile cannot be mounted yet, ASM knows neither which disk group holds the spfile nor the value of the ASM disk discovery string. When an Oracle ASM instance searches for an initialization parameter file, the search order is:
1. The location of the initialization parameter file specified in the Grid Plug and Play (GPnP) profile
2. If the location has not been set in the GPnP profile, the search order changes to:
2.1. The spfile in the Oracle ASM instance home (e.g. $ORACLE_HOME/dbs/spfile+ASM.ora)
2.2. A pfile in the Oracle ASM instance home

This does not tell us anything about the ASM discovery string, but at least it mentions the spfile and the GPnP profile. Here are the values from my clustered environment:

[root@jyrac2 ~]# find / -name profile.xml
/u01/app/product/11.2.0/crs/gpnp/jyrac2/profiles/peer/profile.xml
/u01/app/product/11.2.0/crs/gpnp/profiles/peer/profile.xml
[grid@jyrac2 peer]$ gpnptool getpval -p=profile.xml -?

Oracle GPnP Tool
     getpval  Get value(s) from GPnP Profile
Usage: 
 "gpnptool getpval ", where switches are: 
    -prf                  Profile Tag: , optional
    -[id:]prf_cn          Profile Tag: , optional
    -[id:]prf_pa          Profile Tag: , optional
    -[id:]prf_sq          Profile Tag: , optional
    -[id:]prf_cid         Profile Tag: , optional
    -[pid:]nets           Profile Tag:  children of , optional
    -[pid:]haip           Profile Tag:  children of , optional
    -[id:]haip_ma         Profile Tag: , optional
    -[id:]haip_bm         Profile Tag: , optional
    -[id:]haip_s          Profile Tag: , optional
    -[pid:]hnet           Profile Tag:  children of , optional
    -[id:]hnet_nm         Profile Tag: , optional
    -[pid:]net            Profile Tag:  children of , optional
    -[id:]net_ip          Profile Tag: , optional
    -[id:]net_use         Profile Tag: , optional
    -[id:]net_nt          Profile Tag: , optional
    -[id:]net_aip         Profile Tag: , optional
    -[id:]net_ada         Profile Tag: , optional
    -[pid:]asm            Profile Tag:  children of , optional
    -[id:]asm_dis         Profile Tag: , optional
    -[id:]asm_spf         Profile Tag: , optional
    -[id:]asm_uid         Profile Tag: , optional
    -[pid:]css            Profile Tag:  children of , optional
    -[id:]css_dis         Profile Tag: , optional
    -[id:]css_ld          Profile Tag: , optional
    -[id:]css_cin         Profile Tag: , optional
    -[id:]css_cuv         Profile Tag: , optional
    -[pid:]ocr            Profile Tag:  children of , optional
    -[id:]ocr_oid         Profile Tag: , optional
    -rmws                 Remove whitespace from xml, optional
    -fmt[=0,2]            Format profile. Value is ident level,step, optional
    -p[=profile.xml]      GPnP profile name
    -o[=gpnptool.out]     Output result to a file, optional
    -o-                   Output result to stdout
    -ovr                  Overwrite output file, if exists, optional
    -t[=3]                Trace level (min..max=0..7), optional
    -f=              Command file name, optional
    -?                    Print verb help and exit


[grid@jyrac2 peer]$ gpnptool getpval -p=profile.xml -asm_dis -o-

[grid@jyrac2 peer]$ gpnptool getpval -p=profile.xml -asm_spf -o-
+CRSDG/jyrac-cluster/asmparameterfile/spfileasm.ora

In an Oracle Restart (single-instance) environment there is no GPnP profile, so to support storing the ASM spfile in a disk group, its location is recorded in the ora.asm resource:

[grid@jyrac1 ~]$ crsctl stat res ora.asm -p | egrep "ASM_DISKSTRING|SPFILE"
ASM_DISKSTRING=
SPFILE=+DATA/ASM/ASMPARAMETERFILE/registry.253.822856169

Now we know where ASM looks for its disks and its spfile. But when the disk group is not mounted and the ASM instance is down, how does ASM read the spfile? The answer is in the ASM disk header. To support storing the ASM spfile in a disk group, two fields were added to the disk header:
.kfdhdb.spfile: the AU number of the ASM spfile
.kfdhdb.spfflg: the ASM spfile flag; if 1, the ASM spfile is stored in the AU indicated by kfdhdb.spfile

As part of disk discovery, the ASM instance reads the disk headers and looks for the spfile information. Once it finds which disk holds the spfile, it can read the actual initialization parameters.

First check the state and redundancy type of disk group CRSDG in my environment:

[grid@jyrac1 ~]$ asmcmd lsdg -g CRSDG  | cut -c1-26
Inst_ID  State    Type    
      1  MOUNTED  EXTERN  
      2  MOUNTED  EXTERN  

Disk group CRSDG is mounted and uses external redundancy. That means the ASM spfile has no mirror copies, so only one disk will have the kfdhdb.spfile and kfdhdb.spfflg fields set. For example:

[grid@jyrac1 ~]$ asmcmd lsdsk -G CRSDG --suppressheader
/dev/raw/raw1
/dev/raw/raw8
[grid@jyrac1 ~]$ kfed read /dev/raw/raw1 | grep spf
kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000
kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000
[grid@jyrac1 ~]$ kfed read /dev/raw/raw8 | grep spf
kfdhdb.spfile:                       30 ; 0x0f4: 0x0000001e
kfdhdb.spfflg:                        1 ; 0x0f8: 0x00000001

SQL> select group_number,disk_number, name,path from v$asm_disk where group_number=2;

GROUP_NUMBER DISK_NUMBER NAME                                                         PATH
------------ ----------- ------------------------------------------------------------ --------------------------------------------------
           2           1 CRSDG_0001                                                   /dev/raw/raw8
           2           0 CRSDG_0000                                                   /dev/raw/raw1

Only one of the disks holds the ASM spfile.

Use dd to dump the contents of AU 30 on disk /dev/raw/raw8:

[grid@jyrac1 ~]$ dd if=/dev/raw/raw8 bs=1048576 skip=30 count=1 | strings
+ASM1.__oracle_base='/u01/app/grid'#ORACLE_BASE set from in memory value
+ASM2.asm_diskgroups='ARCHDG','DATADG'#Manual Dismount
+ASM1.asm_diskgroups='ARCHDG','DATADG','ACFS'#Manual Mount
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/grid'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.035288 seconds, 29.7 MB/s
KeMQ
jyrac-cluster/asmparameterfile/spfileasm.ora

AU 30 on disk /dev/raw/raw8 does indeed hold the ASM spfile contents.

ASM spfile别名块
A new ASM metadata block type, KFBTYP_ASMSPFALS, describes the ASM spfile alias. The ASM spfile alias block is stored in the last block of the AU that holds the ASM spfile. Let's look at block 255, the last block of AU 30 on disk /dev/raw/raw8:

[grid@jyrac1 ~]$ kfed read /dev/raw/raw8 aun=30 blkn=255
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           27 ; 0x002: KFBTYP_ASMSPFALS
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                     255 ; 0x004: blk=255
kfbh.block.obj:                     253 ; 0x008: file=253
kfbh.check:                  1364026699 ; 0x00c: 0x514d654b
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfspbals.incarn:              928747387 ; 0x000: 0x375b8f7b
kfspbals.blksz:                     512 ; 0x004: 0x00000200
kfspbals.size:                        3 ; 0x008: 0x0003
kfspbals.path.len:                   44 ; 0x00a: 0x002c
kfspbals.path.buf:                      ; 0x00c: length=0

This metadata block is not big. Most of the entries are block header information (the kfbh.* fields); the actual ASM spfile alias data (the kfspbals.* fields) has only a few entries. The spfile incarnation 928747387 is part of the file name (REGISTRY.253.928747387), the ASM spfile block size is 512 bytes, and the file size is 3 blocks. The path information is empty, meaning there is no actual ASM spfile alias.

Now let's create an ASM spfile alias. First create a pfile from the existing spfile, then create the spfile alias from that pfile:

[grid@jyrac1 ~]$sqlplus / as sysasm

SQL> create pfile='/tmp/pfile+ASM.ora' from spfile;

File created.

SQL> shutdown abort;
ASM instance shutdown

SQL> startup pfile='/tmp/pfile+ASM.ora';
ASM instance started

Total System Global Area 1135747072 bytes
Fixed Size                  2297344 bytes
Variable Size            1108283904 bytes
ASM Cache                  25165824 bytes
ASM diskgroups mounted

SQL> create spfile='+CRSDG/jyrac-cluster/asmparameterfile/spfileasm.ora' from pfile='/tmp/pfile+ASM.ora';

File created.

SQL> exit

Running the asmcmd find command again now shows two entries:

[grid@jyrac1 trace]$ asmcmd find --type ASMPARAMETERFILE +CRSDG "*"
+CRSDG/jyrac-cluster/asmparameterfile/REGISTRY.253.928747387
+CRSDG/jyrac-cluster/asmparameterfile/spfileasm.ora

Now we can see the ASM spfile itself (the REGISTRY file) and its alias, or link, spfileasm.ora. Listing spfileasm.ora confirms it is indeed an alias for the registry file:

[grid@jyrac1 ~]$ asmcmd ls -l +CRSDG/jyrac-cluster/asmparameterfile/
Type              Redund  Striped  Time             Sys  Name
ASMPARAMETERFILE  UNPROT  COARSE   JAN 12 16:00:00  Y    REGISTRY.253.928745345
                                                    N    spfileasm.ora => +CRSDG/jyrac-cluster/asmparameterfile/REGISTRY.253.928745345

Now look again at block 255, the last block of AU 30 on disk /dev/raw/raw8:

[grid@jyrac1 ~]$ kfed read /dev/raw/raw8 aun=30 blkn=255
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           27 ; 0x002: KFBTYP_ASMSPFALS
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                     255 ; 0x004: blk=255
kfbh.block.obj:                     253 ; 0x008: file=253
kfbh.check:                  1364026699 ; 0x00c: 0x514d654b
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfspbals.incarn:              928745345 ; 0x000: 0x375b8f7b
kfspbals.blksz:                     512 ; 0x004: 0x00000200
kfspbals.size:                        3 ; 0x008: 0x0003
kfspbals.path.len:                   44 ; 0x00a: 0x002c
kfspbals.path.buf:jyrac-cluster/asmparameterfile/spfileasm.ora ; 0x00c: length=44

The alias file name now appears in the ASM spfile alias block, and there is a new incarnation number reflecting the creation time of the new ASM spfile.

小结:
Starting with ASM 11.2, the ASM spfile can be stored in an ASM disk group. To support this, new ASMCMD commands were added for managing it, and new ASM metadata structures were added to the ASM disk header.

]]>
http://www.jydba.net/index.php/archives/2045/feed 0
Oracle ASM ACFS disk group rebalance http://www.jydba.net/index.php/archives/2043 http://www.jydba.net/index.php/archives/2043#respond Mon, 16 Jan 2017 00:26:44 +0000 http://www.jydba.net/?p=2043 Starting with Oracle 11.2, an ASM disk group can be used to create one or more cluster file systems: Oracle ASM Cluster File System, or Oracle ACFS. This is implemented by creating special volume files inside an ASM disk group, which are presented to the operating system as block devices, on which file systems are then created. This post looks at rebalance, mirroring, and extent management of ACFS volume files.

The test environment:
.64-bit Oracle Linux 5.4
.Oracle Restart and ASM version 11.2.0.4.0 – 64bit

设置ACFS volumes
In a single-instance environment the ADVM/ACFS drivers are loaded manually with the commands below; this is not needed in a RAC environment, where they are loaded by default:

[root@jyrac1 bin]# ./acfsroot install
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9118: oracleadvm.ko driver in use - cannot unload.
ACFS-9312: Existing ADVM/ACFS installation detected.
ACFS-9118: oracleadvm.ko driver in use - cannot unload.
ACFS-9314: Removing previous ADVM/ACFS installation.
ACFS-9315: Previous ADVM/ACFS components successfully removed.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9154: Loading 'oracleacfs.ko' driver.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9309: ADVM/ACFS installation correctness verified.
[root@jyrac1 bin]#  ./acfsload  start
ACFS-9391: Checking for existing ADVM/ACFS installation.
ACFS-9392: Validating ADVM/ACFS installation files for operating system.
ACFS-9393: Verifying ASM Administrator setup.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9322: completed
[root@jyrac1 bin]# ./acfsdriverstate version
ACFS-9325:     Driver OS kernel version = 2.6.18-8.el5(x86_64).
ACFS-9326:     Driver Oracle version = 130707.

Create a disk group to hold the ASM cluster file systems:

SQL> create diskgroup acfs disk '/dev/raw/raw5','/dev/raw/raw6' attribute 'COMPATIBLE.ASM' = '11.2', 'COMPATIBLE.ADVM' = '11.2'; 

Diskgroup created.

Although a single disk group can hold both database files and ACFS volume files, it is recommended to create a separate disk group for ACFS volumes. That provides separation of roles and functions, and has potential performance benefits for the database files.

Check the AU size of all disk groups:

SQL> select group_number "Group#", name "Name", allocation_unit_size "AU size" from v$asm_diskgroup_stat;

    Group# Name                                                            AU size
---------- ------------------------------------------------------------ ----------
         1 ARCHDG                                                          1048576
         2 CRSDG                                                           1048576
         3 DATADG                                                          1048576
         4 ACFS                                                            1048576

The default AU size of 1 MB is in use for all disk groups; this AU size is relevant below when we look at the volume file extent size.

Create three volumes in disk group ACFS:

[grid@jyrac1 ~]$ asmcmd volcreate -G ACFS -s 1G ACFS_VOL1
[grid@jyrac1 ~]$ asmcmd volcreate -G ACFS -s 1G ACFS_VOL2
[grid@jyrac1 ~]$ asmcmd volcreate -G ACFS -s 1G ACFS_VOL3

Display the volume information:

[grid@jyrac1 ~]$ asmcmd volinfo -a
Diskgroup Name: ACFS

         Volume Name: ACFS_VOL1
         Volume Device: /dev/asm/acfs_vol1-10
         State: ENABLED
         Size (MB): 1024
         Resize Unit (MB): 32
         Redundancy: MIRROR
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: 
         Mountpath: 

         Volume Name: ACFS_VOL2
         Volume Device: /dev/asm/acfs_vol2-10
         State: ENABLED
         Size (MB): 1024
         Resize Unit (MB): 32
         Redundancy: MIRROR
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: 
         Mountpath: 

         Volume Name: ACFS_VOL3
         Volume Device: /dev/asm/acfs_vol3-10
         State: ENABLED
         Size (MB): 1024
         Resize Unit (MB): 32
         Redundancy: MIRROR
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: 
         Mountpath: 

Volumes are automatically enabled when created. After a server restart you may need to load the ADVM/ACFS drivers manually (acfsload start) and enable the volumes (asmcmd volenable -a).

For each volume, ASM creates a volume file. In a redundant disk group, each volume also has a dirty region logging (DRL) file:

SQL> select file_number "File#", volume_name "Volume", volume_device "Device", size_mb "MB", drl_file_number "DRL#" from v$asm_volume;

File# Volume                                   Device                                           MB       DRL#
----- ---------------------------------------- ---------------------------------------- ---------- ----------
  257 ACFS_VOL1                                /dev/asm/acfs_vol1-10                          1024        256
  259 ACFS_VOL2                                /dev/asm/acfs_vol2-10                          1024        258
  261 ACFS_VOL3                                /dev/asm/acfs_vol3-10                          1024        260

Besides the volume names, device names, and sizes, the output shows ASM file numbers 257, 259, and 261 for the volume devices, and ASM file numbers 256, 258, and 260 for the DRL files.

Query the AU distribution of volume file 261:

SQL> select 
  2  xnum_kffxp,            -- virtual extent number
  3  pxn_kffxp,             -- physical extent number
  4  disk_kffxp,            -- disk number
  5  au_kffxp               -- allocation unit number
  6  from x$kffxp
  7  where number_kffxp=261 -- asm file 261
  8  and group_kffxp=4      -- group number 4
  9  order by 1,2,3;

XNUM_KFFXP  PXN_KFFXP DISK_KFFXP   AU_KFFXP
---------- ---------- ---------- ----------
         0          0          0       2160
         0          1          1       2160
         1          2          1       2168
         1          3          0       2168
         2          4          0       2176
         2          5          1       2176
         3          6          1       2184
         3          7          0       2184
         4          8          0       2192
         4          9          1       2192
         5         10          1       2200
         5         11          0       2200
         6         12          0       2208
         6         13          1       2208
......

       124        248          0       3152
       124        249          1       3152
       125        250          1       3160
       125        251          0       3160
       126        252          0       3168
       126        253          1       3168
       127        254          1       3176
       127        255          0       3176
2147483648          0          0       2156
2147483648          1          1       2156
2147483648          2      65534 4294967294

259 rows selected.

When a volume is created in a normal redundancy disk group, each extent of the volume file is likewise mirrored. Volume file 261 has 128 extents; the volume size is 1 GB, so each extent is 8 MB, or 8 AUs. Volume files have their own extent size; unlike standard ASM files, they do not inherit the disk group AU size as the initial extent size.
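The 128-extent figure follows directly from the sizes involved; as a quick check (using the 1 GB volume size, the 8 MB volume extent size observed above, and the 1 MB AU size):

```python
volume_size_mb = 1024   # 1 GB volume
extent_size_mb = 8      # volume file extent size observed above
au_size_mb = 1          # disk group AU size

extents = volume_size_mb // extent_size_mb
aus_per_extent = extent_size_mb // au_size_mb
print(extents, aus_per_extent)  # -> 128 8

# With normal redundancy, each virtual extent has 2 physical copies
print(extents * 2)  # -> 256
```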

Create ASM Cluster File Systems (ACFS) on the volume devices:

[grid@jyrac1 ~]$ /sbin/mkfs -t acfs /dev/asm/acfs_vol1-10
mkfs.acfs: version                   = 11.2.0.4.0
mkfs.acfs: on-disk version           = 39.0
mkfs.acfs: volume                    = /dev/asm/acfs_vol1-10
mkfs.acfs: volume size               = 1073741824
mkfs.acfs: Format complete.
[grid@jyrac1 ~]$ /sbin/mkfs -t acfs /dev/asm/acfs_vol2-10
mkfs.acfs: version                   = 11.2.0.4.0
mkfs.acfs: on-disk version           = 39.0
mkfs.acfs: volume                    = /dev/asm/acfs_vol2-10
mkfs.acfs: volume size               = 1073741824
mkfs.acfs: Format complete.
[grid@jyrac1 ~]$ /sbin/mkfs -t acfs /dev/asm/acfs_vol3-10
mkfs.acfs: version                   = 11.2.0.4.0
mkfs.acfs: on-disk version           = 39.0
mkfs.acfs: volume                    = /dev/asm/acfs_vol3-10
mkfs.acfs: volume size               = 1073741824
mkfs.acfs: Format complete.

[root@jyrac1 /]# mkdir /acfs1
[root@jyrac1 /]# mkdir /acfs2
[root@jyrac1 /]# mkdir /acfs3
[root@jyrac1 /]# chown -R grid:oinstall /acfs1
[root@jyrac1 /]# chown -R grid:oinstall /acfs2
[root@jyrac1 /]# chown -R grid:oinstall /acfs3
[root@jyrac1 /]# chmod -R 777 /acfs1
[root@jyrac1 /]# chmod -R 777 /acfs2
[root@jyrac1 /]# chmod -R 777 /acfs3


[root@jyrac1 /]# mount -t acfs /dev/asm/acfs_vol1-10 /acfs1
[root@jyrac1 /]# mount -t acfs /dev/asm/acfs_vol2-10 /acfs2
[root@jyrac1 /]# mount -t acfs /dev/asm/acfs_vol3-10 /acfs3
[root@jyrac1 /]# mount | grep acfs
/dev/asm/acfs_vol1-10 on /acfs1 type acfs (rw)
/dev/asm/acfs_vol2-10 on /acfs2 type acfs (rw)
/dev/asm/acfs_vol3-10 on /acfs3 type acfs (rw)

Copy some files into the new file systems:

[grid@jyrac1 +asm]$ cp $ORACLE_BASE/diag/asm/+asm/+ASM1/trace/* /acfs1
[grid@jyrac1 +asm]$ cp $ORACLE_BASE/diag/asm/+asm/+ASM1/trace/* /acfs2
[grid@jyrac1 +asm]$ cp $ORACLE_BASE/diag/asm/+asm/+ASM1/trace/* /acfs3

Check the space usage:

[root@jyrac1 /]# df -h /acfs?
Filesystem            Size  Used Avail Use% Mounted on
/dev/asm/acfs_vol1-10
                      1.0G  105M  920M  11% /acfs1
/dev/asm/acfs_vol2-10
                      1.0G  105M  920M  11% /acfs2
/dev/asm/acfs_vol3-10
                      1.0G  105M  920M  11% /acfs3

Now add a disk to the ACFS disk group and monitor the rebalance:

SQL> alter diskgroup ACFS add disk '/dev/raw/raw7';

Diskgroup altered.

In alert_+ASM1.log we can see that the ARB0 process has OS PID 1074:

[grid@jyrac1 trace]$ tail -f alert_+ASM1.log
SQL> alter diskgroup ACFS add disk '/dev/raw/raw7' 
NOTE: GroupBlock outside rolling migration privileged region
NOTE: Assigning number (4,2) to disk (/dev/raw/raw7)
NOTE: requesting all-instance membership refresh for group=4
NOTE: initializing header on grp 4 disk ACFS_0002
NOTE: requesting all-instance disk validation for group=4
Thu Jan 12 14:54:45 2017
NOTE: skipping rediscovery for group 4/0xd98640a (ACFS) on local instance.
NOTE: requesting all-instance disk validation for group=4
NOTE: skipping rediscovery for group 4/0xd98640a (ACFS) on local instance.
Thu Jan 12 14:54:45 2017
GMON updating for reconfiguration, group 4 at 249 for pid 27, osid 18644
NOTE: group 4 PST updated.
NOTE: initiating PST update: grp = 4
GMON updating group 4 at 250 for pid 27, osid 18644
NOTE: group ACFS: updated PST location: disk 0000 (PST copy 0)
NOTE: group ACFS: updated PST location: disk 0001 (PST copy 1)
NOTE: group ACFS: updated PST location: disk 0002 (PST copy 2)
NOTE: PST update grp = 4 completed successfully 
NOTE: membership refresh pending for group 4/0xd98640a (ACFS)
GMON querying group 4 at 251 for pid 18, osid 5012
NOTE: cache opening disk 2 of grp 4: ACFS_0002 path:/dev/raw/raw7
GMON querying group 4 at 252 for pid 18, osid 5012
SUCCESS: refreshed membership for 4/0xd98640a (ACFS)
NOTE: starting rebalance of group 4/0xd98640a (ACFS) at power 1
SUCCESS: alter diskgroup ACFS add disk '/dev/raw/raw7'
Starting background process ARB0
Thu Jan 12 14:54:48 2017
ARB0 started with pid=40, OS id=1074 
NOTE: assigning ARB0 to group 4/0xd98640a (ACFS) with 1 parallel I/O
cellip.ora not found.
NOTE: F1X0 copy 3 relocating from 65534:4294967294 to 2:2 for diskgroup 4 (ACFS)
Thu Jan 12 14:55:00 2017
NOTE: Attempting voting file refresh on diskgroup ACFS
NOTE: Refresh completed on diskgroup ACFS. No voting file found.

Monitor the rebalance with tail -f +ASM1_arb0_1074.trc:

*** 2017-01-12 14:55:18.731
ARB0 relocating file +ACFS.259.933075367 (86 entries)

*** 2017-01-12 14:55:38.599
ARB0 relocating file +ACFS.259.933075367 (1 entries)
ARB0 relocating file +ACFS.260.933075373 (17 entries)

*** 2017-01-12 14:55:39.617
ARB0 relocating file +ACFS.261.933075373 (86 entries)

*** 2017-01-12 14:55:59.106
ARB0 relocating file +ACFS.261.933075373 (1 entries)

*** 2017-01-12 14:55:59.274
ARB0 relocating file +ACFS.258.933075367 (1 entries)
ARB0 relocating file +ACFS.258.933075367 (1 entries)
ARB0 relocating file +ACFS.258.933075367 (1 entries)
ARB0 relocating file +ACFS.258.933075367 (1 entries)
ARB0 relocating file +ACFS.258.933075367 (1 entries)
ARB0 relocating file +ACFS.258.933075367 (1 entries)
ARB0 relocating file +ACFS.258.933075367 (1 entries)
ARB0 relocating file +ACFS.257.933075361 (1 entries)
ARB0 relocating file +ACFS.256.933075361 (1 entries)
ARB0 relocating file +ACFS.256.933075361 (1 entries)
ARB0 relocating file +ACFS.256.933075361 (1 entries)
ARB0 relocating file +ACFS.256.933075361 (1 entries)
ARB0 relocating file +ACFS.256.933075361 (1 entries)
ARB0 relocating file +ACFS.256.933075361 (1 entries)
ARB0 relocating file +ACFS.256.933075361 (1 entries)
ARB0 relocating file +ACFS.256.933075361 (1 entries)
ARB0 relocating file +ACFS.256.933075361 (1 entries)
ARB0 relocating file +ACFS.256.933075361 (1 entries)
ARB0 relocating file +ACFS.256.933075361 (1 entries)
ARB0 relocating file +ACFS.256.933075361 (1 entries)
ARB0 relocating file +ACFS.256.933075361 (1 entries)
ARB0 relocating file +ACFS.256.933075361 (1 entries)
ARB0 relocating file +ACFS.256.933075361 (1 entries)
ARB0 relocating file +ACFS.256.933075361 (1 entries)
ARB0 relocating file +ACFS.256.933075361 (1 entries)

*** 2017-01-12 14:56:00.201
ARB0 relocating file +ACFS.1.1 (1 entries)
ARB0 relocating file +ACFS.7.1 (1 entries)
ARB0 relocating file +ACFS.5.1 (1 entries)
ARB0 relocating file +ACFS.8.1 (1 entries)
ARB0 relocating file +ACFS.9.1 (1 entries)
ARB0 relocating file +ACFS.6.1 (1 entries)
ARB0 relocating file +ACFS.4.1 (1 entries)
ARB0 relocating file +ACFS.4.1 (1 entries)
ARB0 relocating file +ACFS.3.1 (1 entries)

.....

The trace shows the rebalance of each ASM file in turn. The behavior is the same as for database files: ASM rebalances one file at a time. The ASM metadata files (1-9) are rebalanced first; ASM then rebalances volume file numbers 257, 259, 261, ASM file numbers 256, 258, 260, and so on.
Note that rebalancing volume files (like any other ASM file) does not operate on the individual user files stored in the file system built on top of them; instead, each volume file is rebalanced as a unit.

Disk online operations in an ACFS disk group
When an ASM disk goes offline, ASM creates the staleness registry and staleness directory to track the extents that will need to be modified when the disk comes back online. Once the disk is back online, ASM uses this information to perform a fast mirror resync. This feature was not available for volume files in ASM 11.2; instead, ASM rebuilt the entire contents of the disk being brought online. That is why bringing a disk online performed worse in a disk group holding volume files than in one holding standard database files. Fast mirror resync for volume files is available in ASM 12.1 and later.

Summary:
An ASM disk group can be used to create a general purpose cluster file system. ASM does this by creating volume files inside the disk group and presenting them to the operating system as block devices. The existing ASM disk group mirroring (normal and high redundancy) can be used to protect user files at the file system level; ASM achieves this by mirroring volume file extents, just as it does for any other ASM file. Volume files have their own extent size, unlike standard database files whose initial extent size is inherited from the disk group AU size. A rebalance of an ASM disk group holding ASM Cluster File System volumes actually rebalances each volume file; it does not operate on the individual user files stored in the file system.

]]>
Oracle ASM How many allocation units per file
http://www.jydba.net/index.php/archives/2041
Fri, 13 Jan 2017 07:34:32 +0000

The smallest amount of space ASM allocates is one allocation unit (AU). The default AU size is 1MB, except on Exadata where the default is 4MB. ASM allocates file space in extents, and an extent consists of one or more AUs. In 11.2, the first 20000 extents hold 1 AU each, the next 20000 extents hold 4 AUs each, and extents beyond that hold 16 AUs each; this is called variable size extents. In 11.1, extents grew in steps of 1-8-64 AUs. In 10g there were no variable size extents, so every extent was exactly 1 AU.
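As a quick illustration of the variable size extent rule just described, the total number of AUs behind a file's first n extents can be sketched like this (an illustrative helper, not Oracle code; the thresholds are the 11.2 values from the paragraph above):

```python
# Illustrative sketch of the 11.2 variable size extent rule described above:
# extents 0-19999 hold 1 AU, extents 20000-39999 hold 4 AUs, later ones 16.
def extent_size_in_aus(extent_number):
    if extent_number < 20000:
        return 1
    if extent_number < 40000:
        return 4
    return 16

def aus_for_extents(n):
    # Total AUs consumed by a file's first n extents.
    return sum(extent_size_in_aus(i) for i in range(n))

# With a 1MB AU, a 100MB file fits entirely in 1-AU extents:
assert aus_for_extents(100) == 100
```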

The v$asm_file view exposes the bytes and space columns:
.bytes: the actual file size in bytes
.space: the number of bytes allocated to the file

The difference in definition is subtle, but the difference in the numbers can be substantial.

First, query some basic information about disk group DATADG, which holds most of the datafiles. Run the following query in the database instance:

SQL> select name, group_number, allocation_unit_size/1024/1024 "au size (mb)", type from v$asm_diskgroup where name='DATADG';

NAME                                                         GROUP_NUMBER au size (mb) TYPE
------------------------------------------------------------ ------------ ------------ ------------
DATADG                                                                  3            1 NORMAL

Now create a small file (no more than 60 extents) and a large file (more than 60 extents):

SQL> create tablespace t1 datafile '+datadg' size 10M;

Tablespace created.

SQL> create tablespace t2 datafile '+datadg' size 100M;

Tablespace created.

Query the datafile numbers of the tablespaces just created:

SQL> select name,round(bytes/1024/1024) "MB" from v$datafile;

NAME                                                                 MB
------------------------------------------------------------ ----------
+DATADG/jyrac/datafile/system.259.930413057                         760
+DATADG/jyrac/datafile/sysaux.258.930413055                        1630
+DATADG/jyrac/datafile/undotbs1.262.930413057                       100
+DATADG/jyrac/datafile/users.263.930413057                            5
+DATADG/jyrac/datafile/example.260.930413057                        346
+DATADG/jyrac/datafile/undotbs2.261.930413057                       150
+DATADG/jyrac/datafile/test01.dbf                                   100
+DATADG/jyrac/datafile/cs.271.931880499                               1
+DATADG/jyrac/datafile/cs_stripe_coarse.272.931882089                 1
+DATADG/jyrac/datafile/not_important.273.931882831                    1
+TESTDG/jyrac/datafile/t_cs.256.932985243                            50
+DATADG/jyrac/datafile/t1.274.933003755                              10
+DATADG/jyrac/datafile/t2.275.933003775                             100

13 rows selected.

The small file's ASM file number is 274; the large file's is 275.

Query the bytes and space (in AUs) information for the two files:

SQL> select file_number, round(bytes/1024/1024) "bytes (au)", round(space/1024/1024) "space (aus)", redundancy
  2  from v$asm_file where file_number in (274, 275) and group_number=3;

FILE_NUMBER bytes (au) space (aus) REDUND
----------- ---------- ----------- ------
        274         10          22 MIRROR
        275        100         205 MIRROR

The bytes column shows the actual file size. For the small file, bytes reports 10 AUs = 10MB (with a 1MB AU), while space shows 22 AUs: 10 AUs for the actual datafile plus 1 AU for the file header, and since the file is mirrored, double that number is needed, giving 22 AUs. For the large file, bytes reports 100 AUs = 100MB: 100 AUs for the actual datafile plus 1 AU for the file header, doubled by mirroring, gives 202 AUs. Yet space shows the large file using 205 AUs. What are the extra 3 AUs?
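The arithmetic above can be written down explicitly (a minimal sketch; the function name and defaults are illustrative, assuming a normal redundancy disk group with a 1MB AU):

```python
# Space in AUs for a datafile in a normal redundancy disk group (1MB AU).
# Illustrative helper: data AUs plus one file header AU, all mirrored,
# plus any triple-mirrored indirect (metadata) AUs for large files.
def space_in_aus(file_mb, mirror_copies=2, indirect_aus=0):
    data_aus = file_mb          # with a 1MB AU, one AU per MB of file data
    header_au = 1
    return (data_aus + header_au) * mirror_copies + indirect_aus

assert space_in_aus(10) == 22                    # small file (<= 60 extents)
assert space_in_aus(100, indirect_aus=3) == 205  # large file, 3 indirect AUs
```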

Run the following query to get the AU distribution of file 275:

SQL> select xnum_kffxp "virtual extent", pxn_kffxp "physical extent", disk_kffxp "disk number", au_kffxp "au number"
  2  from x$kffxp where group_kffxp=3 and number_kffxp=275
  3  order by 1,2,3;

virtual extent physical extent disk number  au number
-------------- --------------- ----------- ----------
             0               0           3       1787
             0               1           2       1779
             1               2           1       1779
             1               3           3       1788
             2               4           2       1780
             2               5           3       1789
             3               6           0       1785
             3               7           2       1781
             4               8           3       1790
             4               9           1       1780
             5              10           1       1781
             5              11           2       1782
             6              12           2       1783
             6              13           0       1786
             7              14           0       1787
             7              15           3       1791
             8              16           3       1792
             8              17           0       1788
             9              18           1       1782
             9              19           0       1789
            10              20           2       1784
            10              21           1       1783
            11              22           0       1790
            11              23           1       1784
            12              24           3       1793
            12              25           2       1785
            13              26           1       1785
            13              27           3       1794
            14              28           2       1786
            14              29           3       1795
            15              30           0       1791
            15              31           2       1787
            16              32           3       1796
            16              33           1       1786
            17              34           1       1787
            17              35           2       1788
            18              36           2       1789
            18              37           0       1792
            19              38           0       1793
            19              39           3       1797
            20              40           3       1798
            20              41           0       1794
            21              42           1       1788
            21              43           0       1795
            22              44           2       1790
            22              45           1       1789
            23              46           0       1796
            23              47           1       1790
            24              48           3       1799
            24              49           2       1791
            25              50           1       1791
            25              51           3       1800
            26              52           2       1792
            26              53           3       1801
            27              54           0       1797
            27              55           2       1793
            28              56           3       1802
            28              57           1       1792
            29              58           1       1793
            29              59           2       1794
            30              60           2       1796
            30              61           0       1798
            31              62           0       1799
            31              63           3       1804
            32              64           3       1805
            32              65           0       1800
            33              66           1       1795
            33              67           0       1801
            34              68           2       1797
            34              69           1       1796
            35              70           0       1802
            35              71           1       1797
            36              72           3       1806
            36              73           2       1798
            37              74           1       1798
            37              75           3       1807
            38              76           2       1799
            38              77           3       1808
            39              78           0       1803
            39              79           2       1800
            40              80           3       1809
            40              81           1       1799
            41              82           1       1800
            41              83           2       1801
            42              84           2       1802
            42              85           0       1804
            43              86           0       1805
            43              87           3       1810
            44              88           3       1811
            44              89           0       1806
            45              90           1       1801
            45              91           0       1807
            46              92           2       1803
            46              93           1       1802
            47              94           0       1808
            47              95           1       1803
            48              96           3       1812
            48              97           2       1804
            49              98           1       1804
            49              99           3       1813
            50             100           2       1805
            50             101           3       1814
            51             102           0       1809
            51             103           2       1806
            52             104           3       1815
            52             105           1       1805
            53             106           1       1806
            53             107           2       1807
            54             108           2       1808
            54             109           0       1810
            55             110           0       1811
            55             111           3       1816
            56             112           3       1817
            56             113           0       1812
            57             114           1       1807
            57             115           0       1813
            58             116           2       1809
            58             117           1       1808
            59             118           0       1814
            59             119           1       1809
            60             120           3       1818
            60             121           2       1810
            61             122           1       1810
            61             123           3       1819
            62             124           2       1811
            62             125           3       1820
            63             126           0       1815
            63             127           2       1812
            64             128           3       1821
            64             129           1       1811
            65             130           1       1812
            65             131           2       1813
            66             132           2       1814
            66             133           0       1816
            67             134           0       1817
            67             135           3       1822
            68             136           3       1823
            68             137           0       1818
            69             138           1       1813
            69             139           0       1819
            70             140           2       1815
            70             141           1       1814
            71             142           0       1820
            71             143           1       1815
            72             144           3       1824
            72             145           2       1816
            73             146           1       1816
            73             147           3       1825
            74             148           2       1817
            74             149           3       1826
            75             150           0       1821
            75             151           2       1818
            76             152           3       1827
            76             153           1       1817
            77             154           1       1818
            77             155           2       1819
            78             156           2       1820
            78             157           0       1822
            79             158           0       1823
            79             159           3       1828
            80             160           3       1829
            80             161           0       1824
            81             162           1       1819
            81             163           0       1825
            82             164           2       1821
            82             165           1       1820
            83             166           0       1826
            83             167           1       1821
            84             168           3       1830
            84             169           2       1822
            85             170           1       1822
            85             171           3       1831
            86             172           2       1823
            86             173           3       1832
            87             174           0       1827
            87             175           2       1824
            88             176           3       1833
            88             177           1       1823
            89             178           1       1824
            89             179           2       1825
            90             180           2       1826
            90             181           0       1828
            91             182           0       1829
            91             183           3       1834
            92             184           3       1835
            92             185           0       1830
            93             186           1       1825
            93             187           0       1831
            94             188           2       1827
            94             189           1       1826
            95             190           0       1832
            95             191           1       1827
            96             192           3       1836
            96             193           2       1828
            97             194           1       1828
            97             195           3       1837
            98             196           2       1829
            98             197           3       1838
            99             198           0       1833
            99             199           2       1830
           100             200           3       1839
           100             201           1       1829
    2147483648               0           1       1794
    2147483648               1           2       1795
    2147483648               2           3       1803

205 rows selected.

Since the file is mirrored, each virtual extent has two physical extents. Interestingly, though, the last three AUs belong to virtual extent number 2147483648 and have three mirror copies. Next we will use the kfed tool to look at their contents.

Query the disk names of disk group DATADG:

SQL> select disk_number, path from v$asm_disk where group_number=3;

DISK_NUMBER PATH
----------- --------------------------------------------------
          0 /dev/raw/raw11
          1 /dev/raw/raw4
          3 /dev/raw/raw10
          2 /dev/raw/raw3


[grid@jyrac1 ~]$ kfed read /dev/raw/raw4 aun=1794 | grep type
kfbh.type:                           12 ; 0x002: KFBTYP_INDIRECT
[grid@jyrac1 ~]$ kfed read /dev/raw/raw3 aun=1795 | grep type
kfbh.type:                           12 ; 0x002: KFBTYP_INDIRECT
[grid@jyrac1 ~]$ kfed read /dev/raw/raw10 aun=1803 | grep type
kfbh.type:                           12 ; 0x002: KFBTYP_INDIRECT

These extra AUs hold ASM metadata for the large file. More precisely, they hold the extent map entries that cannot fit in the file's ASM File Directory block. The File Directory needs extra space to track files with more than 60 extents, hence the additional AUs. Although the File Directory only needs a few extra ASM metadata blocks, the smallest unit of ASM space allocation is an AU, and because this is metadata the AU is triple mirrored (even in a normal redundancy disk group), so 3 extra AUs are allocated for each large file. In an external redundancy disk group, each large file needs only 1 extra AU.

Summary:
The space an ASM disk group needs for a file depends on two factors: the file size and the disk group redundancy type:
In an external redundancy disk group, a file larger than 60 AUs needs the AUs for the file data, plus 1 AU for the file header, plus 1 AU for the indirect extents.
In a normal redundancy disk group, a file larger than 60 AUs needs twice the AUs for the file data, plus 2 AUs for the file header, plus 3 AUs for the indirect extents.
In a high redundancy disk group, a file larger than 60 AUs needs three times the AUs for the file data, plus 3 AUs for the file header, plus 3 AUs for the indirect extents.
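The three rules above can be condensed into one hedged formula (names are illustrative; it assumes the file is larger than 60 AUs, so indirect extents are needed):

```python
# Illustrative condensation of the summary rules above for files > 60 AUs.
# Data and header AUs are multiplied by the redundancy's mirror count;
# indirect AUs are triple mirrored except in external redundancy groups.
def required_aus(file_aus, redundancy):
    copies = {"external": 1, "normal": 2, "high": 3}[redundancy]
    indirect = 1 if redundancy == "external" else 3
    header = 1
    return (file_aus + header) * copies + indirect

assert required_aus(100, "external") == 102
assert required_aus(100, "normal") == 205   # matches the v$asm_file example
assert required_aus(100, "high") == 306
```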

]]>
Locating Data in ASM
http://www.jydba.net/index.php/archives/2029
Thu, 12 Jan 2017 00:43:44 +0000

Sometimes we want to know which ASM disk a particular database block resides on, which AU of that disk, and which block within that AU. This post shows how to find out.

First, create a test tablespace in the database:

SQL> create tablespace t_cs datafile '+testdg' size 50M autoextend off;

Tablespace created.


SQL> set long 200
SQL> set linesize 200
SQL> select f.file#, f.name "file", t.name "tablespace"
  2  from v$datafile f, v$tablespace t
  3  where t.name='T_CS' and f.ts# = t.ts#;

     FILE# file                                               tablespace
---------- -------------------------------------------------- ------------------------------
        11 +TESTDG/jyrac/datafile/t_cs.256.932913341          T_CS

Note that the ASM file number is 256. Now create a test table and insert a row:

SQL> create table t(n number,name varchar2(20)) tablespace t_cs;

Table created.

SQL> insert into t values(1,'JY');

1 row created.

SQL> commit;

Commit complete.

Query the block number occupied by table T:

SQL> select rowid,name from t;

ROWID              NAME
------------------ --------------------------------------------------
AAAV/pAALAAAACHAAA JY

SQL> select dbms_rowid.rowid_block_number('AAAV/pAALAAAACHAAA') "block number" from dual;

block number
------------
         135

Query the datafile block size:

SQL> select block_size from v$datafile where file#=11;

BLOCK_SIZE
----------
      8192

The inserted row is in block 135, and the datafile block size is 8K.

Connect to the ASM instance and query the extent distribution of file 256:

SQL> select group_number from v$asm_diskgroup where name='TESTDG';

GROUP_NUMBER
------------
           5

SQL> select 
  2  xnum_kffxp,            -- virtual extent number
  3  pxn_kffxp,             -- physical extent number
  4  disk_kffxp,            -- disk number
  5  au_kffxp               -- allocation unit number
  6  from x$kffxp
  7  where number_kffxp=256 -- asm file 256
  8  and group_kffxp=5      -- group number 5
  9  order by 1,2,3;

XNUM_KFFXP  PXN_KFFXP DISK_KFFXP   AU_KFFXP
---------- ---------- ---------- ----------
         0          0          2         41
         0          1          3         38
         1          2          3         39
         1          3          2         42
         2          4          1         41
         2          5          0         36
         3          6          0         37
         3          7          2         43
         4          8          2         45
         4          9          1         42
         5         10          3         40
         5         11          1         43
         6         12          1         44
         6         13          2         47
         7         14          0         38
         7         15          1         45
         8         16          2         48
         8         17          0         39
         9         18          3         43
         9         19          0         40
        10         20          1         46
        10         21          3         44
        11         22          0         41
        11         23          3         45
        12         24          2         49
        12         25          3         46
        13         26          3         47
        13         27          2         46
        14         28          1         47
        14         29          0         42
        15         30          0         43
        15         31          2         52
        16         32          2         53
        16         33          1         48
        17         34          3         48
        17         35          1         49
        18         36          1         50
        18         37          2         54
        19         38          0         44
        19         39          1         51
        20         40          2         55
        20         41          0         45
        21         42          3         50
        21         43          0         46
        22         44          1         52
        22         45          3         51
        23         46          0         47
        23         47          3         52
        24         48          2         56
        24         49          3         53
        25         50          3         54
        25         51          2         59
        26         52          1         53
        26         53          0         48
        27         54          0         49
        27         55          2         60
        28         56          2         61
        28         57          1         54
        29         58          3         55
        29         59          1         56
        30         60          1         58
        30         61          2         65
        31         62          0         51
        31         63          1         59
        32         64          2         66
        32         65          0         52
        33         66          3         57
        33         67          0         53
        34         68          1         60
        34         69          3         58
        35         70          0         54
        35         71          3         59
        36         72          2         67
        36         73          3         60
        37         74          3         61
        37         75          2         68
        38         76          1         61
        38         77          0         55
        39         78          0         56
        39         79          2         71
        40         80          2         72
        40         81          1         63
        41         82          3         63
        41         83          1         64
        42         84          1         65
        42         85          2         73
        43         86          0         57
        43         87          1         66
        44         88          2         74
        44         89          0         58
        45         90          3         64
        45         91          0         59
        46         92          1         67
        46         93          3         65
        47         94          0         60
        47         95          3         66
        48         96          2         77
        48         97          3         67
        49         98          3         69
        49         99          2         78
        50        100          1         69
        50        101          0         61
2147483648          0          1         57
2147483648          1          0         50
2147483648          2          2         62

105 rows selected.

The file's extents are spread across all disks, and since the datafile has normal redundancy, each extent has two copies. Note that I said the datafile has normal redundancy: by default, a file inherits the redundancy policy of its disk group. Control files are the exception: even in a normal redundancy disk group, a control file is created with high redundancy if the disk group contains at least three failgroups.

Query the disk group's AU size:

SQL> select block_size,allocation_unit_size from v$asm_diskgroup where group_number=5;

BLOCK_SIZE ALLOCATION_UNIT_SIZE
---------- --------------------
      4096              1048576

The AU size is 1MB. Note that each disk group can have its own AU size.

We now know the test data is in block 135 of ASM file 256. With an 8K block size, each AU holds 128 blocks, so block 135 is block 7 of the second virtual extent (virtual extent 1). That extent is mirrored on AU 39 of disk 3 and AU 42 of disk 2.
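The arithmetic above can be sketched as follows (illustrative; it relies on every extent of this small file being exactly 1 AU, as is the case here):

```python
# Locate block 135 of the datafile inside its ASM extents (1 AU per extent).
block_size = 8192              # datafile block size
au_size = 1024 * 1024          # disk group AU size
block_number = 135

blocks_per_au = au_size // block_size            # 128 blocks per AU
virtual_extent = block_number // blocks_per_au   # extent holding the block
block_in_au = block_number % blocks_per_au       # block offset inside the AU

assert (blocks_per_au, virtual_extent, block_in_au) == (128, 1, 7)
```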

Query the names of disks 2 and 3:

SQL> set long 200
SQL> set linesize 200
SQL> select disk_number, name,path from v$asm_disk where group_number=5 and disk_number in (2,3);

DISK_NUMBER NAME                                                         PATH
----------- ------------------------------------------------------------ --------------------------------------------------
          2 TESTDG_0002                                                  /dev/raw/raw13
          3 TESTDG_0003                                                  /dev/raw/raw14

The test data is in block 7 of AU 42 on disk 2. First dd that AU out:

[grid@jyrac1 ~]$ dd if=/dev/raw/raw13 bs=1024k count=1 skip=42 of=AU42.dd
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.021209 seconds, 49.4 MB/s

Note the meaning of these parameters:
bs=1024k: the AU size
skip=42: skip the first 42 input blocks and start at the 43rd, because AU numbering starts at 0
count=1: copy exactly one AU

Then extract block 7 from the AU:

[grid@jyrac1 ~]$ dd if=AU42.dd bs=8k count=1 skip=7 of=block135.dd
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 9.3e-05 seconds, 88.1 MB/s

Note that bs is set to 8k (the database block size), and skip=7 skips the first 7 blocks of the AU and starts at the 8th (the block we want), because block numbering within an AU also starts at 0.
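Both dd steps can be checked against the raw byte offset of the block on the ASM disk (illustrative arithmetic; the offsets come from the example above):

```python
# Byte offset of block 7 inside AU 42 on the ASM disk.
au_size = 1024 * 1024
block_size = 8192
au_number = 42
block_in_au = 7

byte_offset = au_number * au_size + block_in_au * block_size
assert byte_offset == 44097536

# The same block could be read from the disk in a single dd step, e.g.
# (hypothetical equivalent): dd if=/dev/raw/raw13 bs=8192 skip=5383 count=1
assert byte_offset // block_size == 42 * 128 + 7 == 5383
```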

View the block contents:

[grid@jyrac1 ~]$ od -c block135.dd
0000000 006 242  \0  \0 207  \0 300 002 020   W 314  \0  \0  \0 001 006
0000020 305 276  \0  \0 001  \0 020  \0 351   _ 001  \0 016   W 314  \0
0000040  \0  \0 350 037 002 037   2  \0 200  \0 300 002 005  \0  \r  \0
....
0017760 001 200 001   , 001 002 002 301 002 002   J   Y 001 006 020   W
0020000

At the end of the dump we can see the inserted data, JY. Note that Oracle fills data blocks from the bottom up.

Reading AU 39 on disk 3 (/dev/raw/raw14) gives the same result:

[grid@jyrac1 ~]$ dd if=/dev/raw/raw14 bs=1024k count=1 skip=39 of=AU39.dd
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.017309 seconds, 60.6 MB/s
[grid@jyrac1 ~]$ dd if=AU39.dd bs=8k count=1 skip=7 of=block135_copy.dd
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.000207 seconds, 39.6 MB/s
0000000 006 242  \0  \0 207  \0 300 002 020   W 314  \0  \0  \0 001 006
0000020 305 276  \0  \0 001  \0 020  \0 351   _ 001  \0 016   W 314  \0
0000040  \0  \0 350 037 002 037   2  \0 200  \0 300 002 005  \0  \r  \0
....
0017760 001 200 001   , 001 002 002 301 002 002   J   Y 001 006 020   W
0020000

Summary:
To locate a data block in ASM, you need to know which datafile it belongs to; then you can look up that file's extent distribution through the X$KFFXP view. You also need the database block size and the ASM AU size to work out which AU holds the block. None of this depends on the ASM or RDBMS version (except the V$ASM_ATTRIBUTE view, which does not exist in 10g). With normal and high redundancy there are multiple copies of the data, but the method for locating a block is the same.

]]>
Oracle ASM Rebalance Execution Process
http://www.jydba.net/index.php/archives/2025
Wed, 11 Jan 2017 00:35:24 +0000

When will a disk group rebalance complete? There is no exact answer, but ASM itself provides an estimate (GV$ASM_OPERATION.EST_MINUTES). While we cannot predict the precise completion time, we can examine the details of the rebalance to know whether it is actually running, which phase it is in, and whether that phase deserves your attention.

Understanding rebalance
A rebalance operation consists of three phases: planning, extents relocation, and compacting. Of the total rebalance time, planning takes very little, and you normally need not pay attention to it. The second phase, extents relocation, usually accounts for most of the rebalance time and is the phase to watch. We will also describe what the third phase, compacting, does.

First, understand why the rebalance is needed. If you added a new disk to grow the disk group's usable space, or resized or dropped a disk to adjust capacity, you may not care much about when the rebalance finishes. But if a disk in the group has failed, you have every reason to watch its progress: if your disk group has normal redundancy and the failed disk's partner disk also fails, the whole disk group is dismounted, every database running on it crashes, and you may lose data. In that situation you really need to know when the rebalance will complete. In fact, what you need to know is when the extents relocation phase completes, because once it is done the disk group's redundancy is fully restored (the third phase does not matter for redundancy, as discussed later).

Extents relocation

To take a closer look at the extents relocation phase, I dropped a disk from a disk group running with the default rebalance power:

SQL> show parameter power

NAME                                 TYPE                   VALUE
------------------------------------ ---------------------- ------------------------------
asm_power_limit                      integer                1

14:47:35 SQL> select group_number,disk_number,name,state,path,header_status from v$asm_disk where group_number=5;

GROUP_NUMBER DISK_NUMBER NAME                 STATE                PATH                 HEADER_STATUS
------------ ----------- -------------------- -------------------- -------------------- --------------------
           5           0 TESTDG_0000          NORMAL               /dev/raw/raw7        MEMBER
           5           2 TESTDG_0002          NORMAL               /dev/raw/raw13       MEMBER
           5           1 TESTDG_0001          NORMAL               /dev/raw/raw12       MEMBER
           5           3 TESTDG_0003          NORMAL               /dev/raw/raw14       MEMBER

14:48:38 SQL> alter diskgroup testdg drop disk TESTDG_0000;

Diskgroup altered.

The EST_MINUTES column of the GV$ASM_OPERATION view below gives the estimated time to completion in minutes; here the estimate is 9 minutes.

14:49:04 SQL> select inst_id, operation, state, power, sofar, est_work, est_rate, est_minutes from gv$asm_operation where group_number=5;

   INST_ID OPERATION            STATE                     POWER      SOFAR   EST_WORK   EST_RATE EST_MINUTES
---------- -------------------- -------------------- ---------- ---------- ---------- ---------- -----------
         1 REBAL                RUN                           1          4       4748        475           9

About a minute later, EST_MINUTES dropped to 0:

14:50:22 SQL> select inst_id, operation, state, power, sofar, est_work, est_rate, est_minutes from gv$asm_operation where group_number=5;

   INST_ID OPERATION            STATE                     POWER      SOFAR   EST_WORK   EST_RATE EST_MINUTES
---------- -------------------- -------------------- ---------- ---------- ---------- ---------- -----------
         1 REBAL                RUN                           1       3030       4748       2429           0
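The estimate appears to follow directly from the other columns of GV$ASM_OPERATION (a hedged reading, assuming EST_RATE is the relocation rate in allocation units per minute):

```python
# Hedged sketch: reproduce EST_MINUTES from SOFAR, EST_WORK and EST_RATE,
# assuming EST_RATE is measured in AUs per minute (truncated to minutes).
def est_minutes(sofar, est_work, est_rate):
    return (est_work - sofar) // est_rate

assert est_minutes(4, 4748, 475) == 9      # the first query above
assert est_minutes(3030, 4748, 2429) == 0  # one minute later
```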

Sometimes the EST_MINUTES value does not give you much to go on, but we can also see that SOFAR (the number of AUs moved so far) keeps increasing, which makes it a good metric to watch. The ASM alert log also records the disk drop operation and the OS process ID of ARB0, the process ASM uses to do all the rebalance work. More importantly, there is no error output anywhere in the process:

SQL> alter diskgroup testdg drop disk TESTDG_0000 
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=5
Tue Jan 10 14:49:01 2017
GMON updating for reconfiguration, group 5 at 222 for pid 42, osid 6197
NOTE: group 5 PST updated.
Tue Jan 10 14:49:01 2017
NOTE: membership refresh pending for group 5/0x97f863e8 (TESTDG)
GMON querying group 5 at 223 for pid 18, osid 5012
SUCCESS: refreshed membership for 5/0x97f863e8 (TESTDG)
NOTE: starting rebalance of group 5/0x97f863e8 (TESTDG) at power 1
Starting background process ARB0
SUCCESS: alter diskgroup testdg drop disk TESTDG_0000
Tue Jan 10 14:49:04 2017
ARB0 started with pid=39, OS id=25416 
NOTE: assigning ARB0 to group 5/0x97f863e8 (TESTDG) with 1 parallel I/O
cellip.ora not found.
NOTE: F1X0 copy 1 relocating from 0:2 to 2:2 for diskgroup 5 (TESTDG)
NOTE: F1X0 copy 3 relocating from 2:2 to 3:2599 for diskgroup 5 (TESTDG)
Tue Jan 10 14:49:13 2017
NOTE: Attempting voting file refresh on diskgroup TESTDG
NOTE: Refresh completed on diskgroup TESTDG. No voting file found.
Tue Jan 10 14:51:05 2017
NOTE: stopping process ARB0
SUCCESS: rebalance completed for group 5/0x97f863e8 (TESTDG)
Tue Jan 10 14:51:07 2017
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=5
Tue Jan 10 14:51:10 2017
GMON updating for reconfiguration, group 5 at 224 for pid 39, osid 25633
NOTE: group 5 PST updated.
SUCCESS: grp 5 disk TESTDG_0000 emptied
NOTE: erasing header on grp 5 disk TESTDG_0000
NOTE: process _x000_+asm1 (25633) initiating offline of disk 0.3915944675 (TESTDG_0000) with mask 0x7e in group 5
NOTE: initiating PST update: grp = 5, dsk = 0/0xe96892e3, mask = 0x6a, op = clear
GMON updating disk modes for group 5 at 225 for pid 39, osid 25633
NOTE: group TESTDG: updated PST location: disk 0001 (PST copy 0)
NOTE: group TESTDG: updated PST location: disk 0002 (PST copy 1)
NOTE: group TESTDG: updated PST location: disk 0003 (PST copy 2)
NOTE: PST update grp = 5 completed successfully 
NOTE: initiating PST update: grp = 5, dsk = 0/0xe96892e3, mask = 0x7e, op = clear
GMON updating disk modes for group 5 at 226 for pid 39, osid 25633
NOTE: cache closing disk 0 of grp 5: TESTDG_0000
NOTE: PST update grp = 5 completed successfully 
GMON updating for reconfiguration, group 5 at 227 for pid 39, osid 25633
NOTE: cache closing disk 0 of grp 5: (not open) TESTDG_0000
NOTE: group 5 PST updated.
NOTE: membership refresh pending for group 5/0x97f863e8 (TESTDG)
GMON querying group 5 at 228 for pid 18, osid 5012
GMON querying group 5 at 229 for pid 18, osid 5012
NOTE: Disk TESTDG_0000 in mode 0x0 marked for de-assignment
SUCCESS: refreshed membership for 5/0x97f863e8 (TESTDG)
Tue Jan 10 14:51:16 2017
NOTE: Attempting voting file refresh on diskgroup TESTDG
NOTE: Refresh completed on diskgroup TESTDG. No voting file found.

So ASM estimated 9 minutes for the rebalance, but it actually took only about 2 minutes. This is why it matters to first understand what the rebalance is doing before asking when it will finish. Note that the estimate is dynamic and can go up or down depending on the load on your system and on the rebalance power setting; for a very large disk group, a rebalance can take hours or even days.

The ARB0 trace file also shows which ASM file's extents are currently being relocated. Through this trace file we can confirm that ARB0 really is doing its job and not slacking off.

[grid@jyrac1 trace]$ tail -f  +ASM1_arb0_25416.trc
*** 2017-01-10 14:49:20.160
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:49:24.081
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:49:28.290
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:49:32.108
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:49:35.419
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:49:38.921
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:49:43.613
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:49:47.523
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:49:51.073
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:49:54.545
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:49:58.538
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:02.944
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:06.428
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:10.035
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:13.507
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:17.526
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:21.692
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:25.649
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:29.360
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:33.233
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:37.287
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:40.843
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:44.356
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:48.158
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:51.854
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:55.568
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:50:59.439
ARB0 relocating file +TESTDG.256.932913341 (120 entries)

*** 2017-01-10 14:51:02.877
ARB0 relocating file +TESTDG.256.932913341 (50 entries)

Note that there may be many arb0 trace files in the trace directory, so we need the OS process ID of the ARB0 that is actually doing the rebalance work; the ASM instance prints this in the alert log when it runs the rebalance. We can also use the OS command pstack to trace the ARB0 process and see exactly what it is doing. As shown below, it tells us that ASM is relocating extents (the key functions on the stack are kfgbRebalExecute - kfdaExecute - kffRelocate):

[root@jyrac1 ~]# pstack 25416
#0  0x0000003aa88005f4 in ?? () from /usr/lib64/libaio.so.1
#1  0x0000000002bb9b11 in skgfrliopo ()
#2  0x0000000002bb9909 in skgfospo ()
#3  0x00000000086c595f in skgfrwat ()
#4  0x00000000085a4f79 in ksfdwtio ()
#5  0x000000000220b2a3 in ksfdwat_internal ()
#6  0x0000000003ee7f33 in kfk_reap_ufs_async_io ()
#7  0x0000000003ee7e7b in kfk_reap_ios_from_subsys ()
#8  0x0000000000aea0ac in kfk_reap_ios ()
#9  0x0000000003ee749e in kfk_io1 ()
#10 0x0000000003ee7044 in kfkRequest ()
#11 0x0000000003eed84a in kfk_transitIO ()
#12 0x0000000003e40e7a in kffRelocateWait ()
#13 0x0000000003e67d12 in kffRelocate ()
#14 0x0000000003ddd3fb in kfdaExecute ()
#15 0x0000000003ec075b in kfgbRebalExecute ()
#16 0x0000000003ead530 in kfgbDriver ()
#17 0x00000000021b37df in ksbabs ()
#18 0x0000000003ec4768 in kfgbRun ()
#19 0x00000000021b8553 in ksbrdp ()
#20 0x00000000023deff7 in opirip ()
#21 0x00000000016898bd in opidrv ()
#22 0x0000000001c6357f in sou2o ()
#23 0x00000000008523ca in opimai_real ()
#24 0x0000000001c6989d in ssthrdmain ()
#25 0x00000000008522c1 in main ()
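The OS id needed for pstack comes from the alert log line quoted earlier ("ARB0 started with pid=39, OS id=25416"); a minimal sketch for pulling it out — the sample line is hard-coded here, and the trailing pstack pipeline is only indicative:

```shell
# Extract the ARB0 OS process id from an ASM alert log line so it can be
# fed to pstack. In practice feed the line in from:
#   grep "ARB0 started" <asm alert log> | tail -1
line='ARB0 started with pid=39, OS id=25416'
ospid=$(printf '%s\n' "$line" | sed -n 's/.*OS id=\([0-9]*\).*/\1/p')
echo "$ospid"
# then: pstack "$ospid" | grep -E 'kffRelocate|kfdaExecute|kfdCompact'
```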

Compacting
In the following example we look at the compacting phase of the rebalance. I add back the disk dropped above and set the rebalance power to 2:

17:26:48 SQL> alter diskgroup testdg add disk '/dev/raw/raw7' rebalance power 2;

Diskgroup altered.

ASM estimates 6 minutes for the rebalance:

16:07:13 SQL> select INST_ID, OPERATION, STATE, POWER, SOFAR, EST_WORK, EST_RATE, EST_MINUTES from GV$ASM_OPERATION where GROUP_NUMBER=1;

  INST_ID OPERA STAT      POWER      SOFAR   EST_WORK   EST_RATE EST_MINUTES
---------- ----- ---- ---------- ---------- ---------- ---------- -----------
        1 REBAL RUN          10        489      53851       7920           6

About 10 seconds later, EST_MINUTES drops to 0:

16:07:23 SQL> /

  INST_ID OPERA STAT      POWER      SOFAR   EST_WORK   EST_RATE EST_MINUTES
---------- ----- ---- ---------- ---------- ---------- ---------- -----------
        1 REBAL RUN          10      92407      97874       8716           0

At this point we see the following in the ASM alert log:

SQL> alter diskgroup testdg add disk '/dev/raw/raw7'  rebalance power 2
NOTE: GroupBlock outside rolling migration privileged region
NOTE: Assigning number (5,0) to disk (/dev/raw/raw7)
NOTE: requesting all-instance membership refresh for group=5
NOTE: initializing header on grp 5 disk TESTDG_0000
NOTE: requesting all-instance disk validation for group=5
Tue Jan 10 16:07:12 2017
NOTE: skipping rediscovery for group 5/0x97f863e8 (TESTDG) on local instance.
NOTE: requesting all-instance disk validation for group=5
NOTE: skipping rediscovery for group 5/0x97f863e8 (TESTDG) on local instance.
Tue Jan 10 16:07:12 2017
GMON updating for reconfiguration, group 5 at 230 for pid 42, osid 6197
NOTE: group 5 PST updated.
NOTE: initiating PST update: grp = 5
GMON updating group 5 at 231 for pid 42, osid 6197
NOTE: PST update grp = 5 completed successfully 
NOTE: membership refresh pending for group 5/0x97f863e8 (TESTDG)
GMON querying group 5 at 232 for pid 18, osid 5012
NOTE: cache opening disk 0 of grp 5: TESTDG_0000 path:/dev/raw/raw7
GMON querying group 5 at 233 for pid 18, osid 5012
SUCCESS: refreshed membership for 5/0x97f863e8 (TESTDG)
NOTE: starting rebalance of group 5/0x97f863e8 (TESTDG) at power 1
SUCCESS: alter diskgroup testdg add disk '/dev/raw/raw7'
Starting background process ARB0
Tue Jan 10 16:07:14 2017
ARB0 started with pid=27, OS id=982 
NOTE: assigning ARB0 to group 5/0x97f863e8 (TESTDG) with 1 parallel I/O
cellip.ora not found.
Tue Jan 10 16:07:23 2017
NOTE: Attempting voting file refresh on diskgroup TESTDG

The output above means ASM has finished the second phase of the rebalance and moved on to the third phase, compacting. If that is correct, pstack should show the kfdCompact() function; the output below confirms it:

# pstack 982
#0  0x0000003957ccb6ef in poll () from /lib64/libc.so.6
...
#9  0x0000000003d711e0 in kfk_reap_oss_async_io ()
#10 0x0000000003d70c17 in kfk_reap_ios_from_subsys ()
#11 0x0000000000aea50e in kfk_reap_ios ()
#12 0x0000000003d702ae in kfk_io1 ()
#13 0x0000000003d6fe54 in kfkRequest ()
#14 0x0000000003d76540 in kfk_transitIO ()
#15 0x0000000003cd482b in kffRelocateWait ()
#16 0x0000000003cfa190 in kffRelocate ()
#17 0x0000000003c7ba16 in kfdaExecute ()
#18 0x0000000003c4b737 in kfdCompact ()
#19 0x0000000003c4c6d0 in kfdExecute ()
#20 0x0000000003d4bf0e in kfgbRebalExecute ()
#21 0x0000000003d39627 in kfgbDriver ()
#22 0x00000000020e8d23 in ksbabs ()
#23 0x0000000003d4faae in kfgbRun ()
#24 0x00000000020ed95d in ksbrdp ()
#25 0x0000000002322343 in opirip ()
#26 0x0000000001618571 in opidrv ()
#27 0x0000000001c13be7 in sou2o ()
#28 0x000000000083ceba in opimai_real ()
#29 0x0000000001c19b58 in ssthrdmain ()
#30 0x000000000083cda1 in main ()
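The two stack captures can be told apart mechanically by grepping for the frame that only appears during compacting; a small sketch, with two sample frame lines standing in for a real pstack capture:

```shell
# Classify a pstack capture of ARB0: kfdCompact appears only during the
# compacting phase, while kfdaExecute appears in both phases.
stack='#17 0x0000000003c7ba16 in kfdaExecute ()
#18 0x0000000003c4b737 in kfdCompact ()'
if printf '%s\n' "$stack" | grep -q kfdCompact; then
  phase=compacting
else
  phase=relocating
fi
echo "$phase"
```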

Tailing the ARB0 trace file shows that relocation is still going on, but now only one entry is relocated at a time (another important clue that the compacting phase is in progress):

$ tail -f +ASM1_arb0_25416.trc
ARB0 relocating file +DATA1.321.788357323 (1 entries)
ARB0 relocating file +DATA1.321.788357323 (1 entries)
ARB0 relocating file +DATA1.321.788357323 (1 entries)
...

During compacting, the EST_MINUTES column of V$ASM_OPERATION shows 0 (also an important clue):

16:08:56 SQL> /

  INST_ID OPERA STAT      POWER      SOFAR   EST_WORK   EST_RATE EST_MINUTES
---------- ----- ---- ---------- ---------- ---------- ---------- -----------
        2 REBAL RUN          10      98271      98305       7919           0

The fixed table X$KFGMG's REBALST_KFGMG column shows 2, which means compacting is in progress:

16:09:12 SQL> select NUMBER_KFGMG, OP_KFGMG, ACTUAL_KFGMG, REBALST_KFGMG from X$KFGMG;

NUMBER_KFGMG   OP_KFGMG ACTUAL_KFGMG REBALST_KFGMG
------------ ---------- ------------ -------------
          1          1           10             2
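The phase code can be turned into a readable label; a sketch, assuming the common interpretation that 1 means the extent-relocation phase — only the value 2 (compacting) is confirmed by this article:

```shell
# Map the REBALST_KFGMG value from X$KFGMG to a phase name.
# 2 = compacting (confirmed above); 1 = relocating is an assumption.
rebalst=2   # sample value, as returned by the query above
case "$rebalst" in
  1) phase=relocating ;;
  2) phase=compacting ;;
  *) phase=unknown ;;
esac
echo "$phase"
```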

Once the compacting phase completes, the ASM alert log shows stopping process ARB0 and rebalance completed:

Tue Jan 10 16:10:19 2017
NOTE: stopping process ARB0
SUCCESS: rebalance completed for group 5/0x97f863e8 (TESTDG)

Once the extent relocation completes, all data once again meets the redundancy requirement, and we no longer need to worry that a subsequent failure of the failed disk's partner disks would cause serious damage.

Changing the power
The rebalance power can be changed dynamically while the disk group rebalance is running, so if you think the default level is too low, it is easy to raise it. But to what value? That depends on your system's IO load and IO throughput. In general, first raise it to a conservative value such as 5, wait ten minutes or so, and check whether throughput improved and whether other workloads' IO suffered. If your IO capacity is strong you can keep raising the power, but in my experience settings above 30 rarely bring much further improvement. The key is to test under your production system's normal load: different workloads and different storage systems can produce very different rebalance times.
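A minimal sketch of raising the power mid-flight, using the diskgroup name testdg from the examples above; the bounds check reflects the documented limits (0 pauses the rebalance, 11 is the classic maximum, up to 1024 when compatible.asm is 11.2.0.2 or higher):

```shell
# Build the statement that changes the power of a running rebalance.
power=5
if [ "$power" -lt 0 ] || [ "$power" -gt 1024 ]; then
  echo "invalid power: $power" >&2
  exit 1
fi
sql="alter diskgroup testdg rebalance power ${power}"
echo "$sql"
# run it with: echo "${sql};" | sqlplus -S / as sysasm
```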

]]>
Oracle AMDU - ASM Metadata Dump Utility
http://www.jydba.net/index.php/archives/2017 Thu, 05 Jan 2017 09:10:20 +0000

The ASM Metadata Dump Utility, better known by its executable name amdu, is routinely used by Oracle Support and Oracle developers to diagnose and resolve ASM problems. It can print ASM metadata and extract both metadata and data files from ASM disk groups. amdu does not depend on the state of the ASM instance or of the disk groups, so it works with the ASM instance shut down and the disk groups dismounted; it even works when ASM disks are damaged or missing.

Extracting a control file from a mounted disk group with amdu
In this first example we use amdu to extract a control file of the database jyrac from a disk group that is mounted. Using the asmcmd find command with the --type option to search for files of type controlfile, the output below lists the locations of all control files found:

[grid@jyrac1 ~]$  asmcmd find --type controlfile + "*"
+DATADG/JYRAC/CONTROLFILE/current.257.930412709

The output above shows that the DATADG disk group holds a copy of the JYRAC database control file. Here we extract the control file current.257.930412709 from the DATADG disk group. First, let's see which disks the DATADG disk group consists of:

[grid@jyrac1 ~]$ asmcmd lsdsk -G DATADG
Path
/dev/raw/raw10
/dev/raw/raw11
/dev/raw/raw3
/dev/raw/raw4

The DATADG disk group consists of four disks: /dev/raw/raw10, /dev/raw/raw11, /dev/raw/raw3 and /dev/raw/raw4. If the names were all prefixed with ORCL, the disks would be ASMLIB disks. Strictly speaking we do not need the individual disk names at all; searching the directories defined by the ASM_DISKSTRING parameter is enough. Next we use amdu to extract the control file from the DATADG disk group onto the file system:

[grid@jyrac1 ~]$ amdu -diskstring="/dev/raw/*" -extract DATADG.257 -output control.257 -noreport -nodir
AMDU-00204: Disk N0003 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0003: '/dev/raw/raw11'
AMDU-00204: Disk N0009 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0009: '/dev/raw/raw4'
AMDU-00204: Disk N0008 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0008: '/dev/raw/raw3'


[grid@jyrac1 ~]$ ls -lrt control.257 
-rw-r--r-- 1 grid oinstall 18595840 Jan  5 16:03 control.257

The relevant parameters of this command are:
diskstring: the full paths of the disks, or the value of the ASM_DISKSTRING parameter
extract: diskgroup name.ASM file number
output: the extracted output file (in the current directory)
noreport: do not print the amdu progress report
nodir: do not create a dump directory
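The same invocation can be assembled from its parts; a hypothetical helper (the function name build_amdu_cmd is mine, not part of amdu):

```shell
# Assemble the amdu extract command documented above from a diskgroup
# name, an ASM file number, and an output file name.
build_amdu_cmd() {
  dg=$1; fnum=$2; out=$3
  printf 'amdu -diskstring="/dev/raw/*" -extract %s.%s -output %s -noreport -nodir' \
    "$dg" "$fnum" "$out"
}
build_amdu_cmd DATADG 257 control.257
```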

Extracting a data file from a dismounted disk group with amdu
Extracting a control file from a mounted disk group, as above, is straightforward. In practice, however, a customer may ask us to extract an important data file from a disk group that cannot be mounted, with the data file name unknown and no backup available. The following concrete example demonstrates the whole operation and analysis. The goal is to use amdu to extract, from a DATADG disk group that cannot be mounted, a data file whose name contains SYS. Right away this means that neither sqlplus nor asmcmd can be used here. First, take a complete dump of the DATADG disk group's metadata with amdu.

[grid@jyrac1 ~]$  amdu -dump DATADG -noimage
amdu_2017_01_05_16_09_47/
AMDU-00204: Disk N0003 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0003: '/dev/raw/raw11'
AMDU-00204: Disk N0009 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0009: '/dev/raw/raw4'
AMDU-00204: Disk N0008 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0008: '/dev/raw/raw3'

[grid@jyrac1 ~]$ cd amdu_2017_01_05_16_09_47/
[grid@jyrac1 amdu_2017_01_05_16_09_47]$ 

[grid@jyrac1 amdu_2017_01_05_16_09_47]$ ls -lrt
total 44
-rw-r--r-- 1 grid oinstall 16222 Jan  5 16:09 report.txt
-rw-r--r-- 1 grid oinstall 27520 Jan  5 16:09 DATADG.map

In this example amdu created a dump directory containing two files. report.txt records the host, the amdu command and its parameters, the candidate member disks of the DATADG disk group, and AU information for those disks. The contents of report.txt are as follows:

[grid@jyrac1 amdu_2017_01_05_16_09_47]$ more report.txt 
-*-amdu-*-

******************************* AMDU Settings ********************************
ORACLE_HOME = /u01/app/product/11.2.0/crs
System name:    Linux
Node name:      jyrac1
Release:        2.6.18-164.el5
Version:        #1 SMP Tue Aug 18 15:51:48 EDT 2009
Machine:        x86_64
amdu run:       05-JAN-17 16:09:47
Endianess:      1

--------------------------------- Operations ---------------------------------
       -dump DATADG

------------------------------- Disk Selection -------------------------------
 -diskstring ''

------------------------------ Reading Control -------------------------------

------------------------------- Output Control -------------------------------
    -noimage

********************************* DISCOVERY **********************************

----------------------------- DISK REPORT N0001 ------------------------------
                Disk Path: /dev/raw/raw1
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: CRSDG
                Disk Name: CRSDG_0000
       Failure Group Name: CRSDG_0000
              Disk Number: 0
            Header Status: 3
       Disk Creation Time: 2016/11/22 18:24:35.358000
          Last Mount Time: 2016/12/14 17:02:09.327000
    Compatibility Version: 0x0b200000(11020000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 1
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/11/22 18:24:35.079000
  File 1 Block 1 location: AU 2
              OCR Present: YES

----------------------------- DISK REPORT N0002 ------------------------------
                Disk Path: /dev/raw/raw10
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: DATADG
                Disk Name: DATADG_0000
       Failure Group Name: DATADG_0000
              Disk Number: 3
            Header Status: 3
       Disk Creation Time: 2016/12/12 15:36:39.090000
          Last Mount Time: 2017/01/03 11:54:18.454000
    Compatibility Version: 0x0b200000(11020000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 2
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/12/12 15:36:38.488000
  File 1 Block 1 location: AU 0
              OCR Present: NO

----------------------------- DISK REPORT N0003 ------------------------------
                Disk Path: /dev/raw/raw11
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: DATADG
                Disk Name: DATADG_0001
       Failure Group Name: DATADG_0001
              Disk Number: 0
            Header Status: 3
       Disk Creation Time: 2016/12/12 15:36:39.090000
          Last Mount Time: 2016/12/14 17:02:10.127000
    Compatibility Version: 0x0b200000(11020000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 2
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/12/12 15:36:38.488000
  File 1 Block 1 location: AU 2
              OCR Present: NO

----------------------------- DISK REPORT N0004 ------------------------------
                Disk Path: /dev/raw/raw12
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: USD
                Disk Name: USD_0001
       Failure Group Name: USD_0001
              Disk Number: 1
            Header Status: 3
       Disk Creation Time: 2016/12/30 14:58:59.434000
          Last Mount Time: 2017/01/03 09:57:50.397000
    Compatibility Version: 0x0b200000(11020000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 2
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/12/30 14:58:59.213000
  File 1 Block 1 location: AU 1344
              OCR Present: NO

----------------------------- DISK REPORT N0005 ------------------------------
                Disk Path: /dev/raw/raw13
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: TESTDG
                Disk Name: TESTDG_0004
       Failure Group Name: TESTDG_0004
              Disk Number: 4
            Header Status: 4
       Disk Creation Time: 2016/12/28 16:04:46.242000
          Last Mount Time: 2016/12/28 16:04:57.102000
    Compatibility Version: 0x0a100000(10010000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 2
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/12/28 16:04:45.574000
  File 1 Block 1 location: AU 0
              OCR Present: NO

----------------------------- DISK REPORT N0006 ------------------------------
                Disk Path: /dev/raw/raw14
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: TESTDG
                Disk Name: TESTDG_0005
       Failure Group Name: TESTDG_0005
              Disk Number: 5
            Header Status: 4
       Disk Creation Time: 2016/12/28 16:04:46.242000
          Last Mount Time: 2016/12/28 16:04:57.102000
    Compatibility Version: 0x0a100000(10010000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 2
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/12/28 16:04:45.574000
  File 1 Block 1 location: AU 0
              OCR Present: NO

----------------------------- DISK REPORT N0007 ------------------------------
                Disk Path: /dev/raw/raw2
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: ARCHDG
                Disk Name: ARCHDG_0000
       Failure Group Name: ARCHDG_0000
              Disk Number: 0
            Header Status: 3
       Disk Creation Time: 2016/11/22 19:18:27.892000
          Last Mount Time: 2016/12/14 17:02:08.754000
    Compatibility Version: 0x0b200000(11020000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 2
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/11/22 19:18:27.619000
  File 1 Block 1 location: AU 2
              OCR Present: NO

----------------------------- DISK REPORT N0008 ------------------------------
                Disk Path: /dev/raw/raw3
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: DATADG
                Disk Name: DATADG_0002
       Failure Group Name: DATADG_0002
              Disk Number: 2
            Header Status: 3
       Disk Creation Time: 2016/12/12 15:36:39.090000
          Last Mount Time: 2016/12/14 17:02:10.127000
    Compatibility Version: 0x0b200000(11020000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 2
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/12/12 15:36:38.488000
  File 1 Block 1 location: AU 2
              OCR Present: NO

----------------------------- DISK REPORT N0009 ------------------------------
                Disk Path: /dev/raw/raw4
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: DATADG
                Disk Name: DATADG_0003
       Failure Group Name: DATADG_0003
              Disk Number: 1
            Header Status: 3
       Disk Creation Time: 2016/12/12 15:36:39.090000
          Last Mount Time: 2016/12/14 17:02:10.127000
    Compatibility Version: 0x0b200000(11020000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 2
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/12/12 15:36:38.488000
  File 1 Block 1 location: AU 2
              OCR Present: NO

----------------------------- DISK REPORT N0010 ------------------------------
                Disk Path: /dev/raw/raw5
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: ACFS
                Disk Name: ACFS_0000
       Failure Group Name: ACFS_0000
              Disk Number: 0
            Header Status: 3
       Disk Creation Time: 2016/12/30 09:09:30.242000
          Last Mount Time: 2016/12/30 09:09:41.395000
    Compatibility Version: 0x0b200000(11020000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 2
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/12/30 09:09:29.830000
  File 1 Block 1 location: AU 2
              OCR Present: NO

----------------------------- DISK REPORT N0011 ------------------------------
                Disk Path: /dev/raw/raw6
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: ACFS
                Disk Name: ACFS_0001
       Failure Group Name: ACFS_0001
              Disk Number: 1
            Header Status: 3
       Disk Creation Time: 2016/12/30 09:09:30.242000
          Last Mount Time: 2016/12/30 09:09:41.395000
    Compatibility Version: 0x0b200000(11020000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 2
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/12/30 09:09:29.830000
  File 1 Block 1 location: AU 2
              OCR Present: NO

----------------------------- DISK REPORT N0012 ------------------------------
                Disk Path: /dev/raw/raw7
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: USD
                Disk Name: USD_0000
       Failure Group Name: USD_0000
              Disk Number: 0
            Header Status: 3
       Disk Creation Time: 2016/12/30 14:58:59.434000
          Last Mount Time: 2016/12/30 14:59:10.816000
    Compatibility Version: 0x0b200000(11020000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 2
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/12/30 14:58:59.213000
  File 1 Block 1 location: AU 2
              OCR Present: NO

----------------------------- DISK REPORT N0013 ------------------------------
                Disk Path: /dev/raw/raw8
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: CRSDG
                Disk Name: CRSDG_0001
       Failure Group Name: CRSDG_0001
              Disk Number: 1
            Header Status: 3
       Disk Creation Time: 2016/11/22 18:24:35.358000
          Last Mount Time: 2016/12/14 17:02:09.327000
    Compatibility Version: 0x0b200000(11020000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 1
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/11/22 18:24:35.079000
  File 1 Block 1 location: AU 0
              OCR Present: NO

----------------------------- DISK REPORT N0014 ------------------------------
                Disk Path: /dev/raw/raw9
           Unique Disk ID: 
               Disk Label: 
     Physical Sector Size: 512 bytes
                Disk Size: 5120 megabytes
               Group Name: ARCHDG
                Disk Name: ARCHDG_0001
       Failure Group Name: ARCHDG_0001
              Disk Number: 1
            Header Status: 3
       Disk Creation Time: 2016/11/22 19:18:27.892000
          Last Mount Time: 2016/12/14 17:02:08.754000
    Compatibility Version: 0x0b200000(11020000)
         Disk Sector Size: 512 bytes
         Disk size in AUs: 5120 AUs
         Group Redundancy: 2
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2016/11/22 19:18:27.619000
  File 1 Block 1 location: AU 2
              OCR Present: NO

***************** Slept for 6 seconds waiting for heartbeats *****************

************************* SCANNING DISKGROUP DATADG **************************
            Creation Time: 2016/12/12 15:36:38.488000
         Disks Discovered: 4
               Redundancy: 2
                  AU Size: 1048576 bytes
      Metadata Block Size: 4096 bytes
     Physical Sector Size: 512 bytes
          Metadata Stride: 113792 AU
   Duplicate Disk Numbers: 0


---------------------------- SCANNING DISK N0003 -----------------------------
Disk N0003: '/dev/raw/raw11'
AMDU-00204: Disk N0003 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0003: '/dev/raw/raw11'
** HEARTBEAT DETECTED **
           Allocated AU's: 1737
                Free AU's: 3383
       AU's read for dump: 83
       Block images saved: 19712
        Map lines written: 83
          Heartbeats seen: 1
  Corrupt metadata blocks: 0
        Corrupt AT blocks: 0


---------------------------- SCANNING DISK N0009 -----------------------------
Disk N0009: '/dev/raw/raw4'
AMDU-00204: Disk N0009 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0009: '/dev/raw/raw4'
** HEARTBEAT DETECTED **
           Allocated AU's: 1734
                Free AU's: 3386
       AU's read for dump: 85
       Block images saved: 20488
        Map lines written: 85
          Heartbeats seen: 1
  Corrupt metadata blocks: 0
        Corrupt AT blocks: 0


---------------------------- SCANNING DISK N0008 -----------------------------
Disk N0008: '/dev/raw/raw3'
AMDU-00204: Disk N0008 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0008: '/dev/raw/raw3'
** HEARTBEAT DETECTED **
           Allocated AU's: 1733
                Free AU's: 3387
       AU's read for dump: 89
       Block images saved: 21256
        Map lines written: 89
          Heartbeats seen: 1
  Corrupt metadata blocks: 0
        Corrupt AT blocks: 0


---------------------------- SCANNING DISK N0002 -----------------------------
Disk N0002: '/dev/raw/raw10'
           Allocated AU's: 1740
                Free AU's: 3380
       AU's read for dump: 87
       Block images saved: 20487
        Map lines written: 87
          Heartbeats seen: 0
  Corrupt metadata blocks: 0
        Corrupt AT blocks: 0


------------------------ SUMMARY FOR DISKGROUP DATADG ------------------------
           Allocated AU's: 6944
                Free AU's: 13536
       AU's read for dump: 344
       Block images saved: 81943
        Map lines written: 344
          Heartbeats seen: 3
  Corrupt metadata blocks: 0
        Corrupt AT blocks: 0


******************************* END OF REPORT ********************************


[grid@jyrac1 amdu_2017_01_05_16_09_47]$ more DATADG.map
...
N0008 D0002 R00 A00000069 F00000003 I0 E00000241 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00000070 F00000003 I0 E00000244 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00000071 F00000003 I0 E00000248 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00000072 F00000003 I0 E00000249 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00000073 F00000004 I0 E00000012 U00 C00000 S0000 B0000000000  
N0008 D0002 R00 A00000074 F00000004 I0 E00000017 U00 C00000 S0000 B0000000000  
N0008 D0002 R00 A00000075 F00000004 I0 E00000019 U00 C00000 S0000 B0000000000  
N0008 D0002 R00 A00000076 F00000004 I0 E00000022 U00 C00000 S0000 B0000000000  
N0008 D0002 R00 A00000077 F00000001 I0 E00000004 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00000094 F00000257 I1 E00000002 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00000111 F00000258 I1 E00000001 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00000641 F00000259 I1 E00000000 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00001022 F00000260 I1 E00000001 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00001197 F00000261 I1 E00000002 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00001272 F00000262 I1 E00000000 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00001328 F00000264 I1 E00000000 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00001356 F00000265 I1 E00000001 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00001380 F00000266 I1 E00000000 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00001453 F00000270 I1 E00000000 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00001707 F00000012 I0 E00000001 U00 C00256 S0000 B0000000000  
...

The interesting content above is the two columns starting with A and F. For example, A00000094 means the row describes AU 94, and F00000257 means the row relates to ASM file number 257. Back to the goal of finding the SYS data file: ASM metadata file number 6 is the alias directory, which is the starting point of the search. Through the DATADG.map file we can find all AUs of ASM metadata file 6:

[grid@jyrac1 amdu_2017_01_05_16_09_47]$  grep F00000006 DATADG.map
N0009 D0001 R00 A00000036 F00000006 I0 E00000002 U00 C00256 S0000 B0000000000  
N0008 D0002 R00 A00000038 F00000006 I0 E00000000 U00 C00256 S0000 B0000000000  
N0002 D0003 R00 A00000037 F00000006 I0 E00000001 U00 C00256 S0000 B0000000000  
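These map rows can also be decoded mechanically; a sketch that pulls the disk, AU and file number out of the first row above (only those columns are used, the rest are ignored):

```shell
# Decode one DATADG.map row: $2 = disk number (D####), $4 = AU (A########),
# $5 = ASM file number (F########). Sample row is the first grep hit above.
line='N0009 D0001 R00 A00000036 F00000006 I0 E00000002 U00 C00256 S0000 B0000000000'
parsed=$(printf '%s\n' "$line" | awk '{
  au   = substr($4, 2) + 0   # strip the leading "A" and zero padding
  fnum = substr($5, 2) + 0   # strip the leading "F"
  printf "file %d: disk %s AU %d", fnum, $2, au
}')
echo "$parsed"
```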

The search located three AU rows for this metadata file: the alias directory is stored on disk 1 (D0001) at AU 36 (A00000036), disk 2 (D0002) at AU 38 (A00000038) and disk 3 (D0003) at AU 37 (A00000037). From the earlier report.txt we know that disk 1 is '/dev/raw/raw4' and that its AU size is 1MB. Now let's inspect the alias directory with the kfed tool:

[grid@jyrac1 amdu_2017_01_05_16_09_47]$ kfed read /dev/raw/raw4 aun=36 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           11 ; 0x002: KFBTYP_ALIASDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       6 ; 0x008: file=6
kfbh.check:                  2235498606 ; 0x00c: 0x853f006e
kfbh.fcn.base:                     3565 ; 0x010: 0x00000ded
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kffdnd.bnode.incarn:                  1 ; 0x000: A=1 NUMM=0x0
kffdnd.bnode.frlist.number:  4294967295 ; 0x004: 0xffffffff
kffdnd.bnode.frlist.incarn:           0 ; 0x008: A=0 NUMM=0x0
kffdnd.overfl.number:        4294967295 ; 0x00c: 0xffffffff
kffdnd.overfl.incarn:                 0 ; 0x010: A=0 NUMM=0x0
kffdnd.parent.number:                 0 ; 0x014: 0x00000000
kffdnd.parent.incarn:                 1 ; 0x018: A=1 NUMM=0x0
kffdnd.fstblk.number:                 0 ; 0x01c: 0x00000000
kffdnd.fstblk.incarn:                 1 ; 0x020: A=1 NUMM=0x0
kfade[0].entry.incarn:                1 ; 0x024: A=1 NUMM=0x0
kfade[0].entry.hash:         2990280982 ; 0x028: 0xb23c1116
kfade[0].entry.refer.number:          1 ; 0x02c: 0x00000001
kfade[0].entry.refer.incarn:          1 ; 0x030: A=1 NUMM=0x0
kfade[0].name:                    JYRAC ; 0x034: length=5
kfade[0].fnum:               4294967295 ; 0x064: 0xffffffff
kfade[0].finc:               4294967295 ; 0x068: 0xffffffff
kfade[0].flags:                       8 ; 0x06c: U=0 S=0 S=0 U=1 F=0
kfade[0].ub1spare:                    0 ; 0x06d: 0x00
kfade[0].ub2spare:                    0 ; 0x06e: 0x0000
kfade[1].entry.incarn:                1 ; 0x070: A=1 NUMM=0x0
kfade[1].entry.hash:         3585957073 ; 0x074: 0xd5bd5cd1
kfade[1].entry.refer.number:          9 ; 0x078: 0x00000009
kfade[1].entry.refer.incarn:          1 ; 0x07c: A=1 NUMM=0x0
kfade[1].name:               DB_UNKNOWN ; 0x080: length=10
kfade[1].fnum:               4294967295 ; 0x0b0: 0xffffffff
kfade[1].finc:               4294967295 ; 0x0b4: 0xffffffff
kfade[1].flags:                       4 ; 0x0b8: U=0 S=0 S=1 U=0 F=0
kfade[1].ub1spare:                    0 ; 0x0b9: 0x00
kfade[1].ub2spare:                    0 ; 0x0ba: 0x0000
kfade[2].entry.incarn:                3 ; 0x0bc: A=1 NUMM=0x1
kfade[2].entry.hash:         1585230659 ; 0x0c0: 0x5e7cb343
kfade[2].entry.refer.number: 4294967295 ; 0x0c4: 0xffffffff
kfade[2].entry.refer.incarn:          0 ; 0x0c8: A=0 NUMM=0x0
...

In the kfed output, kfbh.type confirms that this is an alias directory file. The next step is to search for the datafiles whose names contain SYS:

[grid@jyrac1 amdu_2017_01_05_16_09_47]$ vi getfilename.sh

for (( i=0; i<256; i++ ))
do
kfed read /dev/raw/raw4 aun=36 blkn=$i | grep -1 SYS
done

[grid@jyrac1 amdu_2017_01_05_16_09_47]$ chmod 777 getfilename.sh 
[grid@jyrac1 amdu_2017_01_05_16_09_47]$ ./getfilename.sh 
kfade[0].entry.refer.incarn:          0 ; 0x030: A=0 NUMM=0x0
kfade[0].name:                   SYSAUX ; 0x034: length=6
kfade[0].fnum:                      258 ; 0x064: 0x00000102
--
kfade[1].entry.refer.incarn:          0 ; 0x07c: A=0 NUMM=0x0
kfade[1].name:                   SYSTEM ; 0x080: length=6
kfade[1].fnum:                      259 ; 0x0b0: 0x00000103

The datafiles whose names contain SYS are SYSAUX and SYSTEM, with ASM file numbers 258 and 259 respectively. The datafiles can now be extracted:

[grid@jyrac1 amdu_2017_01_05_16_09_47]$ amdu -diskstring="/dev/raw/*" -extract DATADG.258 -output SYSAUX.258 -noreport -nodir
AMDU-00204: Disk N0003 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0003: '/dev/raw/raw11'
AMDU-00204: Disk N0009 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0009: '/dev/raw/raw4'
AMDU-00204: Disk N0008 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0008: '/dev/raw/raw3'
[grid@jyrac1 amdu_2017_01_05_16_09_47]$ amdu -diskstring="/dev/raw/*" -extract DATADG.259 -output SYSTEM.259 -noreport -nodir
AMDU-00204: Disk N0003 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0003: '/dev/raw/raw11'
AMDU-00204: Disk N0009 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0009: '/dev/raw/raw4'
AMDU-00204: Disk N0008 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0008: '/dev/raw/raw3'
[grid@jyrac1 amdu_2017_01_05_16_09_47]$ ls -lrt SYS*
-rw-r--r-- 1 grid oinstall 1625300992 Jan  5 16:44 SYSAUX.258
-rw-r--r-- 1 grid oinstall  796925952 Jan  5 16:46 SYSTEM.259

Extracting the remaining datafiles works the same way as extracting the SYSTEM and SYSAUX datafiles. If the control files, the SYSTEM and SYSAUX tablespaces, and the other datafiles can all be extracted, they can be used to open the database; the files could even be "migrated" into another database. Note that what amdu extracts may itself be damaged or corrupt, depending on whether the files were damaged to begin with. For a disk group that cannot be mounted because its metadata is corrupt or lost, the datafiles may still be intact, and in that case amdu can likewise extract them undamaged.

]]>
http://www.jydba.net/index.php/archives/2017/feed 0
Oracle ASM Staleness Directory and Staleness Registry http://www.jydba.net/index.php/archives/2015 http://www.jydba.net/index.php/archives/2015#respond Wed, 04 Jan 2017 08:32:07 +0000 http://www.jydba.net/?p=2015 The Staleness Directory contains metadata that maps slots in the Staleness Registry to particular disks and ASM clients. The Staleness Directory is file number 12 (F12) in the disk group, and it is allocated together with the Staleness Registry when needed. The Staleness Registry is file number 254 in the disk group and tracks the state of AUs while disks are offline. Both structures apply only to disk groups with COMPATIBLE.RDBMS set to 11.1 or higher and NORMAL or HIGH redundancy. The staleness metadata is created only on demand, and it grows as more disks go offline.

When a disk goes offline, each RDBMS instance gets a slot in the staleness registry mapped to that disk. Each bit in the slot maps to one AU on the offline disk. When an RDBMS instance issues a write I/O against the offline disk, it sets the corresponding bit in the staleness registry.

When the disk is brought back online, ASM copies the AUs flagged in the staleness registry bits from the redundant extents. Because only the AUs changed while the disk was offline are updated, bringing a disk online is more efficient than dropping it and adding a new disk.
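The slot/bit bookkeeping described above can be sketched as a toy model (this illustrates the idea only, not the on-disk format):

```python
# Toy model of one staleness registry slot: one bit per AU on the
# offline disk. A write to AU n sets bit n; bringing the disk online
# only needs to re-sync the flagged AUs from the mirror copies.
class StalenessSlot:
    def __init__(self, au_count):
        self.bits = bytearray((au_count + 7) // 8)

    def mark_write(self, au):
        # RDBMS instance wrote to this AU while the disk was offline
        self.bits[au // 8] |= 1 << (au % 8)

    def stale_aus(self):
        # AUs that must be copied from redundant extents on online
        return [i for i in range(len(self.bits) * 8)
                if self.bits[i // 8] & (1 << (i % 8))]

slot = StalenessSlot(au_count=1024)
for au in (5, 42, 999):
    slot.mark_write(au)
print(slot.stale_aus())   # → [5, 42, 999]
```

Only three of the 1024 AUs are flagged, which is why an online operation that replays the bitmap is far cheaper than a full rebalance after drop/add.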

When all disks are online, no staleness directory or staleness registry exists:

SQL> col "disk group" for 999
SQL> col "group#" for 999
SQL> col "disk#" for 999
SQL> col "disk status" for a30
SQL> select g.name "disk group",
  2   g.group_number "group#",
  3   d.disk_number "disk#",
  4   d.name "disk",
  5   d.path,
  6   d.mode_status "disk status",
  7   g.type
  8  from v$asm_disk d, v$asm_diskgroup g
  9  where g.group_number=d.group_number and g.group_number<>0
 10  order by 1, 2, 3;

disk group                   group# disk# disk                   PATH                           disk status                   TYPE
---------------------------- ------ ----- ---------------------- ------------------------------ ------------------------------ ----------
ACFS                              4     0 ACFS_0000              /dev/raw/raw5                  ONLINE                        NORMAL
ACFS                              4     1 ACFS_0001              /dev/raw/raw6                  ONLINE                        NORMAL
ARCHDG                            1     0 ARCHDG_0000            /dev/raw/raw2                  ONLINE                        NORMAL
ARCHDG                            1     1 ARCHDG_0001            /dev/raw/raw9                  ONLINE                        NORMAL
CRSDG                             2     0 CRSDG_0000             /dev/raw/raw1                  ONLINE                        EXTERN
CRSDG                             2     1 CRSDG_0001             /dev/raw/raw8                  ONLINE                        EXTERN
DATADG                            3     0 DATADG_0001            /dev/raw/raw11                 ONLINE                        NORMAL
DATADG                            3     1 DATADG_0003            /dev/raw/raw4                  ONLINE                        NORMAL
DATADG                            3     2 DATADG_0002            /dev/raw/raw3                  ONLINE                        NORMAL
DATADG                            3     3 DATADG_0000            /dev/raw/raw10                 ONLINE                        NORMAL
USD                               5     0 USD_0000               /dev/raw/raw7                  ONLINE                        NORMAL
USD                               5     1 USD_0001               /dev/raw/raw12                 ONLINE                        NORMAL

12 rows selected.


SQL> select  x.number_kffxp "file#",x.group_kffxp "group#",x.disk_kffxp "disk #",d.name "disk name",d.path "disk path",x.xnum_kffxp "virtual extent",pxn_kffxp "physical extent",x.au_kffxp "au"
  2  from x$kffxp x, v$asm_disk_stat d
  3  where x.group_kffxp=d.group_number
  4  and x.disk_kffxp=d.disk_number
  5  and x.group_kffxp=3
  6  and x.number_kffxp in(12,254)
  7  order by 1,2,3;

no rows selected

Staleness metadata is created only when a disk is offline and write I/O is issued against it. In the following example a disk is taken offline manually with the ALTER DISKGROUP OFFLINE DISK command; how or why the disk goes offline makes no difference to the creation of the staleness metadata.

SQL> alter diskgroup datadg offline disk DATADG_0000;

Diskgroup altered.



SQL> select  x.number_kffxp "file#",x.group_kffxp "group#",x.disk_kffxp "disk #",d.name "disk name",d.path "disk path",x.xnum_kffxp "virtual extent",pxn_kffxp "physical extent",x.au_kffxp "au"
  2  from x$kffxp x, v$asm_disk_stat d
  3  where x.group_kffxp=d.group_number
  4  and x.disk_kffxp=d.disk_number
  5  and x.group_kffxp=3
  6  and x.number_kffxp in(12,254)
  7  order by 1,2,3;

no rows selected

As the database keeps writing to the disk group, after a while the staleness directory and staleness registry can be seen in the disk group:

SQL> select  x.number_kffxp "file#",x.group_kffxp "group#",x.disk_kffxp "disk #",d.name "disk name",d.path "disk path",x.xnum_kffxp "virtual extent",pxn_kffxp "physical extent",x.au_kffxp "au"
  2  from x$kffxp x, v$asm_disk_stat d
  3  where x.group_kffxp=d.group_number
  4  and x.disk_kffxp=d.disk_number
  5  and x.group_kffxp=3
  6  and x.number_kffxp in(12,254)
  7  order by 1,2,3;

file# group#     disk # disk name            disk path            virtual extent physical extent         au
----- ------ ---------- -------------------- -------------------- -------------- --------------- ----------
   12      3          1 DATADG_0003          /dev/raw/raw4                     0               2       1707
   12      3          2 DATADG_0002          /dev/raw/raw3                     0               1       1707
   12      3          3 DATADG_0000                                            0               0 4294967294
  254      3          0 DATADG_0001          /dev/raw/raw11                    0               1       1711
  254      3          1 DATADG_0003          /dev/raw/raw4                     0               0       1706
  254      3          1 DATADG_0003          /dev/raw/raw4                     1               5       1708
  254      3          2 DATADG_0002          /dev/raw/raw3                     1               4       1708
  254      3          3 DATADG_0000                                            0               2 4294967294
  254      3          3 DATADG_0000                                            1               3 4294967294

9 rows selected.

The result above shows that the staleness directory (file 12) is stored in AU 1707 of disk 1 (/dev/raw/raw4) and AU 1707 of disk 2 (/dev/raw/raw3), while the staleness registry (file 254) is stored in AU 1711 of disk 0 (/dev/raw/raw11), AUs 1706 and 1708 of disk 1 (/dev/raw/raw4), and AU 1708 of disk 2 (/dev/raw/raw3).

The kfed tool can be used to confirm the AU layout of the staleness directory and staleness registry:

[grid@jyrac1 ~]$ kfed read /dev/raw/raw11 aun=2 blkn=12 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            4 ; 0x002: KFBTYP_FILEDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                      12 ; 0x004: blk=12
kfbh.block.obj:                       1 ; 0x008: file=1
kfbh.check:                  3437528684 ; 0x00c: 0xcce4866c
kfbh.fcn.base:                     7010 ; 0x010: 0x00001b62
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfffdb.node.incarn:                   1 ; 0x000: A=1 NUMM=0x0
kfffdb.node.frlist.number:   4294967295 ; 0x004: 0xffffffff
kfffdb.node.frlist.incarn:            0 ; 0x008: A=0 NUMM=0x0
kfffdb.hibytes:                       0 ; 0x00c: 0x00000000
kfffdb.lobytes:                 1048576 ; 0x010: 0x00100000
kfffdb.xtntcnt:                       3 ; 0x014: 0x00000003
kfffdb.xtnteof:                       3 ; 0x018: 0x00000003
kfffdb.blkSize:                    4096 ; 0x01c: 0x00001000
kfffdb.flags:                         1 ; 0x020: O=1 S=0 S=0 D=0 C=0 I=0 R=0 A=0
kfffdb.fileType:                     15 ; 0x021: 0x0f
kfffdb.dXrs:                         19 ; 0x022: SCHE=0x1 NUMB=0x3
kfffdb.iXrs:                         19 ; 0x023: SCHE=0x1 NUMB=0x3
kfffdb.dXsiz[0]:             4294967295 ; 0x024: 0xffffffff
kfffdb.dXsiz[1]:                      0 ; 0x028: 0x00000000
kfffdb.dXsiz[2]:                      0 ; 0x02c: 0x00000000
kfffdb.iXsiz[0]:             4294967295 ; 0x030: 0xffffffff
kfffdb.iXsiz[1]:                      0 ; 0x034: 0x00000000
kfffdb.iXsiz[2]:                      0 ; 0x038: 0x00000000
kfffdb.xtntblk:                       3 ; 0x03c: 0x0003
kfffdb.break:                        60 ; 0x03e: 0x003c
kfffdb.priZn:                         0 ; 0x040: KFDZN_COLD
kfffdb.secZn:                         0 ; 0x041: KFDZN_COLD
kfffdb.ub2spare:                      0 ; 0x042: 0x0000
kfffdb.alias[0]:             4294967295 ; 0x044: 0xffffffff
kfffdb.alias[1]:             4294967295 ; 0x048: 0xffffffff
kfffdb.strpwdth:                      0 ; 0x04c: 0x00
kfffdb.strpsz:                        0 ; 0x04d: 0x00
kfffdb.usmsz:                         0 ; 0x04e: 0x0000
kfffdb.crets.hi:               33047659 ; 0x050: HOUR=0xb DAYS=0x3 MNTH=0x1 YEAR=0x7e1
kfffdb.crets.lo:              121661440 ; 0x054: USEC=0x0 MSEC=0x1a SECS=0x34 MINS=0x1
kfffdb.modts.hi:               33047659 ; 0x058: HOUR=0xb DAYS=0x3 MNTH=0x1 YEAR=0x7e1
kfffdb.modts.lo:              121661440 ; 0x05c: USEC=0x0 MSEC=0x1a SECS=0x34 MINS=0x1
kfffdb.dasz[0]:                       0 ; 0x060: 0x00
kfffdb.dasz[1]:                       0 ; 0x061: 0x00
kfffdb.dasz[2]:                       0 ; 0x062: 0x00
kfffdb.dasz[3]:                       0 ; 0x063: 0x00
kfffdb.permissn:                      0 ; 0x064: 0x00
kfffdb.ub1spar1:                      0 ; 0x065: 0x00
kfffdb.ub2spar2:                      0 ; 0x066: 0x0000
kfffdb.user.entnum:                   0 ; 0x068: 0x0000
kfffdb.user.entinc:                   0 ; 0x06a: 0x0000
kfffdb.group.entnum:                  0 ; 0x06c: 0x0000
kfffdb.group.entinc:                  0 ; 0x06e: 0x0000
kfffdb.spare[0]:                      0 ; 0x070: 0x00000000
kfffdb.spare[1]:                      0 ; 0x074: 0x00000000
kfffdb.spare[2]:                      0 ; 0x078: 0x00000000
kfffdb.spare[3]:                      0 ; 0x07c: 0x00000000
kfffdb.spare[4]:                      0 ; 0x080: 0x00000000
kfffdb.spare[5]:                      0 ; 0x084: 0x00000000
kfffdb.spare[6]:                      0 ; 0x088: 0x00000000
kfffdb.spare[7]:                      0 ; 0x08c: 0x00000000
kfffdb.spare[8]:                      0 ; 0x090: 0x00000000
kfffdb.spare[9]:                      0 ; 0x094: 0x00000000
kfffdb.spare[10]:                     0 ; 0x098: 0x00000000
kfffdb.spare[11]:                     0 ; 0x09c: 0x00000000
kfffdb.usm:                             ; 0x0a0: length=0
kfffde[0].xptr.au:           4294967294 ; 0x4a0: 0xfffffffe
kfffde[0].xptr.disk:                  3 ; 0x4a4: 0x0003
kfffde[0].xptr.flags:                32 ; 0x4a6: L=0 E=0 D=0 S=1
kfffde[0].xptr.chk:                   8 ; 0x4a7: 0x08
kfffde[1].xptr.au:                 1707 ; 0x4a8: 0x000006ab
kfffde[1].xptr.disk:                  2 ; 0x4ac: 0x0002
kfffde[1].xptr.flags:                 0 ; 0x4ae: L=0 E=0 D=0 S=0
kfffde[1].xptr.chk:                 133 ; 0x4af: 0x85
kfffde[2].xptr.au:                 1707 ; 0x4b0: 0x000006ab
kfffde[2].xptr.disk:                  1 ; 0x4b4: 0x0001
kfffde[2].xptr.flags:                 0 ; 0x4b6: L=0 E=0 D=0 S=0
kfffde[2].xptr.chk:                 134 ; 0x4b7: 0x86
kfffde[3].xptr.au:           4294967295 ; 0x4b8: 0xffffffff
kfffde[3].xptr.disk:              65535 ; 0x4bc: 0xffff
kfffde[3].xptr.flags:                 0 ; 0x4be: L=0 E=0 D=0 S=0
kfffde[3].xptr.chk:                  42 ; 0x4bf: 0x2a

From kfffde[1].xptr.au=1707 with kfffde[1].xptr.disk=2, and kfffde[2].xptr.au=1707 with kfffde[2].xptr.disk=1, the staleness directory (file 12) is confirmed to be stored in AU 1707 of disk 1 (/dev/raw/raw4) and AU 1707 of disk 2 (/dev/raw/raw3).


[grid@jyrac1 ~]$ kfed read /dev/raw/raw11 aun=2 blkn=254 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            4 ; 0x002: KFBTYP_FILEDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                     254 ; 0x004: blk=254
kfbh.block.obj:                       1 ; 0x008: file=1
kfbh.check:                  3989368441 ; 0x00c: 0xedc8ee79
kfbh.fcn.base:                     6753 ; 0x010: 0x00001a61
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfffdb.node.incarn:                   1 ; 0x000: A=1 NUMM=0x0
kfffdb.node.frlist.number:   4294967295 ; 0x004: 0xffffffff
kfffdb.node.frlist.incarn:            0 ; 0x008: A=0 NUMM=0x0
kfffdb.hibytes:                       0 ; 0x00c: 0x00000000
kfffdb.lobytes:                 2097152 ; 0x010: 0x00200000
kfffdb.xtntcnt:                       6 ; 0x014: 0x00000006
kfffdb.xtnteof:                       6 ; 0x018: 0x00000006
kfffdb.blkSize:                    4096 ; 0x01c: 0x00001000
kfffdb.flags:                        17 ; 0x020: O=1 S=0 S=0 D=0 C=1 I=0 R=0 A=0
kfffdb.fileType:                     25 ; 0x021: 0x19
kfffdb.dXrs:                         19 ; 0x022: SCHE=0x1 NUMB=0x3
kfffdb.iXrs:                         19 ; 0x023: SCHE=0x1 NUMB=0x3
kfffdb.dXsiz[0]:             4294967295 ; 0x024: 0xffffffff
kfffdb.dXsiz[1]:                      0 ; 0x028: 0x00000000
kfffdb.dXsiz[2]:                      0 ; 0x02c: 0x00000000
kfffdb.iXsiz[0]:             4294967295 ; 0x030: 0xffffffff
kfffdb.iXsiz[1]:                      0 ; 0x034: 0x00000000
kfffdb.iXsiz[2]:                      0 ; 0x038: 0x00000000
kfffdb.xtntblk:                       6 ; 0x03c: 0x0006
kfffdb.break:                        60 ; 0x03e: 0x003c
kfffdb.priZn:                         0 ; 0x040: KFDZN_COLD
kfffdb.secZn:                         0 ; 0x041: KFDZN_COLD
kfffdb.ub2spare:                      0 ; 0x042: 0x0000
kfffdb.alias[0]:             4294967295 ; 0x044: 0xffffffff
kfffdb.alias[1]:             4294967295 ; 0x048: 0xffffffff
kfffdb.strpwdth:                      8 ; 0x04c: 0x08
kfffdb.strpsz:                       20 ; 0x04d: 0x14
kfffdb.usmsz:                         0 ; 0x04e: 0x0000
kfffdb.crets.hi:               33047659 ; 0x050: HOUR=0xb DAYS=0x3 MNTH=0x1 YEAR=0x7e1
kfffdb.crets.lo:              121410560 ; 0x054: USEC=0x0 MSEC=0x325 SECS=0x33 MINS=0x1
kfffdb.modts.hi:               33047659 ; 0x058: HOUR=0xb DAYS=0x3 MNTH=0x1 YEAR=0x7e1
kfffdb.modts.lo:                      0 ; 0x05c: USEC=0x0 MSEC=0x0 SECS=0x0 MINS=0x0
kfffdb.dasz[0]:                       0 ; 0x060: 0x00
kfffdb.dasz[1]:                       0 ; 0x061: 0x00
kfffdb.dasz[2]:                       0 ; 0x062: 0x00
kfffdb.dasz[3]:                       0 ; 0x063: 0x00
kfffdb.permissn:                      0 ; 0x064: 0x00
kfffdb.ub1spar1:                      0 ; 0x065: 0x00
kfffdb.ub2spar2:                      0 ; 0x066: 0x0000
kfffdb.user.entnum:                   0 ; 0x068: 0x0000
kfffdb.user.entinc:                   0 ; 0x06a: 0x0000
kfffdb.group.entnum:                  0 ; 0x06c: 0x0000
kfffdb.group.entinc:                  0 ; 0x06e: 0x0000
kfffdb.spare[0]:                      0 ; 0x070: 0x00000000
kfffdb.spare[1]:                      0 ; 0x074: 0x00000000
kfffdb.spare[2]:                      0 ; 0x078: 0x00000000
kfffdb.spare[3]:                      0 ; 0x07c: 0x00000000
kfffdb.spare[4]:                      0 ; 0x080: 0x00000000
kfffdb.spare[5]:                      0 ; 0x084: 0x00000000
kfffdb.spare[6]:                      0 ; 0x088: 0x00000000
kfffdb.spare[7]:                      0 ; 0x08c: 0x00000000
kfffdb.spare[8]:                      0 ; 0x090: 0x00000000
kfffdb.spare[9]:                      0 ; 0x094: 0x00000000
kfffdb.spare[10]:                     0 ; 0x098: 0x00000000
kfffdb.spare[11]:                     0 ; 0x09c: 0x00000000
kfffdb.usm:                             ; 0x0a0: length=0
kfffde[0].xptr.au:                 1706 ; 0x4a0: 0x000006aa
kfffde[0].xptr.disk:                  1 ; 0x4a4: 0x0001
kfffde[0].xptr.flags:                 0 ; 0x4a6: L=0 E=0 D=0 S=0
kfffde[0].xptr.chk:                 135 ; 0x4a7: 0x87
kfffde[1].xptr.au:                 1711 ; 0x4a8: 0x000006af
kfffde[1].xptr.disk:                  0 ; 0x4ac: 0x0000
kfffde[1].xptr.flags:                 0 ; 0x4ae: L=0 E=0 D=0 S=0
kfffde[1].xptr.chk:                 131 ; 0x4af: 0x83
kfffde[2].xptr.au:           4294967294 ; 0x4b0: 0xfffffffe
kfffde[2].xptr.disk:                  3 ; 0x4b4: 0x0003
kfffde[2].xptr.flags:                32 ; 0x4b6: L=0 E=0 D=0 S=1
kfffde[2].xptr.chk:                   8 ; 0x4b7: 0x08
kfffde[3].xptr.au:           4294967294 ; 0x4b8: 0xfffffffe
kfffde[3].xptr.disk:                  3 ; 0x4bc: 0x0003
kfffde[3].xptr.flags:                32 ; 0x4be: L=0 E=0 D=0 S=1
kfffde[3].xptr.chk:                   8 ; 0x4bf: 0x08
kfffde[4].xptr.au:                 1708 ; 0x4c0: 0x000006ac
kfffde[4].xptr.disk:                  2 ; 0x4c4: 0x0002
kfffde[4].xptr.flags:                 0 ; 0x4c6: L=0 E=0 D=0 S=0
kfffde[4].xptr.chk:                 130 ; 0x4c7: 0x82
kfffde[5].xptr.au:                 1708 ; 0x4c8: 0x000006ac
kfffde[5].xptr.disk:                  1 ; 0x4cc: 0x0001
kfffde[5].xptr.flags:                 0 ; 0x4ce: L=0 E=0 D=0 S=0
kfffde[5].xptr.chk:                 129 ; 0x4cf: 0x81
kfffde[6].xptr.au:           4294967295 ; 0x4d0: 0xffffffff
kfffde[6].xptr.disk:              65535 ; 0x4d4: 0xffff
kfffde[6].xptr.flags:                 0 ; 0x4d6: L=0 E=0 D=0 S=0
kfffde[6].xptr.chk:                  42 ; 0x4d7: 0x2a

From kfffde[0].xptr.au=1706/disk=1, kfffde[1].xptr.au=1711/disk=0, kfffde[4].xptr.au=1708/disk=2, and kfffde[5].xptr.au=1708/disk=1, the staleness registry (file 254) is confirmed to be stored in AU 1711 of disk 0 (/dev/raw/raw11), AUs 1706 and 1708 of disk 1 (/dev/raw/raw4), and AU 1708 of disk 2 (/dev/raw/raw3).

The staleness blocks themselves do not carry much of interest; kfed cannot even identify the block type (*** Unknown Enum ***), and apart from some bit flags there is little useful information:

[grid@jyrac1 ~]$ kfed read /dev/raw/raw4 aun=1707  | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           21 ; 0x002: *** Unknown Enum ***
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                      12 ; 0x008: file=12
kfbh.check:                   981317996 ; 0x00c: 0x3a7db96c
kfbh.fcn.base:                     7015 ; 0x010: 0x00001b67
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kffdnd.bnode.incarn:                  1 ; 0x000: A=1 NUMM=0x0
kffdnd.bnode.frlist.number:  4294967295 ; 0x004: 0xffffffff
kffdnd.bnode.frlist.incarn:           0 ; 0x008: A=0 NUMM=0x0
kffdnd.overfl.number:                 1 ; 0x00c: 0x00000001
kffdnd.overfl.incarn:                 1 ; 0x010: A=1 NUMM=0x0
kffdnd.parent.number:                 0 ; 0x014: 0x00000000
kffdnd.parent.incarn:                 1 ; 0x018: A=1 NUMM=0x0
kffdnd.fstblk.number:                 0 ; 0x01c: 0x00000000
kffdnd.fstblk.incarn:                 1 ; 0x020: A=1 NUMM=0x0
kfdsde.entry.incarn:                  1 ; 0x024: A=1 NUMM=0x0
kfdsde.entry.hash:                    0 ; 0x028: 0x00000000
kfdsde.entry.refer.number:   4294967295 ; 0x02c: 0xffffffff
kfdsde.entry.refer.incarn:            0 ; 0x030: A=0 NUMM=0x0
kfdsde.cid:          jyrac2:jyrac:+ASM2 ; 0x034: length=18
kfdsde.indlen:                        1 ; 0x074: 0x0001
kfdsde.flags:                         0 ; 0x076: 0x0000
kfdsde.spare1:                        0 ; 0x078: 0x00000000
kfdsde.spare2:                        0 ; 0x07c: 0x00000000
kfdsde.indices[0]:                    0 ; 0x080: 0x00000000
kfdsde.indices[1]:                    0 ; 0x084: 0x00000000
kfdsde.indices[2]:                    0 ; 0x088: 0x00000000
kfdsde.indices[3]:                    0 ; 0x08c: 0x00000000
kfdsde.indices[4]:                    0 ; 0x090: 0x00000000
kfdsde.indices[5]:                    0 ; 0x094: 0x00000000
kfdsde.indices[6]:                    0 ; 0x098: 0x00000000
kfdsde.indices[7]:                    0 ; 0x09c: 0x00000000
kfdsde.indices[8]:                    0 ; 0x0a0: 0x00000000
kfdsde.indices[9]:                    0 ; 0x0a4: 0x00000000
kfdsde.indices[10]:                   0 ; 0x0a8: 0x00000000
kfdsde.indices[11]:                   0 ; 0x0ac: 0x00000000
kfdsde.indices[12]:                   0 ; 0x0b0: 0x00000000
kfdsde.indices[13]:                   0 ; 0x0b4: 0x00000000
kfdsde.indices[14]:                   0 ; 0x0b8: 0x00000000

[grid@jyrac1 ~]$ kfed read /dev/raw/raw4 aun=1708  | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           20 ; 0x002: *** Unknown Enum ***
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                     256 ; 0x004: blk=256
kfbh.block.obj:                     254 ; 0x008: file=254
kfbh.check:                  3890924893 ; 0x00c: 0xe7eacd5d
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdsHdrB.clientId:            996679687 ; 0x000: 0x3b682007
kfdsHdrB.incarn:                      0 ; 0x004: 0x00000000
kfdsHdrB.dskNum:                      3 ; 0x008: 0x0003
kfdsHdrB.ub2spare:                    0 ; 0x00a: 0x0000
ub1[0]:                               0 ; 0x00c: 0x00
ub1[1]:                               0 ; 0x00d: 0x00
ub1[2]:                               0 ; 0x00e: 0x00
ub1[3]:                               0 ; 0x00f: 0x00
ub1[4]:                               0 ; 0x010: 0x00
ub1[5]:                               0 ; 0x011: 0x00
ub1[6]:                               0 ; 0x012: 0x00
ub1[7]:                               0 ; 0x013: 0x00
ub1[8]:                               0 ; 0x014: 0x00
ub1[9]:                              32 ; 0x015: 0x20
ub1[10]:                              0 ; 0x016: 0x00
ub1[11]:                            128 ; 0x017: 0x80
ub1[12]:                              0 ; 0x018: 0x00
ub1[13]:                             56 ; 0x019: 0x38
ub1[14]:                            120 ; 0x01a: 0x78
ub1[15]:                              1 ; 0x01b: 0x01
ub1[16]:                             32 ; 0x01c: 0x20
ub1[17]:                              0 ; 0x01d: 0x00
ub1[18]:                              0 ; 0x01e: 0x00
ub1[19]:                              0 ; 0x01f: 0x00
ub1[20]:                              0 ; 0x020: 0x00
ub1[21]:                              0 ; 0x021: 0x00
ub1[22]:                              0 ; 0x022: 0x00
ub1[23]:                              0 ; 0x023: 0x00
ub1[24]:                              0 ; 0x024: 0x00
ub1[25]:                              0 ; 0x025: 0x00
ub1[26]:                              0 ; 0x026: 0x00
ub1[27]:                              0 ; 0x027: 0x00
ub1[28]:                              0 ; 0x028: 0x00

Summary:
The staleness directory and staleness registry are metadata structures that support the fast mirror resync feature introduced in Oracle ASM 11g. The staleness directory is ASM file number 12 and contains the metadata that maps slots in the staleness registry to specific disks and clients. The staleness registry tracks the state of AUs while disks are offline. These features apply only to NORMAL and HIGH redundancy disk groups.

]]>
http://www.jydba.net/index.php/archives/2015/feed 0
Oracle ASM User Directory and Group Directory http://www.jydba.net/index.php/archives/2013 http://www.jydba.net/index.php/archives/2013#respond Mon, 02 Jan 2017 10:38:16 +0000 http://www.jydba.net/?p=2013 ASM metadata file number 10 is the ASM User Directory and file number 11 is the Group Directory. They are the metadata structures that support the ASM File Access Control feature, which restricts the file access of specific ASM clients (typically database instances) based on the effective user ID of the database home at the operating-system level. This information can be queried through the V$ASM_USER, V$ASM_USERGROUP, and V$ASM_USERGROUP_MEMBER views.

ASM Users and Groups
To use ASM File Access Control, the operating-system users and groups must be set up appropriately. Users and groups are added to an ASM disk group with the ALTER DISKGROUP ADD USERGROUP command. The following statements add a user group to disk group 5 (USD).

These are the operating-system users involved:

[root@jyrac1 bin]# id grid
uid=500(grid) gid=505(oinstall) groups=505(oinstall),500(asmadmin),501(asmdba),502(asmoper),503(dba)
[root@jyrac1 bin]# id oracle
uid=501(oracle) gid=505(oinstall) groups=505(oinstall),501(asmdba),503(dba),504(oper)

Add the users and group to the disk group:

SQL> alter diskgroup usd add usergroup 'test_usergroup'  with member 'grid','oracle';
alter diskgroup usd add usergroup 'test_usergroup'  with member 'grid','oracle'
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15304: operation requires ACCESS_CONTROL.ENABLED attribute to be TRUE

The error indicates that the ACCESS_CONTROL.ENABLED attribute of disk group 5 (USD) must be enabled before users and groups can be configured for it:


[grid@jyrac1 ~]$ asmcmd setattr -G USD access_control.enabled 'TRUE';
[grid@jyrac1 ~]$ asmcmd lsattr -lm access_control.enabled
Group_Name  Name                    Value  RO  Sys  
ACFS        access_control.enabled  FALSE  N   Y    
ARCHDG      access_control.enabled  FALSE  N   Y    
CRSDG       access_control.enabled  FALSE  N   Y    
DATADG      access_control.enabled  FALSE  N   Y    
USD         access_control.enabled  TRUE   N   Y    


SQL> alter diskgroup usd add usergroup 'test_usergroup'  with member 'grid','oracle';

Diskgroup altered.

Run the following query to see the users and groups configured in the disk group:

SQL> col "disk group#" for 999
SQL> col "os id" for a10
SQL> col "os user" for a10
SQL> col "asm user#" for 999
SQL> col "asm group#" for 999
SQL> col "asm user group" for a40
SQL> select u.group_number "disk group#",
  2  u.os_id "os id",
  3  u.os_name "os user",
  4  u.user_number "asm user#",
  5  g.usergroup_number "asm group#",
  6  g.name "asm user group"
  7  from v$asm_user u, v$asm_usergroup g, v$asm_usergroup_member m
  8  where u.group_number=g.group_number and u.group_number=m.group_number
  9  and u.user_number=m.member_number
 10  and g.usergroup_number=m.usergroup_number
 11  order by 1, 2;  

disk group# os id      os user    asm user# asm group# asm user group
----------- ---------- ---------- --------- ---------- ----------------------------------------
          5 500        grid               1          1 test_usergroup
          5 501        oracle             2          1 test_usergroup

Locate the AUs holding the ASM User and Group Directories of disk group 5:

SQL> select  x.number_kffxp "file#",x.group_kffxp "group#",x.disk_kffxp "disk #",d.name "disk name",d.path "disk path",x.xnum_kffxp "virtual extent",pxn_kffxp "physical extent",x.au_kffxp "au"
  2  from x$kffxp x, v$asm_disk_stat d
  3  where x.group_kffxp=d.group_number
  4  and x.disk_kffxp=d.disk_number
  5  and x.group_kffxp=5
  6  and x.number_kffxp in(10,11)
  7  order by 1,2,3;

file#     group#     disk # disk name                      disk path                                virtual extent physical extent         au
----- ---------- ---------- ------------------------------ ---------------------------------------- -------------- --------------- ----------
   10          5          0 USD_0000                       /dev/raw/raw7                                         0               1        100
   10          5          1 USD_0001                       /dev/raw/raw12                                        0               0        164
   11          5          0 USD_0000                       /dev/raw/raw7                                         0               1        101
   11          5          1 USD_0001                       /dev/raw/raw12                                        0               0        165

The result above shows that file 10 has two mirrored copies, stored in AU 100 of disk 0 (/dev/raw/raw7) and AU 164 of disk 1 (/dev/raw/raw12) of disk group 5, and that file 11 likewise has two mirrored copies, stored in AU 101 of disk 0 (/dev/raw/raw7) and AU 165 of disk 1 (/dev/raw/raw12).

The kfed tool can be used to confirm the AU layout of the User and Group Directories of disk group 5.

ASM file 1 always starts at AU 2 of disk 0; remember this location. It is the starting point for locating any file in ASM, acting somewhat like a boot sector that brings the OS up after power-on. File 1 occupies at least two AUs. Each file takes one metadata block in file 1, holding that file's extent map. Each metadata block is 4KB and an AU is 1MB, so one AU holds the entries for 256 files. In AU 2 of disk 0, the first metadata block is reserved for the system; blocks 1 through 255 correspond to files 1 through 255 — that is, all of the metadata files — so AU 2 of disk 0 stores the extent maps of every ASM metadata file. In the second AU of file 1, the first block describes file 256, the second block file 257, and so on. Every time Oracle reads data from ASM, it first reads file 1 to find where the target file sits on disk, then reads that file's data. Since the User Directory is file 10 and the Group Directory is file 11, their entries can be read from blocks 10 and 11 of AU 2 of disk 0 (/dev/raw/raw7) of disk group 5.
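The block arithmetic above can be sketched in a few lines (a sketch of the layout as described in the text, assuming the stated 1MB AU and 4KB metadata block sizes):

```python
# File Directory (ASM file 1) layout from the text: one 4KB metadata
# block per file, 256 blocks per 1MB AU, so file N's entry lives in
# extent N // 256 of file 1, at block N % 256 within that AU.
AU_SIZE, BLOCK_SIZE = 1024 * 1024, 4096
BLOCKS_PER_AU = AU_SIZE // BLOCK_SIZE      # 256

def filedir_location(file_number):
    """Return (extent of file 1, block within that AU) for a file's entry."""
    return file_number // BLOCKS_PER_AU, file_number % BLOCKS_PER_AU

print(filedir_location(10))    # → (0, 10): User Directory entry, read below
print(filedir_location(258))   # → (1, 2): a file numbered above 255
```

Extent 0 of file 1 is AU 2 of disk 0, which is why `kfed read /dev/raw/raw7 aun=2 blkn=10` below lands exactly on file 10's directory entry.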

[grid@jyrac1 ~]$ kfed read /dev/raw/raw7 aun=2 blkn=10 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            4 ; 0x002: KFBTYP_FILEDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                      10 ; 0x004: blk=10
kfbh.block.obj:                       1 ; 0x008: file=1
kfbh.check:                   751075078 ; 0x00c: 0x2cc47f06
kfbh.fcn.base:                     7473 ; 0x010: 0x00001d31
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfffdb.node.incarn:                   1 ; 0x000: A=1 NUMM=0x0
kfffdb.node.frlist.number:   4294967295 ; 0x004: 0xffffffff
kfffdb.node.frlist.incarn:            0 ; 0x008: A=0 NUMM=0x0
kfffdb.hibytes:                       0 ; 0x00c: 0x00000000
kfffdb.lobytes:                 1048576 ; 0x010: 0x00100000
kfffdb.xtntcnt:                       3 ; 0x014: 0x00000003
kfffdb.xtnteof:                       3 ; 0x018: 0x00000003
kfffdb.blkSize:                    4096 ; 0x01c: 0x00001000
kfffdb.flags:                         1 ; 0x020: O=1 S=0 S=0 D=0 C=0 I=0 R=0 A=0
kfffdb.fileType:                     15 ; 0x021: 0x0f
kfffdb.dXrs:                         19 ; 0x022: SCHE=0x1 NUMB=0x3
kfffdb.iXrs:                         19 ; 0x023: SCHE=0x1 NUMB=0x3
kfffdb.dXsiz[0]:             4294967295 ; 0x024: 0xffffffff
kfffdb.dXsiz[1]:                      0 ; 0x028: 0x00000000
kfffdb.dXsiz[2]:                      0 ; 0x02c: 0x00000000
kfffdb.iXsiz[0]:             4294967295 ; 0x030: 0xffffffff
kfffdb.iXsiz[1]:                      0 ; 0x034: 0x00000000
kfffdb.iXsiz[2]:                      0 ; 0x038: 0x00000000
kfffdb.xtntblk:                       3 ; 0x03c: 0x0003
kfffdb.break:                        60 ; 0x03e: 0x003c
kfffdb.priZn:                         0 ; 0x040: KFDZN_COLD
kfffdb.secZn:                         0 ; 0x041: KFDZN_COLD
kfffdb.ub2spare:                      0 ; 0x042: 0x0000
kfffdb.alias[0]:             4294967295 ; 0x044: 0xffffffff
kfffdb.alias[1]:             4294967295 ; 0x048: 0xffffffff
kfffdb.strpwdth:                      0 ; 0x04c: 0x00
kfffdb.strpsz:                        0 ; 0x04d: 0x00
kfffdb.usmsz:                         0 ; 0x04e: 0x0000
kfffdb.crets.hi:               33043408 ; 0x050: HOUR=0x10 DAYS=0x1e MNTH=0xc YEAR=0x7e0
kfffdb.crets.lo:             2908193792 ; 0x054: USEC=0x0 MSEC=0x1e1 SECS=0x15 MINS=0x2b
kfffdb.modts.hi:               33043408 ; 0x058: HOUR=0x10 DAYS=0x1e MNTH=0xc YEAR=0x7e0
kfffdb.modts.lo:             2908193792 ; 0x05c: USEC=0x0 MSEC=0x1e1 SECS=0x15 MINS=0x2b
kfffdb.dasz[0]:                       0 ; 0x060: 0x00
kfffdb.dasz[1]:                       0 ; 0x061: 0x00
kfffdb.dasz[2]:                       0 ; 0x062: 0x00
kfffdb.dasz[3]:                       0 ; 0x063: 0x00
kfffdb.permissn:                      0 ; 0x064: 0x00
kfffdb.ub1spar1:                      0 ; 0x065: 0x00
kfffdb.ub2spar2:                      0 ; 0x066: 0x0000
kfffdb.user.entnum:                   0 ; 0x068: 0x0000
kfffdb.user.entinc:                   0 ; 0x06a: 0x0000
kfffdb.group.entnum:                  0 ; 0x06c: 0x0000
kfffdb.group.entinc:                  0 ; 0x06e: 0x0000
kfffdb.spare[0]:                      0 ; 0x070: 0x00000000
kfffdb.spare[1]:                      0 ; 0x074: 0x00000000
kfffdb.spare[2]:                      0 ; 0x078: 0x00000000
kfffdb.spare[3]:                      0 ; 0x07c: 0x00000000
kfffdb.spare[4]:                      0 ; 0x080: 0x00000000
kfffdb.spare[5]:                      0 ; 0x084: 0x00000000
kfffdb.spare[6]:                      0 ; 0x088: 0x00000000
kfffdb.spare[7]:                      0 ; 0x08c: 0x00000000
kfffdb.spare[8]:                      0 ; 0x090: 0x00000000
kfffdb.spare[9]:                      0 ; 0x094: 0x00000000
kfffdb.spare[10]:                     0 ; 0x098: 0x00000000
kfffdb.spare[11]:                     0 ; 0x09c: 0x00000000
kfffdb.usm:                             ; 0x0a0: length=0
kfffde[0].xptr.au:                  164 ; 0x4a0: 0x000000a4
kfffde[0].xptr.disk:                  1 ; 0x4a4: 0x0001
kfffde[0].xptr.flags:                 0 ; 0x4a6: L=0 E=0 D=0 S=0
kfffde[0].xptr.chk:                 143 ; 0x4a7: 0x8f
kfffde[1].xptr.au:                  100 ; 0x4a8: 0x00000064
kfffde[1].xptr.disk:                  0 ; 0x4ac: 0x0000
kfffde[1].xptr.flags:                 0 ; 0x4ae: L=0 E=0 D=0 S=0
kfffde[1].xptr.chk:                  78 ; 0x4af: 0x4e
kfffde[2].xptr.au:           4294967294 ; 0x4b0: 0xfffffffe
kfffde[2].xptr.disk:              65534 ; 0x4b4: 0xfffe
kfffde[2].xptr.flags:                 0 ; 0x4b6: L=0 E=0 D=0 S=0
kfffde[2].xptr.chk:                  42 ; 0x4b7: 0x2a

From kfffde[0].xptr.au=164, kfffde[0].xptr.disk=1 and kfffde[1].xptr.au=100, kfffde[1].xptr.disk=0 above, we can see that file 10 has two mirror copies, stored in AU 100 on disk 0 (/dev/raw/raw7) and AU 164 on disk 1 (/dev/raw/raw12) of disk group 5, which matches the query results exactly.
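As an aside, the kfffde[n].xptr.chk values in these dumps follow a simple pattern: the XOR of the little-endian bytes of xptr.au and xptr.disk, XORed with the constant 0x2a. This is an observation derived from the dumps themselves, not documented Oracle behavior; a quick sketch:

```shell
#!/bin/sh
# Reproduce kfffde[n].xptr.chk from xptr.au and xptr.disk.
# Observed pattern (an assumption, not documented): XOR all bytes
# of the 4-byte au and 2-byte disk fields, then XOR with 0x2a.
xptr_chk() {
  au=$1; disk=$2
  c=$(( (au & 0xff) ^ ((au >> 8) & 0xff) ^ ((au >> 16) & 0xff) ^ ((au >> 24) & 0xff) ))
  c=$(( c ^ (disk & 0xff) ^ ((disk >> 8) & 0xff) ^ 0x2a ))
  printf '0x%02x\n' "$c"
}

xptr_chk 164 1            # extent at disk 1, AU 164 -> 0x8f
xptr_chk 100 0            # extent at disk 0, AU 100 -> 0x4e
xptr_chk 4294967294 65534 # unused extent marker    -> 0x2a
```

The computed values match the chk bytes of every extent pointer shown in the dumps.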

[grid@jyrac1 ~]$ kfed read /dev/raw/raw7 aun=2 blkn=11 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            4 ; 0x002: KFBTYP_FILEDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                      11 ; 0x004: blk=11
kfbh.block.obj:                       1 ; 0x008: file=1
kfbh.check:                   751074319 ; 0x00c: 0x2cc47c0f
kfbh.fcn.base:                     7737 ; 0x010: 0x00001e39
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfffdb.node.incarn:                   1 ; 0x000: A=1 NUMM=0x0
kfffdb.node.frlist.number:   4294967295 ; 0x004: 0xffffffff
kfffdb.node.frlist.incarn:            0 ; 0x008: A=0 NUMM=0x0
kfffdb.hibytes:                       0 ; 0x00c: 0x00000000
kfffdb.lobytes:                 1048576 ; 0x010: 0x00100000
kfffdb.xtntcnt:                       3 ; 0x014: 0x00000003
kfffdb.xtnteof:                       3 ; 0x018: 0x00000003
kfffdb.blkSize:                    4096 ; 0x01c: 0x00001000
kfffdb.flags:                         1 ; 0x020: O=1 S=0 S=0 D=0 C=0 I=0 R=0 A=0
kfffdb.fileType:                     15 ; 0x021: 0x0f
kfffdb.dXrs:                         19 ; 0x022: SCHE=0x1 NUMB=0x3
kfffdb.iXrs:                         19 ; 0x023: SCHE=0x1 NUMB=0x3
kfffdb.dXsiz[0]:             4294967295 ; 0x024: 0xffffffff
kfffdb.dXsiz[1]:                      0 ; 0x028: 0x00000000
kfffdb.dXsiz[2]:                      0 ; 0x02c: 0x00000000
kfffdb.iXsiz[0]:             4294967295 ; 0x030: 0xffffffff
kfffdb.iXsiz[1]:                      0 ; 0x034: 0x00000000
kfffdb.iXsiz[2]:                      0 ; 0x038: 0x00000000
kfffdb.xtntblk:                       3 ; 0x03c: 0x0003
kfffdb.break:                        60 ; 0x03e: 0x003c
kfffdb.priZn:                         0 ; 0x040: KFDZN_COLD
kfffdb.secZn:                         0 ; 0x041: KFDZN_COLD
kfffdb.ub2spare:                      0 ; 0x042: 0x0000
kfffdb.alias[0]:             4294967295 ; 0x044: 0xffffffff
kfffdb.alias[1]:             4294967295 ; 0x048: 0xffffffff
kfffdb.strpwdth:                      0 ; 0x04c: 0x00
kfffdb.strpsz:                        0 ; 0x04d: 0x00
kfffdb.usmsz:                         0 ; 0x04e: 0x0000
kfffdb.crets.hi:               33043408 ; 0x050: HOUR=0x10 DAYS=0x1e MNTH=0xc YEAR=0x7e0
kfffdb.crets.lo:             2908340224 ; 0x054: USEC=0x0 MSEC=0x270 SECS=0x15 MINS=0x2b
kfffdb.modts.hi:               33043408 ; 0x058: HOUR=0x10 DAYS=0x1e MNTH=0xc YEAR=0x7e0
kfffdb.modts.lo:             2908340224 ; 0x05c: USEC=0x0 MSEC=0x270 SECS=0x15 MINS=0x2b
kfffdb.dasz[0]:                       0 ; 0x060: 0x00
kfffdb.dasz[1]:                       0 ; 0x061: 0x00
kfffdb.dasz[2]:                       0 ; 0x062: 0x00
kfffdb.dasz[3]:                       0 ; 0x063: 0x00
kfffdb.permissn:                      0 ; 0x064: 0x00
kfffdb.ub1spar1:                      0 ; 0x065: 0x00
kfffdb.ub2spar2:                      0 ; 0x066: 0x0000
kfffdb.user.entnum:                   0 ; 0x068: 0x0000
kfffdb.user.entinc:                   0 ; 0x06a: 0x0000
kfffdb.group.entnum:                  0 ; 0x06c: 0x0000
kfffdb.group.entinc:                  0 ; 0x06e: 0x0000
kfffdb.spare[0]:                      0 ; 0x070: 0x00000000
kfffdb.spare[1]:                      0 ; 0x074: 0x00000000
kfffdb.spare[2]:                      0 ; 0x078: 0x00000000
kfffdb.spare[3]:                      0 ; 0x07c: 0x00000000
kfffdb.spare[4]:                      0 ; 0x080: 0x00000000
kfffdb.spare[5]:                      0 ; 0x084: 0x00000000
kfffdb.spare[6]:                      0 ; 0x088: 0x00000000
kfffdb.spare[7]:                      0 ; 0x08c: 0x00000000
kfffdb.spare[8]:                      0 ; 0x090: 0x00000000
kfffdb.spare[9]:                      0 ; 0x094: 0x00000000
kfffdb.spare[10]:                     0 ; 0x098: 0x00000000
kfffdb.spare[11]:                     0 ; 0x09c: 0x00000000
kfffdb.usm:                             ; 0x0a0: length=0
kfffde[0].xptr.au:                  165 ; 0x4a0: 0x000000a5
kfffde[0].xptr.disk:                  1 ; 0x4a4: 0x0001
kfffde[0].xptr.flags:                 0 ; 0x4a6: L=0 E=0 D=0 S=0
kfffde[0].xptr.chk:                 142 ; 0x4a7: 0x8e
kfffde[1].xptr.au:                  101 ; 0x4a8: 0x00000065
kfffde[1].xptr.disk:                  0 ; 0x4ac: 0x0000
kfffde[1].xptr.flags:                 0 ; 0x4ae: L=0 E=0 D=0 S=0
kfffde[1].xptr.chk:                  79 ; 0x4af: 0x4f
kfffde[2].xptr.au:           4294967294 ; 0x4b0: 0xfffffffe
kfffde[2].xptr.disk:              65534 ; 0x4b4: 0xfffe
kfffde[2].xptr.flags:                 0 ; 0x4b6: L=0 E=0 D=0 S=0
kfffde[2].xptr.chk:                  42 ; 0x4b7: 0x2a

From kfffde[0].xptr.au=165, kfffde[0].xptr.disk=1 and kfffde[1].xptr.au=101, kfffde[1].xptr.disk=0 above, we can see that file 11 has two mirror copies, stored in AU 101 on disk 0 (/dev/raw/raw7) and AU 165 on disk 1 (/dev/raw/raw12) of disk group 5, which matches the query results exactly.
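The creation and modification timestamps in these dumps (kfffdb.crets.hi/lo, kfffdb.modts.hi/lo) are packed bit fields. The bit layout below is inferred from kfed's own annotations (YEAR/MNTH/DAYS/HOUR in the high word, MINS/SECS/MSEC/USEC in the low word) and should be treated as an assumption; a hypothetical decoder:

```shell
#!/bin/sh
# Decode the packed kfffdb.crets/modts timestamp halves.
# Layout inferred from the kfed annotations above (an assumption):
#   hi = YEAR<<14 | MNTH<<10 | DAYS<<5 | HOUR
#   lo = MINS<<26 | SECS<<20 | MSEC<<10 | USEC
decode_ts() {
  hi=$1; lo=$2
  printf '%04d-%02d-%02d %02d:%02d:%02d.%03d\n' \
    $(( hi >> 14 )) $(( (hi >> 10) & 0xf )) $(( (hi >> 5) & 0x1f )) \
    $(( hi & 0x1f )) $(( lo >> 26 )) $(( (lo >> 20) & 0x3f )) \
    $(( (lo >> 10) & 0x3ff ))
}

decode_ts 33043408 2908193792   # -> 2016-12-30 16:43:21.481
```

Applied to the crets values of file 10 above, this yields the directory's creation time down to the millisecond.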

In the user directory metadata, each user has a corresponding block, and the block number matches the user number (the user_number column of v$asm_user). We have two users, numbered 1 and 2, so they sit in blocks 1 and 2 respectively. Let's verify this.

[grid@jyrac1 ~]$ kfed read /dev/raw/raw7 aun=100 blkn=1 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           24 ; 0x002: KFBTYP_USERDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       1 ; 0x004: blk=1
kfbh.block.obj:                      10 ; 0x008: file=10
kfbh.check:                  4275524483 ; 0x00c: 0xfed75383
kfbh.fcn.base:                     7745 ; 0x010: 0x00001e41
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kffdnd.bnode.incarn:                  1 ; 0x000: A=1 NUMM=0x0
kffdnd.bnode.frlist.number:  4294967295 ; 0x004: 0xffffffff
kffdnd.bnode.frlist.incarn:           0 ; 0x008: A=0 NUMM=0x0
kffdnd.overfl.number:                 2 ; 0x00c: 0x00000002
kffdnd.overfl.incarn:                 1 ; 0x010: A=1 NUMM=0x0
kffdnd.parent.number:        4294967295 ; 0x014: 0xffffffff
kffdnd.parent.incarn:                 0 ; 0x018: A=0 NUMM=0x0
kffdnd.fstblk.number:                 0 ; 0x01c: 0x00000000
kffdnd.fstblk.incarn:                 1 ; 0x020: A=1 NUMM=0x0
kfzude.entry.incarn:                  1 ; 0x024: A=1 NUMM=0x0
kfzude.entry.hash:                    0 ; 0x028: 0x00000000
kfzude.entry.refer.number:   4294967295 ; 0x02c: 0xffffffff
kfzude.entry.refer.incarn:            0 ; 0x030: A=0 NUMM=0x0
kfzude.flags:                         0 ; 0x034: 0x00000000
kfzude.user:                        500 ; 0x038: length=3
...

Block 1 corresponds to OS user 500, which matches the v$asm_user query results shown earlier. Next, let's look at the other blocks.

[grid@jyrac1 ~]$ vi getuser.sh 
let b=1
while (($b <= 2))
do
kfed read /dev/raw/raw7 aun=100 blkn=$b | grep kfzude.user
let b=b+1
done
[grid@jyrac1 ~]$ chmod 777 getuser.sh 
[grid@jyrac1 ~]$ ./getuser.sh 
kfzude.user:                        500 ; 0x038: length=3
kfzude.user:                        501 ; 0x038: length=3

As expected, the output shows the IDs of the two operating system users recorded in the ASM user directory.

The group directory likewise has one block per entry, with the block number matching the ASM group number. Let's continue and verify.

[grid@jyrac1 ~]$ kfed read /dev/raw/raw7 aun=101 blkn=1 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           25 ; 0x002: KFBTYP_GROUPDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       1 ; 0x004: blk=1
kfbh.block.obj:                      11 ; 0x008: file=11
kfbh.check:                  2137693031 ; 0x00c: 0x7f6a9b67
kfbh.fcn.base:                     7747 ; 0x010: 0x00001e43
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kffdnd.bnode.incarn:                  1 ; 0x000: A=1 NUMM=0x0
kffdnd.bnode.frlist.number:  4294967295 ; 0x004: 0xffffffff
kffdnd.bnode.frlist.incarn:           0 ; 0x008: A=0 NUMM=0x0
kffdnd.overfl.number:        4294967295 ; 0x00c: 0xffffffff
kffdnd.overfl.incarn:                 0 ; 0x010: A=0 NUMM=0x0
kffdnd.parent.number:        4294967295 ; 0x014: 0xffffffff
kffdnd.parent.incarn:                 0 ; 0x018: A=0 NUMM=0x0
kffdnd.fstblk.number:                 0 ; 0x01c: 0x00000000
kffdnd.fstblk.incarn:                 1 ; 0x020: A=1 NUMM=0x0
kfzgde.entry.incarn:                  1 ; 0x024: A=1 NUMM=0x0
kfzgde.entry.hash:                    0 ; 0x028: 0x00000000
kfzgde.entry.refer.number:   4294967295 ; 0x02c: 0xffffffff
kfzgde.entry.refer.incarn:            0 ; 0x030: A=0 NUMM=0x0
kfzgde.flags:                         0 ; 0x034: 0x00000000
kfzgde.owner.entnum:                  1 ; 0x038: 0x0001
kfzgde.owner.entinc:                  1 ; 0x03a: 0x0001
kfzgde.name:             test_usergroup ; 0x03c: length=14
...

The group directory has one block per entry, and the block number matches the ASM usergroup number. Since there is only one user group here, only one entry appears; with more groups, a script can fetch them all:

[grid@jyrac1 ~]$ vi getusergroup.sh

let b=1
while (($b <= 3))
do
kfed read  /dev/raw/raw7 aun=101 blkn=$b | grep kfzgde.name
let b=b+1
done

[grid@jyrac1 ~]$ chmod 777 getusergroup.sh 

[grid@jyrac1 ~]$ ./getusergroup.sh 
kfzgde.name:             test_usergroup ; 0x03c: length=14
kfzgde.name:                            ; 0x03c: length=0
kfzgde.name:                            ; 0x03c: length=0

Summary:
The ASM user directory and group directory are metadata structures that support the ASM File Access Control feature introduced in version 11.2. This information can be queried through the V$ASM_USER, V$ASM_USERGROUP, and V$ASM_USERGROUP_MEMBER views.

Oracle ASM Attributes Directory – http://www.jydba.net/index.php/archives/2011 – Mon, 02 Jan 2017

The Attributes Directory contains metadata about disk group attributes. It exists in a disk group only when compatible.asm is set to 11.1 or higher, and it occupies file number 9 in the disk group. Disk group attributes, introduced in ASM 11.1, allow fine-grained tuning of disk group properties. Some attributes can only be specified at disk group creation time (such as au_size, which is stored in the disk header). If compatible.asm is set to 11.1 or higher, compatible.asm is stored in the PST; otherwise it is stored in the disk header. In Oracle 11gR1, compatible.rdbms, disk_repair_time, and compatible.asm are stored in the attributes directory. Other attributes can be specified at any time (such as disk_repair_time).

Common attributes
Most attributes are stored in the attributes directory and can be retrieved by querying the v$asm_attribute view. Let's query this view to look at the attribute information for all of my disk groups.

SQL> col "group " for a30
SQL> col "attribute" for a50
SQL> col "value" for a50
SQL> select g.name "group", a.name "attribute", a.value "value"
  2  from v$asm_diskgroup g, v$asm_attribute a
  3  where g.group_number=a.group_number and a.name not like 'template%';

group                                                        attribute                                          value
------------------------------------------------------------ -------------------------------------------------- --------------------------------------------------
ARCHDG                                                       compatible.asm                                     11.2.0.0.0
ARCHDG                                                       sector_size                                        512
ARCHDG                                                       access_control.umask                               066
ARCHDG                                                       access_control.enabled                             FALSE
ARCHDG                                                       cell.smart_scan_capable                            FALSE
ARCHDG                                                       compatible.rdbms                                   10.1.0.0.0
ARCHDG                                                       disk_repair_time                                   3.6h
ARCHDG                                                       au_size                                            1048576
CRSDG                                                        disk_repair_time                                   3.6h
CRSDG                                                        access_control.enabled                             FALSE
CRSDG                                                        cell.smart_scan_capable                            FALSE
CRSDG                                                        compatible.rdbms                                   10.1.0.0.0
CRSDG                                                        compatible.asm                                     11.2.0.0.0
CRSDG                                                        sector_size                                        512
CRSDG                                                        au_size                                            1048576
CRSDG                                                        access_control.umask                               066
DATADG                                                       compatible.asm                                     11.2.0.0.0
DATADG                                                       sector_size                                        512
DATADG                                                       au_size                                            1048576
DATADG                                                       disk_repair_time                                   3.6h
DATADG                                                       compatible.rdbms                                   10.1.0.0.0
DATADG                                                       access_control.umask                               066
DATADG                                                       access_control.enabled                             FALSE
DATADG                                                       cell.smart_scan_capable                            FALSE
ACFS                                                         disk_repair_time                                   3.6h
ACFS                                                         au_size                                            1048576
ACFS                                                         access_control.umask                               066
ACFS                                                         access_control.enabled                             FALSE
ACFS                                                         cell.smart_scan_capable                            FALSE
ACFS                                                         compatible.advm                                    11.2.0.0.0
ACFS                                                         compatible.rdbms                                   10.1.0.0.0
ACFS                                                         compatible.asm                                     11.2.0.0.0
ACFS                                                         sector_size                                        512
USD                                                          disk_repair_time                                   3.6h
USD                                                          compatible.advm                                    11.2.0.0.0
USD                                                          cell.smart_scan_capable                            FALSE
USD                                                          access_control.enabled                             FALSE
USD                                                          access_control.umask                               066
USD                                                          compatible.asm                                     11.2.0.0.0
USD                                                          sector_size                                        512
USD                                                          au_size                                            1048576
USD                                                          compatible.rdbms                                   11.2.0.0.0

42 rows selected.

An attribute that can be modified at any time is disk_repair_time. The following commands change the USD disk group's disk_repair_time attribute through asmcmd:

[grid@jyrac1 ~]$ asmcmd setattr -G USD disk_repair_time '8.0h'
[grid@jyrac1 ~]$ asmcmd lsattr -lm disk_repair_time
Group_Name  Name              Value  RO  Sys  
ACFS        disk_repair_time  3.6h   N   Y    
ARCHDG      disk_repair_time  3.6h   N   Y    
CRSDG       disk_repair_time  3.6h   N   Y    
DATADG      disk_repair_time  3.6h   N   Y    
USD         disk_repair_time  8.0h   N   Y    

The query above shows that the USD disk group's disk_repair_time attribute was successfully changed to 8.0h.
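disk_repair_time values carry an h (hours) or m (minutes) unit suffix, as in the 3.6h default and the 8.0h set above. A small illustrative sketch (not part of the original post) converting such a value to seconds:

```shell
#!/bin/sh
# Convert a disk_repair_time value like "3.6h" or "30m" to seconds.
# The h/m unit handling mirrors the suffixes seen in the lsattr output.
repair_time_secs() {
  v=$1
  num=${v%[hm]}          # numeric part, e.g. "3.6"
  unit=${v#"$num"}       # unit suffix, "h" or "m"
  case $unit in
    h) mult=3600 ;;
    m) mult=60 ;;
    *) echo "unknown unit: $v" >&2; return 1 ;;
  esac
  # POSIX sh arithmetic is integer-only; use awk for the fractional multiply.
  awk -v n="$num" -v m="$mult" 'BEGIN { printf "%.0f\n", n * m }'
}

repair_time_secs 3.6h   # default: 12960 seconds
repair_time_secs 8.0h   # new USD value: 28800 seconds
```

So the change above raised the repair window from 12960 to 28800 seconds before offlined disks are dropped.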

Hidden attributes
The attributes directory is ASM metadata file number 9. Now let's locate the attributes directory of disk group 5 (USD).

SQL> select x.group_kffxp "group#",x.disk_kffxp "disk #",d.name "disk name",d.path "disk path",x.xnum_kffxp "virtual extent",pxn_kffxp "physical extent",x.au_kffxp "au"
  2  from x$kffxp x, v$asm_disk_stat d
  3  where x.group_kffxp=d.group_number
  4  and x.disk_kffxp=d.disk_number
  5  and x.group_kffxp=5
  6  and x.number_kffxp=9
  7  order by 1,2; 

    group#     disk # disk name                      disk path                                virtual extent physical extent         au
---------- ---------- ------------------------------ ---------------------------------------- -------------- --------------- ----------
         5          0 USD_0000                       /dev/raw/raw7                                         0               0         50
         5          1 USD_0001                       /dev/raw/raw12                                        0               1         50

The query result above shows that the attributes directory has two mirror copies (the disk group uses normal redundancy), located in AU 50 on disk 0 (/dev/raw/raw7) and AU 50 on disk 1 (/dev/raw/raw12) of disk group 5 (USD).

Using the kfed tool to find the AU layout of the attributes directory of disk group 5 (USD)
File 1 always starts at AU 2 of disk 0, so remember this location: disk 0, AU 2. It is the starting point for locating any file in ASM, and plays a role much like a disk's boot sector, which is responsible for bringing up the OS after power-on. File 1 occupies at least two AUs. Within file 1, each file takes one metadata block that stores that file's extent map. Each metadata block is 4KB and an AU is 1MB, so one AU can hold the extent maps of 256 files. AU 2 of disk 0 holds only metadata files: its first block is reserved for the system, and blocks 1 through 255 (255 blocks in all) correspond to files 1 through 255, which are precisely the metadata files. In other words, disk 0 AU 2 stores the extent maps of all metadata files. The second AU of file 1 holds file 256 in its first block, file 257 in its second block, and so on. Every time Oracle reads data from ASM it first reads file 1, finds where the target file's extents live on disk, and then reads the file's data. Since the attributes directory is file 9, we can read block 9 of AU 2 on disk 0 (/dev/raw/raw7) of disk group 5 to locate it.

[grid@jyrac1 ~]$ kfed read /dev/raw/raw7 aun=2 blkn=9 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            4 ; 0x002: KFBTYP_FILEDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       9 ; 0x004: blk=9
kfbh.block.obj:                       1 ; 0x008: file=1
kfbh.check:                    20914363 ; 0x00c: 0x013f20bb
kfbh.fcn.base:                     6545 ; 0x010: 0x00001991
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfffdb.node.incarn:                   1 ; 0x000: A=1 NUMM=0x0
kfffdb.node.frlist.number:   4294967295 ; 0x004: 0xffffffff
kfffdb.node.frlist.incarn:            0 ; 0x008: A=0 NUMM=0x0
kfffdb.hibytes:                       0 ; 0x00c: 0x00000000
kfffdb.lobytes:                 1048576 ; 0x010: 0x00100000
kfffdb.xtntcnt:                       3 ; 0x014: 0x00000003
kfffdb.xtnteof:                       3 ; 0x018: 0x00000003
kfffdb.blkSize:                    4096 ; 0x01c: 0x00001000
kfffdb.flags:                        17 ; 0x020: O=1 S=0 S=0 D=0 C=1 I=0 R=0 A=0
kfffdb.fileType:                     15 ; 0x021: 0x0f
kfffdb.dXrs:                         19 ; 0x022: SCHE=0x1 NUMB=0x3
kfffdb.iXrs:                         19 ; 0x023: SCHE=0x1 NUMB=0x3
kfffdb.dXsiz[0]:             4294967295 ; 0x024: 0xffffffff
kfffdb.dXsiz[1]:                      0 ; 0x028: 0x00000000
kfffdb.dXsiz[2]:                      0 ; 0x02c: 0x00000000
kfffdb.iXsiz[0]:             4294967295 ; 0x030: 0xffffffff
kfffdb.iXsiz[1]:                      0 ; 0x034: 0x00000000
kfffdb.iXsiz[2]:                      0 ; 0x038: 0x00000000
kfffdb.xtntblk:                       3 ; 0x03c: 0x0003
kfffdb.break:                        60 ; 0x03e: 0x003c
kfffdb.priZn:                         0 ; 0x040: KFDZN_COLD
kfffdb.secZn:                         0 ; 0x041: KFDZN_COLD
kfffdb.ub2spare:                      0 ; 0x042: 0x0000
kfffdb.alias[0]:             4294967295 ; 0x044: 0xffffffff
kfffdb.alias[1]:             4294967295 ; 0x048: 0xffffffff
kfffdb.strpwdth:                      0 ; 0x04c: 0x00
kfffdb.strpsz:                        0 ; 0x04d: 0x00
kfffdb.usmsz:                         0 ; 0x04e: 0x0000
kfffdb.crets.hi:               33043406 ; 0x050: HOUR=0xe DAYS=0x1e MNTH=0xc YEAR=0x7e0
kfffdb.crets.lo:             3959646208 ; 0x054: USEC=0x0 MSEC=0xda SECS=0x0 MINS=0x3b
kfffdb.modts.hi:                      0 ; 0x058: HOUR=0x0 DAYS=0x0 MNTH=0x0 YEAR=0x0
kfffdb.modts.lo:                      0 ; 0x05c: USEC=0x0 MSEC=0x0 SECS=0x0 MINS=0x0
kfffdb.dasz[0]:                       0 ; 0x060: 0x00
kfffdb.dasz[1]:                       0 ; 0x061: 0x00
kfffdb.dasz[2]:                       0 ; 0x062: 0x00
kfffdb.dasz[3]:                       0 ; 0x063: 0x00
kfffdb.permissn:                      0 ; 0x064: 0x00
kfffdb.ub1spar1:                      0 ; 0x065: 0x00
kfffdb.ub2spar2:                      0 ; 0x066: 0x0000
kfffdb.user.entnum:                   0 ; 0x068: 0x0000
kfffdb.user.entinc:                   0 ; 0x06a: 0x0000
kfffdb.group.entnum:                  0 ; 0x06c: 0x0000
kfffdb.group.entinc:                  0 ; 0x06e: 0x0000
kfffdb.spare[0]:                      0 ; 0x070: 0x00000000
kfffdb.spare[1]:                      0 ; 0x074: 0x00000000
kfffdb.spare[2]:                      0 ; 0x078: 0x00000000
kfffdb.spare[3]:                      0 ; 0x07c: 0x00000000
kfffdb.spare[4]:                      0 ; 0x080: 0x00000000
kfffdb.spare[5]:                      0 ; 0x084: 0x00000000
kfffdb.spare[6]:                      0 ; 0x088: 0x00000000
kfffdb.spare[7]:                      0 ; 0x08c: 0x00000000
kfffdb.spare[8]:                      0 ; 0x090: 0x00000000
kfffdb.spare[9]:                      0 ; 0x094: 0x00000000
kfffdb.spare[10]:                     0 ; 0x098: 0x00000000
kfffdb.spare[11]:                     0 ; 0x09c: 0x00000000
kfffdb.usm:                             ; 0x0a0: length=0
kfffde[0].xptr.au:                   50 ; 0x4a0: 0x00000032
kfffde[0].xptr.disk:                  0 ; 0x4a4: 0x0000
kfffde[0].xptr.flags:                 0 ; 0x4a6: L=0 E=0 D=0 S=0
kfffde[0].xptr.chk:                  24 ; 0x4a7: 0x18
kfffde[1].xptr.au:                   50 ; 0x4a8: 0x00000032
kfffde[1].xptr.disk:                  1 ; 0x4ac: 0x0001
kfffde[1].xptr.flags:                 0 ; 0x4ae: L=0 E=0 D=0 S=0
kfffde[1].xptr.chk:                  25 ; 0x4af: 0x19
kfffde[2].xptr.au:           4294967294 ; 0x4b0: 0xfffffffe
kfffde[2].xptr.disk:              65534 ; 0x4b4: 0xfffe
kfffde[2].xptr.flags:                 0 ; 0x4b6: L=0 E=0 D=0 S=0
kfffde[2].xptr.chk:                  42 ; 0x4b7: 0x2a

From kfffde[0].xptr.au=50, kfffde[0].xptr.disk=0 and kfffde[1].xptr.au=50, kfffde[1].xptr.disk=1, we can see that the attributes directory has two mirror copies (the disk group uses normal redundancy), located in AU 50 on disk 0 (/dev/raw/raw7) and AU 50 on disk 1 (/dev/raw/raw12) of disk group 5 (USD), which matches the query results.

Now let's use kfed to examine the actual contents of the attributes directory:

[grid@jyrac1 ~]$ kfed read /dev/raw/raw7 aun=50 blkn=0 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           23 ; 0x002: KFBTYP_ATTRDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       9 ; 0x008: file=9
kfbh.check:                  2524475834 ; 0x00c: 0x967871ba
kfbh.fcn.base:                     7211 ; 0x010: 0x00001c2b
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kffdnd.bnode.incarn:                  1 ; 0x000: A=1 NUMM=0x0
kffdnd.bnode.frlist.number:  4294967295 ; 0x004: 0xffffffff
kffdnd.bnode.frlist.incarn:           0 ; 0x008: A=0 NUMM=0x0
kffdnd.overfl.number:                 5 ; 0x00c: 0x00000005
kffdnd.overfl.incarn:                 1 ; 0x010: A=1 NUMM=0x0
kffdnd.parent.number:                 0 ; 0x014: 0x00000000
kffdnd.parent.incarn:                 1 ; 0x018: A=1 NUMM=0x0
kffdnd.fstblk.number:                 0 ; 0x01c: 0x00000000
kffdnd.fstblk.incarn:                 1 ; 0x020: A=1 NUMM=0x0
kfede[0].entry.incarn:                1 ; 0x024: A=1 NUMM=0x0
kfede[0].entry.hash:                  0 ; 0x028: 0x00000000
kfede[0].entry.refer.number: 4294967295 ; 0x02c: 0xffffffff
kfede[0].entry.refer.incarn:          0 ; 0x030: A=0 NUMM=0x0
kfede[0].name:         disk_repair_time ; 0x034: length=16
kfede[0].value:                    8.0h ; 0x074: length=4
kfede[0].length:                      4 ; 0x174: 0x0004
kfede[0].flags:                      16 ; 0x176: R=0 D=0 H=0 H=0 S=1 C=0 S=0 V=0 I=0
kfede[0].spare1[0]:                   0 ; 0x178: 0x00000000
kfede[0].spare1[1]:                   0 ; 0x17c: 0x00000000
kfede[0].spare1[2]:                   0 ; 0x180: 0x00000000
kfede[0].spare1[3]:                   0 ; 0x184: 0x00000000
kfede[0].spare1[4]:                   0 ; 0x188: 0x00000000
kfede[0].spare1[5]:                   0 ; 0x18c: 0x00000000
kfede[0].spare1[6]:                   0 ; 0x190: 0x00000000
kfede[0].spare1[7]:                   0 ; 0x194: 0x00000000
kfede[1].entry.incarn:                1 ; 0x198: A=1 NUMM=0x0
kfede[1].entry.hash:                  0 ; 0x19c: 0x00000000
kfede[1].entry.refer.number: 4294967295 ; 0x1a0: 0xffffffff
kfede[1].entry.refer.incarn:          0 ; 0x1a4: A=0 NUMM=0x0
kfede[1].name:       _rebalance_compact ; 0x1a8: length=18
kfede[1].value:                    TRUE ; 0x1e8: length=4
kfede[1].length:                      4 ; 0x2e8: 0x0004
kfede[1].flags:                      22 ; 0x2ea: R=0 D=1 H=1 H=0 S=1 C=0 S=0 V=0 I=0
kfede[1].spare1[0]:                   0 ; 0x2ec: 0x00000000
kfede[1].spare1[1]:                   0 ; 0x2f0: 0x00000000
kfede[1].spare1[2]:                   0 ; 0x2f4: 0x00000000
kfede[1].spare1[3]:                   0 ; 0x2f8: 0x00000000
kfede[1].spare1[4]:                   0 ; 0x2fc: 0x00000000
kfede[1].spare1[5]:                   0 ; 0x300: 0x00000000
kfede[1].spare1[6]:                   0 ; 0x304: 0x00000000
kfede[1].spare1[7]:                   0 ; 0x308: 0x00000000
kfede[2].entry.incarn:                1 ; 0x30c: A=1 NUMM=0x0
kfede[2].entry.hash:                  0 ; 0x310: 0x00000000
kfede[2].entry.refer.number: 4294967295 ; 0x314: 0xffffffff
kfede[2].entry.refer.incarn:          0 ; 0x318: A=0 NUMM=0x0
kfede[2].name:            _extent_sizes ; 0x31c: length=13
kfede[2].value:                  1 4 16 ; 0x35c: length=6
kfede[2].length:                      6 ; 0x45c: 0x0006
kfede[2].flags:                      22 ; 0x45e: R=0 D=1 H=1 H=0 S=1 C=0 S=0 V=0 I=0
kfede[2].spare1[0]:                   0 ; 0x460: 0x00000000
kfede[2].spare1[1]:                   0 ; 0x464: 0x00000000
kfede[2].spare1[2]:                   0 ; 0x468: 0x00000000
kfede[2].spare1[3]:                   0 ; 0x46c: 0x00000000
kfede[2].spare1[4]:                   0 ; 0x470: 0x00000000
kfede[2].spare1[5]:                   0 ; 0x474: 0x00000000
kfede[2].spare1[6]:                   0 ; 0x478: 0x00000000
kfede[2].spare1[7]:                   0 ; 0x47c: 0x00000000
kfede[3].entry.incarn:                1 ; 0x480: A=1 NUMM=0x0
kfede[3].entry.hash:                  0 ; 0x484: 0x00000000
kfede[3].entry.refer.number: 4294967295 ; 0x488: 0xffffffff
kfede[3].entry.refer.incarn:          0 ; 0x48c: A=0 NUMM=0x0
kfede[3].name:           _extent_counts ; 0x490: length=14
kfede[3].value:  20000 20000 2147483647 ; 0x4d0: length=22
kfede[3].length:                     22 ; 0x5d0: 0x0016
kfede[3].flags:                      22 ; 0x5d2: R=0 D=1 H=1 H=0 S=1 C=0 S=0 V=0 I=0
kfede[3].spare1[0]:                   0 ; 0x5d4: 0x00000000
kfede[3].spare1[1]:                   0 ; 0x5d8: 0x00000000
kfede[3].spare1[2]:                   0 ; 0x5dc: 0x00000000
kfede[3].spare1[3]:                   0 ; 0x5e0: 0x00000000
kfede[3].spare1[4]:                   0 ; 0x5e4: 0x00000000
kfede[3].spare1[5]:                   0 ; 0x5e8: 0x00000000
kfede[3].spare1[6]:                   0 ; 0x5ec: 0x00000000
kfede[3].spare1[7]:                   0 ; 0x5f0: 0x00000000
kfede[4].entry.incarn:                1 ; 0x5f4: A=1 NUMM=0x0
kfede[4].entry.hash:                  0 ; 0x5f8: 0x00000000
kfede[4].entry.refer.number:          1 ; 0x5fc: 0x00000001
kfede[4].entry.refer.incarn:          1 ; 0x600: A=1 NUMM=0x0
kfede[4].name:                        _ ; 0x604: length=1
kfede[4].value:                       0 ; 0x644: length=1
kfede[4].length:                      1 ; 0x744: 0x0001
kfede[4].flags:                      22 ; 0x746: R=0 D=1 H=1 H=0 S=1 C=0 S=0 V=0 I=0
kfede[4].spare1[0]:                   0 ; 0x748: 0x00000000
kfede[4].spare1[1]:                   0 ; 0x74c: 0x00000000
kfede[4].spare1[2]:                   0 ; 0x750: 0x00000000
kfede[4].spare1[3]:                   0 ; 0x754: 0x00000000
kfede[4].spare1[4]:                   0 ; 0x758: 0x00000000
kfede[4].spare1[5]:                   0 ; 0x75c: 0x00000000
kfede[4].spare1[6]:                   0 ; 0x760: 0x00000000
kfede[4].spare1[7]:                   0 ; 0x764: 0x00000000
kfede[5].entry.incarn:                1 ; 0x768: A=1 NUMM=0x0
kfede[5].entry.hash:                  0 ; 0x76c: 0x00000000
kfede[5].entry.refer.number: 4294967295 ; 0x770: 0xffffffff
kfede[5].entry.refer.incarn:          0 ; 0x774: A=0 NUMM=0x0
kfede[5].name:                  au_size ; 0x778: length=7
kfede[5].value:                ; 0x7b8: length=9
kfede[5].length:                      9 ; 0x8b8: 0x0009
kfede[5].flags:                     147 ; 0x8ba: R=1 D=1 H=0 H=0 S=1 C=0 S=0 V=1 I=0
kfede[5].spare1[0]:                   0 ; 0x8bc: 0x00000000
kfede[5].spare1[1]:                   0 ; 0x8c0: 0x00000000
kfede[5].spare1[2]:                   0 ; 0x8c4: 0x00000000
kfede[5].spare1[3]:                   0 ; 0x8c8: 0x00000000
kfede[5].spare1[4]:                   0 ; 0x8cc: 0x00000000
kfede[5].spare1[5]:                   0 ; 0x8d0: 0x00000000
kfede[5].spare1[6]:                   0 ; 0x8d4: 0x00000000
kfede[5].spare1[7]:                   0 ; 0x8d8: 0x00000000
kfede[6].entry.incarn:                1 ; 0x8dc: A=1 NUMM=0x0
kfede[6].entry.hash:                  0 ; 0x8e0: 0x00000000
kfede[6].entry.refer.number: 4294967295 ; 0x8e4: 0xffffffff
kfede[6].entry.refer.incarn:          0 ; 0x8e8: A=0 NUMM=0x0
kfede[6].name:              sector_size ; 0x8ec: length=11
kfede[6].value:                ; 0x92c: length=9
kfede[6].length:                      9 ; 0xa2c: 0x0009
kfede[6].flags:                     147 ; 0xa2e: R=1 D=1 H=0 H=0 S=1 C=0 S=0 V=1 I=0
kfede[6].spare1[0]:                   0 ; 0xa30: 0x00000000
kfede[6].spare1[1]:                   0 ; 0xa34: 0x00000000
kfede[6].spare1[2]:                   0 ; 0xa38: 0x00000000
kfede[6].spare1[3]:                   0 ; 0xa3c: 0x00000000
kfede[6].spare1[4]:                   0 ; 0xa40: 0x00000000
kfede[6].spare1[5]:                   0 ; 0xa44: 0x00000000
kfede[6].spare1[6]:                   0 ; 0xa48: 0x00000000
kfede[6].spare1[7]:                   0 ; 0xa4c: 0x00000000
kfede[7].entry.incarn:                1 ; 0xa50: A=1 NUMM=0x0
kfede[7].entry.hash:                  0 ; 0xa54: 0x00000000
kfede[7].entry.refer.number:          2 ; 0xa58: 0x00000002
kfede[7].entry.refer.incarn:          1 ; 0xa5c: A=1 NUMM=0x0
kfede[7].name:               compatible ; 0xa60: length=10
kfede[7].value:                ; 0xaa0: length=9
kfede[7].length:                      9 ; 0xba0: 0x0009
kfede[7].flags:                     178 ; 0xba2: R=0 D=1 H=0 H=0 S=1 C=1 S=0 V=1 I=0
kfede[7].spare1[0]:                   0 ; 0xba4: 0x00000000
kfede[7].spare1[1]:                   0 ; 0xba8: 0x00000000
kfede[7].spare1[2]:                   0 ; 0xbac: 0x00000000
kfede[7].spare1[3]:                   0 ; 0xbb0: 0x00000000
kfede[7].spare1[4]:                   0 ; 0xbb4: 0x00000000
kfede[7].spare1[5]:                   0 ; 0xbb8: 0x00000000
kfede[7].spare1[6]:                   0 ; 0xbbc: 0x00000000
kfede[7].spare1[7]:                   0 ; 0xbc0: 0x00000000
kfede[8].entry.incarn:                1 ; 0xbc4: A=1 NUMM=0x0
kfede[8].entry.hash:                  0 ; 0xbc8: 0x00000000
kfede[8].entry.refer.number:          3 ; 0xbcc: 0x00000003
kfede[8].entry.refer.incarn:          1 ; 0xbd0: A=1 NUMM=0x0
kfede[8].name:                     cell ; 0xbd4: length=4
kfede[8].value:                   FALSE ; 0xc14: length=5
kfede[8].length:                      5 ; 0xd14: 0x0005
kfede[8].flags:                      34 ; 0xd16: R=0 D=1 H=0 H=0 S=0 C=1 S=0 V=0 I=0
kfede[8].spare1[0]:                   0 ; 0xd18: 0x00000000
kfede[8].spare1[1]:                   0 ; 0xd1c: 0x00000000
kfede[8].spare1[2]:                   0 ; 0xd20: 0x00000000
kfede[8].spare1[3]:                   0 ; 0xd24: 0x00000000
kfede[8].spare1[4]:                   0 ; 0xd28: 0x00000000
kfede[8].spare1[5]:                   0 ; 0xd2c: 0x00000000
kfede[8].spare1[6]:                   0 ; 0xd30: 0x00000000
kfede[8].spare1[7]:                   0 ; 0xd34: 0x00000000
kfede[9].entry.incarn:                1 ; 0xd38: A=1 NUMM=0x0
kfede[9].entry.hash:                  0 ; 0xd3c: 0x00000000
kfede[9].entry.refer.number:          4 ; 0xd40: 0x00000004
kfede[9].entry.refer.incarn:          1 ; 0xd44: A=1 NUMM=0x0
kfede[9].name:           access_control ; 0xd48: length=14
kfede[9].value:                   FALSE ; 0xd88: length=5
kfede[9].length:                      5 ; 0xe88: 0x0005
kfede[9].flags:                      18 ; 0xe8a: R=0 D=1 H=0 H=0 S=1 C=0 S=0 V=0 I=0
kfede[9].spare1[0]:                   0 ; 0xe8c: 0x00000000
kfede[9].spare1[1]:                   0 ; 0xe90: 0x00000000
kfede[9].spare1[2]:                   0 ; 0xe94: 0x00000000
kfede[9].spare1[3]:                   0 ; 0xe98: 0x00000000
kfede[9].spare1[4]:                   0 ; 0xe9c: 0x00000000
kfede[9].spare1[5]:                   0 ; 0xea0: 0x00000000
kfede[9].spare1[6]:                   0 ; 0xea4: 0x00000000
kfede[9].spare1[7]:                   0 ; 0xea8: 0x00000000

The kfede[i] fields hold the names and values of the disk group attributes. Use `kfed read <path> | egrep "name|value"` to list all of a disk group's attribute values:
[grid@jyrac1 ~]$ kfed read /dev/raw/raw7 aun=50 blkn=0  |  egrep "name|value"
kfede[0].name:         disk_repair_time ; 0x034: length=16
kfede[0].value:                    8.0h ; 0x074: length=4
kfede[1].name:       _rebalance_compact ; 0x1a8: length=18
kfede[1].value:                    TRUE ; 0x1e8: length=4
kfede[2].name:            _extent_sizes ; 0x31c: length=13
kfede[2].value:                  1 4 16 ; 0x35c: length=6
kfede[3].name:           _extent_counts ; 0x490: length=14
kfede[3].value:  20000 20000 2147483647 ; 0x4d0: length=22
kfede[4].name:                        _ ; 0x604: length=1
kfede[4].value:                       0 ; 0x644: length=1
kfede[5].name:                  au_size ; 0x778: length=7
kfede[5].value:                ; 0x7b8: length=9
kfede[6].name:              sector_size ; 0x8ec: length=11
kfede[6].value:                ; 0x92c: length=9
kfede[7].name:               compatible ; 0xa60: length=10
kfede[7].value:                ; 0xaa0: length=9
kfede[8].name:                     cell ; 0xbd4: length=4
kfede[8].value:                   FALSE ; 0xc14: length=5
kfede[9].name:           access_control ; 0xd48: length=14
kfede[9].value:                   FALSE ; 0xd88: length=5

The output above also reveals a number of hidden disk group attributes. We can see that _rebalance_compact is TRUE; this attribute controls the compact phase of a disk group rebalance. We can also see how extents grow (_extent_sizes): the extent size starts at 1 AU, then grows to 4 AUs, and finally to 16 AUs. _extent_counts gives the breakpoints for that growth: the first 20000 extents are 1 AU each, the next 20000 extents are 4 AUs each, and every extent after that is 16 AUs. The cell attribute is FALSE, and the access_control attribute is FALSE.
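The variable extent size scheme described above can be sketched as a small calculation. This is an illustrative model only, not Oracle's code: it simply takes the tier sizes and breakpoints straight from the _extent_sizes and _extent_counts values in the dump.

```python
# Illustrative model of ASM variable extent sizes, based on the
# _extent_sizes (1 4 16) and _extent_counts (20000 20000 2147483647)
# attributes dumped above. Not Oracle's actual implementation.

EXTENT_SIZES = [1, 4, 16]                    # AUs per extent in each tier
EXTENT_COUNTS = [20000, 20000, 2147483647]   # number of extents per tier

def extent_size_in_aus(extent_number):
    """Return the size (in AUs) of the given file extent number."""
    remaining = extent_number
    for size, count in zip(EXTENT_SIZES, EXTENT_COUNTS):
        if remaining < count:
            return size
        remaining -= count
    return EXTENT_SIZES[-1]

# The first 20000 extents are 1 AU, the next 20000 are 4 AUs,
# and everything from extent 40000 on is 16 AUs.
print(extent_size_in_aus(0))      # 1
print(extent_size_in_aus(19999))  # 1
print(extent_size_in_aus(20000))  # 4
print(extent_size_in_aus(40000))  # 16
```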

Summary:
Every disk group has a set of disk group attributes that give fine-grained control over its characteristics. Most attributes are stored in the attribute directory and can be retrieved by querying the v$asm_attribute view. Before ASM 11.1, the AU size could also be specified at disk group creation time through the hidden ASM parameter _ASM_AUSIZE; from ASM 11.1 onward, with disk group attributes available, the same is achieved by setting the disk group's AU_SIZE attribute.

Oracle ASM Disk Used Space Directory (http://www.jydba.net/index.php/archives/2009, Mon, 02 Jan 2017 10:31:57 +0000)

Disk Used Space Directory
ASM file number 8 is the Used Space Directory (USD), which records the number of allocated AUs in each zone of every disk in each ASM disk group. A disk has two zones: the hot zone (the outer portion of the disk) and the cold zone (the inner portion). The USD contains one entry per disk, recording the AU usage counts for both zones (COLD and HOT). The USD structure was introduced in version 11.2 and is related to the Intelligent Data Placement feature. The USD metadata file exists when the disk group's ASM compatibility attribute is set to 11.2 or higher.

The following query shows the AU distribution of the USD in each disk group:

SQL> select x.group_kffxp "group#",x.disk_kffxp "disk #",d.name "disk name",d.path "disk path",x.xnum_kffxp "virtual extent",pxn_kffxp "physical extent",x.au_kffxp "au"
  2  from x$kffxp x, v$asm_disk_stat d
  3  where x.group_kffxp=d.group_number
  4  and x.disk_kffxp=d.disk_number
  5  --and x.group_kffxp=4
  6  and x.number_kffxp=8
  7  order by 1,2; 

    group#     disk # disk name                      disk path                                virtual extent physical extent         au
---------- ---------- ------------------------------ ---------------------------------------- -------------- --------------- ----------
         1          0 ARCHDG_0000                    /dev/raw/raw2                                         0               1         51
         1          1 ARCHDG_0001                    /dev/raw/raw9                                         0               0         51
         2          1 CRSDG_0001                     /dev/raw/raw8                                         0               0         25
         3          0 DATADG_0001                    /dev/raw/raw11                                        0               0         38
         3          1 DATADG_0003                    /dev/raw/raw4                                         0               1         38
         3          2 DATADG_0002                    /dev/raw/raw3                                         0               2         40
         4          0 ACFS_0000                      /dev/raw/raw5                                         0               1         51
         4          1 ACFS_0001                      /dev/raw/raw6                                         0               0         51
         5          0 USD_0000                       /dev/raw/raw7                                         0               1         51
         5          1 USD_0001                       /dev/raw/raw12                                        0               0         51

10 rows selected.

The query result above shows that disk group 3's Used Space Directory AU has three mirror copies (virtual extent 0 maps to three physical extents): AU 38 on disk 0 (/dev/raw/raw11), AU 38 on disk 1 (/dev/raw/raw4), and AU 40 on disk 2 (/dev/raw/raw3).

Use the kfed tool to examine the AU distribution of disk group 3's Used Space Directory.
File 1 always starts at AU 2 of disk 0 — remember this location: disk 0, AU 2. It is the starting point for locating any file in ASM, playing a role much like a boot sector that bootstraps the OS after power-on. File 1 occupies at least two AUs. Within file 1, each file takes one metadata block that stores that file's extent map. Each metadata block is 4KB and an AU is 1MB, so one AU can hold the extent information of 256 files. In AU 2 of disk 0, every entry describes a metadata file: the first block is reserved for the system, and blocks 1 through 255 correspond to files 1 through 255 — that is, all of the metadata files. In other words, AU 2 of disk 0 holds the extent maps of all the metadata files. File 1's second AU starts with file 256 in its first block, file 257 in its second block, and so on. Whenever ASM data is read, Oracle first reads file 1 to find where the target file is laid out on disk, and then reads that file's data. Since the Used Space Directory is file 8, we read block 8 of AU 2 on disk 0 (/dev/raw/raw11):

[grid@jyrac1 ~]$ kfed read /dev/raw/raw11 aun=2 blkn=8 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            4 ; 0x002: KFBTYP_FILEDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       8 ; 0x004: blk=8
kfbh.block.obj:                       1 ; 0x008: file=1
kfbh.check:                   960193247 ; 0x00c: 0x393b62df
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfffdb.node.incarn:                   1 ; 0x000: A=1 NUMM=0x0
kfffdb.node.frlist.number:   4294967295 ; 0x004: 0xffffffff
kfffdb.node.frlist.incarn:            0 ; 0x008: A=0 NUMM=0x0
kfffdb.hibytes:                       0 ; 0x00c: 0x00000000
kfffdb.lobytes:                 1048576 ; 0x010: 0x00100000
kfffdb.xtntcnt:                       3 ; 0x014: 0x00000003
kfffdb.xtnteof:                       3 ; 0x018: 0x00000003
kfffdb.blkSize:                    4096 ; 0x01c: 0x00001000
kfffdb.flags:                         1 ; 0x020: O=1 S=0 S=0 D=0 C=0 I=0 R=0 A=0
kfffdb.fileType:                     15 ; 0x021: 0x0f
kfffdb.dXrs:                         19 ; 0x022: SCHE=0x1 NUMB=0x3
kfffdb.iXrs:                         19 ; 0x023: SCHE=0x1 NUMB=0x3
kfffdb.dXsiz[0]:             4294967295 ; 0x024: 0xffffffff
kfffdb.dXsiz[1]:                      0 ; 0x028: 0x00000000
kfffdb.dXsiz[2]:                      0 ; 0x02c: 0x00000000
kfffdb.iXsiz[0]:             4294967295 ; 0x030: 0xffffffff
kfffdb.iXsiz[1]:                      0 ; 0x034: 0x00000000
kfffdb.iXsiz[2]:                      0 ; 0x038: 0x00000000
kfffdb.xtntblk:                       3 ; 0x03c: 0x0003
kfffdb.break:                        60 ; 0x03e: 0x003c
kfffdb.priZn:                         0 ; 0x040: KFDZN_COLD
kfffdb.secZn:                         0 ; 0x041: KFDZN_COLD
kfffdb.ub2spare:                      0 ; 0x042: 0x0000
kfffdb.alias[0]:             4294967295 ; 0x044: 0xffffffff
kfffdb.alias[1]:             4294967295 ; 0x048: 0xffffffff
kfffdb.strpwdth:                      0 ; 0x04c: 0x00
kfffdb.strpsz:                        0 ; 0x04d: 0x00
kfffdb.usmsz:                         0 ; 0x04e: 0x0000
kfffdb.crets.hi:               33042831 ; 0x050: HOUR=0xf DAYS=0xc MNTH=0xc YEAR=0x7e0
kfffdb.crets.lo:             2460055552 ; 0x054: USEC=0x0 MSEC=0x5e SECS=0x2a MINS=0x24
kfffdb.modts.hi:               33042831 ; 0x058: HOUR=0xf DAYS=0xc MNTH=0xc YEAR=0x7e0
kfffdb.modts.lo:             2460055552 ; 0x05c: USEC=0x0 MSEC=0x5e SECS=0x2a MINS=0x24
kfffdb.dasz[0]:                       0 ; 0x060: 0x00
kfffdb.dasz[1]:                       0 ; 0x061: 0x00
kfffdb.dasz[2]:                       0 ; 0x062: 0x00
kfffdb.dasz[3]:                       0 ; 0x063: 0x00
kfffdb.permissn:                      0 ; 0x064: 0x00
kfffdb.ub1spar1:                      0 ; 0x065: 0x00
kfffdb.ub2spar2:                      0 ; 0x066: 0x0000
kfffdb.user.entnum:                   0 ; 0x068: 0x0000
kfffdb.user.entinc:                   0 ; 0x06a: 0x0000
kfffdb.group.entnum:                  0 ; 0x06c: 0x0000
kfffdb.group.entinc:                  0 ; 0x06e: 0x0000
kfffdb.spare[0]:                      0 ; 0x070: 0x00000000
kfffdb.spare[1]:                      0 ; 0x074: 0x00000000
kfffdb.spare[2]:                      0 ; 0x078: 0x00000000
kfffdb.spare[3]:                      0 ; 0x07c: 0x00000000
kfffdb.spare[4]:                      0 ; 0x080: 0x00000000
kfffdb.spare[5]:                      0 ; 0x084: 0x00000000
kfffdb.spare[6]:                      0 ; 0x088: 0x00000000
kfffdb.spare[7]:                      0 ; 0x08c: 0x00000000
kfffdb.spare[8]:                      0 ; 0x090: 0x00000000
kfffdb.spare[9]:                      0 ; 0x094: 0x00000000
kfffdb.spare[10]:                     0 ; 0x098: 0x00000000
kfffdb.spare[11]:                     0 ; 0x09c: 0x00000000
kfffdb.usm:                             ; 0x0a0: length=0
kfffde[0].xptr.au:                   38 ; 0x4a0: 0x00000026
kfffde[0].xptr.disk:                  0 ; 0x4a4: 0x0000
kfffde[0].xptr.flags:                 0 ; 0x4a6: L=0 E=0 D=0 S=0
kfffde[0].xptr.chk:                  12 ; 0x4a7: 0x0c
kfffde[1].xptr.au:                   38 ; 0x4a8: 0x00000026
kfffde[1].xptr.disk:                  1 ; 0x4ac: 0x0001
kfffde[1].xptr.flags:                 0 ; 0x4ae: L=0 E=0 D=0 S=0
kfffde[1].xptr.chk:                  13 ; 0x4af: 0x0d
kfffde[2].xptr.au:                   40 ; 0x4b0: 0x00000028
kfffde[2].xptr.disk:                  2 ; 0x4b4: 0x0002
kfffde[2].xptr.flags:                 0 ; 0x4b6: L=0 E=0 D=0 S=0
kfffde[2].xptr.chk:                   0 ; 0x4b7: 0x00
kfffde[3].xptr.au:           4294967295 ; 0x4b8: 0xffffffff
kfffde[3].xptr.disk:              65535 ; 0x4bc: 0xffff
kfffde[3].xptr.flags:                 0 ; 0x4be: L=0 E=0 D=0 S=0
kfffde[3].xptr.chk:                  42 ; 0x4bf: 0x2a

From kfffde[0].xptr.au=38 with kfffde[0].xptr.disk=0, kfffde[1].xptr.au=38 with kfffde[1].xptr.disk=1, and kfffde[2].xptr.au=40 with kfffde[2].xptr.disk=2, we can confirm that disk group 3's Used Space Directory has three mirror copies: AU 38 on disk 0 (/dev/raw/raw11), AU 38 on disk 1 (/dev/raw/raw4), and AU 40 on disk 2 (/dev/raw/raw3).
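As an aside, the kfffde[i].xptr.chk values in the dump are consistent with the commonly reported extent pointer checksum formula (reverse-engineered by the ASM community, not documented by Oracle): the constant 0x2A XORed with every byte of the AU number and the disk number. A quick check against the four entries above:

```python
def xptr_chk(au, disk):
    """Candidate checksum for an ASM extent pointer: 0x2A XORed with
    each byte of the 4-byte AU number and the 2-byte disk number.
    Reverse-engineered formula, not documented by Oracle."""
    chk = 0x2A
    for shift in (0, 8, 16, 24):
        chk ^= (au >> shift) & 0xFF
    for shift in (0, 8):
        chk ^= (disk >> shift) & 0xFF
    return chk

# Values from the kfed dump above: (au, disk) -> chk
print(xptr_chk(38, 0))              # 12  (0x0c)
print(xptr_chk(38, 1))              # 13  (0x0d)
print(xptr_chk(40, 2))              # 0   (0x00)
print(xptr_chk(4294967295, 65535))  # 42  (0x2a, the unallocated slot)
```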

Check how the used space on every disk in all disk groups is distributed:

SQL> select group_number "group#",disk_number "disk#",name "disk name",path,hot_used_mb "hot (mb)",cold_used_mb "cold (mb)"
  2  from v$asm_disk_stat
  3  order by 1,2;

    group#      disk# disk name                      PATH                             hot (mb)  cold (mb)
---------- ---------- ------------------------------ ------------------------------ ---------- ----------
         1          0 ARCHDG_0000                    /dev/raw/raw2                           0       3447
         1          1 ARCHDG_0001                    /dev/raw/raw9                           0       3447
         2          0 CRSDG_0000                     /dev/raw/raw1                           0        215
         2          1 CRSDG_0001                     /dev/raw/raw8                           0        183
         3          0 DATADG_0001                    /dev/raw/raw11                          0       1676
         3          1 DATADG_0003                    /dev/raw/raw4                           0       1672
         3          2 DATADG_0002                    /dev/raw/raw3                           0       1670
         3          3 DATADG_0000                    /dev/raw/raw10                          0       1677
         4          0 ACFS_0000                      /dev/raw/raw5                           0       4187
         4          1 ACFS_0001                      /dev/raw/raw6                           0       4187
         5          0 USD_0000                       /dev/raw/raw7                           0         53
         5          1 USD_0001                       /dev/raw/raw12                          0         53

12 rows selected.

The result above shows that all allocated space on every disk lies in the cold zone. Now use the kfed tool to examine disk group 3's Used Space Directory.

[grid@jyrac1 ~]$ kfed read /dev/raw/raw11 aun=38 blkn=0 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           26 ; 0x002: KFBTYP_USEDSPC
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       8 ; 0x008: file=8
kfbh.check:                    18521018 ; 0x00c: 0x011a9bba
kfbh.fcn.base:                     6591 ; 0x010: 0x000019bf
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdusde[0].used[0].spare:             0 ; 0x000: 0x00000000
kfdusde[0].used[0].hi:                0 ; 0x004: 0x00000000
kfdusde[0].used[0].lo:             1490 ; 0x008: 0x000005d2
kfdusde[0].used[1].spare:             0 ; 0x00c: 0x00000000
kfdusde[0].used[1].hi:                0 ; 0x010: 0x00000000
kfdusde[0].used[1].lo:                0 ; 0x014: 0x00000000
kfdusde[1].used[0].spare:             0 ; 0x018: 0x00000000
kfdusde[1].used[0].hi:                0 ; 0x01c: 0x00000000
kfdusde[1].used[0].lo:             1481 ; 0x020: 0x000005c9
kfdusde[1].used[1].spare:             0 ; 0x024: 0x00000000
kfdusde[1].used[1].hi:                0 ; 0x028: 0x00000000
kfdusde[1].used[1].lo:                0 ; 0x02c: 0x00000000
kfdusde[2].used[0].spare:             0 ; 0x030: 0x00000000
kfdusde[2].used[0].hi:                0 ; 0x034: 0x00000000
kfdusde[2].used[0].lo:             1476 ; 0x038: 0x000005c4
kfdusde[2].used[1].spare:             0 ; 0x03c: 0x00000000
kfdusde[2].used[1].hi:                0 ; 0x040: 0x00000000
kfdusde[2].used[1].lo:                0 ; 0x044: 0x00000000
kfdusde[3].used[0].spare:             0 ; 0x048: 0x00000000
kfdusde[3].used[0].hi:                0 ; 0x04c: 0x00000000
kfdusde[3].used[0].lo:             1491 ; 0x050: 0x000005d3
kfdusde[3].used[1].spare:             0 ; 0x054: 0x00000000
kfdusde[3].used[1].hi:                0 ; 0x058: 0x00000000
kfdusde[3].used[1].lo:                0 ; 0x05c: 0x00000000
kfdusde[4].used[0].spare:             0 ; 0x060: 0x00000000
kfdusde[4].used[0].hi:                0 ; 0x064: 0x00000000
kfdusde[4].used[0].lo:                0 ; 0x068: 0x00000000
kfdusde[4].used[1].spare:             0 ; 0x06c: 0x00000000
kfdusde[4].used[1].hi:                0 ; 0x070: 0x00000000
kfdusde[4].used[1].lo:                0 ; 0x074: 0x00000000

The kfed output above reflects the four disks in disk group 3, so only the first four entries of the kfdusde structure are populated (kfdusde[0].used[0], kfdusde[1].used[0], kfdusde[2].used[0], kfdusde[3].used[0]), and all four show that every allocated AU lies in the cold zone.
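Extracting those per-disk usage counters from kfed's text output is easy to script. A minimal sketch follows; the regular expression simply matches the kfdusde lines shown above, and it assumes used[0] is the cold-zone counter and used[1] the hot-zone counter, which matches the all-cold allocation seen here:

```python
import re

# A fragment of the kfed USD output shown above.
KFED_SNIPPET = """\
kfdusde[0].used[0].lo:             1490 ; 0x008: 0x000005d2
kfdusde[0].used[1].lo:                0 ; 0x014: 0x00000000
kfdusde[1].used[0].lo:             1481 ; 0x020: 0x000005c9
kfdusde[1].used[1].lo:                0 ; 0x02c: 0x00000000
"""

def used_aus(kfed_output):
    """Map disk index -> [cold AUs, hot AUs] parsed from kfed USD output
    (assuming used[0] = cold zone, used[1] = hot zone)."""
    usage = {}
    pattern = re.compile(r"kfdusde\[(\d+)\]\.used\[(\d+)\]\.lo:\s+(\d+)")
    for disk, zone, count in pattern.findall(kfed_output):
        usage.setdefault(int(disk), [0, 0])[int(zone)] = int(count)
    return usage

print(used_aus(KFED_SNIPPET))  # {0: [1490, 0], 1: [1481, 0]}
```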

Now let's create a template for disk group 5 that places any file created with it in the hot zone of the disks.

SQL> select group_number "group#",disk_number "disk#",name "disk name",path,hot_used_mb "hot (mb)",cold_used_mb "cold (mb)"
  2  from v$asm_disk_stat
  3  order by 1,2;

    group#      disk# disk name                      PATH                             hot (mb)  cold (mb)
---------- ---------- ------------------------------ ------------------------------ ---------- ----------
         1          0 ARCHDG_0000                    /dev/raw/raw2                           0       3447
         1          1 ARCHDG_0001                    /dev/raw/raw9                           0       3447
         2          0 CRSDG_0000                     /dev/raw/raw1                           0        215
         2          1 CRSDG_0001                     /dev/raw/raw8                           0        183
         3          0 DATADG_0001                    /dev/raw/raw11                          0       1676
         3          1 DATADG_0003                    /dev/raw/raw4                           0       1672
         3          2 DATADG_0002                    /dev/raw/raw3                           0       1670
         3          3 DATADG_0000                    /dev/raw/raw10                          0       1677
         4          0 ACFS_0000                      /dev/raw/raw5                           0       4187
         4          1 ACFS_0001                      /dev/raw/raw6                           0       4187
         5          0 USD_0000                       /dev/raw/raw7                           0         53
         5          1 USD_0001                       /dev/raw/raw12                          0         53

12 rows selected.

The result above shows that disk group 5's used space lies entirely in the cold zone: 53MB on each of its two disks.

SQL> alter diskgroup usd add template hotfile attributes (HOT);

Diskgroup altered.

This feature requires the disk group's compatible.rdbms attribute to be set to 11.2 or higher. Now create a datafile placed in the hot zone.

SQL> create tablespace t_hot datafile '+USD(HOTFILE)' size 50M;

Tablespace created.

Query the disk group space usage again:

SQL> select group_number "group#",disk_number "disk#",name "disk name",path,hot_used_mb "hot (mb)",cold_used_mb "cold (mb)"
  2  from v$asm_disk_stat
  3  order by 1,2;

    group#      disk# disk name                      PATH                             hot (mb)  cold (mb)
---------- ---------- ------------------------------ ------------------------------ ---------- ----------
         1          0 ARCHDG_0000                    /dev/raw/raw2                           0       3447
         1          1 ARCHDG_0001                    /dev/raw/raw9                           0       3447
         2          0 CRSDG_0000                     /dev/raw/raw1                           0        215
         2          1 CRSDG_0001                     /dev/raw/raw8                           0        183
         3          0 DATADG_0001                    /dev/raw/raw11                          0       1676
         3          1 DATADG_0003                    /dev/raw/raw4                           0       1672
         3          2 DATADG_0002                    /dev/raw/raw3                           0       1670
         3          3 DATADG_0000                    /dev/raw/raw10                          0       1677
         4          0 ACFS_0000                      /dev/raw/raw5                           0       4187
         4          1 ACFS_0001                      /dev/raw/raw6                           0       4187
         5          0 USD_0000                       /dev/raw/raw7                          26         86
         5          1 USD_0001                       /dev/raw/raw12                         25         87

12 rows selected.

The result shows that 51MB was allocated in the hot zone (26MB on disk 0 of disk group 5 and 25MB on disk 1) — 50MB for the file itself plus 1MB for the file header — spread across all disks in the disk group.
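The arithmetic behind that 51MB figure is straightforward: with a 1MB AU size, a 50MB datafile needs 50 AUs plus 1 AU for the file header, spread almost evenly across the two disks (assuming external redundancy, as the 26 + 25 = 51MB totals suggest). A quick sanity check:

```python
AU_MB = 1          # 1MB allocation unit
FILE_MB = 50       # datafile size
HEADER_AUS = 1     # one AU for the file header

total_aus = FILE_MB // AU_MB + HEADER_AUS   # 51 AUs to allocate
disks = 2

# Even round-robin placement across the disk group's two disks.
per_disk = [total_aus // disks + (1 if d < total_aus % disks else 0)
            for d in range(disks)]
print(total_aus, per_disk)  # 51 [26, 25]
```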

We can also move an existing datafile from the cold zone to the hot zone. First create a datafile in disk group USD whose space is allocated in the cold zone:

SQL> create tablespace t_cold datafile '+USD' size 50M;  

Tablespace created.

Query the disk group space usage again:

SQL> select group_number "group#",disk_number "disk#",name "disk name",path,hot_used_mb "hot (mb)",cold_used_mb "cold (mb)"
  2  from v$asm_disk_stat
  3  order by 1,2;

    group#      disk# disk name                      PATH                             hot (mb)  cold (mb)
---------- ---------- ------------------------------ ------------------------------ ---------- ----------
         1          0 ARCHDG_0000                    /dev/raw/raw2                           0       3447
         1          1 ARCHDG_0001                    /dev/raw/raw9                           0       3447
         2          0 CRSDG_0000                     /dev/raw/raw1                           0        215
         2          1 CRSDG_0001                     /dev/raw/raw8                           0        183
         3          0 DATADG_0001                    /dev/raw/raw11                          0       1676
         3          1 DATADG_0003                    /dev/raw/raw4                           0       1672
         3          2 DATADG_0002                    /dev/raw/raw3                           0       1670
         3          3 DATADG_0000                    /dev/raw/raw10                          0       1677
         4          0 ACFS_0000                      /dev/raw/raw5                           0       4187
         4          1 ACFS_0001                      /dev/raw/raw6                           0       4187
         5          0 USD_0000                       /dev/raw/raw7                          26        138
         5          1 USD_0001                       /dev/raw/raw12                         25        139

12 rows selected.

The result shows the hot zone of disk 0 of disk group 5 still at 26MB used and the hot zone of disk 1 still at 25MB — unchanged. The cold zone of disk 0, however, now shows 138MB used, and the cold zone of disk 1 shows 139MB.

SQL> col "tablespace_name" for a30        
SQL> col "file_name" for a50
SQL> set long 200
SQL> set linesize 200
SQL> select a.name "tablespace_name",b.name "file_name" from v$tablespace a,v$datafile b where a.ts#=b.ts# and a.name='T_COLD';

tablespace_name                file_name
------------------------------ --------------------------------------------------
T_COLD                         +USD/jyrac/datafile/t_cold.257.931965173

Now move the t_cold tablespace's datafile into the hot zone:

SQL> alter diskgroup usd modify file '+USD/jyrac/datafile/t_cold.257.931965173' attributes (HOT); 

Diskgroup altered.

This command triggers a rebalance of disk group USD, because the file's extents must be moved into the hot zone of the disks. When the rebalance completes, the query shows more data in the hot zone. Although this is a disk group rebalance, it is relatively fast and its duration depends mainly on the size of the file being moved: the other files are already in a balanced state and only need a quick check, so there is no large-scale extent movement.

Query the disk group space usage again:

SQL> select group_number "group#",disk_number "disk#",name "disk name",path,hot_used_mb "hot (mb)",cold_used_mb "cold (mb)"
  2  from v$asm_disk_stat
  3  order by 1,2;

    group#      disk# disk name                      PATH                             hot (mb)  cold (mb)
---------- ---------- ------------------------------ ------------------------------ ---------- ----------
         1          0 ARCHDG_0000                    /dev/raw/raw2                           0       3447
         1          1 ARCHDG_0001                    /dev/raw/raw9                           0       3447
         2          0 CRSDG_0000                     /dev/raw/raw1                           0        215
         2          1 CRSDG_0001                     /dev/raw/raw8                           0        183
         3          0 DATADG_0001                    /dev/raw/raw11                          0       1676
         3          1 DATADG_0003                    /dev/raw/raw4                           0       1672
         3          2 DATADG_0002                    /dev/raw/raw3                           0       1670
         3          3 DATADG_0000                    /dev/raw/raw10                          0       1677
         4          0 ACFS_0000                      /dev/raw/raw5                           0       4187
         4          1 ACFS_0001                      /dev/raw/raw6                           0       4187
         5          0 USD_0000                       /dev/raw/raw7                          52        112
         5          1 USD_0001                       /dev/raw/raw12                         50        114

12 rows selected.

The result shows that the hot zone usage of disk 0 of disk group 5 grew from 26MB to 52MB (+26MB) and that of disk 1 from 25MB to 50MB (+25MB); the total increase of 26 + 25 = 51MB is exactly the 50MB datafile plus 1MB of file header. Meanwhile the cold zone usage of disk 0 shrank from 138MB to 112MB (-26MB) and that of disk 1 from 139MB to 114MB (-25MB), matching the hot zone increase.

Summary:
The Used Space Directory records the number of AUs used in each zone of every disk in an ASM disk group. It supports the Intelligent Data Placement feature introduced in 11.2. One practical use of this feature is controlling whether data is placed in the cold zone or the hot zone. For disks built on RAID or virtual LUNs carved out of a storage array, a disk's hot and cold zones lose their meaning; the same is true for SSDs.

Oracle ASM Volume Directory (http://www.jydba.net/index.php/archives/2006, Fri, 30 Dec 2016 03:05:06 +0000)

Volume Directory
Oracle 11gR2 introduced ACFS, together with the ASM Dynamic Volume Manager (ADVM) to support it. In 11.2, ASM stores not only database files but also unstructured data such as clusterware files, ordinary binaries, external files, and text files. The Volume Directory is ASM file number 7; it tracks the files associated with ADVM. ASM Dynamic Volume devices are built from ASM Dynamic Volumes, and one or more volume devices can be configured in a single disk group. The ASM Cluster File System is built on top of an ASM disk group through the ADVM interface. Like a database, ADVM is a client of ASM. When a volume is accessed, the corresponding ASM file is opened and the ASM extent information is passed to the ADVM driver. Two file types are associated with ADVM volumes:
.ASMVOL: the volume file, which acts as the container for volume storage.
.ASMVDRL: a file containing the dirty region logging information, used when resynchronizing mirrored data.

Before any ADVM volume is created, querying directly for file 7 returns nothing:

SQL> select number_kffxp file#, disk_kffxp disk#, count(disk_kffxp) extents
  2  from x$kffxp
  3  where group_kffxp=3
  4   and disk_kffxp <> 65534
  5   and number_kffxp=7
  6  group by number_kffxp, disk_kffxp
  7  order by 1;

no rows selected

SQL> select x.xnum_kffxp "virtual extent",pxn_kffxp "physical extent",x.au_kffxp "au",x.disk_kffxp "disk #",d.name "disk name"
  2  from x$kffxp x, v$asm_disk_stat d
  3  where x.group_kffxp=d.group_number
  4  and x.disk_kffxp=d.disk_number
  5  and x.group_kffxp=3
  6  and x.number_kffxp=7
  7  order by 1,2,3;

no rows selected

Creating ADVM

Creating a separate disk group for ADVM is not required, but it does make sense: it isolates database files from ACFS files. To create a volume we first need a disk group; below, a disk group named ACFS is created. To create volumes in a disk group, its COMPATIBLE.ASM and COMPATIBLE.ADVM attributes must be set to 11.2 or higher, and the ADVM/ACFS drivers must be loaded (they are loaded by default in a cluster environment, but must be loaded manually in a single-instance environment).

In a single-instance environment, load the ADVM/ACFS drivers with the following commands (not needed in RAC, where they are loaded by default):

[root@jyrac1 bin]# ./acfsroot install
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9118: oracleadvm.ko driver in use - cannot unload.
ACFS-9312: Existing ADVM/ACFS installation detected.
ACFS-9118: oracleadvm.ko driver in use - cannot unload.
ACFS-9314: Removing previous ADVM/ACFS installation.
ACFS-9315: Previous ADVM/ACFS components successfully removed.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9154: Loading 'oracleacfs.ko' driver.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9309: ADVM/ACFS installation correctness verified.
[root@jyrac1 bin]#  ./acfsload  start
ACFS-9391: Checking for existing ADVM/ACFS installation.
ACFS-9392: Validating ADVM/ACFS installation files for operating system.
ACFS-9393: Verifying ASM Administrator setup.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9322: completed
[root@jyrac1 bin]# ./acfsdriverstate version
ACFS-9325:     Driver OS kernel version = 2.6.18-8.el5(x86_64).
ACFS-9326:     Driver Oracle version = 130707.


SQL> create diskgroup acfs disk '/dev/raw/raw5','/dev/raw/raw6' attribute 'COMPATIBLE.ASM' = '11.2', 'COMPATIBLE.ADVM' = '11.2'; 

Diskgroup created.

Creating the ADVM volumes

SQL> select
  2     nvl(a.name, '[candidate]')                       disk_group_name
  3   , b.path                                           disk_file_path
  4   , b.name                                           disk_file_name
  5   , b.failgroup                                      disk_file_fail_group
  6   , b.total_mb                                       total_mb
  7   , (b.total_mb - b.free_mb)                         used_mb
  8  -- , round((1- (b.free_mb / b.total_mb))*100, 2)      pct_used
  9  from
 10      v$asm_diskgroup a,v$asm_disk b  where a.group_number(+)=b.group_number
 11  order by  1,3,2,4  ;

disk group name      path              file name            fail group           file size (mb) used size (mb)
-------------------- ----------------- -------------------- -------------------- -------------- --------------
ACFS                 /dev/raw/raw5     ACFS_0000            ACFS_0000                     5,120             53
ACFS                 /dev/raw/raw6     ACFS_0001            ACFS_0001                     5,120             53
ARCHDG               /dev/raw/raw2     ARCHDG_0000          ARCHDG_0000                   5,120          3,447
ARCHDG               /dev/raw/raw9     ARCHDG_0001          ARCHDG_0001                   5,120          3,447
CRSDG                /dev/raw/raw1     CRSDG_0000           CRSDG_0000                    5,120            215
CRSDG                /dev/raw/raw8     CRSDG_0001           CRSDG_0001                    5,120            183
DATADG               /dev/raw/raw10    DATADG_0000          DATADG_0000                   5,120          1,673
DATADG               /dev/raw/raw11    DATADG_0001          DATADG_0001                   5,120          1,670
DATADG               /dev/raw/raw3     DATADG_0002          DATADG_0002                   5,120          1,666
DATADG               /dev/raw/raw4     DATADG_0003          DATADG_0003                   5,120          1,666
[candidate]          /dev/raw/raw12                                                           0              0
[candidate]          /dev/raw/raw13                                                           0              0
[candidate]          /dev/raw/raw14                                                           0              0
[candidate]          /dev/raw/raw7                                                            0              0

14 rows selected.

The query above shows roughly 5 GB of free space on each ACFS disk, so disk group ACFS has enough room for two 2 GB volumes:

[grid@jyrac1 ~]$ asmcmd volcreate -G ACFS -s 2G ACFS_VOL1
[grid@jyrac1 ~]$ asmcmd volcreate -G ACFS -s 2G ACFS_VOL2
[grid@jyrac1 ~]$ asmcmd volinfo -a
Diskgroup Name: ACFS

         Volume Name: ACFS_VOL1
         Volume Device: /dev/asm/acfs_vol1-319
         State: ENABLED
         Size (MB): 2048
         Resize Unit (MB): 32
         Redundancy: MIRROR
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: 
         Mountpath: 

         Volume Name: ACFS_VOL2
         Volume Device: /dev/asm/acfs_vol2-319
         State: ENABLED
         Size (MB): 2048
         Resize Unit (MB): 32
         Redundancy: MIRROR
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: 
         Mountpath: 

As shown above, the volumes in this disk group are created mirrored (they inherit the disk group's NORMAL redundancy), the resize unit is 32 MB, and the stripe width is 128 KB. With the volumes created, query the view again to see whether ASM file 7 is now visible:

SQL> select number_kffxp file#, disk_kffxp disk#, count(disk_kffxp) extents
  2  from x$kffxp
  3  where group_kffxp=4
  4  and disk_kffxp <> 65534
  5  and number_kffxp=7
  6  group by number_kffxp, disk_kffxp
  7  order by 1;

     FILE#      DISK#    EXTENTS
---------- ---------- ----------
         7          0          1
         7          1          1

The query result shows that, after creating ADVM volumes in disk group ACFS, file 7 can now be found.

So far no mount point is associated with the volumes, so they cannot be used yet. Before that, let's inspect the ADVM volume metadata, starting with the allocation units that hold the volume directory:

SQL> select x.xnum_kffxp "virtual extent",pxn_kffxp "physical extent",x.au_kffxp "au",x.disk_kffxp "disk #",d.name "disk name",d.path "disk path"
  2  from x$kffxp x, v$asm_disk_stat d
  3  where x.group_kffxp=d.group_number
  4  and x.disk_kffxp=d.disk_number
  5  and x.group_kffxp=4
  6  and x.number_kffxp=7
  7  order by 1,2,3;

virtual extent physical extent         au     disk # disk name                      disk path
-------------- --------------- ---------- ---------- ------------------------------ ----------------------------------------
             0               0         53          0 ACFS_0000                      /dev/raw/raw5
             0               1         53          1 ACFS_0001                      /dev/raw/raw6

The result above shows one virtual extent backed by two physical extents, which means file 7, the volume directory, is mirrored: it resides in AU 53 of disk 0 (/dev/raw/raw5) and AU 53 of disk 1 (/dev/raw/raw6).
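The extent numbering above follows the usual two-way mirror pattern: in a NORMAL redundancy disk group, virtual extent v is backed by physical extents 2*v and 2*v+1. A minimal shell sketch of that numbering (an illustration of the pxn_kffxp values, not an Oracle tool):

```shell
# NORMAL redundancy: virtual extent v of a file is backed by physical
# extents 2*v and 2*v+1 (the pxn_kffxp values in the query above).
v=0
echo "virtual extent $v -> physical extents $(( 2*v )) and $(( 2*v + 1 ))"
```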

Locating the volume directory AUs with kfed
File 1 (the file directory) always starts at AU 2 of disk 0; remember this location, disk 0 AU 2. It is the starting point for locating any file in ASM and plays a role similar to the boot sector that brings up the OS when a machine is powered on. File 1 occupies at least two AUs. Within file 1, each file takes one metadata block that records that file's extent map. A metadata block is 4 KB and an AU is 1 MB, so one AU holds the extent maps of 256 files. In disk 0 AU 2, everything describes metadata files: the first block is reserved for the system, and blocks 1 through 255 correspond to files 1 through 255, which covers all of the metadata files, so disk 0 AU 2 holds the extent maps of every metadata file. In the second AU of file 1, the first block describes file 256, the second block file 257, and so on. Every time ASM data is read, Oracle first reads file 1 to find where the target file sits on disk, and then reads that file's data. Since the volume directory is file 7, we read block 7 of AU 2 on disk 0 (/dev/raw/raw5):
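The arithmetic just described (which block of file 1 holds the directory entry for a given ASM file number) can be sketched in shell; this assumes the default 1 MB AU and 4 KB metadata block mentioned above:

```shell
# Which block of file 1 (the file directory) describes ASM file $file_no:
# files 0-255 live in the first AU (disk 0, AU 2), one block per file;
# file 256 is block 0 of file 1's second AU, file 257 is block 1, and so on.
file_no=7
if [ "$file_no" -lt 256 ]; then
  extent=0 block=$file_no
else
  extent=$(( 1 + (file_no - 256) / 256 ))
  block=$(( (file_no - 256) % 256 ))
fi
echo "file $file_no -> file-directory extent $extent, block $block"
```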

[grid@jyrac1 ~]$ kfed read /dev/raw/raw5 aun=2 blkn=7 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            4 ; 0x002: KFBTYP_FILEDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       7 ; 0x004: blk=7
kfbh.block.obj:                       1 ; 0x008: file=1
kfbh.check:                  3972298863 ; 0x00c: 0xecc4786f
kfbh.fcn.base:                     6805 ; 0x010: 0x00001a95
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfffdb.node.incarn:                   1 ; 0x000: A=1 NUMM=0x0
kfffdb.node.frlist.number:   4294967295 ; 0x004: 0xffffffff
kfffdb.node.frlist.incarn:            0 ; 0x008: A=0 NUMM=0x0
kfffdb.hibytes:                       0 ; 0x00c: 0x00000000
kfffdb.lobytes:                 1048576 ; 0x010: 0x00100000
kfffdb.xtntcnt:                       3 ; 0x014: 0x00000003
kfffdb.xtnteof:                       3 ; 0x018: 0x00000003
kfffdb.blkSize:                    4096 ; 0x01c: 0x00001000
kfffdb.flags:                         1 ; 0x020: O=1 S=0 S=0 D=0 C=0 I=0 R=0 A=0
kfffdb.fileType:                     15 ; 0x021: 0x0f
kfffdb.dXrs:                         19 ; 0x022: SCHE=0x1 NUMB=0x3
kfffdb.iXrs:                         19 ; 0x023: SCHE=0x1 NUMB=0x3
kfffdb.dXsiz[0]:             4294967295 ; 0x024: 0xffffffff
kfffdb.dXsiz[1]:                      0 ; 0x028: 0x00000000
kfffdb.dXsiz[2]:                      0 ; 0x02c: 0x00000000
kfffdb.iXsiz[0]:             4294967295 ; 0x030: 0xffffffff
kfffdb.iXsiz[1]:                      0 ; 0x034: 0x00000000
kfffdb.iXsiz[2]:                      0 ; 0x038: 0x00000000
kfffdb.xtntblk:                       3 ; 0x03c: 0x0003
kfffdb.break:                        60 ; 0x03e: 0x003c
kfffdb.priZn:                         0 ; 0x040: KFDZN_COLD
kfffdb.secZn:                         0 ; 0x041: KFDZN_COLD
kfffdb.ub2spare:                      0 ; 0x042: 0x0000
kfffdb.alias[0]:             4294967295 ; 0x044: 0xffffffff
kfffdb.alias[1]:             4294967295 ; 0x048: 0xffffffff
kfffdb.strpwdth:                      0 ; 0x04c: 0x00
kfffdb.strpsz:                        0 ; 0x04d: 0x00
kfffdb.usmsz:                         0 ; 0x04e: 0x0000
kfffdb.crets.hi:               33043401 ; 0x050: HOUR=0x9 DAYS=0x1e MNTH=0xc YEAR=0x7e0
kfffdb.crets.lo:             1933631488 ; 0x054: USEC=0x0 MSEC=0x38 SECS=0x34 MINS=0x1c
kfffdb.modts.hi:               33043401 ; 0x058: HOUR=0x9 DAYS=0x1e MNTH=0xc YEAR=0x7e0
kfffdb.modts.lo:             1933631488 ; 0x05c: USEC=0x0 MSEC=0x38 SECS=0x34 MINS=0x1c
kfffdb.dasz[0]:                       0 ; 0x060: 0x00
kfffdb.dasz[1]:                       0 ; 0x061: 0x00
kfffdb.dasz[2]:                       0 ; 0x062: 0x00
kfffdb.dasz[3]:                       0 ; 0x063: 0x00
kfffdb.permissn:                      0 ; 0x064: 0x00
kfffdb.ub1spar1:                      0 ; 0x065: 0x00
kfffdb.ub2spar2:                      0 ; 0x066: 0x0000
kfffdb.user.entnum:                   0 ; 0x068: 0x0000
kfffdb.user.entinc:                   0 ; 0x06a: 0x0000
kfffdb.group.entnum:                  0 ; 0x06c: 0x0000
kfffdb.group.entinc:                  0 ; 0x06e: 0x0000
kfffdb.spare[0]:                      0 ; 0x070: 0x00000000
kfffdb.spare[1]:                      0 ; 0x074: 0x00000000
kfffdb.spare[2]:                      0 ; 0x078: 0x00000000
kfffdb.spare[3]:                      0 ; 0x07c: 0x00000000
kfffdb.spare[4]:                      0 ; 0x080: 0x00000000
kfffdb.spare[5]:                      0 ; 0x084: 0x00000000
kfffdb.spare[6]:                      0 ; 0x088: 0x00000000
kfffdb.spare[7]:                      0 ; 0x08c: 0x00000000
kfffdb.spare[8]:                      0 ; 0x090: 0x00000000
kfffdb.spare[9]:                      0 ; 0x094: 0x00000000
kfffdb.spare[10]:                     0 ; 0x098: 0x00000000
kfffdb.spare[11]:                     0 ; 0x09c: 0x00000000
kfffdb.usm:                             ; 0x0a0: length=0
kfffde[0].xptr.au:                   53 ; 0x4a0: 0x00000035
kfffde[0].xptr.disk:                  0 ; 0x4a4: 0x0000
kfffde[0].xptr.flags:                 0 ; 0x4a6: L=0 E=0 D=0 S=0
kfffde[0].xptr.chk:                  31 ; 0x4a7: 0x1f
kfffde[1].xptr.au:                   53 ; 0x4a8: 0x00000035
kfffde[1].xptr.disk:                  1 ; 0x4ac: 0x0001
kfffde[1].xptr.flags:                 0 ; 0x4ae: L=0 E=0 D=0 S=0
kfffde[1].xptr.chk:                  30 ; 0x4af: 0x1e
kfffde[2].xptr.au:           4294967294 ; 0x4b0: 0xfffffffe
kfffde[2].xptr.disk:              65534 ; 0x4b4: 0xfffe
kfffde[2].xptr.flags:                 0 ; 0x4b6: L=0 E=0 D=0 S=0
kfffde[2].xptr.chk:                  42 ; 0x4b7: 0x2a

From kfffde[0].xptr.au=53 with kfffde[0].xptr.disk=0, and kfffde[1].xptr.au=53 with kfffde[1].xptr.disk=1, we can confirm the volume directory resides in AU 53 of disk 0 (/dev/raw/raw5) and AU 53 of disk 1 (/dev/raw/raw6), exactly matching the distribution returned by the SQL query above.
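As a side note, the kfffdb.crets/modts fields in the same file directory block pack the creation and modification time into bit fields. The layout below is inferred purely from the HOUR/DAYS/MNTH/YEAR and MINS/SECS annotations kfed itself prints next to the raw values, not from any documented structure:

```shell
# Decode kfffdb.crets from the dump above; the bit layout is inferred
# from kfed's own HOUR/DAYS/MNTH/YEAR and MINS/SECS/MSEC annotations.
hi=33043401     # kfffdb.crets.hi
lo=1933631488   # kfffdb.crets.lo
ts=$(printf '%04d-%02d-%02d %02d:%02d:%02d' \
  $(( hi >> 14 )) $(( (hi >> 10) & 0xF )) $(( (hi >> 5) & 0x1F )) \
  $(( hi & 0x1F )) $(( lo >> 26 )) $(( (lo >> 20) & 0x3F )))
echo "$ts"
```

Decoding the values from the dump yields 2016-12-30 09:28:52, matching the YEAR=0x7e0, MNTH=0xc, DAYS=0x1e, HOUR=0x9 annotations.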

Creating an ASM Cluster File System (ACFS) on the volume devices

[root@jyrac1 bin]# /sbin/mkfs -t acfs /dev/asm/acfs_vol1-319
mkfs.acfs: version                   = 11.2.0.4.0
mkfs.acfs: on-disk version           = 39.0
mkfs.acfs: volume                    = /dev/asm/acfs_vol1-319
mkfs.acfs: volume size               = 2147483648
mkfs.acfs: Format complete.
[root@jyrac1 bin]# /sbin/mkfs -t acfs /dev/asm/acfs_vol2-319
mkfs.acfs: version                   = 11.2.0.4.0
mkfs.acfs: on-disk version           = 39.0
mkfs.acfs: volume                    = /dev/asm/acfs_vol2-319
mkfs.acfs: volume size               = 2147483648
mkfs.acfs: Format complete.


[root@jyrac1 bin]# mkdir /acfs1
[root@jyrac1 bin]# mkdir /acfs2

[root@jyrac1 bin]# chown -R grid:oinstall /acfs1
[root@jyrac1 bin]# chown -R grid:oinstall /acfs2

[root@jyrac1 bin]# mount -t acfs /dev/asm/acfs_vol1-319 /acfs1
[root@jyrac1 bin]# mount -t acfs /dev/asm/acfs_vol2-319 /acfs2
[root@jyrac1 bin]# mount
/dev/sda1 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/asm/acfs_vol1-319 on /acfs1 type acfs (rw)
/dev/asm/acfs_vol2-319 on /acfs2 type acfs (rw)


[grid@jyrac1 ~]$ asmcmd volinfo -G ACFS ACFS_VOL1
Diskgroup Name: ACFS

         Volume Name: ACFS_VOL1
         Volume Device: /dev/asm/acfs_vol1-319
         State: ENABLED
         Size (MB): 2048
         Resize Unit (MB): 32
         Redundancy: MIRROR
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: ACFS
         Mountpath: /acfs1 

[grid@jyrac1 ~]$ asmcmd volinfo -G ACFS ACFS_VOL2
Diskgroup Name: ACFS

         Volume Name: ACFS_VOL2
         Volume Device: /dev/asm/acfs_vol2-319
         State: ENABLED
         Size (MB): 2048
         Resize Unit (MB): 32
         Redundancy: MIRROR
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: ACFS
         Mountpath: /acfs2 


[root@jyrac1 bin]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              35G   25G  8.4G  75% /
tmpfs                 3.9G  170M  3.8G   5% /dev/shm
/dev/asm/acfs_vol1-319
                      2.0G   43M  2.0G   3% /acfs1
/dev/asm/acfs_vol2-319
                      2.0G   43M  2.0G   3% /acfs2

Viewing the actual ADVM metadata with kfed

[grid@jyrac1 ~]$ kfed read /dev/raw/raw5 aun=53 blkn=0 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           22 ; 0x002: KFBTYP_VOLUMEDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       7 ; 0x008: file=7
kfbh.check:                  1546379724 ; 0x00c: 0x5c2be1cc
kfbh.fcn.base:                     7356 ; 0x010: 0x00001cbc
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kffdnd.bnode.incarn:                  1 ; 0x000: A=1 NUMM=0x0
kffdnd.bnode.frlist.number:  4294967295 ; 0x004: 0xffffffff
kffdnd.bnode.frlist.incarn:           0 ; 0x008: A=0 NUMM=0x0
kffdnd.overfl.number:                 1 ; 0x00c: 0x00000001
kffdnd.overfl.incarn:                 1 ; 0x010: A=1 NUMM=0x0
kffdnd.parent.number:                 0 ; 0x014: 0x00000000
kffdnd.parent.incarn:                 1 ; 0x018: A=1 NUMM=0x0
kffdnd.fstblk.number:                 0 ; 0x01c: 0x00000000
kffdnd.fstblk.incarn:                 1 ; 0x020: A=1 NUMM=0x0
kfvvde.entry.incarn:                  1 ; 0x024: A=1 NUMM=0x0
kfvvde.entry.hash:                    0 ; 0x028: 0x00000000
kfvvde.entry.refer.number:   4294967295 ; 0x02c: 0xffffffff
kfvvde.entry.refer.incarn:            0 ; 0x030: A=0 NUMM=0x0
kfvvde.volnm:           ++AVD_DG_NUMBER ; 0x034: length=15
kfvvde.usage:                           ; 0x054: length=0
kfvvde.dgname:                          ; 0x074: length=0
kfvvde.clname:                          ; 0x094: length=0
kfvvde.mountpath:                       ; 0x0b4: length=0
kfvvde.drlinit:                       0 ; 0x4b5: 0x00
kfvvde.pad1:                          0 ; 0x4b6: 0x0000
kfvvde.volfnum.number:                0 ; 0x4b8: 0x00000000
kfvvde.volfnum.incarn:                0 ; 0x4bc: 0x00000000
kfvvde.drlfnum.number:                0 ; 0x4c0: 0x00000000
kfvvde.drlfnum.incarn:                0 ; 0x4c4: 0x00000000
kfvvde.volnum:                        0 ; 0x4c8: 0x0000
kfvvde.avddgnum:                    319 ; 0x4ca: 0x013f
kfvvde.extentsz:                      0 ; 0x4cc: 0x00000000
kfvvde.volstate:                      4 ; 0x4d0: D=0 C=0 R=1
kfvvde.pad[0]:                        0 ; 0x4d1: 0x00
kfvvde.pad[1]:                        0 ; 0x4d2: 0x00
kfvvde.pad[2]:                        0 ; 0x4d3: 0x00
kfvvde.pad[3]:                        0 ; 0x4d4: 0x00
kfvvde.pad[4]:                        0 ; 0x4d5: 0x00

The output above is block 0 of AU 53. It contains only the ADVM volume marker (++AVD_DG_NUMBER); the real volume information starts at block 1:

[grid@jyrac1 ~]$ kfed read /dev/raw/raw5 aun=53 blkn=1 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           22 ; 0x002: KFBTYP_VOLUMEDIR --metadata block type
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       1 ; 0x004: blk=1 --block number within the AU
kfbh.block.obj:                       7 ; 0x008: file=7 --ASM file number of this metadata; the volume directory is file 7
kfbh.check:                  3589956819 ; 0x00c: 0xd5fa64d3
kfbh.fcn.base:                     7697 ; 0x010: 0x00001e11
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kffdnd.bnode.incarn:                  1 ; 0x000: A=1 NUMM=0x0 --allocation info: block incarnation and pointer to the next free-list block
kffdnd.bnode.frlist.number:  4294967295 ; 0x004: 0xffffffff
kffdnd.bnode.frlist.incarn:           0 ; 0x008: A=0 NUMM=0x0
kffdnd.overfl.number:                 2 ; 0x00c: 0x00000002 --overfl: points to the next block at the same level
kffdnd.overfl.incarn:                 1 ; 0x010: A=1 NUMM=0x0
kffdnd.parent.number:        4294967295 ; 0x014: 0xffffffff
kffdnd.parent.incarn:                 0 ; 0x018: A=0 NUMM=0x0
kffdnd.fstblk.number:                 0 ; 0x01c: 0x00000000 --points to the block one level up
kffdnd.fstblk.incarn:                 1 ; 0x020: A=1 NUMM=0x0
kfvvde.entry.incarn:                  1 ; 0x024: A=1 NUMM=0x0
kfvvde.entry.hash:                    0 ; 0x028: 0x00000000
kfvvde.entry.refer.number:   4294967295 ; 0x02c: 0xffffffff
kfvvde.entry.refer.incarn:            0 ; 0x030: A=0 NUMM=0x0
kfvvde.volnm:                 ACFS_VOL1 ; 0x034: length=9 --the ADVM volume name
kfvvde.usage:                      ACFS ; 0x054: length=4 --the volume usage type; here ACFS
kfvvde.dgname:                          ; 0x074: length=0
kfvvde.clname:                          ; 0x094: length=0
kfvvde.mountpath:                /acfs1 ; 0x0b4: length=6 --the ACFS mount path
kfvvde.drlinit:                       1 ; 0x4b5: 0x01
kfvvde.pad1:                          0 ; 0x4b6: 0x0000
kfvvde.volfnum.number:              257 ; 0x4b8: 0x00000101 --the volume file number
kfvvde.volfnum.incarn:        931944533 ; 0x4bc: 0x378c5855
kfvvde.drlfnum.number:              256 ; 0x4c0: 0x00000100 --file number of the volume's dirty region logging (DRL) file
kfvvde.drlfnum.incarn:        931944533 ; 0x4c4: 0x378c5855
kfvvde.volnum:                        1 ; 0x4c8: 0x0001 --the volume number, starting from 1
kfvvde.avddgnum:                    319 ; 0x4ca: 0x013f
kfvvde.extentsz:                      8 ; 0x4cc: 0x00000008 --volume extent size in AUs; 8 x 1MB = 8MB per extent, which times the 4 stripe columns matches the 32MB resize unit
kfvvde.volstate:                      2 ; 0x4d0: D=0 C=1 R=0 --the volume state; 2 (C=1) appears to mean enabled/usable
kfvvde.pad[0]:                        0 ; 0x4d1: 0x00
kfvvde.pad[1]:                        0 ; 0x4d2: 0x00
kfvvde.pad[2]:                        0 ; 0x4d3: 0x00
kfvvde.pad[3]:                        0 ; 0x4d4: 0x00
kfvvde.pad[4]:                        0 ; 0x4d5: 0x00

Block 1 of ASM metadata file 7 holds the entry for the first volume (kfvvde.volnm: ACFS_VOL1), with two files associated with the volume:
.the DRL (dirty region logging) file (kfvvde.drlfnum.number: 256)
.the volume file (kfvvde.volfnum.number: 257)

[grid@jyrac1 ~]$ kfed read /dev/raw/raw5 aun=53 blkn=2 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           22 ; 0x002: KFBTYP_VOLUMEDIR --metadata block type
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       2 ; 0x004: blk=2 --block number within the AU
kfbh.block.obj:                       7 ; 0x008: file=7 --ASM file number of this metadata; the volume directory is file 7
kfbh.check:                   705009710 ; 0x00c: 0x2a05982e
kfbh.fcn.base:                     7699 ; 0x010: 0x00001e13
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kffdnd.bnode.incarn:                  1 ; 0x000: A=1 NUMM=0x0 --allocation info: block incarnation and pointer to the next free-list block
kffdnd.bnode.frlist.number:  4294967295 ; 0x004: 0xffffffff
kffdnd.bnode.frlist.incarn:           0 ; 0x008: A=0 NUMM=0x0
kffdnd.overfl.number:        4294967295 ; 0x00c: 0xffffffff --overfl: points to the next block at the same level
kffdnd.overfl.incarn:                 0 ; 0x010: A=0 NUMM=0x0
kffdnd.parent.number:        4294967295 ; 0x014: 0xffffffff
kffdnd.parent.incarn:                 0 ; 0x018: A=0 NUMM=0x0
kffdnd.fstblk.number:                 0 ; 0x01c: 0x00000000 --points to the block one level up
kffdnd.fstblk.incarn:                 1 ; 0x020: A=1 NUMM=0x0
kfvvde.entry.incarn:                  1 ; 0x024: A=1 NUMM=0x0
kfvvde.entry.hash:                    0 ; 0x028: 0x00000000
kfvvde.entry.refer.number:   4294967295 ; 0x02c: 0xffffffff
kfvvde.entry.refer.incarn:            0 ; 0x030: A=0 NUMM=0x0
kfvvde.volnm:                 ACFS_VOL2 ; 0x034: length=9 --the ADVM volume name
kfvvde.usage:                      ACFS ; 0x054: length=4 --the volume usage type; here ACFS
kfvvde.dgname:                          ; 0x074: length=0
kfvvde.clname:                          ; 0x094: length=0
kfvvde.mountpath:                /acfs2 ; 0x0b4: length=6 --the ACFS mount path
kfvvde.drlinit:                       1 ; 0x4b5: 0x01
kfvvde.pad1:                          0 ; 0x4b6: 0x0000
kfvvde.volfnum.number:              259 ; 0x4b8: 0x00000103 --the volume file number
kfvvde.volfnum.incarn:        931944539 ; 0x4bc: 0x378c585b
kfvvde.drlfnum.number:              258 ; 0x4c0: 0x00000102 --file number of the volume's dirty region logging (DRL) file
kfvvde.drlfnum.incarn:        931944539 ; 0x4c4: 0x378c585b
kfvvde.volnum:                        2 ; 0x4c8: 0x0002 --the volume number, starting from 1
kfvvde.avddgnum:                    319 ; 0x4ca: 0x013f
kfvvde.extentsz:                      8 ; 0x4cc: 0x00000008 --volume extent size in AUs; 8 x 1MB = 8MB per extent, which times the 4 stripe columns matches the 32MB resize unit
kfvvde.volstate:                      2 ; 0x4d0: D=0 C=1 R=0 --the volume state; 2 (C=1) appears to mean enabled/usable
kfvvde.pad[0]:                        0 ; 0x4d1: 0x00
kfvvde.pad[1]:                        0 ; 0x4d2: 0x00
kfvvde.pad[2]:                        0 ; 0x4d3: 0x00
kfvvde.pad[3]:                        0 ; 0x4d4: 0x00
kfvvde.pad[4]:                        0 ; 0x4d5: 0x00

Block 2 of ASM metadata file 7 holds the entry for the second volume (kfvvde.volnm: ACFS_VOL2), with two files associated with the volume:
.the DRL (dirty region logging) file (kfvvde.drlfnum.number: 258)
.the volume file (kfvvde.volfnum.number: 259)

Summary:
One or more ASM dynamic volume devices can be configured in a disk group. The ASM Cluster File System is built on top of an ASM disk group through the ADVM interface, and ADVM, like a database, is a client of ASM. Two file types are associated with ADVM volumes:
.ASMVOL: the volume file, which acts as the container for volume storage.
.ASMVDRL: a file holding dirty region logging information, used when resynchronizing mirrored data.

Oracle ASM Alias Directory http://www.jydba.net/index.php/archives/2003 Thu, 29 Dec 2016 13:37:04 +0000

Alias Directory
The alias directory is ASM file number 6; it provides a hierarchical naming scheme for all the files in a disk group. The system file name created for each file is based on the file type, the database instance, and type-specific information such as the tablespace name. A user alias is generated when a file is created with a fully qualified path name. The alias directory contains all alias metadata: each system alias, system directory, user directory, and user alias, indexed by alias number.

Each alias directory entry contains the following information:
.Alias name (or directory name)
.Alias incarnation number
.File number
.File incarnation number
.Parent directory
.System flag

The alias directory is file number 6 (F6) in every disk group. The alias incarnation number, much like the file incarnation number, is used to distinguish aliases or directories that may reuse the same alias number. The system flag is set on system-created aliases and directories, but never on user-created ones.

ASM alias information can be obtained by querying the v$asm_alias view.

SQL> select * from v$asm_alias where group_number=3 and file_number<>4294967295;

NAME                                     GROUP_NUMBER FILE_NUMBER FILE_INCARNATION ALIAS_INDEX ALIAS_INCARNATION PARENT_INDEX REFERENCE_INDEX ALIAS_DIRECTORY      SYSTEM_CREATED
---------------------------------------- ------------ ----------- ---------------- ----------- ----------------- ------------ --------------- -------------------- --------------------
SYSAUX.258.930413055                                3         258        930413055         106                 1     50331754        67108863 N                    Y
SYSTEM.259.930413057                                3         259        930413057         107                 1     50331754        67108863 N                    Y
EXAMPLE.260.930413057                               3         260        930413057         108                 1     50331754        67108863 N                    Y
UNDOTBS2.261.930413057                              3         261        930413057         109                 1     50331754        67108863 N                    Y
UNDOTBS1.262.930413057                              3         262        930413057         110                 1     50331754        67108863 N                    Y
USERS.263.930413057                                 3         263        930413057         111                 1     50331754        67108863 N                    Y
FILE_TRANSFER.270.930515465                         3         270        930515465         112                 1     50331754        67108863 N                    Y
test01.dbf                                          3         270        930515465         113                 1     50331754        67108863 N                    N
CS.271.931880499                                    3         271        931880499         114                 5     50331754        67108863 N                    Y
CS_STRIPE_COARSE.272.931882089                      3         272        931882089         115                 3     50331754        67108863 N                    Y
NOT_IMPORTANT.273.931882831                         3         273        931882831         116                 1     50331754        67108863 N                    Y
current.257.930412709                               3         257        930412709         159                 3     50331807        67108863 N                    Y
group_1.264.930413221                               3         264        930413221         265                 1     50331913        67108863 N                    Y
group_2.265.930413225                               3         265        930413225         266                 1     50331913        67108863 N                    Y
group_3.266.930413227                               3         266        930413227         267                 1     50331913        67108863 N                    Y
group_4.267.930413231                               3         267        930413231         268                 1     50331913        67108863 N                    Y
TEMP.268.930413239                                  3         268        930413239         318                 1     50331966        67108863 N                    Y
spfilejyrac.ora                                     3         256        930411925          60                 1     50331701        67108863 N                    N
FILE_TRANSFER_0_0.269.930515105                     3         269        930515105         583                 3     50332231        67108863 N                    Y
SPFILE.256.930411925                                3         256        930411925         530                 1     50332178        67108863 N                    Y
tts.dmp                                             3         269        930515105           2                 3     50331648        67108863 N                    N

21 rows selected.

The output of the following query lists the directories first, followed by the full path names of the files. It assumes all files were created following the ASM file naming conventions and, in particular, that the database name appears in the alias (the full_path column). The full_path variable in the query refers to the alias. The DIR column marks directories and the SYS column marks system-created entries.

SQL> col full_path format a64
SQL> col dir format a3
SQL> col sys format a3
SQL> set pagesize 1000
SQL> set linesize 200
SQL> select concat ('+'|| gname, sys_connect_by_path (aname,'/')) full_path, dir, sys
  2   from (
  3    select g.name gname, 
  4     a.parent_index pindex, 
  5     a.name aname,
  6     a.reference_index rindex, 
  7     a.alias_directory dir, 
  8     a.system_created sys 
  9    from v$asm_alias a, v$asm_diskgroup g 
 10    where a.group_number = g.group_number and a.group_number=3)
 11   start with (mod(pindex, power(2, 24))) = 0
 12   connect by prior rindex = pindex
 13   order by dir desc, full_path asc;

FULL_PATH                                                        DIR SYS
---------------------------------------------------------------- --- ---
+DATADG/DB_UNKNOWN                                               Y   Y
+DATADG/DB_UNKNOWN/PARAMETERFILE                                 Y   Y
+DATADG/JYRAC                                                    Y   N
+DATADG/JYRAC/CONTROLFILE                                        Y   N
+DATADG/JYRAC/DATAFILE                                           Y   N
+DATADG/JYRAC/DUMPSET                                            Y   Y
+DATADG/JYRAC/ONLINELOG                                          Y   N
+DATADG/JYRAC/PARAMETERFILE                                      Y   N
+DATADG/JYRAC/TEMPFILE                                           Y   N
+DATADG/JYRAC/oradata                                            Y   N
+DATADG/JYRAC/temp_files                                         Y   N
+DATADG/DB_UNKNOWN/PARAMETERFILE/SPFILE.256.930411925            N   Y
+DATADG/JYRAC/CONTROLFILE/current.257.930412709                  N   Y
+DATADG/JYRAC/DATAFILE/CS.271.931880499                          N   Y
+DATADG/JYRAC/DATAFILE/CS_STRIPE_COARSE.272.931882089            N   Y
+DATADG/JYRAC/DATAFILE/EXAMPLE.260.930413057                     N   Y
+DATADG/JYRAC/DATAFILE/FILE_TRANSFER.270.930515465               N   Y
+DATADG/JYRAC/DATAFILE/NOT_IMPORTANT.273.931882831               N   Y
+DATADG/JYRAC/DATAFILE/SYSAUX.258.930413055                      N   Y
+DATADG/JYRAC/DATAFILE/SYSTEM.259.930413057                      N   Y
+DATADG/JYRAC/DATAFILE/UNDOTBS1.262.930413057                    N   Y
+DATADG/JYRAC/DATAFILE/UNDOTBS2.261.930413057                    N   Y
+DATADG/JYRAC/DATAFILE/USERS.263.930413057                       N   Y
+DATADG/JYRAC/DATAFILE/test01.dbf                                N   N
+DATADG/JYRAC/DUMPSET/FILE_TRANSFER_0_0.269.930515105            N   Y
+DATADG/JYRAC/ONLINELOG/group_1.264.930413221                    N   Y
+DATADG/JYRAC/ONLINELOG/group_2.265.930413225                    N   Y
+DATADG/JYRAC/ONLINELOG/group_3.266.930413227                    N   Y
+DATADG/JYRAC/ONLINELOG/group_4.267.930413231                    N   Y
+DATADG/JYRAC/TEMPFILE/TEMP.268.930413239                        N   Y
+DATADG/JYRAC/spfilejyrac.ora                                    N   N
+DATADG/tts.dmp                                                  N   N

32 rows selected.

The alias directory contents really matter: if you ever need to extract a data file from ASM, you at least have to know the database's file names, and once you know them, extracting the files with amdu is far easier. So where does the alias metadata live? Below we query the AU distribution of the alias directory for disk group 3.
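For example, once the alias listing has given you a file number, an amdu extraction can be sketched as below. The option names reflect common amdu usage, and the file number 270 (test01.dbf) comes from the v$asm_alias output above; verify the exact options with `amdu -help` on your release before relying on them:

```shell
# Build the amdu command to extract ASM file 270 of disk group DATADG.
# Run the resulting command as the grid owner on a node that can see the disks.
DG=DATADG
FILE_NO=270      # from the alias directory listing above
CMD="amdu -diskstring '/dev/raw/raw*' -extract ${DG}.${FILE_NO} -output test01.dbf -noreport -nodir"
echo "$CMD"
```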

SQL> select group_number,name,type from v$asm_diskgroup;

GROUP_NUMBER NAME                                     TYPE
------------ ---------------------------------------- ------------
           1 ARCHDG                                   NORMAL
           2 CRSDG                                    EXTERN
           3 DATADG                                   NORMAL
           4 TESTDG                                   NORMAL

SQL> select group_number, disk_number, state, name,failgroup,path from v$asm_disk where group_number=3;

GROUP_NUMBER DISK_NUMBER STATE                          NAME                                     FAILGROUP                      PATH
------------ ----------- ------------------------------ ---------------------------------------- ------------------------------ ------------------------------
           3           0 NORMAL                         DATADG_0001                              DATADG_0001                    /dev/raw/raw11
           3           3 NORMAL                         DATADG_0000                              DATADG_0000                    /dev/raw/raw10
           3           1 NORMAL                         DATADG_0003                              DATADG_0003                    /dev/raw/raw4
           3           2 NORMAL                         DATADG_0002                              DATADG_0002                    /dev/raw/raw3

SQL> select number_kffxp file#, disk_kffxp disk#, count(disk_kffxp) extents
  2  from x$kffxp
  3  where group_kffxp=3
  4   and disk_kffxp <> 65534
  5   and number_kffxp=6
  6  group by number_kffxp, disk_kffxp
  7  order by 1;

     FILE#      DISK#    EXTENTS
---------- ---------- ----------
         6          1          1
         6          2          1
         6          3          1

The query above shows the alias directory spread across disks 1, 2 and 3. Because disk group DATADG uses normal redundancy and has four failure groups, the alias directory is triple-mirrored. The following query shows that the three copies live in AU 36 on disk 1 (/dev/raw/raw4), AU 38 on disk 2 (/dev/raw/raw3), and AU 37 on disk 3 (/dev/raw/raw10).

SQL> select x.xnum_kffxp "virtual extent",pxn_kffxp "physical extent",x.au_kffxp "au",x.disk_kffxp "disk #",d.name "disk name"
  2  from x$kffxp x, v$asm_disk_stat d
  3  where x.group_kffxp=d.group_number
  4  and x.disk_kffxp=d.disk_number
  5  and x.group_kffxp=3
  6  and x.number_kffxp=6
  7  order by 1,2,3;

virtual extent physical extent         au     disk # disk name
-------------- --------------- ---------- ---------- ------------------------------------------------------------
             0               0         38          2 DATADG_0002
             0               1         37          3 DATADG_0000
             0               2         36          1 DATADG_0003
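The virtual-to-physical extent numbering in this output can be sketched as below. This is a minimal illustration, assuming the usual x$kffxp numbering where a virtual extent n of a file mirrored m ways is backed by physical extents m*n .. m*n+m-1; the function name is hypothetical.

```python
def physical_extent_numbers(virtual_extent, mirrors=3):
    """Physical extent numbers (pxn_kffxp) that back one virtual extent
    (xnum_kffxp) of a file mirrored `mirrors` ways."""
    base = virtual_extent * mirrors
    return list(range(base, base + mirrors))

# Virtual extent 0 of the triple-mirrored alias directory:
print(physical_extent_numbers(0))  # [0, 1, 2]
```

This matches the three rows above: one virtual extent, three physical copies on three different disks.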

Querying the alias directory distribution with the kfed tool
File 1 (the ASM file directory) always starts at AU 2 of disk 0 — remember this location: disk 0, AU 2. It is the starting point for locating any file in ASM, acting somewhat like a disk's boot sector, which is responsible for bringing up the OS at power-on. File 1 occupies at least two AUs. Within file 1, each file takes one metadata block that stores that file's extent map. Each metadata block is 4 KB and an AU is 1 MB, so each AU holds the extent maps of 256 files. In AU 2 of disk 0, every entry describes a metadata file: the first metadata block is used by the system, and blocks 1 through 255 — 255 blocks in all — correspond to files 1 through 255, which is to say all of the metadata files. In other words, AU 2 of disk 0 holds the extent maps of every metadata file. File 1's second AU starts with file 256 in its first block, file 257 in its second block, and so on. Whenever Oracle reads data from ASM, it first reads file 1, finds where the target file is laid out on disk, and then reads that file's data. Since the alias directory is file 6, we need to read block 6 of AU 2 on disk 0 (/dev/raw/raw11).
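The block arithmetic described above (4 KB metadata blocks, 1 MB AUs, hence 256 file-directory entries per AU) can be sketched as follows; `file_directory_location` is a hypothetical helper and the sizes are the defaults assumed in this article.

```python
AU_SIZE = 1024 * 1024                  # 1 MB allocation unit
BLOCK_SIZE = 4096                      # 4 KB metadata block
BLOCKS_PER_AU = AU_SIZE // BLOCK_SIZE  # 256 file-directory entries per AU

def file_directory_location(file_number):
    """(extent index within file 1, block within that AU) of the
    file-directory entry for a given ASM file number."""
    return divmod(file_number, BLOCKS_PER_AU)

# File 1's first extent is AU 2 of disk 0, so the alias directory
# (file 6) is described by block 6 of that AU:
print(file_directory_location(6))    # (0, 6)
print(file_directory_location(256))  # (1, 0) -- first block of file 1's second AU
print(file_directory_location(257))  # (1, 1)
```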

[grid@jyrac1 ~]$ kfed read /dev/raw/raw11 aun=2 blkn=6 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            4 ; 0x002: KFBTYP_FILEDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       6 ; 0x004: blk=6
kfbh.block.obj:                       1 ; 0x008: file=1
kfbh.check:                   893084381 ; 0x00c: 0x353b62dd
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfffdb.node.incarn:                   1 ; 0x000: A=1 NUMM=0x0
kfffdb.node.frlist.number:   4294967295 ; 0x004: 0xffffffff
kfffdb.node.frlist.incarn:            0 ; 0x008: A=0 NUMM=0x0
kfffdb.hibytes:                       0 ; 0x00c: 0x00000000
kfffdb.lobytes:                 1048576 ; 0x010: 0x00100000
kfffdb.xtntcnt:                       3 ; 0x014: 0x00000003
kfffdb.xtnteof:                       3 ; 0x018: 0x00000003
kfffdb.blkSize:                    4096 ; 0x01c: 0x00001000
kfffdb.flags:                         1 ; 0x020: O=1 S=0 S=0 D=0 C=0 I=0 R=0 A=0
kfffdb.fileType:                     15 ; 0x021: 0x0f
kfffdb.dXrs:                         19 ; 0x022: SCHE=0x1 NUMB=0x3
kfffdb.iXrs:                         19 ; 0x023: SCHE=0x1 NUMB=0x3
kfffdb.dXsiz[0]:             4294967295 ; 0x024: 0xffffffff
kfffdb.dXsiz[1]:                      0 ; 0x028: 0x00000000
kfffdb.dXsiz[2]:                      0 ; 0x02c: 0x00000000
kfffdb.iXsiz[0]:             4294967295 ; 0x030: 0xffffffff
kfffdb.iXsiz[1]:                      0 ; 0x034: 0x00000000
kfffdb.iXsiz[2]:                      0 ; 0x038: 0x00000000
kfffdb.xtntblk:                       3 ; 0x03c: 0x0003
kfffdb.break:                        60 ; 0x03e: 0x003c
kfffdb.priZn:                         0 ; 0x040: KFDZN_COLD
kfffdb.secZn:                         0 ; 0x041: KFDZN_COLD
kfffdb.ub2spare:                      0 ; 0x042: 0x0000
kfffdb.alias[0]:             4294967295 ; 0x044: 0xffffffff
kfffdb.alias[1]:             4294967295 ; 0x048: 0xffffffff
kfffdb.strpwdth:                      0 ; 0x04c: 0x00
kfffdb.strpsz:                        0 ; 0x04d: 0x00
kfffdb.usmsz:                         0 ; 0x04e: 0x0000
kfffdb.crets.hi:               33042831 ; 0x050: HOUR=0xf DAYS=0xc MNTH=0xc YEAR=0x7e0
kfffdb.crets.lo:             2457465856 ; 0x054: USEC=0x0 MSEC=0x27d SECS=0x27 MINS=0x24
kfffdb.modts.hi:               33042831 ; 0x058: HOUR=0xf DAYS=0xc MNTH=0xc YEAR=0x7e0
kfffdb.modts.lo:             2457465856 ; 0x05c: USEC=0x0 MSEC=0x27d SECS=0x27 MINS=0x24
kfffdb.dasz[0]:                       0 ; 0x060: 0x00
kfffdb.dasz[1]:                       0 ; 0x061: 0x00
kfffdb.dasz[2]:                       0 ; 0x062: 0x00
kfffdb.dasz[3]:                       0 ; 0x063: 0x00
kfffdb.permissn:                      0 ; 0x064: 0x00
kfffdb.ub1spar1:                      0 ; 0x065: 0x00
kfffdb.ub2spar2:                      0 ; 0x066: 0x0000
kfffdb.user.entnum:                   0 ; 0x068: 0x0000
kfffdb.user.entinc:                   0 ; 0x06a: 0x0000
kfffdb.group.entnum:                  0 ; 0x06c: 0x0000
kfffdb.group.entinc:                  0 ; 0x06e: 0x0000
kfffdb.spare[0]:                      0 ; 0x070: 0x00000000
kfffdb.spare[1]:                      0 ; 0x074: 0x00000000
kfffdb.spare[2]:                      0 ; 0x078: 0x00000000
kfffdb.spare[3]:                      0 ; 0x07c: 0x00000000
kfffdb.spare[4]:                      0 ; 0x080: 0x00000000
kfffdb.spare[5]:                      0 ; 0x084: 0x00000000
kfffdb.spare[6]:                      0 ; 0x088: 0x00000000
kfffdb.spare[7]:                      0 ; 0x08c: 0x00000000
kfffdb.spare[8]:                      0 ; 0x090: 0x00000000
kfffdb.spare[9]:                      0 ; 0x094: 0x00000000
kfffdb.spare[10]:                     0 ; 0x098: 0x00000000
kfffdb.spare[11]:                     0 ; 0x09c: 0x00000000
kfffdb.usm:                             ; 0x0a0: length=0
kfffde[0].xptr.au:                   38 ; 0x4a0: 0x00000026
kfffde[0].xptr.disk:                  2 ; 0x4a4: 0x0002
kfffde[0].xptr.flags:                 0 ; 0x4a6: L=0 E=0 D=0 S=0
kfffde[0].xptr.chk:                  14 ; 0x4a7: 0x0e
kfffde[1].xptr.au:                   37 ; 0x4a8: 0x00000025
kfffde[1].xptr.disk:                  3 ; 0x4ac: 0x0003
kfffde[1].xptr.flags:                 0 ; 0x4ae: L=0 E=0 D=0 S=0
kfffde[1].xptr.chk:                  12 ; 0x4af: 0x0c
kfffde[2].xptr.au:                   36 ; 0x4b0: 0x00000024
kfffde[2].xptr.disk:                  1 ; 0x4b4: 0x0001
kfffde[2].xptr.flags:                 0 ; 0x4b6: L=0 E=0 D=0 S=0
kfffde[2].xptr.chk:                  15 ; 0x4b7: 0x0f
kfffde[3].xptr.au:           4294967295 ; 0x4b8: 0xffffffff
kfffde[3].xptr.disk:              65535 ; 0x4bc: 0xffff
kfffde[3].xptr.flags:                 0 ; 0x4be: L=0 E=0 D=0 S=0

From kfffde[0].xptr.au=38 / kfffde[0].xptr.disk=2, kfffde[1].xptr.au=37 / kfffde[1].xptr.disk=3, and kfffde[2].xptr.au=36 / kfffde[2].xptr.disk=1, we can see that the alias directory is stored in AUs 36, 38 and 37 on disks 1, 2 and 3 (/dev/raw/raw4, /dev/raw/raw3, /dev/raw/raw10), and that these three AUs hold identical content — exactly matching the distribution obtained from the earlier SQL queries.
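To cross-check kfed against a raw device read, the byte offset of any (AU, block) pair follows directly from the sizes used above. A small sketch, assuming 1 MB AUs and 4 KB metadata blocks; the function name is hypothetical.

```python
def byte_offset(au, block, au_size=1024 * 1024, block_size=4096):
    """Byte offset of metadata block `block` inside allocation unit `au`."""
    return au * au_size + block * block_size

# Block 0 of AU 36 on /dev/raw/raw4 (one copy of the alias directory):
print(byte_offset(36, 0))  # 37748736
# A raw dump of that same block with dd would be:
#   dd if=/dev/raw/raw4 bs=4096 skip=9216 count=1   (9216 = 37748736 / 4096)
```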

Next, we use kfed to read the alias directory metadata itself.

[grid@jyrac1 ~]$ kfed read /dev/raw/raw4 aun=36 blkn=0 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           11 ; 0x002: KFBTYP_ALIASDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       6 ; 0x008: file=6
kfbh.check:                  2235498606 ; 0x00c: 0x853f006e
kfbh.fcn.base:                     3565 ; 0x010: 0x00000ded
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kffdnd.bnode.incarn:                  1 ; 0x000: A=1 NUMM=0x0
kffdnd.bnode.frlist.number:  4294967295 ; 0x004: 0xffffffff
kffdnd.bnode.frlist.incarn:           0 ; 0x008: A=0 NUMM=0x0
kffdnd.overfl.number:        4294967295 ; 0x00c: 0xffffffff
kffdnd.overfl.incarn:                 0 ; 0x010: A=0 NUMM=0x0
kffdnd.parent.number:                 0 ; 0x014: 0x00000000
kffdnd.parent.incarn:                 1 ; 0x018: A=1 NUMM=0x0
kffdnd.fstblk.number:                 0 ; 0x01c: 0x00000000
kffdnd.fstblk.incarn:                 1 ; 0x020: A=1 NUMM=0x0
kfade[0].entry.incarn:                1 ; 0x024: A=1 NUMM=0x0
kfade[0].entry.hash:         2990280982 ; 0x028: 0xb23c1116
kfade[0].entry.refer.number:          1 ; 0x02c: 0x00000001 -- block number of the next level down
kfade[0].entry.refer.incarn:          1 ; 0x030: A=1 NUMM=0x0 -- entry fields: incarnation number, hash value and the pointer to the next-level block; no need to dwell on these
kfade[0].name:                    JYRAC ; 0x034: length=5 -- name of this alias entry
kfade[0].fnum:               4294967295 ; 0x064: 0xffffffff -- file number; the maximum value here means "not applicable"
kfade[0].finc:               4294967295 ; 0x068: 0xffffffff -- file incarnation number
kfade[0].flags:                       8 ; 0x06c: U=0 S=0 S=0 U=1 F=0 -- flag bits
--the flag legend (note: these letters match the kfffdb.flags field shown earlier):
O - File is original, not snapshot
S - File is striped
S - Strict allocation policy
D - File is damaged
C - File creation is committed
I - File has empty indirect block
R - File has known at-risk value
A - The at-risk value itself

kfade[0].ub1spare:                    0 ; 0x06d: 0x00
kfade[0].ub2spare:                    0 ; 0x06e: 0x0000
kfade[1].entry.incarn:                1 ; 0x070: A=1 NUMM=0x0
kfade[1].entry.hash:         3585957073 ; 0x074: 0xd5bd5cd1
kfade[1].entry.refer.number:          9 ; 0x078: 0x00000009
kfade[1].entry.refer.incarn:          1 ; 0x07c: A=1 NUMM=0x0
kfade[1].name:               DB_UNKNOWN ; 0x080: length=10
kfade[1].fnum:               4294967295 ; 0x0b0: 0xffffffff
kfade[1].finc:               4294967295 ; 0x0b4: 0xffffffff
kfade[1].flags:                       4 ; 0x0b8: U=0 S=0 S=1 U=0 F=0
kfade[1].ub1spare:                    0 ; 0x0b9: 0x00
kfade[1].ub2spare:                    0 ; 0x0ba: 0x0000
kfade[2].entry.incarn:                3 ; 0x0bc: A=1 NUMM=0x1
kfade[2].entry.hash:         1585230659 ; 0x0c0: 0x5e7cb343
kfade[2].entry.refer.number: 4294967295 ; 0x0c4: 0xffffffff
kfade[2].entry.refer.incarn:          0 ; 0x0c8: A=0 NUMM=0x0
kfade[2].name:                  tts.dmp ; 0x0cc: length=7
kfade[2].fnum:                      269 ; 0x0fc: 0x0000010d
kfade[2].finc:                930515105 ; 0x100: 0x377688a1
kfade[2].flags:                      17 ; 0x104: U=1 S=0 S=0 U=0 F=1
kfade[2].ub1spare:                    0 ; 0x105: 0x00
kfade[2].ub2spare:                    0 ; 0x106: 0x0000

Since kfade[0].entry.refer.number=1, we need to read block 1 to see the alias directories of the other files.

[grid@jyrac1 ~]$ kfed read /dev/raw/raw4 aun=36 blkn=1 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           11 ; 0x002: KFBTYP_ALIASDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       1 ; 0x004: blk=1
kfbh.block.obj:                       6 ; 0x008: file=6
kfbh.check:                  3120935190 ; 0x00c: 0xba05b116
kfbh.fcn.base:                     3558 ; 0x010: 0x00000de6
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kffdnd.bnode.incarn:                  1 ; 0x000: A=1 NUMM=0x0
kffdnd.bnode.frlist.number:  4294967295 ; 0x004: 0xffffffff
kffdnd.bnode.frlist.incarn:           0 ; 0x008: A=0 NUMM=0x0
kffdnd.overfl.number:        4294967295 ; 0x00c: 0xffffffff
kffdnd.overfl.incarn:                 0 ; 0x010: A=0 NUMM=0x0
kffdnd.parent.number:                 0 ; 0x014: 0x00000000
kffdnd.parent.incarn:                 1 ; 0x018: A=1 NUMM=0x0
kffdnd.fstblk.number:                 1 ; 0x01c: 0x00000001
kffdnd.fstblk.incarn:                 1 ; 0x020: A=1 NUMM=0x0
kfade[0].entry.incarn:                1 ; 0x024: A=1 NUMM=0x0
kfade[0].entry.hash:          710518681 ; 0x028: 0x2a59a799
kfade[0].entry.refer.number:          2 ; 0x02c: 0x00000002 -- block number of the next level down
kfade[0].entry.refer.incarn:          1 ; 0x030: A=1 NUMM=0x0
kfade[0].name:                 DATAFILE ; 0x034: length=8   -- name of this alias entry; DATAFILE is the data file directory
kfade[0].fnum:               4294967295 ; 0x064: 0xffffffff
kfade[0].finc:               4294967295 ; 0x068: 0xffffffff
kfade[0].flags:                       8 ; 0x06c: U=0 S=0 S=0 U=1 F=0
kfade[0].ub1spare:                    0 ; 0x06d: 0x00
kfade[0].ub2spare:                    0 ; 0x06e: 0x0000
kfade[1].entry.incarn:                1 ; 0x070: A=1 NUMM=0x0
kfade[1].entry.hash:         4053320104 ; 0x074: 0xf198c1a8
kfade[1].entry.refer.number:          3 ; 0x078: 0x00000003   -- block number of the next level down
kfade[1].entry.refer.incarn:          1 ; 0x07c: A=1 NUMM=0x0
kfade[1].name:              CONTROLFILE ; 0x080: length=11    -- name of this alias entry; CONTROLFILE is the control file directory
kfade[1].fnum:               4294967295 ; 0x0b0: 0xffffffff
kfade[1].finc:               4294967295 ; 0x0b4: 0xffffffff
kfade[1].flags:                       8 ; 0x0b8: U=0 S=0 S=0 U=1 F=0
kfade[1].ub1spare:                    0 ; 0x0b9: 0x00
kfade[1].ub2spare:                    0 ; 0x0ba: 0x0000
kfade[2].entry.incarn:                1 ; 0x0bc: A=1 NUMM=0x0
kfade[2].entry.hash:          873035404 ; 0x0c0: 0x3409768c
kfade[2].entry.refer.number:          4 ; 0x0c4: 0x00000004
kfade[2].entry.refer.incarn:          1 ; 0x0c8: A=1 NUMM=0x0
kfade[2].name:               temp_files ; 0x0cc: length=10   -- name of this alias entry; temp_files refers to temporary files
kfade[2].fnum:               4294967295 ; 0x0fc: 0xffffffff
kfade[2].finc:               4294967295 ; 0x100: 0xffffffff
kfade[2].flags:                       8 ; 0x104: U=0 S=0 S=0 U=1 F=0
kfade[2].ub1spare:                    0 ; 0x105: 0x00
kfade[2].ub2spare:                    0 ; 0x106: 0x0000
kfade[3].entry.incarn:                1 ; 0x108: A=1 NUMM=0x0
kfade[3].entry.hash:         2803485489 ; 0x10c: 0xa719cb31
kfade[3].entry.refer.number:          5 ; 0x110: 0x00000005
kfade[3].entry.refer.incarn:          1 ; 0x114: A=1 NUMM=0x0
kfade[3].name:                ONLINELOG ; 0x118: length=9  -- name of this alias entry; ONLINELOG is the online redo log directory
kfade[3].fnum:               4294967295 ; 0x148: 0xffffffff
kfade[3].finc:               4294967295 ; 0x14c: 0xffffffff
kfade[3].flags:                       8 ; 0x150: U=0 S=0 S=0 U=1 F=0
kfade[3].ub1spare:                    0 ; 0x151: 0x00
kfade[3].ub2spare:                    0 ; 0x152: 0x0000
kfade[4].entry.incarn:                1 ; 0x154: A=1 NUMM=0x0
kfade[4].entry.hash:         2905271101 ; 0x158: 0xad2aeb3d
kfade[4].entry.refer.number:          6 ; 0x15c: 0x00000006
kfade[4].entry.refer.incarn:          1 ; 0x160: A=1 NUMM=0x0
kfade[4].name:                 TEMPFILE ; 0x164: length=8 -- name of this alias entry; TEMPFILE is the temp file directory
kfade[4].fnum:               4294967295 ; 0x194: 0xffffffff
kfade[4].finc:               4294967295 ; 0x198: 0xffffffff
kfade[4].flags:                       8 ; 0x19c: U=0 S=0 S=0 U=1 F=0
kfade[4].ub1spare:                    0 ; 0x19d: 0x00
kfade[4].ub2spare:                    0 ; 0x19e: 0x0000
kfade[5].entry.incarn:                1 ; 0x1a0: A=1 NUMM=0x0
kfade[5].entry.hash:         3261836913 ; 0x1a4: 0xc26bae71
kfade[5].entry.refer.number:          7 ; 0x1a8: 0x00000007
kfade[5].entry.refer.incarn:          1 ; 0x1ac: A=1 NUMM=0x0
kfade[5].name:            PARAMETERFILE ; 0x1b0: length=13 -- name of this alias entry; PARAMETERFILE is the parameter file directory
kfade[5].fnum:               4294967295 ; 0x1e0: 0xffffffff
kfade[5].finc:               4294967295 ; 0x1e4: 0xffffffff
kfade[5].flags:                       8 ; 0x1e8: U=0 S=0 S=0 U=1 F=0
kfade[5].ub1spare:                    0 ; 0x1e9: 0x00
kfade[5].ub2spare:                    0 ; 0x1ea: 0x0000
kfade[6].entry.incarn:                1 ; 0x1ec: A=1 NUMM=0x0
kfade[6].entry.hash:         1858399388 ; 0x1f0: 0x6ec4ec9c
kfade[6].entry.refer.number:          8 ; 0x1f4: 0x00000008
kfade[6].entry.refer.incarn:          1 ; 0x1f8: A=1 NUMM=0x0
kfade[6].name:                  oradata ; 0x1fc: length=7 
kfade[6].fnum:               4294967295 ; 0x22c: 0xffffffff
kfade[6].finc:               4294967295 ; 0x230: 0xffffffff
kfade[6].flags:                       8 ; 0x234: U=0 S=0 S=0 U=1 F=0
kfade[6].ub1spare:                    0 ; 0x235: 0x00
kfade[6].ub2spare:                    0 ; 0x236: 0x0000
kfade[7].entry.incarn:                1 ; 0x238: A=1 NUMM=0x0
kfade[7].entry.hash:         4097001356 ; 0x23c: 0xf433478c
kfade[7].entry.refer.number: 4294967295 ; 0x240: 0xffffffff
kfade[7].entry.refer.incarn:          0 ; 0x244: A=0 NUMM=0x0
kfade[7].name:          spfilejyrac.ora ; 0x248: length=15
kfade[7].fnum:                      256 ; 0x278: 0x00000100
kfade[7].finc:                930411925 ; 0x27c: 0x3774f595
kfade[7].flags:                      17 ; 0x280: U=1 S=0 S=0 U=0 F=1
kfade[7].ub1spare:                    0 ; 0x281: 0x00
kfade[7].ub2spare:                    0 ; 0x282: 0x0000
kfade[8].entry.incarn:                1 ; 0x284: A=1 NUMM=0x0
kfade[8].entry.hash:         2514510081 ; 0x288: 0x95e06101
kfade[8].entry.refer.number:         11 ; 0x28c: 0x0000000b
kfade[8].entry.refer.incarn:          3 ; 0x290: A=1 NUMM=0x1
kfade[8].name:                  DUMPSET ; 0x294: length=7
kfade[8].fnum:               4294967295 ; 0x2c4: 0xffffffff
kfade[8].finc:               4294967295 ; 0x2c8: 0xffffffff
kfade[8].flags:                       4 ; 0x2cc: U=0 S=0 S=1 U=0 F=0
kfade[8].ub1spare:                    0 ; 0x2cd: 0x00
kfade[8].ub2spare:                    0 ; 0x2ce: 0x0000

To view the data file alias directory: kfade[0].entry.refer.number=2 together with kfade[0].name=DATAFILE tells us it is in block 2.

[grid@jyrac1 ~]$ kfed read /dev/raw/raw4 aun=36 blkn=2 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           11 ; 0x002: KFBTYP_ALIASDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       2 ; 0x004: blk=2
kfbh.block.obj:                       6 ; 0x008: file=6
kfbh.check:                  2753078160 ; 0x00c: 0xa418a390
kfbh.fcn.base:                     6551 ; 0x010: 0x00001997
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kffdnd.bnode.incarn:                  1 ; 0x000: A=1 NUMM=0x0
kffdnd.bnode.frlist.number:  4294967295 ; 0x004: 0xffffffff
kffdnd.bnode.frlist.incarn:           0 ; 0x008: A=0 NUMM=0x0
kffdnd.overfl.number:        4294967295 ; 0x00c: 0xffffffff
kffdnd.overfl.incarn:                 0 ; 0x010: A=0 NUMM=0x0
kffdnd.parent.number:                 1 ; 0x014: 0x00000001
kffdnd.parent.incarn:                 1 ; 0x018: A=1 NUMM=0x0
kffdnd.fstblk.number:                 2 ; 0x01c: 0x00000002
kffdnd.fstblk.incarn:                 1 ; 0x020: A=1 NUMM=0x0
kfade[0].entry.incarn:                1 ; 0x024: A=1 NUMM=0x0
kfade[0].entry.hash:         3486922491 ; 0x028: 0xcfd636fb
kfade[0].entry.refer.number: 4294967295 ; 0x02c: 0xffffffff
kfade[0].entry.refer.incarn:          0 ; 0x030: A=0 NUMM=0x0
kfade[0].name:                   SYSAUX ; 0x034: length=6
kfade[0].fnum:                      258 ; 0x064: 0x00000102
kfade[0].finc:                930413055 ; 0x068: 0x3774f9ff
kfade[0].flags:                      18 ; 0x06c: U=0 S=1 S=0 U=0 F=1
kfade[0].ub1spare:                    0 ; 0x06d: 0x00
kfade[0].ub2spare:                    0 ; 0x06e: 0x0000
kfade[1].entry.incarn:                1 ; 0x070: A=1 NUMM=0x0
kfade[1].entry.hash:          564369944 ; 0x074: 0x21a39a18
kfade[1].entry.refer.number: 4294967295 ; 0x078: 0xffffffff
kfade[1].entry.refer.incarn:          0 ; 0x07c: A=0 NUMM=0x0
kfade[1].name:                   SYSTEM ; 0x080: length=6
kfade[1].fnum:                      259 ; 0x0b0: 0x00000103
kfade[1].finc:                930413057 ; 0x0b4: 0x3774fa01
kfade[1].flags:                      18 ; 0x0b8: U=0 S=1 S=0 U=0 F=1
kfade[1].ub1spare:                    0 ; 0x0b9: 0x00
kfade[1].ub2spare:                    0 ; 0x0ba: 0x0000
kfade[2].entry.incarn:                1 ; 0x0bc: A=1 NUMM=0x0
kfade[2].entry.hash:           75817004 ; 0x0c0: 0x0484e02c
kfade[2].entry.refer.number: 4294967295 ; 0x0c4: 0xffffffff
kfade[2].entry.refer.incarn:          0 ; 0x0c8: A=0 NUMM=0x0
kfade[2].name:                  EXAMPLE ; 0x0cc: length=7
kfade[2].fnum:                      260 ; 0x0fc: 0x00000104
kfade[2].finc:                930413057 ; 0x100: 0x3774fa01
kfade[2].flags:                      18 ; 0x104: U=0 S=1 S=0 U=0 F=1
kfade[2].ub1spare:                    0 ; 0x105: 0x00
kfade[2].ub2spare:                    0 ; 0x106: 0x0000
kfade[3].entry.incarn:                1 ; 0x108: A=1 NUMM=0x0
kfade[3].entry.hash:         3945580605 ; 0x10c: 0xeb2cc83d
kfade[3].entry.refer.number: 4294967295 ; 0x110: 0xffffffff
kfade[3].entry.refer.incarn:          0 ; 0x114: A=0 NUMM=0x0
kfade[3].name:                 UNDOTBS2 ; 0x118: length=8
kfade[3].fnum:                      261 ; 0x148: 0x00000105
kfade[3].finc:                930413057 ; 0x14c: 0x3774fa01
kfade[3].flags:                      18 ; 0x150: U=0 S=1 S=0 U=0 F=1
kfade[3].ub1spare:                    0 ; 0x151: 0x00
kfade[3].ub2spare:                    0 ; 0x152: 0x0000
kfade[4].entry.incarn:                1 ; 0x154: A=1 NUMM=0x0
kfade[4].entry.hash:         1431819651 ; 0x158: 0x5557d583
kfade[4].entry.refer.number: 4294967295 ; 0x15c: 0xffffffff
kfade[4].entry.refer.incarn:          0 ; 0x160: A=0 NUMM=0x0
kfade[4].name:                 UNDOTBS1 ; 0x164: length=8
kfade[4].fnum:                      262 ; 0x194: 0x00000106
kfade[4].finc:                930413057 ; 0x198: 0x3774fa01
kfade[4].flags:                      18 ; 0x19c: U=0 S=1 S=0 U=0 F=1
kfade[4].ub1spare:                    0 ; 0x19d: 0x00
kfade[4].ub2spare:                    0 ; 0x19e: 0x0000
kfade[5].entry.incarn:                1 ; 0x1a0: A=1 NUMM=0x0
kfade[5].entry.hash:         3705183464 ; 0x1a4: 0xdcd89ce8
kfade[5].entry.refer.number: 4294967295 ; 0x1a8: 0xffffffff
kfade[5].entry.refer.incarn:          0 ; 0x1ac: A=0 NUMM=0x0
kfade[5].name:                    USERS ; 0x1b0: length=5
kfade[5].fnum:                      263 ; 0x1e0: 0x00000107
kfade[5].finc:                930413057 ; 0x1e4: 0x3774fa01
kfade[5].flags:                      18 ; 0x1e8: U=0 S=1 S=0 U=0 F=1
kfade[5].ub1spare:                    0 ; 0x1e9: 0x00
kfade[5].ub2spare:                    0 ; 0x1ea: 0x0000
kfade[6].entry.incarn:                1 ; 0x1ec: A=1 NUMM=0x0
kfade[6].entry.hash:         1752863906 ; 0x1f0: 0x687a94a2
kfade[6].entry.refer.number: 4294967295 ; 0x1f4: 0xffffffff
kfade[6].entry.refer.incarn:          0 ; 0x1f8: A=0 NUMM=0x0
kfade[6].name:            FILE_TRANSFER ; 0x1fc: length=13
kfade[6].fnum:                      270 ; 0x22c: 0x0000010e
kfade[6].finc:                930515465 ; 0x230: 0x37768a09
kfade[6].flags:                      18 ; 0x234: U=0 S=1 S=0 U=0 F=1
kfade[6].ub1spare:                    0 ; 0x235: 0x00
kfade[6].ub2spare:                    0 ; 0x236: 0x0000
kfade[7].entry.incarn:                1 ; 0x238: A=1 NUMM=0x0
kfade[7].entry.hash:         2844469351 ; 0x23c: 0xa98b2867
kfade[7].entry.refer.number: 4294967295 ; 0x240: 0xffffffff
kfade[7].entry.refer.incarn:          0 ; 0x244: A=0 NUMM=0x0
kfade[7].name:               test01.dbf ; 0x248: length=10
kfade[7].fnum:                      270 ; 0x278: 0x0000010e
kfade[7].finc:                930515465 ; 0x27c: 0x37768a09
kfade[7].flags:                      17 ; 0x280: U=1 S=0 S=0 U=0 F=1
kfade[7].ub1spare:                    0 ; 0x281: 0x00
kfade[7].ub2spare:                    0 ; 0x282: 0x0000
kfade[8].entry.incarn:                5 ; 0x284: A=1 NUMM=0x2
kfade[8].entry.hash:         2512381731 ; 0x288: 0x95bfe723
kfade[8].entry.refer.number: 4294967295 ; 0x28c: 0xffffffff
kfade[8].entry.refer.incarn:          0 ; 0x290: A=0 NUMM=0x0
kfade[8].name:                       CS ; 0x294: length=2
kfade[8].fnum:                      271 ; 0x2c4: 0x0000010f
kfade[8].finc:                931880499 ; 0x2c8: 0x378b5e33
kfade[8].flags:                      18 ; 0x2cc: U=0 S=1 S=0 U=0 F=1
kfade[8].ub1spare:                    0 ; 0x2cd: 0x00
kfade[8].ub2spare:                    0 ; 0x2ce: 0x0000
kfade[9].entry.incarn:                3 ; 0x2d0: A=1 NUMM=0x1
kfade[9].entry.hash:         4011892030 ; 0x2d4: 0xef209d3e
kfade[9].entry.refer.number: 4294967295 ; 0x2d8: 0xffffffff
kfade[9].entry.refer.incarn:          0 ; 0x2dc: A=0 NUMM=0x0
kfade[9].name:         CS_STRIPE_COARSE ; 0x2e0: length=16
kfade[9].fnum:                      272 ; 0x310: 0x00000110
kfade[9].finc:                931882089 ; 0x314: 0x378b6469
kfade[9].flags:                      18 ; 0x318: U=0 S=1 S=0 U=0 F=1
kfade[9].ub1spare:                    0 ; 0x319: 0x00
kfade[9].ub2spare:                    0 ; 0x31a: 0x0000
kfade[10].entry.incarn:               1 ; 0x31c: A=1 NUMM=0x0
kfade[10].entry.hash:        1365029949 ; 0x320: 0x515cb43d
kfade[10].entry.refer.number:4294967295 ; 0x324: 0xffffffff
kfade[10].entry.refer.incarn:         0 ; 0x328: A=0 NUMM=0x0
kfade[10].name:           NOT_IMPORTANT ; 0x32c: length=13
kfade[10].fnum:                     273 ; 0x35c: 0x00000111
kfade[10].finc:               931882831 ; 0x360: 0x378b674f
kfade[10].flags:                     18 ; 0x364: U=0 S=1 S=0 U=0 F=1
kfade[10].ub1spare:                   0 ; 0x365: 0x00
kfade[10].ub2spare:                   0 ; 0x366: 0x0000

From the output above we can see that SYSAUX is file 258, SYSTEM is file 259, EXAMPLE is file 260, UNDOTBS2 is file 261, and so on — exactly matching the results queried from the views.
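kfed renders each kfade[].flags byte as five single-letter bits, e.g. 18 → U=0 S=1 S=0 U=0 F=1 for the system-generated entries above and 17 → U=1 S=0 S=0 U=0 F=1 for the user-created alias test01.dbf. The numeric value maps onto those letters from the least significant bit upward. The sketch below only reproduces kfed's bit rendering, without asserting what each letter means; the function name is hypothetical.

```python
def decode_kfade_flags(value):
    """Render a kfade[].flags byte the way kfed prints it:
    bits 0..4 shown as the letters U, S, S, U, F (LSB first)."""
    labels = ["U", "S", "S", "U", "F"]
    return " ".join(f"{label}={(value >> i) & 1}" for i, label in enumerate(labels))

print(decode_kfade_flags(18))  # U=0 S=1 S=0 U=0 F=1
print(decode_kfade_flags(17))  # U=1 S=0 S=0 U=0 F=1
```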

Similarly, the control file aliases are in block 3.

[grid@jyrac1 ~]$ kfed read /dev/raw/raw4 aun=36 blkn=3 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           11 ; 0x002: KFBTYP_ALIASDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       3 ; 0x004: blk=3
kfbh.block.obj:                       6 ; 0x008: file=6
kfbh.check:                  3091636595 ; 0x00c: 0xb846a173
kfbh.fcn.base:                      734 ; 0x010: 0x000002de
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kffdnd.bnode.incarn:                  1 ; 0x000: A=1 NUMM=0x0
kffdnd.bnode.frlist.number:  4294967295 ; 0x004: 0xffffffff
kffdnd.bnode.frlist.incarn:           0 ; 0x008: A=0 NUMM=0x0
kffdnd.overfl.number:        4294967295 ; 0x00c: 0xffffffff
kffdnd.overfl.incarn:                 0 ; 0x010: A=0 NUMM=0x0
kffdnd.parent.number:                 1 ; 0x014: 0x00000001
kffdnd.parent.incarn:                 1 ; 0x018: A=1 NUMM=0x0
kffdnd.fstblk.number:                 3 ; 0x01c: 0x00000003
kffdnd.fstblk.incarn:                 1 ; 0x020: A=1 NUMM=0x0
kfade[0].entry.incarn:                3 ; 0x024: A=1 NUMM=0x1
kfade[0].entry.hash:           62930150 ; 0x028: 0x03c03ce6
kfade[0].entry.refer.number: 4294967295 ; 0x02c: 0xffffffff
kfade[0].entry.refer.incarn:          0 ; 0x030: A=0 NUMM=0x0
kfade[0].name:                  current ; 0x034: length=7
kfade[0].fnum:                      257 ; 0x064: 0x00000101
kfade[0].finc:                930412709 ; 0x068: 0x3774f8a5
kfade[0].flags:                      18 ; 0x06c: U=0 S=1 S=0 U=0 F=1
kfade[0].ub1spare:                    0 ; 0x06d: 0x00
kfade[0].ub2spare:                    0 ; 0x06e: 0x0000
kfade[1].entry.incarn:                0 ; 0x070: A=0 NUMM=0x0
kfade[1].entry.hash:                  0 ; 0x074: 0x00000000
kfade[1].entry.refer.number:          0 ; 0x078: 0x00000000
kfade[1].entry.refer.incarn:          0 ; 0x07c: A=0 NUMM=0x0
kfade[1].name:                          ; 0x080: length=0
kfade[1].fnum:                        0 ; 0x0b0: 0x00000000
kfade[1].finc:                        0 ; 0x0b4: 0x00000000
kfade[1].flags:                       0 ; 0x0b8: U=0 S=0 S=0 U=0 F=0
kfade[1].ub1spare:                    0 ; 0x0b9: 0x00
kfade[1].ub2spare:                    0 ; 0x0ba: 0x0000
kfade[2].entry.incarn:                0 ; 0x0bc: A=0 NUMM=0x0
kfade[2].entry.hash:                  0 ; 0x0c0: 0x00000000
kfade[2].entry.refer.number:          0 ; 0x0c4: 0x00000000
kfade[2].entry.refer.incarn:          0 ; 0x0c8: A=0 NUMM=0x0

From the output above, you can tell that the database's current control file name is current.257.930412709.
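The name current.257.930412709 is simply the alias name joined with the fnum and finc fields of the entry — ASM's <name>.<file#>.<incarnation#> convention for system-generated file names. A trivial sketch with a hypothetical helper:

```python
def asm_system_name(alias, fnum, finc):
    """Compose ASM's <name>.<file#>.<incarnation#> system file name
    from a kfade entry's name, fnum and finc fields."""
    return f"{alias}.{fnum}.{finc}"

print(asm_system_name("current", 257, 930412709))  # current.257.930412709
```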

Similarly, the online redo log aliases are in block 5.

[grid@jyrac1 ~]$ kfed read /dev/raw/raw4 aun=36 blkn=5 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           11 ; 0x002: KFBTYP_ALIASDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       5 ; 0x004: blk=5
kfbh.block.obj:                       6 ; 0x008: file=6
kfbh.check:                  1209488605 ; 0x00c: 0x481754dd
kfbh.fcn.base:                     3491 ; 0x010: 0x00000da3
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kffdnd.bnode.incarn:                  1 ; 0x000: A=1 NUMM=0x0
kffdnd.bnode.frlist.number:  4294967295 ; 0x004: 0xffffffff
kffdnd.bnode.frlist.incarn:           0 ; 0x008: A=0 NUMM=0x0
kffdnd.overfl.number:        4294967295 ; 0x00c: 0xffffffff
kffdnd.overfl.incarn:                 0 ; 0x010: A=0 NUMM=0x0
kffdnd.parent.number:                 1 ; 0x014: 0x00000001
kffdnd.parent.incarn:                 1 ; 0x018: A=1 NUMM=0x0
kffdnd.fstblk.number:                 5 ; 0x01c: 0x00000005
kffdnd.fstblk.incarn:                 1 ; 0x020: A=1 NUMM=0x0
kfade[0].entry.incarn:                1 ; 0x024: A=1 NUMM=0x0
kfade[0].entry.hash:         2375841806 ; 0x028: 0x8d9c780e
kfade[0].entry.refer.number: 4294967295 ; 0x02c: 0xffffffff
kfade[0].entry.refer.incarn:          0 ; 0x030: A=0 NUMM=0x0
kfade[0].name:                  group_1 ; 0x034: length=7
kfade[0].fnum:                      264 ; 0x064: 0x00000108
kfade[0].finc:                930413221 ; 0x068: 0x3774faa5
kfade[0].flags:                      18 ; 0x06c: U=0 S=1 S=0 U=0 F=1
kfade[0].ub1spare:                    0 ; 0x06d: 0x00
kfade[0].ub2spare:                    0 ; 0x06e: 0x0000
kfade[1].entry.incarn:                1 ; 0x070: A=1 NUMM=0x0
kfade[1].entry.hash:         1478106543 ; 0x074: 0x581a1daf
kfade[1].entry.refer.number: 4294967295 ; 0x078: 0xffffffff
kfade[1].entry.refer.incarn:          0 ; 0x07c: A=0 NUMM=0x0
kfade[1].name:                  group_2 ; 0x080: length=7
kfade[1].fnum:                      265 ; 0x0b0: 0x00000109
kfade[1].finc:                930413225 ; 0x0b4: 0x3774faa9
kfade[1].flags:                      18 ; 0x0b8: U=0 S=1 S=0 U=0 F=1
kfade[1].ub1spare:                    0 ; 0x0b9: 0x00
kfade[1].ub2spare:                    0 ; 0x0ba: 0x0000
kfade[2].entry.incarn:                1 ; 0x0bc: A=1 NUMM=0x0
kfade[2].entry.hash:          429163817 ; 0x0c0: 0x19948529
kfade[2].entry.refer.number: 4294967295 ; 0x0c4: 0xffffffff
kfade[2].entry.refer.incarn:          0 ; 0x0c8: A=0 NUMM=0x0
kfade[2].name:                  group_3 ; 0x0cc: length=7
kfade[2].fnum:                      266 ; 0x0fc: 0x0000010a
kfade[2].finc:                930413227 ; 0x100: 0x3774faab
kfade[2].flags:                      18 ; 0x104: U=0 S=1 S=0 U=0 F=1
kfade[2].ub1spare:                    0 ; 0x105: 0x00
kfade[2].ub2spare:                    0 ; 0x106: 0x0000
kfade[3].entry.incarn:                1 ; 0x108: A=1 NUMM=0x0
kfade[3].entry.hash:         2232040441 ; 0x10c: 0x850a3bf9
kfade[3].entry.refer.number: 4294967295 ; 0x110: 0xffffffff
kfade[3].entry.refer.incarn:          0 ; 0x114: A=0 NUMM=0x0
kfade[3].name:                  group_4 ; 0x118: length=7
kfade[3].fnum:                      267 ; 0x148: 0x0000010b
kfade[3].finc:                930413231 ; 0x14c: 0x3774faaf
kfade[3].flags:                      18 ; 0x150: U=0 S=1 S=0 U=0 F=1
kfade[3].ub1spare:                    0 ; 0x151: 0x00
kfade[3].ub2spare:                    0 ; 0x152: 0x0000

From the output above, you can tell that the database's online redo log file names are group_1.264.930413221, group_2.265.930413225, group_3.266.930413227 and group_4.267.930413231.

In the same way, the spfile's alias information resolves to SPFILE.256.930411925. Once you know a database file's alias name, extracting that file from an ASM disk group with amdu is easy — and if you later use it for recovery, you do not even need to rename it. The following example extracts the spfile above:

[grid@jyrac1 ~]$ amdu -dis '/dev/raw/raw*' -extract datadg.256 -output spfile.256.930411925
amdu_2016_12_29_21_15_43/
AMDU-00204: Disk N0003 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0003: '/dev/raw/raw11'
AMDU-00204: Disk N0009 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0009: '/dev/raw/raw4'
AMDU-00204: Disk N0008 is in currently mounted diskgroup DATADG
AMDU-00201: Disk N0008: '/dev/raw/raw3'

[grid@jyrac1 ~]$ cat spfile.256.930411925 

jyrac1.__db_cache_size=1795162112
jyrac2.__db_cache_size=1795162112
jyrac2.__java_pool_size=16777216
jyrac1.__java_pool_size=16777216
jyrac2.__large_pool_size=33554432
jyrac1.__large_pool_size=33554432
jyrac1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
jyrac2.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
jyrac2.__pga_aggregate_target=838860800
jyrac1.__pga_aggregate_target=838860800
jyrac2.__sga_target=2516582400
jyrac1.__sga_target=2516582400
jyrac2.__shared_io_pool_size=0
jyrac1.__shared_io_pool_size=0
jyrac1.__shared_pool_size=587202560
jyrac2.__shared_pool_size=637534208
jyrac2.__streams_pool_size=0
jyrac1.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/jyrac/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='11.2.0.4.0'
*.control_files='+DATADG/jyrac/controlfile/current.257.930412709'
*.db_block_size=8192
*.db_create_file_dest='+DATADG'
*.db_domain=''
*.db_name='jyrac'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=jyracXDB)'
jyrac1.dispatchers='(PROTOCOL=TCP) (SERVICE=jyrac1XDB)'
jyrac2.dispatchers='(PROTOCOL=TCP) (SERVICE=jyrac2XDB)'
jyrac2.instance_number=2
jyrac1.instance_number=1
*.job_queue_processes=1000
JYRAC1.listener_networks='((NAME=network1)(LOCAL_LISTENER=10.138.130.152:1521)(REMOTE_LISTENER=10.138.130.155:1521))','((NAME=network2)(LOCAL_LISTENER=10.138.130.152:1521)(REMOTE_LISTENER=10.138.130.156:1521))','((NAME=network3)(LOCAL_LISTENER=10.138.130.152:1521)(REMOTE_LISTENER=10.138.130.157:1521))'
JYRAC2.listener_networks='((NAME=network1)(LOCAL_LISTENER=10.138.130.154:1521)(REMOTE_LISTENER=10.138.130.155:1521))','((NAME=network2)(LOCAL_LISTENER=10.138.130.154:1521)(REMOTE_LISTENER=10.138.130.156:1521))','((NAME=network3)(LOCAL_LISTENER=10.138.130.154:1521)(REMOTE_LISTENER=10.138.130.157:1521))'
jyrac2.listener_networks='((NAME=network1)(LOCAL_LISTENER=10.138.130.154:1521)(REMOTE_LISTENER=10.138.130.155:1521))','((NAME=network2)(LOCAL_LISTENER=10.138.130.154:1521)(REMOTE_LISTENER=10.138.130.156:1521))','((NAME=network3)(LOCAL_LISTENER=10.138.130.154:1521)(REMOTE_LISTENER=10.138.130.157:1521))'
jyrac1.listener_networks='((NAME=network1)(LOCAL_LISTENER=10.138.130.153:1521)(REMOTE_LISTENER=10.138.130.155:1521))','((NAME=network2)(LOCAL_LISTENER=10.138.130.153:1521)(REMOTE_LISTENER=10.138.130.156:1521))','((NAME=network3)(LOCAL_LISTENER=10.138.130.153:1521)(REMOTE_LISTENER=10.138.130.157:1521))'
*.log_archive_dest_1='location=+archdg/jyrac/'
*.open_cursors=300
*.pga_aggregate_target=836763648
*.processes=150
*.remote_listener='jyrac-scan:1521'
*.remote_login_passwordfile='exclusive'
*.sga_target=2510290944
jyrac2.thread=2
jyrac1.thread=1
jyrac1.undo_tablespace='UNDOTBS1'
jyrac2.undo_tablespace='UNDOTBS2'   

Summary:
The alias directory tracks all aliases in an ASM disk group; you can query v$asm_alias to see the aliases of existing files.

]]>
Oracle ASM Template Directory http://www.jydba.net/index.php/archives/2001 Thu, 29 Dec 2016 08:41:00 +0000
The Template Directory contains information about all of the file templates in a disk group. There are two kinds of templates: system templates and user-created templates. The default (system) templates cover every ASM file type; when a disk group is created it is populated with a default system template for each supported file type. User-created templates are used only when the user explicitly specifies them, and creating your own template adds a new entry. The Template Directory is indexed by template number.

Each template entry contains:
.The name of the template (for a default template the name is simply the file type)
.The file redundancy (defaults to the disk group redundancy)
.The file striping (the default depends on the file type)
.The system flag (whether the template is system-provided)

The Template Directory is file number 5 (F5) in every disk group. A default template's name matches its file type, its file redundancy defaults to the disk group redundancy, and its striping default is file-type specific. The system flag is set on system templates; user-created templates do not have the system flag set.
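As a rough illustration of how these defaults are resolved (a toy Python model, not Oracle code; the redundancy/striping values are taken from the V$ASM_TEMPLATE listings shown in this post):

```python
# Subset of the system templates reported by V$ASM_TEMPLATE below:
# name -> (redundancy, striping)
SYSTEM_TEMPLATES = {
    "DATAFILE":    ("MIRROR", "COARSE"),
    "CONTROLFILE": ("HIGH",   "FINE"),
    "ONLINELOG":   ("MIRROR", "FINE"),
    "ARCHIVELOG":  ("MIRROR", "COARSE"),
}

def resolve_template(name, user_templates=None):
    """A user-created template is consulted only when it is named explicitly
    (e.g. '+DATADG(CS_STRIPE_COARSE)'); otherwise the default system template
    for the file type applies."""
    if user_templates and name in user_templates:
        return user_templates[name]
    return SYSTEM_TEMPLATES[name]

# A user template like the CS_STRIPE_COARSE created later in this post:
user = {"CS_STRIPE_COARSE": ("HIGH", "COARSE")}
print(resolve_template("CONTROLFILE"))             # ('HIGH', 'FINE')
print(resolve_template("CS_STRIPE_COARSE", user))  # ('HIGH', 'COARSE')
```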

The complete template information can be viewed by querying the view V$ASM_TEMPLATE.
10G:

SQL> col system for a20
SQL> col primary_region for a20
SQL> col mirror_region for a20
SQL> select * from v$asm_template where group_number=1;

GROUP_NUMBER ENTRY_NUMBER REDUNDANCY   STRIPE       SYSTEM               NAME
------------ ------------ ------------ ------------ -------------------- ------------------------------
           1            0 MIRROR       COARSE       Y                    PARAMETERFILE
           1            1 MIRROR       COARSE       Y                    DUMPSET
           1            2 HIGH         FINE         Y                    CONTROLFILE
           1            3 MIRROR       COARSE       Y                    ARCHIVELOG
           1            4 MIRROR       FINE         Y                    ONLINELOG
           1            5 MIRROR       COARSE       Y                    DATAFILE
           1            6 MIRROR       COARSE       Y                    TEMPFILE
           1            7 MIRROR       COARSE       Y                    BACKUPSET
           1            8 MIRROR       COARSE       Y                    AUTOBACKUP
           1            9 MIRROR       COARSE       Y                    XTRANSPORT
           1           10 MIRROR       COARSE       Y                    CHANGETRACKING
           1           11 MIRROR       FINE         Y                    FLASHBACK
           1           12 MIRROR       COARSE       Y                    DATAGUARDCONFIG

13 rows selected.

11G:

SQL> col system for a20
SQL> col primary_region for a20
SQL> col mirror_region for a20
SQL> select * from v$asm_template where group_number=3;

GROUP_NUMBER ENTRY_NUMBER REDUNDANCY   STRIPE       SYSTEM               NAME                           PRIMARY_REGION       MIRROR_REGION
------------ ------------ ------------ ------------ -------------------- ------------------------------ -------------------- --------------------
           3           60 MIRROR       COARSE       Y                    PARAMETERFILE                  COLD                 COLD
           3           61 MIRROR       COARSE       Y                    ASMPARAMETERFILE               COLD                 COLD
           3           63 MIRROR       COARSE       Y                    DUMPSET                        COLD                 COLD
           3           64 HIGH         FINE         Y                    CONTROLFILE                    COLD                 COLD
           3           65 MIRROR       COARSE       Y                    FLASHFILE                      COLD                 COLD
           3           66 MIRROR       COARSE       Y                    ARCHIVELOG                     COLD                 COLD
           3           67 MIRROR       COARSE       Y                    ONLINELOG                      COLD                 COLD
           3           68 MIRROR       COARSE       Y                    DATAFILE                       COLD                 COLD
           3           69 MIRROR       COARSE       Y                    TEMPFILE                       COLD                 COLD
           3          170 MIRROR       COARSE       Y                    BACKUPSET                      COLD                 COLD
           3          171 MIRROR       COARSE       Y                    XTRANSPORT BACKUPSET           COLD                 COLD
           3          172 MIRROR       COARSE       Y                    AUTOBACKUP                     COLD                 COLD
           3          173 MIRROR       COARSE       Y                    XTRANSPORT                     COLD                 COLD
           3          174 MIRROR       COARSE       Y                    CHANGETRACKING                 COLD                 COLD
           3          175 MIRROR       COARSE       Y                    FLASHBACK                      COLD                 COLD
           3          176 MIRROR       COARSE       Y                    DATAGUARDCONFIG                COLD                 COLD
           3          177 MIRROR       COARSE       Y                    OCRFILE                        COLD                 COLD
17 rows selected.

In the REDUNDANCY column, MIRROR means the file has a mirror copy, HIGH means three mirror copies, and UNPROT means no mirroring. The CONTROLFILE entry has redundancy HIGH and stripe FINE, meaning control files get three mirror copies and fine-grained striping. This is the default controlfile template, and it is why every control file ends up triple mirrored. Interestingly, we can use this template to create any database file. For example, below a tablespace datafile is created using the controlfile template.

Connect to the database instance

SQL> create tablespace cs datafile '+DATADG(CONTROLFILE)' size 10m;

Tablespace created.



SQL> select name from v$datafile where name like '%cs%';

NAME
--------------------------------------------------------------------------------
+DATADG/jyrac/datafile/cs.271.931879611

The tablespace was created above, and ASM assigned file number 271 to the newly created datafile.

Check the datafile's redundancy
Connect to the ASM instance

SQL> select group_number, name, type "redundancy" from v$asm_diskgroup where name='DATADG';

GROUP_NUMBER NAME                           redundancy
------------ ------------------------------ ------------------------------
           3 DATADG                         NORMAL

This is a NORMAL-redundancy disk group, but since the datafile was created with the controlfile template, we query the internal view X$KFFXP to see what actually happened:

SQL> select x.xnum_kffxp "virtual extent",pxn_kffxp "physical extent",x.au_kffxp "au",x.disk_kffxp "disk #",d.name "disk name"
  2  from x$kffxp x, v$asm_disk_stat d
  3  where x.group_kffxp=d.group_number
  4  and x.disk_kffxp=d.disk_number
  5  and x.group_kffxp=3
  6  and x.number_kffxp=271
  7  order by 1,2,3;

virtual extent physical extent         au     disk # disk name
-------------- --------------- ---------- ---------- ------------------------------------------------------------
             0               0       1654          0 DATADG_0001
             0               1       1647          2 DATADG_0002
             0               2       1647          1 DATADG_0003
             1               3       1648          2 DATADG_0002
             1               4       1648          1 DATADG_0003
             1               5       1655          3 DATADG_0000
             2               6       1649          1 DATADG_0003
             2               7       1655          0 DATADG_0001
             2               8       1656          3 DATADG_0000
             3               9       1657          3 DATADG_0000
             3              10       1650          1 DATADG_0003
             3              11       1656          0 DATADG_0001
             4              12       1657          0 DATADG_0001
             4              13       1658          3 DATADG_0000
             4              14       1649          2 DATADG_0002
             5              15       1650          2 DATADG_0002
             5              16       1658          0 DATADG_0001
             5              17       1651          1 DATADG_0003
             6              18       1652          1 DATADG_0003
             6              19       1651          2 DATADG_0002
             6              20       1659          0 DATADG_0001
             7              21       1659          3 DATADG_0000
             7              22       1652          2 DATADG_0002
             7              23       1653          1 DATADG_0003

24 rows selected.

The file is triple mirrored, since every virtual extent consists of three physical extents. But why does my 1 MB datafile have 8 virtual extents? Because the controlfile template uses fine-grained striping. The hidden parameter _asm_stripesize holds the fine stripe size (default 128 KB), and _asm_stripewidth holds the stripe width (default 8). One oddity, though: under fine-grained striping the file header does not appear to get an extent of its own, because the query shows this 1 MB file occupying 8 extents in total rather than 9, and with a stripe width of 8 the file contents alone should already account for 8 extents.
10g:

SQL> col name for a30
SQL> col value for a50
SQL> col describ for a50
SQL> select x.ksppinm NAME,y.ksppstvl value,x.ksppdesc describ
  2  from x$ksppi x, x$ksppcv y
  3  where x.inst_id=USERENV('Instance')
  4  and y.inst_id=USERENV('Instance')
  5  and x.indx=y.indx
  6  and x.ksppinm like '%asm_strip%';   

NAME                           VALUE                                              DESCRIB
------------------------------ -------------------------------------------------- --------------------------------------------------
_asm_stripewidth               8                                                  ASM file stripe width
_asm_stripesize                131072                                             ASM file stripe size

11g:

SQL> col value for a50
SQL> col describ for a50
SQL> select x.ksppinm NAME,y.ksppstvl value,x.ksppdesc describ
  2  from x$ksppi x, x$ksppcv y
  3  where x.inst_id=USERENV('Instance')
  4  and y.inst_id=USERENV('Instance')
  5  and x.indx=y.indx
  6  and x.ksppinm like '%asm_strip%';   

NAME                           VALUE                                              DESCRIB
------------------------------ -------------------------------------------------- --------------------------------------------------
_asm_stripewidth               8                                                  ASM file stripe width
_asm_stripesize                131072                                             ASM file stripe size

stripesize * stripewidth is exactly 1 MB, which is also our AU size, and 1 MB is typically the largest single I/O most operating systems can issue. ASM has two stripe types, COARSE and FINE, also called coarse-grained and fine-grained striping. A coarse stripe defaults to the AU size (here 1 MB) and suits large sequential I/O such as full table scans. A fine stripe defaults to 128 KB, with 8 stripes making up one AU, and suits files that are sensitive to read/write latency, such as redo log files and control files. The 10g output above shows that only the redo logs, controlfile and flashback files are FINE and everything else is coarse striped; in 11g only the controlfile remains FINE.
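The arithmetic above can be checked, and the fine-striping layout sketched, in a few lines of Python (an illustration of the described behavior, not Oracle's actual allocator):

```python
AU_SIZE      = 1024 * 1024  # 1 MB allocation unit on this system
STRIPE_SIZE  = 131072       # _asm_stripesize: fine stripe = 128 KB
STRIPE_WIDTH = 8            # _asm_stripewidth

# Fine striping writes the file in 128 KB pieces round-robin across a set of
# 8 AUs, so stripe_size * stripe_width covers exactly one AU per pass.
assert STRIPE_SIZE * STRIPE_WIDTH == AU_SIZE

def fine_stripe_location(offset):
    """Map a file byte offset to (AU index within the 8-AU stripe set,
    byte offset inside that AU)."""
    stripe_no = offset // STRIPE_SIZE
    au_index  = stripe_no % STRIPE_WIDTH   # which AU of the stripe set
    pass_no   = stripe_no // STRIPE_WIDTH  # completed round-robin passes
    return au_index, pass_no * STRIPE_SIZE + offset % STRIPE_SIZE

# The first 8 stripes land on 8 different AUs, which is why a 1 MB
# fine-striped file already shows 8 virtual extents in X$KFFXP.
print(sorted({fine_stripe_location(i * STRIPE_SIZE)[0] for i in range(8)}))
# [0, 1, 2, 3, 4, 5, 6, 7]
```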

User templates
What if we want a file that is triple mirrored but coarse striped? We can create a template of our own by hand; the COARSE keyword specifies coarse-grained striping:

Connect to the ASM instance

SQL> alter diskgroup datadg add template cs_stripe_coarse attributes (HIGH COARSE);  

Diskgroup altered.

SQL> select * from v$asm_template where group_number=3;

GROUP_NUMBER ENTRY_NUMBER REDUNDANCY   STRIPE       SYSTEM               NAME                           PRIMARY_REGION       MIRROR_REGION
------------ ------------ ------------ ------------ -------------------- ------------------------------ -------------------- --------------------
           3           60 MIRROR       COARSE       Y                    PARAMETERFILE                  COLD                 COLD
           3           61 MIRROR       COARSE       Y                    ASMPARAMETERFILE               COLD                 COLD
           3           63 MIRROR       COARSE       Y                    DUMPSET                        COLD                 COLD
           3           64 HIGH         FINE         Y                    CONTROLFILE                    COLD                 COLD
           3           65 MIRROR       COARSE       Y                    FLASHFILE                      COLD                 COLD
           3           66 MIRROR       COARSE       Y                    ARCHIVELOG                     COLD                 COLD
           3           67 MIRROR       COARSE       Y                    ONLINELOG                      COLD                 COLD
           3           68 MIRROR       COARSE       Y                    DATAFILE                       COLD                 COLD
           3           69 MIRROR       COARSE       Y                    TEMPFILE                       COLD                 COLD
           3          170 MIRROR       COARSE       Y                    BACKUPSET                      COLD                 COLD
           3          171 MIRROR       COARSE       Y                    XTRANSPORT BACKUPSET           COLD                 COLD
           3          172 MIRROR       COARSE       Y                    AUTOBACKUP                     COLD                 COLD
           3          173 MIRROR       COARSE       Y                    XTRANSPORT                     COLD                 COLD
           3          174 MIRROR       COARSE       Y                    CHANGETRACKING                 COLD                 COLD
           3          175 MIRROR       COARSE       Y                    FLASHBACK                      COLD                 COLD
           3          176 MIRROR       COARSE       Y                    DATAGUARDCONFIG                COLD                 COLD
           3          177 MIRROR       COARSE       Y                    OCRFILE                        COLD                 COLD
           3          280 HIGH         COARSE       N                    CS_STRIPE_COARSE               COLD                 COLD

18 rows selected.

The row above with NAME=CS_STRIPE_COARSE and STRIPE=COARSE shows that the new template is coarse striped. Connect to the database instance:

SQL> create tablespace cs_stripe_coarse datafile '+DATADG(CS_STRIPE_COARSE)' size 1m;

Tablespace created.

SQL> select name from v$datafile where name like 'cs_stripe_coarse%';

no rows selected

SQL> select name from v$datafile where name like '%cs_stripe_coarse%';

NAME
--------------------------------------------------------------------------------
+DATADG/jyrac/datafile/cs_stripe_coarse.272.931882089

The new datafile's file number is 272. Connect to the ASM instance:

SQL> select x.xnum_kffxp "virtual extent",pxn_kffxp "physical extent",x.au_kffxp "au",x.disk_kffxp "disk #",d.name "disk name"
  2  from x$kffxp x, v$asm_disk_stat d
  3  where x.group_kffxp=d.group_number
  4  and x.disk_kffxp=d.disk_number
  5  and x.group_kffxp=3
  6  and x.number_kffxp=272
  7  order by 1,2,3;

virtual extent physical extent         au     disk # disk name
-------------- --------------- ---------- ---------- ------------------------------------------------------------
             0               0       1664          0 DATADG_0001
             0               1       1664          3 DATADG_0000
             0               2       1659          1 DATADG_0003
             1               3       1660          2 DATADG_0002
             1               4       1665          0 DATADG_0001
             1               5       1665          3 DATADG_0000

6 rows selected.

The result shows that only 2 virtual extents were allocated for the 1 MB file: one for the ASM file header and one for the file contents. Note that this file is triple mirrored and coarse striped. You can also create a template that does no mirroring at all, for example:

Connect to the ASM instance

SQL> alter diskgroup datadg add template no_mirroring attributes (UNPROTECTED); 

Diskgroup altered.

SQL> select * from v$asm_template where group_number=3;

GROUP_NUMBER ENTRY_NUMBER REDUNDANCY   STRIPE       SYSTEM               NAME                           PRIMARY_REGION       MIRROR_REGION
------------ ------------ ------------ ------------ -------------------- ------------------------------ -------------------- --------------------
           3           60 MIRROR       COARSE       Y                    PARAMETERFILE                  COLD                 COLD
           3           61 MIRROR       COARSE       Y                    ASMPARAMETERFILE               COLD                 COLD
           3           63 MIRROR       COARSE       Y                    DUMPSET                        COLD                 COLD
           3           64 HIGH         FINE         Y                    CONTROLFILE                    COLD                 COLD
           3           65 MIRROR       COARSE       Y                    FLASHFILE                      COLD                 COLD
           3           66 MIRROR       COARSE       Y                    ARCHIVELOG                     COLD                 COLD
           3           67 MIRROR       COARSE       Y                    ONLINELOG                      COLD                 COLD
           3           68 MIRROR       COARSE       Y                    DATAFILE                       COLD                 COLD
           3           69 MIRROR       COARSE       Y                    TEMPFILE                       COLD                 COLD
           3          170 MIRROR       COARSE       Y                    BACKUPSET                      COLD                 COLD
           3          171 MIRROR       COARSE       Y                    XTRANSPORT BACKUPSET           COLD                 COLD
           3          172 MIRROR       COARSE       Y                    AUTOBACKUP                     COLD                 COLD
           3          173 MIRROR       COARSE       Y                    XTRANSPORT                     COLD                 COLD
           3          174 MIRROR       COARSE       Y                    CHANGETRACKING                 COLD                 COLD
           3          175 MIRROR       COARSE       Y                    FLASHBACK                      COLD                 COLD
           3          176 MIRROR       COARSE       Y                    DATAGUARDCONFIG                COLD                 COLD
           3          177 MIRROR       COARSE       Y                    OCRFILE                        COLD                 COLD
           3          280 HIGH         COARSE       N                    CS_STRIPE_COARSE               COLD                 COLD
           3          281 UNPROT       COARSE       N                    NO_MIRRORING                   COLD                 COLD

19 rows selected.

The row above with NAME=NO_MIRRORING and REDUNDANCY=UNPROT shows that the new template does no mirroring. Connect to the database instance:

SQL> create tablespace not_important datafile '+DATADG(NO_MIRRORING)' size 1m; 

Tablespace created.

SQL> select name from v$datafile where name like '%not_important%';

NAME
--------------------------------------------------------------------------------
+DATADG/jyrac/datafile/not_important.273.931882831

The new datafile's file number is 273. Connect to the ASM instance:

SQL> select x.xnum_kffxp "virtual extent",pxn_kffxp "physical extent",x.au_kffxp "au",x.disk_kffxp "disk #",d.name "disk name"
  2  from x$kffxp x, v$asm_disk_stat d
  3  where x.group_kffxp=d.group_number
  4  and x.disk_kffxp=d.disk_number
  5  and x.group_kffxp=3
  6  and x.number_kffxp=273
  7  order by 1,2,3;

virtual extent physical extent         au     disk # disk name
-------------- --------------- ---------- ---------- ------------------------------------------------------------
             0               0       1661          2 DATADG_0002
             1               1       1660          1 DATADG_0003

The result shows that each virtual extent maps to a single physical extent, so this file is not mirrored (even though it lives in a NORMAL-redundancy disk group).
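The X$KFFXP row counts seen for the three files in this post can be reproduced with simple bookkeeping (a sketch under the assumptions stated in the comments, not Oracle's algorithm):

```python
def physical_extents(virtual_extents, redundancy):
    """Rows expected in X$KFFXP: each virtual extent is stored as 1, 2 or 3
    physical copies depending on the file's redundancy."""
    copies = {"UNPROT": 1, "MIRROR": 2, "HIGH": 3}[redundancy]
    return virtual_extents * copies

# Fine-striped file 271 (controlfile template): 8 virtual extents, HIGH
print(physical_extents(8, "HIGH"))    # 24, matching the 24 rows shown
# Coarse-striped file 272 (CS_STRIPE_COARSE): header + data = 2 extents, HIGH
print(physical_extents(2, "HIGH"))    # 6
# Unprotected file 273 (NO_MIRRORING): 2 virtual extents, 1 copy each
print(physical_extents(2, "UNPROT"))  # 2
```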

Summary:
The template directory holds the file template information for a disk group. Every disk group comes with a default set of system templates, and users can create additional templates of their own as needed. A good use of templates is to create a triple-mirroring template in a NORMAL-redundancy disk group; note that for this to work, the disk group needs at least 3 failure groups (failgroups).

]]>
Oracle ASM Continuing Operations Directory http://www.jydba.net/index.php/archives/1999 Thu, 29 Dec 2016 04:41:16 +0000
Continuing Operations Directory (COD)
The COD tracks long-running ASM operations such as rebalance, drop disk, and create/delete/resize file. The ACD's compact structure is not sufficient to describe these changes, so such operations are tracked through the COD. The COD is ASM file number 4, and every disk group has one. If a process terminates abnormally before a long-running operation completes, a recovery process examines the records in the COD area and attempts to complete or roll back the operation. There are two kinds of continuing operations: background operations and rollback operations.

Background operations
Background operations are carried out by ASM instance background processes as part of disk group maintenance rather than at a specific request, and run until they complete or the ASM instance dies; if the instance dies, the recovering instance must redo the background operation. A disk group rebalance is a good example. We query the internal view X$KFFXP to find the AU distribution of disk group 3's COD; since the COD is ASM file 4, the query sets number_kffxp=4.

SQL> select group_number,name,type from v$asm_diskgroup;

GROUP_NUMBER NAME                           TYPE
------------ ------------------------------ ------------
           1 ARCHDG                         NORMAL
           2 CRSDG                          EXTERN
           3 DATADG                         NORMAL
           4 TESTDG                         NORMAL

SQL> select group_number, disk_number, state, name,failgroup,mount_status from v$asm_disk where group_number=3;

GROUP_NUMBER DISK_NUMBER STATE                          NAME                           FAILGROUP                                                    MOUNT_STATUS
------------ ----------- ------------------------------ ------------------------------ ------------------------------------------------------------ --------------
           3           0 NORMAL                         DATADG_0001                    DATADG_0001                                                  CACHED
           3           3 NORMAL                         DATADG_0000                    DATADG_0000                                                  CACHED
           3           1 NORMAL                         DATADG_0003                    DATADG_0003                                                  CACHED
           3           2 NORMAL                         DATADG_0002                    DATADG_0002                                                  CACHED

SQL> select group_number,disk_number,name,path,state from v$asm_disk where group_number=3;

GROUP_NUMBER DISK_NUMBER NAME                           PATH                           STATE
------------ ----------- ------------------------------ ------------------------------ ------------------------------
           3           0 DATADG_0001                    /dev/raw/raw11                 NORMAL
           3           3 DATADG_0000                    /dev/raw/raw10                 NORMAL
           3           1 DATADG_0003                    /dev/raw/raw4                  NORMAL
           3           2 DATADG_0002                    /dev/raw/raw3                  NORMAL


SQL> select number_kffxp file#, disk_kffxp disk#, count(disk_kffxp) extents
  2  from x$kffxp
  3  where group_kffxp=3
  4   and disk_kffxp <> 65534
  5   and number_kffxp=4
  6  group by number_kffxp, disk_kffxp
  7  order by 1;

     FILE#      DISK#    EXTENTS
---------- ---------- ----------
         4          0          6
         4          1          5
         4          2          6
         4          3          7

As shown above, there are 4 rows for this file #, with the COD occupying 6, 5, 6 and 7 AUs on the respective disks. There are 4 rows because disk group DATADG has 4 disks.

Query the AU distribution of the COD in disk group DATADG:

SQL> select x.xnum_kffxp "virtual extent",pxn_kffxp "physical extent",x.au_kffxp "au",x.disk_kffxp "disk #",d.name "disk name"
  2  from x$kffxp x, v$asm_disk_stat d
  3  where x.group_kffxp=d.group_number
  4  and x.disk_kffxp=d.disk_number
  5  and x.group_kffxp=3
  6  and x.number_kffxp=4
  7  order by 1, 2,3;

virtual extent physical extent         au     disk # disk name
-------------- --------------- ---------- ---------- ------------------------------------------------------------
             0               0         36          2 DATADG_0002
             0               1         35          3 DATADG_0000
             0               2         35          1 DATADG_0003
             1               3         36          3 DATADG_0000
             1               4         37          0 DATADG_0001
             1               5         37          2 DATADG_0002
             2               6         72          1 DATADG_0003
             2               7         71          0 DATADG_0001
             2               8         71          3 DATADG_0000
             3               9         72          0 DATADG_0001
             3              10         72          3 DATADG_0000
             3              11         73          1 DATADG_0003
             4              12         73          2 DATADG_0002
             4              13         73          0 DATADG_0001
             4              14         73          3 DATADG_0000
             5              15         74          3 DATADG_0000
             5              16         74          1 DATADG_0003
             5              17         74          2 DATADG_0002
             6              18         75          1 DATADG_0003
             6              19         75          2 DATADG_0002
             6              20         74          0 DATADG_0001
             7              21         75          0 DATADG_0001
             7              22         76          2 DATADG_0002
             7              23         75          3 DATADG_0000

24 rows selected.

Because disk group DATADG is NORMAL redundancy and has 4 failure groups, the COD information is kept in three copies. In other words, the 3 AUs behind the 3 physical extents of each virtual extent store identical contents.

Viewing the COD's AU distribution with kfed
File 1 always starts at AU 2 of disk 0. Remember this location (disk 0, AU 2): it is the starting point for locating any file in ASM, playing a role rather like a disk's boot area, which bootstraps the OS at power-on. File 1 occupies at least two AUs. Within file 1, each file takes one metadata block that stores that file's extent layout. A metadata block is 4 KB and an AU is 1 MB, so one AU can hold the extent layout of 256 files. In disk 0, AU 2, every entry describes a metadata file: the first metadata block is reserved by the system, and blocks 1 through 255 (255 blocks in all) correspond to files 1 through 255, which covers all of the metadata files. In other words, disk 0 AU 2 holds the extent layout of every metadata file. File 1's second AU holds file 256 in its first block, file 257 in its second block, and so on. Every read from ASM first consults file 1 to find where the target file sits on disk, then reads that file's data. Since the COD is file 4, we read block 4 of AU 2 on disk 0 (/dev/raw/raw11):
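The block arithmetic described above (4 KB metadata blocks, 256 per 1 MB AU) can be sketched in Python; this is an illustration of the layout as described, not an Oracle API:

```python
BLOCK_SIZE    = 4096
BLOCKS_PER_AU = 1024 * 1024 // BLOCK_SIZE  # 256 metadata blocks per 1 MB AU

def file_directory_block(file_number):
    """Return (AU index within ASM file 1, block inside that AU) for a given
    file number. File 1's first AU is disk 0, AU 2 in this layout: files
    1-255 occupy blocks 1-255 of that AU, file 256 starts the second AU
    at block 0, and so on."""
    au_index = file_number // BLOCKS_PER_AU
    block    = file_number % BLOCKS_PER_AU
    return au_index, block

print(file_directory_block(4))    # (0, 4): the COD entry -> AU 2 of disk 0, block 4
print(file_directory_block(256))  # (1, 0): first block of file 1's second AU
```

This is why the kfed command below reads aun=2 blkn=4 on disk 0 to find the COD's extent map.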

[grid@jyrac1 ~]$ kfed read /dev/raw/raw11 aun=2 blkn=4 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            4 ; 0x002: KFBTYP_FILEDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       4 ; 0x004: blk=4
kfbh.block.obj:                       1 ; 0x008: file=1
kfbh.check:                  3953869782 ; 0x00c: 0xebab43d6
kfbh.fcn.base:                      307 ; 0x010: 0x00000133
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfffdb.node.incarn:                   1 ; 0x000: A=1 NUMM=0x0
kfffdb.node.frlist.number:   4294967295 ; 0x004: 0xffffffff
kfffdb.node.frlist.incarn:            0 ; 0x008: A=0 NUMM=0x0
kfffdb.hibytes:                       0 ; 0x00c: 0x00000000
kfffdb.lobytes:                 8331264 ; 0x010: 0x007f2000
kfffdb.xtntcnt:                      24 ; 0x014: 0x00000018
kfffdb.xtnteof:                      24 ; 0x018: 0x00000018
kfffdb.blkSize:                    4096 ; 0x01c: 0x00001000
kfffdb.flags:                         1 ; 0x020: O=1 S=0 S=0 D=0 C=0 I=0 R=0 A=0
kfffdb.fileType:                     15 ; 0x021: 0x0f
kfffdb.dXrs:                         19 ; 0x022: SCHE=0x1 NUMB=0x3
kfffdb.iXrs:                         19 ; 0x023: SCHE=0x1 NUMB=0x3
kfffdb.dXsiz[0]:             4294967295 ; 0x024: 0xffffffff
kfffdb.dXsiz[1]:                      0 ; 0x028: 0x00000000
kfffdb.dXsiz[2]:                      0 ; 0x02c: 0x00000000
kfffdb.iXsiz[0]:             4294967295 ; 0x030: 0xffffffff
kfffdb.iXsiz[1]:                      0 ; 0x034: 0x00000000
kfffdb.iXsiz[2]:                      0 ; 0x038: 0x00000000
kfffdb.xtntblk:                      24 ; 0x03c: 0x0018
kfffdb.break:                        60 ; 0x03e: 0x003c
kfffdb.priZn:                         0 ; 0x040: KFDZN_COLD
kfffdb.secZn:                         0 ; 0x041: KFDZN_COLD
kfffdb.ub2spare:                      0 ; 0x042: 0x0000
kfffdb.alias[0]:             4294967295 ; 0x044: 0xffffffff
kfffdb.alias[1]:             4294967295 ; 0x048: 0xffffffff
kfffdb.strpwdth:                      0 ; 0x04c: 0x00
kfffdb.strpsz:                        0 ; 0x04d: 0x00
kfffdb.usmsz:                         0 ; 0x04e: 0x0000
kfffdb.crets.hi:               33042831 ; 0x050: HOUR=0xf DAYS=0xc MNTH=0xc YEAR=0x7e0
kfffdb.crets.lo:             2457465856 ; 0x054: USEC=0x0 MSEC=0x27d SECS=0x27 MINS=0x24
kfffdb.modts.hi:               33042831 ; 0x058: HOUR=0xf DAYS=0xc MNTH=0xc YEAR=0x7e0
kfffdb.modts.lo:             2457465856 ; 0x05c: USEC=0x0 MSEC=0x27d SECS=0x27 MINS=0x24
kfffdb.dasz[0]:                       0 ; 0x060: 0x00
kfffdb.dasz[1]:                       0 ; 0x061: 0x00
kfffdb.dasz[2]:                       0 ; 0x062: 0x00
kfffdb.dasz[3]:                       0 ; 0x063: 0x00
kfffdb.permissn:                      0 ; 0x064: 0x00
kfffdb.ub1spar1:                      0 ; 0x065: 0x00
kfffdb.ub2spar2:                      0 ; 0x066: 0x0000
kfffdb.user.entnum:                   0 ; 0x068: 0x0000
kfffdb.user.entinc:                   0 ; 0x06a: 0x0000
kfffdb.group.entnum:                  0 ; 0x06c: 0x0000
kfffdb.group.entinc:                  0 ; 0x06e: 0x0000
kfffdb.spare[0]:                      0 ; 0x070: 0x00000000
kfffdb.spare[1]:                      0 ; 0x074: 0x00000000
kfffdb.spare[2]:                      0 ; 0x078: 0x00000000
kfffdb.spare[3]:                      0 ; 0x07c: 0x00000000
kfffdb.spare[4]:                      0 ; 0x080: 0x00000000
kfffdb.spare[5]:                      0 ; 0x084: 0x00000000
kfffdb.spare[6]:                      0 ; 0x088: 0x00000000
kfffdb.spare[7]:                      0 ; 0x08c: 0x00000000
kfffdb.spare[8]:                      0 ; 0x090: 0x00000000
kfffdb.spare[9]:                      0 ; 0x094: 0x00000000
kfffdb.spare[10]:                     0 ; 0x098: 0x00000000
kfffdb.spare[11]:                     0 ; 0x09c: 0x00000000
kfffdb.usm:                             ; 0x0a0: length=0
kfffde[0].xptr.au:                   36 ; 0x4a0: 0x00000024
kfffde[0].xptr.disk:                  2 ; 0x4a4: 0x0002
kfffde[0].xptr.flags:                 0 ; 0x4a6: L=0 E=0 D=0 S=0
kfffde[0].xptr.chk:                  12 ; 0x4a7: 0x0c
kfffde[1].xptr.au:                   35 ; 0x4a8: 0x00000023
kfffde[1].xptr.disk:                  3 ; 0x4ac: 0x0003
kfffde[1].xptr.flags:                 0 ; 0x4ae: L=0 E=0 D=0 S=0
kfffde[1].xptr.chk:                  10 ; 0x4af: 0x0a
kfffde[2].xptr.au:                   35 ; 0x4b0: 0x00000023
kfffde[2].xptr.disk:                  1 ; 0x4b4: 0x0001
kfffde[2].xptr.flags:                 0 ; 0x4b6: L=0 E=0 D=0 S=0
kfffde[2].xptr.chk:                   8 ; 0x4b7: 0x08
kfffde[3].xptr.au:                   36 ; 0x4b8: 0x00000024
kfffde[3].xptr.disk:                  3 ; 0x4bc: 0x0003
kfffde[3].xptr.flags:                 0 ; 0x4be: L=0 E=0 D=0 S=0
kfffde[3].xptr.chk:                  13 ; 0x4bf: 0x0d
kfffde[4].xptr.au:                   37 ; 0x4c0: 0x00000025
kfffde[4].xptr.disk:                  0 ; 0x4c4: 0x0000
kfffde[4].xptr.flags:                 0 ; 0x4c6: L=0 E=0 D=0 S=0
kfffde[4].xptr.chk:                  15 ; 0x4c7: 0x0f
kfffde[5].xptr.au:                   37 ; 0x4c8: 0x00000025
kfffde[5].xptr.disk:                  2 ; 0x4cc: 0x0002
kfffde[5].xptr.flags:                 0 ; 0x4ce: L=0 E=0 D=0 S=0
kfffde[5].xptr.chk:                  13 ; 0x4cf: 0x0d
kfffde[6].xptr.au:                   72 ; 0x4d0: 0x00000048
kfffde[6].xptr.disk:                  1 ; 0x4d4: 0x0001
kfffde[6].xptr.flags:                 0 ; 0x4d6: L=0 E=0 D=0 S=0
kfffde[6].xptr.chk:                  99 ; 0x4d7: 0x63
kfffde[7].xptr.au:                   71 ; 0x4d8: 0x00000047
kfffde[7].xptr.disk:                  0 ; 0x4dc: 0x0000
kfffde[7].xptr.flags:                 0 ; 0x4de: L=0 E=0 D=0 S=0
kfffde[7].xptr.chk:                 109 ; 0x4df: 0x6d
kfffde[8].xptr.au:                   71 ; 0x4e0: 0x00000047
kfffde[8].xptr.disk:                  3 ; 0x4e4: 0x0003
kfffde[8].xptr.flags:                 0 ; 0x4e6: L=0 E=0 D=0 S=0
kfffde[8].xptr.chk:                 110 ; 0x4e7: 0x6e
kfffde[9].xptr.au:                   72 ; 0x4e8: 0x00000048
kfffde[9].xptr.disk:                  0 ; 0x4ec: 0x0000
kfffde[9].xptr.flags:                 0 ; 0x4ee: L=0 E=0 D=0 S=0
kfffde[9].xptr.chk:                  98 ; 0x4ef: 0x62
kfffde[10].xptr.au:                  72 ; 0x4f0: 0x00000048
kfffde[10].xptr.disk:                 3 ; 0x4f4: 0x0003
kfffde[10].xptr.flags:                0 ; 0x4f6: L=0 E=0 D=0 S=0
kfffde[10].xptr.chk:                 97 ; 0x4f7: 0x61
kfffde[11].xptr.au:                  73 ; 0x4f8: 0x00000049
kfffde[11].xptr.disk:                 1 ; 0x4fc: 0x0001
kfffde[11].xptr.flags:                0 ; 0x4fe: L=0 E=0 D=0 S=0
kfffde[11].xptr.chk:                 98 ; 0x4ff: 0x62
kfffde[12].xptr.au:                  73 ; 0x500: 0x00000049
kfffde[12].xptr.disk:                 2 ; 0x504: 0x0002
kfffde[12].xptr.flags:                0 ; 0x506: L=0 E=0 D=0 S=0
kfffde[12].xptr.chk:                 97 ; 0x507: 0x61
kfffde[13].xptr.au:                  73 ; 0x508: 0x00000049
kfffde[13].xptr.disk:                 0 ; 0x50c: 0x0000
kfffde[13].xptr.flags:                0 ; 0x50e: L=0 E=0 D=0 S=0
kfffde[13].xptr.chk:                 99 ; 0x50f: 0x63
kfffde[14].xptr.au:                  73 ; 0x510: 0x00000049
kfffde[14].xptr.disk:                 3 ; 0x514: 0x0003
kfffde[14].xptr.flags:                0 ; 0x516: L=0 E=0 D=0 S=0
kfffde[14].xptr.chk:                 96 ; 0x517: 0x60
kfffde[15].xptr.au:                  74 ; 0x518: 0x0000004a
kfffde[15].xptr.disk:                 3 ; 0x51c: 0x0003
kfffde[15].xptr.flags:                0 ; 0x51e: L=0 E=0 D=0 S=0
kfffde[15].xptr.chk:                 99 ; 0x51f: 0x63
kfffde[16].xptr.au:                  74 ; 0x520: 0x0000004a
kfffde[16].xptr.disk:                 1 ; 0x524: 0x0001
kfffde[16].xptr.flags:                0 ; 0x526: L=0 E=0 D=0 S=0
kfffde[16].xptr.chk:                 97 ; 0x527: 0x61
kfffde[17].xptr.au:                  74 ; 0x528: 0x0000004a
kfffde[17].xptr.disk:                 2 ; 0x52c: 0x0002
kfffde[17].xptr.flags:                0 ; 0x52e: L=0 E=0 D=0 S=0
kfffde[17].xptr.chk:                 98 ; 0x52f: 0x62
kfffde[18].xptr.au:                  75 ; 0x530: 0x0000004b
kfffde[18].xptr.disk:                 1 ; 0x534: 0x0001
kfffde[18].xptr.flags:                0 ; 0x536: L=0 E=0 D=0 S=0
kfffde[18].xptr.chk:                 96 ; 0x537: 0x60
kfffde[19].xptr.au:                  75 ; 0x538: 0x0000004b
kfffde[19].xptr.disk:                 2 ; 0x53c: 0x0002
kfffde[19].xptr.flags:                0 ; 0x53e: L=0 E=0 D=0 S=0
kfffde[19].xptr.chk:                 99 ; 0x53f: 0x63
kfffde[20].xptr.au:                  74 ; 0x540: 0x0000004a
kfffde[20].xptr.disk:                 0 ; 0x544: 0x0000
kfffde[20].xptr.flags:                0 ; 0x546: L=0 E=0 D=0 S=0
kfffde[20].xptr.chk:                 96 ; 0x547: 0x60
kfffde[21].xptr.au:                  75 ; 0x548: 0x0000004b
kfffde[21].xptr.disk:                 0 ; 0x54c: 0x0000
kfffde[21].xptr.flags:                0 ; 0x54e: L=0 E=0 D=0 S=0
kfffde[21].xptr.chk:                 97 ; 0x54f: 0x61
kfffde[22].xptr.au:                  76 ; 0x550: 0x0000004c
kfffde[22].xptr.disk:                 2 ; 0x554: 0x0002
kfffde[22].xptr.flags:                0 ; 0x556: L=0 E=0 D=0 S=0
kfffde[22].xptr.chk:                100 ; 0x557: 0x64
kfffde[23].xptr.au:                  75 ; 0x558: 0x0000004b
kfffde[23].xptr.disk:                 3 ; 0x55c: 0x0003
kfffde[23].xptr.flags:                0 ; 0x55e: L=0 E=0 D=0 S=0
kfffde[23].xptr.chk:                 98 ; 0x55f: 0x62
kfffde[24].xptr.au:          4294967295 ; 0x560: 0xffffffff
kfffde[24].xptr.disk:             65535 ; 0x564: 0xffff
kfffde[24].xptr.flags:                0 ; 0x566: L=0 E=0 D=0 S=0
kfffde[24].xptr.chk:                 42 ; 0x567: 0x2a
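
kfed decodes the packed crets/modts timestamps inline (YEAR/MNTH/DAYS/HOUR in the high word, MINS/SECS/MSEC/USEC in the low word). The bit layout can be inferred from the raw values in the dump above; the following is a minimal Python sketch with field widths inferred from this output, not taken from official documentation:

```python
def decode_asm_ts(hi, lo):
    """Decode an ASM packed timestamp (bit layout inferred from kfed output)."""
    year  = hi >> 14            # e.g. 0x7e0 = 2016
    month = (hi >> 10) & 0xF
    day   = (hi >> 5)  & 0x1F
    hour  = hi & 0x1F
    mins  = lo >> 26
    secs  = (lo >> 20) & 0x3F
    msec  = (lo >> 10) & 0x3FF
    usec  = lo & 0x3FF
    return year, month, day, hour, mins, secs, msec, usec

# kfffdb.crets from the dump: HOUR=0xf DAYS=0xc MNTH=0xc YEAR=0x7e0,
# USEC=0x0 MSEC=0x27d SECS=0x27 MINS=0x24
print(decode_asm_ts(33042831, 2457465856))   # → (2016, 12, 12, 15, 36, 39, 637, 0)
```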

From kfffde[0].xptr.au=36 and kfffde[0].xptr.disk=2 we can see that the COD is stored in AU 36 on disk 2, and so on for the remaining entries; this matches the query results shown earlier.
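
Incidentally, the xptr.chk byte of each extent pointer above is consistent with a simple XOR checksum over the au bytes, the disk bytes, the flags byte, and the constant 0x2A; even the unused kfffde[24] entry (all-0xff au and disk, chk 0x2a) fits, because its bytes cancel out. This is a pattern inferred from the dump, not a documented formula:

```python
def xptr_chk(au, disk, flags=0):
    """XOR-style checksum observed for ASM extent pointers (inferred, not documented)."""
    chk = 0x2A ^ flags
    for shift in (0, 8, 16, 24):        # four bytes of the AU number
        chk ^= (au >> shift) & 0xFF
    for shift in (0, 8):                # two bytes of the disk number
        chk ^= (disk >> shift) & 0xFF
    return chk

# Values taken from the kfffde[] entries above
print(hex(xptr_chk(36, 2)))               # kfffde[0]  → 0xc
print(hex(xptr_chk(72, 1)))               # kfffde[6]  → 0x63
print(hex(xptr_chk(0xffffffff, 0xffff)))  # kfffde[24] → 0x2a
```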

Next, use the kfed tool to verify the contents of the three AUs corresponding to the three physical extents of virtual extent 0.

[grid@jyrac1 ~]$ kfed read /dev/raw/raw3 aun=36 blkn=0 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            9 ; 0x002: KFBTYP_COD_BGO
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       4 ; 0x008: file=4
kfbh.check:                    17403005 ; 0x00c: 0x01098c7d
kfbh.fcn.base:                     3704 ; 0x010: 0x00000e78
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfrcbg.size:                          0 ; 0x000: 0x0000
kfrcbg.op:                            0 ; 0x002: 0x0000
kfrcbg.inum:                          0 ; 0x004: 0x00000000
kfrcbg.iser:                          0 ; 0x008: 0x00000000
[grid@jyrac1 ~]$ kfed read /dev/raw/raw10 aun=35 blkn=0 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            9 ; 0x002: KFBTYP_COD_BGO
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       4 ; 0x008: file=4
kfbh.check:                    17403005 ; 0x00c: 0x01098c7d
kfbh.fcn.base:                     3704 ; 0x010: 0x00000e78
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfrcbg.size:                          0 ; 0x000: 0x0000
kfrcbg.op:                            0 ; 0x002: 0x0000
kfrcbg.inum:                          0 ; 0x004: 0x00000000
kfrcbg.iser:                          0 ; 0x008: 0x00000000
[grid@jyrac1 ~]$ kfed read /dev/raw/raw4 aun=35 blkn=0 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            9 ; 0x002: KFBTYP_COD_BGO
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       4 ; 0x008: file=4
kfbh.check:                    17403005 ; 0x00c: 0x01098c7d
kfbh.fcn.base:                     3704 ; 0x010: 0x00000e78
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfrcbg.size:                          0 ; 0x000: 0x0000
kfrcbg.op:                            0 ; 0x002: 0x0000 -- background operation code: 0 means no background operation is in progress, 1 means a rebalance operation is in progress
kfrcbg.inum:                          0 ; 0x004: 0x00000000 -- the ASM instance number on which the background operation runs
kfrcbg.iser:                          0 ; 0x008: 0x00000000

From the output above we can see that the three AUs corresponding to the three physical extents of virtual extent 0 (AU 36 on disk 2 [/dev/raw/raw3], AU 35 on disk 3 [/dev/raw/raw10], and AU 35 on disk 1 [/dev/raw/raw4]) hold identical contents. The dump shows a COD block; kfbh.type=KFBTYP_COD_BGO indicates a background-type operation, but no background operation is active at this moment, since all of the kfrcbg fields are 0. If the operation code kfrcbg.op were 1, an active disk rebalance operation would be in progress.

Rollback Operations
Rollback-type operations resemble database transactions. When an ASM foreground process initiates a request, it must acquire a slot in the ASM COD directory to record the rollback operation. Block 1 of the COD directory shows all slots and their usage state; if all slots are busy at the time, the operation sleeps for a while until one becomes free. While a rollback-type operation is in progress the disk group is in an inconsistent state, and the operation must either complete or roll back all of its changes to the disk group. Database instances perform such operations frequently (for example, adding a data file). If the database instance or the ASM foreground process dies, an unrecoverable error occurs and the operation is aborted. File creation is a good example of a rollback operation: if an error occurs while allocating space for the file, the space already allocated must be freed; if the database instance does not commit the file creation, the file must be deleted automatically; and if the ASM instance dies, the deletion is carried out by the recovering instance.
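
The slot-acquisition behaviour described above can be sketched as follows. This is an illustrative model only, not ASM code; the 15-slot count mirrors the kfrcrb[0..14] entries visible in the block 1 dump below, while the retry delay and attempt limit are arbitrary assumptions:

```python
import time

NUM_SLOTS = 15   # matches the kfrcrb[0..14] slots shown in the COD block 1 dump

def acquire_cod_slot(slots, opcode, retry_delay=0.1, max_tries=50):
    """Find a free rollback-COD slot (opcode == 0); sleep and retry while all are busy."""
    for _ in range(max_tries):
        for i, op in enumerate(slots):
            if op == 0:              # free slot found
                slots[i] = opcode    # record the active operation in the slot
                return i
        time.sleep(retry_delay)      # all slots busy: back off and retry
    raise RuntimeError("no free COD slot")

slots = [0] * NUM_SLOTS
print(acquire_cod_slot(slots, opcode=1))  # → 0 (opcode 1 = file creation)
```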

Use kfed to examine block 1 of the COD.
First, create a data file:

SQL> create tablespace jycs datafile '+DATADG/jyrac/datafile/jycs01.dbf' size 1G ; 

Then examine block 1 of the COD:

[grid@jyrac1 ~]$ kfed read /dev/raw/raw3 aun=36 blkn=1 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           15 ; 0x002: KFBTYP_COD_RBO -- block type; 15 is KFBTYP_COD_RBO, RBO being short for rollback operation
kfbh.datfmt:                          2 ; 0x003: 0x02
kfbh.block.blk:                       1 ; 0x004: blk=1 -- the block number of this metadata block
kfbh.block.obj:                       4 ; 0x008: file=4
kfbh.check:                    34575077 ; 0x00c: 0x020f92e5
kfbh.fcn.base:                     4320 ; 0x010: 0x000010e0
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
-- kfrcrb: rb stands for rollback
kfrcrb[0].opcode:                     1 ; 0x000: 0x0001 -- the specific operation code; it can take many values (see the table below)
kfrcrb[1].opcode:                     0 ; 0x002: 0x0000
kfrcrb[2].opcode:                     0 ; 0x004: 0x0000
kfrcrb[3].opcode:                     0 ; 0x006: 0x0000
kfrcrb[4].opcode:                     0 ; 0x008: 0x0000
kfrcrb[5].opcode:                     0 ; 0x00a: 0x0000
kfrcrb[6].opcode:                     0 ; 0x00c: 0x0000
kfrcrb[7].opcode:                     0 ; 0x00e: 0x0000
kfrcrb[8].opcode:                     0 ; 0x010: 0x0000
kfrcrb[9].opcode:                     0 ; 0x012: 0x0000
kfrcrb[10].opcode:                    0 ; 0x014: 0x0000
kfrcrb[11].opcode:                    0 ; 0x016: 0x0000
kfrcrb[12].opcode:                    0 ; 0x018: 0x0000
kfrcrb[13].opcode:                    0 ; 0x01a: 0x0000
kfrcrb[14].opcode:                    0 ; 0x01c: 0x0000

The kfrcrb[i] slots track all active rollback-type operations. The output above shows one operation in progress: kfrcrb[0] is 1, and from the operation code we know this is a file creation. The reference table of rollback operation codes is:

1 - Create a file
2 - Delete a file
3 - Resize a file
4 - Drop alias entry
5 - Rename alias entry
6 - Rebalance space COD
7 - Drop disks force
8 - Attribute drop
9 - Disk Resync
10 - Disk Repair Time
11 - Volume create
12 - Volume delete
13 - Attribute directory creation
14 - Set zone attributes
15 - User drop
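
For convenience, here is the 11g code table above as a Python dictionary (a hypothetical helper; the values are taken verbatim from the list):

```python
# Rollback-COD operation codes (11g), from the table above
COD_OPCODES = {
    1:  "Create a file",
    2:  "Delete a file",
    3:  "Resize a file",
    4:  "Drop alias entry",
    5:  "Rename alias entry",
    6:  "Rebalance space COD",
    7:  "Drop disks force",
    8:  "Attribute drop",
    9:  "Disk Resync",
    10: "Disk Repair Time",
    11: "Volume create",
    12: "Volume delete",
    13: "Attribute directory creation",
    14: "Set zone attributes",
    15: "User drop",
}

# kfrcrb[0].opcode = 1 in the dump above:
print(COD_OPCODES[1])   # → Create a file
```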

The output above is from 11g. In 10g the kfrcrb[i] structure is different: each entry carries the additional fields kfrcrb[i].inum, kfrcrb[i].iser, and kfrcrb[i].pnum. For example:

[oracle@jyrac3 ~]$ kfed read /dev/raw/raw5 aun=7 blkn=1 aus=16777216 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           15 ; 0x002: KFBTYP_COD_RBO
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       1 ; 0x004: T=0 NUMB=0x1
kfbh.block.obj:                       4 ; 0x008: TYPE=0x0 NUMB=0x4
kfbh.check:                    17797779 ; 0x00c: 0x010f9293
kfbh.fcn.base:                     4247 ; 0x010: 0x00001097
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfrcrb[0].opcode:                     0 ; 0x000: 0x0000 -- the specific operation code
kfrcrb[0].inum:                       0 ; 0x002: 0x0000 -- the ASM instance number
kfrcrb[0].iser:                       0 ; 0x004: 0x00000000
kfrcrb[0].pnum:                       0 ; 0x008: 0x00000000
kfrcrb[1].opcode:                     0 ; 0x00c: 0x0000
kfrcrb[1].inum:                       0 ; 0x00e: 0x0000
kfrcrb[1].iser:                       0 ; 0x010: 0x00000000
kfrcrb[1].pnum:                       0 ; 0x014: 0x00000000
kfrcrb[2].opcode:                     0 ; 0x018: 0x0000
kfrcrb[2].inum:                       0 ; 0x01a: 0x0000
kfrcrb[2].iser:                       0 ; 0x01c: 0x00000000
kfrcrb[2].pnum:                       0 ; 0x020: 0x00000000
kfrcrb[3].opcode:                     0 ; 0x024: 0x0000
kfrcrb[3].inum:                       0 ; 0x026: 0x0000
kfrcrb[3].iser:                       0 ; 0x028: 0x00000000
kfrcrb[3].pnum:                       0 ; 0x02c: 0x00000000
kfrcrb[4].opcode:                     0 ; 0x030: 0x0000
kfrcrb[4].inum:                       0 ; 0x032: 0x0000
kfrcrb[4].iser:                       0 ; 0x034: 0x00000000
kfrcrb[4].pnum:                       0 ; 0x038: 0x00000000
kfrcrb[5].opcode:                     0 ; 0x03c: 0x0000
kfrcrb[5].inum:                       0 ; 0x03e: 0x0000
kfrcrb[5].iser:                       0 ; 0x040: 0x00000000
kfrcrb[5].pnum:                       0 ; 0x044: 0x00000000
kfrcrb[6].opcode:                     0 ; 0x048: 0x0000
kfrcrb[6].inum:                       0 ; 0x04a: 0x0000
kfrcrb[6].iser:                       0 ; 0x04c: 0x00000000
kfrcrb[6].pnum:                       0 ; 0x050: 0x00000000
kfrcrb[7].opcode:                     0 ; 0x054: 0x0000
kfrcrb[7].inum:                       0 ; 0x056: 0x0000
kfrcrb[7].iser:                       0 ; 0x058: 0x00000000
kfrcrb[7].pnum:                       0 ; 0x05c: 0x00000000
kfrcrb[8].opcode:                     0 ; 0x060: 0x0000
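
The extra fields make each 10g kfrcrb entry 12 bytes wide (opcode 2 bytes + inum 2 + iser 4 + pnum 4), which matches the kfed offsets in the dump above: kfrcrb[1].opcode at 0x00c, kfrcrb[2].opcode at 0x018, and so on. A quick check of that layout:

```python
# 10g kfrcrb entry layout: opcode (2 bytes) + inum (2) + iser (4) + pnum (4) = 12 bytes
ENTRY_SIZE = 2 + 2 + 4 + 4

def kfrcrb_offset(i, field="opcode"):
    """Byte offset of kfrcrb[i].field, matching the kfed offsets above."""
    field_off = {"opcode": 0, "inum": 2, "iser": 4, "pnum": 8}[field]
    return i * ENTRY_SIZE + field_off

print(hex(kfrcrb_offset(1)))          # → 0xc   (kfrcrb[1].opcode ; 0x00c)
print(hex(kfrcrb_offset(2, "inum")))  # → 0x1a  (kfrcrb[2].inum   ; 0x01a)
```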

Next come the COD data blocks:

[grid@jyrac1 ~]$ kfed read /dev/raw/raw3 aun=36 blkn=2 
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           16 ; 0x002: KFBTYP_COD_DATA
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       2 ; 0x004: blk=2
kfbh.block.obj:                       4 ; 0x008: file=4
kfbh.check:                   916174568 ; 0x00c: 0x369bb6e8
kfbh.fcn.base:                     4320 ; 0x010: 0x000010e0
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000

This part is the COD data content. As shown above, the block contains essentially only header information; the only fields that get updated are the check value and fcn.base.

Summary:
The ASM COD directory tracks all long-running ASM operations. If such an operation fails for any reason, its COD record allows the operation to be completed or rolled back, either by another instance or by the failed instance after it restarts.

]]>
http://www.jydba.net/index.php/archives/1999/feed 0
Oracle ASM Active Change Directory http://www.jydba.net/index.php/archives/1997 http://www.jydba.net/index.php/archives/1997#respond Thu, 29 Dec 2016 01:57:25 +0000 http://www.jydba.net/?p=1997 Active Change Directory(ACD)
When ASM performs an atomic change across multiple data structures, its Active Change Directory (ACD for short) records the corresponding log entries, much like the redo log of an RDBMS. The ACD is ASM file number 3. Each log record is written in a single IO to guarantee that the operation is atomic. The ACD is divided into chunks, or threads: every running ASM instance gets a 42MB ACD chunk. When a disk group is created, a single chunk is allocated for the ACD; as more instances mount the disk group, the number of ACD chunks grows proportionally, and each instance uses its own ACD chunk.

The ACD contains the following:
.ACDC – ACD checkpoint
.ABA – ACD block address
.LGE – ACD redo log record
.BCD – ACD block change descriptor

Locating the ASM Active Change Directory
The AUs occupied by the ACD can be found by querying the X$KFFXP view. Since the ACD is file number 3, the query filters on number_kffxp=3.

SQL> select group_number,name,type from v$asm_diskgroup;

GROUP_NUMBER NAME                           TYPE
------------ ------------------------------ ------------
           1 ARCHDG                         NORMAL
           2 CRSDG                          EXTERN
           3 DATADG                         NORMAL

SQL> select group_number,disk_number,name,path,state from v$asm_disk where group_number=3;

GROUP_NUMBER DISK_NUMBER NAME                           PATH                           STATE
------------ ----------- ------------------------------ ------------------------------ ------------------------------
           3           0 DATADG_0001                    /dev/raw/raw11                 NORMAL
           3           3 DATADG_0000                    /dev/raw/raw10                 NORMAL
           3           1 DATADG_0003                    /dev/raw/raw4                  NORMAL
           3           2 DATADG_0002                    /dev/raw/raw3                  NORMAL

SQL> select number_kffxp file#, disk_kffxp disk#, count(disk_kffxp) extents
  2  from x$kffxp
  3  where group_kffxp=3
  4   and disk_kffxp <> 65534
  5   and number_kffxp=3
  6  group by number_kffxp, disk_kffxp
  7  order by 1;

     FILE#      DISK#    EXTENTS
---------- ---------- ----------
         3          0         64
         3          1         63
         3          2         64
         3          3         64

File 3 is the active change directory; in total it occupies 255 AUs (64+63+64+64 across the four disks).
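
The figure of 255 AUs is consistent with the 42MB-per-instance chunk size described earlier. Assuming 1MB AUs, and noting that each ACD virtual extent in this normal-redundancy disk group maps to three physical extents (as the extent map below shows), 255 physical AUs correspond to 85 logical AUs, enough for two 42MB chunks (one per mounted instance) plus, presumably, one extra AU. A hedged back-of-the-envelope check:

```python
COPIES = 3            # each virtual extent below maps to 3 physical extents
CHUNK_MB = 42         # ACD chunk size per ASM instance, assuming 1MB AUs

physical_aus = 64 + 63 + 64 + 64      # per-disk extent counts from the query above
logical_aus = physical_aus // COPIES  # logical ACD size in AUs
instances = logical_aus // CHUNK_MB   # number of 42MB chunks that fit

print(physical_aus, logical_aus, instances)   # → 255 85 2
```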

SQL> select x.xnum_kffxp "virtual extent",pxn_kffxp "physical extent",x.au_kffxp "au",x.disk_kffxp "disk #",d.name "disk name"
  2  from x$kffxp x, v$asm_disk_stat d
  3  where x.group_kffxp=d.group_number
  4  and x.disk_kffxp=d.disk_number
  5  and x.group_kffxp=3
  6  and x.number_kffxp=3
  7  order by 1, 2,3;

virtual extent physical extent         au     disk # disk name
-------------- --------------- ---------- ---------- ------------------------------------------------------------
             0               0          2          3 DATADG_0000
             0               1          4          0 DATADG_0001
             0               2          4          2 DATADG_0002
             1               3          4          1 DATADG_0003
             1               4          3          3 DATADG_0000
             1               5          5          0 DATADG_0001
             2               6          6          0 DATADG_0001
             2               7          5          2 DATADG_0002
             2               8          4          3 DATADG_0000
             3               9          6          2 DATADG_0002
             3              10          7          0 DATADG_0001
             3              11          5          1 DATADG_0003
             4              12          5          3 DATADG_0000
             4              13          6          1 DATADG_0003
             4              14          8          0 DATADG_0001
             5              15          7          1 DATADG_0003
             5              16          7          2 DATADG_0002
             5              17          6          3 DATADG_0000
             6              18          9          0 DATADG_0001
             6              19          8          1 DATADG_0003
             6              20          8          2 DATADG_0002
             7              21          9          2 DATADG_0002
             7              22          7          3 DATADG_0000
             7              23         10          0 DATADG_0001
             8              24          8          3 DATADG_0000
             8              25         10          2 DATADG_0002
             8              26          9          1 DATADG_0003
             9              27         10          1 DATADG_0003
             9              28         11          0 DATADG_0001
             9              29         11          2 DATADG_0002
            10              30         12          0 DATADG_0001
            10              31          9          3 DATADG_0000
            10              32         11          1 DATADG_0003
            11              33         12          2 DATADG_0002
            11              34         12          1 DATADG_0003
            11              35         10          3 DATADG_0000
            12              36         11          3 DATADG_0000
            12              37         13          0 DATADG_0001
            12              38         13          2 DATADG_0002
            13              39         13          1 DATADG_0003
            13              40         12          3 DATADG_0000
            13              41         14          0 DATADG_0001
            14              42         15          0 DATADG_0001
            14              43         14          2 DATADG_0002
            14              44         13          3 DATADG_0000
            15              45         15          2 DATADG_0002
            15              46         16          0 DATADG_0001
            15              47         14          1 DATADG_0003
            16              48         14          3 DATADG_0000
            16              49         15          1 DATADG_0003
            16              50         17          0 DATADG_0001
            17              51         16          1 DATADG_0003
            17              52         16          2 DATADG_0002
            17              53         15          3 DATADG_0000
            18              54         18          0 DATADG_0001
            18              55         17          1 DATADG_0003
            18              56         17          2 DATADG_0002
            19              57         18          2 DATADG_0002
            19              58         16          3 DATADG_0000
            19              59         19          0 DATADG_0001
            20              60         18          3 DATADG_0000
            20              61         20          2 DATADG_0002
            20              62         18          1 DATADG_0003
            21              63         19          1 DATADG_0003
            21              64         21          0 DATADG_0001
            21              65         21          2 DATADG_0002
            22              66         22          0 DATADG_0001
            22              67         19          3 DATADG_0000
            22              68         20          1 DATADG_0003
            23              69         22          2 DATADG_0002
            23              70         21          1 DATADG_0003
            23              71         20          3 DATADG_0000
            24              72         21          3 DATADG_0000
            24              73         23          0 DATADG_0001
            24              74         23          2 DATADG_0002
            25              75         22          1 DATADG_0003
            25              76         22          3 DATADG_0000
            25              77         24          0 DATADG_0001
            26              78         25          0 DATADG_0001
            26              79         24          2 DATADG_0002
            26              80         23          3 DATADG_0000
            27              81         25          2 DATADG_0002
            27              82         26          0 DATADG_0001
            27              83         23          1 DATADG_0003
            28              84         24          3 DATADG_0000
            28              85         24          1 DATADG_0003
            28              86         27          0 DATADG_0001
            29              87         25          1 DATADG_0003
            29              88         26          2 DATADG_0002
            29              89         25          3 DATADG_0000
            30              90         28          0 DATADG_0001
            30              91         26          1 DATADG_0003
            30              92         27          2 DATADG_0002
            31              93         28          2 DATADG_0002
            31              94         26          3 DATADG_0000
            31              95         29          0 DATADG_0001
            32              96         27          3 DATADG_0000
            32              97         29          2 DATADG_0002
            32              98         27          1 DATADG_0003
            33              99         28          1 DATADG_0003
            33             100         30          0 DATADG_0001
            33             101         30          2 DATADG_0002
            34             102         31          0 DATADG_0001
            34             103         28          3 DATADG_0000
            34             104         29          1 DATADG_0003
            35             105         31          2 DATADG_0002
            35             106         30          1 DATADG_0003
            35             107         29          3 DATADG_0000
            36             108         30          3 DATADG_0000
            36             109         32          0 DATADG_0001
            36             110         32          2 DATADG_0002
            37             111         31          1 DATADG_0003
            37             112         31          3 DATADG_0000
            37             113         33          0 DATADG_0001
            38             114         34          0 DATADG_0001
            38             115         33          2 DATADG_0002
            38             116         32          3 DATADG_0000
            39             117         34          2 DATADG_0002
            39             118         35          0 DATADG_0001
            39             119         32          1 DATADG_0003
            40             120         33          3 DATADG_0000
            40             121         33          1 DATADG_0003
            40             122         36          0 DATADG_0001
            41             123         34          1 DATADG_0003
            41             124         35          2 DATADG_0002
            41             125         34          3 DATADG_0000
            42             126         40          0 DATADG_0001
            42             127         40          1 DATADG_0003
            42             128         40          3 DATADG_0000
            43             129         41          2 DATADG_0002
            43             130         41          3 DATADG_0000
            43             131         41          0 DATADG_0001
            44             132         42          3 DATADG_0000
            44             133         42          2 DATADG_0002
            44             134         41          1 DATADG_0003
            45             135         42          1 DATADG_0003
            45             136         42          0 DATADG_0001
            45             137         43          2 DATADG_0002
            46             138         43          0 DATADG_0001
            46             139         43          1 DATADG_0003
            46             140         44          2 DATADG_0002
            47             141         45          2 DATADG_0002
            47             142         44          1 DATADG_0003
            47             143         43          3 DATADG_0000
            48             144         44          3 DATADG_0000
            48             145         44          0 DATADG_0001
            48             146         46          2 DATADG_0002
            49             147         45          1 DATADG_0003
            49             148         45          0 DATADG_0001
            49             149         47          2 DATADG_0002
            50             150         46          0 DATADG_0001
            50             151         46          1 DATADG_0003
            50             152         45          3 DATADG_0000
            51             153         48          2 DATADG_0002
            51             154         47          0 DATADG_0001
            51             155         47          1 DATADG_0003
            52             156         46          3 DATADG_0000
            52             157         48          1 DATADG_0003
            52             158         48          0 DATADG_0001
            53             159         49          1 DATADG_0003
            53             160         47          3 DATADG_0000
            53             161         49          0 DATADG_0001
            54             162         50          0 DATADG_0001
            54             163         49          2 DATADG_0002
            54             164         50          1 DATADG_0003
            55             165         50          2 DATADG_0002
            55             166         48          3 DATADG_0000
            55             167         51          0 DATADG_0001
            56             168         49          3 DATADG_0000
            56             169         51          2 DATADG_0002
            56             170         51          1 DATADG_0003
            57             171         52          1 DATADG_0003
            57             172         52          2 DATADG_0002
            57             173         50          3 DATADG_0000
            58             174         52          0 DATADG_0001
            58             175         51          3 DATADG_0000
            58             176         53          2 DATADG_0002
            59             177         54          2 DATADG_0002
            59             178         53          1 DATADG_0003
            59             179         52          3 DATADG_0000
            60             180         53          3 DATADG_0000
            60             181         53          0 DATADG_0001
            60             182         55          2 DATADG_0002
            61             183         54          1 DATADG_0003
            61             184         54          0 DATADG_0001
            61             185         56          2 DATADG_0002
            62             186         55          0 DATADG_0001
            62             187         55          1 DATADG_0003
            62             188         54          3 DATADG_0000
            63             189         57          2 DATADG_0002
            63             190         56          0 DATADG_0001
            63             191         56          1 DATADG_0003
            64             192         55          3 DATADG_0000
            64             193         57          1 DATADG_0003
            64             194         57          0 DATADG_0001
            65             195         58          1 DATADG_0003
            65             196         56          3 DATADG_0000
            65             197         58          0 DATADG_0001
            66             198         59          0 DATADG_0001
            66             199         58          2 DATADG_0002
            66             200         59          1 DATADG_0003
            67             201         59          2 DATADG_0002
            67             202         57          3 DATADG_0000
            67             203         60          0 DATADG_0001
            68             204         58          3 DATADG_0000
            68             205         60          2 DATADG_0002
            68             206         60          1 DATADG_0003
            69             207         61          1 DATADG_0003
            69             208         61          2 DATADG_0002
            69             209         59          3 DATADG_0000
            70             210         61          0 DATADG_0001
            70             211         60          3 DATADG_0000
            70             212         62          2 DATADG_0002
            71             213         63          2 DATADG_0002
            71             214         62          1 DATADG_0003
            71             215         61          3 DATADG_0000
            72             216         62          3 DATADG_0000
            72             217         62          0 DATADG_0001
            72             218         64          2 DATADG_0002
            73             219         63          1 DATADG_0003
            73             220         63          0 DATADG_0001
            73             221         65          2 DATADG_0002
            74             222         64          0 DATADG_0001
            74             223         64          1 DATADG_0003
            74             224         63          3 DATADG_0000
            75             225         66          2 DATADG_0002
            75             226         65          0 DATADG_0001
            75             227         65          1 DATADG_0003
            76             228         64          3 DATADG_0000
            76             229         66          1 DATADG_0003
            76             230         66          0 DATADG_0001
            77             231         67          1 DATADG_0003
            77             232         65          3 DATADG_0000
            77             233         67          0 DATADG_0001
            78             234         68          0 DATADG_0001
            78             235         67          2 DATADG_0002
            78             236         68          1 DATADG_0003
            79             237         68          2 DATADG_0002
            79             238         66          3 DATADG_0000
            79             239         69          0 DATADG_0001
            80             240         67          3 DATADG_0000
            80             241         69          2 DATADG_0002
            80             242         69          1 DATADG_0003
            81             243         70          1 DATADG_0003
            81             244         70          2 DATADG_0002
            81             245         68          3 DATADG_0000
            82             246         70          0 DATADG_0001
            82             247         69          3 DATADG_0000
            82             248         71          2 DATADG_0002
            83             249         72          2 DATADG_0002
            83             250         71          1 DATADG_0003
            83             251         70          3 DATADG_0000
    2147483648               0         20          0 DATADG_0001
    2147483648               1         17          3 DATADG_0000
    2147483648               2         19          2 DATADG_0002

255 rows selected.

The query returned 255 rows, i.e. 255 AUs. Three of them have a virtual extent number of 2147483648, which marks AUs that are allocated but not yet formatted, so the number actually in use is 255-3=252, matching virtual extent numbers 0-83. Because disk group DATADG uses normal redundancy and has three or more failure groups, the ACD is triple mirrored: 252/3=84 virtual extents. With 2 ASM instances, each instance owns 42 AUs; at an AU size of 1MB that is 42MB per instance.
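As a quick check, the AU accounting above can be reproduced directly (all numbers are taken from the query output):

```python
# Sketch of the ACD AU accounting; values come from the X$KFFXP query above.
total_rows = 255     # rows returned for ASM file 3
unformatted = 3      # AUs whose virtual extent number is 2147483648
mirrors = 3          # ACD is triple mirrored in this disk group
asm_instances = 2
au_size_mb = 1

used_aus = total_rows - unformatted                   # 252 AUs actually in use
virtual_extents = used_aus // mirrors                 # 84 virtual extents (0-83)
per_instance_aus = virtual_extents // asm_instances   # 42 AUs per ASM instance

print(used_aus, virtual_extents, per_instance_aus * au_size_mb)  # 252 84 42
```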

Reading the metadata with kfed
First, read block 3 of AU 2 on disk 0 (/dev/raw/raw11) of disk group DATADG to locate the AU that holds the Active Change Directory (block 3 of the file directory describes ASM file 3, the ACD).

[grid@jyrac1 ~]$ kfed read /dev/raw/raw11 aun=2 blkn=3 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            4 ; 0x002: KFBTYP_FILEDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       3 ; 0x004: blk=3
kfbh.block.obj:                       1 ; 0x008: file=1
kfbh.check:                    40592047 ; 0x00c: 0x026b62af
kfbh.fcn.base:                      121 ; 0x010: 0x00000079
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfffdb.node.incarn:                   1 ; 0x000: A=1 NUMM=0x0
kfffdb.node.frlist.number:   4294967295 ; 0x004: 0xffffffff
kfffdb.node.frlist.incarn:            0 ; 0x008: A=0 NUMM=0x0
kfffdb.hibytes:                       0 ; 0x00c: 0x00000000
kfffdb.lobytes:                88080384 ; 0x010: 0x05400000
kfffdb.xtntcnt:                     252 ; 0x014: 0x000000fc
kfffdb.xtnteof:                     252 ; 0x018: 0x000000fc
kfffdb.blkSize:                    4096 ; 0x01c: 0x00001000
kfffdb.flags:                         1 ; 0x020: O=1 S=0 S=0 D=0 C=0 I=0 R=0 A=0
kfffdb.fileType:                     15 ; 0x021: 0x0f
kfffdb.dXrs:                         19 ; 0x022: SCHE=0x1 NUMB=0x3
kfffdb.iXrs:                         19 ; 0x023: SCHE=0x1 NUMB=0x3
kfffdb.dXsiz[0]:             4294967295 ; 0x024: 0xffffffff
kfffdb.dXsiz[1]:                      0 ; 0x028: 0x00000000
kfffdb.dXsiz[2]:                      0 ; 0x02c: 0x00000000
kfffdb.iXsiz[0]:             4294967295 ; 0x030: 0xffffffff
kfffdb.iXsiz[1]:                      0 ; 0x034: 0x00000000
kfffdb.iXsiz[2]:                      0 ; 0x038: 0x00000000
kfffdb.xtntblk:                      63 ; 0x03c: 0x003f
kfffdb.break:                        60 ; 0x03e: 0x003c
kfffdb.priZn:                         0 ; 0x040: KFDZN_COLD
kfffdb.secZn:                         0 ; 0x041: KFDZN_COLD
kfffdb.ub2spare:                      0 ; 0x042: 0x0000
kfffdb.alias[0]:             4294967295 ; 0x044: 0xffffffff
kfffdb.alias[1]:             4294967295 ; 0x048: 0xffffffff
kfffdb.strpwdth:                      0 ; 0x04c: 0x00
kfffdb.strpsz:                        0 ; 0x04d: 0x00
kfffdb.usmsz:                         0 ; 0x04e: 0x0000
kfffdb.crets.hi:               33042831 ; 0x050: HOUR=0xf DAYS=0xc MNTH=0xc YEAR=0x7e0
kfffdb.crets.lo:             2457465856 ; 0x054: USEC=0x0 MSEC=0x27d SECS=0x27 MINS=0x24
kfffdb.modts.hi:               33042831 ; 0x058: HOUR=0xf DAYS=0xc MNTH=0xc YEAR=0x7e0
kfffdb.modts.lo:             2457465856 ; 0x05c: USEC=0x0 MSEC=0x27d SECS=0x27 MINS=0x24
kfffdb.dasz[0]:                       0 ; 0x060: 0x00
kfffdb.dasz[1]:                       0 ; 0x061: 0x00
kfffdb.dasz[2]:                       0 ; 0x062: 0x00
kfffdb.dasz[3]:                       0 ; 0x063: 0x00
kfffdb.permissn:                      0 ; 0x064: 0x00
kfffdb.ub1spar1:                      0 ; 0x065: 0x00
kfffdb.ub2spar2:                      0 ; 0x066: 0x0000
kfffdb.user.entnum:                   0 ; 0x068: 0x0000
kfffdb.user.entinc:                   0 ; 0x06a: 0x0000
kfffdb.group.entnum:                  0 ; 0x06c: 0x0000
kfffdb.group.entinc:                  0 ; 0x06e: 0x0000
kfffdb.spare[0]:                      0 ; 0x070: 0x00000000
kfffdb.spare[1]:                      0 ; 0x074: 0x00000000
kfffdb.spare[2]:                      0 ; 0x078: 0x00000000
kfffdb.spare[3]:                      0 ; 0x07c: 0x00000000
kfffdb.spare[4]:                      0 ; 0x080: 0x00000000
kfffdb.spare[5]:                      0 ; 0x084: 0x00000000
kfffdb.spare[6]:                      0 ; 0x088: 0x00000000
kfffdb.spare[7]:                      0 ; 0x08c: 0x00000000
kfffdb.spare[8]:                      0 ; 0x090: 0x00000000
kfffdb.spare[9]:                      0 ; 0x094: 0x00000000
kfffdb.spare[10]:                     0 ; 0x098: 0x00000000
kfffdb.spare[11]:                     0 ; 0x09c: 0x00000000
kfffdb.usm:                             ; 0x0a0: length=0
kfffde[0].xptr.au:                    2 ; 0x4a0: 0x00000002
kfffde[0].xptr.disk:                  3 ; 0x4a4: 0x0003
kfffde[0].xptr.flags:                 0 ; 0x4a6: L=0 E=0 D=0 S=0
kfffde[0].xptr.chk:                  43 ; 0x4a7: 0x2b
kfffde[1].xptr.au:                    4 ; 0x4a8: 0x00000004
kfffde[1].xptr.disk:                  0 ; 0x4ac: 0x0000
kfffde[1].xptr.flags:                 0 ; 0x4ae: L=0 E=0 D=0 S=0
kfffde[1].xptr.chk:                  46 ; 0x4af: 0x2e
kfffde[2].xptr.au:                    4 ; 0x4b0: 0x00000004
kfffde[2].xptr.disk:                  2 ; 0x4b4: 0x0002
kfffde[2].xptr.flags:                 0 ; 0x4b6: L=0 E=0 D=0 S=0
kfffde[2].xptr.chk:                  44 ; 0x4b7: 0x2c
kfffde[3].xptr.au:                    4 ; 0x4b8: 0x00000004
kfffde[3].xptr.disk:                  1 ; 0x4bc: 0x0001
kfffde[3].xptr.flags:                 0 ; 0x4be: L=0 E=0 D=0 S=0
kfffde[3].xptr.chk:                  47 ; 0x4bf: 0x2f
kfffde[4].xptr.au:                    3 ; 0x4c0: 0x00000003
kfffde[4].xptr.disk:                  3 ; 0x4c4: 0x0003
kfffde[4].xptr.flags:                 0 ; 0x4c6: L=0 E=0 D=0 S=0
kfffde[4].xptr.chk:                  42 ; 0x4c7: 0x2a
kfffde[5].xptr.au:                    5 ; 0x4c8: 0x00000005
kfffde[5].xptr.disk:                  0 ; 0x4cc: 0x0000
kfffde[5].xptr.flags:                 0 ; 0x4ce: L=0 E=0 D=0 S=0
kfffde[5].xptr.chk:                  47 ; 0x4cf: 0x2f

From kfffde[0].xptr.au=2 and kfffde[0].xptr.disk=3 we know the first ACD extent is AU 2 on disk 3, with two mirror copies stored at AU 4 on disk 0 and AU 4 on disk 2. This matches the result of querying the view exactly.

SQL> select x.xnum_kffxp "virtual extent",pxn_kffxp "physical extent",x.au_kffxp "au",x.disk_kffxp "disk #",d.name "disk name"
  2  from x$kffxp x, v$asm_disk_stat d
  3  where x.group_kffxp=d.group_number
  4  and x.disk_kffxp=d.disk_number
  5  and x.group_kffxp=3
  6  and x.number_kffxp=3
  7  order by 1, 2,3;

virtual extent physical extent         au     disk # disk name
-------------- --------------- ---------- ---------- ------------------------------------------------------------
             0               0          2          3 DATADG_0000
             0               1          4          0 DATADG_0001
             0               2          4          2 DATADG_0002
.....
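As a sketch, the flat list of kfffde extent pointers from the kfed output regroups into 3-way-mirrored virtual extents, matching the query result above (the (au, disk) pairs are copied from kfffde[0..5]):

```python
# (au, disk) pairs copied from kfffde[0..5] in the kfed dump above
xptrs = [(2, 3), (4, 0), (4, 2), (4, 1), (3, 3), (5, 0)]
mirrors = 3  # physical extents per virtual extent in this normal-redundancy group

# consecutive groups of 3 physical extents form one virtual extent
virtual_extents = [xptrs[i:i + mirrors] for i in range(0, len(xptrs), mirrors)]
for vx, copies in enumerate(virtual_extents):
    print(vx, copies)
# 0 [(2, 3), (4, 0), (4, 2)]   -> matches virtual extent 0 in the query
# 1 [(4, 1), (3, 3), (5, 0)]
```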

Inspecting the ASM Active Change Directory with kfed
Next we use kfed to examine the ACD itself. The previous query showed that the ACD starts at AU 2 of disk DATADG_0000 (/dev/raw/raw10); there are 2 ASM instances, 84 virtual extents in total, and each one is triple mirrored. So the question is: does ASM instance 1's ACD occupy virtual extents 0-41 while instance 2's occupies virtual extents 42-83? Let's verify this below.

[grid@jyrac1 ~]$ kfed read /dev/raw/raw10 aun=2 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            7 ; 0x002: KFBTYP_ACDC -- ACDC, i.e. the Active Change Directory Checkpoint
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0 -- the corresponding block number
kfbh.block.obj:                       3 ; 0x008: file=3
kfbh.check:                  1111751467 ; 0x00c: 0x4243fb2b
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
-- kfracdc: this section holds the active change directory checkpoint information
kfracdc.eyec[0]:                     65 ; 0x000: 0x41
kfracdc.eyec[1]:                     67 ; 0x001: 0x43
kfracdc.eyec[2]:                     68 ; 0x002: 0x44
kfracdc.eyec[3]:                     67 ; 0x003: 0x43
kfracdc.thread:                       1 ; 0x004: 0x00000001 -- thread 1, i.e. this ACD thread belongs to ASM instance 1
kfracdc.lastAba.seq:         4294967295 ; 0x008: 0xffffffff -- last ACD block address sequence number
kfracdc.lastAba.blk:         4294967295 ; 0x00c: 0xffffffff -- last ACD block address block number
kfracdc.blk0:                         1 ; 0x010: 0x00000001
kfracdc.blks:                     10751 ; 0x014: 0x000029ff -- total blocks of ACD data (metadata plus data); 10751*4096/1024/1024 ≈ 42MB
kfracdc.ckpt.seq:                     3 ; 0x018: 0x00000003 -- current checkpoint sequence number
kfracdc.ckpt.blk:                   598 ; 0x01c: 0x00000256 -- blocks used by the checkpoint information
kfracdc.fcn.base:                  4288 ; 0x020: 0x000010c0
kfracdc.fcn.wrap:                     0 ; 0x024: 0x00000000
kfracdc.bufBlks:                    256 ; 0x028: 0x00000100 -- total buffer blocks
kfracdc.strt112.seq:                  2 ; 0x02c: 0x00000002
kfracdc.strt112.blk:                  0 ; 0x030: 0x00000000
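The 42MB figure in the kfracdc.blks annotation can be verified with simple arithmetic. Note that counting the checkpoint block itself (kfracdc.blk0=1 shows the data blocks start at block 1) makes the thread exactly 42MB, a reading consistent with instance 2's copy starting at blk=10752 below:

```python
blks = 10751        # kfracdc.blks: ACD data blocks for this thread
block_size = 4096   # ACD metadata block size in bytes

# +1 for block 0, the checkpoint block, so each thread spans 10752 blocks
acd_bytes = (blks + 1) * block_size
print(acd_bytes / (1024 * 1024))  # 42.0
```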

kfracdc.thread=1 above confirms that the ACD copy stored in AU 2 of disk 3, DATADG_0000 (/dev/raw/raw10), belongs to ASM instance 1. From the earlier query results, the ACD stored in virtual extents 42-83 should belong to ASM instance 2, and instance 2's first ACD extent, virtual extent 42, maps to AU 40 of disk 3, DATADG_0000 (/dev/raw/raw10) (AU 40 on disk 0 and disk 1 hold its mirror copies).

[grid@jyrac1 ~]$ kfed read /dev/raw/raw10 aun=40 blkn=0 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            7 ; 0x002: KFBTYP_ACDC -- ACDC, i.e. the Active Change Directory Checkpoint
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                   10752 ; 0x004: blk=10752 -- the corresponding block number, which lines up with kfracdc.blks=10751 seen in instance 1's ACD metadata (instance 1 uses blocks 0-10751)
kfbh.block.obj:                       3 ; 0x008: file=3
kfbh.check:                  1111751043 ; 0x00c: 0x4243f983
kfbh.fcn.base:                       77 ; 0x010: 0x0000004d
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfracdc.eyec[0]:                     65 ; 0x000: 0x41
kfracdc.eyec[1]:                     67 ; 0x001: 0x43
kfracdc.eyec[2]:                     68 ; 0x002: 0x44
kfracdc.eyec[3]:                     67 ; 0x003: 0x43
kfracdc.thread:                       2 ; 0x004: 0x00000002  -- thread 2, i.e. this ACD thread belongs to ASM instance 2
kfracdc.lastAba.seq:         4294967295 ; 0x008: 0xffffffff  -- last ACD block address sequence number
kfracdc.lastAba.blk:         4294967295 ; 0x00c: 0xffffffff  -- last ACD block address block number
kfracdc.blk0:                     10753 ; 0x010: 0x00002a01
kfracdc.blks:                     10751 ; 0x014: 0x000029ff  -- total blocks of ACD data (metadata plus data); 10751*4096/1024/1024 ≈ 42MB
kfracdc.ckpt.seq:                     3 ; 0x018: 0x00000003  -- current checkpoint sequence number
kfracdc.ckpt.blk:                   187 ; 0x01c: 0x000000bb  -- blocks used by the checkpoint information
kfracdc.fcn.base:                  4299 ; 0x020: 0x000010cb
kfracdc.fcn.wrap:                     0 ; 0x024: 0x00000000
kfracdc.bufBlks:                    256 ; 0x028: 0x00000100  -- total buffer blocks
kfracdc.strt112.seq:                  2 ; 0x02c: 0x00000002
kfracdc.strt112.blk:                  0 ; 0x030: 0x00000000

The above is the start of the ACD, i.e. block 0. Now let's look at block 1, which holds the actual ACD data.

[grid@jyrac1 ~]$ kfed read /dev/raw/raw10 aun=2 blkn=1 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            8 ; 0x002: KFBTYP_CHNGDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       1 ; 0x004: blk=1
kfbh.block.obj:                       3 ; 0x008: file=3
kfbh.check:                    17400326 ; 0x00c: 0x01098206
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfracdb.aba.seq:                      2 ; 0x000: 0x00000002 -- ACD block address sequence number
kfracdb.aba.blk:                      0 ; 0x004: 0x00000000 -- ACD block address block number
kfracdb.ents:                         2 ; 0x008: 0x0002 -- presumably the number of log entries in this block (the two lge entries below)
kfracdb.ub2spare:                     0 ; 0x00a: 0x0000
-- the following fields are the ACD redo log records
kfracdb.lge[0].valid:                 1 ; 0x00c: V=1 B=0 M=0
kfracdb.lge[0].chgCount:              1 ; 0x00d: 0x01
kfracdb.lge[0].len:                  64 ; 0x00e: 0x0040
kfracdb.lge[0].kfcn.base:             1 ; 0x010: 0x00000001
kfracdb.lge[0].kfcn.wrap:             0 ; 0x014: 0x00000000

-- the bcd fields below hold the ACD block change description
kfracdb.lge[0].bcd[0].kfbl.blk:       0 ; 0x018: blk=0
kfracdb.lge[0].bcd[0].kfbl.obj:       4 ; 0x01c: file=4
kfracdb.lge[0].bcd[0].kfcn.base:      0 ; 0x020: 0x00000000
kfracdb.lge[0].bcd[0].kfcn.wrap:      0 ; 0x024: 0x00000000
kfracdb.lge[0].bcd[0].oplen:          4 ; 0x028: 0x0004 -- operation length, similar to LEN in a logfile dump
kfracdb.lge[0].bcd[0].blkIndex:       0 ; 0x02a: 0x0000
kfracdb.lge[0].bcd[0].flags:         28 ; 0x02c: F=0 N=0 F=1 L=1 V=1 A=0 C=0
kfracdb.lge[0].bcd[0].opcode:       212 ; 0x02e: 0x00d4 -- opcode, analogous to the opcode numbers of update/delete/insert operations in a database instance
kfracdb.lge[0].bcd[0].kfbtyp:         9 ; 0x030: KFBTYP_COD_BGO  -- operation type, analogous to operation types such as update/delete/insert in a database instance
kfracdb.lge[0].bcd[0].redund:        19 ; 0x031: SCHE=0x1 NUMB=0x3 -- redundancy level: 17 is unprotected, 18 is mirror, 19 is high
kfracdb.lge[0].bcd[0].pad:        63903 ; 0x032: 0xf99f
kfracdb.lge[0].bcd[0].KFRCOD_CRASH:   1 ; 0x034: 0x00000001
kfracdb.lge[0].bcd[0].au[0]:         36 ; 0x038: 0x00000024
kfracdb.lge[0].bcd[0].au[1]:         35 ; 0x03c: 0x00000023
kfracdb.lge[0].bcd[0].au[2]:         35 ; 0x040: 0x00000023
kfracdb.lge[0].bcd[0].disks[0]:       2 ; 0x044: 0x0002
kfracdb.lge[0].bcd[0].disks[1]:       3 ; 0x046: 0x0003
kfracdb.lge[0].bcd[0].disks[2]:       1 ; 0x048: 0x0001
kfracdb.lge[1].valid:                 1 ; 0x04c: V=1 B=0 M=0 
kfracdb.lge[1].chgCount:              1 ; 0x04d: 0x01
kfracdb.lge[1].len:                  64 ; 0x04e: 0x0040
kfracdb.lge[1].kfcn.base:             2 ; 0x050: 0x00000002
kfracdb.lge[1].kfcn.wrap:             0 ; 0x054: 0x00000000
kfracdb.lge[1].bcd[0].kfbl.blk:       1 ; 0x058: blk=1
kfracdb.lge[1].bcd[0].kfbl.obj:       4 ; 0x05c: file=4
kfracdb.lge[1].bcd[0].kfcn.base:      0 ; 0x060: 0x00000000
kfracdb.lge[1].bcd[0].kfcn.wrap:      0 ; 0x064: 0x00000000
kfracdb.lge[1].bcd[0].oplen:          4 ; 0x068: 0x0004
kfracdb.lge[1].bcd[0].blkIndex:       1 ; 0x06a: 0x0001
kfracdb.lge[1].bcd[0].flags:         28 ; 0x06c: F=0 N=0 F=1 L=1 V=1 A=0 C=0
kfracdb.lge[1].bcd[0].opcode:       212 ; 0x06e: 0x00d4
kfracdb.lge[1].bcd[0].kfbtyp:        15 ; 0x070: KFBTYP_COD_RBO
kfracdb.lge[1].bcd[0].redund:        19 ; 0x071: SCHE=0x1 NUMB=0x3
kfracdb.lge[1].bcd[0].pad:        63903 ; 0x072: 0xf99f
kfracdb.lge[1].bcd[0].KFRCOD_CRASH:   0 ; 0x074: 0x00000000
kfracdb.lge[1].bcd[0].au[0]:         36 ; 0x078: 0x00000024
kfracdb.lge[1].bcd[0].au[1]:         35 ; 0x07c: 0x00000023
kfracdb.lge[1].bcd[0].au[2]:         35 ; 0x080: 0x00000023
kfracdb.lge[1].bcd[0].disks[0]:       2 ; 0x084: 0x0002
kfracdb.lge[1].bcd[0].disks[1]:       3 ; 0x086: 0x0003
kfracdb.lge[1].bcd[0].disks[2]:       1 ; 0x088: 0x0001

If a running ASM instance crashes, then after the instance restarts ASM can use the ACD information to perform instance recovery.

Summary:
The Active Change Directory, ASM metadata file 3, is known as the ACD for short, and each ASM instance's share occupies 42 AUs. Each ASM instance has its own copy of the ACD information; in other words, a two-node ASM RAC has 84MB of ACD data, and so on (in fact the ACD size per instance is fixed regardless of the AU size). The ACD in ASM is analogous to redo in a database instance: it records ASM metadata change records so that instance recovery can be performed after an ASM crash. Within the AUs holding the ACD, the first block is its metadata and the following blocks are the data. The ACD data is structured much like redo, also recording thread, sequence, len, opcode, and similar information.

Oracle ASM Disk Directory (http://www.jydba.net/index.php/archives/1995, Wed, 28 Dec 2016)

ASM file number 2 is the ASM Disk Directory, which tracks every disk in its disk group. Because each disk group is an independent storage unit, each disk group has its own disk directory. Every ASM disk group is self-describing; there are no informational dependencies between disk groups.

To ASM, the disk directory is just an ordinary ASM file. It has its own entry in the ASM File Directory; when the disk group uses redundancy the disk directory is mirrored, and like any other file it grows as needed.

Each disk directory entry maintains the following:
.Disk number
.Disk state
.Disk name (which may differ from the disk name shown by the operating system)
.Failgroup name
.Creation timestamp
.Failure timestamp
.Failure time (time elapsed since the failure timestamp)
.Resize target value
.Disk repair time
.Zone information

The V$ASM_DISK view
Most of the information maintained in the disk directory can be obtained by querying the v$asm_disk view. Every discovered disk is represented by a row in the view, including disks that do not belong to any disk group. ASM performs disk discovery every time v$asm_disk is queried, so querying this view has a cost.

The following example shows the output of querying the V$ASM_DISK view in an ASM instance.

SQL> select group_number, disk_number, state, name, mount_status from v$asm_disk order by 1,2;

GROUP_NUMBER DISK_NUMBER STATE                          NAME                           MOUNT_STATUS
------------ ----------- ------------------------------ ------------------------------ --------------
           0           0 NORMAL                                                        CLOSED
           0           1 NORMAL                                                        CLOSED
           0           2 NORMAL                                                        CLOSED
           0           3 NORMAL                                                        CLOSED
           0           4 NORMAL                                                        CLOSED
           0           5 NORMAL                                                        CLOSED
           1           0 NORMAL                         ARCHDG_0000                    CACHED
           1           1 NORMAL                         ARCHDG_0001                    CACHED
           2           0 NORMAL                         CRSDG_0000                     CACHED
           2           1 NORMAL                         CRSDG_0001                     CACHED
           3           0 NORMAL                         DATADG_0001                    CACHED
           3           1 NORMAL                         DATADG_0003                    CACHED
           3           2 NORMAL                         DATADG_0002                    CACHED
           3           3 NORMAL                         DATADG_0000                    CACHED

14 rows selected.

This returns every disk ASM has discovered, including disks that do not belong to any currently mounted disk group (GROUP_NUMBER=0).

The V$ASM_DISK_STAT view
V$ASM_DISK_STAT presents the same information as V$ASM_DISK, but querying it does not trigger disk discovery. Its data comes from the ASM instance's SGA, so queries against it are cheap; however, the results may not reflect the current state of the system's disks in real time. V$ASM_DISK_STAT only shows disks belonging to currently mounted disk groups, and in particular will not show disks newly added to the system.

The following query shows the output of querying the V$ASM_DISK_STAT view in an ASM instance.

SQL> select group_number, disk_number, state, name, mount_status from v$asm_disk_stat order by 1,2;

GROUP_NUMBER DISK_NUMBER STATE                          NAME                           MOUNT_STATUS
------------ ----------- ------------------------------ ------------------------------ --------------
           1           0 NORMAL                         ARCHDG_0000                    CACHED
           1           1 NORMAL                         ARCHDG_0001                    CACHED
           2           0 NORMAL                         CRSDG_0000                     CACHED
           2           1 NORMAL                         CRSDG_0001                     CACHED
           3           0 NORMAL                         DATADG_0001                    CACHED
           3           1 NORMAL                         DATADG_0003                    CACHED
           3           2 NORMAL                         DATADG_0002                    CACHED
           3           3 NORMAL                         DATADG_0000                    CACHED

8 rows selected.

Only the disks of mounted disk groups are shown.

Where the disk directory is stored
The AU layout of ASM file 2, the disk directory, can be found by querying the fixed table X$KFFXP in the ASM instance, and joining v$asm_disk_stat supplies the disk names. The query below shows the disk directory AU layout for disk group 3 (DATADG).

SQL> select group_number,disk_number,name,path,state from v$asm_disk where group_number=3 order by 1,2;

GROUP_NUMBER DISK_NUMBER NAME                           PATH                           STATE
------------ ----------- ------------------------------ ------------------------------ ------------------------------
           3           0 DATADG_0001                    /dev/raw/raw11                 NORMAL
           3           1 DATADG_0003                    /dev/raw/raw4                  NORMAL
           3           2 DATADG_0002                    /dev/raw/raw3                  NORMAL
           3           3 DATADG_0000                    /dev/raw/raw10                 NORMAL



SQL> select x.xnum_kffxp "virtual extent",pxn_kffxp "physical extent",x.au_kffxp "au",x.disk_kffxp "disk #",d.name "disk name"
  2  from x$kffxp x, v$asm_disk_stat d
  3  where x.group_kffxp=d.group_number
  4  and x.disk_kffxp=d.disk_number
  5  and x.group_kffxp=3
  6  and x.number_kffxp=2
  7  order by 1, 2;

virtual extent physical extent         au     disk # disk name
-------------- --------------- ---------- ---------- ------------------------------------------------------------
             0               0          3          2 DATADG_0002
             0               1          3          0 DATADG_0001
             0               2          3          1 DATADG_0003

The result shows that the ASM disk directory is triple mirrored; its current size is 3 physical extents (3 AUs in this example). Note again: even in a normal-redundancy disk group, the ASM disk directory keeps three mirrors. Let's use the kfed tool to look at the actual contents of the disk directory. Since the data in the three AUs is identical, we only need to read the first one, here AU 3 of DATADG_0001 (/dev/raw/raw11):

[grid@jyrac1 ~]$ kfed read /dev/raw/raw11 aun=3 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            6 ; 0x002: KFBTYP_DISKDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       2 ; 0x008: file=2
kfbh.check:                    17204021 ; 0x00c: 0x01068335
kfbh.fcn.base:                      311 ; 0x010: 0x00000137
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kffdnd.bnode.incarn:                  1 ; 0x000: A=1 NUMM=0x0
kffdnd.bnode.frlist.number:  4294967295 ; 0x004: 0xffffffff
kffdnd.bnode.frlist.incarn:           0 ; 0x008: A=0 NUMM=0x0
kffdnd.overfl.number:        4294967295 ; 0x00c: 0xffffffff
kffdnd.overfl.incarn:                 0 ; 0x010: A=0 NUMM=0x0
kffdnd.parent.number:                 0 ; 0x014: 0x00000000
kffdnd.parent.incarn:                 1 ; 0x018: A=1 NUMM=0x0
kffdnd.fstblk.number:                 0 ; 0x01c: 0x00000000
kffdnd.fstblk.incarn:                 1 ; 0x020: A=1 NUMM=0x0
kfddde[0].entry.incarn:               1 ; 0x024: A=1 NUMM=0x0
kfddde[0].entry.hash:                 0 ; 0x028: 0x00000000
kfddde[0].entry.refer.number:4294967295 ; 0x02c: 0xffffffff
kfddde[0].entry.refer.incarn:         0 ; 0x030: A=0 NUMM=0x0
kfddde[0].dsknum:                     0 ; 0x034: 0x0000  -- the disk's number within the disk group, starting at 0; 0 means this is the first disk in the group
kfddde[0].state:                      2 ; 0x036: KFDSTA_NORMAL -- disk state; 2 means normal. In ASM this value corresponds to v$asm_disk.state, with the following main values:
UNKNOWN  --- the disk is not recognized by the disk group, usually because the disk group is not mounted.
NORMAL   --- the disk is online and operating normally.
ADDING   --- the disk is being added to the disk group; adding a disk involves a series of operations, including updates to the PST, FST and disk directory as well as rebalance.
DROPPING --- the disk is being dropped from a disk group; essentially the reverse of the adding process.
HUNG     --- during a disk drop, the rebalance could not complete because the disk group ran out of space, leaving the disk in a hung state.
FORCING  --- the disk has been removed from the disk group but its data has not yet been offloaded; most likely a force drop.
DROPPED  --- the disk has been removed from the disk group and all related operations have completed.
#define KFDSTA_INVALID  ((kfdsta)0)  /* Illegal value */
#define KFDSTA_UNKNOWN  ((kfdsta)1)  /* ASM disk state not known */
#define KFDSTA_NORMAL   ((kfdsta)2)  /* Happy disk */
#define KFDSTA_UNUSED   ((kfdsta)3)  /* Unused State - Open */
#define KFDSTA_DROPPING ((kfdsta)4)  /* Disk being dropped from group */
#define KFDSTA_HUNG     ((kfdsta)5)  /* Disk drop operation hung */
#define KFDSTA_FORCING  ((kfdsta)6)  /* Disk beinng drop forced */
#define KFDSTA_DROPPED  ((kfdsta)7)  /* Disk no longer part of group */
#define KFDSTA_ADDING   ((kfdsta)8)  /* Disk being globally validated */

kfddde[0].ddchgfl:                  132 ; 0x037: 0x84
kfddde[0].dskname:          DATADG_0001 ; 0x038: length=11 -- the disk name as defined in ASM
kfddde[0].fgname:           DATADG_0001 ; 0x058: length=11 -- the failgroup name
kfddde[0].crestmp.hi:          33042831 ; 0x078: HOUR=0xf DAYS=0xc MNTH=0xc YEAR=0x7e0
kfddde[0].crestmp.lo:        2456905728 ; 0x07c: USEC=0x0 MSEC=0x5a SECS=0x27 MINS=0x24
kfddde[0].failstmp.hi:                0 ; 0x080: HOUR=0x0 DAYS=0x0 MNTH=0x0 YEAR=0x0
kfddde[0].failstmp.lo:                0 ; 0x084: USEC=0x0 MSEC=0x0 SECS=0x0 MINS=0x0
kfddde[0].timer:                      0 ; 0x088: 0x00000000
kfddde[0].size:                    5120 ; 0x08c: 0x00001400 -- disk size in AUs; with the default 1MB AU this is 5120MB
kfddde[0].srRloc.super.hiStart:       0 ; 0x090: 0x00000000
kfddde[0].srRloc.super.loStart:       0 ; 0x094: 0x00000000
kfddde[0].srRloc.super.length:        0 ; 0x098: 0x00000000
kfddde[0].srRloc.incarn:              0 ; 0x09c: 0x00000000
kfddde[0].dskrprtm:                   0 ; 0x0a0: 0x00000000
kfddde[0].start0:                     0 ; 0x0a4: 0x00000000
kfddde[0].size0:                   5120 ; 0x0a8: 0x00001400
kfddde[0].used0:                     76 ; 0x0ac: 0x0000004c
kfddde[0].slot:                       0 ; 0x0b0: 0x00000000
.....
kfddde[1].entry.incarn:               1 ; 0x1e4: A=1 NUMM=0x0
kfddde[1].entry.hash:                 1 ; 0x1e8: 0x00000001
kfddde[1].entry.refer.number:4294967295 ; 0x1ec: 0xffffffff
kfddde[1].entry.refer.incarn:         0 ; 0x1f0: A=0 NUMM=0x0
kfddde[1].dsknum:                     1 ; 0x1f4: 0x0001
kfddde[1].state:                      2 ; 0x1f6: KFDSTA_NORMAL
kfddde[1].ddchgfl:                  132 ; 0x1f7: 0x84
kfddde[1].dskname:          DATADG_0003 ; 0x1f8: length=11
kfddde[1].fgname:           DATADG_0003 ; 0x218: length=11
kfddde[1].crestmp.hi:          33042831 ; 0x238: HOUR=0xf DAYS=0xc MNTH=0xc YEAR=0x7e0
kfddde[1].crestmp.lo:        2456905728 ; 0x23c: USEC=0x0 MSEC=0x5a SECS=0x27 MINS=0x24
kfddde[1].failstmp.hi:                0 ; 0x240: HOUR=0x0 DAYS=0x0 MNTH=0x0 YEAR=0x0
kfddde[1].failstmp.lo:                0 ; 0x244: USEC=0x0 MSEC=0x0 SECS=0x0 MINS=0x0
kfddde[1].timer:                      0 ; 0x248: 0x00000000
kfddde[1].size:                    5120 ; 0x24c: 0x00001400
kfddde[1].srRloc.super.hiStart:       0 ; 0x250: 0x00000000
kfddde[1].srRloc.super.loStart:       0 ; 0x254: 0x00000000
kfddde[1].srRloc.super.length:        0 ; 0x258: 0x00000000
kfddde[1].srRloc.incarn:              0 ; 0x25c: 0x00000000
kfddde[1].dskrprtm:                   0 ; 0x260: 0x00000000
kfddde[1].start0:                     0 ; 0x264: 0x00000000
kfddde[1].size0:                   5120 ; 0x268: 0x00001400
kfddde[1].used0:                     76 ; 0x26c: 0x0000004c
kfddde[1].slot:                       0 ; 0x270: 0x00000000
....
kfddde[2].entry.incarn:               1 ; 0x3a4: A=1 NUMM=0x0
kfddde[2].entry.hash:                 2 ; 0x3a8: 0x00000002
kfddde[2].entry.refer.number:4294967295 ; 0x3ac: 0xffffffff
kfddde[2].entry.refer.incarn:         0 ; 0x3b0: A=0 NUMM=0x0
kfddde[2].dsknum:                     2 ; 0x3b4: 0x0002
kfddde[2].state:                      2 ; 0x3b6: KFDSTA_NORMAL
kfddde[2].ddchgfl:                  132 ; 0x3b7: 0x84
kfddde[2].dskname:          DATADG_0002 ; 0x3b8: length=11
kfddde[2].fgname:           DATADG_0002 ; 0x3d8: length=11
kfddde[2].crestmp.hi:          33042831 ; 0x3f8: HOUR=0xf DAYS=0xc MNTH=0xc YEAR=0x7e0
kfddde[2].crestmp.lo:        2456905728 ; 0x3fc: USEC=0x0 MSEC=0x5a SECS=0x27 MINS=0x24
kfddde[2].failstmp.hi:                0 ; 0x400: HOUR=0x0 DAYS=0x0 MNTH=0x0 YEAR=0x0
kfddde[2].failstmp.lo:                0 ; 0x404: USEC=0x0 MSEC=0x0 SECS=0x0 MINS=0x0
kfddde[2].timer:                      0 ; 0x408: 0x00000000
kfddde[2].size:                    5120 ; 0x40c: 0x00001400
kfddde[2].srRloc.super.hiStart:       0 ; 0x410: 0x00000000
kfddde[2].srRloc.super.loStart:       0 ; 0x414: 0x00000000
kfddde[2].srRloc.super.length:        0 ; 0x418: 0x00000000
kfddde[2].srRloc.incarn:              0 ; 0x41c: 0x00000000
kfddde[2].dskrprtm:                   0 ; 0x420: 0x00000000
kfddde[2].start0:                     0 ; 0x424: 0x00000000
kfddde[2].size0:                   5120 ; 0x428: 0x00001400
kfddde[2].used0:                     77 ; 0x42c: 0x0000004d
kfddde[2].slot:                       0 ; 0x430: 0x00000000
...
kfddde[3].entry.incarn:               1 ; 0x564: A=1 NUMM=0x0
kfddde[3].entry.hash:                 3 ; 0x568: 0x00000003
kfddde[3].entry.refer.number:4294967295 ; 0x56c: 0xffffffff
kfddde[3].entry.refer.incarn:         0 ; 0x570: A=0 NUMM=0x0
kfddde[3].dsknum:                     3 ; 0x574: 0x0003
kfddde[3].state:                      2 ; 0x576: KFDSTA_NORMAL
kfddde[3].ddchgfl:                  132 ; 0x577: 0x84
kfddde[3].dskname:          DATADG_0000 ; 0x578: length=11
kfddde[3].fgname:           DATADG_0000 ; 0x598: length=11
kfddde[3].crestmp.hi:          33042831 ; 0x5b8: HOUR=0xf DAYS=0xc MNTH=0xc YEAR=0x7e0
kfddde[3].crestmp.lo:        2456905728 ; 0x5bc: USEC=0x0 MSEC=0x5a SECS=0x27 MINS=0x24
kfddde[3].failstmp.hi:                0 ; 0x5c0: HOUR=0x0 DAYS=0x0 MNTH=0x0 YEAR=0x0
kfddde[3].failstmp.lo:                0 ; 0x5c4: USEC=0x0 MSEC=0x0 SECS=0x0 MINS=0x0
kfddde[3].timer:                      0 ; 0x5c8: 0x00000000
kfddde[3].size:                    5120 ; 0x5cc: 0x00001400
kfddde[3].srRloc.super.hiStart:       0 ; 0x5d0: 0x00000000
kfddde[3].srRloc.super.loStart:       0 ; 0x5d4: 0x00000000
kfddde[3].srRloc.super.length:        0 ; 0x5d8: 0x00000000
kfddde[3].srRloc.incarn:              0 ; 0x5dc: 0x00000000
kfddde[3].dskrprtm:                   0 ; 0x5e0: 0x00000000
kfddde[3].start0:                     0 ; 0x5e4: 0x00000000
kfddde[3].size0:                   5120 ; 0x5e8: 0x00001400
kfddde[3].used0:                     76 ; 0x5ec: 0x0000004c
kfddde[3].slot:                       0 ; 0x5f0: 0x00000000
....

The kfbh.type value of KFBTYP_DISKDIR in the output identifies this as a disk directory block. Information about each ASM disk in the group is stored in the kfddde entries shown above: kfddde[0] describes disk 0, kfddde[1] describes disk 1, and so on.

In this way we can see the information for every disk in the disk group. As you can see, most of it is also available through the V$ASM_DISK view, with no need for a tool like kfed.

Locating the disk directory with kfed
The AU distribution of the disk directory, obtained by query, is as follows:

SQL> select group_number,disk_number,name,path,state from v$asm_disk where group_number=3 order by 1,2;

GROUP_NUMBER DISK_NUMBER NAME                           PATH                           STATE
------------ ----------- ------------------------------ ------------------------------ ------------------------------
           3           0 DATADG_0001                    /dev/raw/raw11                 NORMAL
           3           1 DATADG_0003                    /dev/raw/raw4                  NORMAL
           3           2 DATADG_0002                    /dev/raw/raw3                  NORMAL
           3           3 DATADG_0000                    /dev/raw/raw10                 NORMAL



SQL> select x.xnum_kffxp "virtual extent",pxn_kffxp "physical extent",x.au_kffxp "au",x.disk_kffxp "disk #",d.name "disk name"
  2  from x$kffxp x, v$asm_disk_stat d
  3  where x.group_kffxp=d.group_number
  4  and x.disk_kffxp=d.disk_number
  5  and x.group_kffxp=3
  6  and x.number_kffxp=2
  7  order by 1, 2;

virtual extent physical extent         au     disk # disk name
-------------- --------------- ---------- ---------- ------------------------------------------------------------
             0               0          3          2 DATADG_0002
             0               1          3          0 DATADG_0001
             0               2          3          1 DATADG_0003

The result above shows that the ASM disk directory is triple mirrored, and that the current disk directory occupies 3 physical extents (3 AUs in this example). To emphasize again: even in a normal redundancy disk group, the ASM disk directory is triple mirrored.

Reading the disk header

[grid@jyrac1 ~]$ kfed read /dev/raw/raw3  | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483650 ; 0x008: disk=2
kfbh.check:                  3693686872 ; 0x00c: 0xdc293058
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:         ORCLDISK ; 0x000: length=8
kfdhdb.driver.reserved[0]:            0 ; 0x008: 0x00000000
kfdhdb.driver.reserved[1]:            0 ; 0x00c: 0x00000000
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                186646528 ; 0x020: 0x0b200000
kfdhdb.dsknum:                        2 ; 0x024: 0x0002
kfdhdb.grptyp:                        2 ; 0x026: KFDGTP_NORMAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:             DATADG_0002 ; 0x028: length=11
kfdhdb.grpname:                  DATADG ; 0x048: length=6
kfdhdb.fgname:              DATADG_0002 ; 0x068: length=11
kfdhdb.capname:                         ; 0x088: length=0
kfdhdb.crestmp.hi:             33042831 ; 0x0a8: HOUR=0xf DAYS=0xc MNTH=0xc YEAR=0x7e0
kfdhdb.crestmp.lo:           2456905728 ; 0x0ac: USEC=0x0 MSEC=0x5a SECS=0x27 MINS=0x24
kfdhdb.mntstmp.hi:             33042897 ; 0x0b0: HOUR=0x11 DAYS=0xe MNTH=0xc YEAR=0x7e0
kfdhdb.mntstmp.lo:            144833536 ; 0x0b4: USEC=0x0 MSEC=0x7f SECS=0xa MINS=0x2
kfdhdb.secsize:                     512 ; 0x0b8: 0x0200
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.mfact:                    113792 ; 0x0c0: 0x0001bc80
kfdhdb.dsksize:                    5120 ; 0x0c4: 0x00001400
kfdhdb.pmcnt:                         2 ; 0x0c8: 0x00000002
kfdhdb.fstlocn:                       1 ; 0x0cc: 0x00000001
kfdhdb.altlocn:                       2 ; 0x0d0: 0x00000002 --allocate table
kfdhdb.f1b1locn:                      2 ; 0x0d4: 0x00000002 --file directory
kfdhdb.redomirrors[0]:                0 ; 0x0d8: 0x0000
kfdhdb.redomirrors[1]:                0 ; 0x0da: 0x0000
kfdhdb.redomirrors[2]:                0 ; 0x0dc: 0x0000
kfdhdb.redomirrors[3]:                0 ; 0x0de: 0x0000
kfdhdb.dbcompat:              168820736 ; 0x0e0: 0x0a100000
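The kfdhdb.crestmp and kfdhdb.mntstmp pairs in the dump above pack a timestamp into bit fields, as kfed's own annotations (HOUR=0xf DAYS=0xc MNTH=0xc YEAR=0x7e0, and so on) suggest. The following Python sketch decodes them under the bit layout inferred from those annotations; this layout is an inference from the dump, not Oracle's documented format.

```python
from datetime import datetime

def decode_kfed_timestamp(hi, lo):
    """Decode an ASM timestamp from kfed's .hi/.lo value pair.

    Bit layout inferred from kfed's field annotations:
      hi = year << 14 | month << 10 | day << 5 | hour
      lo = mins << 26 | secs << 20 | msec << 10 | usec
    """
    hour  = hi & 0x1F
    day   = (hi >> 5) & 0x1F
    month = (hi >> 10) & 0xF
    year  = hi >> 14
    usec  = lo & 0x3FF          # microseconds within the millisecond
    msec  = (lo >> 10) & 0x3FF
    secs  = (lo >> 20) & 0x3F
    mins  = lo >> 26
    return datetime(year, month, day, hour, mins, secs, msec * 1000 + usec)

# kfdhdb.crestmp from the dump above:
print(decode_kfed_timestamp(33042831, 2456905728))  # → 2016-12-12 15:36:39.090000
```

Decoding kfdhdb.mntstmp (33042897, 144833536) the same way yields 2016-12-14 17:02:10.127, matching kfed's HOUR=0x11 DAYS=0xe MINS=0x2 SECS=0xa MSEC=0x7f annotations.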

The allocation table metadata is in AU 2 (kfdhdb.altlocn), and the disk directory information must then also be reachable through that AU, because reading the allocation table requires first reading the disk directory. The file directory is also in AU 2 (kfdhdb.f1b1locn).

[grid@jyrac1 ~]$ kfed read /dev/raw/raw11  | grep kfdhdb.f1b1locn
kfdhdb.f1b1locn:                      2 ; 0x0d4: 0x00000002

File 1 always begins at AU 2 of disk 0; remember this location: disk 0, AU 2. It is the starting point for locating any file in ASM, and its role is somewhat like a boot sector on a disk, which is responsible for bringing up the OS after power-on. File 1 occupies at least two AUs. Within file 1, each file gets one metadata block that holds that file's extent layout. Each metadata block is 4K and an AU is 1M, so each AU can hold the layout of 256 files. In AU 2 of disk 0, everything is metadata-file information. More precisely: the first metadata block of that AU is reserved for the system; starting from the second block up to block 255, 255 metadata blocks in all, correspond to files 1 through 255, which is in fact all of the metadata files. In other words, AU 2 of disk 0 holds the extent layout of every metadata file. The second AU of file 1 holds file 256 in its first block, file 257 in its second block, and so on. Every time Oracle reads data from ASM, it first reads file 1 to find where the target file lives on disk, and then reads that file's data.
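The block arithmetic just described can be sketched in a few lines of Python. A minimal sketch, assuming the 4 KB metadata block and 1 MB AU sizes from the dump above:

```python
AU_SIZE = 1024 * 1024          # 1 MB allocation unit (kfdhdb.ausize)
METADATA_BLOCK_SIZE = 4096     # 4 KB ASM metadata block (kfdhdb.blksize)
ENTRIES_PER_AU = AU_SIZE // METADATA_BLOCK_SIZE  # 256 directory entries per AU

def file_directory_slot(file_number):
    """Return (virtual_extent, block_in_au) of the file directory (file 1)
    entry describing `file_number`.  Block N of file 1 describes file N,
    so the entry sits in extent N // 256, block N % 256."""
    return divmod(file_number, ENTRIES_PER_AU)

# The disk directory is file 2, so its entry lives in extent 0, block 2:
print(file_directory_slot(2))    # → (0, 2)
# A database file such as 263 lands in extent 1, block 7 (263 - 256):
print(file_directory_slot(263))  # → (1, 7)
```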

Since the disk directory is file number 2, we read block 2 of AU 2:

[grid@jyrac1 ~]$ kfed read /dev/raw/raw11 aun=2 blkn=2 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            4 ; 0x002: KFBTYP_FILEDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       2 ; 0x004: blk=2
kfbh.block.obj:                       1 ; 0x008: file=1
kfbh.check:                   305881854 ; 0x00c: 0x123b62fe
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfffdb.node.incarn:                   1 ; 0x000: A=1 NUMM=0x0
kfffdb.node.frlist.number:   4294967295 ; 0x004: 0xffffffff
kfffdb.node.frlist.incarn:            0 ; 0x008: A=0 NUMM=0x0
kfffdb.hibytes:                       0 ; 0x00c: 0x00000000
kfffdb.lobytes:                 1048576 ; 0x010: 0x00100000
kfffdb.xtntcnt:                       3 ; 0x014: 0x00000003
kfffdb.xtnteof:                       3 ; 0x018: 0x00000003
kfffdb.blkSize:                    4096 ; 0x01c: 0x00001000
kfffdb.flags:                         1 ; 0x020: O=1 S=0 S=0 D=0 C=0 I=0 R=0 A=0
kfffdb.fileType:                     15 ; 0x021: 0x0f
kfffdb.dXrs:                         19 ; 0x022: SCHE=0x1 NUMB=0x3
kfffdb.iXrs:                         19 ; 0x023: SCHE=0x1 NUMB=0x3
kfffdb.dXsiz[0]:             4294967295 ; 0x024: 0xffffffff
kfffdb.dXsiz[1]:                      0 ; 0x028: 0x00000000
kfffdb.dXsiz[2]:                      0 ; 0x02c: 0x00000000
kfffdb.iXsiz[0]:             4294967295 ; 0x030: 0xffffffff
kfffdb.iXsiz[1]:                      0 ; 0x034: 0x00000000
kfffdb.iXsiz[2]:                      0 ; 0x038: 0x00000000
kfffdb.xtntblk:                       3 ; 0x03c: 0x0003
kfffdb.break:                        60 ; 0x03e: 0x003c
kfffdb.priZn:                         0 ; 0x040: KFDZN_COLD
kfffdb.secZn:                         0 ; 0x041: KFDZN_COLD
kfffdb.ub2spare:                      0 ; 0x042: 0x0000
kfffdb.alias[0]:             4294967295 ; 0x044: 0xffffffff
kfffdb.alias[1]:             4294967295 ; 0x048: 0xffffffff
kfffdb.strpwdth:                      0 ; 0x04c: 0x00
kfffdb.strpsz:                        0 ; 0x04d: 0x00
kfffdb.usmsz:                         0 ; 0x04e: 0x0000
kfffdb.crets.hi:               33042831 ; 0x050: HOUR=0xf DAYS=0xc MNTH=0xc YEAR=0x7e0
kfffdb.crets.lo:             2457465856 ; 0x054: USEC=0x0 MSEC=0x27d SECS=0x27 MINS=0x24
kfffdb.modts.hi:               33042831 ; 0x058: HOUR=0xf DAYS=0xc MNTH=0xc YEAR=0x7e0
kfffdb.modts.lo:             2457465856 ; 0x05c: USEC=0x0 MSEC=0x27d SECS=0x27 MINS=0x24
kfffdb.dasz[0]:                       0 ; 0x060: 0x00
kfffdb.dasz[1]:                       0 ; 0x061: 0x00
kfffdb.dasz[2]:                       0 ; 0x062: 0x00
kfffdb.dasz[3]:                       0 ; 0x063: 0x00
kfffdb.permissn:                      0 ; 0x064: 0x00
kfffdb.ub1spar1:                      0 ; 0x065: 0x00
kfffdb.ub2spar2:                      0 ; 0x066: 0x0000
kfffdb.user.entnum:                   0 ; 0x068: 0x0000
kfffdb.user.entinc:                   0 ; 0x06a: 0x0000
kfffdb.group.entnum:                  0 ; 0x06c: 0x0000
kfffdb.group.entinc:                  0 ; 0x06e: 0x0000
kfffdb.spare[0]:                      0 ; 0x070: 0x00000000
kfffdb.spare[1]:                      0 ; 0x074: 0x00000000
kfffdb.spare[2]:                      0 ; 0x078: 0x00000000
kfffdb.spare[3]:                      0 ; 0x07c: 0x00000000
kfffdb.spare[4]:                      0 ; 0x080: 0x00000000
kfffdb.spare[5]:                      0 ; 0x084: 0x00000000
kfffdb.spare[6]:                      0 ; 0x088: 0x00000000
kfffdb.spare[7]:                      0 ; 0x08c: 0x00000000
kfffdb.spare[8]:                      0 ; 0x090: 0x00000000
kfffdb.spare[9]:                      0 ; 0x094: 0x00000000
kfffdb.spare[10]:                     0 ; 0x098: 0x00000000
kfffdb.spare[11]:                     0 ; 0x09c: 0x00000000
kfffdb.usm:                             ; 0x0a0: length=0
kfffde[0].xptr.au:                    3 ; 0x4a0: 0x00000003
kfffde[0].xptr.disk:                  2 ; 0x4a4: 0x0002
kfffde[0].xptr.flags:                 0 ; 0x4a6: L=0 E=0 D=0 S=0
kfffde[0].xptr.chk:                  43 ; 0x4a7: 0x2b
kfffde[1].xptr.au:                    3 ; 0x4a8: 0x00000003
kfffde[1].xptr.disk:                  0 ; 0x4ac: 0x0000
kfffde[1].xptr.flags:                 0 ; 0x4ae: L=0 E=0 D=0 S=0
kfffde[1].xptr.chk:                  41 ; 0x4af: 0x29
kfffde[2].xptr.au:                    3 ; 0x4b0: 0x00000003
kfffde[2].xptr.disk:                  1 ; 0x4b4: 0x0001
kfffde[2].xptr.flags:                 0 ; 0x4b6: L=0 E=0 D=0 S=0
kfffde[2].xptr.chk:                  40 ; 0x4b7: 0x28
kfffde[3].xptr.au:           4294967295 ; 0x4b8: 0xffffffff
kfffde[3].xptr.disk:              65535 ; 0x4bc: 0xffff
kfffde[3].xptr.flags:                 0 ; 0x4be: L=0 E=0 D=0 S=0
kfffde[3].xptr.chk:                  42 ; 0x4bf: 0x2a
kfffde[4].xptr.au:           4294967295 ; 0x4c0: 0xffffffff
kfffde[4].xptr.disk:              65535 ; 0x4c4: 0xffff
kfffde[4].xptr.flags:                 0 ; 0x4c6: L=0 E=0 D=0 S=0
kfffde[4].xptr.chk:                  42 ; 0x4c7: 0x2a
kfffde[5].xptr.au:           4294967295 ; 0x4c8: 0xffffffff

kfffde is an array of structures. Since the disk directory here is triple mirrored, kfffde[0] holds the location of the first AU of file 2, kfffde[1] the location of its second AU, kfffde[2] the third, and so on. From the values above:
kfffde[0].xptr.au=3 -- AU 3
kfffde[0].xptr.disk=2 -- disk 2
Taken together: disk 2, AU 3 is the location of the first AU of file 2 (the disk directory).
kfffde[1].xptr.au=3 -- AU 3
kfffde[1].xptr.disk=0 -- disk 0
kfffde[2].xptr.au=3 -- AU 3
kfffde[2].xptr.disk=1 -- disk 1
These entries show that the mirror copies of file 2 are stored at AU 3 of disk 0 and AU 3 of disk 1, consistent with the metadata distribution obtained by querying the x$kffxp view.

]]>
http://www.jydba.net/index.php/archives/1995/feed 0
Oracle ASM File Directory http://www.jydba.net/index.php/archives/1988 http://www.jydba.net/index.php/archives/1988#respond Tue, 27 Dec 2016 13:31:07 +0000 http://www.jydba.net/?p=1988 Virtual Metadata
ASM virtual metadata is stored in ASM files. The metadata directories are accessed exclusively by the ASM instance; their file numbers count up from 1. The registries are reserved ASM files that may be accessed by RDBMS instances as well as the ASM instance; their numbers count down from 255. File numbers in between are reserved for future use. V$ASM_FILE does not display the metadata directories or registries.

Like other ASM files, virtual metadata files are mirrored according to the disk group's redundancy type; ASM provides no mirroring in external redundancy disk groups. Virtual metadata is triple mirrored in both normal and high redundancy disk groups. Virtual metadata comprises the following structures:

.File Directory
.Disk Directory
.Active Change Directory(ACD)
.Continuing Operations Directory(COD)
.Template Directory
.Alias Directory
.Attribute Directory
.Staleness Directory
.Staleness Registry

ASM file 1, the ASM file directory, tracks all files in a disk group. Since each disk group is an independent storage unit, every disk group contains its own ASM file directory.

If a file is deleted, ASM may reuse its file number for a newly created file; the file number then stays unique in combination with a different incarnation number. The incarnation number is derived from the time the file is created, which guarantees it is unique for a given file number.

The ASM file block size is independent of the ASM metadata block size. All ASM metadata directories use a 4K block size, while RDBMS data files use the block size specified at tablespace creation (2K, 4K, 8K, 16K or 32K) and RDBMS redo log files typically use 512-byte blocks. ASM tracks both the logical file size as seen by the database and the physical space occupied in the disk group, which accounts for the file's redundancy. The RDBMS also supplies the file type at creation time. For files not created explicitly by an RDBMS instance (for example, files created by the XML DB ftp command or the ASMCMD cp command), ASM examines the file header at creation time to determine the file type.

The file creation time has obvious semantics. The file modification time is looser: it is not updated on every write, but when the file is opened for writing. This means the modification time can change even if the file is never actually written to, and the stored modification time can be earlier than the time the file content was last changed. To reduce contention on file directory blocks in the buffer caches of clustered ASM instances, the modification time has a granularity of one hour. Thus, if several RAC instances open a file within a short interval, only the first instance to access it needs to update the modification time.

A file's layout is described by a series of extent pointers. An extent pointer specifies the disk number and the AU where the extent resides. A file directory entry holds the first 60 extent pointers of a file, sometimes called direct extents. The remainder of the entry holds pointers to indirect extents: other virtual metadata extents that contain the extent pointers of the ASM file. Each indirect extent is one AU, and each file directory entry can hold up to 300 indirect extent pointers. The notion of indirect pointers exists in most traditional file systems, such as the Unix BSD file system. The interpreted file layout is also called the extent map; the layout is affected by striping and redundancy.

Although this is an internal (ASM metadata) file, it is managed like any other ASM file in the disk group. It has its own entry in the ASM file directory (pointing to itself), it is mirrored in normal and high redundancy disk groups, and the ASM file directory grows as new files are created.

Each ASM file directory entry contains the following:
.File size
.File block size
.File type
.File redundancy level
.File striping configuration
.The first 60 extent pointers (also called direct extent pointers)
.Indirect extent pointers, once the file exceeds 60 extents
.File creation timestamp
.File last-modification timestamp
.The file name pointer into the ASM alias directory

Each new ASM file is assigned a number, allocated sequentially as files are created. The file number corresponds exactly to the block number in the file directory: block 1 of the file directory describes the directory itself (file 1), block 2 describes file 2, block 300 describes file 300, block 4000 describes file 4000, and so on. Below is ASM file 263; kfbh.block.blk gives the block number within the file directory, which is also the file number.

[grid@jyrac1 ~]$ kfed read /dev/raw/raw3 aun=77 blkn=7 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            4 ; 0x002: KFBTYP_FILEDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                     263 ; 0x004: blk=263
kfbh.block.obj:                       1 ; 0x008: file=1
kfbh.check:                   857258416 ; 0x00c: 0x3318b9b0
kfbh.fcn.base:                     3715 ; 0x010: 0x00000e83
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfffdb.node.incarn:           930413057 ; 0x000: A=1 NUMM=0x1bba7d00
kfffdb.node.frlist.number:   4294967295 ; 0x004: 0xffffffff
kfffdb.node.frlist.incarn:            0 ; 0x008: A=0 NUMM=0x0
kfffdb.hibytes:                       0 ; 0x00c: 0x00000000
kfffdb.lobytes:                 5251072 ; 0x010: 0x00502000
kfffdb.xtntcnt:                      12 ; 0x014: 0x0000000c
kfffdb.xtnteof:                      12 ; 0x018: 0x0000000c
kfffdb.blkSize:                    8192 ; 0x01c: 0x00002000
kfffdb.flags:                        17 ; 0x020: O=1 S=0 S=0 D=0 C=1 I=0 R=0 A=0
kfffdb.fileType:                      2 ; 0x021: 0x02
kfffdb.dXrs:                         18 ; 0x022: SCHE=0x1 NUMB=0x2
kfffdb.iXrs:                         19 ; 0x023: SCHE=0x1 NUMB=0x3
kfffdb.dXsiz[0]:             4294967295 ; 0x024: 0xffffffff
kfffdb.dXsiz[1]:                      0 ; 0x028: 0x00000000
kfffdb.dXsiz[2]:                      0 ; 0x02c: 0x00000000
kfffdb.iXsiz[0]:             4294967295 ; 0x030: 0xffffffff
kfffdb.iXsiz[1]:                      0 ; 0x034: 0x00000000
kfffdb.iXsiz[2]:                      0 ; 0x038: 0x00000000
kfffdb.xtntblk:                      12 ; 0x03c: 0x000c
kfffdb.break:                        60 ; 0x03e: 0x003c
kfffdb.priZn:                         0 ; 0x040: KFDZN_COLD
kfffdb.secZn:                         0 ; 0x041: KFDZN_COLD
kfffdb.ub2spare:                      0 ; 0x042: 0x0000
kfffdb.alias[0]:                    111 ; 0x044: 0x0000006f
kfffdb.alias[1]:             4294967295 ; 0x048: 0xffffffff
kfffdb.strpwdth:                      1 ; 0x04c: 0x01
kfffdb.strpsz:                       20 ; 0x04d: 0x14
kfffdb.usmsz:                         0 ; 0x04e: 0x0000
kfffdb.crets.hi:               33042832 ; 0x050: HOUR=0x10 DAYS=0xc MNTH=0xc YEAR=0x7e0
kfffdb.crets.lo:              286709760 ; 0x054: USEC=0x0 MSEC=0x1b6 SECS=0x11 MINS=0x4
kfffdb.modts.hi:               33042897 ; 0x058: HOUR=0x11 DAYS=0xe MNTH=0xc YEAR=0x7e0
kfffdb.modts.lo:                      0 ; 0x05c: USEC=0x0 MSEC=0x0 SECS=0x0 MINS=0x0
kfffdb.dasz[0]:                       0 ; 0x060: 0x00

There is no ASM file number 0, so block 0 of the file directory describes no file. The ASM file directory and the ASM allocation table (AT) are two complementary data structures, and the ALTER DISKGROUP CHECK command can verify that they are consistent. The ALTER DISKGROUP CHECK statement validates the internal consistency of disk group metadata; the check can be run at the disk group, disk, file or failgroup level, and it can only succeed while the disk group is mounted. By default, the CHECK DISK GROUP clause validates all metadata directories; any errors found during the check are recorded in the ASM alert log. The check statement generally performs the following operations:
1. Checks disk consistency.
2. Checks consistency between file extent maps and the allocation tables.
3. Checks that the alias metadata directory and the file directory correspond correctly.
4. Checks that the alias directory tree is correct.
5. Checks that the ASM metadata directories have no unreachable blocks.
The REPAIR or NOREPAIR keyword specifies whether ASM should attempt to repair errors found during the check; the default is NOREPAIR.

The V$ASM_FILE and V$ASM_ALIAS views
Most of the information in the ASM file directory can be queried through the V$ASM_FILE view, which shows one row per file for each mounted disk group. However, the view does not show the ASM metadata files. V$ASM_FILE has no column for the file name, so to get meaningful output we also need to join it with V$ASM_ALIAS. For example:

SQL> select f.group_number, f.file_number, a.name, f.type from v$asm_file f, v$asm_alias a where f.group_number=a.group_number and f.group_number=3 and f.file_number=a.file_number order by 1, 2;

GROUP_NUMBER FILE_NUMBER NAME                                     TYPE
------------ ----------- ---------------------------------------- --------------------
           3         256 SPFILE.256.930411925                     PARAMETERFILE
           3         256 spfilejyrac.ora                          PARAMETERFILE
           3         257 current.257.930412709                    CONTROLFILE
           3         258 SYSAUX.258.930413055                     DATAFILE
           3         259 SYSTEM.259.930413057                     DATAFILE
           3         260 EXAMPLE.260.930413057                    DATAFILE
           3         261 UNDOTBS2.261.930413057                   DATAFILE
           3         262 UNDOTBS1.262.930413057                   DATAFILE
           3         263 USERS.263.930413057                      DATAFILE
           3         264 group_1.264.930413221                    ONLINELOG
           3         265 group_2.265.930413225                    ONLINELOG
           3         266 group_3.266.930413227                    ONLINELOG
           3         267 group_4.267.930413231                    ONLINELOG
           3         268 TEMP.268.930413239                       TEMPFILE
           3         269 FILE_TRANSFER_0_0.269.930515105          DUMPSET
           3         269 tts.dmp                                  DUMPSET
           3         270 test01.dbf                               DATAFILE
           3         270 FILE_TRANSFER.270.930515465              DATAFILE

18 rows selected.

Files in different disk groups can have the same file number. For example, the archived redo logs in disk group 1 use file numbers 261-266, while disk group 3 above uses 256-270, so the two disk groups contain overlapping file numbers.

SQL> select f.group_number, f.file_number, a.name, f.type from v$asm_file f, v$asm_alias a where f.group_number=a.group_number and f.group_number=1 and f.file_number=a.file_number order by 1, 2;


GROUP_NUMBER FILE_NUMBER NAME                                     TYPE
------------ ----------- ---------------------------------------- --------------------
           1         261 thread_2_seq_117.261.930410687           ARCHIVELOG
           1         262 thread_2_seq_118.262.930410761           ARCHIVELOG
           1         262 2_118_928610797.dbf                      ARCHIVELOG
           1         263 1_56_928610797.dbf                       ARCHIVELOG
           1         263 thread_1_seq_56.263.930410761            ARCHIVELOG
           1         264 1_57_928610797.dbf                       ARCHIVELOG
           1         264 thread_1_seq_57.264.930411019            ARCHIVELOG
           1         265 thread_2_seq_1.265.930413237             ARCHIVELOG
           1         265 2_1_930413221.dbf                        ARCHIVELOG
           1         266 2_2_930413221.dbf                        ARCHIVELOG
           1         266 thread_2_seq_2.266.930434449             ARCHIVELOG

Where the ASM file directory is stored
In the ASM instance we can query the X$KFFXP view to find the AUs allocated to file number 1 in disk group DATADG.

SQL> select group_number,name from v$asm_diskgroup;

GROUP_NUMBER NAME
------------ ----------------------------------------
           1 ARCHDG
           2 CRSDG
           3 DATADG
           4 TESTDG


SQL> select group_number,disk_number,name,state,path from v$asm_disk where group_number=3;

GROUP_NUMBER DISK_NUMBER NAME                                     STATE      PATH
------------ ----------- ---------------------------------------- ---------- ------------------------------
           3           0 DATADG_0001                              NORMAL     /dev/raw/raw11
           3           3 DATADG_0000                              NORMAL     /dev/raw/raw10
           3           1 DATADG_0003                              NORMAL     /dev/raw/raw4
           3           2 DATADG_0002                              NORMAL     /dev/raw/raw3

SQL> select xnum_kffxp "virtual extent",pxn_kffxp "physical extent",au_kffxp "allocation unit",disk_kffxp "disk"
  2  from x$kffxp
  3  where group_kffxp=3 and number_kffxp=1 order by 1, 2;

virtual extent physical extent allocation unit       disk
-------------- --------------- --------------- ----------
             0               0               2          0
             0               1               2          2
             0               2               2          1
             1               3              76          3
             1               4              77          2
             1               5              76          1

6 rows selected.

From this output we can see that the ASM file directory is triple mirrored (each virtual extent has 3 physical extents), and that it currently consists of two virtual extents (0 and 1). With a 1MB AU and a 4KB ASM metadata block, one AU holds 256 directory entries. File numbers 1-255 are reserved for ASM metadata files, so extent 0 holds only the metadata-file entries, extent 1 holds the entries for the next 256 (non-metadata) files, and so on.
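The x$kffxp rows above can be turned into a small lookup that returns every mirrored copy of a virtual extent. A sketch in Python, using the query result above as literal data:

```python
# x$kffxp rows for file 1 of disk group 3, as queried above:
# (virtual extent, physical extent, allocation unit, disk)
extent_map = [
    (0, 0, 2, 0), (0, 1, 2, 2), (0, 2, 2, 1),
    (1, 3, 76, 3), (1, 4, 77, 2), (1, 5, 76, 1),
]

def mirrors(rows, virtual_extent):
    """All (disk, au) copies of one virtual extent, primary copy first
    (ordered by physical extent number)."""
    return [(disk, au)
            for vxn, pxn, au, disk in sorted(rows, key=lambda r: r[1])
            if vxn == virtual_extent]

# Extent 0 (the metadata-file entries) is triple mirrored on disks 0, 2 and 1:
print(mirrors(extent_map, 0))  # → [(0, 2), (2, 2), (1, 2)]
```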

Reading the kfdhdb.f1b1locn field of each ASM disk header with kfed gives the AU where ASM file 1 starts. The example below shows that file 1 of disk group DATADG is at AU 2 of disk /dev/raw/raw3, while on disk /dev/raw/raw10 kfdhdb.f1b1locn=0 means that disk holds no copy of file 1. Virtual extent 0, which stores the ASM metadata entries, lives in AU 2 of disks 0 (/dev/raw/raw11), 1 (/dev/raw/raw4) and 2 (/dev/raw/raw3), so the output below matches the ASM file directory distribution queried above exactly.

SQL> select group_number,disk_number,name,state,path from v$asm_disk where group_number=3 order by disk_number asc;

GROUP_NUMBER DISK_NUMBER NAME                           STATE                          PATH
------------ ----------- ------------------------------ ------------------------------ ------------------------------
           3           0 DATADG_0001                    NORMAL                         /dev/raw/raw11
           3           1 DATADG_0003                    NORMAL                         /dev/raw/raw4
           3           2 DATADG_0002                    NORMAL                         /dev/raw/raw3
           3           3 DATADG_0000                    NORMAL                         /dev/raw/raw10

[grid@jyrac1 ~]$ kfed read /dev/raw/raw11 | grep kfdhdb.f1b1locn
kfdhdb.f1b1locn:                      2 ; 0x0d4: 0x00000002
[grid@jyrac1 ~]$ kfed read /dev/raw/raw4 | grep kfdhdb.f1b1locn
kfdhdb.f1b1locn:                      2 ; 0x0d4: 0x00000002
[grid@jyrac1 ~]$ kfed read /dev/raw/raw3 | grep kfdhdb.f1b1locn
kfdhdb.f1b1locn:                      2 ; 0x0d4: 0x00000002
[grid@jyrac1 ~]$ kfed read /dev/raw/raw10 | grep kfdhdb.f1b1locn
kfdhdb.f1b1locn:                      0 ; 0x0d4: 0x00000000


File 1 always begins at AU 2 of disk 0; remember this location: disk 0, AU 2. It is the starting point for locating any file in ASM, and its role is somewhat like a boot sector on a disk, which is responsible for bringing up the OS after power-on. File 1 occupies at least two AUs. As mentioned above, within file 1 each file gets one metadata block that holds that file's extent layout. Each metadata block is 4K and an AU is 1M, so each AU can hold the layout of 256 files. In AU 2 of disk 0, everything is metadata-file information. More precisely: the first metadata block of that AU is reserved for the system; starting from the second block up to block 255, 255 metadata blocks in all, correspond to files 1 through 255, which is in fact all of the metadata files. In other words, AU 2 of disk 0 holds the extent layout of every metadata file. The second AU of file 1 holds file 256 in its first block, file 257 in its second block, and so on. Every time Oracle reads data from ASM, it first reads file 1 to find where the target file lives on disk, and then reads that file's data.

[grid@jyrac1 ~]$ kfed read /dev/raw/raw11 aun=2 blkn=1 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            4 ; 0x002: KFBTYP_FILEDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       1 ; 0x004: blk=1
kfbh.block.obj:                       1 ; 0x008: file=1
kfbh.check:                  2717147277 ; 0x00c: 0xa1f4608d
kfbh.fcn.base:                      569 ; 0x010: 0x00000239
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfffdb.node.incarn:                   1 ; 0x000: A=1 NUMM=0x0
kfffdb.node.frlist.number:   4294967295 ; 0x004: 0xffffffff
kfffdb.node.frlist.incarn:            0 ; 0x008: A=0 NUMM=0x0
kfffdb.hibytes:                       0 ; 0x00c: 0x00000000
kfffdb.lobytes:                 2097152 ; 0x010: 0x00200000
kfffdb.xtntcnt:                       6 ; 0x014: 0x00000006
kfffdb.xtnteof:                       6 ; 0x018: 0x00000006
kfffdb.blkSize:                    4096 ; 0x01c: 0x00001000
kfffdb.flags:                         1 ; 0x020: O=1 S=0 S=0 D=0 C=0 I=0 R=0 A=0
kfffdb.fileType:                     15 ; 0x021: 0x0f
kfffdb.dXrs:                         19 ; 0x022: SCHE=0x1 NUMB=0x3
kfffdb.iXrs:                         19 ; 0x023: SCHE=0x1 NUMB=0x3
kfffdb.dXsiz[0]:             4294967295 ; 0x024: 0xffffffff
kfffdb.dXsiz[1]:                      0 ; 0x028: 0x00000000
kfffdb.dXsiz[2]:                      0 ; 0x02c: 0x00000000
kfffdb.iXsiz[0]:             4294967295 ; 0x030: 0xffffffff
kfffdb.iXsiz[1]:                      0 ; 0x034: 0x00000000
kfffdb.iXsiz[2]:                      0 ; 0x038: 0x00000000
kfffdb.xtntblk:                       6 ; 0x03c: 0x0006
kfffdb.break:                        60 ; 0x03e: 0x003c
kfffdb.priZn:                         0 ; 0x040: KFDZN_COLD
kfffdb.secZn:                         0 ; 0x041: KFDZN_COLD
kfffdb.ub2spare:                      0 ; 0x042: 0x0000
kfffdb.alias[0]:             4294967295 ; 0x044: 0xffffffff
kfffdb.alias[1]:             4294967295 ; 0x048: 0xffffffff
kfffdb.strpwdth:                      0 ; 0x04c: 0x00
kfffdb.strpsz:                        0 ; 0x04d: 0x00
kfffdb.usmsz:                         0 ; 0x04e: 0x0000
kfffdb.crets.hi:               33042831 ; 0x050: HOUR=0xf DAYS=0xc MNTH=0xc YEAR=0x7e0
kfffdb.crets.lo:             2457402368 ; 0x054: USEC=0x0 MSEC=0x23f SECS=0x27 MINS=0x24
kfffdb.modts.hi:               33042831 ; 0x058: HOUR=0xf DAYS=0xc MNTH=0xc YEAR=0x7e0
kfffdb.modts.lo:             2457402368 ; 0x05c: USEC=0x0 MSEC=0x23f SECS=0x27 MINS=0x24
kfffdb.dasz[0]:                       0 ; 0x060: 0x00
kfffdb.dasz[1]:                       0 ; 0x061: 0x00
kfffdb.dasz[2]:                       0 ; 0x062: 0x00
kfffdb.dasz[3]:                       0 ; 0x063: 0x00
kfffdb.permissn:                      0 ; 0x064: 0x00
kfffdb.ub1spar1:                      0 ; 0x065: 0x00
kfffdb.ub2spar2:                      0 ; 0x066: 0x0000
kfffdb.user.entnum:                   0 ; 0x068: 0x0000
kfffdb.user.entinc:                   0 ; 0x06a: 0x0000
kfffdb.group.entnum:                  0 ; 0x06c: 0x0000
kfffdb.group.entinc:                  0 ; 0x06e: 0x0000
kfffdb.spare[0]:                      0 ; 0x070: 0x00000000
kfffdb.spare[1]:                      0 ; 0x074: 0x00000000
kfffdb.spare[2]:                      0 ; 0x078: 0x00000000
kfffdb.spare[3]:                      0 ; 0x07c: 0x00000000
kfffdb.spare[4]:                      0 ; 0x080: 0x00000000
kfffdb.spare[5]:                      0 ; 0x084: 0x00000000
kfffdb.spare[6]:                      0 ; 0x088: 0x00000000
kfffdb.spare[7]:                      0 ; 0x08c: 0x00000000
kfffdb.spare[8]:                      0 ; 0x090: 0x00000000
kfffdb.spare[9]:                      0 ; 0x094: 0x00000000
kfffdb.spare[10]:                     0 ; 0x098: 0x00000000
kfffdb.spare[11]:                     0 ; 0x09c: 0x00000000
kfffdb.usm:                             ; 0x0a0: length=0
kfffde[0].xptr.au:                    2 ; 0x4a0: 0x00000002
kfffde[0].xptr.disk:                  0 ; 0x4a4: 0x0000
kfffde[0].xptr.flags:                 0 ; 0x4a6: L=0 E=0 D=0 S=0
kfffde[0].xptr.chk:                  40 ; 0x4a7: 0x28
kfffde[1].xptr.au:                    2 ; 0x4a8: 0x00000002
kfffde[1].xptr.disk:                  2 ; 0x4ac: 0x0002
kfffde[1].xptr.flags:                 0 ; 0x4ae: L=0 E=0 D=0 S=0
kfffde[1].xptr.chk:                  42 ; 0x4af: 0x2a
kfffde[2].xptr.au:                    2 ; 0x4b0: 0x00000002
kfffde[2].xptr.disk:                  1 ; 0x4b4: 0x0001
kfffde[2].xptr.flags:                 0 ; 0x4b6: L=0 E=0 D=0 S=0
kfffde[2].xptr.chk:                  41 ; 0x4b7: 0x29
kfffde[3].xptr.au:                   76 ; 0x4b8: 0x0000004c
kfffde[3].xptr.disk:                  3 ; 0x4bc: 0x0003
kfffde[3].xptr.flags:                 0 ; 0x4be: L=0 E=0 D=0 S=0
kfffde[3].xptr.chk:                 101 ; 0x4bf: 0x65
kfffde[4].xptr.au:                   77 ; 0x4c0: 0x0000004d
kfffde[4].xptr.disk:                  2 ; 0x4c4: 0x0002
kfffde[4].xptr.flags:                 0 ; 0x4c6: L=0 E=0 D=0 S=0
kfffde[4].xptr.chk:                 101 ; 0x4c7: 0x65
kfffde[5].xptr.au:                   76 ; 0x4c8: 0x0000004c
kfffde[5].xptr.disk:                  1 ; 0x4cc: 0x0001
kfffde[5].xptr.flags:                 0 ; 0x4ce: L=0 E=0 D=0 S=0
kfffde[5].xptr.chk:                 103 ; 0x4cf: 0x67
kfffde[6].xptr.au:           4294967295 ; 0x4d0: 0xffffffff
kfffde[6].xptr.disk:              65535 ; 0x4d4: 0xffff
kfffde[6].xptr.flags:                 0 ; 0x4d6: L=0 E=0 D=0 S=0
kfffde[6].xptr.chk:                  42 ; 0x4d7: 0x2a

kfffde is an array of structures. Since the ASM file directory here is triple mirrored, kfffde[0] holds the location of the first AU of file 1, kfffde[1] the location of its second AU, kfffde[2] the third, and so on. From the values above:
kfffde[0].xptr.au=2 -- AU 2
kfffde[0].xptr.disk=0 -- disk 0
Taken together: disk 0, AU 2 is the location of the first AU of file 1.
kfffde[1].xptr.au=2 -- AU 2
kfffde[1].xptr.disk=2 -- disk 2
kfffde[2].xptr.au=2 -- AU 2
kfffde[2].xptr.disk=1 -- disk 1
These entries show that the mirror copies of file 1 are stored at AU 2 of disk 2 and AU 2 of disk 1, consistent with the metadata distribution obtained by querying the x$kffxp view.

kfffde[3] through kfffde[5] store the distribution of file 2 (the disk directory):
kfffde[3].xptr.au=76 -- AU 76
kfffde[3].xptr.disk=3 -- disk 3
Taken together: disk 3, AU 76 is the location of the first AU of file 2.
kfffde[4].xptr.au=77 -- AU 77
kfffde[4].xptr.disk=2 -- disk 2
kfffde[5].xptr.au=76 -- AU 76
kfffde[5].xptr.disk=1 -- disk 1
These entries show that the mirror copies of file 2 (the disk directory) are stored at AU 77 of disk 2 and AU 76 of disk 1, consistent with the non-metadata file distribution obtained by querying the x$kffxp view.
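The manual reading of kfffde[n].xptr.au / .disk pairs above can be automated by parsing kfed's text output; unallocated slots are recognizable by au = 0xffffffff. A minimal Python sketch (the regular expressions are written against the kfed output format shown above):

```python
import re

def extent_pointers(kfed_output):
    """Extract (disk, au) extent pointers from `kfed read` text output,
    skipping unallocated slots (au == 0xffffffff)."""
    aus = {int(m.group(1)): int(m.group(2))
           for m in re.finditer(r'kfffde\[(\d+)\]\.xptr\.au:\s+(\d+)', kfed_output)}
    disks = {int(m.group(1)): int(m.group(2))
             for m in re.finditer(r'kfffde\[(\d+)\]\.xptr\.disk:\s+(\d+)', kfed_output)}
    return [(disks[i], aus[i]) for i in sorted(aus) if aus[i] != 0xFFFFFFFF]

# A few lines captured from the dump above:
sample = """\
kfffde[0].xptr.au:                    2 ; 0x4a0: 0x00000002
kfffde[0].xptr.disk:                  0 ; 0x4a4: 0x0000
kfffde[1].xptr.au:                    2 ; 0x4a8: 0x00000002
kfffde[1].xptr.disk:                  2 ; 0x4ac: 0x0002
kfffde[2].xptr.au:           4294967295 ; 0x4d0: 0xffffffff
kfffde[2].xptr.disk:              65535 ; 0x4d4: 0xffff
"""
print(extent_pointers(sample))  # → [(0, 2), (2, 2)]
```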

ASM file directory entries for database files
The following query shows which files are managed by my ASM instance:

SQL> select file_number "asm file number", name "file name" from v$asm_alias where group_number=3 order by 1;

asm file number file name
--------------- --------------------------------------------------------------------------------------------------------------------------------------------
            256 SPFILE.256.930411925
            256 spfilejyrac.ora
            257 current.257.930412709
            258 SYSAUX.258.930413055
            259 SYSTEM.259.930413057
            260 EXAMPLE.260.930413057
            261 UNDOTBS2.261.930413057
            262 UNDOTBS1.262.930413057
            263 USERS.263.930413057
            264 group_1.264.930413221
            265 group_2.265.930413225
            266 group_3.266.930413227
            267 group_4.267.930413231
            268 TEMP.268.930413239
            269 tts.dmp
            270 test01.dbf
            ...
27 rows selected.

Next, let's look at the file directory entry for file 263 (USERS.263.930413057). First, query X$KFFXP to get the file's extent and AU distribution:

SQL> select xnum_kffxp "virtual extent",
  2  pxn_kffxp "physical extent",
  3  au_kffxp "allocation unit",
  4  disk_kffxp "disk"
  5  from x$kffxp
  6  where group_kffxp=3 
  7  and number_kffxp=263 
  8  and xnum_kffxp <> 2147483648
  9  order by 1, 2;

virtual extent physical extent allocation unit       disk
-------------- --------------- --------------- ----------
             0               0            1309          3
             0               1            1309          2
             1               2            1310          2
             1               3            1310          3
             2               4            1310          1
             2               5            1314          0
             3               6            1315          0
             3               7            1311          2
             4               8            1311          3
             4               9            1311          1
             5              10            1312          2
             5              11            1316          0

12 rows selected.
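The extent map returned above can be checked with a short sketch. This is a minimal illustration, with the rows transcribed by hand from the X$KFFXP query; grouping physical extents by virtual extent recovers both the virtual extent count and the redundancy level:

```python
from collections import defaultdict

# (virtual extent, physical extent, allocation unit, disk),
# transcribed from the X$KFFXP query for file 263 above
extent_map = [
    (0, 0, 1309, 3), (0, 1, 1309, 2),
    (1, 2, 1310, 2), (1, 3, 1310, 3),
    (2, 4, 1310, 1), (2, 5, 1314, 0),
    (3, 6, 1315, 0), (3, 7, 1311, 2),
    (4, 8, 1311, 3), (4, 9, 1311, 1),
    (5, 10, 1312, 2), (5, 11, 1316, 0),
]

# Group the physical copies under their virtual extent.
copies = defaultdict(list)
for vext, pext, au, disk in extent_map:
    copies[vext].append((disk, au))

print(len(copies))                        # 6 virtual extents
print({len(v) for v in copies.values()})  # {2}: two mirror copies each
```

Note that within each virtual extent the two copies land on different disks, which is exactly what normal redundancy requires.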

We can see that the instance allocated 6 virtual extents for this file, and that the file has 2-way (normal) redundancy. Next, query the numbers and paths of the disks in the DATADG disk group.

SQL> select disk_number, path from v$asm_disk where group_number=3 order by 1;

DISK_NUMBER PATH
----------- ------------------------------
          0 /dev/raw/raw11
          1 /dev/raw/raw4
          2 /dev/raw/raw3
          3 /dev/raw/raw10

The extent distribution of the ASM file directory (file 1) queried earlier is as follows:

SQL> select xnum_kffxp "virtual extent",pxn_kffxp "physical extent",au_kffxp "allocation unit",disk_kffxp "disk"
  2  from x$kffxp
  3  where group_kffxp=3 and number_kffxp=1 order by 1, 2;

virtual extent physical extent allocation unit       disk
-------------- --------------- --------------- ----------
             0               0               2          0
             0               1               2          2
             0               2               2          1
             1               3              76          3
             1               4              77          2
             1               5              76          1

6 rows selected.

Now we use the kfed tool to examine this file's ASM file directory entry. It lives in block 263 of the file directory, i.e. block 7 of the file directory's virtual extent 1 (263 minus 256 gives 7). Extent 1 is located in AU 76 of disk 3, with one mirror copy in AU 76 of disk 1 and another in AU 77 of disk 2. Let's verify whether AU 76 of disk 3 (/dev/raw/raw10), AU 76 of disk 1 (/dev/raw/raw4) and AU 77 of disk 2 (/dev/raw/raw3) store the same information.
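The block arithmetic above can be written as a small sketch. It assumes the usual defaults of a 1 MB AU and 4 KB metadata blocks, i.e. 256 file directory blocks per virtual extent, which is why 256 is the divisor:

```python
BLOCKS_PER_EXTENT = 256  # 1 MB AU / 4 KB metadata block (assumed defaults)

def filedir_location(file_number):
    """Locate a file's directory entry inside the ASM file directory (file 1)."""
    vext = file_number // BLOCKS_PER_EXTENT  # which virtual extent of file 1
    blkn = file_number % BLOCKS_PER_EXTENT   # block offset within that extent
    return vext, blkn

print(filedir_location(263))  # (1, 7): extent 1, block 7 -> kfed ... blkn=7
print(filedir_location(257))  # (1, 1): the control file entry examined later
```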

[grid@jyrac1 ~]$ kfed read /dev/raw/raw10 aun=76 blkn=7 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            4 ; 0x002: KFBTYP_FILEDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                     263 ; 0x004: blk=263
kfbh.block.obj:                       1 ; 0x008: file=1
kfbh.check:                   857258416 ; 0x00c: 0x3318b9b0
kfbh.fcn.base:                     3715 ; 0x010: 0x00000e83
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfffdb.node.incarn:           930413057 ; 0x000: A=1 NUMM=0x1bba7d00
kfffdb.node.frlist.number:   4294967295 ; 0x004: 0xffffffff
kfffdb.node.frlist.incarn:            0 ; 0x008: A=0 NUMM=0x0
kfffdb.hibytes:                       0 ; 0x00c: 0x00000000
kfffdb.lobytes:                 5251072 ; 0x010: 0x00502000
kfffdb.xtntcnt:                      12 ; 0x014: 0x0000000c
kfffdb.xtnteof:                      12 ; 0x018: 0x0000000c
kfffdb.blkSize:                    8192 ; 0x01c: 0x00002000
kfffdb.flags:                        17 ; 0x020: O=1 S=0 S=0 D=0 C=1 I=0 R=0 A=0
kfffdb.fileType:                      2 ; 0x021: 0x02
kfffdb.dXrs:                         18 ; 0x022: SCHE=0x1 NUMB=0x2
kfffdb.iXrs:                         19 ; 0x023: SCHE=0x1 NUMB=0x3
kfffdb.dXsiz[0]:             4294967295 ; 0x024: 0xffffffff
kfffdb.dXsiz[1]:                      0 ; 0x028: 0x00000000
kfffdb.dXsiz[2]:                      0 ; 0x02c: 0x00000000
kfffdb.iXsiz[0]:             4294967295 ; 0x030: 0xffffffff
kfffdb.iXsiz[1]:                      0 ; 0x034: 0x00000000
kfffdb.iXsiz[2]:                      0 ; 0x038: 0x00000000
kfffdb.xtntblk:                      12 ; 0x03c: 0x000c
kfffdb.break:                        60 ; 0x03e: 0x003c
kfffdb.priZn:                         0 ; 0x040: KFDZN_COLD
kfffdb.secZn:                         0 ; 0x041: KFDZN_COLD
kfffdb.ub2spare:                      0 ; 0x042: 0x0000
kfffdb.alias[0]:                    111 ; 0x044: 0x0000006f
kfffdb.alias[1]:             4294967295 ; 0x048: 0xffffffff
kfffdb.strpwdth:                      1 ; 0x04c: 0x01
kfffdb.strpsz:                       20 ; 0x04d: 0x14
kfffdb.usmsz:                         0 ; 0x04e: 0x0000
kfffdb.crets.hi:               33042832 ; 0x050: HOUR=0x10 DAYS=0xc MNTH=0xc YEAR=0x7e0
kfffdb.crets.lo:              286709760 ; 0x054: USEC=0x0 MSEC=0x1b6 SECS=0x11 MINS=0x4
kfffdb.modts.hi:               33042897 ; 0x058: HOUR=0x11 DAYS=0xe MNTH=0xc YEAR=0x7e0
kfffdb.modts.lo:                      0 ; 0x05c: USEC=0x0 MSEC=0x0 SECS=0x0 MINS=0x0
kfffdb.dasz[0]:                       0 ; 0x060: 0x00
kfffdb.dasz[1]:                       0 ; 0x061: 0x00
kfffdb.dasz[2]:                       0 ; 0x062: 0x00
kfffdb.dasz[3]:                       0 ; 0x063: 0x00
kfffdb.permissn:                      0 ; 0x064: 0x00
kfffdb.ub1spar1:                      0 ; 0x065: 0x00
kfffdb.ub2spar2:                      0 ; 0x066: 0x0000
kfffdb.user.entnum:                   0 ; 0x068: 0x0000
kfffdb.user.entinc:                   0 ; 0x06a: 0x0000
kfffdb.group.entnum:                  0 ; 0x06c: 0x0000
kfffdb.group.entinc:                  0 ; 0x06e: 0x0000
kfffdb.spare[0]:                      0 ; 0x070: 0x00000000
kfffdb.spare[1]:                      0 ; 0x074: 0x00000000
kfffdb.spare[2]:                      0 ; 0x078: 0x00000000
kfffdb.spare[3]:                      0 ; 0x07c: 0x00000000
kfffdb.spare[4]:                      0 ; 0x080: 0x00000000
kfffdb.spare[5]:                      0 ; 0x084: 0x00000000
kfffdb.spare[6]:                      0 ; 0x088: 0x00000000
kfffdb.spare[7]:                      0 ; 0x08c: 0x00000000
kfffdb.spare[8]:                      0 ; 0x090: 0x00000000
kfffdb.spare[9]:                      0 ; 0x094: 0x00000000
kfffdb.spare[10]:                     0 ; 0x098: 0x00000000
kfffdb.spare[11]:                     0 ; 0x09c: 0x00000000
kfffdb.usm:                             ; 0x0a0: length=0
kfffde[0].xptr.au:                 1309 ; 0x4a0: 0x0000051d
kfffde[0].xptr.disk:                  3 ; 0x4a4: 0x0003
kfffde[0].xptr.flags:                 0 ; 0x4a6: L=0 E=0 D=0 S=0
kfffde[0].xptr.chk:                  49 ; 0x4a7: 0x31
kfffde[1].xptr.au:                 1309 ; 0x4a8: 0x0000051d
kfffde[1].xptr.disk:                  2 ; 0x4ac: 0x0002
kfffde[1].xptr.flags:                 0 ; 0x4ae: L=0 E=0 D=0 S=0
kfffde[1].xptr.chk:                  48 ; 0x4af: 0x30
kfffde[2].xptr.au:                 1310 ; 0x4b0: 0x0000051e
kfffde[2].xptr.disk:                  2 ; 0x4b4: 0x0002
kfffde[2].xptr.flags:                 0 ; 0x4b6: L=0 E=0 D=0 S=0
kfffde[2].xptr.chk:                  51 ; 0x4b7: 0x33
kfffde[3].xptr.au:                 1310 ; 0x4b8: 0x0000051e
kfffde[3].xptr.disk:                  3 ; 0x4bc: 0x0003
kfffde[3].xptr.flags:                 0 ; 0x4be: L=0 E=0 D=0 S=0
kfffde[3].xptr.chk:                  50 ; 0x4bf: 0x32
kfffde[4].xptr.au:                 1310 ; 0x4c0: 0x0000051e
kfffde[4].xptr.disk:                  1 ; 0x4c4: 0x0001
kfffde[4].xptr.flags:                 0 ; 0x4c6: L=0 E=0 D=0 S=0
kfffde[4].xptr.chk:                  48 ; 0x4c7: 0x30
kfffde[5].xptr.au:                 1314 ; 0x4c8: 0x00000522
kfffde[5].xptr.disk:                  0 ; 0x4cc: 0x0000
kfffde[5].xptr.flags:                 0 ; 0x4ce: L=0 E=0 D=0 S=0
kfffde[5].xptr.chk:                  13 ; 0x4cf: 0x0d
kfffde[6].xptr.au:                 1315 ; 0x4d0: 0x00000523
kfffde[6].xptr.disk:                  0 ; 0x4d4: 0x0000
kfffde[6].xptr.flags:                 0 ; 0x4d6: L=0 E=0 D=0 S=0
kfffde[6].xptr.chk:                  12 ; 0x4d7: 0x0c
kfffde[7].xptr.au:                 1311 ; 0x4d8: 0x0000051f
kfffde[7].xptr.disk:                  2 ; 0x4dc: 0x0002
kfffde[7].xptr.flags:                 0 ; 0x4de: L=0 E=0 D=0 S=0
kfffde[7].xptr.chk:                  50 ; 0x4df: 0x32
kfffde[8].xptr.au:                 1311 ; 0x4e0: 0x0000051f
kfffde[8].xptr.disk:                  3 ; 0x4e4: 0x0003
kfffde[8].xptr.flags:                 0 ; 0x4e6: L=0 E=0 D=0 S=0
kfffde[8].xptr.chk:                  51 ; 0x4e7: 0x33
kfffde[9].xptr.au:                 1311 ; 0x4e8: 0x0000051f
kfffde[9].xptr.disk:                  1 ; 0x4ec: 0x0001
kfffde[9].xptr.flags:                 0 ; 0x4ee: L=0 E=0 D=0 S=0
kfffde[9].xptr.chk:                  49 ; 0x4ef: 0x31
kfffde[10].xptr.au:                1312 ; 0x4f0: 0x00000520
kfffde[10].xptr.disk:                 2 ; 0x4f4: 0x0002
kfffde[10].xptr.flags:                0 ; 0x4f6: L=0 E=0 D=0 S=0
kfffde[10].xptr.chk:                 13 ; 0x4f7: 0x0d
kfffde[11].xptr.au:                1316 ; 0x4f8: 0x00000524
kfffde[11].xptr.disk:                 0 ; 0x4fc: 0x0000
kfffde[11].xptr.flags:                0 ; 0x4fe: L=0 E=0 D=0 S=0
kfffde[11].xptr.chk:                 11 ; 0x4ff: 0x0b

[grid@jyrac1 ~]$ kfed read /dev/raw/raw4 aun=76 blkn=7 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            4 ; 0x002: KFBTYP_FILEDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                     263 ; 0x004: blk=263
kfbh.block.obj:                       1 ; 0x008: file=1
kfbh.check:                   857258416 ; 0x00c: 0x3318b9b0
kfbh.fcn.base:                     3715 ; 0x010: 0x00000e83
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfffdb.node.incarn:           930413057 ; 0x000: A=1 NUMM=0x1bba7d00
kfffdb.node.frlist.number:   4294967295 ; 0x004: 0xffffffff
kfffdb.node.frlist.incarn:            0 ; 0x008: A=0 NUMM=0x0
kfffdb.hibytes:                       0 ; 0x00c: 0x00000000
kfffdb.lobytes:                 5251072 ; 0x010: 0x00502000
kfffdb.xtntcnt:                      12 ; 0x014: 0x0000000c
kfffdb.xtnteof:                      12 ; 0x018: 0x0000000c
kfffdb.blkSize:                    8192 ; 0x01c: 0x00002000
kfffdb.flags:                        17 ; 0x020: O=1 S=0 S=0 D=0 C=1 I=0 R=0 A=0
kfffdb.fileType:                      2 ; 0x021: 0x02
kfffdb.dXrs:                         18 ; 0x022: SCHE=0x1 NUMB=0x2
kfffdb.iXrs:                         19 ; 0x023: SCHE=0x1 NUMB=0x3
kfffdb.dXsiz[0]:             4294967295 ; 0x024: 0xffffffff
kfffdb.dXsiz[1]:                      0 ; 0x028: 0x00000000
kfffdb.dXsiz[2]:                      0 ; 0x02c: 0x00000000
kfffdb.iXsiz[0]:             4294967295 ; 0x030: 0xffffffff
kfffdb.iXsiz[1]:                      0 ; 0x034: 0x00000000
kfffdb.iXsiz[2]:                      0 ; 0x038: 0x00000000
kfffdb.xtntblk:                      12 ; 0x03c: 0x000c
kfffdb.break:                        60 ; 0x03e: 0x003c
kfffdb.priZn:                         0 ; 0x040: KFDZN_COLD
kfffdb.secZn:                         0 ; 0x041: KFDZN_COLD
kfffdb.ub2spare:                      0 ; 0x042: 0x0000
kfffdb.alias[0]:                    111 ; 0x044: 0x0000006f
kfffdb.alias[1]:             4294967295 ; 0x048: 0xffffffff
kfffdb.strpwdth:                      1 ; 0x04c: 0x01
kfffdb.strpsz:                       20 ; 0x04d: 0x14
kfffdb.usmsz:                         0 ; 0x04e: 0x0000
kfffdb.crets.hi:               33042832 ; 0x050: HOUR=0x10 DAYS=0xc MNTH=0xc YEAR=0x7e0
kfffdb.crets.lo:              286709760 ; 0x054: USEC=0x0 MSEC=0x1b6 SECS=0x11 MINS=0x4
kfffdb.modts.hi:               33042897 ; 0x058: HOUR=0x11 DAYS=0xe MNTH=0xc YEAR=0x7e0
kfffdb.modts.lo:                      0 ; 0x05c: USEC=0x0 MSEC=0x0 SECS=0x0 MINS=0x0
kfffdb.dasz[0]:                       0 ; 0x060: 0x00
kfffdb.dasz[1]:                       0 ; 0x061: 0x00
kfffdb.dasz[2]:                       0 ; 0x062: 0x00
kfffdb.dasz[3]:                       0 ; 0x063: 0x00
kfffdb.permissn:                      0 ; 0x064: 0x00
kfffdb.ub1spar1:                      0 ; 0x065: 0x00
kfffdb.ub2spar2:                      0 ; 0x066: 0x0000
kfffdb.user.entnum:                   0 ; 0x068: 0x0000
kfffdb.user.entinc:                   0 ; 0x06a: 0x0000
kfffdb.group.entnum:                  0 ; 0x06c: 0x0000
kfffdb.group.entinc:                  0 ; 0x06e: 0x0000
kfffdb.spare[0]:                      0 ; 0x070: 0x00000000
kfffdb.spare[1]:                      0 ; 0x074: 0x00000000
kfffdb.spare[2]:                      0 ; 0x078: 0x00000000
kfffdb.spare[3]:                      0 ; 0x07c: 0x00000000
kfffdb.spare[4]:                      0 ; 0x080: 0x00000000
kfffdb.spare[5]:                      0 ; 0x084: 0x00000000
kfffdb.spare[6]:                      0 ; 0x088: 0x00000000
kfffdb.spare[7]:                      0 ; 0x08c: 0x00000000
kfffdb.spare[8]:                      0 ; 0x090: 0x00000000
kfffdb.spare[9]:                      0 ; 0x094: 0x00000000
kfffdb.spare[10]:                     0 ; 0x098: 0x00000000
kfffdb.spare[11]:                     0 ; 0x09c: 0x00000000
kfffdb.usm:                             ; 0x0a0: length=0
kfffde[0].xptr.au:                 1309 ; 0x4a0: 0x0000051d
kfffde[0].xptr.disk:                  3 ; 0x4a4: 0x0003
kfffde[0].xptr.flags:                 0 ; 0x4a6: L=0 E=0 D=0 S=0
kfffde[0].xptr.chk:                  49 ; 0x4a7: 0x31
kfffde[1].xptr.au:                 1309 ; 0x4a8: 0x0000051d
kfffde[1].xptr.disk:                  2 ; 0x4ac: 0x0002
kfffde[1].xptr.flags:                 0 ; 0x4ae: L=0 E=0 D=0 S=0
kfffde[1].xptr.chk:                  48 ; 0x4af: 0x30
kfffde[2].xptr.au:                 1310 ; 0x4b0: 0x0000051e
kfffde[2].xptr.disk:                  2 ; 0x4b4: 0x0002
kfffde[2].xptr.flags:                 0 ; 0x4b6: L=0 E=0 D=0 S=0
kfffde[2].xptr.chk:                  51 ; 0x4b7: 0x33
kfffde[3].xptr.au:                 1310 ; 0x4b8: 0x0000051e
kfffde[3].xptr.disk:                  3 ; 0x4bc: 0x0003
kfffde[3].xptr.flags:                 0 ; 0x4be: L=0 E=0 D=0 S=0
kfffde[3].xptr.chk:                  50 ; 0x4bf: 0x32
kfffde[4].xptr.au:                 1310 ; 0x4c0: 0x0000051e
kfffde[4].xptr.disk:                  1 ; 0x4c4: 0x0001
kfffde[4].xptr.flags:                 0 ; 0x4c6: L=0 E=0 D=0 S=0
kfffde[4].xptr.chk:                  48 ; 0x4c7: 0x30
kfffde[5].xptr.au:                 1314 ; 0x4c8: 0x00000522
kfffde[5].xptr.disk:                  0 ; 0x4cc: 0x0000
kfffde[5].xptr.flags:                 0 ; 0x4ce: L=0 E=0 D=0 S=0
kfffde[5].xptr.chk:                  13 ; 0x4cf: 0x0d
kfffde[6].xptr.au:                 1315 ; 0x4d0: 0x00000523
kfffde[6].xptr.disk:                  0 ; 0x4d4: 0x0000
kfffde[6].xptr.flags:                 0 ; 0x4d6: L=0 E=0 D=0 S=0
kfffde[6].xptr.chk:                  12 ; 0x4d7: 0x0c
kfffde[7].xptr.au:                 1311 ; 0x4d8: 0x0000051f
kfffde[7].xptr.disk:                  2 ; 0x4dc: 0x0002
kfffde[7].xptr.flags:                 0 ; 0x4de: L=0 E=0 D=0 S=0
kfffde[7].xptr.chk:                  50 ; 0x4df: 0x32
kfffde[8].xptr.au:                 1311 ; 0x4e0: 0x0000051f
kfffde[8].xptr.disk:                  3 ; 0x4e4: 0x0003
kfffde[8].xptr.flags:                 0 ; 0x4e6: L=0 E=0 D=0 S=0
kfffde[8].xptr.chk:                  51 ; 0x4e7: 0x33
kfffde[9].xptr.au:                 1311 ; 0x4e8: 0x0000051f
kfffde[9].xptr.disk:                  1 ; 0x4ec: 0x0001
kfffde[9].xptr.flags:                 0 ; 0x4ee: L=0 E=0 D=0 S=0
kfffde[9].xptr.chk:                  49 ; 0x4ef: 0x31
kfffde[10].xptr.au:                1312 ; 0x4f0: 0x00000520
kfffde[10].xptr.disk:                 2 ; 0x4f4: 0x0002
kfffde[10].xptr.flags:                0 ; 0x4f6: L=0 E=0 D=0 S=0
kfffde[10].xptr.chk:                 13 ; 0x4f7: 0x0d
kfffde[11].xptr.au:                1316 ; 0x4f8: 0x00000524
kfffde[11].xptr.disk:                 0 ; 0x4fc: 0x0000
kfffde[11].xptr.flags:                0 ; 0x4fe: L=0 E=0 D=0 S=0
kfffde[11].xptr.chk:                 11 ; 0x4ff: 0x0b
[grid@jyrac1 ~]$ kfed read /dev/raw/raw3 aun=77 blkn=7 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            4 ; 0x002: KFBTYP_FILEDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                     263 ; 0x004: blk=263
kfbh.block.obj:                       1 ; 0x008: file=1
kfbh.check:                   857258416 ; 0x00c: 0x3318b9b0
kfbh.fcn.base:                     3715 ; 0x010: 0x00000e83
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfffdb.node.incarn:           930413057 ; 0x000: A=1 NUMM=0x1bba7d00
kfffdb.node.frlist.number:   4294967295 ; 0x004: 0xffffffff
kfffdb.node.frlist.incarn:            0 ; 0x008: A=0 NUMM=0x0
kfffdb.hibytes:                       0 ; 0x00c: 0x00000000
kfffdb.lobytes:                 5251072 ; 0x010: 0x00502000
kfffdb.xtntcnt:                      12 ; 0x014: 0x0000000c
kfffdb.xtnteof:                      12 ; 0x018: 0x0000000c
kfffdb.blkSize:                    8192 ; 0x01c: 0x00002000
kfffdb.flags:                        17 ; 0x020: O=1 S=0 S=0 D=0 C=1 I=0 R=0 A=0
kfffdb.fileType:                      2 ; 0x021: 0x02
kfffdb.dXrs:                         18 ; 0x022: SCHE=0x1 NUMB=0x2
kfffdb.iXrs:                         19 ; 0x023: SCHE=0x1 NUMB=0x3
kfffdb.dXsiz[0]:             4294967295 ; 0x024: 0xffffffff
kfffdb.dXsiz[1]:                      0 ; 0x028: 0x00000000
kfffdb.dXsiz[2]:                      0 ; 0x02c: 0x00000000
kfffdb.iXsiz[0]:             4294967295 ; 0x030: 0xffffffff
kfffdb.iXsiz[1]:                      0 ; 0x034: 0x00000000
kfffdb.iXsiz[2]:                      0 ; 0x038: 0x00000000
kfffdb.xtntblk:                      12 ; 0x03c: 0x000c
kfffdb.break:                        60 ; 0x03e: 0x003c
kfffdb.priZn:                         0 ; 0x040: KFDZN_COLD
kfffdb.secZn:                         0 ; 0x041: KFDZN_COLD
kfffdb.ub2spare:                      0 ; 0x042: 0x0000
kfffdb.alias[0]:                    111 ; 0x044: 0x0000006f
kfffdb.alias[1]:             4294967295 ; 0x048: 0xffffffff
kfffdb.strpwdth:                      1 ; 0x04c: 0x01
kfffdb.strpsz:                       20 ; 0x04d: 0x14
kfffdb.usmsz:                         0 ; 0x04e: 0x0000
kfffdb.crets.hi:               33042832 ; 0x050: HOUR=0x10 DAYS=0xc MNTH=0xc YEAR=0x7e0
kfffdb.crets.lo:              286709760 ; 0x054: USEC=0x0 MSEC=0x1b6 SECS=0x11 MINS=0x4
kfffdb.modts.hi:               33042897 ; 0x058: HOUR=0x11 DAYS=0xe MNTH=0xc YEAR=0x7e0
kfffdb.modts.lo:                      0 ; 0x05c: USEC=0x0 MSEC=0x0 SECS=0x0 MINS=0x0
kfffdb.dasz[0]:                       0 ; 0x060: 0x00
kfffdb.dasz[1]:                       0 ; 0x061: 0x00
kfffdb.dasz[2]:                       0 ; 0x062: 0x00
kfffdb.dasz[3]:                       0 ; 0x063: 0x00
kfffdb.permissn:                      0 ; 0x064: 0x00
kfffdb.ub1spar1:                      0 ; 0x065: 0x00
kfffdb.ub2spar2:                      0 ; 0x066: 0x0000
kfffdb.user.entnum:                   0 ; 0x068: 0x0000
kfffdb.user.entinc:                   0 ; 0x06a: 0x0000
kfffdb.group.entnum:                  0 ; 0x06c: 0x0000
kfffdb.group.entinc:                  0 ; 0x06e: 0x0000
kfffdb.spare[0]:                      0 ; 0x070: 0x00000000
kfffdb.spare[1]:                      0 ; 0x074: 0x00000000
kfffdb.spare[2]:                      0 ; 0x078: 0x00000000
kfffdb.spare[3]:                      0 ; 0x07c: 0x00000000
kfffdb.spare[4]:                      0 ; 0x080: 0x00000000
kfffdb.spare[5]:                      0 ; 0x084: 0x00000000
kfffdb.spare[6]:                      0 ; 0x088: 0x00000000
kfffdb.spare[7]:                      0 ; 0x08c: 0x00000000
kfffdb.spare[8]:                      0 ; 0x090: 0x00000000
kfffdb.spare[9]:                      0 ; 0x094: 0x00000000
kfffdb.spare[10]:                     0 ; 0x098: 0x00000000
kfffdb.spare[11]:                     0 ; 0x09c: 0x00000000
kfffdb.usm:                             ; 0x0a0: length=0
kfffde[0].xptr.au:                 1309 ; 0x4a0: 0x0000051d
kfffde[0].xptr.disk:                  3 ; 0x4a4: 0x0003
kfffde[0].xptr.flags:                 0 ; 0x4a6: L=0 E=0 D=0 S=0
kfffde[0].xptr.chk:                  49 ; 0x4a7: 0x31
kfffde[1].xptr.au:                 1309 ; 0x4a8: 0x0000051d
kfffde[1].xptr.disk:                  2 ; 0x4ac: 0x0002
kfffde[1].xptr.flags:                 0 ; 0x4ae: L=0 E=0 D=0 S=0
kfffde[1].xptr.chk:                  48 ; 0x4af: 0x30
kfffde[2].xptr.au:                 1310 ; 0x4b0: 0x0000051e
kfffde[2].xptr.disk:                  2 ; 0x4b4: 0x0002
kfffde[2].xptr.flags:                 0 ; 0x4b6: L=0 E=0 D=0 S=0
kfffde[2].xptr.chk:                  51 ; 0x4b7: 0x33
kfffde[3].xptr.au:                 1310 ; 0x4b8: 0x0000051e
kfffde[3].xptr.disk:                  3 ; 0x4bc: 0x0003
kfffde[3].xptr.flags:                 0 ; 0x4be: L=0 E=0 D=0 S=0
kfffde[3].xptr.chk:                  50 ; 0x4bf: 0x32
kfffde[4].xptr.au:                 1310 ; 0x4c0: 0x0000051e
kfffde[4].xptr.disk:                  1 ; 0x4c4: 0x0001
kfffde[4].xptr.flags:                 0 ; 0x4c6: L=0 E=0 D=0 S=0
kfffde[4].xptr.chk:                  48 ; 0x4c7: 0x30
kfffde[5].xptr.au:                 1314 ; 0x4c8: 0x00000522
kfffde[5].xptr.disk:                  0 ; 0x4cc: 0x0000
kfffde[5].xptr.flags:                 0 ; 0x4ce: L=0 E=0 D=0 S=0
kfffde[5].xptr.chk:                  13 ; 0x4cf: 0x0d
kfffde[6].xptr.au:                 1315 ; 0x4d0: 0x00000523
kfffde[6].xptr.disk:                  0 ; 0x4d4: 0x0000
kfffde[6].xptr.flags:                 0 ; 0x4d6: L=0 E=0 D=0 S=0
kfffde[6].xptr.chk:                  12 ; 0x4d7: 0x0c
kfffde[7].xptr.au:                 1311 ; 0x4d8: 0x0000051f
kfffde[7].xptr.disk:                  2 ; 0x4dc: 0x0002
kfffde[7].xptr.flags:                 0 ; 0x4de: L=0 E=0 D=0 S=0
kfffde[7].xptr.chk:                  50 ; 0x4df: 0x32
kfffde[8].xptr.au:                 1311 ; 0x4e0: 0x0000051f
kfffde[8].xptr.disk:                  3 ; 0x4e4: 0x0003
kfffde[8].xptr.flags:                 0 ; 0x4e6: L=0 E=0 D=0 S=0
kfffde[8].xptr.chk:                  51 ; 0x4e7: 0x33
kfffde[9].xptr.au:                 1311 ; 0x4e8: 0x0000051f
kfffde[9].xptr.disk:                  1 ; 0x4ec: 0x0001
kfffde[9].xptr.flags:                 0 ; 0x4ee: L=0 E=0 D=0 S=0
kfffde[9].xptr.chk:                  49 ; 0x4ef: 0x31
kfffde[10].xptr.au:                1312 ; 0x4f0: 0x00000520
kfffde[10].xptr.disk:                 2 ; 0x4f4: 0x0002
kfffde[10].xptr.flags:                0 ; 0x4f6: L=0 E=0 D=0 S=0
kfffde[10].xptr.chk:                 13 ; 0x4f7: 0x0d
kfffde[11].xptr.au:                1316 ; 0x4f8: 0x00000524
kfffde[11].xptr.disk:                 0 ; 0x4fc: 0x0000
kfffde[11].xptr.flags:                0 ; 0x4fe: L=0 E=0 D=0 S=0
kfffde[11].xptr.chk:                 11 ; 0x4ff: 0x0b

The output above confirms that AU 76 of disk 3 (/dev/raw/raw10), AU 76 of disk 1 (/dev/raw/raw4) and AU 77 of disk 2 (/dev/raw/raw3) store identical information. The data also shows that the file directory metadata structure consists of three parts:
1. The kfbh fields
These confirm this is an ASM file directory block (kfbh.type=KFBTYP_FILEDIR) that describes file 263 (kfbh.block.blk=263).
2. The kfffdb fields, which include:
File incarnation number (kfffdb.node.incarn=930413057), part of the file name (file 263, USERS.263.930413057)
File size in bytes (kfffdb.lobytes=5251072)
Physical extent count (kfffdb.xtntcnt=12), the number of physical extents allocated to the file
File block size in bytes (kfffdb.blkSize=8192)
File type (kfffdb.fileType=2), i.e. a database data file
3. The kfffde fields, the physical extent map; this part matches the result queried from X$KFFXP.
For example, physical extent 0 is at AU 1309 (kfffde[0].xptr.au=1309) on disk 3 (kfffde[0].xptr.disk=3), physical extent 1 is at AU 1309 (kfffde[1].xptr.au=1309) on disk 2 (kfffde[1].xptr.disk=2),
physical extent 2 is at AU 1310 (kfffde[2].xptr.au=1310) on disk 2 (kfffde[2].xptr.disk=2), and so on.
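As a sanity check, the kfffde entries printed by kfed can be compared directly against the X$KFFXP rows. A minimal sketch, with both lists transcribed by hand from the output above:

```python
# (au, disk) pairs from the kfffde[n].xptr fields in the kfed dump of file 263
kfed_extents = [(1309, 3), (1309, 2), (1310, 2), (1310, 3), (1310, 1), (1314, 0),
                (1315, 0), (1311, 2), (1311, 3), (1311, 1), (1312, 2), (1316, 0)]

# (au, disk) pairs in physical-extent order from the X$KFFXP query for file 263
xkffxp_extents = [(1309, 3), (1309, 2), (1310, 2), (1310, 3), (1310, 1), (1314, 0),
                  (1315, 0), (1311, 2), (1311, 3), (1311, 1), (1312, 2), (1316, 0)]

# The file directory's extent map and the fixed view agree entry for entry.
assert kfed_extents == xkffxp_extents
print("file directory extent map matches X$KFFXP")
```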

File directory entries for control files
Query the database control file:

SQL>  select name "file",block_size "block size",block_size*(file_size_blks+1) "file size" from v$controlfile;

file                                               block size  file size
-------------------------------------------------- ---------- ----------
+DATADG/jyrac/controlfile/current.257.930412709         16384   18595840
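The "file size" column in the query is computed as block_size*(file_size_blks+1); the extra block presumably accounts for the control file header, which is an assumption here, not something the view itself states. A quick arithmetic check of the values returned:

```python
block_size = 16384
file_size = 18595840  # bytes, as reported for current.257.930412709

# Recover FILE_SIZE_BLKS from the reported size.
file_size_blks = file_size // block_size - 1
print(file_size_blks)  # 1134 data blocks plus one assumed header block

assert block_size * (file_size_blks + 1) == file_size
```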

Next, let's look at the file directory entry for file 257 (current.257.930412709). First, query X$KFFXP to get the file's extent and AU distribution:

SQL> select xnum_kffxp "virtual extent",
  2  pxn_kffxp "physical extent",
  3  au_kffxp "allocation unit",
  4  disk_kffxp "disk"
  5  from x$kffxp
  6  where group_kffxp=3 
  7  and number_kffxp=257 
  8  and xnum_kffxp <> 2147483648
  9  order by 1, 2;

virtual extent physical extent allocation unit       disk
-------------- --------------- --------------- ----------
             0               0              78          1
             0               1              78          2
             0               2              77          3
             1               3              78          3
             1               4              79          1
             1               5              77          0
             2               6              79          2
             2               7              79          3
             2               8              80          1
             3               9              78          0
             3              10              80          2
             3              11              81          1
             4              12              82          1
             4              13              79          0
             4              14              81          2
             5              15              80          3
             5              16              82          2
             5              17              83          1
             6              18              83          2
             6              19              80          0
             6              20              81          3
             7              21              81          0
             7              22              82          3
             7              23              84          2
             8              24              84          1
             8              25              83          3
             8              26              82          0
             9              27              84          3
             9              28              83          0
             9              29              85          2
            10              30              86          2
            10              31              85          1
            10              32              84          0
            11              33              85          0
            11              34              86          1
            11              35              85          3
            12              36              87          1
            12              37              87          2
            12              38              86          3
            13              39              87          3
            13              40              88          1
            13              41              86          0
            14              42              88          2
            14              43              88          3
            14              44              89          1
            15              45              87          0
            15              46              89          2
            15              47              90          1
            16              48              91          1
            16              49              88          0
            16              50              90          2
            17              51              89          3
            17              52              91          2
            17              53              92          1
            18              54              92          2
            18              55              89          0
            18              56              90          3
            19              57              90          0
            19              58              91          3
            19              59              93          2
            20              60              94          1
            20              61              92          3
            20              62              92          0
            21              63              93          3
            21              64              93          0
            21              65              95          2
            22              66              96          2
            22              67              95          1
            22              68              94          0
            23              69              95          0
            23              70              96          1
            23              71              94          3

72 rows selected.

We can see that the instance allocated 24 virtual extents for this file, and that the file has 3-way (high) redundancy. Next, query the numbers and paths of the disks in the DATADG disk group.

SQL> select disk_number, path from v$asm_disk where group_number=3 order by 1;

DISK_NUMBER PATH
----------- ------------------------------
          0 /dev/raw/raw11
          1 /dev/raw/raw4
          2 /dev/raw/raw3
          3 /dev/raw/raw10

The extent distribution of the ASM file directory (file 1) queried earlier is as follows:

SQL> select xnum_kffxp "virtual extent",pxn_kffxp "physical extent",au_kffxp "allocation unit",disk_kffxp "disk"
  2  from x$kffxp
  3  where group_kffxp=3 and number_kffxp=1 order by 1, 2;

virtual extent physical extent allocation unit       disk
-------------- --------------- --------------- ----------
             0               0               2          0
             0               1               2          2
             0               2               2          1
             1               3              76          3
             1               4              77          2
             1               5              76          1

6 rows selected.

Now we use the kfed tool to examine this file's ASM file directory entry. It lives in block 257 of the file directory, i.e. block 1 of the file directory's virtual extent 1 (257 minus 256 gives 1). Extent 1 is located in AU 76 of disk 3, with one mirror copy in AU 76 of disk 1 and another in AU 77 of disk 2. Let's look at AU 76 of disk 3 (/dev/raw/raw10).

[grid@jyrac1 ~]$ kfed read /dev/raw/raw10 aun=76 blkn=1 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            4 ; 0x002: KFBTYP_FILEDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                     257 ; 0x004: blk=257
kfbh.block.obj:                       1 ; 0x008: file=1
kfbh.check:                  1835729298 ; 0x00c: 0x6d6b0192
kfbh.fcn.base:                     3723 ; 0x010: 0x00000e8b
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfffdb.node.incarn:           930412709 ; 0x000: A=1 NUMM=0x1bba7c52
kfffdb.node.frlist.number:   4294967295 ; 0x004: 0xffffffff
kfffdb.node.frlist.incarn:            0 ; 0x008: A=0 NUMM=0x0
kfffdb.hibytes:                       0 ; 0x00c: 0x00000000
kfffdb.lobytes:                18595840 ; 0x010: 0x011bc000
kfffdb.xtntcnt:                      72 ; 0x014: 0x00000048
kfffdb.xtnteof:                      72 ; 0x018: 0x00000048
kfffdb.blkSize:                   16384 ; 0x01c: 0x00004000
kfffdb.flags:                        19 ; 0x020: O=1 S=1 S=0 D=0 C=1 I=0 R=0 A=0
kfffdb.fileType:                      1 ; 0x021: 0x01
kfffdb.dXrs:                         19 ; 0x022: SCHE=0x1 NUMB=0x3
kfffdb.iXrs:                         19 ; 0x023: SCHE=0x1 NUMB=0x3
kfffdb.dXsiz[0]:             4294967295 ; 0x024: 0xffffffff
kfffdb.dXsiz[1]:                      0 ; 0x028: 0x00000000
kfffdb.dXsiz[2]:                      0 ; 0x02c: 0x00000000
kfffdb.iXsiz[0]:             4294967295 ; 0x030: 0xffffffff
kfffdb.iXsiz[1]:                      0 ; 0x034: 0x00000000
kfffdb.iXsiz[2]:                      0 ; 0x038: 0x00000000
kfffdb.xtntblk:                      63 ; 0x03c: 0x003f
kfffdb.break:                        60 ; 0x03e: 0x003c
kfffdb.priZn:                         0 ; 0x040: KFDZN_COLD
kfffdb.secZn:                         0 ; 0x041: KFDZN_COLD
kfffdb.ub2spare:                      0 ; 0x042: 0x0000
kfffdb.alias[0]:                    159 ; 0x044: 0x0000009f
kfffdb.alias[1]:             4294967295 ; 0x048: 0xffffffff
kfffdb.strpwdth:                      8 ; 0x04c: 0x08
kfffdb.strpsz:                       17 ; 0x04d: 0x11
kfffdb.usmsz:                         0 ; 0x04e: 0x0000
kfffdb.crets.hi:               33042831 ; 0x050: HOUR=0xf DAYS=0xc MNTH=0xc YEAR=0x7e0
kfffdb.crets.lo:             3922781184 ; 0x054: USEC=0x0 MSEC=0x39 SECS=0x1d MINS=0x3a
kfffdb.modts.hi:               33042902 ; 0x058: HOUR=0x16 DAYS=0xe MNTH=0xc YEAR=0x7e0
kfffdb.modts.lo:                      0 ; 0x05c: USEC=0x0 MSEC=0x0 SECS=0x0 MINS=0x0
kfffdb.dasz[0]:                       0 ; 0x060: 0x00
kfffdb.dasz[1]:                       0 ; 0x061: 0x00
kfffdb.dasz[2]:                       0 ; 0x062: 0x00
kfffdb.dasz[3]:                       0 ; 0x063: 0x00
kfffdb.permissn:                      0 ; 0x064: 0x00
kfffdb.ub1spar1:                      0 ; 0x065: 0x00
kfffdb.ub2spar2:                      0 ; 0x066: 0x0000
kfffdb.user.entnum:                   0 ; 0x068: 0x0000
kfffdb.user.entinc:                   0 ; 0x06a: 0x0000
kfffdb.group.entnum:                  0 ; 0x06c: 0x0000
kfffdb.group.entinc:                  0 ; 0x06e: 0x0000
kfffdb.spare[0]:                      0 ; 0x070: 0x00000000
kfffdb.spare[1]:                      0 ; 0x074: 0x00000000
kfffdb.spare[2]:                      0 ; 0x078: 0x00000000
kfffdb.spare[3]:                      0 ; 0x07c: 0x00000000
kfffdb.spare[4]:                      0 ; 0x080: 0x00000000
kfffdb.spare[5]:                      0 ; 0x084: 0x00000000
kfffdb.spare[6]:                      0 ; 0x088: 0x00000000
kfffdb.spare[7]:                      0 ; 0x08c: 0x00000000
kfffdb.spare[8]:                      0 ; 0x090: 0x00000000
kfffdb.spare[9]:                      0 ; 0x094: 0x00000000
kfffdb.spare[10]:                     0 ; 0x098: 0x00000000
kfffdb.spare[11]:                     0 ; 0x09c: 0x00000000
kfffdb.usm:                             ; 0x0a0: length=0
kfffde[0].xptr.au:                   78 ; 0x4a0: 0x0000004e
kfffde[0].xptr.disk:                  1 ; 0x4a4: 0x0001
kfffde[0].xptr.flags:                 0 ; 0x4a6: L=0 E=0 D=0 S=0
kfffde[0].xptr.chk:                 101 ; 0x4a7: 0x65
kfffde[1].xptr.au:                   78 ; 0x4a8: 0x0000004e
kfffde[1].xptr.disk:                  2 ; 0x4ac: 0x0002
kfffde[1].xptr.flags:                 0 ; 0x4ae: L=0 E=0 D=0 S=0
kfffde[1].xptr.chk:                 102 ; 0x4af: 0x66
kfffde[2].xptr.au:                   77 ; 0x4b0: 0x0000004d
kfffde[2].xptr.disk:                  3 ; 0x4b4: 0x0003
kfffde[2].xptr.flags:                 0 ; 0x4b6: L=0 E=0 D=0 S=0
kfffde[2].xptr.chk:                 100 ; 0x4b7: 0x64
kfffde[3].xptr.au:                   78 ; 0x4b8: 0x0000004e
kfffde[3].xptr.disk:                  3 ; 0x4bc: 0x0003
kfffde[3].xptr.flags:                 0 ; 0x4be: L=0 E=0 D=0 S=0
kfffde[3].xptr.chk:                 103 ; 0x4bf: 0x67
kfffde[4].xptr.au:                   79 ; 0x4c0: 0x0000004f
kfffde[4].xptr.disk:                  1 ; 0x4c4: 0x0001
kfffde[4].xptr.flags:                 0 ; 0x4c6: L=0 E=0 D=0 S=0
kfffde[4].xptr.chk:                 100 ; 0x4c7: 0x64
kfffde[5].xptr.au:                   77 ; 0x4c8: 0x0000004d
kfffde[5].xptr.disk:                  0 ; 0x4cc: 0x0000
kfffde[5].xptr.flags:                 0 ; 0x4ce: L=0 E=0 D=0 S=0
kfffde[5].xptr.chk:                 103 ; 0x4cf: 0x67
kfffde[6].xptr.au:                   79 ; 0x4d0: 0x0000004f
kfffde[6].xptr.disk:                  2 ; 0x4d4: 0x0002
kfffde[6].xptr.flags:                 0 ; 0x4d6: L=0 E=0 D=0 S=0
kfffde[6].xptr.chk:                 103 ; 0x4d7: 0x67
kfffde[7].xptr.au:                   79 ; 0x4d8: 0x0000004f
kfffde[7].xptr.disk:                  3 ; 0x4dc: 0x0003
kfffde[7].xptr.flags:                 0 ; 0x4de: L=0 E=0 D=0 S=0
kfffde[7].xptr.chk:                 102 ; 0x4df: 0x66
kfffde[8].xptr.au:                   80 ; 0x4e0: 0x00000050
kfffde[8].xptr.disk:                  1 ; 0x4e4: 0x0001
kfffde[8].xptr.flags:                 0 ; 0x4e6: L=0 E=0 D=0 S=0
kfffde[8].xptr.chk:                 123 ; 0x4e7: 0x7b
kfffde[9].xptr.au:                   78 ; 0x4e8: 0x0000004e
kfffde[9].xptr.disk:                  0 ; 0x4ec: 0x0000
kfffde[9].xptr.flags:                 0 ; 0x4ee: L=0 E=0 D=0 S=0
kfffde[9].xptr.chk:                 100 ; 0x4ef: 0x64
kfffde[10].xptr.au:                  80 ; 0x4f0: 0x00000050
kfffde[10].xptr.disk:                 2 ; 0x4f4: 0x0002
kfffde[10].xptr.flags:                0 ; 0x4f6: L=0 E=0 D=0 S=0
kfffde[10].xptr.chk:                120 ; 0x4f7: 0x78
kfffde[11].xptr.au:                  81 ; 0x4f8: 0x00000051
kfffde[11].xptr.disk:                 1 ; 0x4fc: 0x0001
kfffde[11].xptr.flags:                0 ; 0x4fe: L=0 E=0 D=0 S=0
kfffde[11].xptr.chk:                122 ; 0x4ff: 0x7a
kfffde[12].xptr.au:                  82 ; 0x500: 0x00000052
kfffde[12].xptr.disk:                 1 ; 0x504: 0x0001
kfffde[12].xptr.flags:                0 ; 0x506: L=0 E=0 D=0 S=0
kfffde[12].xptr.chk:                121 ; 0x507: 0x79
kfffde[13].xptr.au:                  79 ; 0x508: 0x0000004f
kfffde[13].xptr.disk:                 0 ; 0x50c: 0x0000
kfffde[13].xptr.flags:                0 ; 0x50e: L=0 E=0 D=0 S=0
kfffde[13].xptr.chk:                101 ; 0x50f: 0x65
kfffde[14].xptr.au:                  81 ; 0x510: 0x00000051
kfffde[14].xptr.disk:                 2 ; 0x514: 0x0002
kfffde[14].xptr.flags:                0 ; 0x516: L=0 E=0 D=0 S=0
kfffde[14].xptr.chk:                121 ; 0x517: 0x79
kfffde[15].xptr.au:                  80 ; 0x518: 0x00000050
kfffde[15].xptr.disk:                 3 ; 0x51c: 0x0003
kfffde[15].xptr.flags:                0 ; 0x51e: L=0 E=0 D=0 S=0
kfffde[15].xptr.chk:                121 ; 0x51f: 0x79
kfffde[16].xptr.au:                  82 ; 0x520: 0x00000052
kfffde[16].xptr.disk:                 2 ; 0x524: 0x0002
kfffde[16].xptr.flags:                0 ; 0x526: L=0 E=0 D=0 S=0
kfffde[16].xptr.chk:                122 ; 0x527: 0x7a
kfffde[17].xptr.au:                  83 ; 0x528: 0x00000053
kfffde[17].xptr.disk:                 1 ; 0x52c: 0x0001
kfffde[17].xptr.flags:                0 ; 0x52e: L=0 E=0 D=0 S=0
kfffde[17].xptr.chk:                120 ; 0x52f: 0x78
kfffde[18].xptr.au:                  83 ; 0x530: 0x00000053
kfffde[18].xptr.disk:                 2 ; 0x534: 0x0002
kfffde[18].xptr.flags:                0 ; 0x536: L=0 E=0 D=0 S=0
kfffde[18].xptr.chk:                123 ; 0x537: 0x7b
kfffde[19].xptr.au:                  80 ; 0x538: 0x00000050
kfffde[19].xptr.disk:                 0 ; 0x53c: 0x0000
kfffde[19].xptr.flags:                0 ; 0x53e: L=0 E=0 D=0 S=0
kfffde[19].xptr.chk:                122 ; 0x53f: 0x7a
kfffde[20].xptr.au:                  81 ; 0x540: 0x00000051
kfffde[20].xptr.disk:                 3 ; 0x544: 0x0003
kfffde[20].xptr.flags:                0 ; 0x546: L=0 E=0 D=0 S=0
kfffde[20].xptr.chk:                120 ; 0x547: 0x78
kfffde[21].xptr.au:                  81 ; 0x548: 0x00000051
kfffde[21].xptr.disk:                 0 ; 0x54c: 0x0000
kfffde[21].xptr.flags:                0 ; 0x54e: L=0 E=0 D=0 S=0
kfffde[21].xptr.chk:                123 ; 0x54f: 0x7b
kfffde[22].xptr.au:                  82 ; 0x550: 0x00000052
kfffde[22].xptr.disk:                 3 ; 0x554: 0x0003
kfffde[22].xptr.flags:                0 ; 0x556: L=0 E=0 D=0 S=0
kfffde[22].xptr.chk:                123 ; 0x557: 0x7b
kfffde[23].xptr.au:                  84 ; 0x558: 0x00000054
kfffde[23].xptr.disk:                 2 ; 0x55c: 0x0002
kfffde[23].xptr.flags:                0 ; 0x55e: L=0 E=0 D=0 S=0
kfffde[23].xptr.chk:                124 ; 0x55f: 0x7c
kfffde[24].xptr.au:                  84 ; 0x560: 0x00000054
kfffde[24].xptr.disk:                 1 ; 0x564: 0x0001
kfffde[24].xptr.flags:                0 ; 0x566: L=0 E=0 D=0 S=0
kfffde[24].xptr.chk:                127 ; 0x567: 0x7f
kfffde[25].xptr.au:                  83 ; 0x568: 0x00000053
kfffde[25].xptr.disk:                 3 ; 0x56c: 0x0003
kfffde[25].xptr.flags:                0 ; 0x56e: L=0 E=0 D=0 S=0
kfffde[25].xptr.chk:                122 ; 0x56f: 0x7a
kfffde[26].xptr.au:                  82 ; 0x570: 0x00000052
kfffde[26].xptr.disk:                 0 ; 0x574: 0x0000
kfffde[26].xptr.flags:                0 ; 0x576: L=0 E=0 D=0 S=0
kfffde[26].xptr.chk:                120 ; 0x577: 0x78
kfffde[27].xptr.au:                  84 ; 0x578: 0x00000054
kfffde[27].xptr.disk:                 3 ; 0x57c: 0x0003
kfffde[27].xptr.flags:                0 ; 0x57e: L=0 E=0 D=0 S=0
kfffde[27].xptr.chk:                125 ; 0x57f: 0x7d
kfffde[28].xptr.au:                  83 ; 0x580: 0x00000053
kfffde[28].xptr.disk:                 0 ; 0x584: 0x0000
kfffde[28].xptr.flags:                0 ; 0x586: L=0 E=0 D=0 S=0
kfffde[28].xptr.chk:                121 ; 0x587: 0x79
kfffde[29].xptr.au:                  85 ; 0x588: 0x00000055
kfffde[29].xptr.disk:                 2 ; 0x58c: 0x0002
kfffde[29].xptr.flags:                0 ; 0x58e: L=0 E=0 D=0 S=0
kfffde[29].xptr.chk:                125 ; 0x58f: 0x7d
kfffde[30].xptr.au:                  86 ; 0x590: 0x00000056
kfffde[30].xptr.disk:                 2 ; 0x594: 0x0002
kfffde[30].xptr.flags:                0 ; 0x596: L=0 E=0 D=0 S=0
kfffde[30].xptr.chk:                126 ; 0x597: 0x7e
kfffde[31].xptr.au:                  85 ; 0x598: 0x00000055
kfffde[31].xptr.disk:                 1 ; 0x59c: 0x0001
kfffde[31].xptr.flags:                0 ; 0x59e: L=0 E=0 D=0 S=0
kfffde[31].xptr.chk:                126 ; 0x59f: 0x7e
kfffde[32].xptr.au:                  84 ; 0x5a0: 0x00000054
kfffde[32].xptr.disk:                 0 ; 0x5a4: 0x0000
kfffde[32].xptr.flags:                0 ; 0x5a6: L=0 E=0 D=0 S=0
kfffde[32].xptr.chk:                126 ; 0x5a7: 0x7e
kfffde[33].xptr.au:                  85 ; 0x5a8: 0x00000055
kfffde[33].xptr.disk:                 0 ; 0x5ac: 0x0000
kfffde[33].xptr.flags:                0 ; 0x5ae: L=0 E=0 D=0 S=0
kfffde[33].xptr.chk:                127 ; 0x5af: 0x7f
kfffde[34].xptr.au:                  86 ; 0x5b0: 0x00000056
kfffde[34].xptr.disk:                 1 ; 0x5b4: 0x0001
kfffde[34].xptr.flags:                0 ; 0x5b6: L=0 E=0 D=0 S=0
kfffde[34].xptr.chk:                125 ; 0x5b7: 0x7d
kfffde[35].xptr.au:                  85 ; 0x5b8: 0x00000055
kfffde[35].xptr.disk:                 3 ; 0x5bc: 0x0003
kfffde[35].xptr.flags:                0 ; 0x5be: L=0 E=0 D=0 S=0
kfffde[35].xptr.chk:                124 ; 0x5bf: 0x7c
kfffde[36].xptr.au:                  87 ; 0x5c0: 0x00000057
kfffde[36].xptr.disk:                 1 ; 0x5c4: 0x0001
kfffde[36].xptr.flags:                0 ; 0x5c6: L=0 E=0 D=0 S=0
kfffde[36].xptr.chk:                124 ; 0x5c7: 0x7c
kfffde[37].xptr.au:                  87 ; 0x5c8: 0x00000057
kfffde[37].xptr.disk:                 2 ; 0x5cc: 0x0002
kfffde[37].xptr.flags:                0 ; 0x5ce: L=0 E=0 D=0 S=0
kfffde[37].xptr.chk:                127 ; 0x5cf: 0x7f
kfffde[38].xptr.au:                  86 ; 0x5d0: 0x00000056
kfffde[38].xptr.disk:                 3 ; 0x5d4: 0x0003
kfffde[38].xptr.flags:                0 ; 0x5d6: L=0 E=0 D=0 S=0
kfffde[38].xptr.chk:                127 ; 0x5d7: 0x7f
kfffde[39].xptr.au:                  87 ; 0x5d8: 0x00000057
kfffde[39].xptr.disk:                 3 ; 0x5dc: 0x0003
kfffde[39].xptr.flags:                0 ; 0x5de: L=0 E=0 D=0 S=0
kfffde[39].xptr.chk:                126 ; 0x5df: 0x7e
kfffde[40].xptr.au:                  88 ; 0x5e0: 0x00000058
kfffde[40].xptr.disk:                 1 ; 0x5e4: 0x0001
kfffde[40].xptr.flags:                0 ; 0x5e6: L=0 E=0 D=0 S=0
kfffde[40].xptr.chk:                115 ; 0x5e7: 0x73
kfffde[41].xptr.au:                  86 ; 0x5e8: 0x00000056
kfffde[41].xptr.disk:                 0 ; 0x5ec: 0x0000
kfffde[41].xptr.flags:                0 ; 0x5ee: L=0 E=0 D=0 S=0
kfffde[41].xptr.chk:                124 ; 0x5ef: 0x7c
kfffde[42].xptr.au:                  88 ; 0x5f0: 0x00000058
kfffde[42].xptr.disk:                 2 ; 0x5f4: 0x0002
kfffde[42].xptr.flags:                0 ; 0x5f6: L=0 E=0 D=0 S=0
kfffde[42].xptr.chk:                112 ; 0x5f7: 0x70
kfffde[43].xptr.au:                  88 ; 0x5f8: 0x00000058
kfffde[43].xptr.disk:                 3 ; 0x5fc: 0x0003
kfffde[43].xptr.flags:                0 ; 0x5fe: L=0 E=0 D=0 S=0
kfffde[43].xptr.chk:                113 ; 0x5ff: 0x71
kfffde[44].xptr.au:                  89 ; 0x600: 0x00000059
kfffde[44].xptr.disk:                 1 ; 0x604: 0x0001
kfffde[44].xptr.flags:                0 ; 0x606: L=0 E=0 D=0 S=0
kfffde[44].xptr.chk:                114 ; 0x607: 0x72
kfffde[45].xptr.au:                  87 ; 0x608: 0x00000057
kfffde[45].xptr.disk:                 0 ; 0x60c: 0x0000
kfffde[45].xptr.flags:                0 ; 0x60e: L=0 E=0 D=0 S=0
kfffde[45].xptr.chk:                125 ; 0x60f: 0x7d
kfffde[46].xptr.au:                  89 ; 0x610: 0x00000059
kfffde[46].xptr.disk:                 2 ; 0x614: 0x0002
kfffde[46].xptr.flags:                0 ; 0x616: L=0 E=0 D=0 S=0
kfffde[46].xptr.chk:                113 ; 0x617: 0x71
kfffde[47].xptr.au:                  90 ; 0x618: 0x0000005a
kfffde[47].xptr.disk:                 1 ; 0x61c: 0x0001
kfffde[47].xptr.flags:                0 ; 0x61e: L=0 E=0 D=0 S=0
kfffde[47].xptr.chk:                113 ; 0x61f: 0x71
kfffde[48].xptr.au:                  91 ; 0x620: 0x0000005b
kfffde[48].xptr.disk:                 1 ; 0x624: 0x0001
kfffde[48].xptr.flags:                0 ; 0x626: L=0 E=0 D=0 S=0
kfffde[48].xptr.chk:                112 ; 0x627: 0x70
kfffde[49].xptr.au:                  88 ; 0x628: 0x00000058
kfffde[49].xptr.disk:                 0 ; 0x62c: 0x0000
kfffde[49].xptr.flags:                0 ; 0x62e: L=0 E=0 D=0 S=0
kfffde[49].xptr.chk:                114 ; 0x62f: 0x72
kfffde[50].xptr.au:                  90 ; 0x630: 0x0000005a
kfffde[50].xptr.disk:                 2 ; 0x634: 0x0002
kfffde[50].xptr.flags:                0 ; 0x636: L=0 E=0 D=0 S=0
kfffde[50].xptr.chk:                114 ; 0x637: 0x72
kfffde[51].xptr.au:                  89 ; 0x638: 0x00000059
kfffde[51].xptr.disk:                 3 ; 0x63c: 0x0003
kfffde[51].xptr.flags:                0 ; 0x63e: L=0 E=0 D=0 S=0
kfffde[51].xptr.chk:                112 ; 0x63f: 0x70
kfffde[52].xptr.au:                  91 ; 0x640: 0x0000005b
kfffde[52].xptr.disk:                 2 ; 0x644: 0x0002
kfffde[52].xptr.flags:                0 ; 0x646: L=0 E=0 D=0 S=0
kfffde[52].xptr.chk:                115 ; 0x647: 0x73
kfffde[53].xptr.au:                  92 ; 0x648: 0x0000005c
kfffde[53].xptr.disk:                 1 ; 0x64c: 0x0001
kfffde[53].xptr.flags:                0 ; 0x64e: L=0 E=0 D=0 S=0
kfffde[53].xptr.chk:                119 ; 0x64f: 0x77
kfffde[54].xptr.au:                  92 ; 0x650: 0x0000005c
kfffde[54].xptr.disk:                 2 ; 0x654: 0x0002
kfffde[54].xptr.flags:                0 ; 0x656: L=0 E=0 D=0 S=0
kfffde[54].xptr.chk:                116 ; 0x657: 0x74
kfffde[55].xptr.au:                  89 ; 0x658: 0x00000059
kfffde[55].xptr.disk:                 0 ; 0x65c: 0x0000
kfffde[55].xptr.flags:                0 ; 0x65e: L=0 E=0 D=0 S=0
kfffde[55].xptr.chk:                115 ; 0x65f: 0x73
kfffde[56].xptr.au:                  90 ; 0x660: 0x0000005a
kfffde[56].xptr.disk:                 3 ; 0x664: 0x0003
kfffde[56].xptr.flags:                0 ; 0x666: L=0 E=0 D=0 S=0
kfffde[56].xptr.chk:                115 ; 0x667: 0x73
kfffde[57].xptr.au:                  90 ; 0x668: 0x0000005a
kfffde[57].xptr.disk:                 0 ; 0x66c: 0x0000
kfffde[57].xptr.flags:                0 ; 0x66e: L=0 E=0 D=0 S=0
kfffde[57].xptr.chk:                112 ; 0x66f: 0x70
kfffde[58].xptr.au:                  91 ; 0x670: 0x0000005b
kfffde[58].xptr.disk:                 3 ; 0x674: 0x0003
kfffde[58].xptr.flags:                0 ; 0x676: L=0 E=0 D=0 S=0
kfffde[58].xptr.chk:                114 ; 0x677: 0x72
kfffde[59].xptr.au:                  93 ; 0x678: 0x0000005d
kfffde[59].xptr.disk:                 2 ; 0x67c: 0x0002
kfffde[59].xptr.flags:                0 ; 0x67e: L=0 E=0 D=0 S=0
kfffde[59].xptr.chk:                117 ; 0x67f: 0x75
kfffde[60].xptr.au:                  93 ; 0x680: 0x0000005d
kfffde[60].xptr.disk:                 1 ; 0x684: 0x0001
kfffde[60].xptr.flags:                0 ; 0x686: L=0 E=0 D=0 S=0
kfffde[60].xptr.chk:                118 ; 0x687: 0x76
kfffde[61].xptr.au:                  91 ; 0x688: 0x0000005b
kfffde[61].xptr.disk:                 0 ; 0x68c: 0x0000
kfffde[61].xptr.flags:                0 ; 0x68e: L=0 E=0 D=0 S=0
kfffde[61].xptr.chk:                113 ; 0x68f: 0x71
kfffde[62].xptr.au:                  94 ; 0x690: 0x0000005e
kfffde[62].xptr.disk:                 2 ; 0x694: 0x0002
kfffde[62].xptr.flags:                0 ; 0x696: L=0 E=0 D=0 S=0
kfffde[62].xptr.chk:                118 ; 0x697: 0x76

From the output above we can see that the file directory metadata is made up of three parts:
1. The kfbh fields
These confirm that this is an ASM file directory block (kfbh.type=KFBTYP_FILEDIR) and that it describes file 257 (kfbh.block.blk=257; for the file directory, the block number equals the file number).
2. The kfffdb fields, which include:
File incarnation number (kfffdb.node.incarn=930412709), which is part of the file name (file 257, i.e. current.257.930412709)
File size in bytes (kfffdb.lobytes=18595840)
Physical extent count (kfffdb.xtntcnt=72), the number of physical extents allocated to the file
File block size in bytes (kfffdb.blkSize=16384)
File type (kfffdb.fileType=1), i.e. the database control file
3. The kfffde fields, which hold the physical extent map; this output matches the result of querying X$KFFXP.
For example, physical extent 0 is on AU 78 of disk 1 (kfffde[0].xptr.au=78, kfffde[0].xptr.disk=1), physical extent 1 is on AU 78 of disk 2 (kfffde[1].xptr.au=78, kfffde[1].xptr.disk=2),
physical extent 2 is on AU 77 of disk 3 (kfffde[2].xptr.au=77, kfffde[2].xptr.disk=3), and so on.
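The creation and modification timestamps (kfffdb.crets and kfffdb.modts) are packed into bit fields, as the HOUR/DAYS/MNTH/YEAR and USEC/MSEC/SECS/MINS annotations in the kfed output suggest. The following is a minimal decoding sketch; the bit layout is inferred from those kfed annotations, not from any published Oracle specification:

```python
def decode_asm_ts(hi: int, lo: int):
    """Decode a packed kfffdb.crets/modts timestamp pair.

    Bit layout inferred from kfed's annotations:
    hi = YEAR(12) | MNTH(4) | DAYS(5) | HOUR(5)
    lo = MINS(6) | SECS(6) | MSEC(10) | USEC(10)
    Returns (year, month, day, hour, minute, second, microsecond).
    """
    year = (hi >> 14) & 0xFFF
    month = (hi >> 10) & 0xF
    day = (hi >> 5) & 0x1F
    hour = hi & 0x1F
    mins = (lo >> 26) & 0x3F
    secs = (lo >> 20) & 0x3F
    msec = (lo >> 10) & 0x3FF
    usec = lo & 0x3FF
    return (year, month, day, hour, mins, secs, msec * 1000 + usec)

# kfffdb.crets from the dump above (hi=33042831, lo=3922781184)
print(decode_asm_ts(33042831, 3922781184))
```

Decoding kfffdb.crets above yields 2016-12-12 15:58:29.057, the file's creation time, which matches kfed's own field annotations (YEAR=0x7e0, MNTH=0xc, DAYS=0xc, HOUR=0xf, MINS=0x3a, SECS=0x1d, MSEC=0x39).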

File directory entries for large files
In this post, a large file means one with more than 60 extents. First, let's find some large files in the database:

SQL> SELECT name, bytes/1024/1024 "Size (MB)"
  2  FROM v$datafile;

NAME                                                Size (MB)
-------------------------------------------------- ----------
+DATADG/jyrac/datafile/system.259.930413057               760
+DATADG/jyrac/datafile/sysaux.258.930413055              1370
+DATADG/jyrac/datafile/undotbs1.262.930413057             100
+DATADG/jyrac/datafile/users.263.930413057                  5
+DATADG/jyrac/datafile/example.260.930413057           346.25
+DATADG/jyrac/datafile/undotbs2.261.930413057             150
+DATADG/jyrac/datafile/test01.dbf                         100

7 rows selected.

Directly addressed extents
Take the system tablespace datafile as an example and look at its file directory entry. The file number is 259 and the file size is 760MB.

SQL> select xnum_kffxp "extent", au_kffxp "au", disk_kffxp "disk"
  2  from x$kffxp
  3  where group_kffxp=3 and number_kffxp=259 and xnum_kffxp <> 2147483648
  4  order by 1,2;

    extent         au       disk
---------- ---------- ----------
         0        628          0
         0        629          1
         1        626          3
         1        629          0
         2        627          3
         2        627          2
         3        628          3
         3        630          0
         4        628          2
         4        630          1
         5        629          3
......
---------- ---------- ----------
       759       1006          3
       759       1009          0
       760       1007          2
       760       1009          1

1522 rows selected.

We can see that the ASM instance allocated 1522 physical extents for this file. Now let's use kfed to examine the file's ASM file directory entry. It is in block 259 of the file directory, that is, block 3 of the directory's virtual extent 1 (259 minus 256 gives 3). Virtual extent 1 is on AU 76 of disk 3, with redundant copies on AU 76 of disk 1 and AU 77 of disk 2. Let's read block 3 of AU 76 on disk 3 (/dev/raw/raw10):
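The arithmetic above generalizes: with the default 1MB allocation unit and 4KB ASM metadata blocks, each virtual extent of the file directory holds 256 blocks, so a file number maps to its directory block by a simple divmod. A sketch under those default-size assumptions:

```python
AU_SIZE = 1024 * 1024                   # assumed default 1 MB allocation unit
METADATA_BLOCK_SIZE = 4096              # ASM metadata block size
BLOCKS_PER_AU = AU_SIZE // METADATA_BLOCK_SIZE  # 256 blocks per virtual extent

def filedir_location(file_number: int):
    """Return (virtual extent, block within that extent) of the file
    directory entry describing the given ASM file number."""
    return divmod(file_number, BLOCKS_PER_AU)

# File 259 -> block 3 of virtual extent 1 of the file directory
print(filedir_location(259))
```

The same rule places file 257's entry (the control file examined earlier) at block 1 of virtual extent 1.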

[grid@jyrac1 ~]$ kfed read /dev/raw/raw10 aun=76 blkn=3 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            4 ; 0x002: KFBTYP_FILEDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                     259 ; 0x004: blk=259
kfbh.block.obj:                       1 ; 0x008: file=1
kfbh.check:                  1713481479 ; 0x00c: 0x6621a707
kfbh.fcn.base:                     3712 ; 0x010: 0x00000e80
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfffdb.node.incarn:           930413057 ; 0x000: A=1 NUMM=0x1bba7d00
kfffdb.node.frlist.number:   4294967295 ; 0x004: 0xffffffff
kfffdb.node.frlist.incarn:            0 ; 0x008: A=0 NUMM=0x0
kfffdb.hibytes:                       0 ; 0x00c: 0x00000000
kfffdb.lobytes:               796925952 ; 0x010: 0x2f802000
kfffdb.xtntcnt:                    1522 ; 0x014: 0x000005f2
kfffdb.xtnteof:                    1522 ; 0x018: 0x000005f2
kfffdb.blkSize:                    8192 ; 0x01c: 0x00002000
kfffdb.flags:                        17 ; 0x020: O=1 S=0 S=0 D=0 C=1 I=0 R=0 A=0
kfffdb.fileType:                      2 ; 0x021: 0x02
kfffdb.dXrs:                         18 ; 0x022: SCHE=0x1 NUMB=0x2
kfffdb.iXrs:                         19 ; 0x023: SCHE=0x1 NUMB=0x3
kfffdb.dXsiz[0]:             4294967295 ; 0x024: 0xffffffff
kfffdb.dXsiz[1]:                      0 ; 0x028: 0x00000000
kfffdb.dXsiz[2]:                      0 ; 0x02c: 0x00000000
kfffdb.iXsiz[0]:             4294967295 ; 0x030: 0xffffffff
kfffdb.iXsiz[1]:                      0 ; 0x034: 0x00000000
kfffdb.iXsiz[2]:                      0 ; 0x038: 0x00000000
kfffdb.xtntblk:                      63 ; 0x03c: 0x003f
kfffdb.break:                        60 ; 0x03e: 0x003c
kfffdb.priZn:                         0 ; 0x040: KFDZN_COLD
kfffdb.secZn:                         0 ; 0x041: KFDZN_COLD
kfffdb.ub2spare:                      0 ; 0x042: 0x0000
kfffdb.alias[0]:                    107 ; 0x044: 0x0000006b
kfffdb.alias[1]:             4294967295 ; 0x048: 0xffffffff
kfffdb.strpwdth:                      1 ; 0x04c: 0x01
kfffdb.strpsz:                       20 ; 0x04d: 0x14
kfffdb.usmsz:                         0 ; 0x04e: 0x0000
kfffdb.crets.hi:               33042832 ; 0x050: HOUR=0x10 DAYS=0xc MNTH=0xc YEAR=0x7e0
kfffdb.crets.lo:              285262848 ; 0x054: USEC=0x0 MSEC=0x31 SECS=0x10 MINS=0x4
kfffdb.modts.hi:               33042897 ; 0x058: HOUR=0x11 DAYS=0xe MNTH=0xc YEAR=0x7e0
kfffdb.modts.lo:                      0 ; 0x05c: USEC=0x0 MSEC=0x0 SECS=0x0 MINS=0x0
kfffdb.dasz[0]:                       0 ; 0x060: 0x00
kfffdb.dasz[1]:                       0 ; 0x061: 0x00
kfffdb.dasz[2]:                       0 ; 0x062: 0x00
kfffdb.dasz[3]:                       0 ; 0x063: 0x00
kfffdb.permissn:                      0 ; 0x064: 0x00
kfffdb.ub1spar1:                      0 ; 0x065: 0x00
kfffdb.ub2spar2:                      0 ; 0x066: 0x0000
kfffdb.user.entnum:                   0 ; 0x068: 0x0000
kfffdb.user.entinc:                   0 ; 0x06a: 0x0000
kfffdb.group.entnum:                  0 ; 0x06c: 0x0000
kfffdb.group.entinc:                  0 ; 0x06e: 0x0000
kfffdb.spare[0]:                      0 ; 0x070: 0x00000000
kfffdb.spare[1]:                      0 ; 0x074: 0x00000000
kfffdb.spare[2]:                      0 ; 0x078: 0x00000000
kfffdb.spare[3]:                      0 ; 0x07c: 0x00000000
kfffdb.spare[4]:                      0 ; 0x080: 0x00000000
kfffdb.spare[5]:                      0 ; 0x084: 0x00000000
kfffdb.spare[6]:                      0 ; 0x088: 0x00000000
kfffdb.spare[7]:                      0 ; 0x08c: 0x00000000
kfffdb.spare[8]:                      0 ; 0x090: 0x00000000
kfffdb.spare[9]:                      0 ; 0x094: 0x00000000
kfffdb.spare[10]:                     0 ; 0x098: 0x00000000
kfffdb.spare[11]:                     0 ; 0x09c: 0x00000000
kfffdb.usm:                             ; 0x0a0: length=0
kfffde[0].xptr.au:                  629 ; 0x4a0: 0x00000275
kfffde[0].xptr.disk:                  1 ; 0x4a4: 0x0001
kfffde[0].xptr.flags:                 0 ; 0x4a6: L=0 E=0 D=0 S=0
kfffde[0].xptr.chk:                  92 ; 0x4a7: 0x5c
kfffde[1].xptr.au:                  628 ; 0x4a8: 0x00000274
kfffde[1].xptr.disk:                  0 ; 0x4ac: 0x0000
kfffde[1].xptr.flags:                 0 ; 0x4ae: L=0 E=0 D=0 S=0
kfffde[1].xptr.chk:                  92 ; 0x4af: 0x5c
kfffde[2].xptr.au:                  626 ; 0x4b0: 0x00000272
kfffde[2].xptr.disk:                  3 ; 0x4b4: 0x0003
kfffde[2].xptr.flags:                 0 ; 0x4b6: L=0 E=0 D=0 S=0
kfffde[2].xptr.chk:                  89 ; 0x4b7: 0x59
......
kfffde[58].xptr.au:                 641 ; 0x670: 0x00000281
kfffde[58].xptr.disk:                 3 ; 0x674: 0x0003
kfffde[58].xptr.flags:                0 ; 0x676: L=0 E=0 D=0 S=0
kfffde[58].xptr.chk:                170 ; 0x677: 0xaa
kfffde[59].xptr.au:                 643 ; 0x678: 0x00000283
kfffde[59].xptr.disk:                 1 ; 0x67c: 0x0001
kfffde[59].xptr.flags:                0 ; 0x67e: L=0 E=0 D=0 S=0
kfffde[59].xptr.chk:                170 ; 0x67f: 0xaa
kfffde[60].xptr.au:                 641 ; 0x680: 0x00000281
kfffde[60].xptr.disk:                 2 ; 0x684: 0x0002
kfffde[60].xptr.flags:                0 ; 0x686: L=0 E=0 D=0 S=0
kfffde[60].xptr.chk:                171 ; 0x687: 0xab
kfffde[61].xptr.au:                 643 ; 0x688: 0x00000283
kfffde[61].xptr.disk:                 0 ; 0x68c: 0x0000
kfffde[61].xptr.flags:                0 ; 0x68e: L=0 E=0 D=0 S=0
kfffde[61].xptr.chk:                171 ; 0x68f: 0xab
.....

Extents 0-59 (kfffde[0]-kfffde[59]) are called directly addressed extents, because they point directly at data extents. Extents numbered above 59 are called indirectly addressed extents, because the extents their pointers reference hold the map of the remaining extents.
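The lookup rule this describes can be sketched as follows: the first 60 physical extents are reached through direct kfffde slots, and higher-numbered extents are reached through pointers stored in indirect extents. This is a simplified model of the layout above (the real structure spreads the remaining pointers across multiple mirrored indirect blocks):

```python
DIRECT_EXTENTS = 60  # kfffdb.break in the dumps above

def extent_pointer_location(extent_no: int):
    """Return where the pointer for a physical extent lives: a direct
    kfffde slot for the first 60 extents, otherwise a slot inside an
    indirect extent (single indirect block assumed for simplicity)."""
    if extent_no < DIRECT_EXTENTS:
        return ("direct", extent_no)                  # kfffde[extent_no]
    return ("indirect", extent_no - DIRECT_EXTENTS)   # kffixe slot
```

For example, extent 59 is the last directly addressed extent, while extent 60 is the first one resolved through an indirect block.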

Indirectly addressed extents
Next, let's look at the contents of AU 641 (kfffde[60].xptr.au=641) on disk 2 (/dev/raw/raw3, kfffde[60].xptr.disk=2):

[grid@jyrac1 ~]$ kfed read /dev/raw/raw3 aun=641 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           12 ; 0x002: KFBTYP_INDIRECT
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:              2147483648 ; 0x004: blk=0 (indirect)
kfbh.block.obj:                     259 ; 0x008: file=259
kfbh.check:                  4179528366 ; 0x00c: 0xf91e8aae
kfbh.fcn.base:                     2090 ; 0x010: 0x0000082a
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kffixb.dxsn:                         30 ; 0x000: 0x0000001e
kffixb.xtntblk:                     480 ; 0x004: 0x01e0
kffixb.dXrs:                         18 ; 0x006: SCHE=0x1 NUMB=0x2
kffixb.ub1spare:                      0 ; 0x007: 0x00
kffixb.ub4spare:                      0 ; 0x008: 0x00000000
kffixe[0].xptr.au:                  642 ; 0x00c: 0x00000282
kffixe[0].xptr.disk:                  2 ; 0x010: 0x0002
kffixe[0].xptr.flags:                 0 ; 0x012: L=0 E=0 D=0 S=0
kffixe[0].xptr.chk:                 168 ; 0x013: 0xa8
kffixe[1].xptr.au:                  644 ; 0x014: 0x00000284
kffixe[1].xptr.disk:                  0 ; 0x018: 0x0000
kffixe[1].xptr.flags:                 0 ; 0x01a: L=0 E=0 D=0 S=0
kffixe[1].xptr.chk:                 172 ; 0x01b: 0xac
kffixe[2].xptr.au:                  645 ; 0x01c: 0x00000285
kffixe[2].xptr.disk:                  0 ; 0x020: 0x0000
kffixe[2].xptr.flags:                 0 ; 0x022: L=0 E=0 D=0 S=0
kffixe[2].xptr.chk:                 173 ; 0x023: 0xad
kffixe[3].xptr.au:                  645 ; 0x024: 0x00000285
kffixe[3].xptr.disk:                  1 ; 0x028: 0x0001
kffixe[3].xptr.flags:                 0 ; 0x02a: L=0 E=0 D=0 S=0
kffixe[3].xptr.chk:                 172 ; 0x02b: 0xac
kffixe[4].xptr.au:                  646 ; 0x02c: 0x00000286
kffixe[4].xptr.disk:                  1 ; 0x030: 0x0001
kffixe[4].xptr.flags:                 0 ; 0x032: L=0 E=0 D=0 S=0
kffixe[4].xptr.chk:                 175 ; 0x033: 0xaf
kffixe[5].xptr.au:                  642 ; 0x034: 0x00000282
kffixe[5].xptr.disk:                  3 ; 0x038: 0x0003
kffixe[5].xptr.flags:                 0 ; 0x03a: L=0 E=0 D=0 S=0
kffixe[5].xptr.chk:                 169 ; 0x03b: 0xa9
kffixe[6].xptr.au:                  643 ; 0x03c: 0x00000283
kffixe[6].xptr.disk:                  3 ; 0x040: 0x0003
kffixe[6].xptr.flags:                 0 ; 0x042: L=0 E=0 D=0 S=0
kffixe[6].xptr.chk:                 168 ; 0x043: 0xa8
kffixe[7].xptr.au:                  643 ; 0x044: 0x00000283
kffixe[7].xptr.disk:                  2 ; 0x048: 0x0002
kffixe[7].xptr.flags:                 0 ; 0x04a: L=0 E=0 D=0 S=0
kffixe[7].xptr.chk:                 169 ; 0x04b: 0xa9
kffixe[8].xptr.au:                  644 ; 0x04c: 0x00000284
kffixe[8].xptr.disk:                  2 ; 0x050: 0x0002
kffixe[8].xptr.flags:                 0 ; 0x052: L=0 E=0 D=0 S=0
kffixe[8].xptr.chk:                 174 ; 0x053: 0xae
kffixe[9].xptr.au:                  647 ; 0x054: 0x00000287
kffixe[9].xptr.disk:                  1 ; 0x058: 0x0001
kffixe[9].xptr.flags:                 0 ; 0x05a: L=0 E=0 D=0 S=0
kffixe[9].xptr.chk:                 174 ; 0x05b: 0xae
kffixe[10].xptr.au:                 646 ; 0x05c: 0x00000286
kffixe[10].xptr.disk:                 0 ; 0x060: 0x0000
kffixe[10].xptr.flags:                0 ; 0x062: L=0 E=0 D=0 S=0
kffixe[10].xptr.chk:                174 ; 0x063: 0xae

The information above confirms that this is indeed an indirect extent block (kfbh.type=KFBTYP_INDIRECT), holding the extent distribution for the rest of the datafile.
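Each extent pointer in both kfffde and kffixe carries a one-byte xptr.chk. Across the dumps in this post it behaves like a simple XOR of the pointer's au, disk, and flags bytes with a fixed mask. The sketch below is reverse-engineered from these dumps only; the 0x2A constant and the byte layout are inferences, not documented by Oracle:

```python
def xptr_chk(au: int, disk: int, flags: int = 0) -> int:
    """Checksum of an ASM extent pointer, as inferred from the kfed
    dumps above: XOR every byte of au (4 bytes), disk (2 bytes) and
    flags, then XOR with the constant 0x2A."""
    chk = 0x2A ^ flags
    for _ in range(4):          # au is a 4-byte field
        chk ^= au & 0xFF
        au >>= 8
    for _ in range(2):          # disk is a 2-byte field
        chk ^= disk & 0xFF
        disk >>= 8
    return chk

# kffixe[0] above: au=642, disk=2 -> chk 0xa8
print(hex(xptr_chk(642, 2)))
```

It also reproduces the kfffde entries shown earlier, e.g. au=629/disk=1 gives 0x5c and au=626/disk=3 gives 0x59, matching the dump.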

In ASM 10g, the ASM instance sent the database instance the full extent map of every datafile at initialization. Because this badly hurt performance, taking a long time when datafiles were large, from ASM 11g onward only the first 60 extents of each extent map (the 60 extents recorded directly in metadata file 1) are sent at initialization; the rest are sent when the database instance needs them.

Summary:
The ASM file directory keeps track of every file in the disk group, including metadata files, user-created files, and database files. We can query the V$ASM_FILE view for information about database files, and the V$ASM_ALIAS view to find the file number for a given file name.

]]>