  • Oracle RAC: a detailed health-check walkthrough

    I. The RAC Environment

    The RAC architecture consists of two nodes.

    Node 1

    SQL> show parameter instance

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    active_instance_count                integer
    cluster_database_instances           integer     2
    instance_groups                      string
    instance_name                        string      RACDB1
    instance_number                      integer     1
    instance_type                        string      RDBMS
    open_links_per_instance              integer     4
    parallel_instance_group              string
    parallel_server_instances            integer     2

    Node 2

    SQL> show parameter instance

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    active_instance_count                integer
    cluster_database_instances           integer     2
    instance_groups                      string
    instance_name                        string      RACDB2
    instance_number                      integer     2
    instance_type                        string      RDBMS
    open_links_per_instance              integer     4
    parallel_instance_group              string
    parallel_server_instances            integer     2

    Database version

    SQL> select * from v$version;

    BANNER

    ----------------------------------------------------------------

    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod

    PL/SQL Release 10.2.0.1.0 - Production

    CORE    10.2.0.1.0      Production

    TNS for Linux: Version 10.2.0.1.0 - Production

    NLSRTL Version 10.2.0.1.0 - Production

    Operating system information

    Node 1

    [oracle@rac1 ~]$ uname -a

    Linux rac1 2.6.18-53.el5 #1 SMP Wed Oct 10 16:34:02 EDT 2007 i686 i686 i386 GNU/Linux

    Node 2

    [oracle@rac2 ~]$ uname -a

    Linux rac2 2.6.18-53.el5 #1 SMP Wed Oct 10 16:34:02 EDT 2007 i686 i686 i386 GNU/Linux

    All RAC resources

    [oracle@rac2 ~]$ crs_stat -t

    Name            Type           Target    State     Host
    ------------------------------------------------------------
    ora....B1.inst  application    ONLINE    ONLINE    rac1
    ora....B2.inst  application    ONLINE    ONLINE    rac2
    ora....DB1.srv  application    ONLINE    ONLINE    rac2
    ora.....TAF.cs  application    ONLINE    ONLINE    rac2
    ora.RACDB.db    application    ONLINE    ONLINE    rac2
    ora....SM1.asm  application    ONLINE    ONLINE    rac1
    ora....C1.lsnr  application    ONLINE    ONLINE    rac1
    ora.rac1.gsd    application    ONLINE    ONLINE    rac1
    ora.rac1.ons    application    ONLINE    ONLINE    rac1
    ora.rac1.vip    application    ONLINE    ONLINE    rac1
    ora....SM2.asm  application    ONLINE    ONLINE    rac2
    ora....C2.lsnr  application    ONLINE    ONLINE    rac2
    ora.rac2.gsd    application    ONLINE    ONLINE    rac2
    ora.rac2.ons    application    ONLINE    ONLINE    rac2
    ora.rac2.vip    application    ONLINE    ONLINE    rac2
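    In a scripted health check, `crs_stat -t` output like the table above can be scanned automatically for any resource whose Target or State is not ONLINE. A minimal sketch (the sample here-document stands in for a live `crs_stat -t` call, which is an assumption of this illustration):

    ```shell
    #!/bin/sh
    # Flag crs_stat -t resources whose Target or State is not ONLINE.
    # In a live check, replace the here-document with: crs_stat -t | awk ...
    awk 'NR > 2 && ($3 != "ONLINE" || $4 != "ONLINE") { print $1 }' <<'EOF'
    Name           Type           Target    State     Host
    ------------------------------------------------------------
    ora....B1.inst application    ONLINE    ONLINE    rac1
    ora....B2.inst application    ONLINE    OFFLINE
    EOF
    ```

    `NR > 2` skips the header and divider; any resource printed is one that needs attention.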

    II. Simulating a Broken Interconnect Between the Two Nodes: What Happens to RAC, and the Full Fault-Diagnosis Process

    This section simulates a failure of the RAC private interconnect, locates the root cause, and then resolves the fault.

    1. Start with RAC in a fully healthy state

    [oracle@rac2 ~]$ crs_stat -t

    Name            Type           Target    State     Host
    ------------------------------------------------------------
    ora....B1.inst  application    ONLINE    ONLINE    rac1
    ora....B2.inst  application    ONLINE    ONLINE    rac2
    ora....DB1.srv  application    ONLINE    ONLINE    rac2
    ora.....TAF.cs  application    ONLINE    ONLINE    rac2
    ora.RACDB.db    application    ONLINE    ONLINE    rac2
    ora....SM1.asm  application    ONLINE    ONLINE    rac1
    ora....C1.lsnr  application    ONLINE    ONLINE    rac1
    ora.rac1.gsd    application    ONLINE    ONLINE    rac1
    ora.rac1.ons    application    ONLINE    ONLINE    rac1
    ora.rac1.vip    application    ONLINE    ONLINE    rac1
    ora....SM2.asm  application    ONLINE    ONLINE    rac2
    ora....C2.lsnr  application    ONLINE    ONLINE    rac2
    ora.rac2.gsd    application    ONLINE    ONLINE    rac2
    ora.rac2.ons    application    ONLINE    ONLINE    rac2
    ora.rac2.vip    application    ONLINE    ONLINE    rac2

    Check the CRS daemon states (CRS, CSS, EVM)

    [oracle@rac2 ~]$ crsctl check crs

    CSS appears healthy

    CRS appears healthy

    EVM appears healthy

    Check the OCR disk status; no problems found

    [oracle@rac2 ~]$ ocrcheck

    Status of Oracle Cluster Registry is as follows :

             Version                  :          2

             Total space (kbytes)     :     104344

             Used space (kbytes)      :       4344

             Available space (kbytes) :     100000

             ID                       : 1752469369

             Device/File Name         : /dev/raw/raw1

                                        Device/File integrity check succeeded

                                        Device/File not configured

             Cluster registry integrity check succeeded

    Check the voting disk status

    [oracle@rac2 ~]$ crsctl query css votedisk

    0.     0    /dev/raw/raw2                      raw device 2 is the voting disk

    located 1 votedisk(s).                         only one voting disk is located
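    These baseline checks can also be captured by script. As an illustration, the sketch below pulls the used and available space out of ocrcheck-style output; the sample text is embedded as a stand-in for a live `ocrcheck` invocation (an assumption of this sketch):

    ```shell
    #!/bin/sh
    # Extract used and available OCR space (kbytes) from ocrcheck-style output.
    # The sample text stands in for a live `ocrcheck` call.
    ocr_out='         Version                  :          2
             Total space (kbytes)     :     104344
             Used space (kbytes)      :       4344
             Available space (kbytes) :     100000'
    used=$(printf '%s\n' "$ocr_out"  | awk -F: '/Used space/      { gsub(/ /, "", $2); print $2 }')
    avail=$(printf '%s\n' "$ocr_out" | awk -F: '/Available space/ { gsub(/ /, "", $2); print $2 }')
    echo "OCR used=${used}KB available=${avail}KB"
    ```

    A monitoring job could alert when the available space drops below a threshold.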

    2. Manually disable a private NIC

    [oracle@rac2 ~]$ cat /etc/hosts

    127.0.0.1       localhost.localdomain   localhost

    ::1     localhost6.localdomain6 localhost6

    ##Public Network - (eth0)

    ##Private Interconnect - (eth1)

    ##Public Virtual IP (VIP) addresses - (eth0)

    192.168.1.101   rac1                        these are the RAC public addresses

    192.168.1.102   rac2

    192.168.2.101   rac1-priv                    these are the RAC private-interconnect addresses

    192.168.2.102   rac2-priv

    192.168.1.201   rac1-vip                     these are the RAC virtual (VIP) addresses

    192.168.1.202   rac2-vip
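    A quick way to sanity-check this layout is to group the /etc/hosts entries by the naming convention this cluster uses (`-priv` for the interconnect, `-vip` for the virtual addresses). A sketch, with the file content embedded as a sample rather than read from /etc/hosts:

    ```shell
    #!/bin/sh
    # Group RAC host entries from /etc/hosts-style content by naming convention.
    # The sample stands in for reading /etc/hosts directly.
    hosts='192.168.1.101   rac1
    192.168.1.102   rac2
    192.168.2.101   rac1-priv
    192.168.2.102   rac2-priv
    192.168.1.201   rac1-vip
    192.168.1.202   rac2-vip'
    printf '%s\n' "$hosts" | awk '
      $2 ~ /-priv$/ { print "private:", $2, $1; next }
      $2 ~ /-vip$/  { print "vip:",     $2, $1; next }
                    { print "public:",  $2, $1 }'
    ```

    Each node should appear exactly once in each of the three groups; anything missing or duplicated points at a hosts-file mistake.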

    Now check which IP address belongs to which NIC

    [oracle@rac2 ~]$ ifconfig

    eth0      Link encap:Ethernet  HWaddr 00:0C:29:8F:F1:87 

              inet addr:192.168.1.102  Bcast:192.168.1.255  Mask:255.255.255.0

              inet6 addr: fe80::20c:29ff:fe8f:f187/64 Scope:Link

              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

              RX packets:360 errors:0 dropped:0 overruns:0 frame:0

              TX packets:593 errors:0 dropped:0 overruns:0 carrier:0

              collisions:0 txqueuelen:1000

              RX bytes:46046 (44.9 KiB)  TX bytes:62812 (61.3 KiB)

              Interrupt:185 Base address:0x14a4

    eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:8F:F1:87 

              inet addr:192.168.1.202  Bcast:192.168.1.255  Mask:255.255.255.0

              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

              Interrupt:185 Base address:0x14a4

    eth1      Link encap:Ethernet  HWaddr 00:0C:29:8F:F1:91 

              inet addr:192.168.2.102  Bcast:192.168.2.255  Mask:255.255.255.0

              inet6 addr: fe80::20c:29ff:fe8f:f191/64 Scope:Link

              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

              RX packets:76588 errors:0 dropped:0 overruns:0 frame:0

              TX packets:58002 errors:0 dropped:0 overruns:0 carrier:0

              collisions:0 txqueuelen:1000

              RX bytes:65185420 (62.1 MiB)  TX bytes:37988820 (36.2 MiB)

              Interrupt:193 Base address:0x1824

    eth2      Link encap:Ethernet  HWaddr 00:0C:29:8F:F1:9B 

              inet addr:192.168.203.129  Bcast:192.168.203.255  Mask:255.255.255.0

              inet6 addr: fe80::20c:29ff:fe8f:f19b/64 Scope:Link

              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

              RX packets:339 errors:0 dropped:0 overruns:0 frame:0

              TX packets:83 errors:0 dropped:0 overruns:0 carrier:0

              collisions:0 txqueuelen:1000

              RX bytes:42206 (41.2 KiB)  TX bytes:10199 (9.9 KiB)

              Interrupt:169 Base address:0x18a4

    lo        Link encap:Local Loopback 

              inet addr:127.0.0.1  Mask:255.0.0.0

              inet6 addr: ::1/128 Scope:Host

              UP LOOPBACK RUNNING  MTU:16436  Metric:1

              RX packets:99403 errors:0 dropped:0 overruns:0 frame:0

              TX packets:99403 errors:0 dropped:0 overruns:0 carrier:0

              collisions:0 txqueuelen:0

              RX bytes:18134658 (17.2 MiB)  TX bytes:18134658 (17.2 MiB)
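    The interface-to-address mapping can be extracted from ifconfig-style output in one pass. A sketch (a trimmed sample is embedded; a live check would pipe `ifconfig` itself, which is an assumption here):

    ```shell
    #!/bin/sh
    # Extract interface -> IPv4 address pairs from ifconfig-style output.
    # A trimmed sample stands in for a live `ifconfig` call.
    sample='eth0      Link encap:Ethernet  HWaddr 00:0C:29:8F:F1:87
              inet addr:192.168.1.102  Bcast:192.168.1.255  Mask:255.255.255.0
    eth1      Link encap:Ethernet  HWaddr 00:0C:29:8F:F1:91
              inet addr:192.168.2.102  Bcast:192.168.2.255  Mask:255.255.255.0'
    printf '%s\n' "$sample" | awk '
      /^[a-z]/     { iface = $1 }                              # interface header line
      /inet addr:/ { sub(/^.*inet addr:/, ""); print iface, $1 }'
    ```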

    eth0 carries the RAC public network

    eth1 carries the RAC private interconnect

    eth0:1 carries the RAC VIP

    We now take down the eth1 private NIC to simulate a broken interconnect. The method is simple:

    ifdown eth1                             disable the NIC

    ifup   eth1                             bring the NIC back up

    [oracle@rac2 ~]$ su - root                 this must be run as root; otherwise you get "Users cannot control this device."

    Password:

    [root@rac2 ~]# ifdown eth1               

    I issued this command at 17:18:51; four minutes later, node 2 rebooted. Do you know what just happened?

    This is the classic RAC split-brain problem. When the interconnect between the nodes goes down, they can no longer share state, and a split brain results. RAC must evict part of the cluster to protect data consistency, and the evicted nodes are forcibly rebooted, which is exactly why node 2 restarted on its own. So why was node 2 rebooted rather than the other node?

    The eviction rules are:

    (1) the sub-cluster with fewer nodes is evicted;

    (2) the node with the larger node number is evicted;

    (3) the node with the higher load is evicted.

    Our case matches rule 2. OK, node 2 is back up; we log in to the system and enter the username/password.

    3. Locating the root cause

    (1) Check the operating system log

    [oracle@rac2 ~]$ su - root

    Password:

    [root@rac2 ~]# tail -30f /var/log/messages

    I reran the simulation; the log is verbose, so only the network-related warnings are shown here.

    Jul 17 20:05:25 rac2 avahi-daemon[3659]: Withdrawing address record for 192.168.2.102 on eth1.

    The IP address on eth1 is withdrawn; node 1 then evicts node 2, and node 2 reboots automatically

    Jul 17 20:05:25 rac2 avahi-daemon[3659]: Leaving mDNS multicast group on interface eth1.IPv4 with address 192.168.2.102.

    eth1 leaves the mDNS multicast group

    Jul 17 20:05:25 rac2 avahi-daemon[3659]: iface.c: interface_mdns_mcast_join() called but no local address available.

    Jul 17 20:05:25 rac2 avahi-daemon[3659]: Interface eth1.IPv4 no longer relevant for mDNS.

    eth1 is no longer relevant for mDNS

    Jul 17 20:09:54 rac2 logger: Oracle Cluster Ready Services starting up automatically.

    Oracle Clusterware starts up automatically

    Jul 17 20:09:59 rac2 avahi-daemon[3664]: Registering new address record for fe80::20c:29ff:fe8f:f191 on eth1.

    Jul 17 20:09:59 rac2 avahi-daemon[3664]: Registering new address record for 192.168.2.102 on eth1.

    New IP addresses are registered on eth1

    Jul 17 20:10:17 rac2 logger: Cluster Ready Services completed waiting on dependencies.

    CRS has finished waiting on its dependencies

    The messages above tell us, roughly, that an eth1 problem caused node 2 to reboot. To analyze further, we also need the CRS diagnostic logs.

    [root@rac2 crsd]# tail -100f $ORA_CRS_HOME/log/rac2/crsd/crsd.log

    Abnormal termination by CSS, ret = 8

    CSS terminated abnormally

    2013-07-17 20:11:18.115: [ default][1244944]0CRS Daemon Starting

    2013-07-17 20:11:18.116: [ CRSMAIN][1244944]0Checking the OCR device

    2013-07-17 20:11:18.303: [ CRSMAIN][1244944]0Connecting to the CSS Daemon

    The CRS and CSS daemons restart

    [root@rac2 cssd]# pwd

    /u01/crs1020/log/rac2/cssd

    [root@rac2 cssd]# more ocssd.log       inspect the cssd log

    [CSSD]2013-07-17 17:26:18.319 [86104976] >TRACE:   clssgmclientlsnr: listening on (ADDRESS=(PROTOCOL=ipc)(KEY=OCSSD_LL_rac2_crs))

    This shows where the cssd daemon on node rac2 was listening when the problem occurred

    [CSSD]2013-07-17 17:26:19.296 [75615120] >TRACE:   clssnmHandleSync: Acknowledging sync: src[1] srcName[rac1] seq[13] sync[12]

    The two nodes acknowledge a synchronization request between them

    Taken together, these messages point to an interconnect communication problem: the two nodes could not synchronize or share state, which triggered the split brain.
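    When diagnosing this class of failure, it helps to scan the system log for all interconnect-related events in one pass. A sketch using the avahi/CRS message patterns shown earlier (a captured sample stands in for /var/log/messages, which is an assumption of this illustration):

    ```shell
    #!/bin/sh
    # Count syslog-style lines relevant to a private-interconnect failure.
    # A captured sample stands in for: grep -cE ... /var/log/messages
    log='Jul 17 20:05:25 rac2 avahi-daemon[3659]: Withdrawing address record for 192.168.2.102 on eth1.
    Jul 17 20:05:25 rac2 avahi-daemon[3659]: Interface eth1.IPv4 no longer relevant for mDNS.
    Jul 17 20:09:54 rac2 logger: Oracle Cluster Ready Services starting up automatically.
    Jul 17 20:10:17 rac2 logger: Cluster Ready Services completed waiting on dependencies.'
    printf '%s\n' "$log" | grep -cE 'Withdrawing address|no longer relevant|Cluster Ready Services'
    ```

    The timestamps on the matched lines then give the sequence of events: address withdrawal first, cluster restart a few minutes later.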

    4. After the reboot, node 2 recovers to a normal state automatically

    [root@rac2 cssd]# ifconfig

    eth0      Link encap:Ethernet  HWaddr 00:0C:29:8F:F1:87 

              inet addr:192.168.1.102  Bcast:192.168.1.255  Mask:255.255.255.0

              inet6 addr: fe80::20c:29ff:fe8f:f187/64 Scope:Link

              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

              RX packets:567 errors:0 dropped:0 overruns:0 frame:0

              TX packets:901 errors:0 dropped:0 overruns:0 carrier:0

              collisions:0 txqueuelen:1000

              RX bytes:65402 (63.8 KiB)  TX bytes:96107 (93.8 KiB)

              Interrupt:185 Base address:0x14a4

    eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:8F:F1:87 

              inet addr:192.168.1.202  Bcast:192.168.1.255  Mask:255.255.255.0

              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

              Interrupt:185 Base address:0x14a4

    eth1      Link encap:Ethernet  HWaddr 00:0C:29:8F:F1:91 

              inet addr:192.168.2.102  Bcast:192.168.2.255  Mask:255.255.255.0

              inet6 addr: fe80::20c:29ff:fe8f:f191/64 Scope:Link

              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

              RX packets:76659 errors:0 dropped:0 overruns:0 frame:0

              TX packets:51882 errors:0 dropped:0 overruns:0 carrier:0

              collisions:0 txqueuelen:1000

              RX bytes:61625763 (58.7 MiB)  TX bytes:26779167 (25.5 MiB)

              Interrupt:193 Base address:0x1824

    eth2      Link encap:Ethernet  HWaddr 00:0C:29:8F:F1:9B 

              inet addr:192.168.203.129  Bcast:192.168.203.255  Mask:255.255.255.0

              inet6 addr: fe80::20c:29ff:fe8f:f19b/64 Scope:Link

              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

              RX packets:409 errors:0 dropped:0 overruns:0 frame:0

              TX packets:58 errors:0 dropped:0 overruns:0 carrier:0

              collisions:0 txqueuelen:1000

              RX bytes:45226 (44.1 KiB)  TX bytes:9567 (9.3 KiB)

              Interrupt:169 Base address:0x18a4

    lo        Link encap:Local Loopback 

              inet addr:127.0.0.1  Mask:255.0.0.0

              inet6 addr: ::1/128 Scope:Host

              UP LOOPBACK RUNNING  MTU:16436  Metric:1

              RX packets:49025 errors:0 dropped:0 overruns:0 frame:0

              TX packets:49025 errors:0 dropped:0 overruns:0 carrier:0

              collisions:0 txqueuelen:0

              RX bytes:11292111 (10.7 MiB)  TX bytes:11292111 (10.7 MiB)

    Looking at the NIC IPs, the private eth1 address that was withdrawn has now been restored. This is because node 2 rebooted: on reboot all NICs are reinitialized, so the eth1 interface we disabled was brought back up and regained its IP.

    Check the CRS daemon states; all healthy

    [root@rac2 cssd]# crsctl check crs

    CSS appears healthy

    CRS appears healthy

    EVM appears healthy

    Check the cluster, instance, database, listener, and ASM services: all intact and fully started

    [root@rac2 cssd]# crs_stat -t

    Name           Type           Target    State     Host       

    ------------------------------------------------------------

    ora....B1.inst   application    ONLINE    ONLINE    rac1       

    ora....B2.inst   application    ONLINE    ONLINE    rac2       

    ora....DB1.srv   application    ONLINE    ONLINE    rac1       

    ora.....TAF.cs   application    ONLINE    ONLINE    rac1       

    ora.RACDB.db  application    ONLINE    ONLINE    rac1       

    ora....SM1.asm  application    ONLINE    ONLINE    rac1       

    ora....C1.lsnr   application    ONLINE    ONLINE    rac1       

    ora.rac1.gsd   application    ONLINE    ONLINE    rac1       

    ora.rac1.ons   application    ONLINE    ONLINE    rac1       

    ora.rac1.vip    application    ONLINE    ONLINE    rac1       

    ora....SM2.asm  application    ONLINE    ONLINE    rac2       

    ora....C2.lsnr   application    ONLINE    ONLINE    rac2       

    ora.rac2.gsd   application    ONLINE    ONLINE    rac2       

    ora.rac2.ons   application    ONLINE    ONLINE    rac2       

    ora.rac2.vip    application    ONLINE    ONLINE    rac2       

    This completes the RAC fault analysis and resolution.

    III. Simulating an Unavailable OCR Disk: What Happens to RAC, and the Full Fault-Diagnosis Process

    The OCR disk: the OCR disk registers all RAC resources, including the cluster, databases, instances, listeners, services, ASM, storage, and network. Only resources registered in the OCR can be managed by the CRS cluster stack; the CRS daemon manages exactly the resources recorded in the OCR. In day-to-day operations the OCR contents can be lost, for example when adding or removing nodes, or when adding or removing OCR disks. Next we simulate a loss of the OCR contents and show how to locate and resolve the fault.

    Experiment

    1. Check the OCR disk and the CRS daemons

    (1) Check the OCR disk; only if the OCR disk is healthy can the CRS daemon manage resources properly

    [root@rac2 cssd]# ocrcheck

    Status of Oracle Cluster Registry is as follows :

             Version                  :          2
             Total space (kbytes)     :     104344
             Used space (kbytes)      :       4344
             Available space (kbytes) :     100000
             ID                       : 1752469369
             Device/File Name         : /dev/raw/raw1            the raw device backing the OCR disk
                                        Device/File integrity check succeeded
                                        Device/File not configured
             Cluster registry integrity check succeeded          integrity check passed

    (2) Check the CRS state

    [root@rac2 cssd]# crsctl check crs

    CSS appears healthy

    CRS appears healthy

    EVM appears healthy

    All cluster daemons are healthy

    (3) Stop the CRS daemons

    [root@rac2 sysconfig]# crsctl stop crs

    Stopping resources.

    Successfully stopped CRS resources

    Stopping CSSD.

    Shutting down CSS daemon.

    Shutdown request successfully issued.

    [root@rac2 sysconfig]# crsctl check crs

    Failure 1 contacting CSS daemon

    Cannot communicate with CRS

    Cannot communicate with EVM

    2. As root, export the OCR contents to create a backup

    [root@rac2 sysconfig]# ocrconfig -export /home/oracle/ocr.exp

    [oracle@rac2 ~]$ pwd

    /home/oracle

    [oracle@rac2 ~]$ ll

    total 108

    -rw-r--r-- 1 root   root     98074 Jul 18 11:20 ocr.exp         the OCR export file has been created

    3. Restart the CRS daemons

    [root@rac2 sysconfig]# crsctl start crs

    Attempting to start CRS stack

    The CRS stack will be started shortly

    Check the CRS state

    [root@rac2 sysconfig]# crsctl check crs       after the restart, everything is healthy again

    CSS appears healthy

    CRS appears healthy

    EVM appears healthy

    4. Overwrite the OCR disk with zeros using dd to simulate losing its contents

    [root@rac2 sysconfig]# dd if=/dev/zero of=/dev/raw/raw1 bs=1024 count=102400

    102400+0 records in

    102400+0 records out

    104857600 bytes (105 MB) copied, 76.7348 seconds, 1.4 MB/s

    What the command does:

    dd                   copies data in blocks of a specified size, optionally converting it on the way
    if=/dev/zero         the source file: the zero device
    of=/dev/raw/raw1     the target file: the OCR disk
    bs=1024              block size of 1024 bytes, i.e. 1 KB
    count=102400         number of blocks to copy: 102400
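    The same dd pattern can be tried safely against a scratch file instead of the raw device. A non-destructive sketch (the temp-file name is arbitrary):

    ```shell
    #!/bin/sh
    # Zero-fill a scratch file with dd, then verify its size.
    # Writing to a temp file instead of /dev/raw/raw1 keeps the demo harmless.
    tmpfile=$(mktemp)
    dd if=/dev/zero of="$tmpfile" bs=1024 count=16 2>/dev/null
    wc -c < "$tmpfile"      # 16 blocks * 1024 bytes = 16384
    rm -f "$tmpfile"
    ```

    Against a real raw device, the same arithmetic (bs * count) tells you exactly how many bytes will be destroyed, which is why the original command wipes the full 100 MB OCR area.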

    5. Check the OCR disk again

    [root@rac2 sysconfig]# ocrcheck

    PROT-601: Failed to initialize ocrcheck

    Check the CRS state

    [root@rac2 sysconfig]# crsctl check crs

    Failure 1 contacting CSS daemon

    Cannot communicate with CRS

    EVM appears healthy

    The CRS failure is to be expected: with the registered resource information gone, there is nothing left for it to manage.

    6. Restore the OCR contents with an import

    [root@rac2 crs1020]# ocrconfig -import /home/oracle/ocr.exp

    7. Check the OCR disk one last time

    Thankfully, it restored cleanly.

    [root@rac2 crs1020]# ocrcheck

    Status of Oracle Cluster Registry is as follows :

             Version                  :          2

             Total space (kbytes)     :     104344

             Used space (kbytes)      :       4348

             Available space (kbytes) :      99996

             ID                       :  425383787

             Device/File Name         : /dev/raw/raw1

                                        Device/File integrity check succeeded

                                        Device/File not configured

             Cluster registry integrity check succeeded

    8. Watch the CRS daemons

    [root@rac2 crs1020]# crsctl check crs

    CSS appears healthy

    CRS appears healthy

    EVM appears healthy

    Very good: once the OCR disk is restored, the CRS daemons restart automatically.

    [root@rac2 crs1020]# crs_stat -t

    Name           Type           Target    State     Host       

    ------------------------------------------------------------

    ora....B1.inst    application    ONLINE    ONLINE    rac1       

    ora....B2.inst    application    ONLINE    OFFLINE              

    ora....DB1.srv   application    ONLINE    ONLINE    rac1       

    ora.....TAF.cs    application    ONLINE    ONLINE    rac1       

    ora.RACDB.db   application    ONLINE    ONLINE    rac1       

    ora....SM1.asm  application    ONLINE    ONLINE    rac1       

    ora....C1.lsnr    application    ONLINE    ONLINE    rac1       

    ora.rac1.gsd    application    ONLINE    ONLINE    rac1       

    ora.rac1.ons    application    ONLINE    ONLINE    rac1       

    ora.rac1.vip     application    ONLINE    ONLINE    rac1       

    ora....SM2.asm  application    ONLINE    OFFLINE              

    ora....C2.lsnr    application    ONLINE    OFFLINE              

    ora.rac2.gsd    application    ONLINE    OFFLINE              

    ora.rac2.ons    application    ONLINE    OFFLINE              

    ora.rac2.vip     application    ONLINE    ONLINE    rac2

    I then restarted the CRS cluster stack once more.

    [root@rac2 init.d]# ./init.crs stop

    Shutting down Oracle Cluster Ready Services (CRS):

    Stopping resources.

    Successfully stopped CRS resources

    Stopping CSSD.

    Shutting down CSS daemon.

    Shutdown request successfully issued.

    Shutdown has begun. The daemons should exit soon.

    [root@rac2 init.d]# crs_stat -t

    CRS-0184: Cannot communicate with the CRS daemon.

    [root@rac2 init.d]# ./init.crs start

    Startup will be queued to init within 90 seconds.

    Now everything is back to normal.

    [oracle@rac2 ~]$ crs_stat -t

    Name            Type           Target    State     Host
    ------------------------------------------------------------
    ora....B1.inst  application    ONLINE    ONLINE    rac1
    ora....B2.inst  application    ONLINE    ONLINE    rac2
    ora....DB1.srv  application    ONLINE    ONLINE    rac2
    ora.....TAF.cs  application    ONLINE    ONLINE    rac2
    ora.RACDB.db    application    ONLINE    ONLINE    rac2
    ora....SM1.asm  application    ONLINE    ONLINE    rac1
    ora....C1.lsnr  application    ONLINE    ONLINE    rac1
    ora.rac1.gsd    application    ONLINE    ONLINE    rac1
    ora.rac1.ons    application    ONLINE    ONLINE    rac1
    ora.rac1.vip    application    ONLINE    ONLINE    rac1
    ora....SM2.asm  application    ONLINE    ONLINE    rac2
    ora....C2.lsnr  application    ONLINE    ONLINE    rac2
    ora.rac2.gsd    application    ONLINE    ONLINE    rac2
    ora.rac2.ons    application    ONLINE    ONLINE    rac2
    ora.rac2.vip    application    ONLINE    ONLINE    rac2

    IV. Simulating an Unavailable Voting Disk: What Happens to RAC, and the Full Fault-Diagnosis Process

    The voting disk: when a split brain occurs at the cluster layer, the voting disk decides which node gets evicted.

    The control file: when a split brain occurs at the instance layer, the control file decides which node gets evicted.

    Voting-disk redundancy options:

    (1) external redundancy, where the voting disk is protected by an external mechanism;

    (2) Oracle's own internal redundancy, achieved by adding mirrored voting disks.

    Experiment

    1. Check the voting disk status

    [oracle@rac1 ~]$ crsctl query css votedisk

    0.     0    /dev/raw/raw2                 raw device 2 is the voting disk

    located 1 votedisk(s).                    only one voting disk is located

    2. Stop the CRS cluster

    [root@rac1 sysconfig]# crsctl stop crs

    Stopping resources.

    Successfully stopped CRS resources

    Stopping CSSD.

    Shutting down CSS daemon.

    Shutdown request successfully issued.

    3. Add a voting disk to implement internal redundancy

    crsctl add css votedisk /dev/raw/raw3 -force      add raw device raw3 to the voting-disk group

    After the add, Oracle copies the contents of the existing voting disk to the new one.

    4. Check the voting disk status again

    crsctl query css votedisk

    5. Start the CRS cluster

    [root@rac2 sysconfig]# crsctl start crs

    Attempting to start CRS stack

    The CRS stack will be started shortly

    Summary: if the voting disk /dev/raw/raw2 is damaged, its mirror /dev/raw/raw3 can take over, so the RAC cluster can keep serving clients.

  • Original source: https://www.cnblogs.com/zwl715/p/3729912.html