  • ORACLE 12C R2 RAC Installation and Configuration Guide

    >> from zhuhaiqing.info

    Minimum ASM Disk Space Requirements

    Compared with the previous release, 12C R2's disk space requirement for the OCR has grown noticeably.
    For convenience, size the volumes as follows:
    External: 1 volume x 40G
    Normal: 3 volumes x 30G
    High: 5 volumes x 25G
    Flex: 3 volumes x 30G
    The OCR, voting files, and MGMT repository are usually placed in one disk group with Normal redundancy, i.e. at least 3 ASM disks and about 80G of space.
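As a rough sketch (my own helper, not anything Oracle ships), usable capacity per redundancy level can be estimated from the raw disk capacity:

```shell
# Hypothetical helper: estimate usable ASM disk group capacity in GB from
# the redundancy level, disk count, and per-disk size. External keeps raw
# capacity, Normal mirrors two ways (half), High three ways (a third).
asm_usable_gb() {
  redundancy=$1; disks=$2; size_gb=$3
  raw=$(( disks * size_gb ))
  case $redundancy in
    external) echo "$raw" ;;
    normal)   echo $(( raw / 2 )) ;;
    high)     echo $(( raw / 3 )) ;;
    *)        echo "unknown redundancy: $redundancy" >&2; return 1 ;;
  esac
}

asm_usable_gb normal 3 30   # 3 x 30G with Normal redundancy -> 45
```

So the Normal layout above (3 x 30G) leaves roughly 45G usable; treat the exact installer minimums as version-specific and confirm them against your release's install guide.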

    Operating System Installation

    During OS installation, check "Server with GUI" and "Compatibility Libraries"; nothing else needs to be selected.
    Use CentOS 7, RHEL 7, or Oracle Linux 7.

    Install the Oracle Preinstall Package

    wget http://yum.oracle.com/public-yum-ol7.repo -P /etc/yum.repos.d/
    yum install -y oracle-rdbms-server-12cR1-preinstall
    # For 12c R2 the repo also provides oracle-database-server-12cR2-preinstall.

    Create Users and Groups

    The oracle user and the dba and oinstall groups were already created in the previous step.
    The uid and gid of the oracle and grid users must match across all RAC nodes, so it is best to specify the uid and gid explicitly when creating them.

    groupadd --gid 54323 asmdba
    groupadd --gid 54324 asmoper
    groupadd --gid 54325 asmadmin
    groupadd --gid 54326 oper
    groupadd --gid 54327 backupdba
    groupadd --gid 54328 dgdba
    groupadd --gid 54329 kmdba
    usermod --uid 54321 --gid oinstall --groups dba,oper,asmdba,asmoper,backupdba,dgdba,kmdba oracle
    useradd --uid 54322 --gid oinstall --groups dba,asmadmin,asmdba,asmoper grid

    Installation Directories

    mkdir -p /u01/app/12.2.0/grid
    mkdir -p /u01/app/grid
    mkdir -p /u01/app/oracle
    chown -R grid:oinstall /u01
    chown oracle:oinstall /u01/app/oracle
    chmod -R 775 /u01/
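The same layout can be scripted; this hypothetical helper takes the base mount point as a parameter (the guide uses /u01) so it can be reused unchanged on every node:

```shell
# Hypothetical helper: create the standard Grid/Oracle directory layout
# under a configurable base. The ownership/permission steps mirror the
# guide and are skipped unless running as root.
create_ofa_dirs() {
  base=$1   # e.g. /u01
  mkdir -p "$base/app/12.2.0/grid" "$base/app/grid" "$base/app/oracle"
  if [ "$(id -u)" -eq 0 ]; then
    chown -R grid:oinstall "$base"
    chown oracle:oinstall "$base/app/oracle"
    chmod -R 775 "$base"
  fi
}
```

Usage on a node would simply be `create_ofa_dirs /u01` as root.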

    Environment Variables

    grid user environment variables

    Note the quoted 'EOF': it keeps $ORACLE_HOME and $PATH from being expanded by the current shell when the file is written.

    cat <<'EOF' >>/home/grid/.bash_profile
    ORACLE_SID=+ASM1
    ORACLE_HOME=/u01/app/12.2.0/grid
    PATH=$ORACLE_HOME/bin:$PATH
    LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
    CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

    export ORACLE_SID CLASSPATH ORACLE_HOME LD_LIBRARY_PATH PATH

    EOF

    On node 2, set ORACLE_SID=+ASM2.

    oracle user environment variables

    cat <<'EOF' >>/home/oracle/.bash_profile
    ORACLE_SID=starboss1
    ORACLE_HOME=/u01/app/oracle/product/12.2.0/db_1
    ORACLE_HOSTNAME=rac01
    PATH=$ORACLE_HOME/bin:$PATH
    LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
    CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

    export ORACLE_SID ORACLE_HOME ORACLE_HOSTNAME PATH LD_LIBRARY_PATH CLASSPATH
    EOF

    On node 2, set ORACLE_SID=starboss2 and ORACLE_HOSTNAME=rac02.

    Modify logind.conf

    # vi /etc/systemd/logind.conf
    RemoveIPC=no
    # systemctl daemon-reload
    # systemctl restart systemd-logind

    Load the pam_limits.so Module

    echo "session required pam_limits.so" >> /etc/pam.d/login

    Disable SELinux

    setenforce 0
    vi /etc/sysconfig/selinux   # set SELINUX=disabled

    Disable the Firewall

    # systemctl stop firewalld && systemctl disable firewalld

    Modify ulimit

    cat <<EOF >> /etc/security/limits.d/99-grid-oracle-limits.conf
    oracle soft nproc 16384
    oracle hard nproc 16384
    oracle soft nofile 1024
    oracle hard nofile 65536
    oracle soft stack 10240
    oracle hard stack 32768
    grid soft nproc 16384
    grid hard nproc 16384
    grid soft nofile 1024
    grid hard nofile 65536
    grid soft stack 10240
    grid hard stack 32768
    EOF
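To sanity-check the file just written, a small awk lookup (my own sketch, assuming the four-column `<user> <type> <item> <value>` format above) can read values back:

```shell
# Sketch: read one limit back from a limits.d-style file with columns
# "<user> <type> <item> <value>", as written above.
get_limit() {
  file=$1; user=$2; type=$3; item=$4
  awk -v u="$user" -v t="$type" -v i="$item" \
      '$1==u && $2==t && $3==i { print $4 }' "$file"
}
```

For example, `get_limit /etc/security/limits.d/99-grid-oracle-limits.conf oracle hard nofile` should print 65536 if the file above was written correctly.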

    Create Custom ulimit Settings

    Note the quoted 'EOF': without it, $USER and $SHELL would be expanded by the current shell when the file is written rather than at login.

    cat <<'EOF' >> /etc/profile.d/oracle-grid.sh
    if [ $USER = "oracle" ]; then
        if [ $SHELL = "/bin/ksh" ]; then
            ulimit -u 16384
            ulimit -n 65536
        else
            ulimit -u 16384 -n 65536
        fi
    fi
    if [ $USER = "grid" ]; then
        if [ $SHELL = "/bin/ksh" ]; then
            ulimit -u 16384
            ulimit -n 65536
        else
            ulimit -u 16384 -n 65536
        fi
    fi
    EOF

    Resize the Shared Memory Filesystem

    Add the following line to /etc/fstab; adjust the size to your environment, since it depends on physical memory and MEMORY_TARGET.
    echo "shm /dev/shm tmpfs size=12g 0 0" >> /etc/fstab
    After the change, just remount shm:
    mount -o remount /dev/shm

    Multipath

    # yum install device-mapper-multipath
    # cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf /etc/

    Get the SCSI IDs:
    # /usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sda
    # vi /etc/multipath.conf
    multipaths {
    multipath {
    wwid 36000d310012522000000000000000006
    alias vol01
    }
    multipath {
    wwid 36000d310012522000000000000000005
    alias vol02
    }
    }
    # systemctl start multipathd.service
    # multipath -ll

    Configure ASM Disks

    ASMLib Method

    Install ASMLib
    Oracle Linux 7
    yum install -y kmod-oracleasm
    CentOS 7
    yum install -y http://mirror.centos.org/centos/7/os/x86_64/Packages/kmod-oracleasm-2.0.8-17.el7.centos.x86_64.rpm

    yum install -y http://download.oracle.com/otn_software/asmlib/oracleasmlib-2.0.12-1.el7.x86_64.rpm
    yum install -y http://public-yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/oracleasm-support-2.1.8-3.1.el7.x86_64.rpm

    Downloads for other versions:
    http://www.oracle.com/technetwork/server-storage/linux/asmlib/index-101839.html
    ASM disk configuration
    12C R2 requires more disk group space than 12C R1.

    [root@rac01 ~]# /etc/init.d/oracleasm configure -i
    Configuring the Oracle ASM library driver.

    This will configure the on-boot properties of the Oracle ASM library
    driver. The following questions will determine whether the driver is
    loaded on boot and what permissions it will have. The current values
    will be shown in brackets ('[]'). Hitting <ENTER> without typing an
    answer will keep that current value. Ctrl-C will abort.

    Default user to own the driver interface []: grid
    Default group to own the driver interface []: asmadmin
    Start Oracle ASM library driver on boot (y/n) [n]: y
    Scan for Oracle ASM disks on boot (y/n) [y]: y
    Writing Oracle ASM library driver configuration: done
    Initializing the Oracle ASMLib driver: [ OK ]
    Scanning the system for Oracle ASMLib disks: [ OK ]
    [root@rac01 ~]# reboot

    Create a primary partition on each shared disk with fdisk:
    [root@rac01 ~]# fdisk /dev/sdd
    Welcome to fdisk (util-linux 2.23.2).

    Changes will remain in memory only, until you decide to write them.
    Be careful before using the write command.

    Device does not contain a recognized partition table
    Building a new DOS disklabel with disk identifier 0x86f899a0.

    Command (m for help): n
    Partition type:
    p primary (0 primary, 0 extended, 4 free)
    e extended
    Select (default p): p
    Partition number (1-4, default 1):
    First sector (2048-39976959, default 2048):
    Using default value 2048
    Last sector, +sectors or +size{K,M,G} (2048-39976959, default 39976959):
    Using default value 39976959
    Partition 1 of type Linux and of size 19.1 GiB is set

    Command (m for help): w
    The partition table has been altered!

    Calling ioctl() to re-read partition table.
    Syncing disks.

    Create the ASM disks on any one cluster node:
    [root@rac01 ~]# /etc/init.d/oracleasm createdisk OCR01 /dev/sdd1
    Marking disk "OCR01" as an ASM disk: [ OK ]
    [root@rac01 ~]# /etc/init.d/oracleasm createdisk OCR02 /dev/sde1
    Marking disk "OCR02" as an ASM disk: [ OK ]
    [root@rac01 ~]# /etc/init.d/oracleasm createdisk OCR03 /dev/sdf1
    Marking disk "OCR03" as an ASM disk: [ OK ]
    [root@rac01 ~]# /etc/init.d/oracleasm createdisk DATA01 /dev/sdb1
    Marking disk "DATA01" as an ASM disk: [ OK ]
    [root@rac01 ~]# /etc/init.d/oracleasm createdisk DATA02 /dev/sdc1
    Marking disk "DATA02" as an ASM disk: [ OK ]
    Then run on both nodes:
    [root@rac01 ~]# /etc/init.d/oracleasm scandisks
    [root@rac01 ~]# /etc/init.d/oracleasm listdisks

    Note:
    To wipe a disk and redeploy ASM, use the dd command, for example:
    dd if=/dev/zero of=/dev/sdb1 bs=8192 count=128000
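The dd above can be wrapped so several partitions are cleared in one pass; this is my own sketch and is destructive, so the device list is deliberately left to the caller:

```shell
# Sketch: zero the first ~1 GiB of a device to clear old ASM metadata.
# DESTRUCTIVE on real devices; the optional second argument shrinks the
# write for testing. conv=notrunc avoids truncating regular files.
wipe_asm_header() {
  dev=$1; count=${2:-128000}   # 8192 * 128000 bytes ~= 1 GiB, as in the guide
  dd if=/dev/zero of="$dev" bs=8192 count="$count" conv=notrunc 2>/dev/null
}

# e.g. for p in /dev/sdb1 /dev/sdc1; do wipe_asm_header "$p"; done
```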

    UDEV Method

    The procedure differs between CentOS 6 and CentOS 7, as follows:

    Confirm the required udev package is installed on all RAC nodes:
    [root@rh2 ~]# rpm -qa|grep udev
    udev-095-14.21.el5

    CentOS 6/Oracle Linux 6/RHEL 6

    1. Get each block device's unique identifier with scsi_id, assuming LUNs sdc through sdp already exist on the system:
    for i in c d e f g h i j k l m n o p ;
    do
    echo "sd$i" "`scsi_id -g -u -s /block/sd$i` ";
    done

    sdc 1IET_00010001
    sdd 1IET_00010002
    sde 1IET_00010003
    sdf 1IET_00010004

    The above lists the unique identifier corresponding to each block device name.

    2. Create the required udev rules file. First change to the rules directory:
    [root@rh2 ~]# cd /etc/udev/rules.d
    Then create the rules file:
    [root@rh2 rules.d]# touch 99-oracle-asmdevices.rules
    [root@rh2 rules.d]# cat 99-oracle-asmdevices.rules
    KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_00010001", NAME="ocr1", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_00010002", NAME="ocr2", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_00010003", NAME="asm-disk1", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_00010004", NAME="asm-disk2", OWNER="grid", GROUP="asmadmin", MODE="0660"

    RESULT is the output of /sbin/scsi_id -g -u -s %p; fill in the identifiers obtained above, in order.
    OWNER is normally grid and GROUP asmadmin; MODE is the disk permission, and 0660 is fine.
    NAME is the device name after udev mapping.
    It is recommended to create a dedicated disk group for the OCR and voting disks; to make them easy to distinguish, name that group's devices ocr1..ocrn.
    The remaining disks can be named after their actual purpose or their disk group.

    3. Copy the rules file to the other nodes:
    [root@rh2 rules.d]# scp 99-oracle-asmdevices.rules Other_node:/etc/udev/rules.d

    4. Reload udev on all nodes, or simply reboot the servers:

    [root@rh2 rules.d]# /sbin/udevcontrol reload_rules
    [root@rh2 rules.d]# /sbin/start_udev
    Starting udev: [ OK ]

    5. Check that the devices are in place:

    [root@rh2 rules.d]# cd /dev
    [root@rh2 dev]# ls -l ocr*
    brw-rw---- 1 grid asmadmin 8, 32 Jul 10 17:31 ocr1
    brw-rw---- 1 grid asmadmin 8, 48 Jul 10 17:31 ocr2
    [root@rh2 dev]# ls -l asm-disk*
    brw-rw---- 1 grid asmadmin 8, 64 Jul 10 17:31 asm-disk1
    brw-rw---- 1 grid asmadmin 8, 80 Jul 10 17:31 asm-disk2
    brw-rw---- 1 grid asmadmin 8, 96 Jul 10 17:31 asm-disk3
    brw-rw---- 1 grid asmadmin 8, 112 Jul 10 17:31 asm-disk4

    CentOS 7/Oracle Linux 7/RHEL 7

    Get the block device IDs:
    # /usr/lib/udev/scsi_id -g -u -d /dev/sdb1
    14f504e46494c45526a75744363422d796357662d4b436a65
    # /usr/lib/udev/scsi_id -g -u -d /dev/sdc1
    14f504e46494c455254535a7a414d2d62494b6f2d5a6f6a42
    # /usr/lib/udev/scsi_id -g -u -d /dev/sdd1
    14f504e46494c45526566324e626c2d4770654c2d6b443064
    # /usr/lib/udev/scsi_id -g -u -d /dev/sde1
    14f504e46494c455266326e7547552d384953442d6135576a
    # /usr/lib/udev/scsi_id -g -u -d /dev/sdf1
    14f504e46494c4552774263526f742d534a75392d36374f69

    Create the scsi_id parameter file:
    echo "options=-g" > /etc/scsi_id.config

    # cat /etc/udev/rules.d/99-oracle-asmdevices.rules
    KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="14f504e46494c45526a75744363422d796357662d4b436a65", SYMLINK+="asm-disk1", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="14f504e46494c455254535a7a414d2d62494b6f2d5a6f6a42", SYMLINK+="asm-disk2", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="14f504e46494c45526566324e626c2d4770654c2d6b443064", SYMLINK+="asm-disk3", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="14f504e46494c455266326e7547552d384953442d6135576a", SYMLINK+="asm-disk4", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="14f504e46494c4552774263526f742d534a75392d36374f69", SYMLINK+="asm-disk5", OWNER="grid", GROUP="asmadmin", MODE="0660"
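Rather than pasting each rule by hand, the rules file can be generated from `wwid name` pairs; this generator is my own sketch and assumes the same `scsi_id` match keys shown above:

```shell
# Sketch: emit one udev rule per "wwid name" input pair, matching the
# RESULT of scsi_id and creating a grid:asmadmin symlink as in the file above.
gen_asm_rules() {
  while read -r wwid name; do
    [ -n "$wwid" ] || continue
    printf 'KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="%s", SYMLINK+="%s", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' \
      "$wwid" "$name"
  done
}

# Usage: gen_asm_rules <<'LIST' > /etc/udev/rules.d/99-oracle-asmdevices.rules
# 14f504e46494c45526a75744363422d796357662d4b436a65 asm-disk1
# LIST
```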


    Reload and refresh the block device partition tables:
    # /sbin/partprobe /dev/sdb1
    # /sbin/partprobe /dev/sdc1
    # /sbin/partprobe /dev/sdd1
    # /sbin/partprobe /dev/sde1
    # /sbin/partprobe /dev/sdf1

    Test udev:
    # /sbin/udevadm test /block/sdb/sdb1
    # /sbin/udevadm test /block/sdc/sdc1
    # /sbin/udevadm test /block/sdd/sdd1
    # /sbin/udevadm test /block/sde/sde1
    # /sbin/udevadm test /block/sdf/sdf1

    Reload the udev rules:
    # /sbin/udevadm control --reload-rules

    Check the generated symlinks:
    [root@udev ~]# ls -l /dev/asm-disk*
    lrwxrwxrwx 1 root root 4 Aug 22 13:19 /dev/asm-disk1 -> sdb1
    lrwxrwxrwx 1 root root 4 Aug 22 13:19 /dev/asm-disk2 -> sdc1
    lrwxrwxrwx 1 root root 4 Aug 22 13:19 /dev/asm-disk3 -> sdd1
    lrwxrwxrwx 1 root root 4 Aug 22 13:19 /dev/asm-disk4 -> sde1
    lrwxrwxrwx 1 root root 4 Aug 22 13:19 /dev/asm-disk5 -> sdf1

    Disable NTP

    /sbin/service ntpd stop

    chkconfig ntpd off

    mv /etc/ntp.conf /etc/ntp.conf.org
    rm /var/run/ntpd.pid
    On EL7, chronyd is the default time service; if it is running, stop and disable it as well so that Oracle's Cluster Time Synchronization Service takes over.

    Stop the avahi-daemon Service

    systemctl stop avahi-dnsconfd
    systemctl stop avahi-daemon
    systemctl disable avahi-dnsconfd
    systemctl disable avahi-daemon

    IP Configuration

    Without a DNS service, names are resolved through the hosts file, which can hold only one SCAN IP; clients can then reach only one RAC node and there is no load balancing. DNS setup is covered later in this guide.

    #public, uplinked to the service switch, bonded
    192.168.245.134 rac01
    192.168.245.140 rac02

    #private, direct-connected interconnect, bonded
    10.0.1.1 rac01-priv
    10.0.1.2 rac02-priv

    #virtual
    192.168.245.136 rac01-vip
    192.168.245.142 rac02-vip

    #scan-ip,oracle rac service
    192.168.245.135 rac-cluster-scan
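Before the installer's own prerequisite checks, the hosts file can be verified quickly; this checker is my own sketch, and the name list mirrors this guide's naming:

```shell
# Sketch: verify each required RAC name appears (outside comments) in a
# hosts-format file; prints the missing names and returns non-zero if any.
check_hosts() {
  file=$1; shift
  missing=0
  for name in "$@"; do
    awk -v n="$name" '$1 !~ /^#/ { for (i = 2; i <= NF; i++) if ($i == n) found = 1 }
                      END { exit !found }' "$file" \
      || { echo "missing: $name"; missing=1; }
  done
  return $missing
}

# e.g. check_hosts /etc/hosts rac01 rac02 rac01-priv rac02-priv \
#        rac01-vip rac02-vip rac-cluster-scan
```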

    Install the cvuqdisk Package

    The package is in the rpm directory inside the database installation zip.

    rpm -ivh cvuqdisk-1.0.10-1.rpm

    Install GI

    Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), GI uses an image-based installation: the Grid file Oracle ships is essentially a ready-made ORACLE_HOME.
    So unzip the Grid installation file directly into the Grid ORACLE_HOME created earlier, then run gridSetup.sh to launch the GUI and start configuring Grid.
    # su - grid
    $ cd /u01/app/12.2.0/grid
    $ unzip /oracle_soft/grid_12201.zip
    $ ./gridSetup.sh
    Choose "Configure Oracle Grid Infrastructure for a New Cluster" and click Next.

    If DNS and GNS are not configured in the environment, the prerequisite check reports DNS and resolv.conf errors; since we resolve names through the hosts file, these can simply be skipped.

    Copy the installation files to one machine and start the GI install there; the installer copies the files to the other nodes and installs them in sync.

    At the end of the installation, run the root scripts as the root user. They must first be run one at a time on the local node; only after they succeed there can they be run on the other nodes in parallel.
    [root@rac01 ~]# sh /u01/app/oraInventory/orainstRoot.sh
    Changing permissions of /u01/app/oraInventory.
    Adding read,write permissions for group.
    Removing read,write,execute permissions for world.

    Changing groupname of /u01/app/oraInventory to oinstall.
    The execution of the script is complete.
    [root@rac01 ~]# sh /u01/app/12.2.0/grid/root.sh
    Performing root user operation.

    The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /u01/app/12.2.0/grid

    Enter the full pathname of the local bin directory: [/usr/local/bin]:
    Copying dbhome to /usr/local/bin ...
    Copying oraenv to /usr/local/bin ...
    Copying coraenv to /usr/local/bin ...

    Creating /etc/oratab file...
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root script.
    Now product-specific root actions will be performed.
    Relinking oracle with rac_on option
    Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params
    The log of current session can be found at:
    /u01/app/grid/crsdata/rac01/crsconfig/rootcrs_rac01_2017-08-16_02-48-07PM.log
    2017/08/16 14:48:16 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
    2017/08/16 14:48:16 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
    2017/08/16 14:48:59 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
    2017/08/16 14:48:59 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
    2017/08/16 14:49:04 CLSRSC-363: User ignored prerequisites during installation
    2017/08/16 14:49:04 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
    2017/08/16 14:49:05 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
    2017/08/16 14:49:06 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.
    2017/08/16 14:49:12 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.
    2017/08/16 14:49:13 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.
    2017/08/16 14:49:13 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.
    2017/08/16 14:49:44 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
    2017/08/16 14:49:51 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
    2017/08/16 14:49:52 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
    2017/08/16 14:49:57 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
    2017/08/16 14:50:12 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
    2017/08/16 14:50:35 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
    2017/08/16 14:50:40 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
    CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac01'
    CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac01' has completed
    CRS-4133: Oracle High Availability Services has been stopped.
    CRS-4123: Oracle High Availability Services has been started.
    2017/08/16 14:51:20 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
    2017/08/16 14:51:25 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
    CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac01'
    CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac01' has completed
    CRS-4133: Oracle High Availability Services has been stopped.
    CRS-4123: Oracle High Availability Services has been started.
    CRS-2672: Attempting to start 'ora.evmd' on 'rac01'
    CRS-2672: Attempting to start 'ora.mdnsd' on 'rac01'
    CRS-2676: Start of 'ora.mdnsd' on 'rac01' succeeded
    CRS-2676: Start of 'ora.evmd' on 'rac01' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'rac01'
    CRS-2676: Start of 'ora.gpnpd' on 'rac01' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac01'
    CRS-2672: Attempting to start 'ora.gipcd' on 'rac01'
    CRS-2676: Start of 'ora.cssdmonitor' on 'rac01' succeeded
    CRS-2676: Start of 'ora.gipcd' on 'rac01' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'rac01'
    CRS-2672: Attempting to start 'ora.diskmon' on 'rac01'
    CRS-2676: Start of 'ora.diskmon' on 'rac01' succeeded
    CRS-2676: Start of 'ora.cssd' on 'rac01' succeeded

    Disk groups created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-170816PM025203.log for details.

    2017/08/16 14:52:52 CLSRSC-482: Running command: '/u01/app/12.2.0/grid/bin/ocrconfig -upgrade grid oinstall'
    CRS-2672: Attempting to start 'ora.crf' on 'rac01'
    CRS-2672: Attempting to start 'ora.storage' on 'rac01'
    CRS-2676: Start of 'ora.storage' on 'rac01' succeeded
    CRS-2676: Start of 'ora.crf' on 'rac01' succeeded
    CRS-2672: Attempting to start 'ora.crsd' on 'rac01'
    CRS-2676: Start of 'ora.crsd' on 'rac01' succeeded
    CRS-4256: Updating the profile
    Successful addition of voting disk 252d21a926494fd5bfdcbc163b9fd646.
    Successful addition of voting disk 6f00d3b3ba454f14bfc15f10a6466e3e.
    Successful addition of voting disk 5aed4ef45df94ff1bf4934d8883d39a3.
    Successfully replaced voting disk group with +DATA.
    CRS-4256: Updating the profile
    CRS-4266: Voting file(s) successfully replaced
    ## STATE File Universal Id File Name Disk group
    -- ----- ----------------- --------- ---------
    1. ONLINE 252d21a926494fd5bfdcbc163b9fd646 (/dev/oracleasm/disks/OCR03) [DATA]
    2. ONLINE 6f00d3b3ba454f14bfc15f10a6466e3e (/dev/oracleasm/disks/OCR02) [DATA]
    3. ONLINE 5aed4ef45df94ff1bf4934d8883d39a3 (/dev/oracleasm/disks/OCR01) [DATA]
    Located 3 voting disk(s).
    CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac01'
    CRS-2673: Attempting to stop 'ora.crsd' on 'rac01'
    CRS-2677: Stop of 'ora.crsd' on 'rac01' succeeded
    CRS-2673: Attempting to stop 'ora.storage' on 'rac01'
    CRS-2673: Attempting to stop 'ora.crf' on 'rac01'
    CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac01'
    CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac01'
    CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac01'
    CRS-2677: Stop of 'ora.drivers.acfs' on 'rac01' succeeded
    CRS-2677: Stop of 'ora.gpnpd' on 'rac01' succeeded
    CRS-2677: Stop of 'ora.crf' on 'rac01' succeeded
    CRS-2677: Stop of 'ora.storage' on 'rac01' succeeded
    CRS-2673: Attempting to stop 'ora.asm' on 'rac01'
    CRS-2677: Stop of 'ora.mdnsd' on 'rac01' succeeded
    CRS-2677: Stop of 'ora.asm' on 'rac01' succeeded
    CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac01'
    CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac01' succeeded
    CRS-2673: Attempting to stop 'ora.ctssd' on 'rac01'
    CRS-2673: Attempting to stop 'ora.evmd' on 'rac01'
    CRS-2677: Stop of 'ora.evmd' on 'rac01' succeeded
    CRS-2677: Stop of 'ora.ctssd' on 'rac01' succeeded
    CRS-2673: Attempting to stop 'ora.cssd' on 'rac01'
    CRS-2677: Stop of 'ora.cssd' on 'rac01' succeeded
    CRS-2673: Attempting to stop 'ora.gipcd' on 'rac01'
    CRS-2677: Stop of 'ora.gipcd' on 'rac01' succeeded
    CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac01' has completed
    CRS-4133: Oracle High Availability Services has been stopped.
    2017/08/16 14:54:18 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
    CRS-4123: Starting Oracle High Availability Services-managed resources
    CRS-2672: Attempting to start 'ora.mdnsd' on 'rac01'
    CRS-2672: Attempting to start 'ora.evmd' on 'rac01'
    CRS-2676: Start of 'ora.mdnsd' on 'rac01' succeeded
    CRS-2676: Start of 'ora.evmd' on 'rac01' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'rac01'
    CRS-2676: Start of 'ora.gpnpd' on 'rac01' succeeded
    CRS-2672: Attempting to start 'ora.gipcd' on 'rac01'
    CRS-2676: Start of 'ora.gipcd' on 'rac01' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac01'
    CRS-2676: Start of 'ora.cssdmonitor' on 'rac01' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'rac01'
    CRS-2672: Attempting to start 'ora.diskmon' on 'rac01'
    CRS-2676: Start of 'ora.diskmon' on 'rac01' succeeded
    CRS-2676: Start of 'ora.cssd' on 'rac01' succeeded
    CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac01'
    CRS-2672: Attempting to start 'ora.ctssd' on 'rac01'
    CRS-2676: Start of 'ora.ctssd' on 'rac01' succeeded
    CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac01' succeeded
    CRS-2672: Attempting to start 'ora.asm' on 'rac01'
    CRS-2676: Start of 'ora.asm' on 'rac01' succeeded
    CRS-2672: Attempting to start 'ora.storage' on 'rac01'
    CRS-2676: Start of 'ora.storage' on 'rac01' succeeded
    CRS-2672: Attempting to start 'ora.crf' on 'rac01'
    CRS-2676: Start of 'ora.crf' on 'rac01' succeeded
    CRS-2672: Attempting to start 'ora.crsd' on 'rac01'
    CRS-2676: Start of 'ora.crsd' on 'rac01' succeeded
    CRS-6023: Starting Oracle Cluster Ready Services-managed resources
    CRS-6017: Processing resource auto-start for servers: rac01
    CRS-6016: Resource auto-start has completed for server rac01
    CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
    CRS-4123: Oracle High Availability Services has been started.
    2017/08/16 14:57:01 CLSRSC-343: Successfully started Oracle Clusterware stack
    2017/08/16 14:57:01 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
    CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac01'
    CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac01' succeeded
    CRS-2672: Attempting to start 'ora.asm' on 'rac01'
    CRS-2676: Start of 'ora.asm' on 'rac01' succeeded
    CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac01'
    CRS-2676: Start of 'ora.DATA.dg' on 'rac01' succeeded
    2017/08/16 15:01:35 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
    2017/08/16 15:04:36 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

    Configure Data Disks with asmca

    After GI is installed, use asmca to create the ASM disk group that will hold the business database, in preparation for the Oracle Database install.
    # su - grid
    $ /u01/app/12.2.0/grid/bin/asmca
    In the ASM Configuration Assistant, open the Disk Groups page: the mounted OCR disk group is listed, and the ASM instances on both nodes are UP.

    Click Create to build the ASM disk group for the business database, name it DATA, select the disks created earlier, and click OK to finish.

    The result after completion.

    Install Oracle Database

    With Grid installed, the next step is to install the Oracle Database software and the business database instance.
    Upload linuxx64_12201_database.zip to any directory on rac01, unzip it, and launch runInstaller as the oracle user.
    Once installation starts, Oracle copies the software to the remaining nodes and installs them in sync.

    In step 5, the default is "Policy managed"; unless you have special requirements, choose "Admin managed".
    In step 6, enter the oracle user's password and click Setup; the installer configures passwordless SSH for the oracle user on all nodes automatically.

    On machines with plenty of memory, it is generally better not to enable Automatic Memory Management; just adjust the memory available to Oracle.

    Since DNS and GNS are not installed, the resolv.conf error can be ignored.

    Cluster Status After Installation

    [grid@rac01 ~]$ crsctl status res -t
    --------------------------------------------------------------------------------
    Name Target State Server State details
    --------------------------------------------------------------------------------
    Local Resources
    --------------------------------------------------------------------------------
    ora.ASMNET1LSNR_ASM.lsnr
    ONLINE ONLINE rac01 STABLE
    ONLINE ONLINE rac02 STABLE
    ora.DATA.dg
    ONLINE ONLINE rac01 STABLE
    ONLINE ONLINE rac02 STABLE
    ora.LISTENER.lsnr
    ONLINE ONLINE rac01 STABLE
    ONLINE ONLINE rac02 STABLE
    ora.OCR.dg
    ONLINE ONLINE rac01 STABLE
    ONLINE ONLINE rac02 STABLE
    ora.chad
    ONLINE ONLINE rac01 STABLE
    ONLINE ONLINE rac02 STABLE
    ora.net1.network
    ONLINE ONLINE rac01 STABLE
    ONLINE ONLINE rac02 STABLE
    ora.ons
    ONLINE ONLINE rac01 STABLE
    ONLINE ONLINE rac02 STABLE
    ora.proxy_advm
    OFFLINE OFFLINE rac01 STABLE
    OFFLINE OFFLINE rac02 STABLE
    --------------------------------------------------------------------------------
    Cluster Resources
    --------------------------------------------------------------------------------
    ora.LISTENER_SCAN1.lsnr
    1 ONLINE ONLINE rac01 STABLE
    ora.MGMTLSNR
    1 ONLINE ONLINE rac01 169.254.107.91 10.0. 0.1,STABLE
    ora.asm
    1 ONLINE ONLINE rac01 Started,STABLE
    2 ONLINE ONLINE rac02 Started,STABLE
    3 OFFLINE OFFLINE STABLE
    ora.cvu
    1 ONLINE ONLINE rac01 STABLE
    ora.mgmtdb
    1 ONLINE ONLINE rac01 Open,STABLE
    ora.qosmserver
    1 ONLINE ONLINE rac01 STABLE
    ora.rac01.vip
    1 ONLINE ONLINE rac01 STABLE
    ora.rac02.vip
    1 ONLINE ONLINE rac02 STABLE
    ora.scan1.vip
    1 ONLINE ONLINE rac01 STABLE
    ora.starboss.db
    1 ONLINE ONLINE rac02 Open,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE
    2 ONLINE ONLINE rac01 Open,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE

    --------------- Appendix -----------------

    Starting and Stopping the RAC Cluster

    The RAC database is now fully automated: when the operating system boots, the ASM devices mount automatically and the database starts along with them.
    To start or stop the database manually, follow the instructions below.

    Start and stop the Oracle database instances

    Listener:
    [root@RAC01 ~]$ srvctl start listener --start the listener
    [root@RAC01 ~]$ srvctl stop listener --stop the listener

    Database:
    [root@RAC01 ~]$ srvctl start database -d starboss --start the database
    [root@RAC01 ~]$ srvctl stop database -d starboss --stop the database
    or
    [root@RAC01 ~]$ srvctl stop database -d starboss -o immediate --stop with the immediate option
    [root@RAC01 ~]$ srvctl start database -d starboss -o open/mount/'read only' --start to open, mount, or read-only mode

    Start and stop the Oracle RAC cluster stack

    This stops the database and all other cluster services on every node (ASM instances, VIPs, listeners, and the RAC high-availability stack):
    [root@rac01 ~]# crsctl start cluster -all --start
    [root@rac01 ~]# crsctl stop cluster -all --stop

    Increase Swap Space

    [root@rac02 grid]# free -m
    total used free shared buff/cache available
    Mem: 11757 136 5078 8 6542 11539
    Swap: 6015 0 6015
    [root@rac02 grid]# mkdir /swap
    [root@rac02 grid]# dd if=/dev/zero of=/swap/swap bs=1024 count=6291456 # one block is 1K, so 6291456 blocks = 6G
    6291456+0 records in
    6291456+0 records out
    6442450944 bytes (6.4 GB) copied, 8.93982 s, 721 MB/s
    [root@rac02 grid]# /sbin/mkswap /swap/swap
    Setting up swapspace version 1, size = 6291452 KiB
    no label, UUID=35c98431-eb56-4ad7-99cd-d3414cce75ca
    [root@rac02 grid]# /sbin/swapon /swap/swap
    swapon: /swap/swap: insecure permissions 0644, 0600 suggested.
    [root@rac02 grid]# free -m
    total used free shared buff/cache available
    Mem: 11757 141 5074 8 6542 11534
    Swap: 12159 0 12159
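For reference, Oracle's install documentation sizes swap relative to RAM; the thresholds below follow my reading of the 12c guide (1-2 GB RAM: 1.5x RAM; 2-16 GB: equal to RAM; above 16 GB: 16 GB) and are easy to encode:

```shell
# Sketch of the documented swap sizing guideline for Oracle 12c on Linux.
# Input and output are in MB; treat the exact thresholds as an assumption
# to be confirmed against your release's install guide.
swap_target_mb() {
  ram_mb=$1
  if [ "$ram_mb" -le 2048 ]; then
    echo $(( ram_mb * 3 / 2 ))     # 1-2 GB RAM: 1.5x RAM
  elif [ "$ram_mb" -le 16384 ]; then
    echo "$ram_mb"                 # 2-16 GB RAM: equal to RAM
  else
    echo 16384                     # above 16 GB: 16 GB is enough
  fi
}

swap_target_mb 11757   # the node above has ~11.5 GB RAM -> 11757
```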

    Check the Voting Disks

    [grid@rac01 ~]$ crsctl query css votedisk
    ## STATE File Universal Id File Name Disk group
    -- ----- ----------------- --------- ---------
    1. ONLINE 95b79a3ef6274fdebfe1d1323f0cc829 (/dev/oracleasm/disks/OCR03) [OCR]
    2. ONLINE 404499d583f04f15bf24c89a4269bbe9 (/dev/oracleasm/disks/OCR02) [OCR]
    3. ONLINE 6e010b265aee4f15bfd1d4260ab5ac9c (/dev/oracleasm/disks/OCR01) [OCR]
    Located 3 voting disk(s).

    Check RAC Services

    Run the following command as the grid user on any node:
    [grid@rac01 ~]$ crsctl check cluster -all
    **************************************************************
    rac01:
    CRS-4537: Cluster Ready Services is online
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online
    **************************************************************
    rac02:
    CRS-4537: Cluster Ready Services is online
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online
    **************************************************************

    Manually Relocate SCAN to Another Node

    /u01/app/12.2.0/grid/bin/srvctl relocate scan_listener -i 1 -n rac02
    After the command completes, both the scan_listener and the scan_vip move to the specified node.

    Enable EM Access

    SQL> exec DBMS_XDB_CONFIG.SETHTTPSPORT(5501)
    SQL> exec DBMS_XDB_CONFIG.SETHTTPPORT(5500)

    DNS Configuration

    The advantage of resolving the SCAN IP through DNS rather than the hosts file: the hosts file can hold only one SCAN IP, so external programs can connect to only one node of the cluster, whereas DNS can hold multiple SCAN IPs that resolve across the nodes, achieving load balancing.

    Configure the DNS Server

    [root@rac-dns ~]# cat /etc/named.conf
    ……

    options {
    listen-on port 53 { 192.168.32.119; };    //DNS server address
    // listen-on-v6 port 53 { ::1; };    //IPv6, commented out
    directory "/var/named";
    dump-file "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    allow-query { any; };

    …….

    Add the forward and reverse zone definitions

    [root@rac-dns ~]# cat /etc/named.rfc1912.zones
    … …

    zone "32.168.192.in-addr.arpa" IN {   //the reverse zone name must use this format, with the IP octets reversed
    type master;
    file "32.168.192.in-addr.arpa";   //the file name is arbitrary
    allow-update { none; };
    };

    zone "oracle.local" IN {
    type master;
    file "oracle.local.zone";
    allow-update { none; };
    };

    Forward zone

    [root@rac-dns ~]# cat /var/named/oracle.local.zone
    $TTL 86400
    @ IN SOA dns.oracle.local. root.oracle.local.(
    42 ; serial (d. adams)
    3H ; refresh
    15M ; retry
    1W ; expiry
    1D ) ; minimum
    @ IN NS dns.oracle.local.
    dns IN A 192.168.32.119
    rac01 IN A 192.168.32.110
    rac02 IN A 192.168.32.113
    rac-cluster-scan IN A 192.168.32.120
    rac-cluster-scan IN A 192.168.32.121
    rac-cluster-scan IN A 192.168.32.122
    rac01-vip IN A 192.168.32.115
    rac02-vip IN A 192.168.32.116

    * The DNS server's own record must be included as well.

    Reverse zone

    [root@rac-dns ~]# cat /var/named/32.168.192.in-addr.arpa
    $TTL 86400
    @ IN SOA dns.oracle.local. root.oracle.local. (
    1997022700 ; Serial
    28800 ; Refresh
    14400 ; Retry
    3600000 ; Expire
    86400 ) ; Minimum
    @ IN NS dns.oracle.local.
    110 IN PTR rac01.oracle.local.
    113 IN PTR rac02.oracle.local.
    120 IN PTR rac-cluster-scan.oracle.local.
    121 IN PTR rac-cluster-scan.oracle.local.
    122 IN PTR rac-cluster-scan.oracle.local.
    115 IN PTR rac01-vip.oracle.local.
    116 IN PTR rac02-vip.oracle.local.

    Notes:
    1. The first column is the host portion of the IP; the last column is the corresponding domain name.
    2. The IP octets at the front of the reverse zone file name must be written in reverse order, otherwise clients will fail to resolve.
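Since every PTR line can be derived from the forward A records, the reverse zone body can be generated; this is my own sketch and assumes a /24 zone where only the last octet appears in column one:

```shell
# Sketch: turn "name ip" pairs into PTR records for a /24 reverse zone
# (last octet in column one, fully qualified name with trailing dot).
gen_ptr_records() {
  domain=$1
  while read -r name ip; do
    [ -n "$name" ] || continue
    echo "${ip##*.} IN PTR ${name}.${domain}."
  done
}

# Usage: gen_ptr_records oracle.local <<'LIST'
# rac01 192.168.32.110
# rac-cluster-scan 192.168.32.120
# LIST
```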

    Start the DNS service

    # systemctl start named
    [root@rac-dns ~]# systemctl status named
    ● named.service - Berkeley Internet Name Domain (DNS)
    Loaded: loaded (/usr/lib/systemd/system/named.service; enabled; vendor preset: disabled)
    Active: active (running) since Fri 2017-08-25 15:05:32 CST; 1s ago
    Process: 13679 ExecStop=/bin/sh -c /usr/sbin/rndc stop > /dev/null 2>&1 || /bin/kill -TERM $MAINPID (code=exited, status=0/SUCCESS)
    Process: 13691 ExecStart=/usr/sbin/named -u named $OPTIONS (code=exited, status=0/SUCCESS)
    Process: 13688 ExecStartPre=/bin/bash -c if [ ! "$DISABLE_ZONE_CHECKING" == "yes" ]; then /usr/sbin/named-checkconf -z /etc/named.conf; else echo "Checking of zone files is disabled"; fi (code=exited, status=0/SUCCESS)
    Main PID: 13694 (named)
    CGroup: /system.slice/named.service
    └─13694 /usr/sbin/named -u named

    Aug 25 15:05:32 rac-dns named[13694]: zone 0.in-addr.arpa/IN: loaded serial 0
    Aug 25 15:05:32 rac-dns systemd[1]: Started Berkeley Internet Name Domain (DNS).
    Aug 25 15:05:32 rac-dns named[13694]: zone 1.0.0.127.in-addr.arpa/IN: loaded serial 0
    Aug 25 15:05:32 rac-dns named[13694]: zone localhost/IN: loaded serial 0
    Aug 25 15:05:32 rac-dns named[13694]: zone 32.168.192.in-addr.arpa/IN: loaded serial 1997022700
    Aug 25 15:05:32 rac-dns named[13694]: zone localhost.localdomain/IN: loaded serial 0
    Aug 25 15:05:32 rac-dns named[13694]: zone 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa/IN: loaded serial 0
    Aug 25 15:05:32 rac-dns named[13694]: zone oracle.local/IN: loaded serial 42
    Aug 25 15:05:32 rac-dns named[13694]: all zones loaded
    Aug 25 15:05:32 rac-dns named[13694]: running

    Client hosts file

    With DNS resolution enabled, the client's hosts file looks roughly like this:
    cat /etc/hosts
    #public
    192.168.32.110 rac01.oracle.local rac01
    192.168.32.113 rac02.oracle.local rac02

    #private
    10.0.0.1 rac01-priv
    10.0.0.2 rac02-priv

    #virtual
    192.168.32.115 rac01-vip.oracle.local rac01-vip
    192.168.32.116 rac02-vip.oracle.local rac02-vip

    DNS Test

    On the client, add the DNS server by appending the search domain and server address to resolv.conf:
    # echo "search oracle.local" >> /etc/resolv.conf
    # echo "nameserver 192.168.32.119" >> /etc/resolv.conf

    Forward lookup test:
    [root@rac02 ~]# nslookup rac01.oracle.local
    Server: 192.168.32.119
    Address: 192.168.32.119#53

    Name: rac01.oracle.local
    Address: 192.168.32.110

    Reverse lookup test:
    [root@rac02 ~]# nslookup 192.168.32.110
    Server: 192.168.32.119
    Address: 192.168.32.119#53

    110.32.168.192.in-addr.arpa name = rac01.oracle.local.

    GNS Configuration

    Configuring GNS lets the system assign VIPs automatically; all it needs is a DHCP service on the DNS server. In my view the benefit to a real RAC cluster is modest, icing on the cake at best, and running GNS costs an extra server.

    Check the package

    # rpm --query dhcp

    dhcp-3.0.5-18.el5

    Configure the DHCP service

    # vi /etc/dhcp/dhcpd.conf

    ddns-update-style interim;
    ignore client-updates;

    subnet 192.168.32.0 netmask 255.255.255.0 {
      option routers 192.168.32.1;               # client default gateway
      option subnet-mask 255.255.255.0;          # client subnet mask
      option broadcast-address 192.168.32.255;   # broadcast address
      option domain-name "oracle.local";         # DNS search domain
      option domain-name-servers 192.168.32.119; # DNS server address
      range 192.168.32.2 192.168.32.254;         # address range handed out by DHCP
      default-lease-time 21600;                  # default lease time
      max-lease-time 43200;                      # maximum lease time
    }

    Start the DHCP service and enable it at boot

    [root@rac-dns ~]# systemctl enable dhcpd.service && systemctl start dhcpd.service

    Configure GNS in GI

    In step 3 of the GI installation, "Grid Plug and Play":

    Check "Configure GNS", "Configure nodes Virtual IPs as assigned by the Dynamic Networks", and "Create a new GNS".

    GNS VIP Address: the IP address of the GNS server

    GNS Sub Domain: the DNS search domain. Also comment out the VIP entries in each node's /etc/hosts file.

  • Original post: https://www.cnblogs.com/zhuhaiqing/p/7444055.html