    Changing the Hostnames and IP Addresses of an Oracle RAC Cluster

    Tags: RAC

     

    When this RAC was installed, the installation scripts were run in the wrong order, so instance orcl1 ended up on node rac2 and orcl2 on node rac1. That looks awkward, so I took the opportunity to practice changing the hostnames and IP addresses.

    Original IP address and hostname settings:

    #public IP
    172.12.1.11  rac1.oracle.com  rac1
    172.12.1.12  rac2.oracle.com  rac2
    
    #private IP
    10.10.10.1      rac1-priv.oracle.com  rac1-priv
    10.10.10.2      rac2-priv.oracle.com  rac2-priv
    
    #virtual IP
    172.12.1.21  rac1-vip.oracle.com  rac1-vip
    172.12.1.22  rac2-vip.oracle.com  rac2-vip
    
    #scan IP
    172.12.1.31  rac-scan.oracle.com  rac-scan
    
    Settings after the change:
    #public IP
    172.12.1.101  node1.oracle.com  node1
    172.12.1.102  node2.oracle.com  node2
    
    #private IP
    10.10.10.11      node1-priv.oracle.com  node1-priv
    10.10.10.12      node2-priv.oracle.com  node2-priv
    
    #virtual IP
    172.12.1.201  node1-vip.oracle.com  node1-vip
    172.12.1.202  node2-vip.oracle.com  node2-vip
    
    #scan IP
    172.12.1.110  node-scan.oracle.com  node-scan

    Change procedure:
    Remove node rac2 from the cluster, change its hostname and IP addresses, update /etc/hosts on both nodes, and add the node back into the cluster; then do the same for rac1: remove it, change its hostname and IP addresses, update /etc/hosts on both nodes, and add it back into the cluster.
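
    The detailed steps below follow this per-node command sequence; the condensed outline here is only a checklist of the commands used later, with placeholders in angle brackets standing in for the concrete node names:

    ## On the node being removed (root): deconfigure Clusterware on that node
    /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force
    ## On the surviving node (root): drop the node from the cluster
    /u01/app/11.2.0/grid/bin/crsctl delete node -n <old_node>
    ## On the removed node (grid): update the inventory, then deinstall the Grid home locally
    $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={<old_node>}" CRS=TRUE -silent -local
    $ORACLE_HOME/deinstall/deinstall -local
    ## Change the hostname, IP addresses and /etc/hosts, rebuild SSH equivalence, then on the surviving node (grid):
    cluvfy stage -pre nodeadd -n <new_node> -fixup -fixupdir /tmp -verbose
    $ORACLE_HOME/oui/bin/addNode.sh "CLUSTER_NEW_NODES={<new_node>}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={<new_node>-vip}" -silent
    ## On the renamed node (root): run root.sh, then relocate the database instance (grid)
    /u01/app/11.2.0/grid/root.sh
    srvctl modify instance -d orcl -i <instance_name> -n <new_node>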

    Detailed steps:

    1. Check that both nodes are Active and Unpinned; if a node is Pinned, unpin it with crsctl unpin css.

    The following explains what pinning means:
    When Oracle Clusterware 11g release 11.2 is installed on a cluster with no previous Oracle software version,
    it configures cluster nodes dynamically, which is compatible with Oracle Database Release 11.2 and later,
    but Oracle Database 10g and 11.1 require a persistent configuration.
    This process of association of a node name with a node number is called pinning.

     
     

    Note:
    During an upgrade, all cluster member nodes are pinned automatically, and no manual pinning is required for
    existing databases. This procedure is required only if you install older database versions after installing
    Oracle Grid Infrastructure release 11.2 software.

    A short experiment with pinning and unpinning:

    [root@rac2 ~]# /u01/app/11.2.0/grid/bin/crsctl pin css -n rac2
    CRS-4664: Node rac2 successfully pinned.
    
    [grid@rac2 ~]$ olsnodes -n -s -t
    rac2    1       Active  Pinned
    rac1    2       Active  Unpinned
    
    [root@rac2 ~]# /u01/app/11.2.0/grid/bin/crsctl unpin css -n rac2
    CRS-4667: Node rac2 successfully unpinned.
    
    [grid@rac2 ~]$ olsnodes -n -s -t
    rac2    1       Active  Unpinned
    rac1    2       Active  Unpinned

    2. As root on node rac2, run the following from the GRID_HOME:

    [root@rac2 ~]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force   -- removes the cluster configuration from this node; if this were the last node, use /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force -lastnode
    2017-02-26 06:43:57: Parsing the host name
    2017-02-26 06:43:57: Checking for super user privileges
    2017-02-26 06:43:57: User has super user privileges
    Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
    VIP exists.:rac1
    VIP exists.: /rac1-vip/172.12.1.21/255.255.255.0/eth0
    VIP exists.:rac2
    VIP exists.: /rac2-vip/172.12.1.22/255.255.255.0/eth0
    GSD exists.
    ONS daemon exists. Local port 6100, remote port 6200
    eONS daemon exists. Multicast port 22702, multicast IP address 234.112.191.105, listening port 2016
    ACFS-9200: Supported
    CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac2'
    CRS-2677: Stop of 'ora.registry.acfs' on 'rac2' succeeded
    CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
    CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
    CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac2'
    CRS-2673: Attempting to stop 'ora.OCR_VOTEDISK.dg' on 'rac2'
    CRS-2673: Attempting to stop 'ora.orcl.db' on 'rac2'
    CRS-2677: Stop of 'ora.OCR_VOTEDISK.dg' on 'rac2' succeeded
    CRS-2677: Stop of 'ora.orcl.db' on 'rac2' succeeded
    CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac2'
    CRS-2673: Attempting to stop 'ora.FRA.dg' on 'rac2'
    CRS-2677: Stop of 'ora.DATA.dg' on 'rac2' succeeded
    CRS-2677: Stop of 'ora.FRA.dg' on 'rac2' succeeded
    CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
    CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
    CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2' has completed
    CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded
    CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
    CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
    CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac2'
    CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
    CRS-2673: Attempting to stop 'ora.evmd' on 'rac2'
    CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
    CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac2'
    CRS-2677: Stop of 'ora.cssdmonitor' on 'rac2' succeeded
    CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
    CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
    CRS-2677: Stop of 'ora.evmd' on 'rac2' succeeded
    CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
    CRS-2677: Stop of 'ora.drivers.acfs' on 'rac2' succeeded
    CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
    CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
    CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
    CRS-2673: Attempting to stop 'ora.diskmon' on 'rac2'
    CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
    CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
    CRS-2677: Stop of 'ora.diskmon' on 'rac2' succeeded
    CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
    CRS-4133: Oracle High Availability Services has been stopped.
    Successfully deconfigured Oracle clusterware stack on this node

    The resource status on the other node now shows only rac1:

    [grid@rac1 ~]$ crsctl stat res -t
    --------------------------------------------------------------------------------
    NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
    --------------------------------------------------------------------------------
    Local Resources
    --------------------------------------------------------------------------------
    ora.DATA.dg
                   ONLINE  ONLINE       rac1                                         
    ora.FRA.dg
                   ONLINE  ONLINE       rac1                                         
    ora.LISTENER.lsnr
                   ONLINE  ONLINE       rac1                                         
    ora.OCR_VOTEDISK.dg
                   ONLINE  ONLINE       rac1                                         
    ora.asm
                   ONLINE  ONLINE       rac1                     Started             
    ora.eons
                   ONLINE  ONLINE       rac1                                         
    ora.gsd
                   OFFLINE OFFLINE      rac1                                         
    ora.net1.network
                   ONLINE  ONLINE       rac1                                         
    ora.ons
                   ONLINE  ONLINE       rac1                                         
    ora.registry.acfs
                   ONLINE  ONLINE       rac1                                         
    --------------------------------------------------------------------------------
    Cluster Resources
    --------------------------------------------------------------------------------
    ora.LISTENER_SCAN1.lsnr
          1        ONLINE  ONLINE       rac1                                         
    ora.oc4j
          1        OFFLINE OFFLINE                                                   
    ora.orcl.db
          1        ONLINE  ONLINE       rac1                                         
          2        ONLINE  OFFLINE                               Instance Shutdown   
    ora.rac1.vip
          1        ONLINE  ONLINE       rac1                                         
    ora.scan1.vip
          1        ONLINE  ONLINE       rac1       

    3. As root on node rac1, delete node rac2 from the cluster:

    [root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl delete node -n rac2
    CRS-4661: Node rac2 successfully deleted.

    4. As the grid user on the node being removed, update the node list in the inventory:

    [grid@rac2 ~]$ echo $ORACLE_HOME
    /u01/app/11.2.0/grid
    [grid@rac2 ~]$ /u01/app/11.2.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES={rac2}" CRS=TRUE -silent -local
    Starting Oracle Universal Installer...
    
    Checking swap space: must be greater than 500 MB.   Actual 4094 MB    Passed
    The inventory pointer is located at /etc/oraInst.loc
    The inventory is located at /u01/app/oraInventory
    'UpdateNodeList' was successful.

    5. Clean up the Clusterware home installation on the node being removed. As the grid user, run:
    $ Grid_home/deinstall/deinstall -local
    Be sure to include the -local option; otherwise the Clusterware home would be removed on all nodes.
    The tool prompts interactively; press Enter to accept the defaults, and answer y at the final confirmation.

     
    [grid@rac2 deinstall]$ ./deinstall -local
    Checking for required files and bootstrapping ...
    Please wait ...
    Location of logs /u01/app/oraInventory/logs/
    
    ############ ORACLE DEINSTALL & DECONFIG TOOL START ############
    
    
    ######################## CHECK OPERATION START ########################
    Install check configuration START
    
    
    Checking for existence of the Oracle home location /u01/app/11.2.0/grid
    Oracle Home type selected for de-install is: CRS
    Oracle Base selected for de-install is: /u01/app/grid
    Checking for existence of central inventory location /u01/app/oraInventory
    Checking for existence of the Oracle Grid Infrastructure home 
    The following nodes are part of this cluster: rac2
    
    Install check configuration END
    
    Traces log file: /u01/app/oraInventory/logs//crsdc.log
    Enter an address or the name of the virtual IP used on node "rac2"[rac2-vip]
     > 
    
    The following information can be collected by running ifconfig -a on node "rac2"
    Enter the IP netmask of Virtual IP "172.12.1.22" on node "rac2"[255.255.255.0]
     > 
    
    Enter the network interface name on which the virtual IP address "172.12.1.22" is active
     > 
    
    Enter an address or the name of the virtual IP[]
     > 
    
    
    Network Configuration check config START
    
    Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check1620682285648405807.log
    
    Specify all RAC listeners that are to be de-configured [LISTENER,LISTENER_SCAN1]:
    
    Network Configuration check config END
    
    Asm Check Configuration START
    
    ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_check4251457041802335046.log
    
    
    ######################### CHECK OPERATION END #########################
    
    
    ####################### CHECK OPERATION SUMMARY #######################
    Oracle Grid Infrastructure Home is: 
    The cluster node(s) on which the Oracle home exists are: (Please input nodes seperated by ",", eg: node1,node2,...)rac2
    Since -local option has been specified, the Oracle home will be de-installed only on the local node, 'rac2', and the global configuration will be removed.
    Oracle Home selected for de-install is: /u01/app/11.2.0/grid
    Inventory Location where the Oracle home registered is: /u01/app/oraInventory
    Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN1
    Option -local will not modify any ASM configuration.
    Do you want to continue (y - yes, n - no)? [n]: y
    A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2017-02-26_07-15-19-AM.out'
    Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2017-02-26_07-15-19-AM.err'
    
    ######################## CLEAN OPERATION START ########################
    ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_clean1736507612183916135.log
    ASM Clean Configuration END
    
    Network Configuration clean config START
    
    Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean7845023467414677312.log
    
    De-configuring RAC listener(s): LISTENER,LISTENER_SCAN1
    
    De-configuring listener: LISTENER
        Stopping listener on node "rac2": LISTENER
        Warning: Failed to stop listener. Listener may not be running.
    Listener de-configured successfully.
    
    De-configuring listener: LISTENER_SCAN1
        Stopping listener on node "rac2": LISTENER_SCAN1
        Warning: Failed to stop listener. Listener may not be running.
    Listener de-configured successfully.
    
    De-configuring Naming Methods configuration file...
    Naming Methods configuration file de-configured successfully.
    
    De-configuring backup files...
    Backup files de-configured successfully.
    
    The network configuration has been cleaned up successfully.
    
    Network Configuration clean config END
    
    
    ---------------------------------------->
    Remove the directory: /tmp/deinstall2017-02-26_07-15-08-AM on node: 
    Oracle Universal Installer clean START
    
    Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done
    
    Delete directory '/u01/app/11.2.0/grid' on the local node : Done
    
    Delete directory '/u01/app/grid' on the local node : Done
    
    Oracle Universal Installer cleanup was successful.
    
    Oracle Universal Installer clean END
    
    
    Oracle install clean START
    
    Clean install operation removing temporary directory '/tmp/install' on node 'rac2'
    
    Oracle install clean END
    
    Moved default properties file /tmp/deinstall2017-02-26_07-15-08-AM/response/deinstall_Ora11g_gridinfrahome1.rsp as /tmp/deinstall2017-02-26_07-15-08-AM/response/deinstall_Ora11g_gridinfrahome1.rsp3
    
    ######################### CLEAN OPERATION END #########################
    
    
    ####################### CLEAN OPERATION SUMMARY #######################
    Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN1
    Oracle Clusterware was already stopped and de-configured on node "rac2"
    Oracle Clusterware is stopped and de-configured successfully.
    Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
    Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
    Successfully deleted directory '/u01/app/grid' on the local node.
    Oracle Universal Installer cleanup was successful.
    
    Oracle install successfully cleaned up the temporary directories.
    #######################################################################
    
    
    ############# ORACLE DEINSTALL & DECONFIG TOOL END #############
    
    [grid@rac2 deinstall]$ 

    6. As the grid user on node rac1, run the following command to update the node list:

    [grid@rac1 ~]$ /u01/app/11.2.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES={rac1}" CRS=TRUE -silent -local
    Starting Oracle Universal Installer...
    
    Checking swap space: must be greater than 500 MB.   Actual 3110 MB    Passed
    The inventory pointer is located at /etc/oraInst.loc
    The inventory is located at /u01/app/oraInventory
    'UpdateNodeList' was successful.

    7. As the grid user on node rac1, verify that node rac2 has been removed:

    [grid@rac1 ~]$ cluvfy stage -post nodedel -n rac2 -verbose
    
    Performing post-checks for node removal 
    
    Checking CRS integrity...
    The Oracle clusterware is healthy on node "rac1"
    
    CRS integrity check passed
    
    Result: 
    Node removal check passed
    
    Post-check for node removal was successful. 

    8. After rac2 has been removed cleanly, change its hostname and IP addresses and update /etc/hosts on both nodes (a sketch of the OS-level commands follows the file listings below).

    [root@node1 ~]# cat /etc/sysconfig/network
    NETWORKING=yes
    NETWORKING_IPV6=no
    HOSTNAME=node1
    [root@node1 ~]# cat /etc/hosts
    # Do not remove the following line, or various programs
    # that require network functionality will fail.
    127.0.0.1       localhost.localdomain localhost
    ::1             localhost6.localdomain6 localhost6
    
    #public IP
    172.12.1.101  node1.oracle.com  node1
    172.12.1.11   rac1.oracle.com   rac1
    
    #private IP
    10.10.10.11      node1-priv.oracle.com  node1-priv
    10.10.10.1       rac1-priv.oracle.com   rac1-priv
    
    #virtual IP
    172.12.1.201  node1-vip.oracle.com  node1-vip
    172.12.1.21  rac1-vip.oracle.com  rac1-vip
    
    #scan IP
    172.12.1.31  rac-scan.oracle.com  rac-scan
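
    The post only shows the resulting configuration files, not the commands used to rename the host and re-address its interfaces. On the RHEL/OEL 5-style system implied by /etc/sysconfig/network above, the changes would look roughly like the sketch below; eth0 as the public interface comes from the VIP output earlier, while eth1 as the private interface is an assumption, so adjust to your actual device names:

    ## On the node being renamed (root) -- rough sketch, interface names may differ
    hostname node1                                      # set the hostname of the running system
    vi /etc/sysconfig/network                           # HOSTNAME=node1
    vi /etc/sysconfig/network-scripts/ifcfg-eth0        # public NIC: IPADDR=172.12.1.101
    vi /etc/sysconfig/network-scripts/ifcfg-eth1        # private NIC: IPADDR=10.10.10.11
    vi /etc/hosts                                       # new public/private/VIP/SCAN entries (on both nodes)
    service network restart                             # apply the new interface configuration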

    9. As the grid user on rac1, check whether the renamed node meets the prerequisites for node addition:

    [grid@rac1 ~]$ cluvfy stage -pre nodeadd -n node1 -fixup -fixupdir /tmp -verbose 
    Performing pre-checks for node addition 
    
    Checking node reachability...
    
    Check: Node reachability from node "rac1"
      Destination Node                      Reachable?              
      ------------------------------------  ------------------------
      node1                                 yes                     
    Result: Node reachability check passed from node "rac1"
    
    
    Checking user equivalence...
    
    Check: User equivalence for user "grid"
      Node Name                             Comment                 
      ------------------------------------  ------------------------
      node1                                 failed                  
    Result: PRVF-4007 : User equivalence check failed for user "grid"
    
    ERROR: 
    User equivalence unavailable on all the specified nodes
    Verification cannot proceed
    
    
    Pre-check for node addition was unsuccessful on all the nodes. 

    Because the hostname was changed, the SSH user equivalence between the two nodes has to be rebuilt for the grid user:

    /u01/app/11.2.0/grid/deinstall/sshUserSetup.sh -user grid -hosts rac1 node1 -noPromptPassphrase 
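
    Before rerunning cluvfy, it is worth confirming that equivalence really is in place; a simple check (not shown in the original post) is to run a non-interactive command over SSH in both directions as the grid user, and neither call should prompt for a password:

    [grid@rac1 ~]$ ssh node1 date
    [grid@node1 ~]$ ssh rac1 date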

    10. Add node node1. As the grid user on rac1, run the command below; before doing so, however, addNode.sh needs to be modified as described next.

     
    [grid@rac1 ~]$ $ORACLE_HOME/oui/bin/addNode.sh "CLUSTER_NEW_NODES={node1}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node1-vip}" -silent

    A few minor prerequisite checks fail. In the graphical installer such failures can simply be ignored, but in silent mode they cannot, so addNode.sh has to be edited to skip them (an alternative using IGNORE_PREADDNODE_CHECKS is noted after the script):

    #!/bin/sh
    OHOME=/u01/app/11.2.0/grid
    INVPTRLOC=$OHOME/oraInst.loc
    EXIT_CODE=0
    ADDNODE="$OHOME/oui/bin/runInstaller -addNode -invPtrLoc $INVPTRLOC ORACLE_HOME=$OHOME $*"
    if [ "$IGNORE_PREADDNODE_CHECKS" = "Y" -o ! -f "$OHOME/cv/cvutl/check_nodeadd.pl" ]
    then
            $ADDNODE
            EXIT_CODE=$?;
    else
            CHECK_NODEADD="$OHOME/perl/bin/perl $OHOME/cv/cvutl/check_nodeadd.pl -pre ORACLE_HOME=$OHOME $*"
            $CHECK_NODEADD
            EXIT_CODE=$?;
    EXIT_CODE=0   ## line added here so that minor pre-check failures are ignored
            if [ $EXIT_CODE -eq 0 ]
            then
                    $ADDNODE
                    EXIT_CODE=$?;
            fi
    fi
    exit $EXIT_CODE ;
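
    As the if test at the top of the script suggests, an alternative that avoids editing addNode.sh is to export IGNORE_PREADDNODE_CHECKS=Y before running it, so the pre-add checks are skipped entirely; this approach was not used in the original post, but would look like:

    [grid@rac1 ~]$ export IGNORE_PREADDNODE_CHECKS=Y
    [grid@rac1 ~]$ $ORACLE_HOME/oui/bin/addNode.sh "CLUSTER_NEW_NODES={node1}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node1-vip}" -silent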
    
    When addNode.sh completes, it prompts you to run the root script on node1 as the root user:
    WARNING:
    The following configuration scripts need to be executed as the "root" user in each cluster node.
    /u01/app/11.2.0/grid/root.sh #On nodes node1
    To execute the configuration scripts:
        1. Open a terminal window
        2. Log in as "root"
        3. Run the scripts in each cluster node
    
    The Cluster Node Addition of /u01/app/11.2.0/grid was successful.
    Please check '/tmp/silentInstall.log' for more details.

    Running it fails with the following error:

    [root@node1 ~]# /u01/app/11.2.0/grid/root.sh 
    Running Oracle 11g root.sh script...
    
    The following environment variables are set as:
        ORACLE_OWNER= grid
        ORACLE_HOME=  /u01/app/11.2.0/grid
    
    Enter the full pathname of the local bin directory: [/usr/local/bin]: 
    The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) 
    [n]: 
    The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
    [n]: 
    The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
    [n]: 
    
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root.sh script.
    Now product-specific root actions will be performed.
    2017-02-26 09:28:09: Parsing the host name
    2017-02-26 09:28:09: Checking for super user privileges
    2017-02-26 09:28:09: User has super user privileges
    Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
    Creating trace directory
    /u01/app/11.2.0/grid/bin/cluutil -sourcefile /etc/oracle/ocr.loc -sourcenode rac2 -destfile /u01/app/11.2.0/grid/srvm/admin/ocrloc.tmp -nodelist rac2 ... failed
    Unable to copy OCR locations
    validateOCR failed for +OCR_VOTEDISK at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 7979.

    Fix: edit /u01/app/11.2.0/grid/crs/install/crsconfig_params on node1 and change every occurrence of rac2 to node1; the relevant lines afterwards look like this (a one-line sed for the substitution is shown after the listing):

    [root@node1 install]# cat crsconfig_params | grep node1
    HOST_NAME_LIST=rac1,node1
    NODE_NAME_LIST=rac1,node1
    CRS_NODEVIPS='rac1-vip/255.255.255.0/eth0,node1-vip/255.255.255.0/eth0'
    NODELIST=rac1,node1
    NEW_NODEVIPS='rac1-vip/255.255.255.0/eth0,node1-vip/255.255.255.0/eth0'
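
    Assuming the string rac2 appears in that file only where it has to become node1, the substitution can be done in one line (keeping a backup of the original):

    [root@node1 install]# sed -i.bak 's/rac2/node1/g' /u01/app/11.2.0/grid/crs/install/crsconfig_params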

    Re-establish SSH user equivalence as the grid user on node1:

    [grid@node1 grid]$ /u01/app/11.2.0/grid/deinstall/sshUserSetup.sh -user grid -hosts rac1 node1 -noPromptPassphrase

    Then rerun the root script:
    [root@node1 ~]# /u01/app/11.2.0/grid/root.sh

    Update the node on which the instance is registered:

    [grid@node1 grid]$ srvctl modify instance -d orcl -i orcl1 -n node1
    [grid@node1 grid]$ srvctl status database -d orcl
    Instance orcl1 is not running on node node1
    Instance orcl2 is running on node rac1
    [grid@node1 grid]$ srvctl start instance -d orcl -i orcl1
    [grid@node1 grid]$ srvctl status database -d orcl
    Instance orcl1 is running on node node1
    Instance orcl2 is running on node rac1

    Then remove node rac1, rename it to node2, and repeat the same steps for it.

    None of the steps above can be skipped, or errors will follow; if you run into a problem during the procedure, go back and check whether any step was done incorrectly or left out.
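
    Once both nodes have been renamed, the commands already used above make a convenient final sanity check (a suggested wrap-up, not part of the original post); run them as the grid user on either node:

    olsnodes -n -s -t                 # both new node names should be listed as Active
    crsctl stat res -t                # all expected resources ONLINE on node1 and node2
    srvctl status database -d orcl    # orcl1 running on node1, orcl2 running on node2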
