
    Oracle 12C RAC: Adding and Removing GRID Nodes

    1 Removing a Node

     

    1.1 deconfig

    su - grid

    olsnodes -s -t

    [grid@bossdb2 ~]# olsnodes -s -t
    bossdb1 Active  Unpinned
    bossdb2 Active  Unpinned
    

    The node status should be Unpinned. If a node shows as Pinned, first run $GRID_HOME/bin/crsctl unpin css -n <node_name>.
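
    For example, if bossdb2 were pinned, you would unpin it before the removal (a sketch; run as root, node name taken from this cluster):

    /g01/app/12.2.0/bin/crsctl unpin css -n bossdb2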

    Run the following steps as the grid user.

    The procedure differs depending on whether the GI_HOME is a local path or a shared path.
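
    If you are unsure which case applies, one way to check whether the Grid home path is shared is the cluutil utility shipped with GI (a sketch; the same -chkshare call appears in the deconfig log below, and the paths and node names are this cluster's):

    /g01/app/12.2.0/bin/cluutil -chkshare -oh /g01/app/12.2.0 -localnode bossdb2 -nodelist bossdb1,bossdb2

    An output of FALSE means the path is not shared, i.e. a local home.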

    • Local path

      Grid_home/deinstall/deinstall -local

      The output looks like this:

      Checking for required files and bootstrapping ...
      Please wait ...
      Location of logs /tmp/deinstall2020-05-16_11-34-29AM/logs/
      
      ############ ORACLE DECONFIG TOOL START ############
      
      
      ######################### DECONFIG CHECK OPERATION START #########################
      ## [START] Install check configuration ##
      
      
      Checking for existence of the Oracle home location /g01/app/12.2.0
      Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
      Oracle Base selected for deinstall is: /g01/app/grid
      Checking for existence of central inventory location /g01/app/oraInventory
      Checking for existence of the Oracle Grid Infrastructure home /g01/app/12.2.0
      The following nodes are part of this cluster: bossdb2,bossdb1
      Checking for sufficient temp space availability on node(s) : 'bossdb2'
      
      ## [END] Install check configuration ##
      
      Traces log file: /tmp/deinstall2020-05-16_11-34-29AM/logs//crsdc_2020-05-16_11-34-37-AM.log
      
      Network Configuration check config START
      
      Network de-configuration trace file location: /tmp/deinstall2020-05-16_11-34-29AM/logs/netdc_check2020-05-16_11-34-38-AM.log
      
      Network Configuration check config END
      
      Asm Check Configuration START
      
      ASM de-configuration trace file location: /tmp/deinstall2020-05-16_11-34-29AM/logs/asmcadc_check2020-05-16_11-34-38-AM.log
      
      Database Check Configuration START
      
      Database de-configuration trace file location: /tmp/deinstall2020-05-16_11-34-29AM/logs/databasedc_check2020-05-16_11-34-38-AM.log
      
      Oracle Grid Management database was not found in this Grid Infrastructure home
      
      Database Check Configuration END
      
      ######################### DECONFIG CHECK OPERATION END #########################
      
      
      ####################### DECONFIG CHECK OPERATION SUMMARY #######################
      Oracle Grid Infrastructure Home is: /g01/app/12.2.0
      The following nodes are part of this cluster: bossdb2,bossdb1
      The cluster node(s) on which the Oracle home deinstallation will be performed are:bossdb2
      Oracle Home selected for deinstall is: /g01/app/12.2.0
      Inventory Location where the Oracle home registered is: /g01/app/oraInventory
      Option -local will not modify any ASM configuration.
      Oracle Grid Management database was not found in this Grid Infrastructure home
      Do you want to continue (y - yes, n - no)? [n]: y
      A log of this session will be written to: '/tmp/deinstall2020-05-16_11-34-29AM/logs/deinstall_deconfig2020-05-16_11-34-35-AM.out'
      Any error messages from this session will be written to: '/tmp/deinstall2020-05-16_11-34-29AM/logs/deinstall_deconfig2020-05-16_11-34-35-AM.err'
      
      ######################## DECONFIG CLEAN OPERATION START ########################
      Database de-configuration trace file location: /tmp/deinstall2020-05-16_11-34-29AM/logs/databasedc_clean2020-05-16_11-35-15-AM.log
      ASM de-configuration trace file location: /tmp/deinstall2020-05-16_11-34-29AM/logs/asmcadc_clean2020-05-16_11-35-15-AM.log
      ASM Clean Configuration END
      
      Network Configuration clean config START
      
      Network de-configuration trace file location: /tmp/deinstall2020-05-16_11-34-29AM/logs/netdc_clean2020-05-16_11-35-15-AM.log
      
      Network Configuration clean config END
      
      
      Run the following command as the root user or the administrator on node "bossdb2".
      
      /g01/app/12.2.0/crs/install/rootcrs.sh -force  -deconfig -paramfile "/tmp/deinstall2020-05-16_11-34-29AM/response/deinstall_OraGI12Home1.rsp"
      
      Press Enter after you finish running the above commands
      
      <----------------------------------------   =================> Note: be sure the command above has finished running before you press Enter here.
      
      
      ######################### DECONFIG CLEAN OPERATION END #########################
      
      
      ####################### DECONFIG CLEAN OPERATION SUMMARY #######################
      There is no Oracle Grid Management database to de-configure in this Grid Infrastructure home
      Oracle Clusterware is stopped and successfully de-configured on node "bossdb2"
      Oracle Clusterware is stopped and de-configured successfully.
      #######################################################################
      
      
      ############# ORACLE DECONFIG TOOL END #############
      
      Using properties file /tmp/deinstall2020-05-16_11-34-29AM/response/deinstall_2020-05-16_11-34-35-AM.rsp
      Location of logs /tmp/deinstall2020-05-16_11-34-29AM/logs/
      
      ############ ORACLE DEINSTALL TOOL START ############
      
      
      
      
      
      ####################### DEINSTALL CHECK OPERATION SUMMARY #######################
      A log of this session will be written to: '/tmp/deinstall2020-05-16_11-34-29AM/logs/deinstall_deconfig2020-05-16_11-34-35-AM.out'
      Any error messages from this session will be written to: '/tmp/deinstall2020-05-16_11-34-29AM/logs/deinstall_deconfig2020-05-16_11-34-35-AM.err'
      
      ######################## DEINSTALL CLEAN OPERATION START ########################
      ## [START] Preparing for Deinstall ##
      Setting LOCAL_NODE to bossdb2
      Setting CLUSTER_NODES to bossdb2
      Setting CRS_HOME to true
      Setting oracle.installer.invPtrLoc to /tmp/deinstall2020-05-16_11-34-29AM/oraInst.loc
      Setting oracle.installer.local to true
      
      ## [END] Preparing for Deinstall ##
      
      Setting the force flag to false
      Setting the force flag to cleanup the Oracle Base
      Oracle Universal Installer clean START
      
      Detach Oracle home '/g01/app/12.2.0' from the central inventory on the local node : Done
      
      Delete directory '/g01/app/12.2.0' on the local node : Done
      
      Delete directory '/g01/app/oraInventory' on the local node : Done
      
      The Oracle Base directory '/g01/app/grid' will not be removed on local node. The directory is not empty.
      
      Oracle Universal Installer cleanup was successful.
      
      Oracle Universal Installer clean END
      
      
      ## [START] Oracle install clean ##
      
      
      ## [END] Oracle install clean ##
      
      
      ######################### DEINSTALL CLEAN OPERATION END #########################
      
      
      ####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
      Successfully detached Oracle home '/g01/app/12.2.0' from the central inventory on the local node.
      Successfully deleted directory '/g01/app/12.2.0' on the local node.
      Successfully deleted directory '/g01/app/oraInventory' on the local node.
      Oracle Universal Installer cleanup was successful.
      
      Run 'rm -r /opt/ORCLfmap' as root on node(s) 'bossdb2' at the end of the session.
      Review the permissions and contents of '/g01/app/grid' on nodes(s) 'bossdb2'.
      If there are no Oracle home(s) associated with '/g01/app/grid', manually delete '/g01/app/grid' and its contents.
      Oracle deinstall tool successfully cleaned up temporary directories.
      #######################################################################
      
      
      ############# ORACLE DEINSTALL TOOL END #############
      

      This deinstall run pre-generates the scripts that deconfigure CRS and remove the local resources. When it reaches the clean phase, it prompts you to run the rootcrs.sh -force -deconfig command as root, for example:

      /g01/app/12.2.0/crs/install/rootcrs.sh -force  -deconfig -paramfile "/tmp/deinstall2020-05-16_11-34-29AM/response/deinstall_OraGI12Home1.rsp"
      

      The execution log of that command looks like this:

      2020-05-16 11:40:29: Checking parameters from paramfile /tmp/deinstall2020-05-16_11-34-29AM/response/deinstall_OraGI12Home1.rsp to validate installer variables
      2020-05-16 11:40:29: Skipping validation for ODA_CONFIG
      2020-05-16 11:40:29: Skipping validation for OPC_CLUSTER_TYPE
      2020-05-16 11:40:29: Skipping validation for OPC_NAT_ADDRESS
      2020-05-16 11:40:29: The configuration parameter file /tmp/deinstall2020-05-16_11-34-29AM/response/deinstall_OraGI12Home1.rsp  is valid
      2020-05-16 11:40:29: ### Printing the configuration values from files:
      2020-05-16 11:40:29:    /tmp/deinstall2020-05-16_11-34-29AM/response/deinstall_OraGI12Home1.rsp
      2020-05-16 11:40:29:    /g01/app/12.2.0/crs/install/s_crsconfig_defs
      2020-05-16 11:40:29: AFD_CONF=false
      2020-05-16 11:40:29: AFD_CONFIGURED=false
      2020-05-16 11:40:29: APPLICATION_VIP=
      2020-05-16 11:40:29: ASMCA_ARGS=
      2020-05-16 11:40:29: ASM_CONFIG=near
      2020-05-16 11:40:29: ASM_CREDENTIALS=
      2020-05-16 11:40:29: ASM_DIAGNOSTIC_DEST=
      2020-05-16 11:40:29: ASM_DISCOVERY_STRING=/dev/mapper/asm*
      2020-05-16 11:40:29: ASM_DISKSTRING=
      2020-05-16 11:40:29: ASM_DISK_GROUPS=
      2020-05-16 11:40:29: ASM_DROP_DISKGROUPS=false
      2020-05-16 11:40:29: ASM_HOME=
      2020-05-16 11:40:29: ASM_IN_HOME=false
      2020-05-16 11:40:29: ASM_LOCAL_SID=
      2020-05-16 11:40:29: ASM_ORACLE_BASE=
      2020-05-16 11:40:29: ASM_SID_LIST=
      2020-05-16 11:40:29: ASM_SPFILE=
      2020-05-16 11:40:29: ASM_UPGRADE=false
      2020-05-16 11:40:29: BIG_CLUSTER=true
      2020-05-16 11:40:29: CDATA_AUSIZE=4
      2020-05-16 11:40:29: CDATA_BACKUP_AUSIZE=4
      2020-05-16 11:40:29: CDATA_BACKUP_DISKS=/dev/mapper/asm-data1
      2020-05-16 11:40:29: CDATA_BACKUP_DISK_GROUP=MGMT
      2020-05-16 11:40:29: CDATA_BACKUP_FAILURE_GROUPS=
      2020-05-16 11:40:29: CDATA_BACKUP_QUORUM_GROUPS=
      2020-05-16 11:40:29: CDATA_BACKUP_REDUNDANCY=EXTERNAL
      2020-05-16 11:40:29: CDATA_BACKUP_SITES=
      2020-05-16 11:40:29: CDATA_BACKUP_SIZE=0
      2020-05-16 11:40:29: CDATA_DISKS=/dev/mapper/asm-ocr2,/dev/mapper/asm-ocr1,/dev/mapper/asm-ocr3
      2020-05-16 11:40:29: CDATA_DISK_GROUP=OCR
      2020-05-16 11:40:29: CDATA_FAILURE_GROUPS=ocr2,ocr1,ocr3
      2020-05-16 11:40:29: CDATA_QUORUM_GROUPS=
      2020-05-16 11:40:29: CDATA_REDUNDANCY=NORMAL
      2020-05-16 11:40:29: CDATA_SITES=
      2020-05-16 11:40:29: CDATA_SIZE=0
      2020-05-16 11:40:29: CLSCFG_MISSCOUNT=
      2020-05-16 11:40:29: CLUSTER_CLASS=STANDALONE
      2020-05-16 11:40:29: CLUSTER_GUID=
      2020-05-16 11:40:29: CLUSTER_NAME=bossCluster
      2020-05-16 11:40:29: CLUSTER_NODES=bossdb2
      2020-05-16 11:40:29: CLUSTER_TYPE=DB
      2020-05-16 11:40:29: CRFHOME=/g01/app/12.2.0
      2020-05-16 11:40:29: CRS_HOME=true
      2020-05-16 11:40:29: CRS_LIMIT_CORE=unlimited
      2020-05-16 11:40:29: CRS_LIMIT_MEMLOCK=unlimited
      2020-05-16 11:40:29: CRS_LSNR_STACK=32768
      2020-05-16 11:40:29: CRS_NODEVIPS='bossdb1-vip/255.255.255.224/eno1,bossdb2-vip/255.255.255.224/eno1'
      2020-05-16 11:40:29: CRS_STORAGE_OPTION=1
      2020-05-16 11:40:29: CSS_LEASEDURATION=400
      2020-05-16 11:40:29: DC_HOME=/tmp/deinstall2020-05-16_11-34-29AM/logs/
      2020-05-16 11:40:29: DIRPREFIX=
      2020-05-16 11:40:29: DISABLE_OPROCD=0
      2020-05-16 11:40:29: DROP_MGMTDB=false
      2020-05-16 11:40:29: EXTENDED_CLUSTER=false
      2020-05-16 11:40:29: EXTENDED_CLUSTER_SITES=
      2020-05-16 11:40:29: EXTERNAL_ORACLE=/opt/oracle
      2020-05-16 11:40:29: EXTERNAL_ORACLE_BIN=/opt/oracle/bin
      2020-05-16 11:40:29: GIMR_CONFIG=local
      2020-05-16 11:40:29: GIMR_CREDENTIALS=
      2020-05-16 11:40:29: GNS_ADDR_LIST=
      2020-05-16 11:40:29: GNS_ALLOW_NET_LIST=
      2020-05-16 11:40:29: GNS_CONF=false
      2020-05-16 11:40:29: GNS_CREDENTIALS=
      2020-05-16 11:40:29: GNS_DENY_ITF_LIST=
      2020-05-16 11:40:29: GNS_DENY_NET_LIST=
      2020-05-16 11:40:29: GNS_DOMAIN_LIST=
      2020-05-16 11:40:29: GNS_TYPE=
      2020-05-16 11:40:29: GPNPCONFIGDIR=/g01/app/12.2.0
      2020-05-16 11:40:29: GPNPGCONFIGDIR=/g01/app/12.2.0
      2020-05-16 11:40:29: GPNP_PA=
      2020-05-16 11:40:29: HOME_TYPE=CRS
      2020-05-16 11:40:29: HUB_NODE_LIST=bossdb1,bossdb2
      2020-05-16 11:40:29: HUB_NODE_VIPS=bossdb1-vip,bossdb2-vip
      2020-05-16 11:40:29: HUB_SIZE=32
      2020-05-16 11:40:29: ID=/etc/init.d
      2020-05-16 11:40:29: INIT=/sbin/init
      2020-05-16 11:40:29: INITCTL=/sbin/initctl
      2020-05-16 11:40:29: INSTALL_NODE=bossdb1
      2020-05-16 11:40:29: INVENTORY_LOCATION=/g01/app/oraInventory
      2020-05-16 11:40:29: ISROLLING=true
      2020-05-16 11:40:29: IT=/etc/inittab
      2020-05-16 11:40:29: JLIBDIR=/g01/app/12.2.0/jlib
      2020-05-16 11:40:29: JREDIR=/g01/app/12.2.0/jdk/jre/
      2020-05-16 11:40:29: LANGUAGE_ID=AMERICAN_AMERICA.AL32UTF8
      2020-05-16 11:40:29: LISTENER_USERNAME=grid
      2020-05-16 11:40:29: LOCAL_NODE=bossdb2
      2020-05-16 11:40:29: LOGDIR=/tmp/deinstall2020-05-16_11-34-29AM/logs/
      2020-05-16 11:40:29: MGMTDB_DATAFILE=
      2020-05-16 11:40:29: MGMTDB_DB_UNIQUE_NAME=
      2020-05-16 11:40:29: MGMTDB_DIAG=
      2020-05-16 11:40:29: MGMTDB_IN_HOME=false
      2020-05-16 11:40:29: MGMTDB_NODE=
      2020-05-16 11:40:29: MGMTDB_NODE_LIST=
      2020-05-16 11:40:29: MGMTDB_ORACLE_BASE=/g01/app/grid
      2020-05-16 11:40:29: MGMTDB_PWDFILE=
      2020-05-16 11:40:29: MGMTDB_SID=""
      2020-05-16 11:40:29: MGMTDB_SPFILE=""
      2020-05-16 11:40:29: MGMT_DB=true
      2020-05-16 11:40:29: MSGFILE=/var/adm/messages
      2020-05-16 11:40:29: MinimumSupportedVersion=11.2.0.1.0
      2020-05-16 11:40:29: NETWORKS="eno1"/10.88.1.0:public,"eno3"/172.26.9.0:asm,"eno3"/172.26.9.0:cluster_interconnect
      2020-05-16 11:40:29: NEW_HOST_NAME_LIST=
      2020-05-16 11:40:29: NEW_NODEVIPS='bossdb1-vip/255.255.255.224/eno1,bossdb2-vip/255.255.255.224/eno1'
      2020-05-16 11:40:29: NEW_NODE_NAME_LIST=
      2020-05-16 11:40:29: NEW_PRIVATE_NAME_LIST=
      2020-05-16 11:40:29: NODE_NAME_LIST=bossdb1,bossdb2
      2020-05-16 11:40:29: OCRCONFIG=/etc/oracle/ocr.loc
      2020-05-16 11:40:29: OCRCONFIGDIR=/etc/oracle
      2020-05-16 11:40:29: OCRID=
      2020-05-16 11:40:29: OCRLOC=ocr.loc
      2020-05-16 11:40:29: OCR_LOCATIONS=
      2020-05-16 11:40:29: OCR_VD_DISKGROUPS=
      2020-05-16 11:40:29: OCR_VOTINGDISK_IN_ASM=false
      2020-05-16 11:40:29: ODA_CONFIG=
      2020-05-16 11:40:29: OLASTGASPDIR=/etc/oracle/lastgasp
      2020-05-16 11:40:29: OLD_CRS_HOME=
      2020-05-16 11:40:29: OLRCONFIG=/etc/oracle/olr.loc
      2020-05-16 11:40:29: OLRCONFIGDIR=/etc/oracle
      2020-05-16 11:40:29: OLRLOC=olr.loc
      2020-05-16 11:40:29: OPC_CLUSTER_TYPE=
      2020-05-16 11:40:29: OPC_NAT_ADDRESS=
      2020-05-16 11:40:29: OPROCDCHECKDIR=/etc/oracle/oprocd/check
      2020-05-16 11:40:29: OPROCDDIR=/etc/oracle/oprocd
      2020-05-16 11:40:29: OPROCDFATALDIR=/etc/oracle/oprocd/fatal
      2020-05-16 11:40:29: OPROCDSTOPDIR=/etc/oracle/oprocd/stop
      2020-05-16 11:40:29: ORACLE_BASE=/g01/app/grid
      2020-05-16 11:40:29: ORACLE_BINARY_OK=true
      2020-05-16 11:40:29: ORACLE_HOME=/g01/app/12.2.0
      2020-05-16 11:40:29: ORACLE_HOME_VERSION=12.2.0.1.0
      2020-05-16 11:40:29: ORACLE_HOME_VERSION_VALID=true
      2020-05-16 11:40:29: ORACLE_OWNER=grid
      2020-05-16 11:40:29: ORA_ASM_GROUP=asmadmin
      2020-05-16 11:40:29: ORA_CRS_HOME=/g01/app/12.2.0
      2020-05-16 11:40:29: ORA_DBA_GROUP=oinstall
      2020-05-16 11:40:29: ObaseCleanupPtrLoc=/tmp/deinstall2020-05-16_11-34-29AM/utl/orabase_cleanup.lst
      2020-05-16 11:40:29: PING_TARGETS=
      2020-05-16 11:40:29: PRIVATE_NAME_LIST=
      2020-05-16 11:40:29: RCALLDIR=/etc/rc.d/rc0.d /etc/rc.d/rc1.d /etc/rc.d/rc2.d /etc/rc.d/rc3.d /etc/rc.d/rc4.d /etc/rc.d/rc5.d /etc/rc.d/rc6.d
      2020-05-16 11:40:29: RCKDIR=/etc/rc.d/rc0.d /etc/rc.d/rc1.d /etc/rc.d/rc2.d /etc/rc.d/rc4.d /etc/rc.d/rc6.d
      2020-05-16 11:40:29: RCSDIR=/etc/rc.d/rc3.d /etc/rc.d/rc5.d
      2020-05-16 11:40:29: RC_KILL=K15
      2020-05-16 11:40:29: RC_KILL_OLD=K96
      2020-05-16 11:40:29: RC_KILL_OLD2=K19
      2020-05-16 11:40:29: RC_START=S96
      2020-05-16 11:40:29: REMOTE_NODES=
      2020-05-16 11:40:29: REUSEDG=false
      2020-05-16 11:40:29: RHP_CONF=false
      2020-05-16 11:40:29: RIM_NODE_LIST=
      2020-05-16 11:40:29: SCAN_NAME=racscan
      2020-05-16 11:40:29: SCAN_PORT=1521
      2020-05-16 11:40:29: SCRBASE=/etc/oracle/scls_scr
      2020-05-16 11:40:29: SILENT=true
      2020-05-16 11:40:29: SO_EXT=so
      2020-05-16 11:40:29: SRVCFGLOC=srvConfig.loc
      2020-05-16 11:40:29: SRVCONFIG=/var/opt/oracle/srvConfig.loc
      2020-05-16 11:40:29: SRVCONFIGDIR=/var/opt/oracle
      2020-05-16 11:40:29: SYSTEMCTL=/usr/bin/systemctl
      2020-05-16 11:40:29: SYSTEMD_SYSTEM_DIR=/etc/systemd/system
      2020-05-16 11:40:29: TZ=Asia/Shanghai
      2020-05-16 11:40:29: UPSTART_INIT_DIR=/etc/init
      2020-05-16 11:40:29: USER_IGNORED_PREREQ=true
      2020-05-16 11:40:29: VNDR_CLUSTER=false
      2020-05-16 11:40:29: VOTING_DISKS=
      2020-05-16 11:40:29: inst_group=oinstall
      2020-05-16 11:40:29: inventory_loc=/g01/app/oraInventory
      2020-05-16 11:40:29: local=true
      2020-05-16 11:40:29: silent=false
      2020-05-16 11:40:29: ### Printing other configuration values ###
      2020-05-16 11:40:29: CLSCFG_EXTRA_PARMS=
      2020-05-16 11:40:29: DECONFIG=1
      2020-05-16 11:40:29: FORCE=1
      2020-05-16 11:40:29: HAS_GROUP=oinstall
      2020-05-16 11:40:29: HAS_USER=root
      2020-05-16 11:40:29: HOST=bossdb2
      2020-05-16 11:40:29: OLR_DIRECTORY=/g01/app/12.2.0/cdata
      2020-05-16 11:40:29: OLR_LOCATION=/g01/app/12.2.0/cdata/bossdb2.olr
      2020-05-16 11:40:29: ORA_CRS_HOME=/g01/app/12.2.0
      2020-05-16 11:40:29: SIHA=0
      2020-05-16 11:40:29: SUCC_REBOOT=0
      2020-05-16 11:40:29: SUPERUSER=root
      2020-05-16 11:40:29: addfile=/g01/app/12.2.0/crs/install/crsconfig_addparams
      2020-05-16 11:40:29: cluutil_trc_suff_pp=0
      2020-05-16 11:40:29: crscfg_trace=1
      2020-05-16 11:40:29: crscfg_trace_file=/tmp/deinstall2020-05-16_11-34-29AM/logs/crsdeconfig_bossdb2_2020-05-16_11-40-29AM.log
      2020-05-16 11:40:29: old_nodevips=
      2020-05-16 11:40:29: osdfile=/g01/app/12.2.0/crs/install/s_crsconfig_defs
      2020-05-16 11:40:29: parameters_valid=1
      2020-05-16 11:40:29: paramfile=/tmp/deinstall2020-05-16_11-34-29AM/response/deinstall_OraGI12Home1.rsp
      2020-05-16 11:40:29: platform_family=unix
      2020-05-16 11:40:29: pp_srvctl_trc_suff=0
      2020-05-16 11:40:29: srvctl_trc_suff=0
      2020-05-16 11:40:29: srvctl_trc_suff_pp=0
      2020-05-16 11:40:29: stackStartLevel=11
      2020-05-16 11:40:29: user_is_superuser=1
      2020-05-16 11:40:29: ### Printing of configuration values complete ###
      2020-05-16 11:40:29: Save the ASM password file location: +OCR/orapwASM
      2020-05-16 11:40:29: Print system environment variables:
      2020-05-16 11:40:29: HISTCONTROL = ignoredups
      2020-05-16 11:40:29: HISTSIZE = 1000
      2020-05-16 11:40:29: HOME = /root
      2020-05-16 11:40:29: HOSTNAME = bossdb2
      2020-05-16 11:40:29: LANG = en_US.UTF-8
      2020-05-16 11:40:29: LD_LIBRARY_PATH = /g01/app/12.2.0/lib:
      2020-05-16 11:40:29: LESSOPEN = ||/usr/bin/lesspipe.sh
      2020-05-16 11:40:29: LOGNAME = root
      2020-05-16 11:40:29: LS_COLORS = rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:
      2020-05-16 11:40:29: MAIL = /var/spool/mail/root
      2020-05-16 11:40:29: ORACLE_BASE = /g01/app/grid
      2020-05-16 11:40:29: ORACLE_HOME = /g01/app/12.2.0
      2020-05-16 11:40:29: PATH = /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
      2020-05-16 11:40:29: PWD = /root
      2020-05-16 11:40:29: SHELL = /bin/bash
      2020-05-16 11:40:29: SHLVL = 2
      2020-05-16 11:40:29: SSH_CLIENT = 10.88.1.1 55961 22
      2020-05-16 11:40:29: SSH_CONNECTION = 10.88.1.1 55961 10.88.1.14 22
      2020-05-16 11:40:29: SSH_TTY = /dev/pts/1
      2020-05-16 11:40:29: TERM = linux
      2020-05-16 11:40:29: TZ = Asia/Shanghai
      2020-05-16 11:40:29: USER = root
      2020-05-16 11:40:29: XDG_RUNTIME_DIR = /run/user/0
      2020-05-16 11:40:29: XDG_SESSION_ID = 4753
      2020-05-16 11:40:29: _ = /g01/app/12.2.0/perl/bin/perl
      2020-05-16 11:40:29: Perform initialization tasks before configuring ACFS
      2020-05-16 11:40:29: Executing pwdx 105595 >/dev/null 2>&1
      2020-05-16 11:40:29: Executing cmd: pwdx 105595 >/dev/null 2>&1
      2020-05-16 11:40:29: Executing pwdx 129738 >/dev/null 2>&1
      2020-05-16 11:40:29: Executing cmd: pwdx 129738 >/dev/null 2>&1
      2020-05-16 11:40:29: Executing pwdx 129740 >/dev/null 2>&1
      2020-05-16 11:40:29: Executing cmd: pwdx 129740 >/dev/null 2>&1
      2020-05-16 11:40:29: Executing pwdx 129742 >/dev/null 2>&1
      2020-05-16 11:40:29: Executing cmd: pwdx 129742 >/dev/null 2>&1
      2020-05-16 11:40:29: Executing pwdx 129745 >/dev/null 2>&1
      2020-05-16 11:40:29: Executing cmd: pwdx 129745 >/dev/null 2>&1
      2020-05-16 11:40:29: Running /g01/app/12.2.0/bin/acfsdriverstate installed -s
      2020-05-16 11:40:29: Executing cmd: /g01/app/12.2.0/bin/acfsdriverstate installed -s
      2020-05-16 11:40:29: acfs is not installed
      2020-05-16 11:40:29: Performing few checks before running scripts
      2020-05-16 11:40:29: Attempt to get current working directory
      2020-05-16 11:40:29: Running as user grid: pwd
      2020-05-16 11:40:29: s_run_as_user2: Running /bin/su grid -c ' echo CLSRSC_START; pwd '
      2020-05-16 11:40:29: Removing file /tmp/qkMJe8eSbn
      2020-05-16 11:40:29: Successfully removed file: /tmp/qkMJe8eSbn
      2020-05-16 11:40:29: pipe exit code: 0
      2020-05-16 11:40:29: /bin/su successfully executed
      
      2020-05-16 11:40:29: The current working directory: /root
      2020-05-16 11:40:29: Change working directory to safe directory /g01/app/12.2.0
      2020-05-16 11:40:29: Pre-checks for running the rootcrs script passed.
      2020-05-16 11:40:29: Deconfiguring Oracle Clusterware on this node
      2020-05-16 11:40:29: Executing the [DeconfigValidate] step with checkpoint [null] ...
      2020-05-16 11:40:29: Perform initialization tasks before configuring OLR
      2020-05-16 11:40:29: Perform initialization tasks before configuring OCR
      2020-05-16 11:40:29: Perform initialization tasks before configuring CHM
      2020-05-16 11:40:29: Perform prechecks for deconfiguration
      2020-05-16 11:40:29: options=-force
      2020-05-16 11:40:29: Validate crsctl command
      2020-05-16 11:40:29: Validating /g01/app/12.2.0/bin/crsctl
      2020-05-16 11:40:29: Executing the [DeconfigResources] step with checkpoint [null] ...
      2020-05-16 11:40:29: Verifying the existence of CRS resources used by Oracle RAC databases
      2020-05-16 11:40:29: Check if CRS is running
      2020-05-16 11:40:29: Configured CRS Home: /g01/app/12.2.0
      2020-05-16 11:40:29: Running /g01/app/12.2.0/bin/crsctl check crs
      2020-05-16 11:40:29: Executing cmd: /g01/app/12.2.0/bin/crsctl check crs
      2020-05-16 11:40:29: Command output:
      >  CRS-4638: Oracle High Availability Services is online
      >  CRS-4537: Cluster Ready Services is online
      >  CRS-4529: Cluster Synchronization Services is online
      >  CRS-4533: Event Manager is online
      >End Command output
      2020-05-16 11:40:29: Validate srvctl command
      2020-05-16 11:40:29: Validating /g01/app/12.2.0/bin/srvctl
      2020-05-16 11:40:29: Remove Resources
      2020-05-16 11:40:29: Validate srvctl command
      2020-05-16 11:40:29: Validating /g01/app/12.2.0/bin/srvctl
      2020-05-16 11:40:29: Removing nodeapps...
      2020-05-16 11:40:29: Invoking "/g01/app/12.2.0/bin/srvctl config nodeapps"
      2020-05-16 11:40:29: trace file=/tmp/deinstall2020-05-16_11-34-29AM/logs/srvmcfg1.log
      2020-05-16 11:40:29: Executing cmd: /g01/app/12.2.0/bin/srvctl config nodeapps
      2020-05-16 11:40:36: Command output:
      >  Network 1 exists
      >  Subnet IPv4: 10.88.1.0/255.255.255.224/eno1, static
      >  Subnet IPv6:
      >  Ping Targets:
      >  Network is enabled
      >  Network is individually enabled on nodes:
      >  Network is individually disabled on nodes:
      >  VIP exists: network number 1, hosting node bossdb1
      >  VIP Name: bossdb1-vip
      >  VIP IPv4 Address: 10.88.1.6
      >  VIP IPv6 Address:
      >  VIP is enabled.
      >  VIP is individually enabled on nodes:
      >  VIP is individually disabled on nodes:
      >  VIP exists: network number 1, hosting node bossdb2
      >  VIP Name: bossdb2-vip
      >  VIP IPv4 Address: 10.88.1.7
      >  VIP IPv6 Address:
      >  VIP is enabled.
      >  VIP is individually enabled on nodes:
      >  VIP is individually disabled on nodes:
      >  ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL true
      >  ONS is enabled
      >  ONS is individually enabled on nodes:
      >  ONS is individually disabled on nodes:
      >End Command output
      2020-05-16 11:40:36: Invoking "/g01/app/12.2.0/bin/srvctl stop nodeapps -n bossdb2 -f"
      2020-05-16 11:40:36: trace file=/tmp/deinstall2020-05-16_11-34-29AM/logs/srvmcfg2.log
      2020-05-16 11:40:36: Executing cmd: /g01/app/12.2.0/bin/srvctl stop nodeapps -n bossdb2 -f
      2020-05-16 11:40:42: Getting the configured node role for the local node
      2020-05-16 11:40:42: Executing cmd: /g01/app/12.2.0/bin/crsctl get node role config
      2020-05-16 11:40:42: Command output:
      >  Node 'bossdb2' configured role is 'hub'
      >End Command output
      2020-05-16 11:40:42: The configured node role for the local node is hub
      2020-05-16 11:40:42: the node role is hub
      2020-05-16 11:40:42: Invoking "/g01/app/12.2.0/bin/srvctl remove vip -i bossdb2 -y -f"
      2020-05-16 11:40:42: trace file=/tmp/deinstall2020-05-16_11-34-29AM/logs/srvmcfg3.log
      2020-05-16 11:40:42: Executing cmd: /g01/app/12.2.0/bin/srvctl remove vip -i bossdb2 -y -f
      2020-05-16 11:40:44: Deconfiguring Oracle ASM or shared filesystem storage ...
      2020-05-16 11:40:44: Stopping Oracle Clusterware ...
      2020-05-16 11:40:44: Executing cmd: /g01/app/12.2.0/bin/crsctl stop crs -f
      2020-05-16 11:40:54: Command output:
      >  CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'bossdb2'
      >  CRS-2673: Attempting to stop 'ora.crsd' on 'bossdb2'
      >  CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'bossdb2'
      >  CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'bossdb2'
      >  CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'bossdb2' succeeded
      >  CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'bossdb2' has completed
      >  CRS-2677: Stop of 'ora.crsd' on 'bossdb2' succeeded
      >  CRS-2673: Attempting to stop 'ora.storage' on 'bossdb2'
      >  CRS-2673: Attempting to stop 'ora.crf' on 'bossdb2'
      >  CRS-2673: Attempting to stop 'ora.gpnpd' on 'bossdb2'
      >  CRS-2673: Attempting to stop 'ora.mdnsd' on 'bossdb2'
      >  CRS-2677: Stop of 'ora.crf' on 'bossdb2' succeeded
      >  CRS-2677: Stop of 'ora.gpnpd' on 'bossdb2' succeeded
      >  CRS-2677: Stop of 'ora.storage' on 'bossdb2' succeeded
      >  CRS-2673: Attempting to stop 'ora.asm' on 'bossdb2'
      >  CRS-2677: Stop of 'ora.asm' on 'bossdb2' succeeded
      >  CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'bossdb2'
      >  CRS-2677: Stop of 'ora.mdnsd' on 'bossdb2' succeeded
      >  CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'bossdb2' succeeded
      >  CRS-2673: Attempting to stop 'ora.ctssd' on 'bossdb2'
      >  CRS-2673: Attempting to stop 'ora.evmd' on 'bossdb2'
      >  CRS-2677: Stop of 'ora.ctssd' on 'bossdb2' succeeded
      >  CRS-2677: Stop of 'ora.evmd' on 'bossdb2' succeeded
      >  CRS-2673: Attempting to stop 'ora.cssd' on 'bossdb2'
      >  CRS-2677: Stop of 'ora.cssd' on 'bossdb2' succeeded
      >  CRS-2673: Attempting to stop 'ora.gipcd' on 'bossdb2'
      >  CRS-2677: Stop of 'ora.gipcd' on 'bossdb2' succeeded
      >  CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'bossdb2' has completed
      >  CRS-4133: Oracle High Availability Services has been stopped.
      >End Command output
      2020-05-16 11:40:54: The return value of stop of CRS: 0
      2020-05-16 11:40:54: Executing cmd: /g01/app/12.2.0/bin/crsctl check crs
      2020-05-16 11:40:54: Command output:
      >  CRS-4639: Could not contact Oracle High Availability Services
      >End Command output
      2020-05-16 11:40:54: Oracle CRS stack has been shut down
      2020-05-16 11:41:04: Reset OCR
      2020-05-16 11:41:04: Removing OLR file: /g01/app/12.2.0/cdata/bossdb2.olr
      2020-05-16 11:41:04: Removing file /g01/app/12.2.0/cdata/bossdb2.olr
      2020-05-16 11:41:04: Successfully removed file: /g01/app/12.2.0/cdata/bossdb2.olr
      2020-05-16 11:41:04: Removing file /etc/oracle/olr.loc
      2020-05-16 11:41:04: Successfully removed file: /etc/oracle/olr.loc
      2020-05-16 11:41:04: Removing file /etc/oracle/ocr.loc
      2020-05-16 11:41:04: Successfully removed file: /etc/oracle/ocr.loc
      2020-05-16 11:41:04: Executing the [DeconfigCleanup] step with checkpoint [null] ...
      2020-05-16 11:41:04: Running /g01/app/12.2.0/bin/acfshanfs installed -nfsv4lock
      2020-05-16 11:41:04: Executing cmd: /g01/app/12.2.0/bin/acfshanfs installed -nfsv4lock
      2020-05-16 11:41:04: Command output:
      >  ACFS-9459: ADVM/ACFS is not supported on this OS version: 'centos-release-7-6.1810.2.el7.centos.x86_64
      >  '
      >  ACFS-9204: false
      >End Command output
      2020-05-16 11:41:04: acfshanfs is not installed
      2020-05-16 11:41:04: Executing step deconfiguration ACFS on the current node
      2020-05-16 11:41:04: Executing cmd: /g01/app/12.2.0/bin/acfsdriverstate supported
      2020-05-16 11:41:04: Command output:
      >  ACFS-9459: ADVM/ACFS is not supported on this OS version: 'centos-release-7-6.1810.2.el7.centos.x86_64
      >  '
      >  ACFS-9201: Not Supported
      >End Command output
      2020-05-16 11:41:04: acfs is not supported
      2020-05-16 11:41:04: Running /g01/app/12.2.0/bin/okadriverstate installed
      2020-05-16 11:41:04: Executing cmd: /g01/app/12.2.0/bin/okadriverstate installed
      2020-05-16 11:41:05: Command output:
      >  OKA-9204: false
      >End Command output
      2020-05-16 11:41:05: OKA is not installed
      2020-05-16 11:41:05: Running /g01/app/12.2.0/bin/afddriverstate installed
      2020-05-16 11:41:05: Executing cmd: /g01/app/12.2.0/bin/afddriverstate installed
      2020-05-16 11:41:05: Command output:
      >  AFD-9204: AFD device driver installed status: 'false'
      >End Command output
      2020-05-16 11:41:05: AFD Driver is not installed
      2020-05-16 11:41:05: AFD Library is not present
      2020-05-16 11:41:05: AFD is not installed
      2020-05-16 11:41:05: Either /etc/oracle/olr.loc does not exist or is not readable
      2020-05-16 11:41:05: Make sure the file exists and it has read and execute access
      2020-05-16 11:41:05: Info: No ora file present at  /crf/admin/crfbossdb2.ora
      2020-05-16 11:41:05: CHM repository path not found
      2020-05-16 11:41:05: Executing cmd: /g01/app/12.2.0/bin/clsecho -p has -f clsrsc -m 4006
      2020-05-16 11:41:05: Command output:
      >  CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.
      >End Command output
      2020-05-16 11:41:05: CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.
      2020-05-16 11:41:05: Executing cmd: /g01/app/12.2.0/tfa/bossdb2/tfa_home/bin/uninstalltfa -silent -local -crshome /g01/app/12.2.0
      2020-05-16 11:43:01: Command output:
      >
      >  TFA will be uninstalled on node bossdb2 :
      >
      >  Removing TFA from bossdb2 only
      >  Please remove TFA locally on any other configured nodes
      >
      >  Notifying Other Nodes about TFA Uninstall...
      >  Sleeping for 10 seconds...
      >
      >  Stopping TFA Support Tools...
      >
      >  Stopping TFA in bossdb2...
      >
      >  Shutting down TFA
      >  Removed symlink /etc/systemd/system/multi-user.target.wants/oracle-tfa.service.
      >  Removed symlink /etc/systemd/system/graphical.target.wants/oracle-tfa.service.
      >  . . . . .
      >  . . .
      >  Successfully shutdown TFA..
      >
      >  Deleting TFA support files on bossdb2:
      >  Removing /g01/app/grid/tfa/bossdb2/database...
      >  Removing /g01/app/grid/tfa/bossdb2/log...
      >  Removing /g01/app/grid/tfa/bossdb2/output...
      >  Removing /g01/app/grid/tfa/bossdb2...
      >  Removing /g01/app/grid/tfa...
      >  Removing /etc/rc.d/rc0.d/K17init.tfa
      >  Removing /etc/rc.d/rc1.d/K17init.tfa
      >  Removing /etc/rc.d/rc2.d/K17init.tfa
      >  Removing /etc/rc.d/rc4.d/K17init.tfa
      >  Removing /etc/rc.d/rc6.d/K17init.tfa
      >  Removing /etc/init.d/init.tfa...
      >  Removing /g01/app/12.2.0/bin/tfactl...
      >  Removing /g01/app/12.2.0/tfa/bin...
      >  Removing /g01/app/12.2.0/tfa/bossdb2...
      >  Removing /g01/app/12.2.0/tfa...
      >
      >End Command output
      2020-05-16 11:43:01: Executing cmd: /g01/app/12.2.0/bin/clsecho -p has -f clsrsc -m 4007
      2020-05-16 11:43:01: Command output:
      >  CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.
      >End Command output
      2020-05-16 11:43:01: CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.
      2020-05-16 11:43:01: Remove init resources
      2020-05-16 11:43:01: itab entries=cssd|evmd|crsd|ohasd
      2020-05-16 11:43:01: Check if the startup mechanism upstart is being used
      2020-05-16 11:43:01: Executing cmd: /bin/rpm -qf /sbin/init
      2020-05-16 11:43:01: Command output:
      >  systemd-219-62.el7.x86_64
      >End Command output
      2020-05-16 11:43:01: Not using the Linux startup method: upstart
      2020-05-16 11:43:01: Check if the startup mechanism systemd is being used
      2020-05-16 11:43:01: Executing cmd: /bin/rpm -qf /sbin/init
      2020-05-16 11:43:01: Command output:
      >  systemd-219-62.el7.x86_64
      >End Command output
      2020-05-16 11:43:01: remove systemd conf for services: [cssd evmd crsd ohasd]
      2020-05-16 11:43:01: attempt to deconfigure oracle-cssd.service
      2020-05-16 11:43:01: Executing cmd: /usr/bin/systemctl status oracle-cssd.service
      2020-05-16 11:43:01: Command output:
      >  Unit oracle-cssd.service could not be found.
      >End Command output
      2020-05-16 11:43:01: The unit oracle-cssd.service may not be installed
      2020-05-16 11:43:01: isRunning: 0; isEnabled: 0
      2020-05-16 11:43:01: remove service file: /etc/systemd/system/oracle-cssd.service
      2020-05-16 11:43:01: attempt to deconfigure oracle-evmd.service
      2020-05-16 11:43:01: Executing cmd: /usr/bin/systemctl status oracle-evmd.service
      2020-05-16 11:43:01: Command output:
      >  Unit oracle-evmd.service could not be found.
      >End Command output
      2020-05-16 11:43:01: The unit oracle-evmd.service may not be installed
      2020-05-16 11:43:01: isRunning: 0; isEnabled: 0
      2020-05-16 11:43:01: remove service file: /etc/systemd/system/oracle-evmd.service
      2020-05-16 11:43:01: attempt to deconfigure oracle-crsd.service
      2020-05-16 11:43:01: Executing cmd: /usr/bin/systemctl status oracle-crsd.service
      2020-05-16 11:43:01: Command output:
      >  Unit oracle-crsd.service could not be found.
      >End Command output
      2020-05-16 11:43:01: The unit oracle-crsd.service may not be installed
      2020-05-16 11:43:01: isRunning: 0; isEnabled: 0
      2020-05-16 11:43:01: remove service file: /etc/systemd/system/oracle-crsd.service
      2020-05-16 11:43:01: attempt to deconfigure oracle-ohasd.service
      2020-05-16 11:43:01: Executing cmd: /usr/bin/systemctl status oracle-ohasd.service
      2020-05-16 11:43:01: Command output:
      >  ● oracle-ohasd.service - Oracle High Availability Services
      >     Loaded: loaded (/etc/systemd/system/oracle-ohasd.service; enabled; vendor preset: disabled)
      >     Active: active (running) since Sat 2020-05-16 00:03:04 CST; 11h ago
      >   Main PID: 57106 (init.ohasd)
      >     CGroup: /system.slice/oracle-ohasd.service
      >             ├─ 57106 /bin/sh /etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
      >             └─130720 /bin/sleep 10
      >
      >  May 16 11:37:15 bossdb2 su[128619]: pam_limits(su-l:session): unknown limit item 'proc'
      >  May 16 11:37:15 bossdb2 su[128619]: pam_limits(su-l:session): unknown limit item 'proc'
      >  May 16 11:37:16 bossdb2 su[128637]: (to grid) root on none
      >  May 16 11:37:16 bossdb2 su[128637]: pam_limits(su-l:session): unknown limit item 'proc'
      >  May 16 11:37:16 bossdb2 su[128637]: pam_limits(su-l:session): unknown limit item 'proc'
      >  May 16 11:37:16 bossdb2 su[128637]: pam_limits(su-l:session): unknown limit item 'proc'
      >  May 16 11:37:16 bossdb2 su[128637]: pam_limits(su-l:session): unknown limit item 'proc'
      >  May 16 11:40:47 bossdb2 ologgerd[116495]: Oracle Clusterware: 2020-05-16 11:40:47.606
      >                                            [(116495)]CRS-8504:Oracle Clusterware OLOGGERD process with operating system process ID 116495 is exiting
      >  May 16 11:40:47 bossdb2 osysmond.bin[116171]: Oracle Clusterware: 2020-05-16 11:40:47.987
      >                                                [(116171)]CRS-8504:Oracle Clusterware OSYSMOND process with operating system process ID 116171 is exiting
      >  May 16 11:40:51 bossdb2 octssd.bin[115935]: Oracle Clusterware: 2020-05-16 11:40:51.176
      >                                              [(115935)]CRS-8504:Oracle Clusterware OCTSSD process with operating system process ID 115935 is exiting
      >End Command output
      2020-05-16 11:43:01: isRunning: 1; isEnabled: 1
      2020-05-16 11:43:01: Executing cmd: /usr/bin/systemctl stop oracle-ohasd.service
      2020-05-16 11:43:01: Executing cmd: /usr/bin/systemctl disable oracle-ohasd.service
      2020-05-16 11:43:01: Command output:
      >  Removed symlink /etc/systemd/system/multi-user.target.wants/oracle-ohasd.service.
      >  Removed symlink /etc/systemd/system/graphical.target.wants/oracle-ohasd.service.
      >End Command output
      2020-05-16 11:43:01: remove service file: /etc/systemd/system/oracle-ohasd.service
      2020-05-16 11:43:01: Removing file /etc/systemd/system/oracle-ohasd.service
      2020-05-16 11:43:01: Successfully removed file: /etc/systemd/system/oracle-ohasd.service
      2020-05-16 11:43:01: Removing script for Oracle Cluster Ready services
      2020-05-16 11:43:01: Removing /etc/init.d/init.evmd file
      2020-05-16 11:43:01: Removing /etc/init.d/init.crsd file
      2020-05-16 11:43:01: Removing /etc/init.d/init.cssd file
      2020-05-16 11:43:01: Removing /etc/init.d/init.crs file
      2020-05-16 11:43:01: Removing /etc/init.d/init.ohasd file
      2020-05-16 11:43:01: Removing file /etc/init.d/init.ohasd
      2020-05-16 11:43:01: Successfully removed file: /etc/init.d/init.ohasd
      2020-05-16 11:43:01: Init file = ohasd
      2020-05-16 11:43:01: Removing "ohasd" from RC dirs
      2020-05-16 11:43:01: Removing file /etc/rc.d/rc0.d/K15ohasd
      2020-05-16 11:43:01: Successfully removed file: /etc/rc.d/rc0.d/K15ohasd
      2020-05-16 11:43:01: Removing file /etc/rc.d/rc1.d/K15ohasd
      2020-05-16 11:43:01: Successfully removed file: /etc/rc.d/rc1.d/K15ohasd
      2020-05-16 11:43:01: Removing file /etc/rc.d/rc2.d/K15ohasd
      2020-05-16 11:43:01: Successfully removed file: /etc/rc.d/rc2.d/K15ohasd
      2020-05-16 11:43:01: Removing file /etc/rc.d/rc3.d/S96ohasd
      2020-05-16 11:43:01: Successfully removed file: /etc/rc.d/rc3.d/S96ohasd
      2020-05-16 11:43:01: Removing file /etc/rc.d/rc4.d/K15ohasd
      2020-05-16 11:43:01: Successfully removed file: /etc/rc.d/rc4.d/K15ohasd
      2020-05-16 11:43:01: Removing file /etc/rc.d/rc5.d/S96ohasd
      2020-05-16 11:43:01: Successfully removed file: /etc/rc.d/rc5.d/S96ohasd
      2020-05-16 11:43:01: Removing file /etc/rc.d/rc6.d/K15ohasd
      2020-05-16 11:43:01: Successfully removed file: /etc/rc.d/rc6.d/K15ohasd
      2020-05-16 11:43:01: Init file = init.crs
      2020-05-16 11:43:01: Removing "init.crs" from RC dirs
      2020-05-16 11:43:01: Cleaning up SCR settings in /etc/oracle/scls_scr
      2020-05-16 11:43:01: Cleaning oprocd directory, and log files
      2020-05-16 11:43:01: Cleaning up Network socket directories
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/mdnsd
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/npohasd
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/npohasd2
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_bossdb2_CRSD
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_bossdb2_CRSD
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_bossdb2_CRSD_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_bossdb2_CRSD_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_bossdb2_CSSD
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_bossdb2_CSSD
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_bossdb2_CSSD_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_bossdb2_CSSD_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_bossdb2_CTSSD
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_bossdb2_CTSSD_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_bossdb2_EVMD
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_bossdb2_EVMD
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_bossdb2_EVMD_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_bossdb2_EVMD_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_bossdb2_GIPCD
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_bossdb2_GIPCD_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_bossdb2_GPNPD
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_bossdb2_GPNPD_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_bossdb2_INIT
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_bossdb2_INIT_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_bossdb2_LOGD
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_bossdb2_LOGD_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_bossdb2_MOND
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_bossdb2_MOND_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_css_ctrllcl_bossdb2_bossCluster
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_css_ctrllcl_bossdb2_bossCluster
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_css_ctrllcl_bossdb2_bossCluster_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_css_ctrllcl_bossdb2_bossCluster_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_GPNPD_bossdb2
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_GPNPD_bossdb2_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_sbossdb2gridbossClusterCRFM_CLIIPC
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_sbossdb2gridbossClusterCRFM_CLIIPC_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_sbossdb2gridbossClusterCRFM_MIIPC
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_sbossdb2gridbossClusterCRFM_MIIPC_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_sbossdb2gridbossClusterCRFM_SIPC
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/ora_gipc_sbossdb2gridbossClusterCRFM_SIPC_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sAevm
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sAevm_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sCRSD_IPC_SOCKET_11
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sCRSD_IPC_SOCKET_11_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sCRSD_UI_SOCKET
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sCRSD_UI_SOCKET_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sOCSSD_LL_bossdb2_
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sOCSSD_LL_bossdb2__lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sOCSSD_LL_bossdb2_bossCluster
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sOCSSD_LL_bossdb2_bossCluster_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sOHASD_IPC_SOCKET_11
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sOHASD_IPC_SOCKET_11_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sOHASD_UI_SOCKET
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sOHASD_UI_SOCKET_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sora_crsqs
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sora_crsqs_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sOracle_CSS_LclLstnr_bossCluster_2
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sOracle_CSS_LclLstnr_bossCluster_2_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sprocr_local_conn_0_PROC
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sprocr_local_conn_0_PROC
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sprocr_local_conn_0_PROC_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sprocr_local_conn_0_PROC_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sprocr_local_conn_0_PROL
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sprocr_local_conn_0_PROL_lock
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sSYSTEM.evm.acceptor.auth
      2020-05-16 11:43:01: Unlinking file : /var/tmp/.oracle/sSYSTEM.evm.acceptor.auth_lock
      2020-05-16 11:43:01: Remove /etc/oracle/maps
      2020-05-16 11:43:01: Remove /etc/oracle/setasmgid
      2020-05-16 11:43:01: Removing file /etc/oracle/setasmgid
      2020-05-16 11:43:01: Successfully removed file: /etc/oracle/setasmgid
      2020-05-16 11:43:01: removing all contents under /g01/app/12.2.0/gpnp/profiles/peer
      2020-05-16 11:43:01: removing all contents under /g01/app/12.2.0/gpnp/wallets/peer
      2020-05-16 11:43:01: removing all contents under /g01/app/12.2.0/gpnp/wallets/prdr
      2020-05-16 11:43:01: removing all contents under /g01/app/12.2.0/gpnp/wallets/pa
      2020-05-16 11:43:01: removing all contents under /g01/app/12.2.0/gpnp/wallets/root
      2020-05-16 11:43:01: Executing /etc/init.d/ohasd deinstall
      2020-05-16 11:43:01: Executing cmd: /etc/init.d/ohasd deinstall
      2020-05-16 11:43:02: Removing file /etc/init.d/ohasd
      2020-05-16 11:43:02: Successfully removed file: /etc/init.d/ohasd
      2020-05-16 11:43:02: Remove /var/tmp/.oracle
      2020-05-16 11:43:02: Remove /tmp/.oracle
      2020-05-16 11:43:02: Remove /etc/oracle/lastgasp
      2020-05-16 11:43:02: Removing file /etc/oratab
      2020-05-16 11:43:02: Successfully removed file: /etc/oratab
      2020-05-16 11:43:02: Removing file /etc/oracle/ocr.loc.orig
      2020-05-16 11:43:02: Successfully removed file: /etc/oracle/ocr.loc.orig
      2020-05-16 11:43:02: Removing file /etc/oracle/olr.loc.orig
      2020-05-16 11:43:02: Successfully removed file: /etc/oracle/olr.loc.orig
      2020-05-16 11:43:02: Remove /etc/oracle
      2020-05-16 11:43:02: Removing the local checkpoint file /g01/app/grid/crsdata/bossdb2/crsconfig/ckptGridHA_bossdb2.xml
      2020-05-16 11:43:02: Removing file /g01/app/grid/crsdata/bossdb2/crsconfig/ckptGridHA_bossdb2.xml
      2020-05-16 11:43:02: Successfully removed file: /g01/app/grid/crsdata/bossdb2/crsconfig/ckptGridHA_bossdb2.xml
      2020-05-16 11:43:02: Removing the local checkpoint index file /g01/app/grid/bossdb2/checkpoints/crsconfig/index.xml
      2020-05-16 11:43:02: Removing file /g01/app/grid/bossdb2/checkpoints/crsconfig/index.xml
      2020-05-16 11:43:02: Successfully removed file: /g01/app/grid/bossdb2/checkpoints/crsconfig/index.xml
      2020-05-16 11:43:02: Opening permissions on Oracle clusterware home
      2020-05-16 11:43:02: Reset Parent dir permissions for Oracle clusterware home
      2020-05-16 11:43:02: reset ACLs  of parent dir of grid home
      
      2020-05-16 11:43:02: Got /g01/app:grid:oinstall:0755
       from /g01/app/12.2.0/crs/install/ParentDirPerm_bossdb2.txt
      2020-05-16 11:43:02: Got /g01:grid:oinstall:0755
       from /g01/app/12.2.0/crs/install/ParentDirPerm_bossdb2.txt
      2020-05-16 11:43:02: Removing file /g01/app/12.2.0/crs/install/ParentDirPerm_bossdb2.txt
      2020-05-16 11:43:02: Successfully removed file: /g01/app/12.2.0/crs/install/ParentDirPerm_bossdb2.txt
      2020-05-16 11:43:02: removing cvuqdisk rpm
      2020-05-16 11:43:02: Executing /bin/rpm -e --allmatches cvuqdisk
      2020-05-16 11:43:02: Executing cmd: /bin/rpm -e --allmatches cvuqdisk
      2020-05-16 11:43:03: Successfully deconfigured Oracle Clusterware stack on this node
      2020-05-16 11:43:03: Executing cmd: /g01/app/12.2.0/bin/clsecho -p has -f clsrsc -m 336
      2020-05-16 11:43:03: Command output:
      >  CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node
      >End Command output
      2020-05-16 11:43:03: CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node
      2020-05-16 11:43:03: Performing clean-up as part of deinstall ...
      2020-05-16 11:43:03: The dir to remove is '/g01/app/grid/diag'
      2020-05-16 11:43:03: Checking if path [/g01/app/grid/diag] is shared
      2020-05-16 11:43:03: Invoking "/g01/app/12.2.0/bin/cluutil -chkshare -oh /g01/app/grid/diag -localnode bossdb2 -nodelist bossdb1,bossdb2"
      2020-05-16 11:43:03: trace file=/tmp/deinstall2020-05-16_11-34-29AM/logs/cluutil1.log
      2020-05-16 11:43:03: Running as user grid: /g01/app/12.2.0/bin/cluutil -chkshare -oh /g01/app/grid/diag -localnode bossdb2 -nodelist bossdb1,bossdb2
      2020-05-16 11:43:03: s_run_as_user2: Running /bin/su grid -c ' echo CLSRSC_START; /g01/app/12.2.0/bin/cluutil -chkshare -oh /g01/app/grid/diag -localnode bossdb2 -nodelist bossdb1,bossdb2 '
      2020-05-16 11:43:04: Removing file /tmp/BNrjGpayZm
      2020-05-16 11:43:04: Successfully removed file: /tmp/BNrjGpayZm
      2020-05-16 11:43:04: pipe exit code: 0
      2020-05-16 11:43:04: /bin/su successfully executed
      
      2020-05-16 11:43:04: Output: FALSE
      
      2020-05-16 11:43:04: The path [/g01/app/grid/diag] is not shared
      2020-05-16 11:43:04: Attempt to remove the node specific dir '/g01/app/grid/diag'
      2020-05-16 11:43:04: Remove /g01/app/grid/diag
      2020-05-16 11:43:04: The dir to remove is '/g01/app/grid/crsdata'
      2020-05-16 11:43:04: Checking if path [/g01/app/grid/crsdata] is shared
      2020-05-16 11:43:04: Invoking "/g01/app/12.2.0/bin/cluutil -chkshare -oh /g01/app/grid/crsdata -localnode bossdb2 -nodelist bossdb1,bossdb2"
      2020-05-16 11:43:04: trace file=/tmp/deinstall2020-05-16_11-34-29AM/logs/cluutil2.log
      2020-05-16 11:43:04: Running as user grid: /g01/app/12.2.0/bin/cluutil -chkshare -oh /g01/app/grid/crsdata -localnode bossdb2 -nodelist bossdb1,bossdb2
      2020-05-16 11:43:04: s_run_as_user2: Running /bin/su grid -c ' echo CLSRSC_START; /g01/app/12.2.0/bin/cluutil -chkshare -oh /g01/app/grid/crsdata -localnode bossdb2 -nodelist bossdb1,bossdb2 '
      2020-05-16 11:43:05: Removing file /tmp/lVlxtjRLAt
      2020-05-16 11:43:05: Successfully removed file: /tmp/lVlxtjRLAt
      2020-05-16 11:43:05: pipe exit code: 0
      2020-05-16 11:43:05: /bin/su successfully executed
      
      2020-05-16 11:43:05: Output: FALSE
      
      2020-05-16 11:43:05: The path [/g01/app/grid/crsdata] is not shared
      2020-05-16 11:43:05: Attempt to remove the node specific dir '/g01/app/grid/crsdata'
      2020-05-16 11:43:05: Remove /g01/app/grid/crsdata
      
    • Shared path

    $ Grid_home/crs/install/rootcrs.sh -deconfig -force
    $ ./runInstaller -detachHome ORACLE_HOME=Grid_home -silent -local

    1.2 Delete the node from the cluster

    On a surviving node, run GI_HOME/bin/crsctl delete node -n node_tobe_deleted as the root user:

    [root@bossdb1 ~]# /g01/app/12.2.0/bin/crsctl delete node -n bossdb2
    CRS-4661: Node bossdb2 successfully deleted.
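
    After the delete, you can run olsnodes -s -t again on the surviving node to confirm that bossdb2 no longer appears in the member list:

    olsnodes -s -t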
    

    1.3 Verify the removal

    On a surviving node, run the following as the grid user:

    cluvfy stage -post nodedel -n node_list [-verbose]
    

    Example:

    [grid@bossdb1 ~]$ cluvfy stage -post nodedel -n bossdb2 -verbose
    
    Verifying Node Removal ...
      Verifying CRS Integrity ...PASSED
      Verifying Clusterware Version Consistency ...PASSED
    Verifying Node Removal ...PASSED
    
    Post-check for node removal was successful.
    
    CVU operation performed:      stage -post nodedel
    Date:                         May 16, 2020 11:52:27 AM
    CVU home:                     /g01/app/12.2.0/
    User:                         grid
    

    1.4 Confirm the VIP was removed

    If the clusterware stack on the node being removed was already down before the removal, then after the steps above you must confirm that its VIP has been deleted. Run as the grid user:

    srvctl config vip -node node_tobe_deleted
    

    The normal output looks like this:

    [grid@bossdb1 ~]$ srvctl config vip -node bossdb2
    PRKO-2310 : VIP does not exist on node bossdb2.
    

    If the VIP resource still exists, run the following commands:

    srvctl stop vip -node deleted_node_name
    srvctl remove vip -vip deleted_vip_name
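
    For this cluster that would be (assuming the VIP resource carries the bossdb2-vip name shown in the deconfig log above):

    srvctl stop vip -node bossdb2
    srvctl remove vip -vip bossdb2-vip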
    

    2 Adding a Node

    Before adding a node, the preparation of that host must be completed first: creating the OS users, setting kernel parameters, configuring the ASM disks, installing the dependent packages, and so on. This is not covered in detail here; a minimal sketch follows.
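
    The sketch below (run as root on the new node) uses only the owner and group names that appear in the deconfig log above (grid, oinstall, asmadmin); numeric IDs, limits and kernel settings must be copied from the existing nodes:

    # Create the GI software owner with the same groups as the existing nodes
    groupadd oinstall                     # central inventory group
    groupadd asmadmin                     # ASM administration group
    useradd -g oinstall -G asmadmin grid
    # Kernel parameters, resource limits, ASM disk udev/multipath rules and
    # dependent RPMs must also mirror the existing nodes.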

    A few points to note:

    • GI_HOME on the new node must not contain any files
    • The oraInventory path does not need to be created, and /etc/oraInst.loc does not need to be configured
    • Pre-add check: on an already-installed node, run the following command as the grid user (a concrete example for this cluster follows the command):

      cluvfy stage -pre nodeadd -n node3 [-fixup] [-verbose]
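
      For this cluster the pre-add check would be (node name from this example; -fixup and -verbose are optional):

      cluvfy stage -pre nodeadd -n bossdb2 -fixup -verbose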
      
    • Sync the GI_HOME

      Change into $GI_HOME/addnode/ and run:

      ./addnode.sh -silent "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}"
      # For a Flex Cluster, run the following instead:
      ./addnode.sh -silent "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}" "CLUSTER_NEW_NODE_ROLES={hub}"
      
      # To add leaf nodes, use one of the following:
      ./addnode.sh -silent "CLUSTER_NEW_NODES={node3,node4}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}" "CLUSTER_NEW_NODE_ROLES={hub,leaf}"
      ./addnode.sh -silent "CLUSTER_NEW_NODES={node3,node4}" "CLUSTER_NEW_NODE_ROLES={leaf,leaf}"
      
      # To add nodes in an extended cluster, use the following:
      ./addnode.sh -silent "CLUSTER_NEW_NODES={node3,node4}" "CLUSTER_NEW_NODE_SITES={site1,site2}"
      
      

      Example:

      cd $ORACLE_HOME/addnode
      ./addnode.sh -ignoreSysPrereqs -skipPrereqs -silent "CLUSTER_NEW_NODES={bossdb2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={bossdb2-vip}" "CLUSTER_NEW_NODE_ROLES={hub}"
      
      
      Prepare Configuration in progress.
      
      Prepare Configuration successful.
      ..................................................   7% Done.
      
      Copy Files to Remote Nodes in progress.
      ..................................................   12% Done.
      ..................................................   17% Done.
      ..............................
      Copy Files to Remote Nodes successful.
      You can find the log of this install session at:
       /g01/app/oraInventory/logs/addNodeActions2020-05-16_12-52-06-PM.log
      
      Instantiate files in progress.
      
      Instantiate files successful.
      ..................................................   49% Done.
      
      Saving cluster inventory in progress.
      ..................................................   83% Done.
      
      Saving cluster inventory successful.
      The Cluster Node Addition of /g01/app/12.2.0 was successful.
      Please check '/g01/app/12.2.0/inventory/silentInstall2020-05-16_12-52-05-PM.log' for more details.
      
      Setup Oracle Base in progress.
      
      Setup Oracle Base successful.
      ..................................................   90% Done.
      
      Update Inventory in progress.
      
      Update Inventory successful.
      ..................................................   97% Done.
      
      As a root user, execute the following script(s):
              1. /g01/app/oraInventory/orainstRoot.sh
              2. /g01/app/12.2.0/root.sh
      
      Execute /g01/app/oraInventory/orainstRoot.sh on the following nodes:
      [bossdb2]
      Execute /g01/app/12.2.0/root.sh on the following nodes:
      [bossdb2]
      
      The scripts can be executed in parallel on all the nodes.
      
      ..................................................   100% Done.
      Successfully Setup Software.
      

      Installing Oracle 12C on CentOS 7.6 hits a known bug (Doc ID 2251322.1). According to others' experience, installing the patch does not help either, so the prerequisite checks are skipped here with -ignoreSysPrereqs -skipPrereqs.

    • Run orainstRoot.sh and root.sh

      Log in to the newly added node and run orainstRoot.sh and root.sh:

        [root@bossdb2 ~]# sh /g01/app/oraInventory/orainstRoot.sh
      Changing permissions of /g01/app/oraInventory.
      Adding read,write permissions for group.
      Removing read,write,execute permissions for world.
      
      Changing groupname of /g01/app/oraInventory to oinstall.
      The execution of the script is complete.
      
      

    Run root.sh:

     sh /g01/app/12.2.0/root.sh
    Check /g01/app/12.2.0/install/root_bossdb2_2020-05-16_13-09-49-914477118.log for the output of root script
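
    Once root.sh completes, the clusterware stack should be running on the new node. You can confirm with the same check that appears in the deconfig log above (CRS-4638/4537/4529/4533 should all report online):

    /g01/app/12.2.0/bin/crsctl check crs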
    
    • Verify the addition

      Run as the grid user:

      cluvfy stage -post nodeadd -n <node_name>
      

      Example:

      [grid@bossdb2 ~]$ cluvfy stage -post nodeadd -n bossdb2 -verbose
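
      As a final sanity check, you can confirm cluster-wide health from either node (standard clusterware commands; both nodes should show as active):

      olsnodes -s -t
      crsctl check cluster -all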
      

    Author: halberd.lee

    Created: 2020-05-16 Sat 14:01
