Greenplum 5.16.0 Installation Guide
1. Environment
1.1 Official Website
Official Greenplum installation guide: https://gpdb.docs.pivotal.io/5160/install_guide/install_extensions.html
1.2 Hardware Requirements
Server requirements for a Greenplum Database cluster:
Operating System: SUSE Linux Enterprise Server 11 SP2; CentOS 5.0 or higher; Red Hat Enterprise Linux (RHEL) 5.0 or higher; Oracle Unbreakable Linux 5.5. Note: See the Greenplum Database Release Notes for current supported platform information.
File Systems: xfs required for data storage on SUSE Linux and Red Hat (ext3 supported for root file system)
Minimum CPU: Pentium Pro compatible (P3/Athlon and above)
Minimum Memory: 16 GB RAM per server
Disk Requirements: 150 MB per host for the Greenplum installation; approximately 300 MB per segment instance for metadata; appropriate free space for data, with disks at no more than 70% capacity; high-speed, local storage
Network Requirements: 10 Gigabit Ethernet within the array; dedicated, non-blocking switch; NIC bonding is recommended when multiple interfaces are present
Software and Utilities: zlib compression libraries, bash shell, GNU tar, GNU zip, GNU sed (used by Greenplum Database gpinitsystem), perl, secure shell
1.3 Deployment Environment
Operating system: Red Hat Enterprise Linux Server release 6.4 (Santiago) x86_64
Hosts:
Cluster layout: 1 master node + 1 standby master node + 3 segment (data) nodes (the master is protected by a standby host, the segment nodes by mirrors)
Master: 192.25.108.86 (mdw)
Segments: 192.25.108.85 (sdw1), 192.25.108.84 (sdw2), 192.25.108.86 (sdw3)
Standby master: 192.25.108.85 (smdw)
Note: servers are limited here, so 86 acts as both master and segment node, and 85 acts as both segment node and standby master; all three servers have the same configuration.
2. Installation (strongly recommended to perform as root)
Note: steps marked "all three hosts" must be done on all three machines; "master node" means only on the mdw host.
2.1 System Settings
1. Disable the firewall (all three hosts) (fine to disable while learning; in production, open the required ports instead)
#systemctl status firewalld    (check the firewall status)
If the status output shows the service is inactive, the firewall is already off.
#systemctl stop firewalld    (stop the firewall)
#systemctl disable firewalld    (disable the firewall at boot)
If iptables is installed (the default firewall on RHEL 6, where systemctl/firewalld is not available), disable it as well:
service iptables stop    # stops the firewall service; it will come back after a reboot
chkconfig iptables off    # disables the service at boot; takes effect after a reboot
Use the two commands together to avoid a reboot (to reboot the server: reboot).
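On RHEL 6 hosts a quick confirmation that the firewall is really off (an optional check; both commands are available on a stock RHEL 6 install):
# service iptables status    # should report that the firewall is not running
# chkconfig --list iptables    # all runlevels should show "off"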
2. Edit the /etc/hosts file (all three hosts)
Command: vi /etc/hosts
Add or modify the following entries in the hosts file:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.25.108.86 mdw
192.25.108.85 sdw1
192.25.108.84 sdw2
192.25.108.86 sdw3
192.25.108.85 smdw
After adding them, use ping to verify that the names resolve, e.g. ping sdw1 to test connectivity to the sdw1 node.
3. Modify or add the following in /etc/sysctl.conf (all three hosts)
kernel.shmmax = 500000000
kernel.shmmni = 4096
kernel.shmall = 4000000000
kernel.sem = 500 1024000 200 4096
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range = 10000 65535
net.core.netdev_max_backlog = 10000
net.core.rmem_max = 2097152
net.core.wmem_max = 2097152
vm.overcommit_memory = 2
vm.swappiness = 10
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100
vm.dirty_background_ratio = 0
vm.dirty_ratio=0
vm.dirty_background_bytes = 1610612736
vm.dirty_bytes = 4294967296
Then apply the settings with: sysctl -p
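To spot-check that the new values are active (optional; the parameter names are the ones set above):
# sysctl kernel.shmmax kernel.sem vm.overcommit_memory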
4. Configure /etc/security/limits.conf and add the following (all three hosts)
* soft nofile 65536    # maximum number of open files
* hard nofile 65536
* soft nproc 131072    # maximum number of processes
* hard nproc 131072
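These limits apply to new login sessions; after logging in again (for example as gpadmin) they can be verified with:
$ ulimit -n    # should print 65536
$ ulimit -u    # should print 131072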
5. Set the disk read-ahead value to 16384 (all three hosts) (optional)
# /sbin/blockdev --getra /dev/sda    # show the read-ahead value; the default is 256
# /sbin/blockdev --setra 16384 /dev/sda    # set the read-ahead value
6. Set the disk I/O scheduler policy (all three hosts) (optional)
#echo deadline > /sys/block/sda/queue/scheduler
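Note that neither the read-ahead value nor the scheduler setting survives a reboot; one common way to persist them (a sketch, assuming /dev/sda and the standard /etc/rc.d/rc.local on these hosts) is to append the commands to rc.local:
# cat >> /etc/rc.d/rc.local <<'EOF'
/sbin/blockdev --setra 16384 /dev/sda
echo deadline > /sys/block/sda/queue/scheduler
EOF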
7. Create the files all_hosts (all host names) and all_segs (segment host names) (all three hosts)
The path is up to you; /home is used here.
File contents:
/home/all_hosts:
mdw
sdw1
sdw2
sdw3
smdw
/home/all_segs:
sdw1
sdw2
sdw3
2.2 Installing Greenplum
1. Prepare the Greenplum database installation file (master node)
Path: /home/greenplum:
greenplum-db-5.16.0-rhel7-x86_64.rpm
2. Install the software (master node)
# cd /home/greenplum
# rpm -Uvh ./greenplum-db-5.16.0-rhel7-x86_64.rpm
The installer prints a few prompts; the defaults are fine.
Note: the default installation path is /usr/local/.
# chown -R gpadmin /usr/local/greenplum*    (run after the gpadmin user has been created)
# chgrp -R gpadmin /usr/local/greenplum*    (run after the gpadmin user has been created)
3. Load the environment variables (master node)
# source /usr/local/greenplum-db-5.16.0/greenplum_path.sh
Check the environment variable:
# echo $GPHOME
4. Run the gpseginstall utility (master node)
# cd /usr/local/greenplum-db-5.16.0
# source greenplum_path.sh    (after switching users, run this before any gp* commands)
# cd /usr/local/greenplum-db-5.16.0/bin    (run the gp* commands from this bin directory)
# gpseginstall -f /home/all_hosts -u gpadmin -p gp38281850
all_hosts is the file created in the previous step. During installation you will be prompted for the passwords of the three hosts; on completion it reports success.
Note: if the command fails, first establish passwordless SSH between the servers (master node):
# cd /usr/local/greenplum-db-5.16.0
# source greenplum_path.sh    (after switching users, run this before any gp* commands)
# cd /usr/local/greenplum-db-5.16.0/bin    (run the gp* commands from this bin directory)
# gpssh-exkeys -f /home/all_hosts    (if gpadmin has already been created manually on the master node, this command can also be run as the gpadmin user)
Once that succeeds, run gpseginstall again.
If gpseginstall -f all_hosts -u gpadmin -p gpadmin fails:
- Re-check the OS configuration carefully: firewall disabled, /etc/sysctl.conf, /etc/hosts, greenplum_path.sh, and so on.
- gpseginstall installs the software on the hosts in the given list, creates the gpadmin system user, and establishes the trust relationship between root and gpadmin automatically. Before re-running it, clean up any directories it generated.
- All of this can also be done by hand: create the user with useradd, establish trust with gpssh-exkeys, and create the directories manually.
Commands to (re)create the gpadmin group and user:
# groupdel gpadmin
# userdel gpadmin
# groupadd -g 530 gpadmin
# useradd -g 530 -u 530 -m -d /home/gpadmin -s /bin/bash gpadmin
# passwd gpadmin
Error handling:
[root@sjck-db003tf bin]# gpseginstall -f /home/all_segs -u gpadmin -p gp38281808
20190312:09:30:46:041587 gpseginstall:sjck-db003tf:root-[INFO]:-Installation Info:
link_name greenplum-db
binary_path /usr/local/greenplum-db-5.16.0
binary_dir_location /usr/local
binary_dir_name greenplum-db-5.16.0
20190312:09:30:46:041587 gpseginstall:sjck-db003tf:root-[INFO]:-check cluster password access
20190312:09:30:47:041587 gpseginstall:sjck-db003tf:root-[INFO]:-de-duplicate hostnames
20190312:09:30:47:041587 gpseginstall:sjck-db003tf:root-[INFO]:-master hostname: sjck-db003tf
20190312:09:30:47:041587 gpseginstall:sjck-db003tf:root-[INFO]:-check for user gpadmin on cluster
20190312:09:30:47:041587 gpseginstall:sjck-db003tf:root-[INFO]:-add user gpadmin on master
20190312:09:30:48:041587 gpseginstall:sjck-db003tf:root-[INFO]:-add user gpadmin on cluster
20190312:09:30:48:041587 gpseginstall:sjck-db003tf:root-[INFO]:-chown -R gpadmin:gpadmin /usr/local/greenplum-db
20190312:09:30:48:041587 gpseginstall:sjck-db003tf:root-[INFO]:-chown -R gpadmin:gpadmin /usr/local/greenplum-db-5.16.0
20190312:09:30:48:041587 gpseginstall:sjck-db003tf:root-[INFO]:-rm -f /usr/local/greenplum-db-5.16.0.tar; rm -f /usr/local/greenplum-db-5.16.0.tar.gz
20190312:09:30:48:041587 gpseginstall:sjck-db003tf:root-[INFO]:-cd /usr/local; tar cf greenplum-db-5.16.0.tar greenplum-db-5.16.0
20190312:09:31:03:041587 gpseginstall:sjck-db003tf:root-[INFO]:-gzip /usr/local/greenplum-db-5.16.0.tar
20190312:09:31:38:041587 gpseginstall:sjck-db003tf:root-[INFO]:-remote command: mkdir -p /usr/local
20190312:09:31:39:041587 gpseginstall:sjck-db003tf:root-[INFO]:-remote command: rm -rf /usr/local/greenplum-db-5.16.0
20190312:09:31:40:041587 gpseginstall:sjck-db003tf:root-[INFO]:-scp software to remote location
The authenticity of host 'smdw (192.25.108.85)' can't be established.
ECDSA key fingerprint is SHA256:FVZzJbgTMrxJp2gjhQlAgyXAg+cOlZ3mp+nun+ujTwM.
Are you sure you want to continue connecting (yes/no)? yes
Warning: the ECDSA host key for 'smdw' differs from the key for the IP address '192.25.108.85'
Offending key for IP in /root/.ssh/known_hosts:7
Are you sure you want to continue connecting (yes/no)?
-- Solution:
Delete /root/.ssh/known_hosts, or remove just the offending key entries from it.
Then run:
gpssh-exkeys -f /home/all_hosts
This regenerates the keys.
Then run:
gpseginstall -f /home/all_segs -u gpadmin -p gp38281808
5. Switch to the gpadmin user and verify passwordless login (master node)
(1) Switch users
$ su - gpadmin
The difference between su [user] and su - [user]:
su [user] switches to the other user but keeps the current environment; su - [user] performs a full login and loads the target user's environment.
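A quick way to see the difference once the gpadmin environment from step 6 below is in place (an illustrative check; it assumes the default RHEL profile layout where ~/.bash_profile sources ~/.bashrc):
# su gpadmin -c 'echo $MASTER_DATA_DIRECTORY'      # should print nothing: gpadmin's profile is not loaded
# su - gpadmin -c 'echo $MASTER_DATA_DIRECTORY'    # should print /home/data/master/gpseg-1: full login environment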
(2) Use gpssh to test passwordless login to all hosts:
$ gpssh -f /home/all_hosts -e ls -l $GPHOME
6. Configure environment variables (master node)
vi ~/.bashrc
source /usr/local/greenplum-db-5.16.0/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/home/data/master/gpseg-1
export PGPORT=5432
export PGUSER=gpadmin
export PGDATABASE=postgres    # default database
Apply the changes: source ~/.bashrc
Standby master: copy the environment file to the standby host:
$ cd ~
# just replace standby_hostname with the actual host name (smdw here)
$ scp .bashrc standby_hostname:`pwd`
Check the result: the changes are only visible when logged in as the gpadmin user.
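An optional way to confirm the file reached the standby host (illustrative; smdw is the standby master configured earlier):
$ gpssh -h smdw -e 'tail -n 5 ~/.bashrc'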
7. Create the storage areas (master node) [run as root]
Note: use df -hl to check the free space of each filesystem and place the data directories on the one with the most space.
(1) Create the master data storage area
# mkdir -p /home/data/master
(2) Change the directory ownership
# chown gpadmin:gpadmin /home/data/master
(3) Use gpssh to create the primary and mirror data directories on all segment hosts; if mirroring is not configured, the mirror directories can be skipped (run the commands below):
mkdir /home/data
mkdir /home/data/master
chown gpadmin:gpadmin /home/data/master/
source /usr/local/greenplum-db-5.16.0/greenplum_path.sh
-- standby master host
gpssh -h smdw -e 'mkdir /home/data/master'
gpssh -h smdw -e 'chown gpadmin /home/data/master'
-- segment hosts
gpssh -f /home/all_segs -e 'mkdir /home/data'
gpssh -f /home/all_segs -e 'mkdir /home/data/primary'
gpssh -f /home/all_segs -e 'mkdir /home/data/mirror'
gpssh -f /home/all_segs -e 'chown gpadmin /home/data/primary'
gpssh -f /home/all_segs -e 'chown gpadmin /home/data/mirror'
Note: you can log in to the segment hosts to confirm the directories were created.
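Or check all segment hosts at once from the master with the same gpssh tooling (an optional check):
$ gpssh -f /home/all_segs -e 'ls -ld /home/data/primary /home/data/mirror'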
8. Synchronize system time (master node)
(1) On the master host, edit /etc/ntp.conf and set:
server 127.127.1.0
(2) On the segment hosts, edit /etc/ntp.conf:
server mdw prefer
server smdw
# if there is a dedicated data-center NTP server, point mdw and smdw at that server's IP instead
(3) On the master host, synchronize the system clocks via the NTP daemon (switch with su - gpadmin):
$ gpssh -f /home/all_hosts -v -e 'ntpd'
(4) Verify that the times are consistent (as gpadmin):
$ gpssh -f /home/all_hosts -v date
Note: if /etc/ntp.conf does not exist, install NTP with: yum -y install ntp
9. Check the system environment
Run the host OS parameter check on the master (master node) [as gpadmin, via su - gpadmin]:
$ gpcheck -f /home/all_hosts -m mdw
Fixes for some of the errors gpcheck may report:
$ gpssh -f /home/all_hosts -e 'echo deadline > /sys/block/sr0/queue/scheduler'
$ gpssh -f /home/all_hosts -e 'echo deadline > /sys/block/sr1/queue/scheduler'
$ gpssh -f /home/all_hosts -e 'echo deadline > /sys/block/sda/queue/scheduler'
$ /sbin/blockdev --setra 16384 /dev/sda*
$ /sbin/blockdev --getra /dev/sda*
2.3 Checking Hardware Performance
1. Check network performance (master node) [as gpadmin, via su - gpadmin]
$ gpcheckperf -f /home/all_segs -r N -d /tmp > subnet1.out
$ cat subnet1.out
2. Check disk I/O and memory bandwidth (master node) [as gpadmin, via su - gpadmin]
$ gpcheckperf -f /home/all_hosts -d /home/data/mirror -r ds
[gpadmin@sjck-db003tf bin]$ gpcheckperf -f /home/all_hosts -d /home/data/mirror -r ds
/usr/local/greenplum-db/./bin/gpcheckperf -f /home/all_hosts -d /home/data/mirror -r ds
--------------------
-- DISK WRITE TEST
--------------------
--------------------
-- DISK READ TEST
--------------------
--------------------
-- STREAM TEST
--------------------
====================
== RESULT
====================
disk write avg time (sec): 60.64
disk write tot bytes: 100862754816
disk write tot bandwidth (MB/s): 2711.94
disk write min bandwidth (MB/s): 247.14 [sdw3]
disk write max bandwidth (MB/s): 1297.59 [sdw1]
disk read avg time (sec): 37.11
disk read tot bytes: 100862754816
disk read tot bandwidth (MB/s): 2730.62
disk read min bandwidth (MB/s): 745.66 [sdw3]
disk read max bandwidth (MB/s): 1224.26 [smdw]
stream tot bandwidth (MB/s): 32104.45
stream min bandwidth (MB/s): 9512.24 [smdw]
stream max bandwidth (MB/s): 11821.18 [sdw2]
3. Initializing the Greenplum Database
3.1 Database Configuration File
1. Log in as the gpadmin user
# su - gpadmin
2. Copy the gpinitsystem_config file from the template
cp $GPHOME/docs/cli_help/gpconfigs/gpinitsystem_config /home/gpadmin/gpinitsystem_config
chown gpadmin:gpadmin /home/gpadmin/gpinitsystem_config
3. Set all required parameters
# name of the database system
ARRAY_NAME="EMC Greenplum DW"
# prefix for the segment instances (both primaries and mirrors, under /home/data/primary and /home/data/mirror)
SEG_PREFIX=gpseg
# starting port number for the primary segments
PORT_BASE=40000
# data directories for the primary segments; the number of entries in DATA_DIRECTORY determines how many instances run on each segment host (only the primary path is needed; numbered subdirectories are created automatically). With the four entries below, each host runs 4 primary instances.
declare -a DATA_DIRECTORY=(/home/data/primary /home/data/primary /home/data/primary /home/data/primary)
# hostname of the master host
MASTER_HOSTNAME=mdw
# master data directory
MASTER_DIRECTORY=/home/data/master
# master port
MASTER_PORT=5432
# shell used to run commands on the hosts (the parameter name in gpinitsystem_config is TRUSTED_SHELL)
TRUSTED_SHELL=ssh
# number of checkpoint segments; a larger value improves performance for large data loads but lengthens crash recovery time. The default is 8 (on server-class hosts with more than 16 GB RAM and 16 cores, 256 is reasonable).
CHECK_POINT_SEGMENTS=8
# character set
ENCODING=UNICODE
-- mirroring parameters
# starting port number for the mirror segments
MIRROR_PORT_BASE=7000
# starting port number for primary segment replication
REPLICATION_PORT_BASE=8000
# starting port number for mirror segment replication
MIRROR_REPLICATION_PORT_BASE=9000
# mirror segment data directories (one entry per primary segment)
declare -a MIRROR_DATA_DIRECTORY=(/home/data/mirror /home/data/mirror /home/data/mirror /home/data/mirror)
3.2 Running the Initialization Utility
1. Initialize the database
# if there is no standby master
$ cd ~
$ gpinitsystem -c /home/gpadmin/gpinitsystem_config -h /home/all_segs -S
or
# if there is a standby master
$ cd ~
$ gpinitsystem -c /home/gpadmin/gpinitsystem_config -h /home/all_segs -s smdw -S
Note: -S spreads the mirrors across the hosts (spread mirroring); without -S the default is group mirroring; -s specifies the standby master host.
Once initialization succeeds, the database is up and running.
2. Test starting and stopping the database
$ gpstart
$ gpstop
3.3 Accessing the Database (connects to the postgres database by default)
$ psql -d postgres
Run a test query:
postgres=# select datname,datdba,encoding,datacl from pg_database;
3.4 Exiting the Database
postgres=# \q    (or press Ctrl+D)
3.5 Database Status
Check the status: $ gpstate
- gpstate -c: mapping between primary instances and mirror instances
- gpstate -m: status and configuration of the mirror instances only
- gpstate -f: details of the standby master
Output with no errors means the cluster is healthy.
4. Expanding Segment Nodes
4.1 Expansion Method
With the hardware configuration recommended for GPDB, each effective CPU core maps to one segment instance; for example, a segment host with two dual-core CPUs can run 4 primary instances.
To count the logical CPUs: $ cat /proc/cpuinfo | grep "processor" | wc -l
Expanding the number of segments takes three steps:
(1) Add the new hosts to the cluster (not needed when expanding on existing hosts)
This step is mainly environment configuration:
e.g. OS kernel parameters;
creating the gp admin user;
exchanging SSH keys (gpssh-exkeys -e exist_hosts -x new_hosts);
copying the Greenplum binaries;
checking the hosts with gpcheck (gpcheck -f new_hosts);
checking performance with gpcheckperf (gpcheckperf -f new_hosts_file -d /data1 -d /data2 -v).
(2) Initialize the new segments and add them to the cluster
This step mainly generates the configuration file
(command: gpexpand -f new_hosts_file; the file can also be written by hand);
initializes the segment databases in the specified directories (gpexpand -i cnf -D dbname);
adds the new segment information to the master catalog tables.
What if the expansion fails? (see the example below)
(3) Redistribute the tables
Plan the redistribution priority of the tables;
redistribute the table data across the new segments;
analyze the tables.
4.2 Examples
4.2.1 Example 1: Adding segments on the existing hosts
Suppose we want to add 9 segments in place, 3 on each of the 3 existing hosts.
(1) Since no new hosts are added, go straight to step two.
(2) Create the host file /home/seg_hosts listing the hosts to be expanded:
$ vi /home/seg_hosts
sdw1
sdw2
sdw3
(3) Generate the configuration file:
Create a database:
$ psql -d postgres
postgres=# create database addseg;
Generate the configuration file:
$ gpexpand -f /home/seg_hosts -D addseg
Please refer to the Admin Guide for more information.
Would you like to initiate a new System Expansion Yy|Nn (default=N):
> y
What type of mirroring strategy would you like?
spread|grouped (default=grouped):
> spread
How many new primary segments per host do you want to add? (default=0):
> 3
Enter new primary data directory 1:
> /home/data/primary
Enter new primary data directory 2:
> /home/data/primary
Enter new primary data directory 3:
> /home/data/primary
Enter new mirror data directory 1:
> /home/data/mirror
Enter new mirror data directory 2:
> /home/data/mirror
Enter new mirror data directory 3:
> /home/data/mirror
Generating configuration file...
20190312:18:22:22:094698 gpexpand:sjck-db003tf:gpadmin-[INFO]:-Generating input file...
Input configuration files were written to 'gpexpand_inputfile_20190312_182222' and 'None'.
Please review the file and make sure that it is correct then re-run
with: gpexpand -i gpexpand_inputfile_20190312_182222 -D addseg
20190312:18:22:22:094698 gpexpand:sjck-db003tf:gpadmin-[INFO]:-Exiting...
View the generated configuration file: $ cat gpexpand_inputfile_20190313_112226
sdw1:sdw1:40001:/home/data/primary/gpseg3:9:3:p:8001
sdw2:sdw2:7001:/home/data/mirror/gpseg3:10:3:m:9001
sdw2:sdw2:40001:/home/data/primary/gpseg4:11:4:p:8001
sdw3:sdw3:7001:/home/data/mirror/gpseg4:12:4:m:9001
sdw3:sdw3:40001:/home/data/primary/gpseg5:13:5:p:8001
sdw1:sdw1:7001:/home/data/mirror/gpseg5:14:5:m:9001
sdw1:sdw1:40002:/home/data/primary/gpseg6:15:6:p:8002
sdw3:sdw3:7002:/home/data/mirror/gpseg6:16:6:m:9002
sdw3:sdw3:40002:/home/data/primary/gpseg7:17:7:p:8002
sdw2:sdw2:7002:/home/data/mirror/gpseg7:18:7:m:9002
sdw2:sdw2:40002:/home/data/primary/gpseg8:19:8:p:8002
sdw1:sdw1:7002:/home/data/mirror/gpseg8:20:8:m:9002
sdw1:sdw1:40003:/home/data/primary/gpseg9:21:9:p:8003
sdw2:sdw2:7003:/home/data/mirror/gpseg9:22:9:m:9003
sdw2:sdw2:40003:/home/data/primary/gpseg10:23:10:p:8003
sdw3:sdw3:7003:/home/data/mirror/gpseg10:24:10:m:9003
sdw3:sdw3:40003:/home/data/primary/gpseg11:25:11:p:8003
sdw1:sdw1:7003:/home/data/mirror/gpseg11:26:11:m:9003
Format of the configuration file and the meaning of its fields:
hostname:address:port:fselocation:dbid:content:preferred_role:replication_port
hostname: host name
address: interface address (usually the same as the host name)
port: the segment's listen port, incremented from the configured primary/mirror base ports (watch out for ports already in use)
fselocation: the segment data directory, given as a full path
dbid: the segment's unique ID within the cluster; continue incrementing from the values already in gp_segment_configuration
content: likewise continues incrementing from the values in gp_segment_configuration
preferred_role: the role, p (primary) or m (mirror)
replication_port: not needed if there are no mirrors; incremented from the configured primary/mirror replication base ports (again, watch out for ports already in use)
Note: if anything looks wrong you can edit the file by hand or write it from scratch; since the mirrors use spread placement, a segment's primary and mirror must never be on the same host.
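Before hand-editing the file, it helps to see which dbid, content and port values are already in use (an optional check; the columns queried are those of the gp_segment_configuration catalog in Greenplum 5):
$ psql -d postgres -c "select dbid, content, preferred_role, hostname, port, replication_port from gp_segment_configuration order by dbid;"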
(4) Next, adjust the permissions on the Greenplum installation directory:
gpexpand needs to write some files under this directory.
$ chmod -R 700 /usr/local/greenplum-db-5.16.0/
(5) Run gpexpand to perform the expansion:
$ gpexpand -i gpexpand_inputfile_20190313_112226 -D addseg -S -V -v -n 1 -B 1 -t /tmp
Options:
-B batch_size
Batch size of remote commands to send to a given host before making a one-second pause. Default is 16. Valid values are 1-128.
The gpexpand utility issues a number of setup commands that may exceed the host's maximum threshold for unauthenticated connections as defined by MaxStartups in the SSH daemon configuration. The one-second pause allows authentications to be completed before gpexpand issues any more commands.
The default value does not normally need to be changed. However, it may be necessary to reduce the maximum number of commands if gpexpand fails with connection errors such as 'ssh_exchange_identification: Connection closed by remote host.'
-c | --clean
Remove the expansion schema.
-d | --duration hh:mm:ss
Duration of the expansion session from beginning to end.
-D database_name
Specifies the database in which to create the expansion schema and tables. If this option is not given, the setting for the environment variable PGDATABASE is used. The database templates template1 and template0 cannot be used.
-e | --end 'YYYY-MM-DD hh:mm:ss'
Ending date and time for the expansion session.
-f | --hosts-file filename
Specifies the name of a file that contains a list of new hosts for system expansion. Each line of the file must contain a single host name.
This file can contain hostnames with or without network interfaces specified. The gpexpand utility handles either case, adding interface numbers to end of the hostname if the original nodes are configured with multiple network interfaces.
Note: The Greenplum Database segment host naming convention is sdwN where sdw is a prefix and N is an integer. For example, sdw1, sdw2 and so on. For hosts with multiple interfaces, the convention is to append a dash (-) and number to the host name. For example, sdw1-1 and sdw1-2 are the two interface names for host sdw1.
-i | --input input_file
Specifies the name of the expansion configuration file, which contains one line for each segment to be added in the format of:
hostname:address:port:fselocation:dbid:content:preferred_role:replication_port
If your system has filespaces, gpexpand will expect a filespace configuration file (input_file_name.fs) to exist in the same directory as your expansion configuration file. The filespace configuration file is in the format of:
filespaceOrder=filespace1_name:filespace2_name: ...
dbid:/path/for/filespace1:/path/for/filespace2: ...
dbid:/path/for/filespace1:/path/for/filespace2: ...
...
-n parallel_processes
The number of tables to redistribute simultaneously. Valid values are 1 - 96.
Each table redistribution process requires two database connections: one to alter the table, and another to update the table's status in the expansion schema. Before increasing -n, check the current value of the server configuration parameter max_connections and make sure the maximum connection limit is not exceeded (a quick way to check it is shown after this option list).
-r | --rollback
Roll back a failed expansion setup operation. If the rollback command fails, attempt again using the -D option to specify the database that contains the expansion schema for the operation that you want to roll back.
-s | --silent
Runs in silent mode. Does not prompt for confirmation to proceed on warnings.
-S | --simple-progress
If specified, the gpexpand utility records only the minimum progress information in the Greenplum Database table gpexpand.expansion_progress. The utility does not record the relation size information and status information in the table gpexpand.status_detail.
Specifying this option can improve performance by reducing the amount of progress information written to the gpexpand tables.
[-t | --tardir] directory
The fully qualified path to a directory on segment hosts where the gpexpand utility copies a temporary tar file. The file contains Greenplum Database files that are used to create segment instances. The default directory is the user home directory.
-v | --verbose
Verbose debugging output. With this option, the utility will output all DDL and DML used to expand the database.
--version
Display the utility's version number and exit.
-V | --novacuum
Do not vacuum catalog tables before creating schema copy.
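For the -n note above, a simple way to see the current connection limit (an illustrative check, run as gpadmin):
$ psql -d postgres -c "show max_connections;"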
On success, the output ends with:
20190313:16:50:44:012900 gpexpand:digoal193096:digoal-[INFO]:-Exiting...
(6) What if adding the segments fails?
Start the database in master-only (maintenance) mode and roll back:
$ gpstart -m
$ gpexpand -r -D addseg 或者 gpexpand --rollback -D addseg
$ gpstart -a
Then find and fix the problem and repeat the previous step until it succeeds.
After it succeeds, log into the database and confirm that the number of segments has grown:
postgres=#select * from gp_segment_configuration;
(7) Redistribute the data
Until the data is redistributed, the new segments do not hold any of the existing data.
To see the redistribution tasks that are planned and the order in which tables are scheduled:
postgres=# \c addseg
addseg=#select * from gpexpand.status;
To see the upcoming tasks; to change their order, just update the rank column:
addseg=#select * from gpexpand.status_detail;
For example:
addseg=#Update gpexpand.status_detail set rank=10;
addseg=#Update gpexpand.status_detail set rank=1 where fq_name = 'public.lineitem';
addseg=#Update gpexpand.status_detail set rank=2 where fq_name = 'public.orders';
To see how many tables have not yet been redistributed:
addseg=#select * from gpexpand.expansion_progress;
Run the redistribution command, specifying either how long the session may run or when it must finish; gpexpand then performs the redistribution automatically:
$ gpexpand -a -d 1:00:00 -D addseg -S -t /tmp -v -n 1
While redistribution is running, the progress can be monitored:
addseg=#select * from gpexpand.expansion_progress;
addseg=#select * from gpexpand.status_detail;
(8) Finally, remove the gpexpand schema created for the expansion
$ gpexpand -c -D addseg
It asks whether to dump the status information before the gpexpand schema is removed:
Do you want to dump the gpexpand.status_detail table to file?Yy|Nn (default=Y)
>y
Note: if segments are reported as too far out of sync for an incremental recovery, a full recovery can be used ($ gprecoverseg -F).
4.2.2 Example 2: Adding one new host with 4 segments
New host: 192.25.108.87 (sdw4)
(1) Disable the firewall (sdw4)
#systemctl status firewalld    (check the firewall status)
#systemctl stop firewalld    (stop the firewall)
#systemctl disable firewalld    (disable the firewall at boot)
(2) Edit the /etc/hosts file (all four hosts)
Command: vi /etc/hosts
Add or modify the following entries in the hosts file:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.25.108.86 mdw
192.25.108.85 sdw1
192.25.108.84 sdw2
192.25.108.86 sdw3
192.25.108.87 sdw4
192.25.108.85 smdw
(3) Modify or add the following in /etc/sysctl.conf (sdw4)
kernel.shmmax = 500000000
kernel.shmmni = 4096
kernel.shmall = 4000000000
kernel.sem = 500 1024000 200 4096
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range = 10000 65535
net.core.netdev_max_backlog = 10000
net.core.rmem_max = 2097152
net.core.wmem_max = 2097152
vm.overcommit_memory = 2
vm.swappiness = 10
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100
vm.dirty_background_ratio = 0
vm.dirty_ratio=0
vm.dirty_background_bytes = 1610612736
vm.dirty_bytes = 4294967296
Then apply the settings with: sysctl -p
(4) Configure /etc/security/limits.conf and add the following (sdw4)
* soft nofile 65536    # maximum number of open files
* hard nofile 65536
* soft nproc 131072    # maximum number of processes
* hard nproc 131072
(5) Set the disk read-ahead value to 16384 (sdw4) (optional)
# /sbin/blockdev --getra /dev/sda    # show the read-ahead value; the default is 256
# /sbin/blockdev --setra 16384 /dev/sda    # set the read-ahead value
(6) Set the disk I/O scheduler policy (sdw4) (optional)
#echo deadline > /sys/block/sda/queue/scheduler
(7) Create the gpadmin user and the data directories (sdw4)
Keep them consistent with the other three hosts: /home/gpadmin and /home/data (the directories must be owned by gpadmin); a sketch of the commands follows below.
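A minimal sketch, mirroring the commands used on the original hosts earlier in this guide (the uid/gid 530 and the directory layout are the ones chosen there):
# groupadd -g 530 gpadmin
# useradd -g 530 -u 530 -m -d /home/gpadmin -s /bin/bash gpadmin
# passwd gpadmin
# mkdir -p /home/data/primary /home/data/mirror
# chown -R gpadmin:gpadmin /home/data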
(8) Create the files exist_hosts and new_hosts (master node)
$ vi /home/exist_hosts
mdw
sdw1
sdw2
sdw3
smdw
$ vi /home/new_hosts
sdw4
(9) Exchange SSH keys (master node)
This lets the master, as the gp administrative user (root), reach every segment without a password: the master's public key is copied into authorized_keys on all segments.
$ gpssh-exkeys -e /home/exist_hosts -x /home/new_hosts
(10) Install the software on the new segment host
$ gpseginstall -f /home/new_hosts -u gpadmin -p gp38281850
(11) Check performance with gpcheckperf
$ gpcheckperf -f /home/new_hosts -d /tmp -v
The remaining steps are much the same as in Example 1 (4.2.1).
(12) Generate the configuration file
$ gpexpand -f /home/new_hosts -D addseg    (interactively generates the input file, as in Example 1)
Note: dbid and contentid must both be continuous; check gp_segment_configuration, and make sure ports do not conflict on the same host.
(13) Adjust the permissions on the Greenplum installation directory
gpexpand needs to write some files under this directory.
$ chmod -R 700 /usr/local/greenplum-db-5.16.0/
(14) Run gpexpand to perform the expansion:
$ gpexpand -i gpexpand_inputfile_20190313_112380 -D addseg -S -V -v -n 1 -B 1 -t /tmp
(15) Run the redistribution command
Specify either how long the session may run or when it should finish; the script then performs the redistribution automatically.
$ gpexpand -a -d 1:00:00 -D addseg -S -t /tmp -v -n 1
5. Troubleshooting
Log path: /home/gpadmin/gpAdminLogs
5.1 Problem 1: During expansion, the host key for the hostname differs from the key for the IP address
Problem:
Warning: the ECDSA host key for 'sjck-db001tf' differs from the key for the IP address '192.25.108.84'
Solution:
(1) Inspect the file: # cat /home/gpadmin/.ssh/known_hosts
(2) Delete the stored keys: # rm -rf /root/.ssh/known_hosts and # rm -rf /home/gpadmin/.ssh/known_hosts
(3) Put all host names involved (all node aliases as well as each node's own hostname) into
# vi /home/all_hostnames, with the following contents:
mdw
smdw
sdw1
sdw2
sdw3
sjck-db001tf
sjck-db002tf
sjck-db003tf
(4) Regenerate the SSH keys: # gpssh-exkeys -f /home/all_hostnames    (this recreates the /root/.ssh/known_hosts file)
(5) Regenerate the SSH keys for the gpadmin account: # gpseginstall -f /home/all_hosts -u gpadmin -p gp38281808    (this recreates the /home/gpadmin/.ssh/known_hosts file)
(6) Check whether the contents of /root/.ssh/known_hosts and /home/gpadmin/.ssh/known_hosts are consistent (cat both files);
(7) If they differ, delete the files and regenerate them.
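One quick way to compare the two files (an illustrative check):
# diff /root/.ssh/known_hosts /home/gpadmin/.ssh/known_hosts    # no output means the contents match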