pmxcfs
The Proxmox Cluster file system (“pmxcfs”) is a database-driven file system for storing configuration files, replicated in real time to all cluster nodes using corosync. We use this to store all PVE-related configuration files.
Although the file system stores all data inside a persistent database on disk, a copy of the data resides in RAM. That imposes a restriction on the maximum size, which is currently 30MB. This is still enough to store the configuration of several thousand virtual machines.
We use the Corosync Cluster Engine for cluster communication, and SQLite for the database file. The file system is implemented in user space using FUSE.
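Because the backing store is a plain SQLite database, it can be inspected offline. A minimal sketch (the 'tree' table name is an assumption about the pmxcfs schema; work on a copy, since pmxcfs keeps the live database open):

# copy the database first; pmxcfs holds the live file open
cp /var/lib/pve-cluster/config.db /tmp/config.db
# list some of the files tracked by pmxcfs (assumes a 'tree' table)
sqlite3 /tmp/config.db 'SELECT inode, name FROM tree LIMIT 10;'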
The file system is mounted at: /etc/pve

This service is usually started and managed using the systemd toolset. The service is called pve-cluster.

root@cu-pve04:~# systemctl status pve-cluster
● pve-cluster.service - The Proxmox VE cluster filesystem
   Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-04-30 21:20:23 CST; 1 weeks 0 days ago
 Main PID: 3745 (pmxcfs)
    Tasks: 13 (limit: 17203)
   Memory: 88.9M
      CPU: 25min 56.856s
   CGroup: /system.slice/pve-cluster.service
           └─3745 /usr/bin/pmxcfs

May 08 15:39:22 cu-pve04 pmxcfs[3745]: [dcdb] notice: received all states
May 08 15:39:22 cu-pve04 pmxcfs[3745]: [dcdb] notice: leader is 1/3745
May 08 15:39:22 cu-pve04 pmxcfs[3745]: [dcdb] notice: synced members: 1/3745, 3/3878
May 08 15:39:22 cu-pve04 pmxcfs[3745]: [dcdb] notice: start sending inode updates
May 08 15:39:22 cu-pve04 pmxcfs[3745]: [dcdb] notice: sent all (3) updates
May 08 15:39:22 cu-pve04 pmxcfs[3745]: [dcdb] notice: all data is up to date
May 08 15:39:22 cu-pve04 pmxcfs[3745]: [status] notice: received sync request (epoch 1/3745/0000000F)
May 08 15:39:22 cu-pve04 pmxcfs[3745]: [status] notice: received all states
May 08 15:39:22 cu-pve04 pmxcfs[3745]: [status] notice: all data is up to date
May 08 15:39:23 cu-pve04 pmxcfs[3745]: [status] notice: received log

The backing database files live in /var/lib/pve-cluster:

root@cu-pve04:/var/lib/pve-cluster# ls -l
total 4136
-rw------- 1 root root   77824 May  8 15:17 config.db
-rw------- 1 root root   32768 May  8 15:18 config.db-shm
-rw------- 1 root root 4124152 May  8 15:18 config.db-wal
root@cu-pve04:/var/lib/pve-cluster# file *
config.db:     SQLite 3.x database, last written using SQLite version 3016002
config.db-shm: data
config.db-wal: SQLite Write-Ahead Log, version 3007000
root@cu-pve04:/etc/pve# ls -l
total 5
-rw-r----- 1 root www-data  451 Apr 30 14:23 authkey.pub
-rw-r----- 1 root www-data  881 Apr 30 20:47 ceph.conf
-rw-r----- 1 root www-data  545 Apr 30 17:22 corosync.conf
-rw-r----- 1 root www-data   16 Apr 30 14:09 datacenter.cfg
-rw-r----- 1 root www-data 2057 Apr 30 14:23 pve-root-ca.pem
-rw-r----- 1 root www-data 1675 Apr 30 14:23 pve-www.key
-rw-r----- 1 root www-data  177 May  7 17:54 storage.cfg
-rw-r----- 1 root www-data   66 May  4 20:38 user.cfg
-rw-r----- 1 root www-data  119 Apr 30 14:23 vzdump.cron
drwxr-xr-x 2 root www-data    0 Apr 30 14:23 nodes
drwx------ 2 root www-data    0 Apr 30 14:23 priv
lrwxr-xr-x 1 root www-data    0 Jan  1  1970 local -> nodes/cu-pve04
lrwxr-xr-x 1 root www-data    0 Jan  1  1970 lxc -> nodes/cu-pve04/lxc
lrwxr-xr-x 1 root www-data    0 Jan  1  1970 openvz -> nodes/cu-pve04/openvz
lrwxr-xr-x 1 root www-data    0 Jan  1  1970 qemu-server -> nodes/cu-pve04/qemu-server
------------------------------------------------------------
A virtual machine consists of two kinds of files: its configuration (100.conf) and its disk images (e.g. 100.disk).
The /etc/pve/qemu-server/<VMID>.conf file stores the VM configuration, where "VMID" is the numeric ID of the given VM. One can use the qm command to generate and modify those files.

root@cu-pve04:/etc/pve/nodes# ls -R
.:
cu-pve04  cu-pve05  cu-pve06

./cu-pve04:
lrm_status  lxc  openvz  priv  pve-ssl.key  pve-ssl.pem  qemu-server

./cu-pve04/lxc:

./cu-pve04/openvz:

./cu-pve04/priv:

./cu-pve04/qemu-server:
100.conf  101.conf  102.conf  103.conf  105.conf  106.conf  107.conf

./cu-pve05:
lrm_status  lxc  openvz  priv  pve-ssl.key  pve-ssl.pem  qemu-server

./cu-pve05/lxc:

./cu-pve05/openvz:

./cu-pve05/priv:

./cu-pve05/qemu-server:
104.conf  108.conf

./cu-pve06:
lrm_status  lxc  openvz  priv  pve-ssl.key  pve-ssl.pem  qemu-server

./cu-pve06/lxc:

./cu-pve06/openvz:

./cu-pve06/priv:

./cu-pve06/qemu-server:
109.conf
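As an illustration of managing those files with qm, a minimal sketch (VMID 999 and all option values are placeholders):

# create a new VM; this generates /etc/pve/qemu-server/999.conf
qm create 999 --name testvm --memory 1024 --net0 virtio,bridge=vmbr0
# modify a setting; qm rewrites the same file
qm set 999 --cores 2
# print the resulting configuration
qm config 999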
pvedaemon - PVE API Daemon
This daemon exposes the whole Proxmox VE API on 127.0.0.1:85. It runs as root and has permission to do all privileged operations.
The daemon listens on a local address only, so you cannot access it from outside. The pveproxy daemon exposes the API to the outside world.
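On the node itself, the same API can be exercised with the pvesh command-line tool, for example:

# query the API version via the local API entry point
pvesh get /version
# list the nodes known to the cluster
pvesh get /nodes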
root@cu-pve04:~# pvedaemon status
running
root@cu-pve04:~# ss -lntp|grep proxy
LISTEN 0 128 0.0.0.0:3128 0.0.0.0:* users:(("spiceproxy work",pid=2195573,fd=6),("spiceproxy",pid=7541,fd=6))
LISTEN 0 128 0.0.0.0:8006 0.0.0.0:* users:(("pveproxy worker",pid=2310656,fd=6),("pveproxy worker",pid=2305989,fd=6),("pveproxy worker",pid=2295860,fd=6),("pveproxy",pid=7522,fd=6))
root@cu-pve04:~# ss -lntp|grep daemon
LISTEN 0 128 127.0.0.1:85 0.0.0.0:* users:(("pvedaemon worke",pid=2252583,fd=6),("pvedaemon worke",pid=2250382,fd=6),("pvedaemon worke",pid=2250172,fd=6),("pvedaemon",pid=4500,fd=6))
--------------------------------------------------------
pveproxy - PVE API Proxy Daemon
This daemon exposes the whole Proxmox VE API on TCP port 8006 using HTTPS. It runs as user www-data and has very limited permissions. Operations requiring more permissions are forwarded to the local pvedaemon.
Requests targeted for other nodes are automatically forwarded to those nodes. This means that you can manage your whole cluster by connecting to a single Proxmox VE node.
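A minimal sketch of accessing the API from outside through pveproxy (the password and the returned ticket are placeholders):

# obtain an authentication ticket
curl -k -d 'username=root@pam' --data-urlencode 'password=yourpassword' \
    https://192.168.1.4:8006/api2/json/access/ticket
# use the returned ticket as a cookie for subsequent requests
curl -k -b 'PVEAuthCookie=<ticket>' https://192.168.1.4:8006/api2/json/nodes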
It is possible to configure “apache2”-like access control lists. Values are read from file /etc/default/pveproxy.
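For example, /etc/default/pveproxy might contain (the addresses are illustrative):

ALLOW_FROM="10.0.0.1-10.0.0.5,192.168.0.0/22"
DENY_FROM="all"
POLICY="allow"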
root@cu-pve04:~# pveproxy status
running
-----------------------------------------------------------------------
pvestatd - PVE Status Daemon
This daemon queries the status of VMs, storages and containers at regular intervals. The result is sent to all nodes in the cluster.
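The data collected this way is what the cluster-wide resource views are built from; you can query it, for example, with:

# show status of all VMs, containers, storages and nodes in the cluster
pvesh get /cluster/resources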
root@cu-pve04:/etc/default# pvestatd status
running
----------------------------------------------------------------------
qmeventd - PVE Qemu Eventd Daemon
This service is usually started and managed using the systemd toolset. The service is called qmeventd.
When a VM's QEMU process exits, qmeventd runs /usr/sbin/qm cleanup for that VM to release its remaining resources, as the "Starting cleanup"/"Finished cleanup" log lines below show.
root@cu-pve04:/etc/default# systemctl status qmeventd
● qmeventd.service - PVE Qemu Event Daemon
   Loaded: loaded (/lib/systemd/system/qmeventd.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-04-30 21:20:19 CST; 1 weeks 0 days ago
 Main PID: 2829 (qmeventd)
    Tasks: 1 (limit: 17203)
   Memory: 8.8M
      CPU: 10.100s
   CGroup: /system.slice/qmeventd.service
           └─2829 /usr/sbin/qmeventd /var/run/qmeventd.sock

May 07 13:47:13 cu-pve04 qmeventd[2807]: OK
May 07 13:47:13 cu-pve04 qmeventd[2807]: Finished cleanup for 103
May 08 15:01:40 cu-pve04 qmeventd[2807]: file /etc/pve/storage.cfg line 4 (section 'local') - ignore config line: cephfs: kycfs
May 08 15:01:40 cu-pve04 qmeventd[2807]: file /etc/pve/storage.cfg line 5 (section 'local') - unable to parse value of 'path': duplicate attribute
May 08 15:01:40 cu-pve04 qmeventd[2807]: file /etc/pve/storage.cfg line 6 (section 'local') - unable to parse value of 'content': duplicate attribute
May 08 15:01:40 cu-pve04 qmeventd[2807]: Starting cleanup for 107
May 08 15:01:40 cu-pve04 qmeventd[2807]: trying to acquire lock...
May 08 15:01:41 cu-pve04 qmeventd[2807]: OK
May 08 15:01:41 cu-pve04 qmeventd[2807]: storage 'kycfs' does not exists
May 08 15:01:41 cu-pve04 qmeventd[2807]: Finished cleanup for 107
------------------------------------------------------
pve-ha-lrm - PVE Local Resource Manager Daemon
This daemon controls the HA state of the resources (VMs and containers) on the local node, executing the commands handed down by the cluster resource manager.
root@cu-pve05:~# pve-ha-lrm status
running
------------------------------------------
pve-ha-crm - PVE Cluster Resource Manager Daemon
This daemon makes the cluster-wide HA decisions, such as where services are placed and when a failed node must be fenced.
root@cu-pve05:~# pve-ha-crm status
running
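Both daemons act on resources placed under HA management with the ha-manager tool, for example (VMID 100 is illustrative):

# put VM 100 under HA management
ha-manager add vm:100
# show HA-managed resources and their current state
ha-manager status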
------------------------------------------
spiceproxy - SPICE Proxy Service
root@cu-pve04:~# spiceproxy status
running
This daemon listens on TCP port 3128, and implements an HTTP proxy to forward CONNECT requests from the SPICE client to the correct Proxmox VE VM. It runs as user www-data and has very limited permissions.
It is possible to configure "apache2"-like access control lists. Values are read from file /etc/default/pveproxy.
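A SPICE client obtains its connection settings through the API and then connects via this proxy. A minimal sketch (VMID 100 is illustrative):

# request SPICE connection parameters for VM 100 (host, port, password, ...)
pvesh create /nodes/cu-pve04/qemu/100/spiceproxy
# a client such as remote-viewer then connects through TCP port 3128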
------------------------------------------
pve-firewall - PVE Firewall Daemon
All firewall-related configuration is stored on the Proxmox cluster file system, so those files are automatically distributed to all cluster nodes, and the pve-firewall service updates the underlying iptables rules automatically on changes.
You can configure anything using the GUI (i.e. Datacenter → Firewall, or on a Node → Firewall), or you can edit the configuration files directly using your preferred editor.
If you enable the firewall, traffic to all hosts is blocked by default. The only exceptions are the WebGUI (port 8006) and ssh (port 22) from your local network.
Each virtual network device has its own firewall enable flag, so you can selectively enable the firewall for each interface. This is required in addition to the general firewall enable option.
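A minimal sketch of the files involved (VMID 100 and the MAC address are placeholders):

# /etc/pve/firewall/cluster.fw - enable the firewall cluster-wide
[OPTIONS]
enable: 1

# /etc/pve/firewall/100.fw - enable the firewall for VM 100
[OPTIONS]
enable: 1

# /etc/pve/qemu-server/100.conf - per-NIC enable flag on the virtual network device
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,firewall=1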
The firewall runs two service daemons on each node:
pvefw-logger: NFLOG daemon (ulogd replacement).
pve-firewall: updates iptables rules.
root@cu-pve04:~# pve-firewall compile
ipset cmdlist:
iptables cmdlist:
ip6tables cmdlist:
ebtables cmdlist:
no changes
firewall disabled
root@cu-pve04:~# pve-firewall localnet
local hostname: cu-pve04
local IP address: 192.168.1.4
network auto detect: 192.168.1.0/24
using detected local_network: 192.168.1.0/24
root@cu-pve04:~# pve-firewall status
Status: disabled/running
root@cu-pve04:~# pve-firewall stop
root@cu-pve04:~# pve-firewall status
Status: disabled/stopped
----------------------------------------------
root@cu-pve04:~# ps -ef|grep pve
root 2697 1 0 16:26 ? 00:00:00 /usr/sbin/pvefw-logger
ceph 4418 1 2 16:26 ? 00:00:28 /usr/bin/ceph-mgr -f --cluster ceph --id cu-pve04 --setuser ceph --setgroup ceph
ceph 4421 1 0 16:26 ? 00:00:02 /usr/bin/ceph-mds -f --cluster ceph --id cu-pve04 --setuser ceph --setgroup ceph
ceph 4438 1 2 16:26 ? 00:00:33 /usr/bin/ceph-mon -f --cluster ceph --id cu-pve04 --setuser ceph --setgroup ceph
root 4739 1 0 16:26 ? 00:00:09 pvestatd
root 5028 1 0 16:26 ? 00:00:00 pvedaemon
root 5031 5028 0 16:26 ? 00:00:00 pvedaemon worker
root 5032 5028 0 16:26 ? 00:00:00 pvedaemon worker
root 5033 5028 0 16:26 ? 00:00:00 pvedaemon worker
root 5073 1 0 16:26 ? 00:00:00 pve-ha-crm
www-data 5093 1 0 16:26 ? 00:00:00 pveproxy
www-data 5096 5093 0 16:26 ? 00:00:01 pveproxy worker
www-data 5097 5093 0 16:26 ? 00:00:03 pveproxy worker
www-data 5098 5093 0 16:26 ? 00:00:04 pveproxy worker
root 5118 1 0 16:26 ? 00:00:00 pve-ha-lrm
root 8959 1 0 16:39 ? 00:00:01 pve-firewall
Ports used by Proxmox VE
Web interface: 8006
VNC Web console: 5900-5999
SPICE proxy: 3128
sshd (used for cluster actions): 22
rpcbind: 111
corosync multicast (if you run a cluster): 5404, 5405 UDP