Structure

Available machines:
Server1:
Dell PowerEdge R360
40 x Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz (2 sockets)
5 x 300 GB disks + 3 x 900 GB disks
Two virtual disks (VDs) configured in RAID-5:
first unit (/dev/sda), 5×300 in RAID-5 - 1.1 TB
second unit (/dev/sdb), 3×900 in RAID-5 - 1.8 TB

Server2:
16 x Intel(R) Xeon(R) CPU E7330 @ 2.40GHz (4 sockets)
4 x 146 GB disks + 4 x 1 TB disks
Two virtual disks (VDs) configured in RAID-5:
first unit (/dev/cciss/c0d0), 4×146 in RAID-5 - 410.1 GB
second unit (/dev/cciss/c0d1), 4×1 TB in RAID-5 - 2.5 TB

Disks

The installation is performed specifying 200 GB on each server as the space for the Proxmox installation.
Below is the initial disk mapping for each server:

fdisk -l output (Server1.txt)
Disk /dev/sda: 1.1 TiB, 1197759004672 bytes, 2339373056 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 19647CB6-EC57-4DCC-A21E-69096E97B588
 
Device          Start        End    Sectors   Size Type
/dev/sda1        2048       4095       2048     1M BIOS boot
/dev/sda2        4096     528383     524288   256M EFI System
/dev/sda3      528384  419430400  418902017 199.8G Linux LVM
 
Disk /dev/sdb: 1.8 TiB, 1999307276288 bytes, 3904897024 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: DCCDBA40-303A-482D-B8FF-F8E8D18966AA
 
Disk /dev/mapper/pve-root: 49.8 GiB, 53418655744 bytes, 104333312 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
 
 
Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
fdisk -l output (Server2.txt)
Disk /dev/cciss/c0d0: 410.1 GiB, 440345714688 bytes, 860050224 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 9AD3DEF8-C4D5-4598-83FA-9E80C58406A9
 
Device                Start       End   Sectors   Size Type
/dev/cciss/c0d0p1      2048      4095      2048     1M BIOS boot
/dev/cciss/c0d0p2      4096    528383    524288   256M EFI System
/dev/cciss/c0d0p3    528384 419430400 418902017 199.8G Linux LVM
 
Disk /dev/cciss/c0d1: 2.5 TiB, 2700455206912 bytes, 5274326576 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 95B61035-78CA-4ECD-B32F-85BEA368A53D
 
Disk /dev/mapper/pve-root: 49.8 GiB, 53418655744 bytes, 104333312 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
 
 
Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Network

We now configure the network interfaces.
Each server has 4 NICs; below are the /etc/network/interfaces files for each node.

interface_Server1.txt
auto lo
iface lo inet loopback
 
iface eno1 inet manual
 
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.117
        netmask 255.255.255.0
        gateway 192.168.1.254
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
 
auto eno2
iface eno2 inet static
        address 192.168.2.1
        netmask 255.255.255.248
 
auto eno3
iface eno3 inet manual
 
iface eno3.1 inet manual
        vlan_raw_device eno3
 
auto vmbr11
iface vmbr11 inet manual
        bridge_ports eno3.1
        bridge_stp on
        bridge_fd 0.0
        pre-up ifup eno3.1
        post-down ifdown eno3.1
 
iface eno3.3 inet manual
        vlan_raw_device eno3
auto vmbr13
iface vmbr13 inet manual
        bridge_ports eno3.3
        bridge_stp on
        bridge_fd 0.0
        pre-up ifup eno3.3
        post-down ifdown eno3.3
 
iface eno3.7 inet manual
        vlan_raw_device eno3
auto vmbr17
iface vmbr17 inet manual
        bridge_ports eno3.7
        bridge_stp on
        bridge_fd 0.0
        pre-up ifup eno3.7
        post-down ifdown eno3.7
 
iface eno4 inet manual
interface_Server2.txt
auto lo
iface lo inet loopback
 
iface enp6s0 inet manual
 
auto enp8s0
iface enp8s0 inet static
        address 192.168.2.2
        netmask 255.255.255.248
 
iface ens1 inet manual
 
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.118
        netmask 255.255.255.0
        gateway 192.168.1.254
        bridge_ports ens1
        bridge_stp off
        bridge_fd 0
 
auto ens2
iface ens2 inet manual
 
iface ens2.1 inet manual
        vlan_raw_device ens2
 
auto vmbr11
iface vmbr11 inet manual
        bridge_ports ens2.1
        bridge_stp on
        bridge_fd 0.0
        pre-up ifup ens2.1
        post-down ifdown ens2.1
 
iface ens2.3 inet manual
        vlan_raw_device ens2
 
auto vmbr13
iface vmbr13 inet manual
        bridge_ports ens2.3
        bridge_stp on
        bridge_fd 0.0
        pre-up ifup ens2.3
        post-down ifdown ens2.3
 
iface ens2.7 inet manual
        vlan_raw_device ens2
 
auto vmbr17
iface vmbr17 inet manual
        bridge_ports ens2.7
        bridge_stp on
        bridge_fd 0.0
        pre-up ifup ens2.7
        post-down ifdown ens2.7
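
As a quick sanity check of files like the ones above, the bridge-to-port mapping can be extracted with awk. This is only an illustrative sketch run against a small sample copy (the path /tmp/interfaces_sample is an assumption for the example; on a node you would parse /etc/network/interfaces itself):

```shell
# Illustrative sketch: list "bridge port" pairs from an
# /etc/network/interfaces-style file. A small sample is written to
# /tmp/interfaces_sample here; on a real node, parse the actual file.
cat > /tmp/interfaces_sample <<'EOF'
auto vmbr11
iface vmbr11 inet manual
        bridge_ports eno3.1
auto vmbr13
iface vmbr13 inet manual
        bridge_ports eno3.3
EOF
# Remember the current "iface vmbrXX" name, print it with its bridge_ports.
awk '/^iface vmbr/ {br=$2} /bridge_ports/ {print br, $2}' /tmp/interfaces_sample
```

Each printed pair should match one of the vmbrXX/VLAN subinterface couples defined above.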

We also include the /etc/hosts configuration for each node:

hosts_server1.txt
127.0.0.1 localhost.localdomain localhost
192.168.1.117 pvequ1.miodominio.local pvequ1 pvelocalhost
192.168.1.118 pvequ2.miodominio.local pvequ2
 
# The following lines are desirable for IPv6 capable hosts
 
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
hosts_server2.txt
127.0.0.1 localhost.localdomain localhost
192.168.1.118 pvequ2.miodominio.local pvequ2 pvelocalhost
192.168.1.117 pvequ1.miodominio.local pvequ1
 
# The following lines are desirable for IPv6 capable hosts
 
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

Cluster

At this point we are ready to create the cluster.

Initialize the cluster from the first node:

pvecm create pvequre

Add the second node by running, on the second node:

pvecm add ip.primo.nodo.xx 

With a two-node cluster like this one, the following file must be edited by hand:
edit /etc/pve/corosync.conf on the main node, adding the line marked below to the quorum section

	quorum {
	  provider: corosync_votequorum
-->	  two_node: 1
	}
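
A quick way to confirm the directive landed in the file is a grep; on the node itself you would check /etc/pve/corosync.conf directly (and pvecm status shows the quorum information). This sketch works on a sample copy so it is self-contained:

```shell
# Illustration only: verify the two_node directive in a corosync.conf-style
# file. A sample copy is written to /tmp/corosync_sample.conf; on a real
# node, check /etc/pve/corosync.conf instead.
cat > /tmp/corosync_sample.conf <<'EOF'
quorum {
  provider: corosync_votequorum
  two_node: 1
}
EOF
grep -q 'two_node: 1' /tmp/corosync_sample.conf && echo "two_node enabled"
```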

Now, accessing both nodes via the web interface, we will see the centralized management console for the two nodes.

DRBD

Before installing the tool that manages the redundant storage, we need to adjust the repository files, since we use the fully free version of Proxmox rather than the edition that comes with a paid subscription.

Rename:

mv /etc/apt/sources.list.d/pve-enterprise.list /etc/apt/sources.list.d/pve-enterprise.list.disabled

add to /etc/apt/sources.list:

deb http://download.proxmox.com/debian stretch pve-no-subscription

Then:

apt-get update

and finally install the DRBD user-space tools (the Debian package is drbd8-utils):

apt-get install drbd8-utils

Now we need to rework the disks of both nodes, adding the required partitions according to the following layout:

Server1

Disk /dev/sda: 1.1 TiB, 1197759004672 bytes, 2339373056 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 19647CB6-EC57-4DCC-A21E-69096E97B588

Device          Start        End    Sectors   Size Type
/dev/sda1        2048       4095       2048     1M BIOS boot
/dev/sda2        4096     528383     524288   256M EFI System
/dev/sda3      528384  419430400  418902017 199.8G Linux LVM
/dev/sda4   419432448 1788861966 1369429519   653G Linux LVM
/dev/sda5  1788862464 2339373022  550510559 262.5G Linux filesystem


Disk /dev/sdb: 1.8 TiB, 1999307276288 bytes, 3904897024 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: DCCDBA40-303A-482D-B8FF-F8E8D18966AA

Device     Start        End    Sectors  Size Type
/dev/sdb1   2048 3904896990 3904894943  1.8T Linux LVM

Server2

Disk /dev/cciss/c0d0: 410.1 GiB, 440345714688 bytes, 860050224 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 9AD3DEF8-C4D5-4598-83FA-9E80C58406A9

Device                Start       End   Sectors   Size Type
/dev/cciss/c0d0p1      2048      4095      2048     1M BIOS boot
/dev/cciss/c0d0p2      4096    528383    524288   256M EFI System
/dev/cciss/c0d0p3    528384 419430400 418902017 199.8G Linux LVM
/dev/cciss/c0d0p4 419432448 860050190 440617743 210.1G Linux filesystem


Disk /dev/cciss/c0d1: 2.5 TiB, 2700455206912 bytes, 5274326576 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 95B61035-78CA-4ECD-B32F-85BEA368A53D

Device                 Start        End    Sectors  Size Type
/dev/cciss/c0d1p1       2048 3904896990 3904894943  1.8T Linux LVM
/dev/cciss/c0d1p2 3904897024 5274326542 1369429519  653G Linux LVM
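
The sizes in these tables can be cross-checked from the sector counts with a little arithmetic (sectors × 512 bytes, integer-divided down to GiB). A minimal sketch for the 653G DRBD backing partitions (/dev/sda4 and /dev/cciss/c0d1p2, same sector count):

```shell
# Cross-check a partition size from its sector count: sectors * 512 bytes,
# then integer-divide down to GiB. /dev/sda4 (and c0d1p2) span 1369429519
# sectors, which fdisk displays as 653G.
sectors=1369429519
gib=$(( sectors * 512 / 1024 / 1024 / 1024 ))
echo "$gib GiB"   # 652 whole GiB; 652.99... is what fdisk rounds to 653G
```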

Edit /etc/lvm/lvm.conf, adding the following line, adjusted appropriately for each node:

Node 1

filter = [ "r|/dev/sdb1|", "r|/dev/sda4|", "r|/dev/disk/|", "r|/dev/block/|", "a|.*/|" ]

Node 2

filter = [ "r|/dev/cciss/c0d1p1|", "r|/dev/cciss/c0d1p2|", "r|/dev/disk/|", "r|/dev/block/|", "a|.*/|" ]
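The filter is first-match: the r| patterns reject DRBD's backing partitions so LVM does not scan them directly (their contents will reappear through /dev/drbdX), and the final a|.*/| accepts everything else. A tiny shell emulation of that first-match logic for node 1's list, just to illustrate the behaviour (this is not LVM itself):

```shell
# Illustrative emulation (not LVM itself) of first-match filtering with
# node 1's patterns: DRBD backing partitions are rejected, the final
# catch-all accepts the rest.
lvm_filter_check() {
  case "$1" in
    /dev/sdb1|/dev/sda4)      echo "rejected" ;;  # the r|...| entries
    /dev/disk/*|/dev/block/*) echo "rejected" ;;
    *)                        echo "accepted" ;;  # the a|.*/| catch-all
  esac
}
lvm_filter_check /dev/sdb1   # rejected: DRBD backing device
lvm_filter_check /dev/sda3   # accepted: Proxmox's own LVM PV
```

On the nodes themselves, pvscan after the change should no longer report the rejected partitions.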

Update the repositories:

apt-get update

and install DRBD:

apt-get install drbd8-utils

Below are the key DRBD files: /etc/drbd.d/global_common.conf, which must be identical on both nodes, as must the resource files. For this setup it was decided to create two resources, hence two DRBD devices.

global_common.conf
# DRBD is the result of over a decade of development by LINBIT.
# In case you need professional services for DRBD or have
# feature requests visit http://www.linbit.com
 
global {
        usage-count yes;
        # minor-count dialog-refresh disable-ip-verification
        # cmd-timeout-short 5; cmd-timeout-medium 121; cmd-timeout-long 600;
}
 
common {
        handlers {
                # These are EXAMPLE handlers only.
                # They may have severe implications,
                # like hard resetting the node under certain circumstances.
                # Be careful when chosing your poison.
 
                # pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                # pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                # local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
                # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
        }
 
        startup {
                wfc-timeout 60;
                degr-wfc-timeout 50;
                outdated-wfc-timeout 50;
                #wait-after-sb
        }
 
        options {
                # cpu-mask on-no-data-accessible
        }
 
        disk {
                # size on-io-error fencing disk-barrier disk-flushes
                # disk-drain md-flushes
                resync-rate 50M;
                #resync-after al-extents
                # c-plan-ahead c-delay-target c-fill-target c-max-rate
                # c-min-rate disk-timeout
                on-io-error detach;
                disk-barrier no;
                disk-flushes no;
        }
        net {
                protocol C;
                #timeout max-epoch-size max-buffers
                # connect-int ping-int sndbuf-size rcvbuf-size ko-count
                allow-two-primaries;
                #cram-hmac-alg shared-secret
                after-sb-0pri discard-zero-changes;
                after-sb-1pri discard-secondary;
                after-sb-2pri disconnect;
                #always-asbp rr-conflict
                # ping-timeout data-integrity-alg tcp-cork on-congestion
                # congestion-fill congestion-extents csums-alg verify-alg
                # use-rle
        }
}

Resource files:

r0.res
resource r0 {
        #startup {
        #        become-primary-on both;
        #}
        on pvequ1 {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 192.168.2.1:7788;
                meta-disk internal;
        }
        on pvequ2 {
                device /dev/drbd0;
                disk /dev/cciss/c0d1p1;
                address 192.168.2.2:7788;
                meta-disk internal;
        }
}
r1.res
resource r1 {
        #startup {
        #        become-primary-on both;
        #}
        on pvequ1 {
                device /dev/drbd1;
                disk /dev/sda4;
                address 192.168.2.1:7789;
                meta-disk internal;
        }
        on pvequ2 {
                device /dev/drbd1;
                disk /dev/cciss/c0d1p2;
                address 192.168.2.2:7789;
                meta-disk internal;
        }
}

When creating the metadata, at the first start of the service, and during the first synchronization, it is best to keep the line in the .res files that makes the resource primary at startup commented out (on both nodes, for both resources).

Now that the resources are defined, create the metadata for each device:

drbdadm create-md r0
drbdadm create-md r1

Answer yes and confirm if prompted, or double-check the data if an error is returned.

Start the service on both nodes:

systemctl start drbd.service

To check the status, use:

cat /proc/drbd

To perform the first synchronization:

drbdadm -- --overwrite-data-of-peer primary r0
drbdadm -- --overwrite-data-of-peer primary r1

When it finishes, we will have something like the following:

version: 8.4.7 (api:1/proto:86-101)
srcversion: 4702B0F5608C26F576DF75A 
 0: cs:SyncTarget ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
    ns:0 nr:1898108168 dw:1897990952 dr:819848 al:8 bm:0 lo:0 pe:7 ua:0 ap:0 ep:1 wo:d oos:0
 1: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
    ns:0 nr:684756816 dw:684693872 dr:817472 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0

Now we can enable the service at boot:

systemctl enable drbd.service

and uncomment the section of the .res files that enables the primary role at startup.
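
After that change, the startup stanza in each .res file (both nodes, both resources) reads, with the comments removed:

        startup {
                become-primary-on both;
        }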
