vmbr0 is created. VLAN interfaces can then be configured in the vmbr0.N format (e.g., vmbr0.10 → VLAN ID 10).
# nano /etc/network/interfaces
auto lo
iface lo inet loopback
iface enp5s0f0 inet manual
iface enp5s0f1 inet manual
auto vmbr0
iface vmbr0 inet static
address 192.168.204.240/24
gateway 192.168.204.254
bridge-ports enp5s0f0
bridge-stp off
bridge-fd 0
+ auto vmbr1
+ iface vmbr1 inet static
+ address 192.168.204.241/24
+ bridge-ports enp5s0f1
+ bridge-stp off
+ bridge-fd 0
source /etc/network/interfaces.d/*
# systemctl restart networking
# ip a
Example output:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp5s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
link/ether 00:25:90:c6:d2:42 brd ff:ff:ff:ff:ff:ff
3: enp5s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr1 state UP group default qlen 1000
link/ether 00:25:90:c6:d2:43 brd ff:ff:ff:ff:ff:ff
6: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:25:90:c6:d2:42 brd ff:ff:ff:ff:ff:ff
inet 192.168.204.240/24 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::225:90ff:fec6:d242/64 scope link
valid_lft forever preferred_lft forever
7: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:25:90:c6:d2:43 brd ff:ff:ff:ff:ff:ff
inet 192.168.204.241/24 scope global vmbr1
valid_lft forever preferred_lft forever
inet6 fe80::225:90ff:fec6:d243/64 scope link
valid_lft forever preferred_lft forever
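As an extra check (not part of the original steps), iproute2 can show which NIC is enslaved to which bridge; enp5s0f0 should be listed under vmbr0 and enp5s0f1 under vmbr1.
# bridge link show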
# nano /etc/network/interfaces
auto lo
iface lo inet loopback
iface enp5s0f0 inet manual
iface enp5s0f1 inet manual
auto vmbr0
iface vmbr0 inet static
address 192.168.204.240/24
gateway 192.168.204.254
bridge-ports enp5s0f0
bridge-stp off
bridge-fd 0
- auto vmbr1
- iface vmbr1 inet static
- address 192.168.204.241/24
- bridge-ports enp5s0f1
- bridge-stp off
- bridge-fd 0
source /etc/network/interfaces.d/*
# systemctl restart networking
# ip a
Example output:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp5s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
link/ether 00:25:90:c6:d2:42 brd ff:ff:ff:ff:ff:ff
3: enp5s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 00:25:90:c6:d2:43 brd ff:ff:ff:ff:ff:ff
8: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:25:90:c6:d2:42 brd ff:ff:ff:ff:ff:ff
inet 192.168.204.240/24 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::225:90ff:fec6:d242/64 scope link
valid_lft forever preferred_lft forever
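To double-check that vmbr1 is really gone after the restart (an additional check, not in the original), list only the bridge devices in brief format; only vmbr0 should remain.
# ip -br link show type bridge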
# nano /etc/network/interfaces
auto lo
iface lo inet loopback
iface enp5s0f0 inet manual
iface enp5s0f1 inet manual
+ auto bond0
+ iface bond0 inet manual
+ bond-slaves enp5s0f0 enp5s0f1
+ bond-miimon 100
+ bond-mode active-backup
+ bond-primary enp5s0f0
auto vmbr0
iface vmbr0 inet static
address 192.168.204.240/24
gateway 192.168.204.254
- bridge-ports enp5s0f0
+ bridge-ports bond0
bridge-stp off
bridge-fd 0
source /etc/network/interfaces.d/*
# systemctl restart networking
# cat /proc/net/bonding/bond0
Example output:
Ethernet Channel Bonding Driver: v6.8.12-4-pve
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: enp5s0f0 (primary_reselect always)
Currently Active Slave: enp5s0f0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0
Slave Interface: enp5s0f0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:90:c6:d2:42
Slave queue ID: 0
Slave Interface: enp5s0f1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:90:c6:d2:43
Slave queue ID: 0
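The currently active slave can also be read directly from sysfs (an optional check not shown above); with bond-primary set to enp5s0f0 and both links up, it should print enp5s0f0.
# cat /sys/class/net/bond0/bonding/active_slave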
# ip link set enp5s0f0 down
# cat /proc/net/bonding/bond0
Example output:
Ethernet Channel Bonding Driver: v6.8.12-4-pve
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: enp5s0f0 (primary_reselect always)
Currently Active Slave: enp5s0f1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0
Slave Interface: enp5s0f0
MII Status: down
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 00:25:90:c6:d2:42
Slave queue ID: 0
Slave Interface: enp5s0f1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:90:c6:d2:43
Slave queue ID: 0
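To complete the failover test (a follow-up step not in the original), bring the primary link back up and check the bond again; because the bond uses primary_reselect always, the active slave should return to enp5s0f0 within one MII polling interval.
# ip link set enp5s0f0 up
# cat /proc/net/bonding/bond0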
# nano /etc/network/interfaces
auto lo
iface lo inet loopback
iface enp5s0f0 inet manual
iface enp5s0f1 inet manual
auto bond0
iface bond0 inet manual
bond-slaves enp5s0f0 enp5s0f1
bond-miimon 100
bond-mode active-backup
bond-primary enp5s0f0
auto vmbr0
iface vmbr0 inet static
address 192.168.204.240/24
gateway 192.168.204.254
bridge-ports bond0
bridge-stp off
bridge-fd 0
+ bridge-vlan-aware yes
+ bridge-vids 2-4094
source /etc/network/interfaces.d/*
# systemctl restart networking
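With the bridge now VLAN-aware, a tagged interface can be defined on the host in the vmbr0.N format mentioned earlier, or a VM NIC can be tagged from the Proxmox CLI. The VLAN ID 10, the address 192.168.10.240/24, and the VM ID 100 below are illustrative assumptions, not values from the original setup.
# nano /etc/network/interfaces
auto vmbr0.10
iface vmbr0.10 inet static
address 192.168.10.240/24
# qm set 100 --net0 virtio,bridge=vmbr0,tag=10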
# nano /etc/network/interfaces
auto lo
iface lo inet loopback
iface enp5s0f0 inet manual
iface enp5s0f1 inet manual
auto bond0
iface bond0 inet manual
bond-slaves enp5s0f0 enp5s0f1
bond-miimon 100
bond-mode active-backup
bond-primary enp5s0f0
auto vmbr0
iface vmbr0 inet static
address 192.168.204.240/24
gateway 192.168.204.254
bridge-ports bond0
bridge-stp off
bridge-fd 0
- bridge-vlan-aware yes
- bridge-vids 2-4094
source /etc/network/interfaces.d/*
# systemctl restart networking
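As an alternative to restarting the whole networking service (not used in the original steps), Proxmox VE ships ifupdown2, so pending changes in /etc/network/interfaces can usually be applied in place:
# ifreload -a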