Update 22 June 2020: I have updated this post to be compatible with LXD 4.0. I also adapted it to create an empty profile that handles only the macvlan configuration and is independent of the default profile. Finally, the profile is now called macvlan (previous name: lanprofile).
WARNING #1: By using macvlan, your computer’s network interface will appear on the network to have more than one MAC address. This is fine for Ethernet networks. However, if your interface is a wireless interface (with security like WPA/WPA2), then the access point will reject any additional MAC addresses coming from your computer. In that specific case, none of this will work.
WARNING #2: If your host is a virtual machine, then it is likely that the VM software will block the DHCP requests of the containers. Both VMware and VirtualBox have options to allow promiscuous mode (somewhere in their network settings); you need to enable it. Keep in mind that people have reported success only on VMware. Currently, on VirtualBox, you additionally need to switch the host’s network interface into PROMISC mode as a workaround.
In LXD terminology, you have the host and then you have the many containers on this host. The host is the computer where LXD is running. By default, all containers run hidden in a private network on the host. The containers are not accessible from the local network, nor from the Internet. However, they have network access to the Internet through the host. This is NAT networking.
How can we get some containers to receive an IP address from the LAN and be accessible on the LAN?
This can be achieved using macvlan (L2) virtual network interfaces, a feature provided by the Linux kernel.
In this post, we are going to create a new LXD profile and configure macvlan in it. Then, we launch new containers under the new profile, or attach existing containers to the new profile (so that they, too, get a LAN IP address).
Creating a new LXD profile for macvlan
Let’s see what LXD profiles are available.
$ lxc profile list
+------------+---------+
|    NAME    | USED BY |
+------------+---------+
| default    | 11      |
+------------+---------+
There is a single profile, called default, the default profile. It is used by 11 LXD containers on this system.
We create a new profile, called macvlan.
$ lxc profile create macvlan
Profile macvlan created
$ lxc profile list
+------------+---------+
|    NAME    | USED BY |
+------------+---------+
| default    | 11      |
+------------+---------+
| macvlan    | 0       |
+------------+---------+
What are the default settings of a new profile?
$ lxc profile show macvlan
config: {}
description: ""
devices: {}
name: macvlan
used_by: []
$
We need to add a nic device with nictype macvlan and parent set to the appropriate network interface on the host, and we are then ready to go. Let’s identify the correct parent using the ip route command. This command shows the default network route; it also shows the name of the device (dev), which in this case is enp5s12. (Before systemd, these names used to be eth0 or wlan0. Now, the name varies depending on the specific network card.)
$ ip route show default 0.0.0.0/0
default via 192.168.1.1 dev enp5s12 proto static metric 100
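If you want to script this step, the parent interface name can be pulled out of the ip route output. A minimal sketch: the sample route line below is hardcoded from this post’s example, so on a real host you would substitute the actual output of ip route show default.

```shell
# Hardcoded sample of `ip route show default` output (from this post).
# On a real host, use instead: route_line=$(ip route show default)
route_line='default via 192.168.1.1 dev enp5s12 proto static metric 100'

# Print the word that follows "dev" -- the parent interface name
parent=$(printf '%s\n' "$route_line" | awk '{for (i = 1; i < NF; i++) if ($i == "dev") print $(i+1)}')
echo "$parent"
```

For the sample line this prints enp5s12, which is the value to pass as parent= in the next step.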
Now we are ready to add the appropriate device to the macvlan LXD profile. We use the lxc profile device add command to add a device eth0 to the profile macvlan, setting nictype to macvlan and parent to enp5s12.
$ lxc profile device add macvlan eth0 nic nictype=macvlan parent=enp5s12
Device eth0 added to macvlan
$ lxc profile show macvlan
config: {}
description: ""
devices:
  eth0:
    nictype: macvlan
    parent: enp5s12
    type: nic
name: macvlan
used_by: []
$
Well, that’s it. We are now ready to launch containers using this new profile, and they will get an IP address from the DHCP server of the LAN.
Launching LXD containers with the new profile
Let’s launch two containers using the new macvlan profile and then check their IP addresses. We need to specify the default profile first, and then the macvlan profile. By doing this, the container gets the appropriate base configuration from the first profile, and the networking is then overridden by the macvlan profile.
$ lxc launch ubuntu:18.04 net1 --profile default --profile macvlan
Creating net1
Starting net1
$ lxc launch ubuntu:18.04 net2 --profile default --profile macvlan
Creating net2
Starting net2
$ lxc list
+------+---------+---------------------+------+-----------+-----------+
| NAME | STATE   | IPV4                | IPV6 | TYPE      | SNAPSHOTS |
+------+---------+---------------------+------+-----------+-----------+
| net1 | RUNNING | 192.168.1.7 (eth0)  |      | CONTAINER | 0         |
+------+---------+---------------------+------+-----------+-----------+
| net2 | RUNNING | 192.168.1.3 (eth0)  |      | CONTAINER | 0         |
+------+---------+---------------------+------+-----------+-----------+
$
Both containers got their IP address from the LAN router. Here is the router administration screen that shows the two containers. I edited the names by adding LXD in front to make them look nicer. The containers look and feel just like new LAN computers!
Let’s ping from one container to the other.
$ lxc exec net1 -- ping -c 3 192.168.1.7
PING 192.168.1.7 (192.168.1.7) 56(84) bytes of data.
64 bytes from 192.168.1.7: icmp_seq=1 ttl=64 time=0.064 ms
64 bytes from 192.168.1.7: icmp_seq=2 ttl=64 time=0.067 ms
64 bytes from 192.168.1.7: icmp_seq=3 ttl=64 time=0.082 ms
--- 192.168.1.7 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2036ms
rtt min/avg/max/mdev = 0.064/0.071/0.082/0.007 ms
You can ping these containers from other computers on your LAN! However, the host and these macvlan containers cannot communicate with each other over the network. This has to do with how macvlan works in the Linux kernel.
Troubleshooting
Help! I cannot ping between the host and the containers!
To get the host and containers to communicate with each other, you need some additional changes on the host so that it, too, gets added to the macvlan. It is discussed here, though I did not test it because I do not need the containers to communicate with the host. If you test it, please report below.
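For reference, the usual workaround is to add a second macvlan interface on the host itself, on the same parent, so that host traffic to the containers goes through it. I have not tested this; the sketch below only prints the commands (a dry run), and the interface name enp5s12 and the free LAN address 192.168.1.200 are assumptions taken from this post’s examples.

```shell
# Dry run: print (do not execute) the commands that would create a host-side
# macvlan shim. Run them manually with sudo on a real host.
# ASSUMPTIONS: parent interface enp5s12 and an unused LAN address 192.168.1.200.
cmds=$(cat <<'EOF'
ip link add macvlan-shim link enp5s12 type macvlan mode bridge
ip addr add 192.168.1.200/24 dev macvlan-shim
ip link set macvlan-shim up
EOF
)
printf '%s\n' "$cmds"
```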
Help! I do not get those fancy net1.lxd, net2.lxd hostnames anymore!
The default LXD DHCP server assigns hostnames like net1.lxd, net2.lxd to each container. Then, you can get the containers to communicate with each other using the hostnames instead of the IP addresses.
When using the LAN DHCP server, you would need to configure it as well in order to get such nice hostnames.
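For example, if your LAN’s DHCP server happens to be dnsmasq (common on home routers), hostname-to-MAC mappings can be added to its configuration. This is a sketch; the MAC addresses and the lan domain below are hypothetical placeholders, not values from this post.

```
# /etc/dnsmasq.conf (or a file under /etc/dnsmasq.d/)
# Map each container's MAC address to a fixed hostname (MACs are placeholders)
dhcp-host=00:16:3e:aa:bb:01,net1
dhcp-host=00:16:3e:aa:bb:02,net2
# Optionally resolve them under a local domain, e.g. net1.lan
domain=lan
```

You can find each container’s MAC address in the volatile.eth0.hwaddr key of lxc config show.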
Help! Can these new macvlan containers read my LAN network traffic?
The new macvlan LXD containers (that got a LAN IP address) can only see their own traffic and also any LAN broadcast packets. They cannot see the traffic meant for the host, nor the traffic for the other containers.
Help! I get the error Error: Device validation failed "eth0": Cannot use "nictype" property in conjunction with "network" property
A previous version of this tutorial used the old style of adding a device to a LXD profile. The old style was supposed to keep working in compatibility mode in newer versions of LXD, but at least in LXD 4.2 it does not, and gives the following error. You should not get this error anymore, since I have updated the post. You may still get an error if you are using a very old version of LXD; in that case, please report back in the comments.
$ lxc profile device set macvlan eth0 nictype macvlan
Error: Device validation failed "eth0": Cannot use "nictype" property in conjunction with "network" property
$
Summary
With this tutorial, you are able to create containers that get an IP address from the LAN (same source as the host), using macvlan.
A downside is that the host and these macvlan containers cannot communicate over the network. For some, this is actually a neat advantage, because it shields the host from the containers.
The macvlan containers are then visible on the LAN and work just like any other LAN computer.
This tutorial has been updated with the newer commands to edit a LXD profile (lxc profile device add). The older command now gives an error, as you can see in the more recent comments below.
Great write-up Simos…
Great – thanks for the brilliant written article – it works now!
I followed the post above, but the container cannot get an IP address. Can you tell me how to fix the problem?
The test environment is DHCP.
[root@ns01 ~]# uname -a
Linux ns01 3.10.0-693.21.1.el7.x86_64 #1 SMP Wed Mar 7 19:03:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux (centos 7)
[root@ns01 ~]# lxc list
+------+---------+------+------+------------+-----------+
| NAME | STATE   | IPV4 | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+------+------+------------+-----------+
| c2   | RUNNING |      |      | PERSISTENT | 0         |
+------+---------+------+------+------------+-----------+
[root@ns01 ~]# lxc profile show lanprofile
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: macvlan
    parent: ens160
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: lanprofile
used_by:
- /1.0/containers/c2
[root@ns01 ~]# ifconfig ens160
ens160: flags=4163 mtu 1500
        inet 192.168.50.30 netmask 255.255.255.0 broadcast 192.168.50.255
        inet6 fe80::9e56:786c:39be:e30a prefixlen 64 scopeid 0x20
        ether 00:0c:29:22:66:3e txqueuelen 1000 (Ethernet)
        RX packets 4342 bytes 1841714 (1.7 MiB)
        RX errors 0 dropped 12 overruns 0 frame 0
        TX packets 2515 bytes 262656 (256.5 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@ns01 ~]# ip route show 0.0.0.0/0
default via 192.168.50.1 dev ens160 proto static metric 100
======= below c2 (centos 7) ======
[root@c2 ~]# ip -d link show eth0
7: eth0@if2: mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/ether 00:16:3e:f4:6f:8d brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0
    macvlan mode bridge addrgenmode eui64
Thanks.
Simos Xenitellis June 16, 2018 10:22
Author
Do you use a VM for the host? If so, then the VM might be the issue.
Also, if the host is on a wireless interface, then that’s definitely the issue.
I am getting this error when I try to start a macvlan container :
ray@USN-LPC:/var/lib/lxd/containers$ lxc start LPC2
error: Failed to run: /usr/bin/lxd forkstart LPC2 /var/lib/lxd/containers /var/log/lxd/LPC2/lxc.conf:
Try `lxc info --show-log LPC2` for more info
ray@USN-LPC:/var/lib/lxd/containers$ lxc info LPC2
Name: LPC2
Remote: unix://
Architecture: x86_64
Created: 2018/05/26 13:14 UTC
Status: Stopped
Type: persistent
Profiles: lanprofile
ray@USN-LPC:/var/lib/lxd/containers$ lxc profile show lanprofile
config:
  environment.http_proxy: ""
  user.network_mode: ""
description: Default LXD profile
devices:
  eth0:
    nictype: macvlan
    parent: enp0s3
    type: nic
  root:
    path: /
    pool: lxdpool
    type: disk
name: lanprofile
used_by:
- /1.0/containers/LPC2
Ideas?
Thanks.
Ray
Simos Xenitellis June 16, 2018 10:20
Author
What does *lxc info --show-log LPC2* show? It should give a hint of the exact error.
Robert M. Koretsky August 21, 2018 20:50
This worked perfectly on a CentOS 7.5 1804 (core) host on 8/21/2018. I created an LXD ZFS-backed container for Ubuntu 18.04 exactly as you describe, and it automatically got an address assigned on my home network LAN by the DHCP server on that LAN. The private network on the host that LXD gave me was useless, but the methods you illustrate here are very useful. Thanks for your work, and for the clarity of exposition as well! Bravo!
Simos Xenitellis September 3, 2018 21:58
Author
Thanks Robert for the kind words!
Wouter September 11, 2018 18:34
Thanks for your article. I did not succeed with macvlan. On Ubuntu 17.10 server I have a container running with the name proxy:
root@box:~# lxc list
+-------+---------+------+------+------------+-----------+
| NAME  | STATE   | IPV4 | IPV6 | TYPE       | SNAPSHOTS |
+-------+---------+------+------+------------+-----------+
| proxy | RUNNING |      |      | PERSISTENT | 0         |
+-------+---------+------+------+------------+-----------+
with this MAC:
root@box:~# lxc exec proxy ifconfig |grep ether
ether 00:16:3e:23:71:a9 txqueuelen 1000 (Ethernet)
that should be getting an IP; at least, the container is asking for one and is offered one:
Sep 11 12:06:51 box dhcpd[5314]: DHCPDISCOVER from 00:16:3e:23:71:a9 via enp1s0f1
Sep 11 12:06:51 box dhcpd[5314]: DHCPOFFER on 192.168.12.25 to 00:16:3e:23:71:a9 via enp1s0f1
but is not:
root@box:/var/log# lxc exec proxy ip a |grep inet\ 192
root@box:/var/log# lxc exec proxy ifconfig |grep 192
and ICMP doesnt work:
root@box:/var/log# ping 192.168.12.25 -c1
PING 192.168.12.25 (192.168.12.25) 56(84) bytes of data.
From 192.168.12.1 icmp_seq=1 Destination Host Unreachable
root@box:/var/log# lxc profile show lanprofile |grep "nictype|parent"
nictype: macvlan
parent: enp1s0f1
I stopped and started the container several times and also did a reinstall / reinit of lxd several times. Please help :).
root@box:/var/log# lxc –version
2.18
root@box:/var/log# lxc network list |grep "lx|enp1s0f1"
| enp1s0f1 | physical | NO | | 1 |
| lxcbr0 | bridge | NO | | 0 |
| lxdbr0 | bridge | YES | | 0 |
Author
Since your container requests an IP address and is offered one, any problem should be related to the operating system of the container. I cannot think of a scenario where the DHCPOFFER is somehow blocked and does not reach the container. You can get the DHCP client in the container to output debugging information in order to see how it processes the DHCPOFFER.
In addition, I notice that you are running LXD 2.18. If I remember correctly, you probably use a PPA or the backports repository. I suggest upgrading to the snap package, which currently has LXD 3.4. An upgrade will not directly fix the issue that you are facing; however, version 2.18 of LXD is not supported, as far as I remember. Only versions 2.0.11 (Ubuntu 16.04) and 3.0.x (Ubuntu 18.04) are supported, until the EOL of the corresponding LTS Ubuntu version.
I was running 2.18 and upgraded via backports (thanks for the hint) to 2.21. No luck. Also, I launched both 16.04 and 18.04 containers with 2.21: no luck. dhclient -v eth0 says
Listening on LPF/eth0/00:16:3e:db:e2:ce
Sending on LPF/eth0/00:16:3e:db:e2:ce
Sending on Socket/fallback
DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 3 (xid=0x548c1235)
how… I don’t understand, but it seems the answer (DHCPOFFER) is not received. Giving up for now. First, find time to reinstall the box with Ubuntu 18.04 and then further with LXD 3.x.
thanks
Author
If there was virtualization in play (like VirtualBox), then this would be explained easily. There is an issue with Virtualbox and the result is exactly what you describe. That is, the container does not receive the DHCPOFFER in order to continue with the rest of the protocol. The workaround in Virtualbox would be to put the host’s interface in promiscuous mode.
@Simos Xenitellis on September 12, 2018 at 16:24
Thanks for thinking with me. It’s Ubuntu 17.10 on bare metal. Also, I installed the snap package with LXD 3.x (couldn’t wait to find time to install 18.04 :)) and it’s the same. Both 16.04 and 18.04 containers request DHCP; an offer is sent but not received.
Author
The macvlan functionality works in LXD, therefore I assume there is some issue with the network driver or network settings.
I do not have the full picture. From the interface name (enp1s0f1), I assume it’s the second network interface of two? Can you verify with tshark that the DHCPOFFER is sent to the correct network interface?
Yes, enp1s0f1 is the second port of this NIC
https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/ethernet-controller-i350-datasheet.pdf
Yes, I did tcpdump on enp1s0f1 and saw the DHCPOFFER there; it’s also the only interface where dhcpd is listening on.
Author
I think that this stage you can post this issue on https://discuss.linuxcontainers.org/
If it is a bug in LXD, it should be then reported on Github so that it gets fixed.
I can confirm that this is not working on a VirtualBox VM (ubuntu 18.04) as LXD host.
Side comment… if you have a Proxmox host inside a VirtualBox VM, there is no problem whatsoever getting public IPs for containers working (shared adapter, no promiscuous mode). So if anyone could break down how Proxmox handles networking for LXC containers, a cure might be around the corner.
For LXC containers to receive an address from DHCP, you either have to use:
1. The "Adapter Type" should be set to "PCnet-FAST III" (not the default, which is an Intel PRO/1000 variant), see https://www.virtualbox.org/ticket/6519
2. "Promiscuous Mode" should be set to "Allow All".
as mentioned here https://forums.virtualbox.org/viewtopic.php?t=59215
or you can use the Intel PRO/1000 variant, but then you have to create a br0 manually on the host and add the host’s NIC to br0 through /etc/network/interfaces or through netplan. Then assign br0 to the containers, and of course "Promiscuous Mode" should be set to "Allow All".
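A netplan configuration for such a bridge might look like the sketch below. This is untested here, and the filename and the interface name enp0s3 are assumptions; adjust them to your host’s NIC.

```yaml
# /etc/netplan/99-br0.yaml (hypothetical filename)
# Creates br0 on top of the host NIC; the bridge, not the NIC, gets DHCP.
network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: false
  bridges:
    br0:
      interfaces: [enp0s3]
      dhcp4: true
```

Apply it with sudo netplan apply, then use br0 as the parent of a bridged nic device in the LXD profile.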
Author
I have tried with Virtualbox (LXD running on Ubuntu 18.04 in a VirtualBox VM).
I set the Promiscuous Mode setting in VirtualBox, but macvlan did not work either. However, when you also set the host’s Ethernet interface to PROMISC mode, then it works.
You can set it to PROMISC mode
ip link set eth5 promisc on
or
ifconfig eth5 promisc
When you run proxmox, is the interface in PROMISC mode?
I have followed this article and another YouTube video and have created a set of LXD containers, all using Ubuntu 18.04. I want to expose one of the containers to the internet, since it is the primary web server for this system. All of the systems use static IPs via macvlan and have been configured with netplan 50-cloud.
Host System – IP 192.168.86.100
LXC System – NextCloud Server – 192.168.86.101
LXC System – OnlyOffice Document Server – 192.168.86.102
I can ping from my workstation to each of the systems, and from the system back to my work station, but when I try to ping from 2 or 3 to 1, it gets no response. I ping from 1 to 2 or 3, I get no response. I port forward to #2 on port 80. I try to access it from the web, I get page not found. I am sure I missed something, but have not found the magic key. Any thoughts?
I got this method working, but have found I can’t set static IPs on the containers. Did you experience anything similar? Do you think it might be because the network is no longer managed, or a bug? It’s driving me mad.
lxc network list
+------------+----------+---------+-------------+---------+
| NAME       | TYPE     | MANAGED | DESCRIPTION | USED BY |
+------------+----------+---------+-------------+---------+
| enp3s0     | physical | NO      |             | 0       |
+------------+----------+---------+-------------+---------+
| enp3s0.102 | vlan     | NO      |             | 2       |
+------------+----------+---------+-------------+---------+
lxc config device set disco-test-002 eth0 ipv4.address 172.16.102.33
lxc config device get disco-test-002 eth0 ipv4.address
172.16.102.33
lxc list # After restart etc etc, ip returns to the dhcp leased address
+----------------+---------+----------------------+------+------------+-----------+
| NAME           | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS |
+----------------+---------+----------------------+------+------------+-----------+
| disco-test-001 | RUNNING | 172.16.102.11 (eth0) |      | PERSISTENT |           |
+----------------+---------+----------------------+------+------------+-----------+
| disco-test-002 | RUNNING | 172.16.102.16 (eth0) |      | PERSISTENT |           |
+----------------+---------+----------------------+------+------------+-----------+
lxc version
Client version: 3.13
Server version: 3.13
lxc config show disco-test-002
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu disco amd64 (20190607_07:42)
  image.os: Ubuntu
  image.release: disco
  image.serial: "20190607_07:42"
  volatile.base_image: fd3f73af851567ca5a4a3083b305a9e4c89fb0e52e74e9da3d095311b36f992b
  volatile.eth0.hwaddr: 00:16:3e:26:84:9a
  volatile.eth0.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
devices:
  eth0:
    ipv4.address: 172.16.102.33
    nictype: macvlan
    parent: enp3s0.102
    type: nic
ephemeral: false
profiles:
- default
stateful: false
description: ""
Author
When you use macvlan, the container’s networking is no longer managed by LXD (“unmanaged”).
Therefore, commands like “lxc config device set disco-test-002 eth0 ipv4.address 172.16.102.33” have no effect on such containers.
With macvlan, it is up to you to configure the networking of a container; LXD cannot do the configuration for you. Having said that, one option to set a static network configuration on a container using macvlan is to use cloud-init in LXD. LXD supports cloud-init configuration and passes it directly to the container. See my other post on cloud-init.
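As a sketch of that cloud-init approach (untested here; the gateway and nameserver addresses and the interface name eth0 are assumptions based on the commenter’s 172.16.102.0/24 subnet), the network configuration can be passed through the container’s config:

```yaml
# Hypothetical snippet; apply with e.g.
#   lxc config set disco-test-002 user.network-config - < net.yaml
# (LXD of that era used the user.network-config key; newer releases
#  also accept cloud-init.network-config)
config:
  user.network-config: |
    version: 2
    ethernets:
      eth0:
        addresses: [172.16.102.33/24]
        gateway4: 172.16.102.1
        nameservers:
          addresses: [172.16.102.1]
```

The container’s image must include cloud-init for this to take effect, and the config is applied on first boot.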
Hi Simos,
This is a great writeup, and your contribution to open source is really cool. I’ve got a couple of questions, which perhaps you have already answered somewhere, so please bear with me for asking again.
I have a bare-metal host that I’m trying to start using lxc/lxd on. It’s running Ubuntu 18.04 and I just initialized LXD on it. I have downloaded a guest Ubuntu 16.04 LXC container and am trying to assign an IPv6 address to it based on macvlan (preferred), but it does not seem to be working.
Any leads or step by step guide?
Thank you.
Author
Thanks!
When you use macvlan, your container appears as yet another system on your LAN. It is good to set it up to get the network configuration from the network; therefore, it is your network admin’s task to set it up. Verify whether you get an IPv4 address from the network before investigating an IPv6 address. If you want to set the IPv6 configuration manually, then that is a separate tutorial.

Hi, what about the case where we run LXD on VirtualBox machines (via Vagrant)?
It uses a NAT network for internet access and a host-only network to connect the VM machines.
I run LXD on the first VM, and if I use the default network settings with lxdbr0, I have an internet connection inside the containers, but I can’t access a container by IP from another VM on the same network.
If I create an additional private network in Vagrant with DHCP IP resolving (eth2) and define a macvlan in the LXD profile based on this device, I have access to the container’s IP from another VM, but internet access is not available inside the containers.
Author
For testing, it is better to try first without Vagrant. Once you get a working configuration, you can automate with Vagrant.
There are several combinations for the networking in VirtualBox virtual machines as shown in the table below,
https://www.thomas-krenn.com/en/wiki/Network_Configuration_in_VirtualBox
You mention that you have two VirtualBox VMs, both with “Host-only networking”? If that is the case, then according to the table any containers inside these VMs do not have Internet connectivity.
I consider the two following scenarios of using LXD with macvlan, inside a VirtualBox virtual machine:
Same as before, but with two Virtualbox virtual machines with same configuration. Have the LXD containers get IP addresses from the LAN, then access each other, the VMs and the Internet.
For this to work, you need to
a1. Set the networking in VirtualBox to "Bridged networking".
a2. In the Advanced tab, change the "Promiscuous Mode" setting from "Deny" to "Allow All". It has to be "Allow All" for this to work.
b. Start the VM and set up LXD. Create a LXD profile for macvlan (or perhaps edit the "default" profile and add the macvlan configuration in there).
c. In the VM, set the network interface into promiscuous mode as well. For example, “sudo ip link set enp0s3 promisc on”.
d. Create a LXD container with macvlan configuration. It will be able to get an IP address from your LAN and all will work fine.
e. Repeat with multiple VMs. The LXD containers will be able to access each other, the LAN, the VMs and the Internet.
Thanks for the reply. So internet access from containers inside a VM is possible only with bridged mode in Vagrant, and there is no way to create any additional IP rules inside the VM for the containers?
And one more thing: internet access for my work machine is provided by a router; I have a static IP, not DHCP mode.
To be clear, I don’t need access from the internet to the containers, only from the containers to the internet. Here is a network example (green lines: ping succeeds, red: ping fails): https://drive.google.com/file/d/1X92O94P1hxGkWMulTq1_tqOXFqErWvnQ/view?usp=sharing
Author
According to the table at https://www.thomas-krenn.com/en/wiki/Network_Configuration_in_VirtualBox#Network_Address_Translation_Service
there are three different network settings for the VM (therefore, the containers) to access the Internet. See the column “Access Guest -> external Network”.
It is OK if your work machine is assigned a static IP. If there is no DHCP server, then you would have to set up the networking of each “macvlan” container manually.
In the network diagram, VM1 is using NAT networking. Which means that the VM is not accessible directly by the LXD containers of VM2.
Thanks a lot! It was very helpful!
Hi Simos,
Thanks for the instructions provided above! I’ve followed them to the letter on several machines, but see the same problem on each: my interface is consistently dropping packets.
I’m referring to the host’s interface. The interface consistently drops 1 packet, roughly every 30 seconds. I have tested this with multiple containers running, but the packet loss remains constant (no increase with more containers).
I’ve tested this on both Ubuntu 16.04 and 18.04 across 3 different hosts, all with the same result. This happens whether the interface is in promiscuous mode or not.
Do you have any idea what the cause could be?
Author
Hi Nathan,
I have not noticed this issue. I suspect it would be a Linux kernel issue.
Can you post some instructions on how to automate this check of the dropped packet?
It would be good to be able to replicate before reporting.
Hi Simos,
Thanks for your response. I’ve put together a quick way to monitor the issue this morning:
The name of the interface being monitored should be supplied as an argument.
The kernel releases that I’ve tested this on are ‘4.4.0-112-generic’ and ‘5.0.0-29-generic’ on Ubuntu 16.04.5 and 18.04.3 respectively.
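The commenter’s monitoring script itself did not survive in this copy of the page. A minimal sketch of such a monitor, reading the kernel’s per-interface drop counters from sysfs, might look like this (the interface name is taken as the first argument, defaulting to lo just so the sketch runs anywhere):

```shell
# Print the dropped-packet counters for one interface, as exposed by the
# kernel under /sys/class/net/<iface>/statistics/.
# Usage: supply the interface name as $1; defaults to "lo" for illustration.
iface="${1:-lo}"
rx_dropped=$(cat "/sys/class/net/$iface/statistics/rx_dropped")
tx_dropped=$(cat "/sys/class/net/$iface/statistics/tx_dropped")
echo "$iface rx_dropped=$rx_dropped tx_dropped=$tx_dropped"
```

Run it in a loop (e.g. watch -n 30 sh drops.sh enp5s12, with a hypothetical filename) to see whether the counter increments roughly every 30 seconds, as reported.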
Author
Thanks!
There is a tool, dropwatch, at https://github.com/nhorman/dropwatch which may show some extra information on the cause of the packet drop.
I tried with Linux 4.15 but could not replicate (see dropped packets in the ifconfig output of the host’s interface, when a container was using macvlan over that interface).

Hi Simos,
Thanks for looking into this. Unfortunately, dropwatch hasn’t yielded any helpful information: I get long lists of ‘dropped packets’ whether packets are being dropped or not. The lists are similar and cover the same range of addresses regardless of whether packets are being dropped or not.
I will continue to investigate and update you if I find the source of the problem.
Thank you for the write-up. But I wonder if I (or perhaps you) missed something. In the intro, you outline that “we launch new containers under the new profile, or attach existing containers to the new profile”. But I don’t see the bit where we attach existing containers to the new profile. Could you show me how I would do that?
Thanks.
–wpd
Author
Thanks!
Here you go. You use lxc profile assign to assign a new set of profiles to an existing container, for example: lxc profile assign mycontainer default,macvlan. Previously, this subcommand used to be “attach”; now it’s “assign”.

Simos,
Have you seen this error before?
Error: Device validation failed "eth0": Cannot use "nictype" property in conjunction with "network" property
I posted a write-up with my own experience of deploying macvlan & LXD (see https://blog.plip.com/2019/08/17/nat-and-macvlan-on-production-lxd-plus-reverse-proxy-ssh-config/) and some users are coming across this error when they run this command:
lxc profile device set lanprofile eth0 nictype macvlan
While I’ve tried to reproduce the error, I haven’t been able to. I thought I might reach out because I feel you seem to have a much stronger grasp of this stuff than I do. I suspected maybe newer versions of LXD are causing it, but I’ve been unable to confirm.
Much thanks!!
Author
Hi mrjones!
You can show the profile with lxc profile show lanprofile in order to get an idea of what is in there.
Such an error should be easy to diagnose, because you just need to copy the profile content in order to replicate it.
Hello,
The problem that I encountered, and that mrjones relayed to you, is that it is impossible to follow this tutorial:
https://blog.simos.info/how-to-make-your-lxd-container-get-ip-addresses-from-your-lan/
There may be some prerequisites that have not been mentioned in this one.
I have a fresh install of Ubuntu Server 20.04 amd64 and lxc 4.2
Author
You are both right. The syntax in lxc profile device add has changed in a recent version of LXD, and the old syntax was supposed to work in a compatibility mode. However, in LXD 4.2 (at least) it gives the error that mrjones showed above.
If you get any other errors, please post the exact command that produces them.
With these changes, it is likely that the instructions no longer work on older versions of LXD. I do not know exactly when the format changed, so it is up to you to notify me! 🙂
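For anyone hitting the same error, here is a sketch of the current syntax (LXD 4.x; the parent interface enp3s0 is an assumption, substitute your own). It creates the NIC device with all of its properties in one command instead of setting nictype afterwards:

```shell
# Create an empty profile for the macvlan configuration.
lxc profile create macvlan

# Add the NIC device with its properties in one go; this avoids the
# "nictype"/"network" conflict that the incremental "device set" calls
# from older posts can trigger.
lxc profile device add macvlan eth0 nic nictype=macvlan parent=enp3s0

# Verify the resulting profile.
lxc profile show macvlan
```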
Hello,
I have a problem with CentOS: it does not get a network address, but with Ubuntu it works fine.
I do not know what the problem is.
Could you please help?
Author
Hi!
Can you show a full example that demonstrates that the same LXD profile for macvlan works for an Ubuntu container but does not work with a CentOS container? Include the output of lxc list, so that we can see that the Ubuntu container got an IP address but the other did not.
Such a reproducible set of instructions would be easy for me to try, and I can escalate it to the proper place to get it fixed.
Thanks for the reply.
Now I can configure an IP for the container, but the container cannot ping its own gateway, like this:
The IPs I set:
My LXD host: 10.2.17.135 (Ubuntu 20)
Container1: 10.2.17.15 (CentOS 7)
Container2: 10.2.17.16 (CentOS 7)
Container1 can ping Container2, but cannot ping my LXD host or its own gateway (10.2.17.1).
That is, 10.2.17.15 can ping 10.2.17.16, but cannot ping 10.2.17.135 or 10.2.17.1.
Hi, I’ve got a problem with CentOS 8. CentOS 7, Fedora, and Ubuntu containers get their IP properly; CentOS 8 and CentOS Stream do not get an IP.
Author
This has been addressed here, https://discuss.linuxcontainers.org/t/1-6-lxd-containers-not-assigned-ipv4-ipv6-addresses/9687
The CentOS 8 Stream container image did not automatically get an IP address from the default network setup upon starting, and this has just been fixed.
If you have cached a CentOS 8 container image, you may need to delete it first.
You mention that CentOS 8 has an issue as well. Please verify and report back so that it gets fixed too.
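Deleting a cached image can be sketched like this (the fingerprint below is a placeholder; use the one shown in your own listing):

```shell
# Show the locally cached images and their fingerprints.
lxc image list

# Delete the stale cached image by its fingerprint, for example:
lxc image delete b26773ad0a9b

# The next "lxc launch images:centos/8-Stream ..." downloads a fresh copy.
```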
Hi Sir.
I have tried the steps above, but after creating the container I did not get the IP details, as shown below:
Is there something wrong?
Thank you for the advice
Author
Hi!
Is there a VM involved with the ens3 network interface? If there is KVM or something similar, then it may be breaking macvlan.
Unfortunately, it still doesn’t work 🙁
[root@lxd1 ~]# lxc ls
+------+-------+------+------+------+-----------+----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | LOCATION |
+------+-------+------+------+------+-----------+----------+
[root@lxd1 ~]# lxc image ls
+-------+-------------+--------+-------------+--------------+------+------+-------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCHITECTURE | TYPE | SIZE | UPLOAD DATE |
+-------+-------------+--------+-------------+--------------+------+------+-------------+
[root@lxd1 ~]# lxc launch images:centos/8-Stream stream --profile default --profile macvlan
Creating stream
Starting stream
[root@lxd1 ~]# lxc ls
+--------+---------+------+------+-----------+-----------+------------------------------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | LOCATION |
+--------+---------+------+------+-----------+-----------+------------------------------+
| stream | RUNNING | | | CONTAINER | 0 | lxd1.-redacted- |
+--------+---------+------+------+-----------+-----------+------------------------------+
[root@lxd1 ~]# lxc launch images:centos/8 centos80 --profile default --profile macvlan
Creating centos80
Starting centos80
[root@lxd1 ~]# lxc ls
+----------+---------+------+------+-----------+-----------+------------------------------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | LOCATION |
+----------+---------+------+------+-----------+-----------+------------------------------+
| centos80 | RUNNING | | | CONTAINER | 0 | lxd1.-redacted- |
+----------+---------+------+------+-----------+-----------+------------------------------+
| stream | RUNNING | | | CONTAINER | 0 | lxd1.-redacted- |
+----------+---------+------+------+-----------+-----------+------------------------------+
[root@lxd1 ~]# lxc launch images:centos/7 centos7 --profile default --profile macvlan
Creating centos7
Starting centos7
[root@lxd1 ~]# lxc ls
+----------+---------+------+------+-----------+-----------+------------------------------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | LOCATION |
+----------+---------+------+------+-----------+-----------+------------------------------+
| centos7 | RUNNING | | | CONTAINER | 0 | lxd1.-redacted- |
+----------+---------+------+------+-----------+-----------+------------------------------+
| centos80 | RUNNING | | | CONTAINER | 0 | lxd1.-redacted- |
+----------+---------+------+------+-----------+-----------+------------------------------+
| stream | RUNNING | | | CONTAINER | 0 | lxd1.-redacted- |
+----------+---------+------+------+-----------+-----------+------------------------------+
[root@lxd1 ~]# lxc ls
+----------+---------+------+------+-----------+-----------+------------------------------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | LOCATION |
+----------+---------+------+------+-----------+-----------+------------------------------+
| centos7 | RUNNING | | | CONTAINER | 0 | lxd1.-redacted- |
+----------+---------+------+------+-----------+-----------+------------------------------+
| centos80 | RUNNING | | | CONTAINER | 0 | lxd1.-redacted- |
+----------+---------+------+------+-----------+-----------+------------------------------+
| stream | RUNNING | | | CONTAINER | 0 | lxd1.-redacted- |
+----------+---------+------+------+-----------+-----------+------------------------------+
[root@lxd1 ~]# lxc ls
+----------+---------+--------------------+------+-----------+-----------+------------------------------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | LOCATION |
+----------+---------+--------------------+------+-----------+-----------+------------------------------+
| centos7 | RUNNING | 10.10.1.127 (eth0) | | CONTAINER | 0 | lxd1.-redacted- |
+----------+---------+--------------------+------+-----------+-----------+------------------------------+
| centos80 | RUNNING | | | CONTAINER | 0 | lxd1.-redacted- |
+----------+---------+--------------------+------+-----------+-----------+------------------------------+
| stream | RUNNING | | | CONTAINER | 0 | lxd1.-redacted- |
+----------+---------+--------------------+------+-----------+-----------+------------------------------+
[root@lxd1 ~]# lxc image ls
+-------+--------------+--------+----------------------------------------+--------------+-----------+----------+-------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCHITECTURE | TYPE | SIZE | UPLOAD DATE |
+-------+--------------+--------+----------------------------------------+--------------+-----------+----------+-------------------------------+
| | 10504755901f | no | Centos 7 amd64 (20201217_07:08) | x86_64 | CONTAINER | 83.41MB | Dec 17, 2020 at 10:14am (UTC) |
+-------+--------------+--------+----------------------------------------+--------------+-----------+----------+-------------------------------+
| | ad35b15ede2d | no | Centos 8 amd64 (20201217_07:08) | x86_64 | CONTAINER | 125.56MB | Dec 17, 2020 at 10:13am (UTC) |
+-------+--------------+--------+----------------------------------------+--------------+-----------+----------+-------------------------------+
| | b26773ad0a9b | no | Centos 8-Stream amd64 (20201217_07:08) | x86_64 | CONTAINER | 126.81MB | Dec 17, 2020 at 10:12am (UTC) |
+-------+--------------+--------+----------------------------------------+--------------+-----------+----------+-------------------------------+
Outstandingly useful, worked first time. THANK YOU VERY MUCH.
Is there any way to get this running for a wireless interface?
You can use ipvlan or routed in order to make this (assigning a LAN IP address to the container) work. Here is the tutorial with routed: https://blog.simos.info/how-to-get-lxd-containers-get-ip-from-the-lan-with-routed-network/
Thanks for the reply. I tried with the routed network and it worked, but I was not able to access the Internet from the container (e.g. ping google.com), although other hosts are reachable from the container.
Most probably you had DNS issues. Can you tell me which Linux distribution runs on the host and which in the container? Also, is the host bare metal or some virtualization platform?
My host machine is not a VM; it is a bare-metal box running Ubuntu 20.04.
This is from inside the container: the host machine is reachable.
This is from inside the container: google.com is not reachable.
Does it ping 1.1.1.1 ?
Author
Are you still trying to get macvlan to work over a wireless interface? The only way it might work is if you disable any wireless security on the access point (i.e. disable WPA/WPA2/WEP), because the access point would have to accept two distinct MAC addresses from the same single host.
If you are using routed or ipvlan, please say so.
It would help if you show us the LXD profile(s) used when you create u1. You might have two network interfaces in the container, and one of them may be the standard one attached to lxdbr0, hence the access to other LXD containers.
My profile config:
lxc list
+------+---------+----------------------+------+-----------+-----------+
| NAME | STATE   | IPV4                 | IPV6 | TYPE      | SNAPSHOTS |
+------+---------+----------------------+------+-----------+-----------+
| u1   | RUNNING | 192.168.1.200 (eth0) |      | CONTAINER | 0         |
+------+---------+----------------------+------+-----------+-----------+
lxc exec u1 bash
root@u1:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0@if28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 9a:0e:27:a5:33:d9 brd ff:ff:ff:ff:ff:ff link-netnsid 0
root@u1:~#
Author
You are using routed in this setup, while this post is about macvlan. Can you post a comment under https://blog.simos.info/how-to-get-lxd-containers-get-ip-from-the-lan-with-routed-network/ ?
You can post your LXD profile (this last comment), and mention that name resolution (DNS) has not been configured even though it has been specified in the LXD profile.
I already have 2 containers which are configured as NAT. I tried to add the new profile with lxc profile add net1 macvlan, but then the container cannot get any IP and the Internet does not work anymore.
How can I add new macvlan profiles to my existing containers?
Hi!
The default LXD profile has the default network configuration for the container to use private bridge networking. It looks like this:
$ lxc profile show default
...
devices:
eth0:
name: eth0
nictype: bridged
parent: lxdbr0
type: nic
...
$
If you use the macvlan profile in this post as is, then it has its own section for eth0, as follows:
$ lxc profile show macvlan
...
devices:
eth0:
nictype: macvlan
parent: enp3s0
type: nic
...
$
Therefore, if you launch a container using the default profile and then add the macvlan profile, the container ends up losing the eth0/bridged device, which is replaced by eth0/macvlan, because both network interfaces have the same name.
All in all, if you want your containers to have both network interfaces, then edit your macvlan profile so that its network interface does not have the same name as the one in the default profile.
To answer your initial question, that your containers lose network connectivity when you apply the macvlan profile: most likely LXD runs in a VM (macvlan may not work in such a case), or your host uses a WiFi interface for its networking (WiFi networking with WEP/WPA/WPA2 is not compatible with macvlan).
Thank you for the excellent tutorial Simos. I very much appreciate the time you took to explain the basic concepts. The world of containers is new to me and I was finding it extremely difficult to find good resources that actually explain the concepts. Now, thanks to your tutorial, I can configure my container just the way I want.
Author
Many thanks Alexei for your kind words!
Interesting…
I’m searching for a way to do the same, but with static public IPs.
Currently I have enp35s0 with 4 public IPs assigned to it, and 3 LXD containers, and I want each container to use one of the public IPs.
Simos Xenitellis, can you give a hint, please?
Author
I got you covered. It’s this post, https://blog.simos.info/configuring-public-ip-addresses-on-cloud-servers-for-lxd-containers/
Thank you so much, this worked perfectly, with a minimum of hassle. I now have an LXD container running with a LAN IP, statically assigned from my router. Now to configure it for server duties!
Author
Thanks for your kind words! Glad it worked!
Actually famous last words 🙂
Oddly, I can’t ping or SSH into the container from the host, though I can from other PCs.
I presume this could be some routing issue? I have rebooted the host to be sure.
Author
The macvlan networking has the feature that there is no network communication between the host and any of the macvlan containers.
There is a valid explanation for this, and it is a known feature. Actually, you would choose macvlan partly because you can isolate the host from the LXD macvlan containers. This is really cool in terms of security; if a container somehow gets compromised, the host is insulated in terms of networking. Some more background reading: https://sreeninet.wordpress.com/2016/05/29/macvlan-and-ipvlan/
LXD supports several more ways to have a container get an IP address from the LAN. If you want the containers to communicate with the host, see the bridged post at https://blog.simos.info/how-to-make-your-lxd-containers-get-ip-addresses-from-your-lan-using-a-bridge/
The slight difficulty in using a public bridge is that you have to set up such a bridge on the host.
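As an aside, a commonly mentioned workaround (not covered in this post; the interface names are assumptions to adapt to your system) is to give the host its own macvlan interface on the same parent, which restores host-to-container communication while keeping macvlan for the containers:

```shell
# Hypothetical names: enp3s0 is the parent interface, mvlan0 the new
# host-side macvlan interface. Run as root.
ip link add mvlan0 link enp3s0 type macvlan mode bridge
ip link set mvlan0 up

# Obtain a LAN IP address for the new interface via DHCP.
dhclient mvlan0

# Traffic between the host and the macvlan containers now flows
# through mvlan0 instead of being dropped at enp3s0.
```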
Sorry for the spam; I resolved the issue. Obviously you can’t connect to the host with macvlan. I tried routed but couldn’t get it to work: the container got the IP, but couldn’t talk to anything.
Got bridged to work; quite straightforward in the end.
Thanks for all the great tutorials!