Docker MacVLAN Networks

  • vnull
  • Pub Jan 6, 2023
  • Edited Jan 9, 2023
  • 6 minutes read


MacVLAN configures sub-interfaces (also known as slave devices) of a parent physical Ethernet interface, each with its own unique MAC address and, as a result, its own IP address. Applications, virtual machines, and containers can then be bound to a specific sub-interface in order to connect directly to the physical network using their own MAC and IP address.

Macvlan assigns each sub-interface its own unique MAC address; these can also be assigned manually. Two caveats:
  • Most NICs have a limit on the number of MAC addresses they support, and exceeding that limit may degrade the system’s performance.
  • According to the IEEE 802.11 protocol specification, multiple MAC addresses on a single client are not allowed, so macvlan sub-interfaces will be blocked by the wireless interface driver or the AP.
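A useful detail about the auto-assigned addresses: Docker-generated container MACs (such as the 02:42:c0:a8:00:02 seen in the tcpdump output later in this post) are the locally administered 02:42 prefix followed by the container’s IPv4 address in hex. A small sketch decoding one:

```shell
# Docker-generated MACs: 02:42 prefix + the container's IPv4 address in hex.
mac="02:42:c0:a8:00:02"   # MAC observed on a container interface in this post
ip_hex="${mac#02:42:}"    # strip the prefix -> c0:a8:00:02

# Convert each hex octet to decimal to recover the embedded IP address.
set -- $(echo "$ip_hex" | tr ':' ' ')
printf '%d.%d.%d.%d\n' "0x$1" "0x$2" "0x$3" "0x$4"
```

Here that yields 192.168.0.2, so you can read a container’s IP straight off a packet capture.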

Use MacVLAN networks

Legacy applications, or applications which monitor network traffic, may expect to be directly connected to the physical network. In this situation, use the macvlan network driver to assign a MAC address to each container’s virtual network interface, making it appear to be a physical network interface directly connected to the physical network. You need to designate a physical interface on your Docker host to use for the macvlan, as well as the subnet and gateway of the macvlan. You can also isolate macvlan networks from each other by using different physical network interfaces. Keep the following things in mind:

  • It is very easy to unintentionally damage your network through IP address exhaustion or “VLAN spread”, a situation in which an inappropriately large number of unique MAC addresses ends up on the network.

  • Networking equipment needs to be able to handle “promiscuous mode”, where one physical interface can be assigned multiple MAC addresses.

  • If your application can work using a bridge (on a single Docker host) or overlay (to communicate across multiple Docker hosts), these solutions may be better.
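The exhaustion point above is worth quantifying: every macvlan container consumes a real address on the physical subnet, alongside every other device on the LAN. A quick sketch of the arithmetic for an assumed /24 (the prefix length is an example, not a value from this post):

```shell
# Usable host addresses in a /24 -- the pool that macvlan containers share
# with every other device on the physical subnet.
prefix=24
hosts=$(( (1 << (32 - prefix)) - 2 ))   # minus network and broadcast addresses
echo "$hosts"
```

That is 254 addresses total; a few dozen containers per host across several hosts can drain it surprisingly quickly.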

Create a MacVLAN network

When creating a macvlan network, it can either be in bridge mode or 802.1q trunk bridge mode.

  • In bridge mode, macvlan traffic goes through a physical device on the host.

  • In 802.1q trunk bridge mode, traffic goes through an 802.1q sub-interface which Docker creates on the fly. This lets you control routing and filtering at a more granular level.

Bridge mode

To create a macvlan network which bridges with a given physical network interface, use --driver macvlan with the docker network create command. You also need to specify the parent, which is the interface the traffic will physically go through on the Docker host.

docker network create -d macvlan \
 --subnet= \
 --gateway= \
 -o parent=eth0 pub_net
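The snippet above leaves the subnet and gateway values unspecified. As a hedged sketch, here is the same command with placeholder values from the RFC 5737 documentation range (192.0.2.0/24 and 192.0.2.1 are assumptions, not values from the original post); the command is echoed rather than executed so it is safe to run without a Docker host:

```shell
# Hypothetical addresses from the RFC 5737 documentation range.
SUBNET=192.0.2.0/24
GATEWAY=192.0.2.1

# Print the full command instead of running it (actually running it requires
# a Docker host with a physical interface named eth0).
echo docker network create -d macvlan \
  --subnet="$SUBNET" \
  --gateway="$GATEWAY" \
  -o parent=eth0 pub_net
```

In practice the subnet and gateway should match the physical network eth0 is attached to, or the containers will not be reachable.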

If you need to exclude IP addresses from being used in the macvlan network, such as when a given IP address is already in use, use --aux-address:

 docker network create -d macvlan \
  --subnet= \
  --ip-range= \
  --gateway= \
  --aux-address="my-router=" \
  -o parent=eth0 macnet32
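The --ip-range flag confines Docker’s automatic assignment to a sub-block of the subnet, and each --aux-address reserves one address so Docker never hands it out. Assuming a hypothetical /28 ip-range with a single aux-address inside it (the original post does not give the actual values), the number of addresses Docker can actually assign works out as:

```shell
# Hypothetical /28 ip-range (16 addresses); one address is reserved for the
# router via --aux-address, so Docker will not assign it to a container.
range_prefix=28
aux_reserved=1
assignable=$(( (1 << (32 - range_prefix)) - aux_reserved ))
echo "$assignable"
```

That leaves 15 assignable addresses, which is why a tight ip-range plus several aux-addresses can exhaust faster than expected.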

802.1q trunk bridge mode

If you specify a parent interface name containing a dot, such as eth0.50, Docker interprets it as a sub-interface of eth0 and creates the sub-interface automatically.

 docker network create -d macvlan \
    --subnet= \
    --gateway= \
    -o parent=eth0.50 macvlan50
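The naming convention is doing the work here: Docker splits the parent value on the dot to get the underlying interface and the 802.1q VLAN ID. A sketch of that split in plain shell:

```shell
# "eth0.50" -> parent interface eth0, 802.1q VLAN ID 50.
iface="eth0.50"
parent="${iface%.*}"    # everything before the last dot
vlan_id="${iface##*.}"  # everything after the last dot
echo "parent=$parent vlan=$vlan_id"
```

The physical switch port eth0 is attached to must be configured as a trunk carrying VLAN 50 for this to pass traffic.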

Use an ipvlan instead of MacVLAN

This post shows the use of Docker IPvlan networks.

In the above example you are still using an L3 bridge. To get an L2 bridge instead, use ipvlan and specify -o ipvlan_mode=l2.

 docker network create -d ipvlan \
    --subnet= \
    --subnet= \
    --gateway= \
    --gateway= \
     -o ipvlan_mode=l2 -o parent=eth0 ipvlan210

Use IPv6

If you have configured the Docker daemon to allow IPv6, you can use dual-stack IPv4/IPv6 macvlan networks.

 docker network create -d macvlan \
    --subnet= --subnet= \
    --gateway= --gateway= \
    --subnet=2001:db8:abc8::/64 --gateway=2001:db8:abc8::10 \
     -o parent=eth0.218 \
     -o macvlan_mode=bridge macvlan216
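The IPv4 values in the snippet above are unspecified. A sketch of a complete dual-stack invocation, keeping the post’s IPv6 subnet and gateway and filling the IPv4 side with RFC 5737 documentation-range placeholders (assumptions, and echoed rather than executed):

```shell
# IPv6 values come from the post; the IPv4 values are documentation-range
# placeholders, not the original post's addresses.
V4_SUBNET=192.0.2.0/24
V4_GATEWAY=192.0.2.1
V6_SUBNET=2001:db8:abc8::/64
V6_GATEWAY=2001:db8:abc8::10

# Print the full command instead of running it.
echo docker network create -d macvlan \
  --subnet="$V4_SUBNET" --gateway="$V4_GATEWAY" \
  --subnet="$V6_SUBNET" --gateway="$V6_GATEWAY" \
  -o parent=eth0.218 \
  -o macvlan_mode=bridge macvlan216
```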

Docker Compose Example

Let’s make things a bit easier with a Docker Compose file. To illustrate the above, we will use two Docker containers.

All examples were executed on a Linux distribution with Docker macvlan support. If you execute them in macOS or Windows environments, the sample commands might change a bit.

Note that additional IPAM configuration, such as the gateway, is only honored for compose file format version 2 at the moment; under version 3 the network falls back to the default gateway value.

Create the docker-compose.yml file:

version: '2'

services:
  container1:
    image: alpine
    container_name: container1
    restart: unless-stopped
    tty: true
    networks:
      - db_net

  container2:
    image: alpine
    container_name: container2
    tty: true
    networks:
      - db_net

networks:
  db_net:
    driver: macvlan
    driver_opts:
      parent: eth0
    ipam:
      config:
        - subnet:

From within the directory, run: docker-compose up -d

Validate Containers Up

The code samples below show each command together with its output.

Using Docker Inspect

Check the container list to confirm that both containers are up and running:

 docker ps --format \
> "table {{.ID}}\t{{.Status}}\t{{.Names}}"

66c84b6688d6   Up About a minute   container1
ea6b160c7d6e   Up About a minute   container2

Check Docker networks:

docker network ls

NETWORK ID     NAME                 DRIVER    SCOPE
2d44dd74beb6   bridge               bridge    local
bd777dfcc50c   host                 host      local
fcf63e3315dc   macvlan_db_net       macvlan   local
156a09c3d2f5   none                 null      local

How to Get A Docker Container IP Address:

Using Docker Inspect on container ID 39ad7928071a

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' 39ad7928071a

Using the Network ID fcf63e3315dc

docker network inspect -f \
'{{range .IPAM.Config}}{{.Subnet}}{{end}}' fcf63e3315dc

Look up each Container’s IP individually:

Based on the macvlan_db_net network ID fcf63e3315dc, assuming jq is installed.

docker network inspect -f \
'{{json .Containers}}' fcf63e3315dc | \
jq '.[] | .Name + ":" + .IPv4Address'


Look up each Container’s MAC individually:

docker network inspect -f \
'{{json .Containers}}' fcf63e3315dc | \
jq '.[] | .Name + ":" + .MacAddress'


Using Docker exec

In the following example we will work with container1.

Use docker exec to run commands in a running container; you can also start an interactive sh shell in the container if you want to run additional commands.

docker exec -it container1 sh

From the Docker host, ping the remote server from within container1:

docker exec container1 ping

On the remote host, capture the packets with tcpdump:

tcpdump -e -ni eth0 icmp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes


03:36:24.792270 02:42:c0:a8:00:02 > 00:00:00:00:01:10, ethertype IPv4 (0x0800), length 98: > ICMP echo request, id 8, seq 0, length 64
03:36:24.792391 00:00:00:00:01:10 > 02:42:c0:a8:00:02, ethertype IPv4 (0x0800), length 98: > ICMP echo reply, id 8, seq 0, length 64

MAC from container: HWaddr 02:42:c0:a8:00:02

docker exec container1 ifconfig

eth0      Link encap:Ethernet  HWaddr 02:42:c0:a8:00:02
          inet addr:  Bcast:  Mask:

MAC from host: ether 00:00:00:00:01:10

 ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet  netmask  broadcast        
        ether 00:00:00:00:01:10  txqueuelen 1000  (Ethernet)

Assign your own MAC address by adding mac_address: 00:00:00:00:01:11 to the service definition:


services:
  container1:
    image: alpine
    container_name: container1
    restart: unless-stopped
    tty: true
    mac_address: 00:00:00:00:01:11
    networks:
      - db_net

Stop Containers

Remove the containers and the network with: docker-compose down


Macvlan is a great fit for legacy applications, or applications that monitor network traffic, which expect to be directly connected to the physical network.

Below are some considerations when using MacVLAN:

  • Network Interface Compatibility: Common DHCP Server
  • Hardware Performance: Low CPU, Normal Network Utilization
  • Security: Meets 802.11 standards
  • Implementation: Easy to Set-Up

