VMM integration

  1. ESXi
  2. VMM
  3. vCenter
  4. AVS
  5. Single VDS (single VMM domain)
  6. Dual switch (distinct VMM with the same vSphere)
  7. Resolution immediacy
  8. Deployment immediacy
  9. SCVMM
  10. OpenStack
  11. ML2
  12. GBP

ESXi

  • ESXi kernel (standard vSwitch) does not support LACP (only the DVS does) → on init disable lacp suspend-individual on the upstream port-channel

VMM

  • only one APIC in the cluster interacts with a VMM domain – the shard leader; non-preemptive, changes only on reload/upgrade
  • shard leader:
# show vmware domain name <VMM>

vCenter

  • limits:
    1. 200 DVS
    2. 50 AVS/AVE
    3. 3200 ESXi
    4. 1 AVS per ESXi
  • reachability between APIC OOB, vCenter and vmKernel
  • dynamic VLANs + OpFlex (AVS)
  • DVS does not support OpFlex, ACI uses imperative API
  • endpoint retention time: how long the IP/MAC info of a VM is kept; at 75% of the aging timer the fabric sends 3 ARP probes, and on no reply the entry expires
  • vMotion vmknics – silent ⇒ requires unknown unicast flooding or BD with subnet (enables ARP gleaning)
  • mgmt VLAN – tagged
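
The endpoint-retention behaviour above can be sketched as a small timing helper (hypothetical function, not an ACI API; ACI documents only the 75% trigger, so the even spacing of the 3 probes before expiry is an assumption):

```python
# Endpoint retention sketch: at 75% of the aging timer the fabric sends
# 3 ARP probes to re-verify a silent endpoint before the entry expires.

def arp_probe_schedule(aging_sec: int, probes: int = 3) -> list[float]:
    """Return offsets (seconds) at which probes are sent.

    Assumption: probes are spread evenly between the 75% threshold
    and expiry; only the 75% trigger is documented.
    """
    start = 0.75 * aging_sec
    step = (aging_sec - start) / (probes + 1)
    return [start + step * (i + 1) for i in range(probes)]

# e.g. with a 900 s local endpoint aging timer,
# probing starts after 675 s:
print(arp_probe_schedule(900))
```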

AVS

  • application virtual switch
  • included in ACI license
  • descendant of the N1kv: AVS ≡ VEM, APIC ≡ VSM
  • OpFlex:
    1. southbound API for APIC and leaf
    2. XML/JSON
    3. between leaf and AVS through infra VLAN
  • features:
    1. µEPG (available for DVS as well)
    2. TCP connection tracking (~ ZBF inspect), 5 min aging, ~ reflexive ACL; moves with the VM after vMotion
    3. FTP tracking; moves with the VM after vMotion
    4. VXLAN encapsulation with and without local switching (within infra VLAN)
    5. VM traffic telemetry
    6. several L2 hops to leaf
  • no support for:
    1. pre-provision resolution immediacy
    2. intra-EPG contract
  • interacts with hypervisor virtual switch
  • AVE – evolution of AVS: a separate service VM that traffic is forwarded through
  • AVS fabric-wide mcast address (VMM GIPo): AVS connection to leaf for VXLAN
  • uses VLAN if feature is not supported with VXLAN (L4-L7)
  • installation – via VSUM (virtual switch update manager, VM)
  • distributed FW modes:
    1. disabled
    2. learning (default)
    3. enabled
  • distributed FW constraints:
    1. 250k flows per ESXi
    2. 10k flows per EP
  • switching mode:
    1. no local switching (NS)
    2. local switching (LS): traffic within EPG – through AVS, between EPG – through leaf
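
The distributed-FW connection tracking above behaves like a reflexive ACL: an outbound flow opens a temporary return entry that ages out after 5 minutes of inactivity. A minimal sketch (illustrative class and method names, not the AVS implementation):

```python
import time

AGING_SEC = 5 * 60  # AVS distributed FW flow aging

class FlowTable:
    """Reflexive-ACL-style tracking: permit return traffic for known flows."""

    def __init__(self):
        # (src, sport, dst, dport) of the expected RETURN flow -> last seen
        self._flows = {}

    def outbound(self, src, sport, dst, dport, now=None):
        """Record an outbound flow; the reverse 5-tuple is now permitted."""
        ts = time.time() if now is None else now
        self._flows[(dst, dport, src, sport)] = ts

    def return_allowed(self, src, sport, dst, dport, now=None):
        """Check an inbound packet against the tracked flows, with aging."""
        now = time.time() if now is None else now
        ts = self._flows.get((src, sport, dst, dport))
        if ts is None or now - ts > AGING_SEC:
            self._flows.pop((src, sport, dst, dport), None)
            return False
        return True
```

During vMotion the AVS moves this state with the VM, so established sessions survive the migration.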

Single VDS (single VMM domain)

  • QoS is required, otherwise mgmt traffic (e.g. vMotion) starves VM traffic
  • Pre-provision resolution is recommended to avoid race condition
  • 2 physical NICs

Dual switch (distinct VMM with the same vSphere)

  • VMM must have non-overlapping VLAN pool
  • 4 physical NICs
  • AVS does not support pre-provision ⇒ use AVS for VM traffic only, mgmt stays on the pre-provisioned switch
  • possible to physically divide traffic (different NICs)
  • can be used to migrate to ACI-managed DVS
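
The non-overlap requirement above is easy to sanity-check. A small helper (hypothetical, pools modelled as lists of (start, end) VLAN ranges):

```python
# Check that two VMM domains' VLAN pools do not overlap.

def pools_overlap(pool_a, pool_b) -> bool:
    """True if any VLAN range in pool_a intersects any range in pool_b."""
    return any(a_lo <= b_hi and b_lo <= a_hi
               for a_lo, a_hi in pool_a
               for b_lo, b_hi in pool_b)

dvs_pool = [(1000, 1099)]   # example ranges, not defaults
avs_pool = [(1100, 1199)]
assert not pools_overlap(dvs_pool, avs_pool)
assert pools_overlap([(1000, 1100)], avs_pool)  # VLAN 1100 collides
```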

Resolution immediacy

  1. pre-provision:
    • configures VLAN on all ports belonging to AAEP
    • for critical EPGs (e.g. hypervisor mgmt, NFS, vMotion)
    • in case hypervisor is not connected to leaf directly (e.g. through Fabric Interconnects without CDP/LLDP)
    • for the vmkernel intf, because APIC has to reach it to collect CDP/LLDP data
  2. immediate:
    • configures VLAN on all ports where VMM hypervisor is present
    • discovers hypervisor via OpFlex, CDP, LLDP (APIC compares info from leaf and hypervisor)
    • if there is FI in between, leaf and DVS see it via CDP/LLDP and understand that this is the same FI
    • required with DVS for vMotion to continue working on target ESXi after APIC failure (port group provisioned already)
  3. on-demand:
    • configures VLAN on ports that have EPG present on them (vCenter notifies APIC)
    • during vMotion leaf passes policy to AVE via OpFlex ⇒ APIC is not involved
  • describes the ports that should be configured with VRF, BD, SVI
  • does not affect physical domain
  • responsible for programming the policy into DVS (API) and AVE (OpFlex with leaf)
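
The three modes above differ only in which leaf ports get the EPG VLAN programmed. A minimal decision sketch (illustrative function, ports given as sets):

```python
# Resolution immediacy: which leaf ports receive the EPG VLAN.

def ports_to_program(mode, aaep_ports, hypervisor_ports, epg_ports):
    if mode == "pre-provision":
        return aaep_ports         # every port under the AAEP
    if mode == "immediate":
        return hypervisor_ports   # ports with a discovered hypervisor
    if mode == "on-demand":
        return epg_ports          # ports where the EPG's VMs actually live
    raise ValueError(mode)

aaep = {"eth1/1", "eth1/2", "eth1/3"}
hyp  = {"eth1/1", "eth1/2"}       # hypervisors seen via CDP/LLDP/OpFlex
epg  = {"eth1/1"}                 # vCenter reported a VM of this EPG here
assert ports_to_program("on-demand", aaep, hyp, epg) == {"eth1/1"}
```

Each mode narrows the scope further, trading VLAN/TCAM consumption against dependence on discovery working correctly.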

Deployment immediacy

  1. immediate: programs policy into ASIC ASAP
  2. on-demand: programs policy into ASIC after 1st packet is received
  • describes when the policy is programmed to ASIC
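
The two modes above as a one-function sketch (illustrative name, not an ACI API):

```python
# Deployment immediacy: once a port is selected by resolution immediacy,
# decide when the policy (contracts/filters) is pushed to the leaf ASIC.

def program_asic(mode: str, first_packet_seen: bool) -> bool:
    if mode == "immediate":
        return True                # programmed as soon as resolved
    if mode == "on-demand":
        return first_packet_seen   # programmed after the 1st packet arrives
    raise ValueError(mode)

assert program_asic("immediate", False)
assert not program_asic("on-demand", False)
```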

SCVMM

  • system center virtual machine manager
  • EPG ≡ VM network
  • VMM ≡ logical network
  • dynamic VLANs
  • OpFlex in infra VLAN
  • SCVMM agent:
    1. on SCVMM
    2. Windows service
    3. PowerShell to SCVMM, REST to APIC
  • Hyper-V agent:
    1. on Hyper-V host
    2. interacts with vSwitch
    3. communicates with APIC via OpFlex
  • µEPG precedence order the same as for AVS
  • pre-provision resolution + immediate deploy

OpenStack

  • interacts with ML2 or GBP
  • ACI does not create config in OpenStack, ACI plugin passes Neutron config to ACI
  • limited support for UCS-B

ML2

  • modular layer 2
  • functions:
    1. distributed switching, routing, DHCP
    2. SNAT
    3. external connectivity (SNAT into host address)
    4. floating IP address (allocating public addresses to VMs + NAT)
  • tenant ≡ project ≡ VRF
  • EPG ≡ BD ≡ network
  • assigning subnet to a router = permit-all contract between subnets
  • address is allocated when an instance is created, host is always aware of VM address ⇒ DHCP is always local
  • for every VM there is a Linux bridge (qbr) attached to br-int via a veth pair: qbr(qvb) – (qvo)br-int
  • security groups → iptables rules on the tap interface (VM – (tap)qbr)
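
The per-VM plumbing above follows Neutron's naming convention: each device name is a fixed prefix plus the first 11 characters of the Neutron port UUID. A small sketch (hypothetical helper):

```python
# Neutron OVS agent device names for one VM port:
#   tap = VM vNIC (iptables/security groups apply here)
#   qbr = per-VM Linux bridge
#   qvb/qvo = veth pair from qbr into br-int

def port_devices(port_id: str) -> dict:
    stub = port_id[:11]
    return {
        "tap": f"tap{stub}",
        "qbr": f"qbr{stub}",
        "qvb": f"qvb{stub}",   # qbr side of the veth pair
        "qvo": f"qvo{stub}",   # br-int side of the veth pair
    }

devs = port_devices("ce531f90-1a2b-3c4d-5e6f-000000000000")
print(devs["tap"])  # "tap" + first 11 chars of the UUID
```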

GBP

  • group-based policy
  • policy target ≡ vNIC
  • policy group ≡ EPG
  • policy rule ≡ subject
  • policy rule set ≡ contract
  • L2 policy ≡ BD
  • L3 policy ≡ VRF
  • supports service chaining