UCS

  1. Unified computing system (UCS)
    1. Integrated management controller (IMC)
    2. Virtual interface card (VIC)
  2. UCS-B
  3. UCS-C
  4. Fabric interconnect (FI)
    1. Fabric failover
    2. Uplink mode
    3. Forwarding
      1. End-host mode
      2. Switching mode
    4. Disjoint L2
    5. Service template
    6. Server pool
    7. IP pool
    8. vNIC/vHBA placement policy
  5. SAN
  6. Converged infrastructure

Unified computing system (UCS)

  • 127.0.0.0/8 addresses for management between server and FI
  • IOM ≡ FEX, VNtag encapsulation

Integrated management controller (IMC)

  • BIOS, disk, NIC settings
  • KVM

Virtual interface card (VIC)

  • up to 256 PCIe-compliant vNICs presented to the hypervisor
    • VM-FEX can replace DVS: vNIC per VM
  • no burned-in address (BIA); MACs are assigned from UCSM pools
  • FC and Ethernet offload
  • VIC 1400 supports OS kernel bypass

UCS-B

  • 5108 – chassis
  • UCS FI 6324 – FI in chassis instead of IOMs (UCS Mini)
  • IOM pinning divides uplink count: 2,4,8
  • static pinning changes must be manually re-acknowledged (disruptive)
    • if a link fails: traffic is repinned across the remaining links automatically
    • if the link recovers: traffic is not rebalanced back onto it until re-ack
  • 8 links from backplane to blade: 4 per IOM
  • no local switching within chassis (IOM ≡ FEX)
    • if pinned to different FIs, traffic goes upstream ⇒ not suitable for east-west traffic
# show pinning server-interfaces

; bcast receiver interface
# show platform software enm internal info vlandb id <N>
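The link-division rule above can be sketched as follows (hypothetical helper, not Cisco code; assumes fabric links indexed from 0):

```python
# Hypothetical sketch of IOM static pinning: with n fabric links
# (1, 2, 4, or 8), host interface i is pinned to link i % n.
def pin_uplink(hif_index: int, fabric_links: list) -> int:
    assert len(fabric_links) in (1, 2, 4, 8), "IOM supports 1/2/4/8 links"
    return fabric_links[hif_index % len(fabric_links)]
```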

UCS-C

  • connection to UCSM
    • dual-wire mgmt (shared LOM): LOM → FEX, PCIe → FEX
    • single connect: PCIe → FEX, mgmt + data
    • direct connect: PCIe → FI, mgmt + data

Fabric interconnect (FI)

  • hosts UCS Manager (UCSM)
    • controls IMC: firmware, service profiles, FCoE, network
    • SQLite
  • based on Nexus
  • L1/L2 ports do not pass traffic, used only for config sync
  • only one uplink from UCS is used, second uplink is standby
  • on failover sends gratuitous ARP from all known MACs in CAM
    • learns only server MACs
  • if all uplinks are lost, server ports are disabled
  • always LACP active, IP+MAC src+dst load-balancing
# connect local-mgmt
(local-mgmt)# erase configuration

# connect nxos
# scope system
 /system # set mgmt-db-check-policy health-check-interval <DAYS>
# show cluster state
# show cluster extended-state
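The src+dst MAC/IP load balancing on port-channels can be illustrated with a toy flow hash (illustrative only; this is not the actual Nexus hash function):

```python
import zlib

# Toy illustration of flow-based load balancing: hash the
# source/destination MAC and IP of a flow, then pick a member link.
# Deterministic per flow, so packets of one flow stay on one link.
def pick_member(src_mac, dst_mac, src_ip, dst_ip, members):
    key = f"{src_mac}{dst_mac}{src_ip}{dst_ip}".encode()
    return members[zlib.crc32(key) % len(members)]
```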

Fabric failover

  • creates vEth on both FIs
  • handled at VIC level when the pinned path on the FI fails ⇒ no NIC-teaming support required from the OS
  • switchover
    • send GARP from FIs
    • send IGMP global leave from FIs ≡ trigger IGMP query
  • FI port types
    • uplink: Ethernet only, may be connected to vPC
    • FCoE: FCoE only
    • appliance: directly attached NAS, iSCSI
      • no STP
      • up to 4 per FI
    • unified: FCoE + Ethernet
      • no vPC support
      • how to enable: FCoE uplink → network uplink ≡ assign both roles to the port
  • unified port-channel:
    • FCoE port-channel N + LAN port-channel N, matched by the equal ID N
    • a unified member port belongs to only one (!) such port-channel pair N
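The matching-by-equal-ID rule can be sketched as a small validation helper (hypothetical, for illustration only):

```python
# Hypothetical check of the rule above: a unified port-channel is the
# pair (FCoE port-channel N, LAN port-channel N) sharing the same ID N.
def unified_pairs(fcoe_pc_ids, lan_pc_ids):
    # IDs present in both sets form unified port-channels
    return sorted(set(fcoe_pc_ids) & set(lan_pc_ids))
```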

Forwarding

End-host mode

  • default
  • mode change requires reboot
  • active uplinks
  • server port is pinned to an uplink + RPF check: traffic to a server MAC returning via the wrong uplink is dropped; déjà-vu check: frames arriving on an uplink sourced from a local server MAC are dropped
    • BUM is pinned per VLAN
    • default pinning: auto, round-robin
  • local switching between server ports
  • no support: silent hosts, directly attached storage
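The two uplink checks above can be sketched together (hypothetical helper, not Cisco code; handles known unicast only, ignoring BUM flooding):

```python
# Sketch of end-host-mode uplink checks:
# - deja-vu: a frame sourced from a local server MAC must never arrive
#   via an uplink (it would be a loop) -> drop;
# - RPF: a frame destined to a server MAC is accepted only on the
#   uplink that server is pinned to.
def accept_frame(pinning, src_mac, dst_mac, uplink):
    if src_mac in pinning:                   # deja-vu check
        return False
    return pinning.get(dst_mac) == uplink    # RPF check
```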

Switching mode

  • participates in STP as a switch (PVST+)
  • used when upstream is L2/L3 demarcation, allows optimal uplink selection (otherwise traffic may flow through ISL)

Disjoint L2

  • VLAN-to-uplink mapping in end-host mode: FIs have different upstream switches with different VLANs on their uplinks
  • if VLAN/VLAN Group is assigned to one uplink, it is prohibited on other uplinks
    • VLAN Group: uplink has VLANs only from group
    • VLAN Manager: uplink has global VLANs + assigned VLANs (including assigned to others)
    • if Group and Manager conflict, union of permitted VLANs is used
  • static pinning does not account for VLANs and is not restored after failure ⇒ unpredictable blackhole, because all VLANs are permitted on all uplinks by default
  • dynamic pinning: all vNIC VLANs must be allowed on uplink to consider it for pinning
  • Pin groups: for hard pinning (does not affect bcast pinning)
; VIF + vNIC + uplink mapping
# show service-profile circuit name <SERVICE_PROFILE>
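The dynamic-pinning rule above can be sketched as a candidate filter (hypothetical helper, not the UCSM algorithm):

```python
# Sketch of dynamic pinning under disjoint L2: an uplink is a pinning
# candidate for a vNIC only if it carries every VLAN of that vNIC.
def pinning_candidates(vnic_vlans: set, uplinks: dict) -> list:
    return [name for name, vlans in uplinks.items()
            if vnic_vlans <= vlans]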

Service template

  • inheritance allows changing all children through a single parent policy
  • type: cannot be changed after creation
    • initial
    • updating
  • changing the assigned template or the template association is disruptive

Server pool

  • server can be in several pools
  • available ≡ discovered + not associated

IP pool

  • for iSCSI and CIMC

vNIC/vHBA placement policy

  • order in which adapters are presented to the OS
    • might be needed for mgmt adapter: ESXi uses first vNIC for mgmt
    • required for proper SAN boot
  • vCon:
    • 4 per policy
    • physical adapter representation
    • contains vNIC/vHBA (≡ assign PCIe ID)
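The vCon-to-PCIe ordering can be sketched as follows (hypothetical helper, not the UCSM placement algorithm):

```python
# Sketch of vNIC/vHBA placement: vNICs are grouped into up to 4 vCons
# (physical adapter representations); enumerating vCons in order yields
# the PCIe device order the OS will see.
def pcie_order(vcons: dict) -> list:
    assert len(vcons) <= 4, "placement policy allows up to 4 vCons"
    order = []
    for vcon_id in sorted(vcons):
        order.extend(vcons[vcon_id])
    return order
```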

SAN

  • end-host mode
    • FI in NPV mode: connect to non-Cisco upstream FC switches
    • soft pinning (if not port-channel), migrates on Ethernet failure
  • no support:
    • NPIV ESXi
    • trunk/port-channel towards directly connected FC/FCoE storage
  • FCoE between FI and server: CoS = 3
    • non-FCoE traffic with CoS = 3 is remarked to CoS = 0
  • VSAN
    • max 32
    • no pruning on FC/FCoE uplink
  • WWxN pool assigns both WWNN and WWPN; not a common pool (WWPNs are derived from the WWNN)

Converged infrastructure

  • Vblock: EMC + VMware + Intel
  • FlexPod: NetApp
  • VersaStack: IBM
  • VSPEX: EMC
  • SmartStack: HPE
  • FlashStack: PureStorage
  • Ceph with OpenStack