Type 1 vs Type 2 Hypervisors
A hypervisor is a piece of software that lets one physical machine run many virtual machines (VMs). Hypervisors split into two families based on what sits between the hypervisor and the hardware.
What a hypervisor actually does
It does three things:
- Slices hardware — one physical CPU → many virtual CPUs, one physical NIC → many virtual NICs, etc.
- Isolates guests — each VM thinks it owns a whole machine; none can see the others’ memory or CPU state
- Multiplexes access — schedules time on real CPUs, arbitrates access to disks and networks
An analogy a network engineer will feel at home with: a hypervisor is to servers what a switch with VLANs is to cables. One physical thing, many logical ones, with enforced isolation.
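The three roles above can be sketched as a toy model. This is purely illustrative (the class and method names are invented, not any real hypervisor API): slicing hands out more vCPUs than exist, isolation gives each VM private state, and multiplexing round-robins vCPUs onto the real cores.

```python
from itertools import cycle

class ToyHypervisor:
    """Toy model of slicing, isolating, and multiplexing (illustration only)."""

    def __init__(self, physical_cpus):
        self.physical_cpus = physical_cpus
        self.vms = {}  # isolation: each VM gets its own private state dict

    def create_vm(self, name, vcpus):
        # Slicing: hand out virtual CPUs regardless of the physical count
        self.vms[name] = {"vcpus": vcpus, "memory": {}}

    def schedule(self, time_slices):
        # Multiplexing: round-robin every vCPU onto the real CPUs over time
        runnable = [(vm, v) for vm, st in self.vms.items()
                    for v in range(st["vcpus"])]
        rr = cycle(runnable)
        return [[next(rr) for _ in range(self.physical_cpus)]
                for _ in range(time_slices)]

hv = ToyHypervisor(physical_cpus=2)
hv.create_vm("vm-a", vcpus=2)
hv.create_vm("vm-b", vcpus=1)
print(hv.schedule(3))  # 3 time slices, 2 vCPUs run per slice
```

Note that three vCPUs share two physical cores: the hypervisor's scheduler, not the guests, decides who runs when.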
Type 1 — “bare-metal” hypervisor
┌───────┐ ┌───────┐ ┌───────┐
│ VM │ │ VM │ │ VM │ ← Guest OSes (Linux, Windows, ...)
└───────┘ └───────┘ └───────┘
┌─────────────────────────────┐
│ Type 1 Hypervisor │ ← Runs directly on hardware
└─────────────────────────────┘
┌─────────────────────────────┐
│ Physical Hardware │
└─────────────────────────────┘
The hypervisor is the operating system on the physical host. Nothing else runs there. You manage it remotely (SSH, web UI, API).
Examples:
- VMware ESXi — the enterprise default
- Microsoft Hyper-V — shipped with Windows Server; also runs “bare” as Hyper-V Server
- KVM (on Linux) — technically a kernel module that turns Linux into a Type 1 hypervisor; debated classification
- Xen — older, used by AWS historically
- Proxmox VE — open-source, wraps KVM + LXC with a management UI
Used for: production data centers, enterprise virtualization, cloud provider infrastructure.
Type 2 — “hosted” hypervisor
┌───────┐ ┌───────┐
│ VM │ │ VM │ ← Guest OSes
└───────┘ └───────┘
┌─────────────────────────────┐
│ Type 2 Hypervisor │ ← Runs as an application
│ (VirtualBox, VMware │
│ Workstation, ...) │
└─────────────────────────────┘
┌─────────────────────────────┐
│ Host OS (macOS, Windows, │ ← Full general-purpose OS
│ Linux) │
└─────────────────────────────┘
┌─────────────────────────────┐
│ Physical Hardware │
└─────────────────────────────┘
The hypervisor is an application installed on a normal operating system. You open it like any other program.
Examples:
- VMware Workstation / Fusion
- VirtualBox
- Parallels Desktop (macOS)
- QEMU without KVM
Used for: developer laptops, testing, labs, running one OS inside another.
Side-by-side
| | Type 1 (bare-metal) | Type 2 (hosted) |
|---|---|---|
| Runs on | Hardware directly | A host OS |
| Overhead | Very low | Higher (host OS in the way) |
| Performance | Near-native | Good, but reduced |
| Resource sharing | Dedicated to VMs | Shares with host apps |
| Typical scale | Dozens to hundreds of VMs per host | 1–5 VMs |
| Management | Central (vCenter, SCVMM) | Local GUI |
| Boot path | Hypervisor loads → VMs | Host OS boots → you open the app → start VMs |
| Security boundary | Small attack surface | Larger (whole host OS) |
Key virtualization concepts
- Hardware-assisted virtualization — CPU features (Intel VT-x, AMD-V) that let the hypervisor run guest code efficiently without software emulation. Required for modern hypervisors.
- Paravirtualization — the guest OS is aware it’s virtualized and cooperates with the hypervisor for faster I/O (virtio drivers). Contrast with full virtualization where the guest thinks it’s on real hardware.
- Live migration / vMotion — move a running VM from one host to another with no downtime. Requires shared storage (or storage vMotion) and a fast network.
- Snapshots — capture VM state at a point in time; roll back later. Not a backup — snapshots live on the same storage and grow diff chains.
- Overcommit — allocate more vCPUs / RAM to VMs than the host physically has. Works because VMs rarely all peak at once. Dangerous if unmanaged.
- VM vs container — a VM runs a full guest OS on virtualized hardware. A container shares the host kernel and isolates only user-space. VMs isolate harder; containers start faster and are lighter.
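Two of the concepts above lend themselves to a short sketch. The `vmx`/`svm` strings are the real `/proc/cpuinfo` flags for Intel VT-x and AMD-V; the function names themselves are just illustrative.

```python
def supports_hw_virt(cpuinfo_text: str) -> bool:
    """Parse /proc/cpuinfo text: Intel VT-x appears as 'vmx', AMD-V as 'svm'."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return bool(flags & {"vmx", "svm"})

def overcommit_ratio(vcpus_allocated: int, physical_cores: int) -> float:
    """vCPU overcommit: > 1.0 means more vCPUs promised than cores exist."""
    return vcpus_allocated / physical_cores

sample = "processor : 0\nflags : fpu vmx sse sse2\n"
print(supports_hw_virt(sample))   # True — 'vmx' present in this sample
print(overcommit_ratio(48, 16))   # 3.0 — common, but watch CPU contention
```

On a real host you would feed `open("/proc/cpuinfo").read()` into the first function instead of the sample string.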
Why it matters for a network engineer
- Virtual switches (vSwitch, DVS, Open vSwitch) live inside the hypervisor. They bridge VM NICs to physical uplinks. VLANs travel through them the same as on physical switches.
- East-west traffic between VMs on the same host never hits a physical switch — it’s handled internally by the vSwitch. This is invisible to external monitoring unless you enable netflow or port mirroring inside the hypervisor.
- Distributed switches span many hypervisor hosts, letting VMs migrate without changing their network identity.
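The east-west point can be made concrete with a toy vSwitch model (invented class, not OVS or the vSphere DVS): frames whose destination MAC is on the same host are delivered in memory and never touch the uplink.

```python
class ToyVSwitch:
    """Toy virtual switch: frames between local VM NICs never leave the host."""

    def __init__(self):
        self.mac_table = {}   # MAC -> local virtual port name
        self.uplink_log = []  # frames that actually hit the physical NIC

    def attach(self, mac, port):
        self.mac_table[mac] = port

    def forward(self, src_mac, dst_mac, payload):
        if dst_mac in self.mac_table:
            # East-west: delivered in host memory, invisible to physical switches
            return ("local", self.mac_table[dst_mac], payload)
        # Destination is not local: send the frame out the physical uplink
        self.uplink_log.append((src_mac, dst_mac, payload))
        return ("uplink", None, payload)

vs = ToyVSwitch()
vs.attach("aa:aa", "vm1-eth0")
vs.attach("bb:bb", "vm2-eth0")
print(vs.forward("aa:aa", "bb:bb", "hello"))  # stays on-host
print(vs.forward("aa:aa", "cc:cc", "hello"))  # leaves via the uplink
```

This is exactly why on-host traffic is invisible to external monitoring: only the second frame would ever appear on a physical switch port.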
See also
- 🖥️ Server Infrastructure MOC
- 🐧 Linux MOC — many hypervisors are Linux-based (KVM, Proxmox VE, Xen's dom0)
- 📦 Containers MOC — the other way to slice compute