RAID Levels

RAID = Redundant Array of Independent Disks. Combine multiple physical drives into one logical unit to get more speed, more reliability, or both.

What RAID is for

A single disk is one of three things waiting to happen:

  1. Too slow (one drive’s throughput is fixed)
  2. Too small (one drive’s capacity is fixed)
  3. Going to fail (drives die; it’s a matter of when)

RAID solves these by spreading data across multiple drives. The specific pattern is the “level.”

Important mental model: RAID is not a backup. RAID protects against drive failure, not against deletion, corruption, ransomware, site loss, or operator error. You still need backups. Always.

The three building blocks

  1. Striping — split data into chunks; write chunks to different drives in parallel. Fast, but any drive failure loses everything.
  2. Mirroring — write the same data to two or more drives. Writes cost a full copy on every drive, any one copy can die, and usable capacity stays at a single drive’s worth.
  3. Parity — XOR the data blocks in a stripe together and store the result as an extra parity block. If one drive dies, reconstruct its contents from the remaining data + parity.

Every RAID level is a combination of these three.
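The parity trick is plain XOR arithmetic. A minimal sketch (toy Python, not any real RAID implementation) showing that XOR-ing the surviving blocks with the parity regenerates a lost block:

```python
# XOR parity: the parity block is the XOR of all data blocks, so
# XOR-ing the survivors with the parity reproduces the missing block.

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR byte strings of equal length together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"   # data blocks on three drives
parity = xor_blocks(d1, d2, d3)          # the extra parity block

# "Drive 2" dies; rebuild its block from the survivors + parity:
rebuilt = xor_blocks(d1, d3, parity)
assert rebuilt == d2
```

This works for any one missing block, which is exactly why single parity tolerates exactly one drive failure.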

The levels that matter

RAID 0 — Striping, no redundancy

Drive A: [ block 1 ][ block 3 ][ block 5 ]
Drive B: [ block 2 ][ block 4 ][ block 6 ]
  • Capacity: 100% of all drives
  • Speed: fast (parallel reads and writes)
  • Fault tolerance: zero. One drive dies → all data lost.
  • Use case: scratch disk, caches, anything reproducible
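The block layout in the diagram follows directly from the stripe arithmetic. A minimal sketch (function name is mine, and it simplifies chunk size to one block; the diagram's blocks are 1-based, the code's are 0-based):

```python
# RAID 0 striping: logical block b lives on drive (b mod n), at
# offset (b div n) on that drive — round-robin across the array.

def raid0_location(block: int, n_drives: int) -> tuple[int, int]:
    """Logical block number -> (drive index, block offset on that drive)."""
    return block % n_drives, block // n_drives

# With 2 drives, blocks alternate between drives, as in the diagram:
assert [raid0_location(b, 2) for b in range(4)] == [(0, 0), (1, 0), (0, 1), (1, 1)]
```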

RAID 1 — Mirroring

Drive A: [ block 1 ][ block 2 ][ block 3 ]
Drive B: [ block 1 ][ block 2 ][ block 3 ]   (exact copy)
  • Capacity: 50% (half is mirror)
  • Speed: reads can be fast (two sources), writes are normal
  • Fault tolerance: one drive can die
  • Use case: OS boot drives, small critical volumes

RAID 5 — Striping with parity

Drive A: [ d1 ][ d4 ][ P2 ]
Drive B: [ d2 ][ P1 ][ d5 ]
Drive C: [ P0 ][ d3 ][ d6 ]   (P = parity, rotated across drives)
  • Minimum drives: 3
  • Capacity: N−1 drives’ worth
  • Fault tolerance: one drive can die
  • Write penalty: each write requires read-old-data → read-old-parity → compute → write-both (4 I/Os)
  • Rebuild: reconstruct lost drive from remaining data + parity
  • Modern caveat: as drive sizes grow (multi-TB), rebuild times stretch to days, and the probability of a second drive failing during rebuild climbs. For large drives, RAID 5 is increasingly considered risky — RAID 6 is the safer choice.
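The 4-I/O write penalty comes from a shortcut: a small write never needs the whole stripe, because the new parity can be derived from the old parity plus the old and new data. A sketch, assuming equal-length byte blocks:

```python
# RAID 5 small-write path: new_parity = old_parity XOR old_data XOR
# new_data. Hence the 4 I/Os: read old data, read old parity,
# write new data, write new parity.

def update_parity(old_parity: bytes, old_data: bytes, new_data: bytes) -> bytes:
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
parity = bytes(a ^ b ^ c for a, b, c in zip(d1, d2, d3))

new_d2 = b"XXXX"
new_parity = update_parity(parity, d2, new_d2)

# The shortcut agrees with recomputing parity over the full stripe:
assert new_parity == bytes(a ^ b ^ c for a, b, c in zip(d1, new_d2, d3))
```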

RAID 6 — Striping with double parity

Same as RAID 5 but stores two parity blocks per stripe — the usual XOR parity plus a second parity computed with a different algorithm (typically Reed–Solomon), so any two losses remain recoverable.

  • Minimum drives: 4
  • Capacity: N−2 drives’ worth
  • Fault tolerance: two drives can die
  • Write penalty: higher than RAID 5 (6 I/Os per write)
  • Use case: large arrays where rebuild times matter

RAID 10 — Striping of mirrors (nested)

                     Stripe
              ┌──────────┴──────────┐
        Mirror pair 1         Mirror pair 2
        ┌──────┴──────┐       ┌──────┴──────┐
     Drive A      Drive B   Drive C      Drive D
     (copy 1)     (copy 1)  (copy 2)     (copy 2)
  • Minimum drives: 4 (even number)
  • Capacity: 50%
  • Speed: fast reads and writes — no parity calculation
  • Fault tolerance: one drive per mirror pair can die — any single failure is survivable, and with luck up to half the drives
  • Use case: high-IOPS workloads (databases), VM storage — the common “performance + redundancy” default

The skipped levels

RAID 2, 3, 4 exist but are historical. RAID 0+1 (mirror of stripes) is inferior to RAID 10 (stripe of mirrors) because RAID 10 survives more failure patterns.
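A quick way to see why: enumerate which two-drive failures each nested layout survives. A toy sketch in Python (drive labels and pair groupings are illustrative, not any controller's API):

```python
# Four drives A..D. RAID 10 stripes across mirror pairs; RAID 0+1
# mirrors two stripes. Count which two-drive failures each survives.
from itertools import combinations

drives = {"A", "B", "C", "D"}
mirror_pairs = [{"A", "B"}, {"C", "D"}]   # RAID 10: stripe of mirrors
stripes      = [{"A", "B"}, {"C", "D"}]   # RAID 0+1: mirror of stripes

def raid10_survives(dead: set) -> bool:
    # Every mirror pair must keep at least one live drive.
    return all(pair - dead for pair in mirror_pairs)

def raid01_survives(dead: set) -> bool:
    # At least one stripe must keep ALL of its drives.
    return any(not (stripe & dead) for stripe in stripes)

two_drive_failures = [set(c) for c in combinations(drives, 2)]
print(sum(raid10_survives(f) for f in two_drive_failures))  # 4 of 6
print(sum(raid01_survives(f) for f in two_drive_failures))  # 2 of 6
```

RAID 10 survives 4 of the 6 possible two-drive failures; RAID 0+1 survives only 2 — and 0+1 rebuilds also touch more drives.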

Side-by-side

Level | Min drives | Usable %  | Can lose | Writes        | Reads | Typical use
0     | 2          | 100%      | 0        | Fast          | Fast  | Throwaway
1     | 2          | 50%       | 1        | OK            | Fast  | Boot drives
5     | 3          | (N−1)/N   | 1        | Slow (parity) | Good  | General (legacy)
6     | 4          | (N−2)/N   | 2        | Slower        | Good  | Large arrays
10    | 4          | 50%       | ≥1       | Fast          | Fast  | Performance + redundancy
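The usable-capacity column reduces to a few formulas. A small sketch (function and variable names are mine), where n is the drive count and size the per-drive capacity:

```python
# Usable capacity per RAID level, matching the table's formulas.

def usable(level: str, n: int, size: float) -> float:
    return {
        "0":  n * size,        # striping keeps everything
        "1":  size,            # one drive's worth, however many mirrors
        "5":  (n - 1) * size,  # one drive's worth of parity
        "6":  (n - 2) * size,  # two drives' worth of parity
        "10": n * size / 2,    # half lost to mirroring
    }[level]

# Six 4 TB drives under each level:
for level in ("0", "1", "5", "6", "10"):
    print(f"RAID {level}: {usable(level, 6, 4.0)} TB")
```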

Hardware vs software RAID

             | Hardware RAID                                         | Software RAID
Where        | Dedicated controller card (with battery-backed cache) | OS (Linux mdadm, ZFS, Windows Storage Spaces)
Performance  | Offloaded from CPU                                    | Uses CPU, but modern CPUs are fast enough
Portability  | Array tied to controller model                        | Array readable on any compatible host
Monitoring   | Vendor tools (often poor)                             | OS-native tools (excellent)
Cost         | Extra hardware                                        | Free
Modern trend | Losing ground to ZFS / Btrfs / storage arrays         | ZFS and friends are the default now

What RAID does NOT protect against

  • File-level errors — RAID writes corrupted data just as faithfully as good data
  • Controller failure — if the RAID card dies, the array may be unreadable
  • Fire / flood / theft — same physical location
  • Ransomware — encrypts your data; RAID dutifully replicates the encrypted version
  • “rm -rf” — deleted on all mirrors simultaneously
  • Silent bit rot — classical RAID doesn’t checksum; ZFS / Btrfs / ReFS do
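The bit-rot point is worth making concrete. A toy sketch (not real filesystem code) of the checksum-on-read approach that ZFS-style filesystems add and classical RAID lacks:

```python
# Toy model: each "block" stores data plus a CRC32 written at write
# time. Classical RAID returns whatever bytes the disk hands back;
# a checksumming filesystem verifies on every read.
import zlib

def write_block(store: dict, addr: int, data: bytes) -> None:
    store[addr] = (data, zlib.crc32(data))

def read_block(store: dict, addr: int) -> bytes:
    data, crc = store[addr]
    if zlib.crc32(data) != crc:
        # A real filesystem would repair from a redundant copy here.
        raise IOError(f"checksum mismatch at block {addr}")
    return data

store: dict = {}
write_block(store, 0, b"hello")
store[0] = (b"hellp", store[0][1])   # simulate a silently flipped bit
# read_block(store, 0) now raises instead of silently returning bad data
```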

Always pair RAID with offsite backups (see Backup Fundamentals — RPO and RTO).

RAID vs modern alternatives

  • ZFS / Btrfs — filesystem-level redundancy with checksums. RAID-Z (1/2/3) is ZFS’s answer to RAID 5/6/triple-parity, with block-level integrity checks.
  • Erasure coding — generalised parity used by object storage (Ceph, MinIO, S3 internally). Can tolerate many failures across many nodes at much lower overhead than mirroring.
  • Distributed storage — Ceph, Gluster, HCI (VMware vSAN, Nutanix) spread data across whole clusters. RAID-like protection happens between nodes, not between drives in one chassis.

See also