# SAN vs NAS
Two ways to deliver storage over a network. The difference is not where the storage lives — it’s what the client sees: raw block device (SAN) vs filesystem (NAS).
## The one-sentence distinction
- SAN (Storage Area Network) presents raw blocks — the client sees a disk it can format and own.
- NAS (Network-Attached Storage) presents files — the client sees a folder it can share with others.
Everything else is a consequence of that choice.
## SAN — block storage over a network
```
Server → HBA/NIC → SAN fabric → Storage array → LUN (looks like a raw disk)
                                                 ↑
                                   Server formats it with its own
                                   filesystem (ext4, NTFS, VMFS);
                                   the array doesn't care.
```
The server sees what looks like a local hard drive. It runs its own filesystem on top. The storage array just serves blocks; it has no idea about files or directories.
Protocols:
- Fibre Channel (FC) — dedicated FC fabric, FC switches, HBAs. High performance, expensive, complex.
- iSCSI — SCSI commands over TCP/IP. Runs on Ethernet. Cheaper, “good enough” for most workloads.
- FCoE — Fibre Channel over Ethernet. Niche, mostly displaced.
- NVMe over Fabrics (NVMe-oF) — modern replacement for iSCSI; much lower latency, over RDMA or TCP.
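As a concrete sketch, attaching an iSCSI LUN from a Linux initiator with open-iscsi looks roughly like this; the portal IP and IQN are placeholders, not real values:

```shell
# Placeholders: 192.0.2.10 is the array's portal IP, the IQN is made up.
# Ask the array which targets it offers:
iscsiadm -m discovery -t sendtargets -p 192.0.2.10

# Log in to one target; the LUN then appears as a local block device (/dev/sdX):
iscsiadm -m node -T iqn.2001-05.com.example:vol1 -p 192.0.2.10 --login

# From here it behaves like a local disk: partition it, mkfs it, mount it.
lsblk
```

From the kernel's point of view the result is indistinguishable from a local SATA/SAS disk, which is exactly the SAN premise.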
Key terms:
- LUN (Logical Unit Number) — a chunk of storage the array exposes as a “disk”
- WWN / WWPN — Fibre Channel’s MAC-equivalent, used for zoning
- Zoning — FC equivalent of VLAN ACLs; which initiators can see which targets
- Multipathing — multiple paths from server to LUN for redundancy; MPIO on the server side decides which to use
- Thin vs thick provisioning — thin = only use storage you actually write to; thick = reserve full size up front
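On Linux, server-side multipathing is typically handled by device-mapper-multipath; a minimal sketch on a RHEL-family system (package and helper names vary by distro):

```shell
# Generate a default /etc/multipath.conf and start the multipathd daemon:
mpathconf --enable --with_multipathd y

# Show each LUN once, with all of its physical paths grouped underneath it;
# I/O is spread across paths and survives the loss of any single one.
multipath -ll
```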
Use case: VM datastores, database volumes, Exchange, any workload that wants its own filesystem with maximum performance.
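Array vendors expose the thin/thick choice in their own UIs; LVM on Linux makes the same distinction visible from the command line (volume group and volume names here are made up):

```shell
# Thick: all 100G is reserved in vg0 immediately, written to or not.
lvcreate -L 100G -n thick_vol vg0

# Thin: carve a pool, then a "100G" volume that only consumes pool
# space as blocks are actually written. Pools can be oversubscribed.
lvcreate -L 50G --thinpool pool0 vg0
lvcreate -V 100G --thin -n thin_vol vg0/pool0
```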
## NAS — file storage over a network
```
Server → network → NAS → Filesystem (server mounts a share, sees files/folders)
                    ↑
          NAS owns the filesystem and serves files.
          Many clients can use the same share simultaneously.
```
The server sees a mounted folder. Files are read and written as files. The NAS owns the filesystem; locking, permissions, and concurrent access are handled there.
Protocols:
- NFS (Network File System) — Unix/Linux native. NFSv3 (stateless, simple), NFSv4 (stateful, integrated auth).
- SMB/CIFS — Windows native; modern versions (SMB 3.x) add encryption and multichannel.
- AFP — legacy Apple; deprecated in favor of SMB.
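Mounting either kind of share from Linux is one command; the server name, export path, and username are placeholders:

```shell
# NFS: the filer exports a path, the client mounts it.
mount -t nfs -o vers=4.1 filer01:/export/home /mnt/home

# SMB: same idea, different protocol (cifs-utils provides mount.cifs).
mount -t cifs -o vers=3.0,username=alice //filer01/share /mnt/share
```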
Use case: shared documents, user home directories, backups-to-share, media files, anything multiple clients need concurrent access to.
## Side-by-side
|  | SAN | NAS |
|---|---|---|
| Client sees | Raw block device / disk | Mounted folder |
| Client does | Formats its own filesystem | Reads and writes files |
| Typical protocol | FC, iSCSI, NVMe-oF | NFS, SMB |
| Network | Usually dedicated / high-speed | Usually shared corporate network |
| Shared between clients | Hard (needs cluster filesystem like VMFS, OCFS2) | Native — multi-client is the point |
| Performance | High, low latency | Good, higher latency (extra layer) |
| Complexity | High (zoning, multipathing, LUN mgmt) | Low (mount a share) |
| Cost | Higher (especially FC) | Lower |
| Typical use | VMs, databases, Tier-1 apps | File sharing, home dirs, content |
## When the line blurs
Modern storage products often do both. Unified arrays from NetApp, Dell EMC, and Pure Storage can serve iSCSI LUNs and NFS/SMB shares from the same box. Cloud storage mirrors the same split:
- AWS EBS — block (SAN-like; normally attached to one EC2 instance at a time)
- AWS EFS — NFS (NAS)
- AWS FSx — SMB (NAS, Windows-friendly)
- AWS S3 — object storage, a third category
## Object storage — the third category
Worth knowing even if you only asked about SAN/NAS.
- Objects (files + metadata) accessed by a flat key, over HTTP
- No filesystem, no directories (directories are simulated via key prefixes)
- Massively scalable; historically eventually consistent, though S3 has offered strong read-after-write consistency since 2020
- Examples: AWS S3, Azure Blob Storage, Google Cloud Storage, MinIO, Ceph RGW
- Use case: backups, archives, static web content, data lakes
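Because directories are only simulated, "listing a folder" is really just a prefix filter over flat keys, which is what the S3 API's Prefix parameter does server-side. A toy illustration with made-up keys:

```shell
# Made-up object keys; the object store sees these as opaque strings.
keys='backups/2024/jan.tar.gz
backups/2024/feb.tar.gz
media/cat.jpg'

# "Listing the backups/ directory" is just a prefix match:
printf '%s\n' "$keys" | grep '^backups/'
printf '%s\n' "$keys" | grep -c '^backups/'   # → 2
```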
|  | Block | File | Object |
|---|---|---|---|
| Addressed by | Block offset | Path | Key |
| Protocol | iSCSI, FC, NVMe-oF | NFS, SMB | HTTP (S3 API) |
| Typical use | OS disks, DB storage | User files, shares | Backups, archives, web |
| Typical scale | TBs | TBs | PBs and up |
## Networking considerations
A network engineer’s perspective:
- Dedicated storage VLAN is standard. Storage traffic is bursty, latency-sensitive, and you don’t want it fighting with user traffic. Jumbo frames (MTU 9000) often enabled.
- Multipathing needs multiple physical paths that don’t share single points of failure. Typical: two NICs on the server, two switches, two controllers on the array.
- iSCSI and NFS across routers → watch MTU, packet loss, latency. For iSCSI, under 2 ms RTT is a soft limit before IOPS suffer.
- FC is its own world — doesn’t ride Ethernet, has its own switching fabric.
- Storage replication over the WAN is almost always asynchronous, not synchronous; synchronous replication is RTT-bound.
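The 2 ms figure falls out of simple arithmetic: a strictly serial (queue depth 1) workload completes at most one I/O per round trip, so network RTT alone caps IOPS before the array does any work. A back-of-envelope sketch:

```shell
# IOPS ceiling for a queue-depth-1 workload: at most one I/O per round trip.
rtt_ms=2
echo $(( 1000 / rtt_ms ))   # → 500 IOPS ceiling, before any array service time
```

Deeper queue depths raise the ceiling (more I/Os in flight per RTT), which is why latency-tolerant bulk workloads cope with WAN-ish RTTs while transactional ones do not.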
## See also
- RAID Levels
- Backup Fundamentals — RPO and RTO
- Type 1 vs Type 2 Hypervisors — VMs on SAN is the classic pairing
- 🖥️ Server Infrastructure MOC