PCIe Quick Converter
[Interactive tool: enter any value in GB/s, Gb/s, or GT/s to convert between units and estimate the minimum lanes needed. Estimate only; ~1.5% encoding loss not included.]

What Are PCIe Lanes?

PCIe lanes are like lanes on a highway. Each lane is a path that data can travel through. The more lanes you have, the more data can flow at once.

Think of it like this

Imagine a highway between your GPU and your CPU. A single lane highway can only fit one car at a time. A 16-lane highway? That's 16 cars side by side, all moving at once. More lanes = more stuff moving = faster overall transfer.

When you see specs like "x16" or "x4", that's the number of lanes. A graphics card typically uses x16 (16 lanes), while an NVMe SSD usually uses x4 (4 lanes).

Lane Count Comparison
x1 = 1 lane · x4 = 4 lanes · x8 = 8 lanes · x16 = 16 lanes

PCIe Generations: Speed Limits

If lanes are the number of highway lanes, then generations are the speed limit. Each new PCIe generation doubles the speed limit per lane.

Speed Limit Analogy

PCIe 3.0 is like a highway with a 60 mph speed limit. PCIe 4.0 raised it to 120 mph. PCIe 5.0? That's 240 mph. PCIe 6.0 hits 480 mph. Same number of lanes, but everything moves twice as fast with each generation.

Generation Speed Comparison (per lane)
Gen 3 = 8 GT/s · Gen 4 = 16 GT/s · Gen 5 = 32 GT/s · Gen 6 = 64 GT/s
Bandwidth Per Lane (Each Generation)
PCIe 3.0 = ~1 GB/s · PCIe 4.0 = ~2 GB/s · PCIe 5.0 = ~4 GB/s · PCIe 6.0 = ~8 GB/s

This means a PCIe 4.0 x4 slot has the same bandwidth as a PCIe 3.0 x8 slot. Newer generations let you do more with fewer lanes.
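That equivalence is easy to check in a few lines - a minimal sketch using the rounded per-lane figures from the table above:

```python
# Approximate one-direction bandwidth per lane, in GB/s (rounded figures)
PER_LANE_GBPS = {"3.0": 1, "4.0": 2, "5.0": 4, "6.0": 8}

def bandwidth_gbps(gen: str, lanes: int) -> int:
    """One-direction bandwidth of a PCIe link: per-lane speed x lane count."""
    return PER_LANE_GBPS[gen] * lanes

# A Gen4 x4 link matches a Gen3 x8 link: both ~8 GB/s
assert bandwidth_gbps("4.0", 4) == bandwidth_gbps("3.0", 8)
```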

Bandwidth Reference

Here's the full picture showing bandwidth for common lane configurations across generations:

Measured in Bytes

Config   PCIe 3.0   PCIe 4.0   PCIe 5.0   PCIe 6.0
x1       ~1 GB/s    ~2 GB/s    ~4 GB/s    ~8 GB/s
x4       ~4 GB/s    ~8 GB/s    ~16 GB/s   ~32 GB/s
x8       ~8 GB/s    ~16 GB/s   ~32 GB/s   ~64 GB/s
x16      ~16 GB/s   ~32 GB/s   ~64 GB/s   ~128 GB/s

Measured in Bits

Config   PCIe 3.0    PCIe 4.0    PCIe 5.0    PCIe 6.0
x1       ~8 Gb/s     ~16 Gb/s    ~32 Gb/s    ~64 Gb/s
x4       ~32 Gb/s    ~64 Gb/s    ~128 Gb/s   ~256 Gb/s
x8       ~64 Gb/s    ~128 Gb/s   ~256 Gb/s   ~512 Gb/s
x16      ~128 Gb/s   ~256 Gb/s   ~512 Gb/s   ~1024 Gb/s

Why only x1, x2, x4, x8, x16? Consumer PCIe lane counts follow powers of 2 - 1, 2, 4, 8, or 16 lanes (the spec also defines x12 and x32 widths, but you'll almost never see them in practice). This is by design: binary addressing makes routing and switching simpler, and the doubling pattern lets hardware easily split or combine links (a x16 slot can run as two x8s, four x4s, etc.).
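The split-or-combine pattern can be sketched as a toy function - purely illustrative, since real boards only expose the specific bifurcation modes their firmware supports:

```python
def bifurcations(width: int) -> list[tuple[int, int]]:
    """Enumerate equal power-of-two splits of a slot as (link_count, link_width)."""
    splits = []
    link = width
    while link >= 1:
        splits.append((width // link, link))
        link //= 2  # halve the link width each step
    return splits

bifurcations(16)  # [(1, 16), (2, 8), (4, 4), (8, 2), (16, 1)]
```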

Transfer Rate vs Throughput

You'll see different types of speed numbers: transfer rate and throughput. Here's the difference:

  • Transfer Rate (GT/s) - "Giga-transfers per second" - This is the raw signaling speed at the physical layer. It's the theoretical maximum - how fast the electrical signals are switching. PCIe 4.0 runs at 16 GT/s, 5.0 at 32 GT/s, and 6.0 at 64 GT/s per lane.
  • Throughput (Gb/s) - "Gigabits per second" - This is the data rate after encoding overhead, expressed in bits. Often used interchangeably with GT/s since they're approximately equal (1 GT/s ≈ 1 Gb/s), but Gb/s technically represents usable data while GT/s is raw signaling. Common in networking contexts.
  • Throughput (GB/s) - "Gigabytes per second" - This is the actual usable data that gets transferred after encoding overhead, expressed in bytes (Gb/s ÷ 8). This is what you actually experience when transferring files in real-world use.
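The chain from raw signaling to usable bytes is just two conversions. A minimal sketch for the 128b/130b generations (Gen 3-5):

```python
def lane_throughput(gt_per_s: float) -> tuple[float, float]:
    """Convert a Gen3-5 lane's raw rate (GT/s) to usable Gb/s and GB/s.

    128b/130b encoding carries 128 data bits in every 130-bit block,
    so ~1.5% of the raw signal is encoding overhead.
    """
    gbit_per_s = gt_per_s * 128 / 130  # strip encoding overhead
    gbyte_per_s = gbit_per_s / 8       # bits -> bytes
    return gbit_per_s, gbyte_per_s

gbits, gbytes = lane_throughput(32.0)  # one PCIe 5.0 lane
# gbits ~31.5 Gb/s, gbytes ~3.94 GB/s
```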
The Shipping Analogy

Think of PCIe data transfer like shipping products. Just as products need packaging to travel safely, data needs encoding to transfer reliably. Let's say we want to move data across a single Gen5 lane:

Shipping term = PCIe equivalent
Depot = CPU / Source device
Truck = Copper traces / Physical medium
Packaging = Encoding (128b/130b)
Product = Data
Highway = PCIe lanes
Warehouse = SSD / GPU / Destination device
Pallets = Practical measurement of data (bytes)
Giga Transfers Per Second (GT/s) — Trucks Leaving the Depot

At the depot (CPU), products (data) are boxed up in protective packaging (encoding) and loaded onto trucks (copper traces). Our Gen 5 lane supports 32 GT/s, so picture 32 trucks leaving the depot per hour - for data, it's 32 giga-transfers per second. This is the raw signaling speed.

Gigabits Per Second (Gb/s) — Truck Loads Arriving

The depot packaged the products to protect them for transport, just like data signals are wrapped in encoding for reliable transfer. The trucks then travel down the highway (PCIe lane) to the warehouse (SSD/GPU). Workers unload the products and remove the packaging - this is where the encoding overhead (~1.5% for 128b/130b) is stripped away. What remains is the actual product, or usable data, measured in bits. The depot shipped 32 truck loads of product, but ~1.5% of the cargo space was packaging, so the warehouse received ~31.5 truck loads of actual product - ~31.5 Gigabits.

Gigabytes Per Second (GB/s) — Pallets Received

Inside the warehouse, workers organize the unboxed products onto pallets for storage or use. Every 8 products stack into 1 pallet - this is the bits-to-bytes conversion (÷8). Pallets are the practical unit the warehouse actually works with, just like GB/s is what you experience when transferring files. Our ~31.5 truck loads stack into ~4 pallets per hour - our Gen5 lane shipped ~4 GB in 1 second.

The Complete Journey

32 trucks leave the depot (GT/s) → ~31.5 truck loads of product arrive after unboxing (Gb/s) → ~4 pallets of warehouse space are used (GB/s)

When specs matter: Marketing specs may use Gb/s or GT/s because bigger numbers look better. When comparing real-world performance, look for GB/s throughput - that's what your files and games actually experience.

Under The Hood: Wires & Signals

Each PCIe lane is actually four copper wires (two pairs):

  • TX pair - Transmit: sends data OUT from the device
  • RX pair - Receive: brings data IN to the device
Single PCIe 5.0 Lane - Full Duplex
TX (out): ~4 GB/s · RX (in): ~4 GB/s
Per direction: ~4 GB/s · Bidirectional total: ~8 GB/s · Copper wires: 4

This is called full-duplex communication - data flows in both directions simultaneously, like a two-way street where traffic moves both ways at the same time.

Why does this matter?

If you see "64 GB/s" for PCIe 5.0 x16, that's the one-way speed. Because it's full-duplex, data can flow at 64 GB/s in AND 64 GB/s out at the same time. Some motherboard marketing calls this "128 GB/s bidirectional" - technically true, but single-direction is the figure advertised on devices, and it's what you experience in real-world scenarios like loading a game or transferring a file.
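As a quick sanity check on those numbers, using the ~4 GB/s per-lane Gen5 figure from above:

```python
lanes = 16
per_lane_gbps = 4.0              # ~GB/s per Gen5 lane, per direction
one_way = lanes * per_lane_gbps  # 64 GB/s - the commonly advertised figure
bidirectional = one_way * 2      # 128 GB/s - the marketing "bidirectional" figure
print(one_way, bidirectional)    # 64.0 128.0
```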

Samsung 9100 PRO Series
2TB, PCIe 5.0 x4, NVMe 2.0, M.2 Internal SSD - Up to 14,700 MB/s
This Gen5 NVMe SSD advertises its highest single-direction speed: 14,700 MB/s (14.7 GB/s) when connected via four Gen5 lanes.

The math breakdown (using PCIe 5.0 x4 as example):

  • PCIe 5.0 runs at 32 GT/s (giga-transfers per second) per lane
  • With 128b/130b encoding, that's roughly 3.94 GB/s of usable data per lane
  • x4 lanes = ~15.75 GB/s theoretical max (~16 GB/s)
  • The Samsung above advertises 14.7 GB/s - real-world speeds are below theoretical and also limited by heat/software
  • This is per direction - TX and RX each get this bandwidth
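That breakdown can be verified in a few lines (the 14.7 GB/s figure is the Samsung spec quoted above; the rest is the encoding math):

```python
raw_gt_s = 32  # PCIe 5.0 signaling rate per lane (GT/s)
lanes = 4
per_lane = raw_gt_s * 128 / 130 / 8  # 128b/130b encoding, then bits -> bytes
link_max = per_lane * lanes          # ~15.75 GB/s theoretical ceiling
advertised = 14.7                    # Samsung 9100 PRO sequential read (GB/s)
print(f"link max ~{link_max:.2f} GB/s; drive reaches ~{advertised / link_max:.0%}")
```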

Real-world context: Most workloads are asymmetric. Your GPU mostly receives data (textures, game assets) and sends back relatively little (frame completion signals). Your NVMe drive reads and writes, but rarely at full speed in both directions simultaneously. The full-duplex capability is there, but you rarely max out both directions at once.

CPU Lanes vs Chipset Lanes

This is where motherboards get tricky. All PCIe lanes originate at the CPU, but devices connect through one of two paths:

  • CPU Lanes - Direct connection to the processor. Fastest possible path. Limited quantity (usually 20-24 usable lanes depending on CPU). On current consumer platforms, only lanes coming directly from the CPU support Gen5.
  • Chipset Lanes - Start at the CPU, then run at Gen4 to a physical chipset. This chipset expands a 4-lane (AMD) or 8-lane (Intel) CPU uplink/DMI into more connectivity. The chipset acts like a hub or traffic intersection, managing data transfers between the CPU and chipset-connected devices.
How Devices Connect - Depends on CPU
[Diagram: The GPU, primary NVMe, general-purpose slots, and USB connect to the CPU I/O die (20-24 lanes - all lanes originate here). A x4 (AMD) or x8 (Intel) uplink runs to the chipset, which expands it into more lower-gen lanes for extra slots, USB, SATA, and WiFi/LAN.]

The chipset bottleneck: All devices connected through the chipset share that single x4 (AMD) or x8 (Intel) uplink back to the CPU. If you have multiple fast NVMe drives on chipset lanes, they compete for that shared connection whenever they're used at the same time.

Direct CPU lanes are premium - they connect straight to the CPU and support the fastest Gen5 speeds. That's why your GPU, primary M.2 slot, and fastest devices typically connect here.

Chipset lanes run at lower generations. The chipset expands that small uplink into many more connections - great for USB, SATA, WiFi, LAN, add-in cards, and secondary storage that won't max out the link.
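A toy model of that shared uplink makes the bottleneck concrete. The device names and demand figures below are hypothetical, loosely based on the usage numbers in this article:

```python
UPLINK_GBPS = 8.0  # AMD-style x4 Gen4 chipset uplink, ~8 GB/s one way
demand = {"secondary Gen4 NVMe": 7.0, "10GbE NIC": 1.2, "USB 3.2 drive": 1.0}

def uplink_load(active: list[str]) -> float:
    """Fraction of the shared chipset uplink the active devices would request."""
    return sum(demand[d] for d in active) / UPLINK_GBPS

print(uplink_load(["10GbE NIC"]))  # 0.15 -> plenty of headroom
print(uplink_load(list(demand)))   # ~1.15 -> uplink saturated, devices must share
```

Anything over 1.0 means the active devices are asking for more than the uplink can carry, so each one slows down.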

Want to see PCIe lanes explained visually? Check out our video breakdowns on YouTube.

Watch on YouTube →

When Does This Actually Matter?

Here's the honest truth: most people won't notice PCIe bandwidth limits in everyday use. Edge cases and power users aside, everyday use and gaming don't push the limits:

  • Gaming GPU - uses ~8-14 GB/s in typical gaming. Gen5 x16 provides 64 GB/s - massive headroom. Even x8 (32 GB/s) has room to spare.
  • NVMe SSD (Gen 5) - uses ~2-4 GB/s typical, 14 GB/s peak. Real-world use rarely hits max; game loading and OS use stay well under, though large file transfers can burst higher.
  • USB4 / Thunderbolt - uses ~1-2 GB/s typical, 5 GB/s peak. External SSDs and docks rarely saturate the link.
  • NVMe SSD (Gen 4) - uses ~1-3 GB/s typical, 7 GB/s peak. Common in secondary M.2 slots; real-world usage rarely maxes out, and multiple drives share the chipset uplink fine.
  • WiFi 6E / 7 Card - uses ~0.3 GB/s max. An x1 slot is plenty - even PCIe 3.0 x1 is overkill for WiFi.
  • Capture Card (4K60) - uses ~1.5 GB/s. x4 Gen3 is sufficient; chipset lanes work perfectly fine.
  • 10GbE Network Card - uses ~1.2 GB/s max. x4 Gen3 is enough; only matters if running multiple high-bandwidth cards.

Rule of thumb: If your device doesn't max out the available bandwidth, faster lanes won't help. A WiFi card on PCIe 5.0 x16 won't load web pages any faster than PCIe 3.0 x1. You are limited by what the device or connection can achieve.
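That rule of thumb is literally a min() - a trivial sketch with illustrative figures:

```python
def effective_gbps(device_max: float, link_gbps: float) -> float:
    """Real transfer speed is capped by the slower of device and link."""
    return min(device_max, link_gbps)

wifi = 0.3                         # ~GB/s a WiFi card can actually push
print(effective_gbps(wifi, 1.0))   # Gen3 x1 slot  -> 0.3
print(effective_gbps(wifi, 64.0))  # Gen5 x16 slot -> 0.3 (no difference)
```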

Any Need to Worry?

Usually only for specific workstation and productivity tasks:

  • Multiple fast NVMe drives - If you're running multiple drives for video editing scratch disks, chipset bandwidth becomes a factor when several are in use at the same time.
  • Workstation multi-GPU - Running two GPUs at x8/x8 for rendering, workstation tasks, or LLMs. Consumer boards max out at two x16-size slots running in x8/x8 mode for multi-GPU setups; enterprise boards/CPUs have more available lanes.
  • High-bandwidth add-in cards - 10GbE+ network cards, Thunderbolt, PCIe capture cards, SATA, and chipset-connected USB can compete for chipset bandwidth. If many devices are active all at once, latency (response time) can increase.

This is exactly why MoboMaps exists. We show you which slots share lanes, what gets disabled when, and help you pick a board that matches your actual needs.

Quick Reference

  • Lanes (x1, x4, x8, x16) = Number of data paths (more = more bandwidth)
  • Generation (3.0, 4.0, 5.0, 6.0) = Speed per lane (newer = faster per lane)
  • GT/s (Transfer Rate) = Raw signaling speed (theoretical maximum)
  • GB/s (Throughput) = Actual usable bandwidth (what you experience)
  • CPU lanes = Direct, fast, limited quantity
  • Chipset lanes = Shared connection, more available, potential bottleneck

The bottom line: More lanes and newer generations give you more bandwidth. But bandwidth only matters if your devices can actually use it. Match your motherboard's lane layout to your actual build, not theoretical maximums.

See How Your Board Stacks Up

Now that you understand PCIe lanes, explore our interactive maps to see exactly how different motherboards allocate their lanes.

View All Boards