
October 26, 2025 14 mins

This episode revisits a now-classic paper that remains important: the original FlashAttention paper. We review it in light of recent advances in compiler technology.


The June 23, 2022 Stanford paper introduces **FlashAttention**, an IO-aware exact attention algorithm that improves the efficiency of the attention mechanism in Transformer models by optimizing how memory is used and accessed. Standard attention materializes the full $N \times N$ score matrix, so both its memory footprint and its accesses to slow High Bandwidth Memory (HBM) scale **quadratically** ($O(N^2)$) with sequence length $N$, which creates a performance bottleneck. FlashAttention overcomes this with **tiling and recomputation** inside a single fused CUDA kernel: the memory footprint drops to **linear** ($O(N)$), and HBM accesses fall from $\Theta(Nd + N^2)$ to $\Theta(N^2 d^2 / M)$, where $d$ is the head dimension and $M$ is the on-chip SRAM size. The algorithm does not reduce the total Floating Point Operations (FLOPs), and even increases them slightly due to recomputation, but the large reduction in slow memory traffic yields substantial **wall-clock runtime speedups** during both training and inference.
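The tiled, online-softmax forward pass that the paper fuses into one CUDA kernel can be illustrated in plain NumPy. The sketch below is ours, not code from the paper: the block sizes, variable names, and pure-Python loops are illustrative assumptions, and the real kernel keeps each tile in on-chip SRAM and recomputes attention blocks in the backward pass instead of ever storing the $N \times N$ matrix.

```python
# Minimal sketch of tiled attention with an online (streaming) softmax.
# Illustrative only: block sizes and the Python loops stand in for what
# FlashAttention does inside a single fused CUDA kernel using SRAM tiles.
import numpy as np

def tiled_attention(Q, K, V, block_q=64, block_k=64):
    N, d = Q.shape
    O = np.zeros((N, d))
    for qs in range(0, N, block_q):
        q = Q[qs:qs + block_q]                    # query tile (Bq, d)
        m = np.full(q.shape[0], -np.inf)          # running row-wise max
        l = np.zeros(q.shape[0])                  # running softmax denominator
        acc = np.zeros((q.shape[0], d))           # running unnormalized output
        for ks in range(0, N, block_k):
            k = K[ks:ks + block_k]
            v = V[ks:ks + block_k]
            s = q @ k.T / np.sqrt(d)              # scores for this tile only
            m_new = np.maximum(m, s.max(axis=1))  # update running max
            p = np.exp(s - m_new[:, None])        # stable exponentials for the tile
            scale = np.exp(m - m_new)             # rescale earlier partial sums
            l = scale * l + p.sum(axis=1)
            acc = scale[:, None] * acc + p @ v
            m = m_new
        O[qs:qs + block_q] = acc / l[:, None]     # normalize each query row once
    return O

# Agrees with materializing the full N x N softmax, while only ever holding one tile.
N, d = 512, 64
rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, N, d))
s = Q @ K.T / np.sqrt(d)
ref = (np.exp(s - s.max(-1, keepdims=True)) /
       np.exp(s - s.max(-1, keepdims=True)).sum(-1, keepdims=True)) @ V
assert np.allclose(tiled_attention(Q, K, V), ref, atol=1e-6)
```

The equivalence check at the end is the point of the sketch: the streaming rescaling by `exp(m - m_new)` lets each tile be processed once and discarded, which is what removes the quadratic memory footprint.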


Source:

https://arxiv.org/pdf/2205.14135
