Diagnosing InnoDB Pending I/O Operations
Pending I/O operations in InnoDB represent read and write requests that have been submitted to the operating system but not yet completed. The FILE I/O section of SHOW ENGINE INNODB STATUS reports pending normal AIO reads, pending normal AIO writes, pending ibuf AIO reads, and pending flushes (fsync) for the log and buffer pool, each queue serving a different category of InnoDB I/O. A healthy server with fast NVMe storage shows very few pending operations (0–2); dozens of pending operations indicate I/O saturation, where the storage device cannot service requests as fast as InnoDB generates them. Diagnosing I/O-bound InnoDB performance requires correlating SHOW ENGINE INNODB STATUS pending I/O counts with OS-level I/O metrics from iostat and the Performance Schema file I/O tables. This guide explains each I/O queue, its thresholds, and the tuning parameters for maximising I/O throughput.
Anatomy of the FILE I/O Section
Pending Normal AIO Reads and Writes per Slot
Normal AIO (asynchronous I/O) reads are page fetch requests submitted to the OS I/O subsystem for pages not found in the buffer pool. Each "pending normal AIO read" represents a buffer pool page being loaded from a tablespace file, using the Linux kernel AIO interface (libaio) when innodb_use_native_aio=ON (the default on Linux), or InnoDB's own simulated AIO when it is OFF. InnoDB distributes AIO requests across multiple read I/O threads (innodb_read_io_threads, default 4), each managing a queue of outstanding requests. Similarly, normal AIO writes are dirty pages being flushed to tablespace files, distributed across innodb_write_io_threads (default 4). If pending AIO reads consistently exceed 2× innodb_read_io_threads, the storage cannot keep pace with read demand: increase innodb_read_io_threads to 8–16 or upgrade to faster storage.
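As a rough sketch of the saturation check described above, the per-thread pending counts can be parsed out of the status text. The bracketed per-slot layout shown in the sample string follows the MySQL 8.0 output format; the helper names and the 2× threshold heuristic are from this guide, not from MySQL itself:

```python
import re

def pending_aio(status_text: str) -> dict:
    """Extract per-thread pending normal AIO read/write counts from the
    FILE I/O section of SHOW ENGINE INNODB STATUS (MySQL 8.0 layout)."""
    m = re.search(
        r"Pending normal aio reads: \[([\d, ]+)\] , aio writes: \[([\d, ]+)\]",
        status_text,
    )
    if not m:
        raise ValueError("FILE I/O pending AIO line not found")
    reads = [int(n) for n in m.group(1).split(",")]
    writes = [int(n) for n in m.group(2).split(",")]
    return {"reads": reads, "writes": writes}

def reads_saturated(pending_reads, read_io_threads=4):
    # Heuristic from the text: total pending reads above 2x
    # innodb_read_io_threads suggests storage cannot keep pace.
    return sum(pending_reads) > 2 * read_io_threads

status = """Pending normal aio reads: [3, 2, 4, 1] , aio writes: [0, 1, 0, 0] ,
 ibuf aio reads:, log i/o's:, sync i/o's:"""
p = pending_aio(status)
print(p["reads"], reads_saturated(p["reads"]))  # [3, 2, 4, 1] True
```

With 10 pending reads against 4 read threads, the heuristic flags the storage as saturated; all-zero queues would instead indicate the threads are keeping up.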
Pending Flushes: Log Flush vs Buffer Pool Flush
Pending flushes are separated into two categories in SHOW ENGINE INNODB STATUS: "Pending flushes (fsync) log: N; buffer pool: M". Log flushes are fsync() calls to the redo log files, critical-path I/O that directly determines transaction commit latency. Buffer pool flushes are fsync() calls to tablespace files after dirty page writes, background I/O that affects dirty page flush throughput but not individual transaction latency. A sustained count of pending log fsyncs (> 5) while commit latency is elevated suggests the redo log device is the bottleneck. Consider placing the redo log files on a separate NVMe device from the tablespace files, or setting innodb_flush_log_at_trx_commit=2 to defer redo log fsyncs to once per second; the trade-off is up to about one second of committed transactions lost if the OS crashes or power fails (a crash of mysqld alone loses nothing, because the log records are already in the OS page cache).
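The two-category split can be pulled out of the status text mechanically. A minimal sketch, assuming the MySQL 8.0 line format quoted above; the > 5 threshold is the heuristic from this section, not a MySQL constant:

```python
import re

def pending_flushes(status_text: str):
    """Parse 'Pending flushes (fsync) log: N; buffer pool: M' from
    SHOW ENGINE INNODB STATUS output into (log, buffer_pool) counts."""
    m = re.search(
        r"Pending flushes \(fsync\) log: (\d+); buffer pool: (\d+)",
        status_text,
    )
    if not m:
        raise ValueError("pending flushes line not found")
    return int(m.group(1)), int(m.group(2))

log, bp = pending_flushes("Pending flushes (fsync) log: 7; buffer pool: 2")
# Sustained pending log fsyncs above ~5 point at the redo log device
# as the commit-latency bottleneck.
redo_bottleneck = log > 5
print(log, bp, redo_bottleneck)  # 7 2 True
```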
Interpreting I/O Throughput Metrics
OS File Reads, Writes, and Fsync Rates per Second
The FILE I/O section reports cumulative OS file I/O counters, "OS file reads N, OS file writes M, OS fsyncs P", together with per-second rates such as "N.N reads/s, N.N writes/s, N.N fsyncs/s" averaged over the interval since the status was last sampled (the interval length is printed in the status header; it is not a fixed window). These metrics represent actual file system calls to the kernel, encompassing both InnoDB data and redo log I/O. The fsync rate is particularly diagnostic: with innodb_flush_log_at_trx_commit=1, every committing transaction requires a redo log fsync, although group commit can merge concurrently committing transactions into a single fsync. A server processing 1,000 TPS with innodb_flush_log_at_trx_commit=1 must therefore sustain up to 1,000 fsyncs/second on the redo log device in the worst case; verify this is within your storage subsystem's rated fsync throughput before deploying to production.
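The capacity check above can be written as back-of-the-envelope arithmetic. The function name and the 5,000 fsyncs/second device rating below are illustrative assumptions (a real rating would come from fio benchmarks or vendor specs):

```python
def redo_fsync_budget(tps: float, device_fsync_per_s: float) -> float:
    """Worst-case commit-path check: with innodb_flush_log_at_trx_commit=1
    and no group-commit merging, each committing transaction needs one redo
    log fsync, so the device must sustain at least `tps` fsyncs/second.
    Returns the fraction of the device's rated fsync throughput consumed."""
    return tps / device_fsync_per_s

# 1,000 TPS against a device assumed rated for 5,000 synchronous
# fsyncs/second: 20% of budget in the worst case; group commit
# usually consumes far less.
print(redo_fsync_budget(1000, 5000))  # 0.2
```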
Average Bytes per Read and Read-Ahead Efficiency
The "avg bytes / read" metric in the FILE I/O section, computed as total bytes read divided by OS file reads, reveals the average read I/O size. InnoDB reads data in page-sized chunks (16 KB by default), so a workload of single-page random reads shows avg bytes / read ≈ 16,384. When linear read-ahead is active, InnoDB pre-fetches the next full extent of 64 contiguous pages (1 MB at the default page size), and the merged requests raise avg bytes / read toward 1,048,576. A high avg bytes / read value confirms effective read-ahead, reducing the number of individual I/O operations required for sequential or range scans. innodb_read_ahead_threshold (default 56, range 0–64) sets the number of pages that must be accessed sequentially in an extent before linear read-ahead is triggered; lower values enable more aggressive pre-fetching at the cost of potentially unnecessary I/O.
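The two extremes of this ratio can be verified with the arithmetic from the text (constants reflect the defaults discussed above):

```python
PAGE_SIZE = 16 * 1024   # innodb_page_size default: 16 KB
EXTENT_PAGES = 64       # pages per extent, the linear read-ahead unit

def avg_bytes_per_read(total_bytes_read: int, os_file_reads: int) -> float:
    # Same ratio the status line reports: total bytes read / OS file reads.
    return total_bytes_read / os_file_reads

# Pure single-page random reads: the ratio sits at the page size.
random_io = avg_bytes_per_read(1000 * PAGE_SIZE, 1000)
# Fully merged read-ahead: one 64-page extent per OS read pushes the
# ratio to the 1 MB extent size.
sequential_io = avg_bytes_per_read(64 * EXTENT_PAGES * PAGE_SIZE, 64)
print(random_io, sequential_io)  # 16384.0 1048576.0
```

Real workloads land between the two; the closer the observed value is to the extent size, the more of the read traffic is being serviced by read-ahead.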
Resolving Storage Bottlenecks
Increasing innodb_read_io_threads and innodb_write_io_threads
innodb_read_io_threads and innodb_write_io_threads (both default 4) set the number of background threads handling AIO read and write requests respectively. Each thread maintains its own queue of outstanding requests and can handle up to 256 pending I/O operations, so the maximum concurrent I/O is threads × 256. On servers with high-parallelism NVMe SSDs (rated for 100,000+ IOPS), increase both innodb_read_io_threads and innodb_write_io_threads to 8–16 to keep the device fully utilised. Monitor the "Pending normal aio reads" and "aio writes" counts in SHOW ENGINE INNODB STATUS: if they are always 0, increasing threads provides no benefit; if they are consistently high, the threads are undersized for the storage throughput. Both variables are static and require a server restart to change.
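A my.cnf sketch for a high-IOPS NVMe host, using the starting values suggested above (the values are workload-dependent starting points, not universal recommendations):

```ini
[mysqld]
# Both settings are static: changing them requires a server restart.
innodb_read_io_threads  = 8   # raise only if pending normal aio reads stay high
innodb_write_io_threads = 8   # raise only if pending aio writes stay high
```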
NVMe vs SAS Tuning: innodb_use_native_aio and O_DIRECT
innodb_use_native_aio=ON (the default on Linux) enables the Linux kernel AIO interface (libaio), which allows InnoDB to submit multiple I/O requests to the kernel simultaneously without blocking any thread. On NVMe devices, native AIO is critical for achieving rated IOPS because it allows queue depths > 1, exploiting the drive's internal parallelism. innodb_flush_method=O_DIRECT bypasses the Linux page cache for InnoDB data file reads and writes, preventing double-caching of buffer pool data in both InnoDB's buffer pool and the OS page cache. Always use O_DIRECT on dedicated MySQL servers to maximise the buffer pool's effective size. On NVMe, consider innodb_flush_method=O_DIRECT_NO_FSYNC (a standard MySQL option, not Percona-specific) to skip the fsync after each data file write; since MySQL 8.0.14, InnoDB still calls fsync after creating, extending, or closing a file, which makes this mode safe on EXT4 and XFS.
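A my.cnf sketch of the flush-method settings discussed in this section, for a dedicated Linux MySQL server (defaults shown explicitly for clarity):

```ini
[mysqld]
innodb_use_native_aio = ON        # default on Linux; uses the libaio interface
innodb_flush_method   = O_DIRECT  # bypass the OS page cache for data files
```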