Asynchronous I/O

Asynchronous (non-blocking) I/O lets a program issue an I/O request and keep doing useful work instead of blocking until the request completes.

Why?

A blocking read() wastes millions of CPU cycles the processor could spend on useful work while waiting for the device. Threads mitigate this, but bring their own costs: race conditions, per-thread stack overhead, and OS limits on the number of threads.

Two ways to find out whether I/O is ready:

  • Polling — select and poll (portable), epoll on Linux, kqueue on BSD/macOS, IOCP on Windows. The mio crate (the event library underneath tokio) abstracts over these.
  • Interrupts — signals on UNIX.

With mio, the pattern is: create a Poll, register event sources (e.g. a TcpListener) with a token, then loop on poll.poll(&mut events, timeout), matching events by their token.

Four server models (increasing scalability):

  1. Blocking I/O, 1 process per request (classic Apache prefork). Simple, but per-process overhead caps concurrency around the ~10k-connection mark (the classic C10K problem).
  2. Blocking I/O, 1 thread per request. Lighter than processes, but shared state introduces race conditions.
  3. Async I/O, thread pool + callbacks.
  4. Non-blocking I/O, thread pool multiplexed with select/poll, event-driven — each thread handles many connections.

libcurl provides two interfaces: easy (synchronous, one request at a time) and multi (asynchronous, many concurrent transfers driven together via curl_multi_perform() and curl_multi_wait()). A single multi handle can manage thousands of connections.