Fearless concurrency is Rust’s promise that the same ownership and borrowing rules that prevent memory errors also turn a major class of concurrency bugs, data races, into compile errors.
Why does this matter for performance?
The compiler doesn’t make your program faster directly. It makes correct concurrent code cheaper to write, so more engineering time goes to speedups instead of debugging flaky schedule-dependent bugs.
Three ways to move data between threads:
Capturing, a closure references variables from its enclosing scope. A spawned thread usually needs move || to transfer ownership into the closure, since the compiler can’t prove a borrowed reference will outlive the thread
Message passing, channels via std::sync::mpsc (multi-producer, single-consumer). tx.send(val) moves ownership of val, rx.recv() blocks, try_recv() doesn’t. The sent type must implement Send
Shared state, Mutex<T> wraps the protected value so the data can’t be touched without holding the lock. MutexGuard drops on scope exit and releases automatically
Channel
use std::sync::mpsc;
use std::thread;

let (tx, rx) = mpsc::channel();
thread::spawn(move || {
    let val = String::from("hi");
    tx.send(val).unwrap(); // send moves val into the channel
});
let received = rx.recv().unwrap(); // blocks until a value arrives
Two marker traits are central:
Send, ownership of the type can be transferred to another thread
Sync, &T can be shared across threads (most primitive types are Sync; Rc<T> is neither Send nor Sync)
Rust doesn’t eliminate deadlocks
The borrow checker prevents data races, but nothing stops thread 1 from acquiring lock A then B while thread 2 acquires B then A. Lock ordering is still on you.