Walk-Forward Training

This is a term that I learned.

  1. Rolling window (fixed-size training)
     • Train on the last T days
     • Test on the next K days
     • Move forward by K days and repeat

Example:

  • Train: 60 days
  • Test: 7 days
  • Roll: 7 days

So you get many out-of-sample test segments.
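For concreteness, here is a minimal sketch of how those rolling splits could be generated. This is my own illustration in Python; the function name `rolling_walk_forward_splits` and the use of NumPy index arrays are assumptions, not from any particular library.

```python
import numpy as np

def rolling_walk_forward_splits(n_days, train_size=60, test_size=7, step=7):
    """Yield (train_idx, test_idx) index arrays for a rolling-window walk-forward.

    Assumes one observation per day, ordered by time.
    """
    start = 0
    while start + train_size + test_size <= n_days:
        train_idx = np.arange(start, start + train_size)
        test_idx = np.arange(start + train_size, start + train_size + test_size)
        yield train_idx, test_idx
        start += step  # roll the whole window forward by `step` days

# Example: 120 days of data -> several 60-day train / 7-day test segments
for train_idx, test_idx in rolling_walk_forward_splits(120):
    print(f"train {train_idx[0]}-{train_idx[-1]}, test {test_idx[0]}-{test_idx[-1]}")
```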

Do you retrain from scratch each time you walk forward?

Yes. Retraining from scratch is the cleanest evaluation because it mimics “what could I have trained at that time?”
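A minimal sketch of that retrain-from-scratch loop, reusing the split generator above. The helper name `walk_forward_evaluate` is hypothetical, and scikit-learn’s `Ridge` is purely a stand-in estimator; any model that can be re-instantiated per fold would do.

```python
from sklearn.linear_model import Ridge  # stand-in model; any estimator works

def walk_forward_evaluate(X, y, splits):
    """Fit a fresh model on each training window and score it on the
    test window that immediately follows it."""
    scores = []
    for train_idx, test_idx in splits:
        model = Ridge()  # new model each fold: no state carried over from earlier windows
        model.fit(X[train_idx], y[train_idx])
        scores.append(model.score(X[test_idx], y[test_idx]))
    return scores
```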

Isn't that just overfitting?

It feels like “overfitting over and over,” but the point is the opposite: each test window lies strictly after its training window, so the model is always scored on data it never saw. Walk-forward measures (and simulates) how well you generalize to unseen future data in a world where the distribution drifts.

So walk-forward is both:

  • an evaluation method (true out-of-sample testing repeated many times), and
  • a production simulation (periodic retrain cadence).