In this talk, I’ll present our work on The Lottery Ticket Hypothesis, showing that a standard pruning technique, iterative magnitude pruning, naturally uncovers subnetworks that are capable of training effectively from early in training. These subnetworks hold out the promise of more efficient machine learning, including more efficient inference, fine-tuning of pre-trained networks, and sparse training.