There’s good news on this front, however. Over the last few years, AI practitioners and researchers have developed strategies that can significantly reduce the volume of labeled data needed to build accurate AI models. These approaches include learning models from unlabeled data alone, transferring and adapting models across related problems, and best practices for “iterating on data” to improve model performance. By using these approaches, it is often possible to build a good AI model with a fraction of the labeled data that would otherwise be required.
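To make one of these ideas concrete, the following is a minimal, purely illustrative sketch of combining unlabeled data with a small labeled set: a feature extractor is “pretrained” on an unlabeled pool (here, simple PCA via SVD stands in for a real pretraining method), and then a small linear head is fit on only a handful of labeled examples. All data, dimensions, and function names are hypothetical assumptions, not a prescribed recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

# A large unlabeled pool: 1,000 samples with 50 raw features (synthetic).
X_unlabeled = rng.normal(size=(1000, 50))
mean = X_unlabeled.mean(axis=0)

# "Pretrain" a feature extractor on the unlabeled pool.
# PCA via SVD is a stand-in for a real self-supervised method.
_, _, Vt = np.linalg.svd(X_unlabeled - mean, full_matrices=False)
components = Vt[:10]  # keep the top-10 principal directions


def extract_features(X):
    """Project raw inputs onto the pretrained components."""
    return (X - mean) @ components.T


# A tiny labeled set: only 20 examples.
X_small = rng.normal(size=(20, 50))
y_small = rng.integers(0, 2, size=20)

# Fit a simple linear head by least squares on the 10-d features,
# with a bias column appended.
F = np.c_[extract_features(X_small), np.ones(len(X_small))]
w, *_ = np.linalg.lstsq(F, y_small, rcond=None)

# Predict by thresholding the linear head's output.
preds = (F @ w > 0.5).astype(int)
print(preds.shape)
```

The point of the sketch is the division of labor: the expensive representation is learned from plentiful unlabeled data, so the labeled set only has to support fitting a small head.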