Sparse Model
A neural network in which many weights are zero or inactive for any given input, reducing the computation needed per forward pass. Mixture of Experts (MoE) is a common type of sparse model in which a router activates only a few experts for each input. This enables much larger model capacity while keeping inference costs manageable.
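To make the routing idea concrete, here is a minimal sketch of a top-k MoE layer in plain Python/NumPy. It is illustrative only: the expert and router weights are random, and names such as `num_experts`, `top_k`, and `moe_layer` are hypothetical, not drawn from the text. The key point is that only `top_k` of the `num_experts` expert networks are evaluated per input, which is where the compute savings come from.

```python
import numpy as np

# Minimal, illustrative sketch of top-k Mixture-of-Experts routing.
# All weights are random; `num_experts`, `top_k`, etc. are hypothetical names.

rng = np.random.default_rng(0)

d_model, d_hidden = 16, 32
num_experts, top_k = 8, 2   # only 2 of 8 experts run per input -> sparse compute

# Each expert is a small feed-forward block: (input proj, output proj).
experts = [
    (rng.standard_normal((d_model, d_hidden)) * 0.1,
     rng.standard_normal((d_hidden, d_model)) * 0.1)
    for _ in range(num_experts)
]
router = rng.standard_normal((d_model, num_experts)) * 0.1  # gating weights


def moe_layer(x):
    """Route a single input vector `x` (shape [d_model]) to its top-k experts."""
    logits = x @ router                      # score every expert
    top = np.argsort(logits)[-top_k:]        # indices of the k highest-scoring experts
    gates = np.exp(logits[top])
    gates /= gates.sum()                     # normalized gate weights over the chosen experts
    out = np.zeros_like(x)
    for g, i in zip(gates, top):
        w_in, w_out = experts[i]
        out += g * (np.maximum(x @ w_in, 0.0) @ w_out)  # ReLU feed-forward expert
    return out


token = rng.standard_normal(d_model)
print(moe_layer(token).shape)   # (16,) -- only top_k expert blocks were evaluated
```

Because the other experts are skipped entirely for this input, total parameter count can grow with `num_experts` while per-input compute scales only with `top_k`.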