We deconstruct Generative Adversarial Networks (GANs) into three fundamental problems to study: formulation, generalization, and optimization. We propose systematic principles for formulating the population goals of GANs (when infinite samples are available), and reveal and further develop connections between GANs and robust statistics. We provide principled methods for achieving the population formulations of GANs given finite samples with small generalization error, and demonstrate the intricacy, in terms of statistical error, of moving from infinite samples to finite samples. We show through examples the importance of solving the inner maximization problem before the outer minimization problem, and demonstrate that embedding knowledge of the solution to the inner maximization problem can make a locally unstable algorithm globally stable. Joint work with Banghua Zhu and David Tse.
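
The inner-versus-outer phenomenon can be sketched on a toy quadratic min-max problem (a hypothetical illustration chosen for this note, not the speakers' construction): simultaneous gradient descent-ascent spirals away from the equilibrium, while solving the inner maximization in closed form before each outer step reduces the problem to ordinary gradient descent, which converges.

```python
# Toy min-max problem (hypothetical illustration, not the speakers'
# construction): min_x max_y f(x, y) = x*y - (eps/2) * y**2.
# The inner maximization has the closed form y*(x) = x / eps, so the
# outer objective is g(x) = x**2 / (2 * eps), minimized at x = 0.
eps = 0.01

# Simultaneous gradient descent-ascent: the iterates spiral outward,
# so the equilibrium (0, 0) is locally unstable at this step size.
x, y = 1.0, 0.0
eta = 0.1
for _ in range(500):
    x, y = x - eta * y, y + eta * (x - eps * y)
gda_norm = (x * x + y * y) ** 0.5  # grows well beyond the starting norm of 1.0

# Solving the inner maximization exactly before each outer step turns
# the problem into plain gradient descent on g(x), which converges.
x_final = 1.0
eta_outer = 0.005  # g is (1/eps)-smooth, so the outer step must be small
for _ in range(200):
    y_star = x_final / eps         # exact inner solution y*(x)
    x_final -= eta_outer * y_star  # grad g(x) = x / eps = y*(x)
```

Here the bilinear term x*y drives the rotation that destabilizes simultaneous updates; once the inner problem is solved exactly, that rotation disappears from the outer dynamics.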