A common approach to statistical model selection — particularly in scientific domains where the goal is to draw inferences about an underlying phenomenon — is to develop powerful procedures that provide control over false discoveries. Such methods are widely used in inferential settings such as variable selection and graph estimation, in which a discovery is naturally regarded as a discrete concept. However, this view of a discovery is ill-suited to many model selection and structured estimation problems in which the underlying decision space is not discrete. We describe a geometric reformulation of the notion of a discovery, which enables the development of model selection methodology for a broader class of problems. We highlight the utility of this viewpoint in problems involving subspace selection and low-rank estimation, and we present a specific algorithm that controls false discoveries in these settings.
This is joint work with Parikshit Shah and Venkat Chandrasekaran.