
We recently developed and put forward a framework for capturing, visualizing, and analyzing the unique record of an individual's everyday digital experiences: screenomics. In our quest to derive knowledge from and understand screenomes – ordered sequences of hundreds of thousands of smartphone and laptop screenshots captured every five seconds over periods ranging from one day to six months – the data have become a playground for learning about the computational machinery used to process images and text, machine learning algorithms, human labeling of unknown taxonomies, qualitative inquiry, and the tension between N = 1 and N = many approaches. Using illustrative problems, I share how engagement with these new data is reshaping both how we conduct analyses and how we study the person-context transactions that drive human behavior.
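To make the data structure concrete, the following is a minimal sketch, in Python, of how a screenome might be represented and prepared for analysis: an ordered sequence of timestamped screenshot records, split into usage sessions. The names (`Screenshot`, `load_screenome`, `session_boundaries`) and the 30-second session gap are hypothetical illustrations under the assumptions stated in the comments, not the framework's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from pathlib import Path
from typing import List

# Sampling rate described in the text: one screenshot every five seconds.
CAPTURE_INTERVAL = timedelta(seconds=5)

@dataclass(frozen=True)
class Screenshot:
    """One observation in a screenome: a timestamped screen capture."""
    timestamp: datetime
    device: str       # e.g., "smartphone" or "laptop" (hypothetical labels)
    image_path: Path  # location of the captured frame on disk

def load_screenome(records: List[Screenshot]) -> List[Screenshot]:
    """Order records by time; a screenome is an ordered sequence."""
    return sorted(records, key=lambda s: s.timestamp)

def session_boundaries(screenome: List[Screenshot],
                       gap: timedelta = timedelta(seconds=30)) -> List[List[Screenshot]]:
    """Split the ordered sequence into sessions wherever consecutive
    captures are separated by more than `gap` (assumed to indicate the
    screen was off or the device idle)."""
    sessions: List[List[Screenshot]] = []
    current: List[Screenshot] = []
    for shot in screenome:
        if current and shot.timestamp - current[-1].timestamp > gap:
            sessions.append(current)
            current = []
        current.append(shot)
    if current:
        sessions.append(current)
    return sessions

def expected_frames(duration: timedelta) -> int:
    """Upper bound on the number of captures in `duration` at the
    five-second rate (actual counts are lower, since capture runs
    only while a screen is in use)."""
    return int(duration / CAPTURE_INTERVAL)
```

As a rough check on scale, `expected_frames(timedelta(days=1))` returns 17,280 (86,400 seconds / 5 seconds), so hundreds of thousands of screenshots accumulate over weeks of active screen use, consistent with the counts described above.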