Error models for quantum computing processors describe how they deviate from ideal behavior and predict the consequences for applications. But experimental behavior is rarely consistent with error models, even in characterization experiments like randomized benchmarking (RB) or gate set tomography (GST). I show how to resolve these inconsistencies, and quantify the rate of unmodeled errors, by augmenting an error model with a parameterized wildcard error model. Wildcard error relaxes the model's predictions, and the amount of wildcard error required to reconcile the model with observed data quantifies the rate of unmodeled errors. I will demonstrate the use of wildcard error to augment RB and GST, and to quantify leakage.
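The core idea above can be sketched in a few lines. In this illustrative example (not the speaker's actual implementation, and not pyGSTi's API), the minimal wildcard error needed to reconcile a single circuit's observed outcome frequencies with a model's predicted probabilities is the total variation distance between the two distributions; a wildcard budget at least that large makes the relaxed model consistent with the data. In practice wildcard error is assigned per gate and optimized jointly over many circuits, which this single-circuit sketch omits.

```python
import numpy as np

def min_wildcard(observed, predicted):
    """Minimal per-circuit wildcard error: the total variation
    distance between observed outcome frequencies and the model's
    predicted probabilities. The relaxed model is consistent with
    the data once the wildcard budget reaches this value."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 0.5 * np.abs(observed - predicted).sum()

# Hypothetical two-outcome circuit: the model predicts 1% error,
# but the data show 7%; a wildcard of 0.06 reconciles them.
freqs = [0.93, 0.07]   # observed outcome frequencies
probs = [0.99, 0.01]   # model-predicted probabilities
print(min_wildcard(freqs, probs))  # 0.06
```

Summed over a circuit list, this per-circuit slack gives a simple (if loose) picture of the total rate of unmodeled errors that the parameterized wildcard model then attributes to individual gates.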
Robin Blume-Kohout was born on a kitchen table in the Alaska Bush. Despite this awkward start, he somehow got bachelor's degrees in physics and English from Kenyon College (where he worked with Ben Schumacher, inventor of the term "qubit"), and a PhD in physics from UC Berkeley (although his dissertation research on decoherence was done with Wojciech Zurek at Los Alamos National Lab). After an inappropriately long series of postdocs at Caltech's IQI, the Perimeter Institute, and Los Alamos, he settled down at Sandia National Labs, which turns out to be an amazingly cool place for quantum computing research. Robin founded Sandia's Quantum Performance Laboratory, whose researchers study the performance of quantum computing components and develop tools to assess and enhance it.