
updated: 9th September 2022

My first project got me thinking about quantum state tomography as an estimation problem, and dealt with establishing connections between Bayesian and minimax estimation by generalising some known results. An estimator is a function from the set of measurement outcomes to the set of density matrices. A minimax estimator minimizes the worst-case risk, while a Bayes estimator minimizes the average risk; these are just two different ways of getting rid of the functional dependence on a quantity you don't know! Risk is expected loss, averaged over measurement outcomes, where the loss is a distance function (fidelity, trace distance, etc.) between the true state and its estimate. Since the true state is unknown, we remove it by either taking a supremum over states or averaging with respect to a 'prior' distribution. Note that estimators are tied to measurements. We also found (our own contribution) that when dealing with covariant states, a covariant measurement is optimal in the sense that it minimizes the worst-case risk (a minimax measurement). Moreover, if there is a subgroup of the group in question whose elements' unitary projective representations form a 2-design, then the covariant measurement under that subgroup, with the same seed as the minimax covariant measurement on the whole group, is also minimax.
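To keep these objects straight, here is one way to write them down (my own notation, nothing specific to the project: a POVM {M_x}, a loss L, and, for the Bayes case, a prior π over states):

```latex
% Sketch in my notation: a POVM {M_x}, a loss L, and a prior \pi over states.
\begin{align*}
  \hat{\rho} &: x \mapsto \hat{\rho}(x) \in \mathcal{D}(\mathcal{H})
    && \text{estimator: measurement outcomes to density matrices} \\
  R(\rho, \hat{\rho}) &= \sum_x \operatorname{tr}(M_x \rho)\, L\big(\rho, \hat{\rho}(x)\big)
    && \text{risk: loss averaged over outcomes} \\
  r_{\mathrm{minimax}} &= \inf_{\hat{\rho}}\; \sup_{\rho}\; R(\rho, \hat{\rho})
    && \text{worst-case risk, minimized by the minimax estimator} \\
  r_{\mathrm{Bayes}}(\pi) &= \inf_{\hat{\rho}} \int R(\rho, \hat{\rho})\, \mathrm{d}\pi(\rho)
    && \text{average risk, minimized by the Bayes estimator}
\end{align*}
```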

Such a way of posing the problem of tomography lends itself to a single-shot measurement scheme. In the minimax picture, given a measurement, there is a minimax estimator that has minimum risk in the worst case. When a single copy of the unknown state is measured, we report the value of that estimator at the observed outcome and go with it.

But what if I have two copies of the unknown state? Can I reduce the worst-case risk further by performing another measurement in sequence? Or by performing a joint measurement on both copies in the hope of better results? (Intuitively, a joint measurement should be better than a series of independent measurements, but it really depends on the constraints: what if we can only perform independent measurements?)

The question, then, is how many copies are needed if I am allowed to incur a worst-case risk of epsilon. Answering it entails coming up with a measurement (joint or independent, as per the problem's specifics) and the corresponding estimator. This is also how state tomography is typically stated and studied in the literature: people have asked for the sample complexity given an accuracy epsilon (mostly in terms of trace distance and fidelity) and obtained optimal measurements that satisfy such a constraint.
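Schematically (again in my notation, with the measurement now acting on N copies), the sample-complexity question asks for the smallest N at which the optimal worst-case risk drops below epsilon; whether the POVM {M_x} ranges over joint measurements on all copies or only over independent ones is exactly where the problem specifics enter:

```latex
% N-copy version of the worst-case risk: the POVM now acts on rho^{\otimes N}.
\begin{equation*}
  N(\epsilon) \;=\; \min\Big\{\, N \;:\; \inf_{\{M_x\},\, \hat{\rho}} \;\sup_{\rho}\;
    \sum_x \operatorname{tr}\!\big(M_x\, \rho^{\otimes N}\big)\, L\big(\rho, \hat{\rho}(x)\big) \;\le\; \epsilon \,\Big\}
\end{equation*}
```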

What happens to such questions when we minimize the average risk and obtain a Bayesian estimator instead, you might ask? Closed forms for Bayesian estimators are elusive except for losses called Bregman divergences (e.g. relative entropy), for which the estimator is simply the average of quantum states over the posterior distribution induced by a prior over states. This Bayesian mean estimator is powerful in that one can simply use it in place of, say, maximum likelihood estimation if one is not too bothered by questions of optimality: there is an automatic, algorithmic way to obtain posteriors from priors via Bayes' rule, and hence to update the estimate on the fly. Even in the absence of an 'optimal measurement' you could keep performing different measurements until you reach a desired accuracy. What is the catch? Performing the integrals isn't exactly easy, but it is doable, and newer, more efficient algorithms for numerical integration have been proposed too. (I've summed up the main arguments here. See papers by Robin Blume-Kohout for more discussion.)
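As a toy illustration of these 'on the fly' posterior updates, here is a minimal single-qubit sketch assuming a discretized prior: a finite cloud of candidate states standing in for a genuine prior over density matrices. The names, the choice of Pauli bases, and the numbers are illustrative choices of mine, not taken from any paper mentioned here:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pure_state(dim=2):
    """A random pure state (normalized complex Gaussian vector), as a density matrix."""
    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    psi /= np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

def basis_povm(vectors):
    """Projective POVM built from an orthonormal basis."""
    return [np.outer(v, v.conj()) for v in vectors]

# Three single-qubit measurement bases (Z, X, Y) to cycle through.
Z = basis_povm([np.array([1, 0], complex), np.array([0, 1], complex)])
X = basis_povm([np.array([1, 1], complex) / np.sqrt(2), np.array([1, -1], complex) / np.sqrt(2)])
Y = basis_povm([np.array([1, 1j], complex) / np.sqrt(2), np.array([1, -1j], complex) / np.sqrt(2)])
bases = [Z, X, Y]

# Discrete stand-in for a prior over states: a cloud of candidates with uniform weights.
candidates = [random_pure_state() for _ in range(2000)]
weights = np.ones(len(candidates)) / len(candidates)

true_state = random_pure_state()  # the state we pretend not to know

for shot in range(300):
    povm = bases[shot % 3]
    # Sample a single-shot outcome from the Born rule for the true state.
    probs = np.array([np.real(np.trace(E @ true_state)) for E in povm])
    outcome = rng.choice(len(povm), p=probs / probs.sum())
    # Bayes rule: reweight each candidate by its likelihood of producing that outcome.
    likelihoods = np.array([np.real(np.trace(povm[outcome] @ rho)) for rho in candidates])
    weights = weights * likelihoods
    weights /= weights.sum()

# Bayesian mean estimator: posterior-weighted average of the candidate states.
rho_bme = sum(w * rho for w, rho in zip(weights, candidates))
print(np.round(rho_bme, 3))
print(np.round(true_state, 3))
```

With a finite cloud the integrals become weighted sums, which is the usual practical compromise, traded against discretization error.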

So, if we are to take the Bayesian mean estimation approach, then we know it's optimal for at least a few distance measures we care about. The next question is: what is the optimal measurement in the single-shot case? The answer is that the optimisation is not so easy to do for general quantum states. The same holds for the question: if I have two copies, can I find a joint optimal measurement that is better than repeating the optimal single-shot measurement twice?

I am interested in seeing if Haar random orthogonal bases (aka randomised measurements) can give nice Bayesian estimates.

Randomised measurements have appeared recently in the context of tomography in a paper titled Fast and robust quantum state tomography from few measurement bases. This work uses trace distance as the loss and defines an algorithm called 'Hamiltonian Updates', based on a variant of the mirror-descent meta-algorithm. The authors give a rigorous convergence bound that applies to any single-shot measurement primitive which can distinguish arbitrary pairs of quantum states (in trace distance) and is tomographically complete. This is where Haar random measurements appear: approximate 4-designs turn out to be an example of a measurement primitive satisfying these criteria. The convergence of the algorithm itself is quantified in terms of relative entropy. (I find this a little curious: progress of the algorithm is measured in one metric, while the decision to stop updating is based on closeness of the estimator to the true state in another. Would benchmarking the algorithm against different metrics result in discrepancies? Maybe not. "The choice of metric to measure convergence is governed by convergence of the algorithm itself." Cool.) For this algorithm, they write down (a) the number of basis measurement settings, (b) the worst-case sample complexity, (c) the classical runtime, and (d) the classical storage. While earlier works have focussed on (a) or (b), the last two cost factors have not really been looked at in the literature, which makes their work especially complete.
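To make the mirror-descent flavour concrete, here is a schematic single-qubit loop in the spirit of Hamiltonian Updates. The Gibbs-form ansatz for the running estimate is the essential idea; the step size, stopping rule, choice of bases, and the use of ideal rather than sampled basis statistics are my own simplifications, not the paper's prescription:

```python
import numpy as np
from scipy.linalg import expm

# Schematic single-qubit loop in the spirit of Hamiltonian Updates: keep a Gibbs-form
# estimate sigma = exp(-H)/tr(exp(-H)); whenever a measurement basis catches sigma
# assigning too much probability somewhere, push H up on the offending subspace.

rng = np.random.default_rng(1)

def gibbs(H):
    """Normalized matrix exponential: the current estimate."""
    G = expm(-H)
    return G / np.trace(G).real

def basis_probs(rho, basis):
    """Born-rule outcome probabilities of rho in an orthonormal basis (columns)."""
    return np.real(np.einsum('ij,jk,ki->i', basis.conj().T, rho, basis))

# Two measurement bases (Z and X): a toy stand-in for a richer, tomographically
# complete primitive such as the approximate 4-designs discussed in the paper.
Z = np.eye(2, dtype=complex)
X = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
bases = [Z, X]

true_state = np.array([[0.8, 0.3], [0.3, 0.2]], dtype=complex)  # toy 'unknown' state
target_probs = [basis_probs(true_state, B) for B in bases]      # ideal basis statistics

eps = 0.02    # tolerated deviation per basis
step = 0.1    # heuristic step size
H = np.zeros((2, 2), dtype=complex)

for it in range(500):
    sigma = gibbs(H)
    violated = False
    for B, target in zip(bases, target_probs):
        diff = basis_probs(sigma, B) - target
        if np.max(np.abs(diff)) > eps:
            # Projector onto the basis vectors where sigma over-assigns probability.
            over = np.nonzero(diff > 0)[0]
            P = sum(np.outer(B[:, k], B[:, k].conj()) for k in over)
            H = H + step * P   # mirror-descent-style update: suppress those directions
            violated = True
            break
    if not violated:
        break

print('iterations:', it)
print(np.round(gibbs(H), 3))
```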