We investigate the semantic guarantees that private learning algorithms
provide against training Data Reconstruction Attacks (DRAs) mounted by
informed adversaries. To this end, we derive non-asymptotic minimax lower bounds on the
adversary’s reconstruction error against learners that satisfy differential
privacy (DP) and metric differential privacy (mDP). Furthermore, we demonstrate
that our lower-bound analysis for metric DP also covers the high-dimensional
regime, wherein the input data dimensionality may exceed the
adversary's query budget. Motivated by the theoretical improvements conferred
by metric DP, we extend the privacy analysis of popular deep learning
algorithms, such as DP-SGD and Projected Noisy SGD, to cover the broader
notion of metric differential privacy.