How Can We Analyze Differentially-Private Synthetic Datasets?
Synthetic datasets generated within the multiple imputation framework are now commonly used by statistical agencies to protect the confidentiality of their respondents. More recently, researchers have also proposed techniques to generate synthetic datasets that offer the formal guarantee of differential privacy. While combining rules were derived for the first type of synthetic datasets, little has been said about the analysis of differentially-private synthetic datasets generated with multiple imputations. In this paper, we show that the usual combining rules cannot be used to analyze synthetic datasets that have been generated to achieve differential privacy. We consider specifically the case of generating synthetic count data with the beta-binomial synthesizer, and illustrate our discussion with simulation results. We also propose, as a simple alternative, a Bayesian model that explicitly models the mechanism for synthetic data generation.
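To make the mechanism discussed above concrete, the following is a minimal sketch of a (non-private) beta-binomial synthesizer for count data: a proportion is drawn from the Beta posterior implied by the observed count, and a synthetic count is then drawn from the corresponding Binomial distribution. This is an illustrative assumption about the synthesizer's form, not the paper's exact specification; the function name and prior parameters `a` and `b` are hypothetical.

```python
import numpy as np

def beta_binomial_synthesize(x, n, a=1.0, b=1.0, m=5, seed=0):
    """Generate m synthetic counts for an observed count x out of n trials.

    Under a Beta(a, b) prior, the posterior for the binomial proportion is
    Beta(a + x, b + n - x). Each synthetic count is produced by sampling a
    proportion from this posterior and then sampling Binomial(n, p).
    (Illustrative sketch; the paper's synthesizer may differ in its prior
    and in how differential privacy is enforced.)
    """
    rng = np.random.default_rng(seed)
    p = rng.beta(a + x, b + n - x, size=m)  # posterior draws of the proportion
    return rng.binomial(n, p)               # one synthetic count per draw

# Example: observed 30 successes out of 100 trials, 5 synthetic copies.
synthetic = beta_binomial_synthesize(x=30, n=100, m=5)
```

Each of the `m` draws plays the role of one synthetic dataset in the multiple imputation setup; the paper's point is that applying the usual combining rules to such draws fails once the generation is constrained to satisfy differential privacy.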