Promoted by major search engines such as Google, Yahoo!, Bing, and Yandex, Microdata embedded in web pages, especially using the schema.org vocabulary, has become one of the most important markup formats on the Web. However, deployed Microdata is often not free of errors, which makes it difficult to estimate the volume of the data and to create an accurate data profile. In addition, since global identifiers are rarely used, the actual number of entities described in this format on the Web is hard to assess. In this article, we discuss how the successive application of data cleaning steps, such as duplicate detection and the correction of common schema-based errors, leads, step by step, to a more realistic view of the data. The cleaning steps applied include both heuristics for fixing errors and methods for duplicate detection and elimination. Using the Web Data Commons Microdata corpus, we show that applying such quality-improvement methods can substantially change the statistical profile of the dataset and lead to different estimates of both the number of entities and the class distribution within the data.
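
To give a concrete impression of the kind of cleaning pipeline referred to above, the following minimal Python sketch combines a heuristic schema-level repair with a simple key-based duplicate elimination. The entity representation, the specific heuristics (namespace and property-name normalization), and the deduplication key (type plus name) are illustrative assumptions for this sketch, not the exact heuristics evaluated in the article.

```python
# Hypothetical sketch: (1) heuristic repair of common schema-level errors,
# (2) key-based duplicate elimination on Microdata-like entity records.
from collections import defaultdict

SCHEMA = "http://schema.org/"

def fix_schema_errors(entity):
    """Apply simple heuristics: normalize the schema.org namespace and
    lower-case the first letter of property names (e.g. 'Name' -> 'name')."""
    fixed = {"type": entity["type"].replace("https://schema.org/", SCHEMA)}
    props = {}
    for prop, value in entity.get("properties", {}).items():
        prop = prop.replace("https://schema.org/", SCHEMA)
        ns, _, local = prop.rpartition("/")
        props[f"{ns}/{local[:1].lower()}{local[1:]}"] = value
    fixed["properties"] = props
    return fixed

def deduplicate(entities):
    """Group entities by an assumed identifying key (type + 'name' value)
    and keep one representative per group."""
    groups = defaultdict(list)
    for e in entities:
        key = (e["type"], e["properties"].get(SCHEMA + "name"))
        groups[key].append(e)
    return [group[0] for group in groups.values()]

if __name__ == "__main__":
    raw = [
        {"type": "https://schema.org/Product",
         "properties": {"https://schema.org/Name": "ACME Widget"}},
        {"type": "http://schema.org/Product",
         "properties": {"http://schema.org/name": "ACME Widget"}},
    ]
    cleaned = deduplicate([fix_schema_errors(e) for e in raw])
    print(len(raw), "raw entities ->", len(cleaned), "after cleaning")
```

Even in this toy example, the two records that differ only in namespace and property casing collapse into a single entity after cleaning, which illustrates why such steps change entity counts and class distributions at corpus scale.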