Introduction: During face recognition, human gaze predominantly centers on the eye region (Barton et al., 2006), and human decisions are preferentially based on the eyes' visual information (Schyns et al., 2002; Peterson et al., 2006). The reason behind this strategy, however, is largely unknown. We previously showed, using ideal observer analysis, that the eye region contains the greatest amount of objective diagnostic information (Peterson et al., 2007). The purpose of this study was to quantitatively measure the relationship between the amount of visual information contained within each feature region and the efficiency with which human recognition strategy exploits that information.

Methods: We photographed 40 Caucasian students (20 female) under tightly controlled conditions (holding expression, distance, orientation and lighting constant). We equated face size and contrast energy. We created masks to occlude background, hair, ears and neck, as well as either the eyes, nose, mouth or chin. On each trial, we randomly sampled a face and a feature to exclude, embedded the image in white Gaussian noise, and asked the observer to make an identification.

Results: Consistent with findings from a much larger sample (1000 faces; Peterson et al., 2007), ideal observer analysis showed the eyes were the most diagnostic feature, followed by the mouth, nose and chin. More importantly, humans followed the same trend; however, performance when the eyes were occluded was impaired to a much greater degree for humans than for the ideal observer.

Conclusion: The eyes are, objectively, the best region to use when making a face identification. Here, we have shown that human recognition strategy not only follows this trend, but exacerbates it. Given the eye region's domination of attention during normal human interaction, a strategy that over-weights this region may be a simple adaptation to an optimal strategy.
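The stimulus and ideal-observer pipeline described in Methods can be sketched in a few lines. This is a minimal illustration, not the study's actual code: the image sizes, the contrast-energy target, the noise standard deviation, and the function names are all assumptions. The one grounded element is the decision rule: for identification in white Gaussian noise, the ideal observer picks the noise-free template with the minimum Euclidean distance to the stimulus.

```python
import numpy as np

rng = np.random.default_rng(0)

def equate_contrast_energy(img, target_energy=1.0):
    """Zero-mean the image and scale it so its contrast energy
    (sum of squared pixel contrasts) equals target_energy."""
    img = img - img.mean()
    return img * np.sqrt(target_energy / np.sum(img ** 2))

def embed_in_noise(img, noise_sd=0.05):
    """Add independent white Gaussian noise to every pixel."""
    return img + rng.normal(0.0, noise_sd, size=img.shape)

def ideal_observer(stimulus, templates):
    """Ideal identification in white Gaussian noise: choose the
    template minimizing squared Euclidean distance."""
    dists = [np.sum((stimulus - t) ** 2) for t in templates]
    return int(np.argmin(dists))

# Stand-ins for masked, contrast-equated face photographs.
faces = [equate_contrast_energy(rng.random((64, 64))) for _ in range(5)]
true_id = 2
stimulus = embed_in_noise(faces[true_id])
guess = ideal_observer(stimulus, faces)
```

Human efficiency for each masked condition would then be estimated by comparing human and ideal-observer performance at matched noise levels.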