Extremely variant image pairs include distorted, deteriorated, and corrupted scenes that have undergone severe geometric, photometric, or other non-geometric, non-photometric transformations with respect to their originals. Real-world visual data can become extremely dusty, smoky, dark, noisy, motion-blurred, affine-transformed, JPEG-compressed, occluded, shadowed, or virtually invisible. Matching of extremely variant scenes is therefore an important problem, and computer vision solutions must be able to yield robust results regardless of how complex the visual input is. Likewise, feature detectors need to be evaluated under such complex conditions. With standard settings, feature detection, description, and matching algorithms typically fail to produce a significant number of correct matches in these types of images. However, if the full potential of the algorithms is exploited by using extremely low detection thresholds, very encouraging results are obtained. In this paper, the potential of 14 feature detectors, namely SIFT, SURF, KAZE, AKAZE, ORB, BRISK, AGAST, FAST, MSER, MSD, GFTT, Harris-corner-based GFTT, the Harris-Laplace detector, and CenSurE, is evaluated for matching 10 extremely variant image pairs. MSD detected more than 1 million keypoints in one of the images, and SIFT exhibited a repeatability score of 99.76% for the extremely noisy image pair yet failed to yield a high quantity of correct matches. Detailed results are presented in terms of feature quantity, total feature matches, correct matches, and repeatability scores. Moreover, the computational costs of 25 diverse feature detectors are reported towards the end, which can serve as a benchmark for comparison studies.
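To make the low-threshold idea concrete, the sketch below shows one way to detect and match features between an original and a heavily degraded image with OpenCV's Python bindings. It is a minimal illustration, not the paper's exact protocol: SIFT is used as a representative detector, the contrast and edge threshold values, the file names, and the ratio-test cutoff are all assumptions chosen for demonstration.

```python
# Minimal sketch: SIFT detection and matching with an extremely low contrast
# threshold, so that heavily degraded images still yield many keypoints.
# Assumes OpenCV >= 4.4 (opencv-python); parameter values are illustrative.
import cv2

img1 = cv2.imread("original.png", cv2.IMREAD_GRAYSCALE)   # hypothetical paths
img2 = cv2.imread("degraded.png", cv2.IMREAD_GRAYSCALE)

# Default contrastThreshold is 0.04; an extremely low value forces the
# detector to also keep weak, low-contrast keypoints.
sift = cv2.SIFT_create(contrastThreshold=0.001, edgeThreshold=20)

kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to separate plausible correct
# matches from the large number of tentative matches produced at such
# low thresholds.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in knn if m.distance < 0.75 * n.distance]

print(f"keypoints: {len(kp1)} / {len(kp2)}, "
      f"tentative matches: {len(knn)}, good matches: {len(good)}")
```

Other detectors listed in the paper (ORB, BRISK, AKAZE, etc.) expose analogous threshold parameters in their OpenCV `create` factories, so the same pattern of lowering the detection threshold and then filtering matches applies to them as well.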