This paper studies the problem of Generalized Zero-Shot Learning (G-ZSL), whose goal is to classify instances from both seen and unseen classes at test time. We propose a novel domain division method to solve G-ZSL. Previous models with domain division operations only calibrate the confidence of source-class predictions (W-SVM (Scheirer et al., 2014)) or treat target-class instances as outliers (Socher et al., 2013). In contrast, we directly estimate and fine-tune the decision boundary between the source (seen) and target (unseen) classes. Specifically, we put forward a framework that learns compositional domains by splitting the instances into Source, Target, and Uncertain domains and performs recognition in each domain, where the Uncertain domain contains instances whose labels cannot be confidently predicted. We use two statistical tools, namely bootstrapping and the Kolmogorov–Smirnov (K-S) test, to learn the compositional domains for G-ZSL. We validate our method extensively on multiple G-ZSL benchmarks, on which it achieves state-of-the-art performance. The code is available at https://github.com/hendrydong/demo_zsl_domain_division.
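To make the domain-division idea concrete, the following is a minimal sketch of how bootstrapped score thresholds and a two-sample K-S statistic could split test instances into Source, Target, and Uncertain domains. All function names, the pure-Python bootstrap/K-S implementations, and the two-threshold rule are illustrative assumptions for exposition, not the paper's actual procedure.

```python
import bisect
import random


def bootstrap_threshold(scores, q=0.05, n_boot=500, seed=0):
    """Bootstrap estimate of the q-th quantile of a score sample.

    Hypothetical helper: resamples `scores` with replacement and
    averages the empirical q-th quantile across bootstrap replicates,
    giving a smoothed confidence-score threshold.
    """
    rng = random.Random(seed)
    n = len(scores)
    idx = max(0, min(n - 1, int(q * n)))
    estimates = []
    for _ in range(n_boot):
        sample = sorted(rng.choices(scores, k=n))
        estimates.append(sample[idx])
    return sum(estimates) / n_boot


def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic.

    Returns the maximum absolute gap between the empirical CDFs of
    samples `a` and `b`; a large value suggests the two score
    distributions differ (e.g., source-class vs. target-class scores).
    """
    a, b = sorted(a), sorted(b)
    na, nb = len(a), len(b)
    d = 0.0
    for x in a + b:
        fa = bisect.bisect_right(a, x) / na
        fb = bisect.bisect_right(b, x) / nb
        d = max(d, abs(fa - fb))
    return d


def divide(score, t_target, t_source):
    """Assign one instance to a compositional domain by its confidence
    score, using two thresholds (assumed obtained via bootstrapping)."""
    if score >= t_source:
        return "source"      # confidently a seen class
    if score <= t_target:
        return "target"      # confidently an unseen class
    return "uncertain"       # boundary region; defer the decision
```

In this sketch, recognition would then run separately per domain: a supervised classifier on "source" instances, a zero-shot classifier on "target" instances, and a combined model on the "uncertain" region.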