April 2025
Neural network emulators have become an invaluable tool for climate prediction tasks but do not have an inherent ability to produce equitable predictions (e.g., predictions that are equally accurate across different regions or groups of people). This motivates the need for explicit internal representations of fairness. To that end, we draw on methods for enforcing physical constraints in emulators and propose a custom loss function that penalizes predictions of unequal quality across any prespecified regions or categories, here defined using the Human Development Index. This loss function weighs a standard error metric against another that captures inequity between groups, allowing us to adjust the priority of each. Our results show that emulators trained with our loss function provide more equitable predictions. We empirically demonstrate that an appropriate selection of the equity priority can minimize loss of performance, mitigating the tradeoff between accuracy and equity.
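A minimal sketch of how such an equity-weighted loss could be composed, assuming PyTorch tensors, a per-sample `group_ids` labeling (e.g., an HDI category for each grid cell), and a tuning weight `alpha` that sets the equity priority; the paper's exact error and inequity metrics may differ from the mean-squared forms used here.

```python
import torch

def equity_weighted_loss(pred, target, group_ids, alpha=0.5):
    """Illustrative loss: blend overall error with an inequity term.

    alpha = 0 reduces to the standard MSE; larger alpha places more
    weight on equalizing error across the prespecified groups.
    """
    # Standard accuracy term over all samples.
    overall_mse = torch.mean((pred - target) ** 2)

    # Per-group error for each prespecified region/category.
    group_errors = []
    for g in torch.unique(group_ids):
        mask = group_ids == g
        group_errors.append(torch.mean((pred[mask] - target[mask]) ** 2))
    group_errors = torch.stack(group_errors)

    # Inequity term: spread of group errors around their mean
    # (one possible choice among several for capturing inequity).
    inequity = torch.mean((group_errors - group_errors.mean()) ** 2)

    # Weighted combination trading off accuracy against equity.
    return (1.0 - alpha) * overall_mse + alpha * inequity
```

In this sketch, sweeping `alpha` corresponds to adjusting the equity priority described in the abstract: small values recover near-standard training, while larger values drive the per-group errors toward one another.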