As machine learning continues to grow in popularity, so does the need for labeled training data. Crowd workers often have to tag, label, or annotate such datasets in a labor-intensive, monotonous, and error-prone process that can even be frustrating. However, current task and system designs typically disregard such worker-centric issues. In this vision statement, we argue that, given the rising human-AI interaction in crowd work, more attention needs to be paid to the worker-centered design of labeling systems. Specifically, we see the need for platforms to adapt dynamically to the affective-cognitive states of crowd workers, inferred from different types of data (i.e., physiological, behavioral, or self-reported). A platform that is considerate of its crowd workers should be able to adapt to such states on an individual level, for instance, by suggesting tasks that fit a worker's current state. We conclude with a call for interdisciplinary research to make this vision a reality.
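To make the envisioned adaptation loop concrete, the following minimal Python sketch illustrates one way a platform could fuse the three data types named above into a coarse affective-cognitive state and map that state to a currently fitting task. All names, thresholds, and the rule-based fusion are hypothetical illustrations of the idea, not a method proposed by the vision statement itself.

```python
from dataclasses import dataclass
from enum import Enum, auto


class AffectState(Enum):
    """Coarse affective-cognitive states a platform might track per worker."""
    ENGAGED = auto()
    FATIGUED = auto()
    FRUSTRATED = auto()


@dataclass
class WorkerSignals:
    """Hypothetical per-worker inputs, one per data type named above."""
    heart_rate_bpm: float    # physiological (e.g., from a wearable)
    error_rate: float        # behavioral (share of rejected annotations)
    self_reported_mood: int  # self-reported, 1 (bad) to 5 (good)


def infer_state(s: WorkerSignals) -> AffectState:
    """Toy rule-based fusion; a real platform would use a learned,
    individually calibrated model instead of fixed thresholds."""
    if s.error_rate > 0.2 or s.self_reported_mood <= 2:
        return AffectState.FRUSTRATED
    if s.heart_rate_bpm < 60 and s.error_rate > 0.1:
        return AffectState.FATIGUED
    return AffectState.ENGAGED


def suggest_task(state: AffectState) -> str:
    """Map the inferred state to a task type that fits the worker right now."""
    return {
        AffectState.ENGAGED: "complex annotation (e.g., semantic segmentation)",
        AffectState.FATIGUED: "low-effort verification (e.g., binary checks)",
        AffectState.FRUSTRATED: "break prompt or a short, varied micro-task",
    }[state]


if __name__ == "__main__":
    signals = WorkerSignals(heart_rate_bpm=58.0, error_rate=0.15,
                            self_reported_mood=3)
    state = infer_state(signals)
    print(f"inferred state: {state.name} -> suggestion: {suggest_task(state)}")
```

Because the vision calls for adaptation on an individual level, the fixed thresholds in infer_state would in practice give way to a per-worker model, since physiological and behavioral expressions of fatigue or frustration vary considerably across individuals.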