Conversational agents are increasingly popular across application domains. Because these agents can interact with users in natural language, anthropomorphizing them to positively influence users’ trust perceptions seems justified. Indeed, conceptual and empirical arguments support the trust-inducing effect of anthropomorphic design. However, an opposing and widely overlooked research stream provides evidence that human-likeness reduces agents’ trustworthiness. Based on a thorough analysis of the psychological mechanisms underlying these contradictory theoretical positions, we propose that the agent substitution type acts as a situational moderator of the positive relationship between anthropomorphic design and agents’ trustworthiness. We argue that different agent types are associated with distinct user expectations, which in turn influence the cognitive evaluation of anthropomorphic design. We further discuss how these differences translate into neurophysiological responses and propose an experimental setup that combines behavioral, self-report, and eye-tracking data to empirically validate the proposed model.