Radboud University, Nijmegen, Netherlands

'If we want to develop AI that helps people, we need all the brainpower we can get.'

23 September 2022
Building the best artificial intelligence takes more than computing power and new technology, argues Pim Haselager. If we want to develop AI that helps people, we need all the brainpower we can get, including people from all the walks of life in which AI will play a role. Only then can we properly understand the implications of this new technology.

Haselager is Professor of Societal Implications of AI at Radboud University and leads a research group of the same name. His research focuses on the responsible, interdisciplinary development and application of artificial intelligence and cognitive neuroscience. Haselager: ‘Responsible use of AI and cognitive neuroscience requires constructive ethics that is in place early on. It is precisely in the research and development phase of a technology that social expectations and concerns should be able to play an important role.’

‘Topics such as human-robot interaction, decision-support systems, chatbots, and direct communication between brain and computer require a public discussion on the development and use of technology,’ says the researcher. ‘As a human race, we have a long history of prejudice, a bias on which we base our decisions. If we're not careful, that bias will be adopted by self-learning AI systems.’ Preventing this is difficult, however, especially on your own, and especially if you hold certain (unconscious) biases yourself. ‘To think about this wisely and talk about it in a balanced way, diversity is a prerequisite,’ Haselager points out.

