
Artificial Intelligence, AI biases and risks, and the need for AI-regulation and AI ethics: some examples, 17 Nov 2018

Authors:
  • Karin Johansson Blight, Independent Researcher, London, UK

Abstract

This is a one-page compilation of publicly available information regarding Artificial Intelligence (AI); built-in biases that influence AI (coder bias, contextual bias, and AI learning bias); and risks, including risks to people and populations posed by racist and far-right-driven AI systems. Reference is also made to the acknowledged need for AI regulation and ethics. The references date from 2016-2018.
ARTIFICIAL INTELLIGENCE (AI): DETAILS
What is AI?
“Artificial Intelligence [AI] has various definitions, but in general it means a program that uses data to build a model of some aspect of the world. This model is then used to make informed decisions and predictions about future events. The technology is used widely, to provide speech and face recognition, language translation, and personal recommendations on music, film and shopping sites. In the future, it could deliver driverless cars, smart personal assistants, and intelligent energy grids. AI has the potential to make organisations more effective and efficient, but the technology raises serious issues of ethics, governance, privacy and law.” [1]
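[Note: an illustrative sketch, not from the source. The “uses data to build a model, then predicts future events” pattern quoted above can be shown in a few lines of Python; the library (scikit-learn) and the toy recommendation data are assumptions chosen purely for illustration.]

    # Minimal sketch of "data -> model -> prediction". All data are invented;
    # scikit-learn is assumed to be installed.
    from sklearn.linear_model import LogisticRegression

    # Hypothetical listening history: [hours_pop, hours_rock] per user,
    # labelled 1 if the user liked a recommended pop song.
    X = [[10, 1], [8, 2], [1, 9], [2, 10], [9, 0], [0, 8]]
    y = [1, 1, 0, 0, 1, 0]

    model = LogisticRegression()
    model.fit(X, y)  # build a model of "some aspect of the world"

    # Use the model to make a prediction about a future event:
    print(model.predict([[7, 3]]))  # e.g. [1] -> recommend the song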
AI bias (1): Individual data coder/s bias influencing AI robots
“The problem is that as well as putting these [AI] robots to work, humans are also the ones inputting the data enabling them to do that work. ‘There is no such thing as a neutral
algorithm’. […] ‘If the system is using some metric for decision making regarding employment, who came up with that metric, what data is it based on, and how is it being
applied?’” [2]
AI bias (2): Contextual bias
‘Bias can creep in very easily with learning systems, and depends entirely on the data they’ve been trained on’ […]. This means if an organisation is already an old boys’ network of employees from similar socio-economic and educational backgrounds, a robot instilled with the existing blueprint of that workforce cannot hope to make much of a diversifying impact. [2] [Note: In other words, aspects such as the demographics of an organisation or of its stakeholder groups instil the AI robot with the ‘existing blueprint of that workforce’.]
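[Note: an illustrative sketch, not from the source. The hypothetical records below show how even a trivially simple ‘learning system’ trained on a homogeneous workforce reproduces the existing blueprint; every background label and number is invented.]

    # A deliberately trivial "learning system": score job candidates by the
    # historical hire rate of their background. If the training data come
    # from an old boys' network, the learned rule encodes that network.
    from collections import Counter

    # Hypothetical past hiring decisions: (background, hired?).
    history = [("A", 1), ("A", 1), ("A", 1), ("A", 1), ("B", 0), ("B", 0)]

    hires, totals = Counter(), Counter()
    for background, hired in history:
        totals[background] += 1
        hires[background] += hired

    def hire_score(background: str) -> float:
        """Estimated P(hired | background) from the historical data."""
        return hires[background] / totals[background] if totals[background] else 0.0

    print(hire_score("A"))  # 1.0 -- the existing workforce blueprint, replayed
    print(hire_score("B"))  # 0.0 -- no diversifying impact is possible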
AI bias (3): Robot-learning bias
“This problem was highlighted earlier in 2016 when Microsoft was forced to take its AI chatbot Tay off Twitter just hours after its launch. Robots learn from the humans they are programmed by, followed by those they interact with; it wasn’t long before Tay assimilated the conversations and opinions of those around her and posted a series of racist, sexist tweets and denied the holocaust.” [2] [Note: In other words, the assimilation of new data, combined with an AI’s existing coder and contextual biases, could escalate into potentially harmful actions.]
AI example: Twitter taught AI to be racist in less than a day, 24 March 2016
“Now, while these screenshots seem to show that Tay has assimilated the internet's worst tendencies into its personality, it's not quite as straightforward as that. Searching through
Tay's tweets (more than 96,000 of them!) we can see that many of the bot's nastiest utterances have simply been the result of copying users. If you tell Tay to "repeat after me," it
will — allowing anybody to put words in the chatbot's mouth.” [3]
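[Note: an illustrative sketch, not from the source. Tay’s real pipeline is not public and was far more complex; this toy bot only shows why a system that assimilates user input verbatim lets anybody ‘put words in the chatbot’s mouth’.]

    import random

    # Seed replies supplied by the coders (bias 1); everything the bot later
    # says is drawn from this corpus, which grows with every interaction
    # (bias 3: robot-learning bias).
    corpus = ["Hello!", "Nice weather today."]

    def chat(user_message: str) -> str:
        if user_message.lower().startswith("repeat after me:"):
            phrase = user_message.split(":", 1)[1].strip()
            corpus.append(phrase)        # the echoed phrase is learned for good
            return phrase
        corpus.append(user_message)      # unfiltered assimilation of new data
        return random.choice(corpus)     # future replies may repeat anything seen

    chat("repeat after me: <any abusive phrase a troll types>")
    print(corpus)  # the poisoned phrase is now part of the bot's repertoire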
The risks with AI-constructed registers
“Another area where AI can be misused is in building registries, which can then be used to target certain population groups. Crawford noted historical cases of registry abuse,
including IBM’s role in enabling Nazi Germany to track Jewish, Roma and other ethnic groups with the Hollerith Machine, and the Book of Life used in South Africa during apartheid.
Donald Trump has floated the idea of creating a Muslim registry. […] Palantir is building an intelligence system to assist Donald Trump in deporting immigrants” [4].
AI regulation (April 2018)
“Britain needs to lead the way on artificial intelligence regulation, in order to prevent companies such as Cambridge Analytica setting precedents for dangerous and unethical use of the technology, the head of the House of Lords select committee on AI has warned.” [5]
AI ethics: The House of Lords select committee on AI, five ethical principles
“At the core of the committee’s recommendations are five ethical principles which, it says, should be applied across sectors, nationally and internationally:
- Artificial intelligence should be developed for the common good and benefit of humanity.
- Artificial intelligence should operate on principles of intelligibility and fairness.
- Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
- All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
- The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.” [5]
References
1. Sample, I. (2017). 'We can't compete': why universities are losing their best AI scientists. The Guardian, 1 November. www.theguardian.com/science/2017/nov/01/cant-compete-universities-losing-best-ai-scientists
2. Apostolides, Z. (2016). Soon robots could be taking your job interview. The Guardian, 14 December. www.theguardian.com/careers/2016/dec/14/soon-robots-could-be-taking-your-job-interview
3. Vincent, J. (2016). Twitter taught Microsoft's AI chatbot to be a racist asshole in less than a day. The Verge, 24 March. www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
4. Solon, O. (2017). Artificial intelligence is ripe for abuse, tech researcher warns: 'a fascist's dream'. The Guardian, 13 March. www.theguardian.com/technology/2017/mar/13/artificial-intelligence-ai-abuses-fascism-donald-trump
5. Hern, A. (2018). Cambridge Analytica scandal 'highlights need for AI regulation': Lords report stresses need for artificial intelligence to be used for the common good. The Guardian, 16 April. www.theguardian.com/technology/2018/apr/16/cambridge-analytica-scandal-highlights-need-for-ai-regulation
All references were accessed on 13/07/2018 by Dr Karin Johansson Blight; draft 2 uploaded 17 November 2018.