The capabilities of Artificial Intelligence (AI) are evolving rapidly and affect almost all sectors of society. AI is increasingly integrated into criminal and harmful activities, expanding existing vulnerabilities and introducing new threats. This article reviews the relevant literature, reports, and representative incidents, which allows us to construct a typology of the malicious use and abuse of systems with AI capabilities. The main objective is to clarify the types of activities and the corresponding risks. Our starting point is to identify the vulnerabilities of AI models and outline how malicious actors can abuse them. Subsequently, we explore AI-enabled and AI-enhanced attacks. While we present a comprehensive overview, we do not aim for a conclusive and exhaustive classification. Rather, we provide an overview of the risks of AI-enhanced applications that contributes to the growing body of knowledge on the issue. Specifically, we suggest four types of malicious abuse of AI (integrity attacks, unintended AI outcomes, algorithmic trading, membership inference attacks) and four types of malicious use of AI (social engineering, misinformation/fake news, hacking, autonomous weapon systems). Mapping these threats enables informed reflection on the governance strategies, policies, and activities that can be developed or improved to minimize risks and avoid harmful consequences. Enhanced collaboration among governments, industry, and civil society actors is vital to increase preparedness and resilience against the malicious use and abuse of AI.

Keywords: malicious use, crime, abuse, Artificial Intelligence.

I. INTRODUCTION

The impact of Artificial Intelligence (AI) systems has become a focal point in academic studies, political debates, and civil society reports. The development of AI is lauded for its transformative technological capabilities, such as advanced automated image recognition with applications in medicine, like the detection of cancer. However, this technological advancement is not without criticism and apprehension, particularly concerning the uncertain consequences of automation for the labor market, including fears of mass unemployment. AI can strengthen the capabilities of governments, but the same techniques can also be turned against them. Thus, although AI can be beneficial, it also creates problems, especially in cybersecurity and the fight against cybercrime. The private sector, which predominantly drives AI development, extends its applications to customer-oriented domains, while defense sectors utilize similar capabilities for their operations. The line between the actions of state and non-state actors is increasingly blurred, as illustrated by recent ransomware attacks targeting public infrastructure in various countries. Moreover, the dual-use aspect of technology is not novel in the realm of cybercrime or cybersecurity. However, the unique vulnerabilities that AI introduces for malicious use and abuse pose novel challenges. A thorough evaluation of the threat landscape is vital to initiate and adjust governance mechanisms, implement proactive measures, and bolster cyber resilience. This article therefore evaluates the main categories of AI use and abuse.