Discussion
Started 23 May 2024

Should Artificial Intelligence content and methods be included in most (or all) computer science subjects?

Currently, AI is being applied in many areas of society: economic, social, educational, and many others.
But practical applications of computer science are normally combinations of different specialties: programming, databases, user interfaces, analysis techniques and algorithm design, etc.
Would it be worthwhile to include AI elements in each subject of the computer science curriculum, to facilitate this cooperation and coordination between AI and the other components of practical applications?

Most recent answer

Yes, AI should be included in most courses

All replies (5)

Alireza Ghorbani Koundoghdi
University College of Nabi Akram
Yes, including AI content in most computer science subjects is beneficial. Here's why:
  1. Comprehensive Skill Development: Students gain a holistic understanding of how AI integrates with programming, databases, HCI, and algorithm design.
  2. Interdisciplinary Knowledge: AI intersects with various computer science areas, enhancing problem-solving and innovation.
  3. Industry Relevance: AI skills are in high demand, making graduates more competitive and adaptable to evolving job markets.
  4. Practical Applications: Many modern applications involve AI, such as automated testing in software engineering and threat detection in cybersecurity.
Implementation Strategies
  • Modular Approach: Add AI modules to existing courses; a minimal sketch appears at the end of this reply.
  • AI Projects: Encourage projects that integrate AI with other fields.
  • Interdisciplinary Courses: Develop courses focusing on AI's intersection with other subjects.
  • Hands-on Workshops: Offer practical labs and workshops on real-world AI applications.
Incorporating AI into the computer science curriculum ensures students are well-prepared for current and future technological challenges.
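As a concrete, hedged illustration of the modular approach mentioned above: one lab session in an existing algorithms or programming course could have students train and evaluate a small classifier end to end. This is only a minimal sketch assuming Python with scikit-learn; the dataset and model are illustrative choices, not a prescribed syllabus.

```python
# Minimal sketch of a drop-in AI lab for an existing course,
# assuming scikit-learn is installed; dataset and model are illustrative.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Load a small bundled dataset and hold out a test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# A shallow decision tree connects naturally to the trees and
# recursion students already meet in the host course.
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)
print(f"Test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```

A decision tree is a deliberate choice for such a module: the learned model can be inspected and related back to the data structures the host course already teaches.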
Hussain Nizam
Dalian University of Technology
The integration of Artificial Intelligence (AI) content and methods into computer science education has sparked significant debate. The argument for integrating AI into the curriculum is strong, given the pervasive impact of AI on technology and society. However, the extent of integration, and the specific subjects in which AI should be included, may vary with educational goals, resources, and the level of study. It is important to strike a balance by combining AI education with foundational computer science principles to ensure a comprehensive, well-rounded education.
Mohammed Saha Alam
National University of Singapore
Integrating Artificial Intelligence (AI) into computer science education is a topic of great importance.
  1. Understanding AI: AI is a branch of computer science focused on creating systems that can perform tasks normally requiring human intelligence, including understanding natural language, recognizing patterns, making decisions, and learning from experience. AI encompasses various subfields, each with its own objectives and specializations.
  2. Types of AI: AI can be categorized into three levels based on its capabilities:
    • Artificial Narrow Intelligence (ANI): The most common form of AI we interact with today, designed to perform a single task, such as voice recognition or recommendations on streaming services.
    • Artificial General Intelligence (AGI): AI that can understand, learn, adapt, and apply knowledge across a wide range of tasks at a human level. Although large language models and tools such as ChatGPT have shown some ability to generalize across many tasks, as of 2023 AGI remains a theoretical concept.
    • Artificial Super Intelligence (ASI): A hypothetical future AI that surpasses human intelligence across virtually all domains, including nearly all economically valuable work. This concept remains largely speculative.
  3. Integration of AI in Computer Science Education: Including AI elements in computer science education can have several benefits:
    • Holistic Understanding: Students see AI's role across domains, including programming, databases, and algorithm design.
    • Interdisciplinary Skills: AI connects computer science to other fields through subareas such as natural language processing, computer vision, and robotics.
    • Practical Applications: Students learn to apply AI techniques to real-world problems, strengthening their problem-solving abilities.
    • Industry Relevance: As AI becomes more prevalent, professionals with AI knowledge are in high demand across industries.
    • Ethical Considerations: Teaching AI involves discussing ethical implications, bias, and responsible AI development.
  4. Curriculum Considerations: Here are some ways to incorporate AI into computer science curricula (a minimal Foundations sketch follows the summary below):
    • Foundations: Introduce fundamental AI concepts, including machine learning, neural networks, and data preprocessing.
    • Specialized Courses: Offer courses on natural language processing, computer vision, and reinforcement learning.
    • Projects and Labs: Assign projects in which students build AI models or analyze real-world data.
    • Guest Lectures: Invite industry experts to discuss AI applications and trends.
    • Ethics and Bias: Include discussions of ethical AI development and bias mitigation.
  5. Resources for Learning AI: Online platforms such as Coursera offer courses covering essential AI skills, including machine learning, robotics, and data interpretation. Beginner's guides and similar resources can help with the basics of AI and automation.
In summary, integrating AI elements into computer science education can empower students to navigate the evolving landscape of technology and contribute to practical applications across various domains.
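To make the Foundations item above concrete, here is a minimal sketch of a from-scratch exercise: a single perceptron trained with NumPy on the logical AND function. The task, learning rate, and epoch count are illustrative assumptions, not part of any particular curriculum.

```python
# Minimal "foundations" sketch: a perceptron trained from scratch on
# logical AND. All numbers here are illustrative choices.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])  # target: logical AND

w = np.zeros(2)  # weights
b = 0.0          # bias
lr = 0.1         # learning rate
for _ in range(20):  # a few passes over the four examples
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)       # threshold activation
        update = lr * (target - pred)    # classic perceptron rule
        w += update * xi
        b += update

print([int(w @ xi + b > 0) for xi in X])  # expected: [0, 0, 0, 1]
```

Because everything fits in a dozen lines, students can trace each weight update by hand before moving on to library implementations and larger networks.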

Similar questions and discussions

If man succeeds in building a general artificial intelligence, will it mean that man has better understood the essence of his own consciousness?
Discussion
56 replies
  • Dariusz Prokopowicz
If man succeeds in building a general artificial intelligence, will this mean that man has become better acquainted with the essence of his own intelligence and consciousness?
If man succeeds in building a general artificial intelligence, i.e., AI technology capable of self-improvement, independent development and perhaps also achieving a state of artificial consciousness, will this mean that man has fully learned the essence of his own intelligence and consciousness?
Suppose man succeeds in building a general artificial intelligence, i.e., AI technology capable of self-improvement, independent development, and perhaps also attaining a state of artificial consciousness; this might mean that man has fully understood the essence of his own intelligence and consciousness. If this happens, what will the sequence be? Will man first understand the essence of his own intelligence and consciousness and then build such a general artificial intelligence, or vice versa: will a general artificial intelligence and artificial consciousness capable of self-improvement and development be created first, and only then, thanks to that technological progress in artificial intelligence, will man fully understand the essence of his own intelligence and consciousness? In my opinion, it is most likely that both processes will develop simultaneously, on the basis of mutual feedback.
I have described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
If man succeeds in building a general artificial intelligence, i.e., AI technology capable of self-improvement, independent development and perhaps also achieving a state of artificial consciousness, will this mean that man has fully learned the essence of his own intelligence and consciousness?
If man succeeds in building a general artificial intelligence, will this mean that man has better understood the essence of his own consciousness?
And what is your opinion about it?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Is it safe for humans to create new drugs by artificial intelligence?
Discussion
27 replies
  • Dariusz Prokopowicz
Is the design of new pharmaceutical formulations through the involvement of AI technology, including the creation of new drugs to treat various diseases by artificial intelligence, safe for humans?
There are many indications that artificial intelligence technology can be of great help in discovering and creating new drugs. Artificial intelligence can help reduce the cost of developing new drugs, can significantly shorten the time needed to design new drug formulations and to conduct research and testing, and can thus provide patients faster with new therapies that treat diseases and save lives. Thanks to new technologies and analytical methods, the way healthcare professionals treat patients has been changing rapidly. As scientists overcome the complex problems associated with lengthy research processes, and as the pharmaceutical industry seeks to reduce the time it takes to develop life-saving drugs, so-called precision medicine is coming to the rescue.

Developing, analyzing, testing, and bringing a new drug to market takes a great deal of time, and artificial intelligence is particularly helpful in reducing it. For most drugs, the first step is to synthesize a compound that can bind to a target molecule associated with the disease, usually a protein, which is then tested against various influencing factors. To find the right compound, researchers analyze thousands of candidate molecules; once a compound with the desired characteristics is identified, researchers search huge libraries of similar compounds for the one with the optimal interaction with the protein responsible for the specific disease. Today this labor-intensive process requires many years and many millions of dollars of funding; with artificial intelligence, machine learning, and deep learning involved, the whole process can be significantly shortened, costs can be significantly reduced, and pharmaceutical companies can bring new drugs to market faster.

However, can an artificial intelligence equipped with artificial neural networks, taught through deep learning to carry out the above processes, get it wrong when creating a new drug? What if a drug that was supposed to cure a person of a particular disease produces new side effects that prove even more problematic for the patient than the original disease? What if the patient dies from previously unforeseen side effects? Will insurance companies recognize the artificial intelligence's mistake and compensate the family of the deceased patient? Who will bear the legal, financial, and ethical responsibility for such a situation?
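To make the compound-screening step described above concrete, here is a minimal, hypothetical sketch of ligand-based virtual screening using fingerprint similarity, assuming the RDKit library is available. The query molecule, the toy library, and the similarity scoring are illustrative stand-ins; real pipelines screen millions of compounds with far richer, often learned, models.

```python
# Hypothetical sketch of ligand-based virtual screening with RDKit.
# The query compound and the tiny "library" are illustrative only.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# A known active compound (aspirin, standing in for a real hit).
query = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")
query_fp = AllChem.GetMorganFingerprintAsBitVect(query, 2, nBits=2048)

# A toy candidate library; real screens cover millions of molecules.
library = {
    "salicylic acid": "O=C(O)c1ccccc1O",
    "ibuprofen": "CC(C)Cc1ccc(cc1)C(C)C(=O)O",
    "caffeine": "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
}

# Rank candidates by Tanimoto similarity to the query fingerprint;
# in practice a trained model would replace or refine this score.
for name, smiles in library.items():
    fp = AllChem.GetMorganFingerprintAsBitVect(
        Chem.MolFromSmiles(smiles), 2, nBits=2048)
    print(f"{name}: {DataStructs.TanimotoSimilarity(query_fp, fp):.2f}")
```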
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Is the design of new pharmaceutical formulations through the involvement of AI technologies, including the creation of new drugs to treat various diseases by artificial intelligence, safe for humans?
Is the creation of new drugs by artificial intelligence safe for humans?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
What will be the impact of Industry 4.0/5.0 technologies on labor markets in the future?
Discussion
9 replies
  • Dariusz Prokopowicz
What will be the impact of Industry 4.0/5.0 technologies, including generative artificial intelligence and Big Data Analytics, on labor markets in the future?
In recent years, the digitization and Internetization of the economy have accelerated. These processes are now occurring simultaneously across many areas of economic activity, in the functioning of businesses and of public, financial, and other institutions operating in economies that are increasingly knowledge-based. The Covid-19 pandemic further accelerated the digitization and Internetization of the economy. More and more companies and enterprises in various industries and sectors are expanding their operations via the Internet, providing their services remotely and selling their products through e-commerce.
The development of information processing technologies in the era of the current Fourth Technological Revolution is driven by the growth of applications of ICT, Internet technologies, and the advanced data processing technologies of Industry 4.0. The current technological revolution associated with the concept of Industry 4.0 is driven by the development of technologies such as: Big Data Analytics, Data Science, cloud computing, machine learning, deep learning based on multi-layer artificial neural networks, generative artificial intelligence, the personal and industrial Internet of Things, Business Intelligence analytics, autonomous robots, horizontal and vertical data system integration, multi-criteria simulation models, digital twins, Blockchain, smart technologies, cyber security instruments, augmented and virtual reality, and other technologies for the advanced multi-criteria processing of large sets of data and information.
The digitalization and Internetization of the economy are shaped by "upstream" processes inspired by public institutions, including the computerization and Internetization of public offices, and by "downstream" determinants such as the growth of e-commerce, the increasing share of transactions and payments made electronically via the Internet, the development of Internet banking, and the shift of a significant part of business in many companies to remote service conducted via the Internet.
In the future, as applications of Industry 4.0 technologies develop, including ever-improving generative artificial intelligence, employment in certain professions, industries, and sectors of the economy may decline significantly. Significant declines in employment in some branches and sectors of the knowledge economy are likely within the next few years, and there are many indications that the scale of generative AI applications in production processes and services will grow strongly. On the other hand, the development of generative artificial intelligence and other Industry 4.0 technologies will also create new jobs in analytical and research fields. Just a few years ago, the prevailing thesis was that the scale of job losses would far exceed the number of new jobs created; given the rapid recent progress in generative AI and the new jobs emerging around these technologies, that thesis is beginning to lose its relevance. It is therefore not out of the question that the new jobs and professions being created may fully offset the employment gap caused by AI technology replacing human labor. Regardless of the nature of the changes that new Industry 4.0 technologies bring to various fields of business activity, it is certain that their impact on labor markets will be significant in the years to come.
I have described the key issues of opportunities and threats to the development of artificial intelligence technologies in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
And the applications of Big Data technologies in sentiment analysis, business analytics and risk management were described in my co-authored article:
APPLICATION OF DATA BASE SYSTEMS BIG DATA AND BUSINESS INTELLIGENCE SOFTWARE IN INTEGRATED RISK MANAGEMENT IN ORGANIZATION
I invite you to familiarize yourself with the issues described in the publications above, and I invite scientific cooperation on these issues.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What will be the impact of Industry 4.0/5.0 technologies, including generative artificial intelligence and Big Data Analytics technologies on labor markets in the future?
What will be the impact of Industry 4.0/5.0 technologies on labor markets in the future?
And what is your opinion on this topic?
What do you think about this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Can generative artificial intelligence technology help detect cybercrime attacks carried out using ransomware viruses?
Discussion
10 replies
  • Dariusz Prokopowicz
Could the use of generative artificial intelligence technology to detect cybercrime attacks carried out using ransomware viruses significantly increase the level of cyber security in many companies, enterprises, financial and public institutions?
How can systems for managing the risk of cybercrime and/or loss of sensitive data archived in internal databases be improved through the use of generative artificial intelligence technology?
Where companies, enterprises, financial and public institutions have built a cybercrime risk management system, including email anti-spam applications, anti-virus systems, strong login tools, backup systems for data held on hard drives, firewalls, cyber threat early warning systems, etc., most cybercrime attacks targeting them prove ineffective, and those that succeed cause only limited problems and financial losses. However, many business entities, especially SMEs, still do not have complex, high-tech, integrated systems for managing the risk of cybercrime and/or the loss of sensitive data stored in databases.

In recent years, one of the most serious cybercrime problems, causing serious financial losses in some companies, enterprises, and public institutions, has been attacks carried out with ransomware viruses. A successful ransomware attack infects a computer, locks users and employees out of the company's internal systems, and steals or blocks access to data held in the company's databases and on hard drives, with a simultaneous demand to pay a ransom to remove the blockade. In Poland, as many as 77 percent of companies attacked with ransomware agree to pay the ransom, so security systems are evidently still too poorly organized in many companies and institutions. In many business entities, systems for managing the risk of cybercrime and/or the loss of sensitive data archived in internal databases are still not professionally built; cybercrime risk management apparently works poorly or not at all. Since generative artificial intelligence technology is already being applied in many areas of cyber security, the question arises: could applying this technology to detect cybercrime attacks carried out with ransomware viruses significantly increase the level of cyber security in many companies, enterprises, and financial and public institutions?
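As a hedged sketch of the simplest form such detection could take: ransomware tends to produce a characteristic behavioural burst (mass file modification, high-entropy encrypted writes, wholesale extension renaming) that can be flagged as an anomaly against a baseline of normal activity. The example below uses scikit-learn's IsolationForest, a classical anomaly detector rather than a generative model, and every feature and number in it is invented for illustration.

```python
# Illustrative sketch: flagging ransomware-like behaviour as an anomaly
# in file-activity telemetry. Features and figures are invented;
# this is not a production detector.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [files modified per minute, mean entropy of written data,
#            file extensions renamed per minute] during normal operation.
rng = np.random.default_rng(0)
normal = np.column_stack([
    rng.poisson(5, 500),           # few file modifications
    rng.uniform(3.0, 5.0, 500),    # moderate write entropy
    rng.poisson(0.1, 500),         # renames are rare
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A ransomware-like burst: mass modification, high-entropy (encrypted)
# writes, and wholesale extension renaming.
suspect = np.array([[400, 7.9, 350]])
print(model.predict(suspect))  # -1 marks an anomaly in scikit-learn
```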
I am conducting research on the analysis of cybercriminal attacks carried out using ransomware viruses and on improving cyber security systems. I have included the conclusions of my research in the following articles:
Analysis of the security of information systems protection in the context of the global ransomware cyberattack conducted on June 2, 2017
Development of malware ransomware as a new dimension of cybercrime taking control of IT enterprise and banking systems
Determinants of the development of cyber-attacks on IT systems of companies and individual clients in financial institutions
The Impact of the COVID-19 Pandemic on the Growing Importance of Cybersecurity of Data Transfer on the Internet
Cybersecurity of Business Intelligence Analytics Based on the Processing of Large Sets of Information with the Use of Sentiment Analysis and Big Data
THE QUESTION OF THE SECURITY OF FACILITATING, COLLECTING AND PROCESSING INFORMATION IN DATA BASES OF SOCIAL NETWORKING
I invite you to get acquainted with the issues described in the above-mentioned publications, and I invite scientific cooperation on these issues.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How can systems for managing the risk of cybercrime and/or the loss of sensitive data archived in internal databases be improved through the application of generative artificial intelligence technology?
Could the application of generative artificial intelligence technology to detect cyberattacks carried out using ransomware viruses significantly increase the level of cyber security in many companies, enterprises, financial and public institutions?
Can generative artificial intelligence technology help detect cybercrime attacks carried out using ransomware viruses?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Can the application of AI and Big Data technologies improve Business Intelligence predictive analytics processes?
Discussion
8 replies
  • Dariusz Prokopowicz
Can the application of generative artificial intelligence technology and Big Data Analytics improve the processes of predictive analytics performed as part of Business Intelligence?
Can the application of generative artificial intelligence technology and Big Data Analytics improve the processes of predictive analytics carried out within the framework of Business Intelligence, thereby increasing the effectiveness of the business, economic, and financial analytics that support the management of an organization, enterprise, company, corporation, etc.? And if so, how, and to what extent?
As information systems that allow largely automated Business Intelligence analytics become an important factor in supporting business management, the importance of the new Industry 4.0/5.0 technologies, including generative artificial intelligence and Big Data Analytics, for improving these analytical processes is growing. On the one hand, it seems clear that applying generative artificial intelligence and Big Data Analytics can improve the predictive analytics carried out within Business Intelligence, and thus increase the effectiveness of the business, economic, and financial analytics supporting the management of an organization, enterprise, company, or corporation. On the other hand, it is also important to define precisely the determinants of the performance of such analytical processes, to identify the role of the new Industry 4.0/5.0 technologies, including generative artificial intelligence and Big Data Analytics, in Business Intelligence predictive analytics, and to estimate the extent of these technologies' influence on the improvement of those processes.
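As a minimal, hypothetical illustration of the predictive step in such analytics: fitting a trend to past revenue and forecasting the next period. The figures below are invented, and a real Business Intelligence pipeline would use far richer features and models, potentially including the generative AI and Big Data tools discussed above.

```python
# Toy sketch of a Business Intelligence-style forecast: a linear trend
# fitted to invented quarterly revenue figures (in thousands).
import numpy as np
from sklearn.linear_model import LinearRegression

quarters = np.arange(1, 9).reshape(-1, 1)   # eight past quarters
revenue = np.array([102, 108, 111, 119, 124, 131, 135, 142.0])

model = LinearRegression().fit(quarters, revenue)
forecast = model.predict(np.array([[9]]))   # next quarter
print(f"Quarter 9 forecast: {forecast[0]:.1f}k")
```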
I am conducting research on this issue. I have included the conclusions of my research in the following article:
Business Intelligence analytics based on the processing of large sets of information with the use of sentiment analysis and Big Data
I invite you to familiarize yourself with the problems described in the publications given above and to cooperate with me in scientific research on these problems.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Can the application of generative artificial intelligence technology and Big Data Analytics improve the processes of predictive analytics carried out within the framework of Business Intelligence, thereby increasing the effectiveness of the business, economic, and financial analytics that support the management of an organization, enterprise, company, corporation, etc.? And if so, how, and to what extent?
Can the use of generative artificial intelligence and Big Data Analytics technologies improve the processes of predictive analytics performed as part of Business Intelligence?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
