Article

ChatGPT for Computational Social Systems: From Conversational Applications to Human-Oriented Operating Systems


Abstract

Welcome to the second issue of the IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS (TCSS) of 2023. According to the latest update of CiteScore Tracker from Elsevier Scopus released on February 5, 2023, the CiteScore of TCSS has reached a historic high of 9.6. Many thanks to all for your great effort and support.


... • Computational Challenges: The high computational costs of running sophisticated AI models like ChatGPT in real-time environments, such as those required for intelligent vehicles, pose significant challenges. Discussions might delve into optimizing computational resources and developing more efficient AI models that can operate within the constraints of vehicular technology [76]. ...
... • Ethical and Practical Considerations: Implementing AI in sensitive applications such as vehicles brings ethical and practical considerations to the forefront. Discussions might touch on the responsibility for AI-driven decisions, privacy concerns related to data collection and usage, and the integrity of the AI systems in terms of providing unbiased and accurate information [76]. ...
... • The advent of language models like ChatGPT has broadened the computational landscape, particularly in social systems and AI linguistic frameworks. Wang et al. delve into the utilization of ChatGPT within computational social systems, examining its transition from simple conversational applications to more complex, human-oriented operating systems [76]. Their work likely discusses the integration of AI in social contexts, emphasizing the need for systems that not only process information but also understand and adapt to human nuances, potentially transforming user interaction with technology. ...
... This large-scale language model, distinguished by its conversational interface, faster responses, and cost-effectiveness, evolved from the GPT-3 model [12]. This technology has demonstrated potential across diverse sectors including education, business, and computational social systems, showcasing its wide applicability [13,14]. However, ChatGPT faces limitations in areas such as narrative coherence, bias, and the dissemination of fake news, as reported by [15]. ...
... [7] argue that ethical considerations, societal implications, and the design of these technologies must be addressed to ensure their reliable and beneficial integration into society. Continuous research and development concerning ChatGPT underscore its capacity to revolutionize the processing, dissemination, and application of information across various domains [14]. ...
Preprint
Full-text available
The reception of ChatGPT in the educational field has varied between enthusiasm and scepticism, generating a series of controversies and challenges in response to the emergence of artificial intelligence (AI) technologies within the educational context. The purpose of this study was to interpret and describe the perceptions of a group of pre-service teachers on the use of ChatGPT as an assistant in a learning experience implemented during their training, and on the implications of this AI tool in their future work context as teachers. A sequential mixed methodology was employed, initially collecting quantitative data, followed by a deeper analysis based on qualitative data from which explanatory categories of the phenomenon emerged. Our study on the integration of ChatGPT into teacher training programs reveals that students valued the use of ChatGPT for its conceptual support and quick feedback, enhancing their learning experience. However, concerns about the accuracy of information and plagiarism underline the need for a critical engagement with AI tools. Regarding their future teaching role, they highlighted the importance of mediation, training, and guiding students in the critical and responsible use of AI. They also identified the need to modify assessment methods to encourage critical thinking, problem-solving, and creativity. The study emphasized the importance of integrating AI technologies into teacher training in a reflective and critical manner, with guidance from educators and IT specialists. These findings suggest a balanced approach to incorporating AI in education, emphasizing ethical use and the transformative potential of AI to support pedagogical objectives.
... ChatGPT is a powerful tool for natural language processing and communication, and it has a wide range of potential applications in fields such as education, customer service, and healthcare [2][3][4]. Using GAI like ChatGPT is like using an Iron Man suit from Marvel Comics and movies. Similar to strengthening with an Iron Man suit, existing data can be augmented or improved with ChatGPT. ...
... 2) Limited effect model: A limited effect model for GAI can be expressed as a logistic function. If the level of competence (C) is specified on the X-axis and the task level (T2) is specified on the Y-axis, the relationship can be estimated by a logistic function such as (2). ...
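The quoted passage points to a logistic relation between the level of competence (C) and the task level (T2) but does not reproduce the referenced equation (2). Purely as an illustration, and with the constants T_max, k, and C_0 introduced here as assumptions rather than values from the cited work, a standard logistic form of such a limited-effect curve is:

```latex
T_2(C) = \frac{T_{\max}}{1 + e^{-k\,(C - C_0)}}
```

Under this reading, T_max caps the attainable task level, k controls the steepness of the curve, and C_0 marks the competence at which half of that ceiling is reached, which matches the "limited effect" interpretation of GAI support.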
Article
Full-text available
As the performance of generative artificial intelligence (GAI), such as ChatGPT, improves, content created by GAI will be distributed in the social media space, and knowledge and writings from unknown sources will be disseminated and reproduced. Now that GAI is becoming widespread, it is necessary to distinguish GAI from human intelligence, which constitutes knowledge. The data, information, knowledge, and work (DIKW) hierarchy is a useful framework for teaching and for checking metacognitive and explainable artificial intelligence (XAI) literacy. There are two types of collaboration between GAI and human intelligence: a combined intelligence model and a parallel intelligence model. The combined intelligence model is a method of using GAI for creating works by collecting data, organizing information, and deriving knowledge from information. This model is suitable for GAI-assisted tasks (GAIATs). The parallel intelligence model is suitable for GAI-assisted learning (GAIAL); it is a method in which a person develops abilities by analyzing and comparing tasks created by GAI after going through the data-information-knowledge-work process. The zone of proximal development (ZPD) created by educational scaffolding is a quantitative framework that is appropriate for evaluating the effects of GAI. The ZPD generated by GAI that corresponds to scaffolding should be managed so as not to favor or disadvantage specific individuals.
... It stands out for features such as accessibility, personalization, conversational format, and cost-effectiveness (Rahman & Watanobe, 2023). Developed based on models like GPT-3 with a vast number of parameters, ChatGPT has shown promise across various fields such as medical education, CAD design, radiology, and computational social systems (Gilson et al., 2023; Nelson et al., 2023; Lyu et al., 2023; Wang et al., 2023). Despite its strengths, ChatGPT has limitations, including occasional issues with narrative coherence as noted by Gilson et al. (2023), and the lack of patient-specific information that may impact the credibility of the health information it provides (Davis, 2023). ...
Article
Full-text available
The reception of ChatGPT in the educational field has varied between enthusiasm and skepticism, sparking a series of controversies and challenges in response to the emergence of artificial intelligence (AI) technologies within the educational context. For this reason, exploring the student perception of future educators in training regarding these tools, considering the challenges they will face both in their immersion processes in practice and in their future professional role, is believed to constitute a contribution to the field of initial teacher education knowledge. The purpose of this study was to interpret and describe the perceptions of a group of pedagogy students on the use of ChatGPT as an assistant in a learning experience implemented during their training and on the implications of this AI tool in their future work context as teachers. In this context, applied research in teaching was conducted with a course on "Assessment of and for learning" comprised of 26 students from various pedagogy programs in their fourth year. A sequential mixed methodology was employed, collecting quantitative data initially, to then delve deeper into the analysis based on qualitative data. For this, students were first asked to complete a questionnaire on prior knowledge and experiences with the use of ChatGPT. Subsequently, considering the results, work in class focused on two central themes: the drafting of prompts and academic integrity. Then the workshop and its phases were presented, and the students began working with the assistance of ChatGPT. Feedback sessions on the workshops were held, followed by the application of a semi-structured questionnaire with open-ended questions aimed at gathering information on student perceptions based on this experience and the implications of AI as future teachers. The data collected were analyzed based on codes and the development of categories. A focus group with 12 student representatives was later conducted to promote reflection and critical analysis among students on the implications of artificial intelligence in their future professional teaching performance. The extracted data revealed new codes and explanatory categories of the phenomenon under study. From the qualitative phase, 4 categories and emerging themes were identified, such as "Advantages of using ChatGPT as an assistant in the training of university pedagogy students", within which students expressed a positive valuation of the experience, highlighting the assistance of the chatbot during the workshop through various functions such as helping to justify or provide conceptual support to the evaluation criteria they were developing, proposing ideas for assessment situations, and receiving guidance or feedback quickly on their work. Additionally, a second category grouped the risks associated with using the chatbot, such as the possible lack of rigor or reliability of the information it provides, or the misuse of these tools through plagiarism. Other categories reported on student perceptions regarding their future professional teaching performance, with emerging themes such as the importance of teacher training and mediation when integrating these tools into learning processes with their students, ethical implications, and the need to transform assessment methods in schools to promote the development of critical thinking, information analysis, and creativity, so that AI tools can be a real support for students and not a threat.
Finally, students highlighted the importance of working with these tools from the beginning of their training with the mediation of a teacher and a specialist in these new information technologies.
... Recently, LLMs (Shanahan, McDonell, and Reynolds 2023; Wang, Li, et al. 2023) have achieved ground-breaking technical implementations and demonstrated remarkable potential to achieve human-level intelligence. They are increasingly used as the core coordinator or controller for creating autonomous agents, and a wide variety of AI Agents have emerged. ...
Article
Full-text available
The widespread use of the Internet has accelerated the explosive growth of data, which in turn leads to information overload and information confusion. This makes it difficult for us to communicate effectively in social groups, thereby intensifying the demands for emotional companionship. Therefore, in this article we propose a novel social group chatting framework based on collaboration among multiple Large Language Model (LLM)-powered autonomous agents. Specifically, BERTopic is used to extract topics from the historical chatting content of each social group every day, and then tracking of multiple topics is realised through multi-level association using an adaptive time sliding-window mechanism and optimal matching. Furthermore, we use the topic tracking architecture and prompts to design and implement an AI Chatbot system with different characters that can conduct natural language conversations with users in online social groups. The LLM, as the controller and coordinator of the whole AI Chatbot for sub-tasks, allows different AI Agents to autonomously decide whether to participate in the current topic, how to generate a response, and whether to propose a new topic. Each AI Agent has its own multi-store memory system based on the Atkinson-Shiffrin model. Finally, we construct a verification environment based on an online game that is consistent with real society. Subjective and objective evaluation methods were deployed to perform qualitative and quantitative analyses to demonstrate the performance of our AI Chatbot system.
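To make the topic-extraction and tracking pipeline described in this abstract more concrete, the following minimal Python sketch runs BERTopic on one day's messages and links the resulting topics to those of recent days using a simple sliding window and keyword overlap. The window size, Jaccard similarity, and threshold are illustrative assumptions; the paper's adaptive time sliding-window mechanism and optimal matching are more sophisticated than this greedy toy version.

```python
# Minimal, assumption-laden sketch of daily topic extraction and multi-day topic
# tracking in the spirit of the framework above: BERTopic extracts the day's topics,
# and a sliding window of recent days is scanned for the best keyword-overlap match.
from bertopic import BERTopic

def extract_daily_topics(messages):
    """Fit BERTopic on one day's chat messages and return {topic_id: keyword set}."""
    model = BERTopic()
    topics, _ = model.fit_transform(messages)
    return {
        t: {word for word, _ in model.get_topic(t)}
        for t in set(topics) if t != -1          # -1 is BERTopic's outlier topic
    }

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def track_topics(history, today, window=3, threshold=0.3):
    """Greedily link today's topics to topics seen in the last `window` days."""
    recent = [kw for day in history[-window:] for kw in day.values()]
    links = {}
    for tid, kws in today.items():
        scores = [(jaccard(kws, prev), i) for i, prev in enumerate(recent)]
        best = max(scores, default=(0.0, None))
        links[tid] = best[1] if best[0] >= threshold else None   # None => new topic
    return links
```

In a full system, each day's linked topics would then be handed to the LLM-driven agents, which decide whether to join the current topic, respond, or propose a new one.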
... The integration of ChatGPT with Application Programming Interfaces (APIs) is expected to revolutionize the landscape of human-computer interaction in various applications [26]. As natural language processing capabilities continue to improve, this synergistic relationship will enable more seamless and efficient communication between humans and machines, as well as across multiple digital platforms. ...
Article
Full-text available
This article reports the results of an experiment conducted with ChatGPT to see how its performance compares to human performance on tests that require specific knowledge and skills, such as university admission tests. We chose a general undergraduate admission test and two tests for admission to biomedical programs: the Scholastic Assessment Test (SAT), the Cambridge BioMedical Admission Test (BMAT), and the Italian Medical School Admission Test (IMSAT). In particular, we looked closely at the difference in performance between ChatGPT-4 and its predecessor, ChatGPT-3.5, to assess its evolution. The performance of ChatGPT-4 showed a significant improvement over ChatGPT-3.5 and, compared to real students, was on average within the top 10% in the SAT test, while the score in the IMSAT test granted admission to the two highest ranked Italian medical schools. In addition to the performance analysis, we provide a qualitative analysis of incorrect answers and a classification of three different types of logical and computational errors made by ChatGPT-4, which reveal important weaknesses of the model. This provides insight into the skills needed to use these models effectively despite their weaknesses, and also suggests possible applications of our analysis in the field of education.
... ChatGPT alters human perception and challenges societal conventions and political and economic structures. It has considerable promise in addressing difficulties in Computational Social Systems (CSS), a field that studies social phenomena at the interface of computer science and sociology [67]. ChatGPT could revolutionize human-machine interaction. ...
Article
Full-text available
Artificial Intelligence and Natural Language Processing technology have demonstrated significant promise across several domains within the medical and healthcare sectors. This technique has numerous uses in the field of healthcare. One of the primary challenges in implementing ChatGPT in healthcare is the requirement for precise and up-to-date data. In the case of the involvement of sensitive medical information, it is imperative to carefully address concerns regarding privacy and security when using GPT in the healthcare sector. This paper outlines ChatGPT and its relevance in the healthcare industry. It discusses the important aspects of ChatGPT's workflow and highlights the usual features of ChatGPT specifically designed for the healthcare domain. The present review uses the ChatGPT model within the research domain to investigate disorders associated with the hepatic system. This review demonstrates the possible use of ChatGPT in supporting researchers and clinicians in analyzing and interpreting liver-related data, thereby improving disease diagnosis, prognosis, and patient care.
... Over the past few years, the widespread adoption of social media platforms [17][18][19] has revolutionized the support of post-disaster relief efforts by offering real-time insights into affected communities. During disasters, these platforms fill up with personal opinions and emotions [20,21], burying critical information from responders and affected communities. ...
Article
Full-text available
Social media has emerged as a critical platform for disseminating real-time information during disasters. However, extracting actionable resource data, such as needs and availability, from this vast and unstructured content remains a significant challenge, leading to delays in identifying and allocating resources, with severe consequences for affected populations. This study addresses this challenge by investigating the potential of label and topic features, combined with text embeddings, to enhance the performance and efficiency of resource identification from social media data. We propose Crisis Resource Finder (CRFinder), a novel framework that leverages label encoding and topic features to extract richer contextual information, uncover hidden patterns, and reveal the true context of disaster resources. CRFinder incorporates novel techniques such as multi-level text-label attention and contrastive text-topic attention to capture semantic and thematic nuances within the textual data. Additionally, our approach employs topic injection and selective contextualization techniques to enhance thematic relevance and focus on critical information, which is pivotal for targeted relief efforts. Extensive experiments demonstrate the significant improvements achieved by CRFinder over existing state-of-the-art methods, with average weighted F1-score gains of 7.12%, 6.44%, and 7.89% on datasets from the Nepal earthquake, Italy earthquake, and Chennai floods, respectively. By providing timely and accurate insights into resource needs and availabilities, CRFinder has the potential to revolutionize disaster response efforts.
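As a rough illustration of what attending from text to label (or topic) representations can look like, the sketch below computes attention weights of a post embedding over a small set of label embeddings and fuses the attended label context with the text vector. This is not the CRFinder architecture; the dimensions, similarity function, and fusion step are assumptions made only for illustration.

```python
# Illustrative-only sketch of the general idea of text-label attention: a text
# embedding attends over label embeddings, and the attended label context is fused
# with the text representation.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def text_label_attention(text_vec, label_mat):
    """text_vec: (d,) embedding of a post; label_mat: (num_labels, d) label embeddings."""
    scores = label_mat @ text_vec                  # similarity of the text to each label
    weights = softmax(scores)                      # attention weights over labels
    label_context = weights @ label_mat            # weighted mix of label embeddings
    return np.concatenate([text_vec, label_context]), weights

# toy usage with random embeddings
rng = np.random.default_rng(0)
fused, w = text_label_attention(rng.normal(size=64), rng.normal(size=(4, 64)))
print(fused.shape, w.round(2))
```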
... 3) The third cornerstone includes foundation models [29], retrieval-augmented generation (RAG) [30], scenarios engineering (SE) [31], and Human-Oriented Operating Systems (HOOS) [32]. Foundation models possess strong capabilities to solve various downstream sensing tasks, and can be recognized as the core of CSI. ...
Article
Full-text available
The transition from cyber-physical-system-based (CPS-based) Industry 4.0 to cyber-physical-social-system-based (CPSS-based) Industry 5.0 brings new requirements and opportunities to current sensing approaches, especially in light of recent progress in large language models (LLMs) and retrieval augmented generation (RAG). Therefore, the advancement of parallel intelligence powered crowdsensing intelligence (CSI) is witnessed, which is currently advancing toward linguistic intelligence. In this paper, we propose a novel sensing paradigm, namely conversational crowdsensing, for Industry 5.0 (especially for social manufacturing). It can alleviate workload and professional requirements of individuals and promote the organization and operation of diverse workforce, thereby facilitating faster response and wider popularization of crowdsensing systems. Specifically, we design the architecture of conversational crowdsensing to effectively organize three types of participants (biological, robotic, and digital) from diverse communities. Through three levels of effective conversation (i.e., inter-human, human-AI, and inter-AI), complex interactions and service functionalities of different workers can be achieved to accomplish various tasks across three sensing phases (i.e., requesting, scheduling, and executing). Moreover, we explore the foundational technologies for realizing conversational crowdsensing, encompassing LLM-based multi-agent systems, scenarios engineering and conversational human-AI cooperation. Finally, we present potential applications of conversational crowdsensing and discuss its implications. We envision that conversations in natural language will become the primary communication channel during crowdsensing process, enabling richer information exchange and cooperative problem-solving among humans, robots, and AI.
... According to the theory of planned behavior, an entrepreneur's entrepreneurial intention is contingent upon his or her attitudes toward entrepreneurship, subjective norms, and perceived behavioral control. ChatGPT-like next-generation AI technology has the potential to intervene in these three aspects, thereby enhancing the entrepreneurial intention of user entrepreneurs and reinforcing their ability to identify entrepreneurial opportunities [24][25][26]. ...
Article
Full-text available
ChatGPT, characterized by its reliance on big data, robust algorithms, and significant computational power, has become a benchmark AI application product, signifying a new breakthrough in AI technology. The emergence of applications based on ChatGPT-like next-generation AI technology has triggered a series of interconnected transformations in human society's ways of thinking, production, living, and governance. However, the academic community has yet to conduct research specifically on innovation and entrepreneurship. Against this backdrop, this study explores the effect of the novel features of ChatGPT-like next-generation AI technology on user entrepreneurs, driving factors, and the entrepreneurial process. The findings reveal the following: (1) User entrepreneurs collect extensive user data through ChatGPT-like AI technology and intelligently analyze it to achieve optimal entrepreneurial judgments and decisions. (2) User entrepreneurs utilize ChatGPT-like AI technology to understand the latent needs of users and to acquire user demand information, such as product shortcomings and appeals. (3) ChatGPT-like AI technology enhances the entrepreneurial intention of user entrepreneurs, stimulates their creative thinking, and expands and deepens their social networks, thereby strengthening their identification with entrepreneurial opportunities. (4) ChatGPT-like AI technology drives and empowers the three-stage evolution of user entrepreneurship: idea generation, prototype development, and commercialization of innovative products. This study not only provides new insights and theoretical foundations for user entrepreneurship research to better explore and leverage the application of ChatGPT-like AI technology in the entrepreneurial process but also offers significant practical implications for encouraging users to actively engage in innovation and entrepreneurship activities, supporting the achievement of sustainable digital entrepreneurship goals.
... where C represents the common sense of LLMs, for example, the understanding of traffic control rules input to LLM via prompts [32]; S represents the current state of the traffic system, and llm represents the function calling of LLMs. ...
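The quoted context describes an LLM call that combines common sense C (for example, traffic-control rules supplied via the prompt) with the current traffic state S. A hedged Python sketch of that pattern is shown below; call_llm() is a hypothetical placeholder rather than a real SDK call, and the prompt format is an assumption for illustration only.

```python
# Hedged sketch of the pattern described above: the LLM receives common sense C via
# the prompt together with the current state S, and its output is used as the decision.
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder; replace with a real LLM backend of your choice."""
    return "hold current signal timing"   # canned response so the sketch runs

def decide(common_sense: str, traffic_state: dict) -> str:
    prompt = (
        "You are assisting a traffic data-pricing agent.\n"
        f"Rules and common sense:\n{common_sense}\n"
        f"Current traffic state: {traffic_state}\n"
        "Return a single recommended action."
    )
    return call_llm(prompt)   # conceptually: action = llm(C, S)

print(decide("yield to emergency vehicles", {"queue_length": 12, "phase": "green"}))
```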
Preprint
Full-text available
In the digital era, data has become a pivotal asset, advancing technologies such as autonomous driving. Despite this, data trading faces challenges like the absence of robust pricing methods and the lack of trustworthy trading mechanisms. To address these challenges, we introduce a traffic-oriented data trading platform named Data on The Move (DTM), integrating traffic simulation, data trading, and Artificial Intelligence (AI) agents. The DTM platform supports evidence-based data value evaluation and AI-based trading mechanisms. Leveraging the common sense capabilities of Large Language Models (LLMs) to assess traffic state and data value, DTM can determine reasonable traffic data pricing through multi-round interaction and simulations. Moreover, DTM provides a pricing method validation by simulating traffic systems, multi-agent interactions, and the heterogeneity and irrational behaviors of individuals in the trading market. Within the DTM platform, entities such as connected vehicles and traffic light controllers could engage in information collecting, data pricing, trading, and decision-making. Simulation results demonstrate that our proposed AI agent-based pricing approach enhances data trading by offering rational prices, as evidenced by the observed improvement in traffic efficiency. This underscores the effectiveness and practical value of DTM, offering new perspectives for the evolution of data markets and smart cities. To the best of our knowledge, this is the first study employing LLMs in data pricing and a pioneering data trading practice in the field of intelligent vehicles and smart cities.
... The prospective integration of chatbots with computer vision technology heralds a new era of possibilities in AI. These include artistic creations like painting [199], intelligent vehicle operation [200]- [202], industrial automation [203], and visually interactive conversational systems [204]. Beyond computer vision, integrating these chatbots with chemical systems using technologies like SMILES [205] could revolutionize how chemical compositions are interpreted and interacted with. ...
Preprint
Full-text available
The past few decades have witnessed an upsurge in data, forming the foundation for data-hungry, learning-based AI technology. Conversational agents, often referred to as AI chatbots, rely heavily on such data to train large language models (LLMs) and generate new content (knowledge) in response to user prompts. With the advent of OpenAI's ChatGPT, LLM-based chatbots have set new standards in the AI community. This paper presents a complete survey of the evolution and deployment of LLM-based chatbots in various sectors. We first summarize the development of foundational chatbots, followed by the evolution of LLMs, and then provide an overview of LLM-based chatbots currently in use and those in the development phase. Recognizing AI chatbots as tools for generating new knowledge, we explore their diverse applications across various industries. We then discuss the open challenges, considering how the data used to train the LLMs and the misuse of the generated knowledge can cause several issues. Finally, we explore the future outlook to augment their efficiency and reliability in numerous applications. By addressing key milestones and the present-day context of LLM-based chatbots, our survey invites readers to delve deeper into this realm, reflecting on how their next generation will reshape conversational AI.
... 11 ChatGPT can create human-like text and retain a conversational style, enabling more realistic natural interactions. 12 These capabilities are made possible by the combination of NLP and a generative AI that depends on deep learning. 13,14 We summarize the main contributions of this work as follows: ...
Article
Conversational Artificial Intelligence (AI) and Natural Language Processing have advanced significantly with the creation of a Generative Pre-trained Transformer (ChatGPT) by OpenAI. ChatGPT uses deep learning techniques like transformer architecture and self-attention mechanisms to replicate human speech and provide coherent and appropriate replies to the situation. The model mainly depends on the patterns discovered in the training data, which might result in incorrect or illogical conclusions. In the context of open-domain chats, we investigate the components, capabilities, constraints, and potential applications of ChatGPT along with future opportunities. We begin by describing the components of ChatGPT followed by a definition of chatbots. We present a new taxonomy to classify them. Our taxonomy includes rule-based chatbots, retrieval-based chatbots, generative chatbots, and hybrid chatbots. Next, we describe the capabilities and constraints of ChatGPT. Finally, we present potential applications of ChatGPT and future research opportunities. The results showed that ChatGPT, a transformer-based chatbot model, utilizes encoders to produce coherent responses.
Keywords: ChatGPT, conversational artificial intelligence, deep learning, generative pre-trained transformer, large language models, natural language processing, self-attention mechanisms
1 INTRODUCTION
"Does a computer think?" is a straightforward but complicated query. 1 The term "Artificial Intelligence" (AI) was first used by researchers in 1955 to describe computers and processes that mimic human intellect and make judgments similarly to discover a solution to this dilemma. 2 Researchers first envisioned that these machines could transform into intelligent forms and introduced the Three Laws of Robotics to set the rules that bots should adhere to and cannot be bypassed. 3 At this time, the term "[ro]bots" was first used in Čapek's (1921) science fiction play. 4,5 In November 2022, OpenAI, a renowned AI research institution, introduced a Generative Pre-trained Transformer (ChatGPT), a state-of-the-art chatbot system. ChatGPT, an instance of Natural Language Processing (NLP), excels in realistic conversation and possesses the ability to handle follow-up queries, acknowledge errors, critique flawed assumptions, and decline inappropriate requests. 6,7 Although ChatGPT's main purpose was to mimic human speech, it is capable of much more than that, such as generating poems, stories, and novels. 8 The development of ChatGPT heralds the future arrival of cutting-edge AI technology that will genuinely test the validity of the Turing Test and show whether machines are capable of thinking like humans. 9 ChatGPT is a revolutionary conversational AI-powered bot that demonstrates the paradigm shift occurring not only in the educational landscape but also in every aspect of our lives. 5 It is unclear if it would ultimately pass the Turing Test, but it is certainly revolutionary. 10 ChatGPT is based on GPT-3, the third version of the OpenAI GPT series, which is more advanced than traditional chatbots in terms of scale (175 billion parameters compared to 1.5 billion in GPT-2), a larger dataset used as training data, more fine-tuning, enhanced capabilities, and more human-like text generation. 11 ChatGPT can create human-like text and retain a conversational style, enabling more realistic natural interactions. 12 These capabilities are made possible by the combination of NLP and a generative AI that depends on deep learning. 13,14
... People use ChatGPT for various things, including writing emails, essays, and software code. [79][80][81] ChatGPT processes user input and produces a response using NLP and ML techniques. From the user input, the technology extracts the crucial components, such as the speaker's purpose, entities, and the overall context of the dialogue. ...
... Scaling ChatGPT has a number of problems that must be addressed in order to ensure efficient and successful operation: • ChatGPT demands extensive computational resources, such as processing power and memory, to create real-time responses. Scaling the system to support a large number of concurrent talks necessitates a strong infrastructure that includes high-performance servers and smart resource management strategies [17]. • ChatGPT is a large language model created by OpenAI that can generate human-like text. ...
Conference Paper
Full-text available
The growing popularity of chatbots has transformed the way users interact with apps and services. ChatGPT, a cutting-edge conversational Artificial Intelligence (AI) model, has emerged as a strong tool capable of providing tailored interactions and creating human-like responses. However, as the user base grows and workloads become more dynamic, ChatGPT’s architectural scalability becomes critical to maintaining responsiveness, minimizing latency, and optimizing resource use. This research paper provides a complete case study of ChatGPT’s architectural scalability, with a focus on its capacity to handle increasing user demands efficiently. Scaling a complex conversational AI model like ChatGPT comes with its own set of hurdles. We go into the complexities of vertical scaling, which includes raising individual instance resources, and horizontal scaling, which involves adding more instances to manage concurrent user interactions. We do performance studies on different cloud platforms Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure and their available services for scalability of ChatGPT. Our research includes vertical and horizontal scaling scenarios, allowing us to analyze each platform’s effectiveness in handling various workloads and user traffic. Our study’s findings provide important insights into the effective scaling of ChatGPT. The study emphasizes the importance of constant monitoring and dynamic scaling in order to react to shifting user demands while maintaining high availability.
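As a back-of-the-envelope companion to the vertical and horizontal scaling scenarios discussed above, the sketch below contrasts the two strategies with a toy capacity model. All numbers are made-up assumptions, not measurements from this case study.

```python
# Back-of-the-envelope sketch (all figures are illustrative assumptions) contrasting
# horizontal scaling (more identical instances) with vertical scaling (one bigger
# instance, with sub-linear gains) for serving concurrent conversations.
from math import ceil

def instances_needed(concurrent_users, per_instance_capacity):
    """Horizontal scaling: how many identical instances cover the load."""
    return ceil(concurrent_users / per_instance_capacity)

def vertical_headroom(base_capacity, resource_multiplier, efficiency=0.8):
    """Vertical scaling: capacity of one larger instance, assuming diminishing returns."""
    return int(base_capacity * resource_multiplier * efficiency)

if __name__ == "__main__":
    load = 12_000   # hypothetical concurrent conversations
    print("horizontal:", instances_needed(load, per_instance_capacity=500), "instances")
    print("vertical  :", vertical_headroom(500, resource_multiplier=8), "users on one larger instance")
```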
... Generative Artificial Intelligence, epitomized by models like ChatGPT, represents a significant advancement in the realm of AI. However, its application to achieve the SDGs is beset with challenges [54][55][56][57][58][59][60]. This section explores the intricate obstacles faced by ChatGPT and akin generative AI models in substantially contributing to the realization of the SDGs. ...
Article
Full-text available
The emergence of generative artificial intelligence (AI) models like ChatGPT has marked the dawn of a new era in human-machine interaction, profoundly impacting various sectors of society. This study investigates the roles and hurdles faced by ChatGPT and its counterparts in advancing the United Nations' Sustainable Development Goals (SDGs). These 17 SDGs form a comprehensive framework addressing diverse global challenges such as poverty, inequality, climate change, and healthcare. Leveraging its natural language processing capabilities, ChatGPT actively promotes education (SDG 4) by providing accessible and personalized learning experiences. Moreover, it aids in information dissemination, supporting goals like zero hunger (SDG 2) and good health and well-being (SDG 3) by distributing vital agricultural and healthcare knowledge. Nevertheless, integrating AI, including ChatGPT, into sustainable development endeavors presents multifaceted challenges. Ethical concerns related to privacy, bias, and misinformation impede progress toward SDGs like gender equality (SDG 5) and reduced inequalities (SDG 10). Technical limitations also hinder AI's potential contributions, posing challenges to goals associated with clean water and sanitation (SDG 6) and affordable and clean energy (SDG 7). Addressing these challenges necessitates global collaboration and policy frameworks that align with the SDGs. This research delves into innovative approaches to effectively harness ChatGPT's capabilities, ensuring alignment with the SDGs. By confronting ethical and technical challenges and fostering collaboration among stakeholders, generative AI can significantly augment the global pursuit of sustainable development, fostering a more inclusive, knowledgeable, and interconnected world. Keywords: ChatGPT, Artificial Intelligence, Generative AI, sustainable development, sustainable development goals, smarts cities.
... This evaluation provides a balanced perspective on the practical considerations of deploying ChatGPT. − Focus on relevance and application: While covering technical aspects, we also emphasize the relevance and application of LLMs in real-world scenarios [26] and the implications of ChatGPT's advancements in dialogue modeling [27]. ...
Article
Full-text available
Conversational AI has seen a growing interest among government, researchers, and industrialists. This comprehensive survey paper provides an in-depth analysis of large language models, specifically focusing on ChatGPT. This paper discusses the architecture, training process, and challenges associated with large language models, including bias, interpretability, and ethics. It explores various applications of ChatGPT and examines future research trends, such as improving model generalization, addressing data scarcity, and integrating multimodal capabilities. This survey also serves as a roadmap for researchers, practitioners, and policymakers, offering valuable insights into the current state and future potential of large language models and ChatGPT.
... Generative AI is pivotal in fraud detection and prevention within the financial industry. These AI systems analyze vast transactional data to identify patterns and anomalies indicative of fraudulent behavior [25,26]. By continuously monitoring transactions and flagging suspicious activities in real-time, financial institutions can take immediate action, safeguarding their assets and customer accounts. ...
Article
Full-text available
Generative Artificial Intelligence (AI), exemplified by ChatGPT and similar models, has rapidly infiltrated the realms of finance and accounting, revolutionizing traditional processes while presenting unique challenges. This paper delves into the multifaceted role and challenges faced by these generative AI technologies in the intricate landscape of financial and accounting sectors. In finance, ChatGPT streamlines customer interactions, offering personalized financial advice, aiding investment strategies, and facilitating real-time market analysis. It handles voluminous data swiftly, enhancing algorithmic trading, risk management, and fraud detection. In accounting, these AI models automate data entry, categorization, and report generation, reducing human errors and operational costs. They also assist in compliance tasks, ensuring adherence to evolving regulations, and enhance forensic accounting techniques. However, the integration of ChatGPT and similar AI in finance and accounting is not without challenges. Ethical dilemmas arise concerning data privacy, security, and biased decision-making algorithms. Ensuring these AI systems comply with industry standards and regulations while maintaining the integrity and confidentiality of sensitive financial data remains a significant hurdle. Moreover, there are concerns about the accountability of AI-driven financial decisions, requiring a delicate balance between human expertise and machine intelligence. Additionally, the constant evolution of financial markets demands adaptability and continuous learning from these AI systems, necessitating ongoing research and development efforts. This paper critically analyzes the evolving role of ChatGPT and similar generative AI in finance and accounting, shedding light on the transformative potential and the hurdles that need to be surmounted for these technologies to truly revolutionize the financial landscape. Keywords: ChatGPT, Generative Artificial Intelligence, Artificial Intelligence, Accounting, Finance, Education
... The versatility of ChatGPT is impressive because it can produce a variety of outputs, including essays, poems, prompts, contracts, lecture notes, and computer code. Even though its fluidity is frequently impressive, its accuracy and originality are only sometimes guaranteed [15]. A "large language model" that predicts words based on enormous amounts of data it has been trained on powers ChatGPT's technology. ...
Article
Full-text available
Transferring credits between universities worldwide is challenging and time consuming, and usually follows strict and complicated administrative guidelines. Students may encounter severe difficulties during such a process, primarily if they rely on those credits to transfer to another school or graduate on time. Universities may also find it challenging to compare massive online open courses (MOOCs) to courses offered in their traditional programs due to the need for more uniformity in course content and academic rigor. Administrators and professors may struggle to determine whether students have acquired the knowledge and skills necessary for credit transfer through the MOOC. The power of ChatGPT (Generative Pre-trained Transformer) may be employed in identifying and matching courses to MOOCs. At the same time, blockchain technology may provide a speedy and smooth process for credit transfer. A pilot structure that enables students to sign up for an MOOC and determine course equivalency without the challenges typically connected with credit-transfer issues or the recognition of courses taught outside their university is presented, piloted, and tested, significantly impacting the entire credit-transfer process.
... Such a system will present a unified, human-oriented interface, aligning with people's behavioral habits and preferences, and use human feedback to improve its underlying agent system. Some researchers suggest that such a system could be called a Human-oriented Operating System (H2OS) [134]. Like the operating system for computer software and hardware, H2OS is a complex system composed of multiple components that can virtualize transportation resources, parallelize transportation requests, and apply transportation planning to real-world operation. ...
Article
In 2014, IEEE Intelligent Transportation Systems Society established a Technical Committee on Transportation 5.0 with the mission of promoting and transforming the deployment of advanced and innovative technologies, especially Artificial Intelligence in transportation. This paper briefly summarizes our main research and findings over the last decade. Transportation Foundation Models, Transportation Scenarios Engineering, and Transportation Operating Systems have been identified as the main directions for the research and development of next-generation intelligent transportation systems.
Chapter
AI tools of this kind are designed to mimic human-like responses in natural language conversations. The tool uses deep learning techniques, trained on diverse internet texts, to generate coherent replies across various prompts. Its functions include providing information, engaging in conversations, assisting with tasks, and offering creative suggestions. At its core, the AI tool relies on a transformer neural network, excelling at capturing long-range text dependencies, which makes it well suited to language-related tasks. With a whopping 175 billion parameters, it stands as one of the most substantial language models developed. Its extensive training on internet text gives it broad language understanding and general knowledge. However, it is crucial to note that it can occasionally produce incorrect or nonsensical answers. Users must critically assess its responses and verify information against reliable sources when needed. This work presents an analysis of a sector-specific AI tool's responses to users' questionnaires. The responses from the AI tool were cross-verified with human experts for accuracy and validation. The AI tool's performance has undergone thorough evaluation using specific parameters. This study aims to benefit the research community and ordinary users by offering a comprehensive understanding of AI-generated responses and their patterns. This work aims to promote responsible and informed usage of this advanced language model.
Article
Full-text available
Pre-trained large language models (PLMs) have the potential to support urban science research through content creation, information extraction, assisted programming, text classification, and other technical advances. In this research, we explored the opportunities, challenges, and prospects of PLMs in urban science research. Specifically, we discussed potential applications of PLMs to urban institution, urban space, urban information, and citizen behaviors research through seven examples using ChatGPT. We also examined the challenges of PLMs in urban science research from both technical and social perspectives. The prospects of the application of PLMs in urban science research were then proposed. We found that PLMs can effectively aid in understanding complex concepts in urban science, facilitate urban spatial form identification, assist in disaster monitoring, sense public sentiment and so on. They have expanded the breadth of urban research in terms of content, increased the depth and efficiency of the application of multi-source big data in urban research, and enhanced the interaction between urban research and other disciplines. At the same time, however, the applications of PLMs in urban science research face evident threats, such as technical limitations, security, privacy, and social bias. The development of fundamental models based on domain knowledge and human-AI collaboration may help improve PLMs to support urban science research in future.
Article
Full-text available
Recent advances in human-in-the-loop or human-centric research have sparked a new wave of scientific exploration. These studies have enhanced the understanding of complex social systems and contributed to more sustainable artificial intelligence (AI) ecosystems. However, the incorporation of human or social factors increases system complexity, making traditional approaches inadequate for managing these complex systems and necessitating a novel operational paradigm. Over decades of work, a mature and comprehensive theory of parallel intelligence (PI) has been established. Rooted in cyber-physical-social systems (CPSS), PI adapts flexibly to various situations within complex systems through the ACP framework (Artificial systems, Computational experiments, and Parallel execution), ensuring system reliability. This paper provides a detailed review and a novel perspective on PI, beginning with the historical and philosophical origins of CPSS and proceeding to present both the fundamental framework and technological implementations of PI. PI-based Industry 5.0 is highlighted, where three pillars are adopted to help realize the supposed vision. Additionally, the paper outlines applications of PI in multiple fields, such as transportation, healthcare, manufacturing, and agriculture, and discusses the opportunities and challenges for imaginative intelligence. The continuous exploration of PI is expected to eventually facilitate the realization of “6S”-based (safe, secure, sustainable, sensitive, service, and smart) parallel ecosystems.
Article
The rapid evolution of the web has led to an exponential growth in content. Recommender systems play a crucial role in human–computer interaction (HCI) by tailoring content based on individual preferences. Despite their importance, challenges persist in balancing recommendation accuracy with user satisfaction, addressing biases while preserving user privacy, and solving cold-start problems in cross-domain situations. This research argues that addressing these issues is not solely the recommender systems' responsibility, and a human-centered approach is vital. We introduce the recommender system, assistant, and human (RAH) framework, an innovative solution with large language model (LLM)-based agents built around Perceive, Learn, Act, Critic, and Reflect modules, emphasizing alignment with user personalities. The framework utilizes the learn-act-critic loop and a reflection mechanism for improving user alignment. Using real-world data, our experiments demonstrate the RAH framework's efficacy in various recommendation domains, from reducing human burden to mitigating biases and enhancing user control. Notably, our contributions provide a human-centered recommendation framework that partners effectively with various recommendation models.
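To make the learn-act-critic loop with reflection easier to picture, here is a schematic Python sketch of such a cycle. It is not the RAH implementation; the component functions are hypothetical stand-ins that only illustrate the control flow.

```python
# Schematic sketch only (not the RAH implementation): a learn-act-critic loop with a
# reflection step, showing how user feedback can drive successive recommendations.
def learn(profile, feedback):
    """Update the assistant's model of the user from the latest feedback."""
    profile = dict(profile)
    profile.setdefault("signals", []).append(feedback)
    return profile

def act(profile, candidates):
    """Pick a recommendation given the current profile (toy rotation policy)."""
    return candidates[len(profile.get("signals", [])) % len(candidates)]

def critic(action, feedback):
    """Judge the action using the user's reaction (here, simply whether it was liked)."""
    return feedback.get("liked", False)

def reflect(history):
    """Summarize repeated mismatches so later steps can adjust."""
    misses = [h for h in history if not h["ok"]]
    return f"{len(misses)} mismatches so far"

profile, history = {}, []
for feedback in [{"liked": False}, {"liked": True}]:
    item = act(profile, candidates=["article A", "article B"])
    ok = critic(item, feedback)
    history.append({"item": item, "ok": ok})
    profile = learn(profile, feedback)
    print(item, ok, reflect(history))
```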
Article
Recent advancements in natural language processing (NLP) have catalyzed the development of models capable of generating coherent and contextually relevant responses. Such models are applied across a diverse array of applications, including but not limited to chatbots, expert systems, question-and-answer robots, and language translation systems. Large Language Models (LLMs), exemplified by OpenAI's Generative Pretrained Transformer (GPT), have significantly transformed the NLP landscape. They have introduced unparalleled abilities in generating text that is not only contextually appropriate but also semantically rich. This evolution underscores a pivotal shift towards more sophisticated and intuitive language understanding and generation capabilities within the field. Models based on GPT are developed through extensive training on vast datasets, enabling them to grasp patterns akin to human writing styles and deliver insightful responses to intricate questions. These models excel in condensing text, extending incomplete passages, crafting imaginative narratives, and emulating conversational exchanges. However, GPT LLMs are not without their challenges, including ethical dilemmas and the propensity for disseminating misinformation. Additionally, the deployment of these models at a practical scale necessitates a substantial investment in training and computational resources, leading to concerns regarding their sustainability. ChatGPT, a variant rooted in transformer-based architectures, leverages a self-attention mechanism for data sequences and a reinforcement learning from human feedback (RLHF) system. This enables the model to grasp long-range dependencies, facilitating the generation of contextually appropriate outputs. Despite ChatGPT marking a significant leap forward in NLP technology, there remains a lack of comprehensive discourse on its architecture, efficacy, and inherent constraints. Therefore, this survey aims to elucidate the ChatGPT model, offering an in-depth exploration of its foundational structure and operational efficacy. We meticulously examine ChatGPT's architecture and training methodology, alongside a critical analysis of its capabilities in language generation. Our investigation reveals ChatGPT's remarkable aptitude for producing text indistinguishable from human writing, whilst also acknowledging its limitations and susceptibilities to bias. This analysis is intended to provide a clearer understanding of ChatGPT, fostering a nuanced appreciation of its contributions and challenges within the broader NLP field. We also explore the ethical and societal implications of this technology, and discuss the future of NLP and AI. Our study provides valuable insights into the inner workings of ChatGPT, and helps to shed light on the potential of LLMs for shaping the future of technology and society. An approach such as Eco-GPT, with a three-level cascade (GPT-J, J1-G, GPT-4), achieves 73% and 60% cost savings on the CaseHold and CoQA datasets, respectively, while outperforming GPT-4.
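The closing sentence refers to a three-level model cascade that escalates to a more capable (and more expensive) model only when needed. The sketch below shows the general cascade pattern in Python; the tier names, confidence scores, and thresholds are placeholders, not the reported Eco-GPT configuration.

```python
# Generic sketch of the cascade idea mentioned above: try cheaper models first and
# escalate only when the answer looks unreliable. All tiers and thresholds below are
# illustrative placeholders.
from typing import Callable, List, Tuple

Model = Callable[[str], Tuple[str, float]]   # returns (answer, confidence in [0, 1])

def cascade(prompt: str, tiers: List[Tuple[str, Model, float]]) -> Tuple[str, str]:
    """Try tiers in order of cost; accept the first answer whose confidence clears
    that tier's threshold; fall back to the last (most capable) tier otherwise."""
    answer = ""
    for name, model, threshold in tiers:
        answer, confidence = model(prompt)
        if confidence >= threshold:
            return name, answer
    return tiers[-1][0], answer              # the most expensive tier is the backstop

# toy stand-ins for three tiers of increasing cost and capability
cheap  = lambda p: ("draft answer", 0.55)
medium = lambda p: ("better answer", 0.72)
strong = lambda p: ("best answer", 0.95)

print(cascade("Which clause controls?", [("tier-1", cheap, 0.8),
                                         ("tier-2", medium, 0.8),
                                         ("tier-3", strong, 0.0)]))
```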
Article
This letter is a brief summary of a series of IEEE TIV's decentralized and hybrid workshops (DHWs) on Federated Intelligence for Intelligent Vehicles. The discussed results are: 1) Different scales of large models (LMs) can be federated and deployed on IVs, and three types of federated collaboration between large and small models can be adopted for IVs. 2) Federated fine-tuning of LMs is beneficial for IVs data security. 3) The sustainability of IVs can be improved through optimizing existing models and continuous learning using federated intelligence. 4) LM-enhanced knowledge can make IVs smarter.
Article
Full-text available
In 2022, OpenAI's unveiling of generative AI Large Language Models (LLMs)- ChatGPT, heralded a significant leap forward in human-machine interaction through cutting-edge AI technologies. With its surging popularity, scholars across various fields have begun to delve into the myriad applications of ChatGPT. While existing literature reviews on LLMs like ChatGPT are available, there is a notable absence of systematic literature reviews (SLRs) and bibliometric analyses assessing the research's multidisciplinary and geographical breadth. This study aims to bridge this gap by synthesising and evaluating how ChatGPT has been integrated into diverse research areas, focussing on its scope and the geographical distribution of studies. Through a systematic review of scholarly articles, we chart the global utilisation of ChatGPT across various scientific domains, exploring its contribution to advancing research paradigms and its adoption trends among different disciplines. Our findings reveal a widespread endorsement of ChatGPT across multiple fields, with significant implementations in healthcare (38.6%), computer science/IT (18.6%), and education/research (17.3%). Moreover, our demographic analysis underscores ChatGPT's global reach and accessibility, indicating participation from 80 unique countries in ChatGPT-related research, with the most frequent countries keyword occurrence, USA (719), China (181), and India (157) leading in contributions. Additionally, our study highlights the leading roles of institutions such as King Saud University, the All India Institute of Medical Sciences, and Taipei Medical University in pioneering ChatGPT research in our dataset. This research not only sheds light on the vast opportunities and challenges posed by ChatGPT in scholarly pursuits but also acts as a pivotal resource for future inquiries. It emphasises that the generative AI (LLM) role is revolutionising every field. The insights provided in this paper are particularly valuable for academics, researchers, and practitioners across various disciplines, as well as policymakers looking to grasp the extensive reach and impact of generative AI technologies like ChatGPT in the global research community.
Article
With the advent of Web 3.0, the evolution of cities and societies is increasingly oriented toward virtual spaces. This shift signifies an inevitable trend where the integration of virtual and real elements becomes vital to their development. Such a transition will bring huge changes to the organizational structure and methods, development modes, as well as operating mechanisms of cities and societies. In the virtual-real integrated cities and societies, traditional economic and management principles and models are no longer applicable. Consequently, it is crucial to explore new economic and management models tailored to Web 3.0. This article integrates virtual cities/societies with actual cities/societies, and proposes the innovative paradigm of MetaCities/MetaSocieties. Based on parallel intelligence theory and metaverse technologies, the research framework of MetaCities/MetaSocieties is established, and its main participants and operating mode are discussed. In addition, in view of the new economic and management issues faced in MetaCities/MetaSocieties, the innovative paradigms of MetaEconomics and MetaManagement are proposed, and the operational logic and models of MetaEconomics, as well as the MetaManagement big models and management-oriented operating systems, are proposed. This work aims to offer valuable insights for the evolution of cities and societies in the upcoming intelligent era, and inspire the development of new MetaEconomics and MetaManagement models in MetaCities/MetaSocieties.
Article
Human beings are endowed with a natural curiosity and creativity, which motivate them to learn new things from their interactions with the world. Human learning has involved exploration and experimentation, which have allowed humans to discover new facts and principles, and to invent new artifacts and systems. Human learning has also affected human evolution, both genetically and culturally, as humans have adjusted to different situations and demands in their environments. However, in the current world, human learning is largely facilitated by artificial intelligence (AI) tools, which are programs that can perform tasks that usually require human intelligence, such as comprehension, reasoning, problem-solving, and communication. AI tools can support humans in their learning endeavors, by giving them access to enormous amounts of information, and by delivering them customized and interactive assistance and feedback. AI tools can also amplify human creativity and innovation, by generating novel and diverse content, such as code, poems, essays, songs, and more. But what are the effects of this dependence on AI tools for human learning and evolution? Does it boost or diminish human curiosity and creativity? Does it enable or limit human autonomy and agency? Does it foster or hamper human diversity and collaboration? These are some of the questions that this topic will explore, by evaluating the pros and cons of using AI tools for human learning, and the ethical and social issues that arise from this phenomenon. [28] Today when we look around us we observe the advancement in technology has brought a lot of comfort to our lives in terms of traveling, education, or enjoying content virtually. [29] Talking about our basic requirements, technology has become so friendly that we can learn everything through E-Learning. Everyone only wondered about having an AI which will help in making our lives easy. The latest concept in terms of AI which is widely received and accepted by the people everywhere around the Globe is the Open AI that is Chat Gpt, Gemini, Copilot. All of these AI helps us in decision making or cutting our chase short for finding solutions for either lengthy solutions like writing a summary related to something or Questions which are easy to solve but difficult to look for solutions. About a quarter (27%) of Americans say they interact with artificial intelligence almost constantly or several times a day. Artificial intelligence (AI) is used in a variety of ways, including online product recommendations, facial recognition software and chatbots. One in six (17%) adults reported that they can often or always recognise when they are using AI, one in two (50%) adults reported that they can some of the time or occasionally recognise when they are using AI, one in three (33%) adults reported that they can hardly ever or never recognise when they are using AI. [26] In this project we are testing the dependence upon the recently emerged Open AI tools such as ChatGPT, Google Bard, Bing. Our motive is to find out whether people are using these powerful tools to help in their academics or other tasks only or do they take advice from these tools in their financial planning as well.
Article
Welcome to the second issue of IEEE Transactions on Computational Social Systems (TCSS) of 2024. This issue showcases an impressive array of 104 regular papers alongside our Special Issue on Big Data and Computational Social Intelligence for Guaranteed Financial Security, highlighting cutting-edge research aimed at harnessing big data and computational techniques to fortify financial security amidst the digital finance evolution. With a focus on addressing the intricate challenges of financial big data, enhancing the efficacy of artificial intelligence, and covering critical topics from data mining to digital currencies, this issue underscores the vital role of cross-disciplinary efforts in mitigating financial security risks.
Article
BIG models or foundation models are rapidly emerging as a key force in advancing intelligent societies [1]–[3]. Their significance stems not only from their exceptional ability to process complex data and simulate advanced cognitive functions, but also from their potential to drive innovation across various industries. In terms of value creation, commercial application is undoubtedly an effective approach, with enterprise management serving as an ideal scenario. Looking to the future, it is increasingly evident that these large models will not only transform the operational facets of businesses but also revolutionize the strategic decision-making processes at their core. The introduction of large models into enterprise management marks a new era in which data-driven insights and machine intelligence become integral to corporate governance.
Chapter
NLP has witnessed remarkable improvements in applications, from voice assistants to sentiment analysis and language translation. However, in the process, a huge amount of personal data flows through NLP systems. Over time, a variety of techniques and frameworks have been developed to ensure that NLP systems do not ignore user privacy. This chapter highlights the significance of privacy-enhancing technologies (differential privacy, secure multi-party computation, homomorphic encryption, federated learning, secure data aggregation, tokenization, and anonymization) in protecting user privacy within NLP systems. Differential privacy introduces noise into query responses or statistical results to protect individual user privacy. Homomorphic encryption allows computations on encrypted data to maintain privacy. Federated learning facilitates collaborative model training without sharing data. Tokenization and anonymization preserve anonymity by replacing personal information with non-identifiable data. This chapter explores these methodologies and techniques for user privacy in NLP systems.
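As an illustration of the differential-privacy idea summarized above (noise added to query responses), here is a minimal sketch of the Laplace mechanism for a count query with sensitivity 1; the function name, the epsilon value, and the example count are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def laplace_count(true_count: float, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a differentially private count by adding Laplace noise.

    The noise scale is sensitivity / epsilon: a smaller epsilon means
    stronger privacy and a noisier released value.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical example: privately report how many documents mention a term.
exact = 128
private = laplace_count(exact, epsilon=0.5)
print(f"exact count = {exact}, private release ~ {private:.1f}")
```

The single parameter epsilon controls the privacy/utility trade-off: tightening privacy (smaller epsilon) necessarily makes the released statistic noisier.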
Article
This study aims to explore the determinants of user behaviors toward an artificial intelligence (AI) tool, ChatGPT, focusing on university students and office workers. In this study, we present a comprehensive model to understand user engagement with AI tools, specifically focusing on ChatGPT. The model is grounded in four primary stages, each containing distinct variables: 1) fundamental (comprising perceived intelligence and system quality), 2) knowledge and service (covering knowledge acquisition, application, personalization, and trust), 3) gain with user tendency (encompassing utilitarian benefits, individual impact, satisfaction, and personal innovativeness), and 4) behavior (including behavioral intention, continued usage, and word-of-mouth (WOM)). A total of 13 variables have been examined. A survey was conducted on a sample of 645 university students and office workers, and the collected data were analyzed using partial least squares structural equation modeling (PLS-SEM). The results reveal significant associations of perceived intelligence with knowledge management and personalization. System quality also significantly impacts knowledge management and personalization. Knowledge acquisition and application were found to significantly affect utilitarian benefits and individual impact, but not satisfaction. Personalization significantly influenced utilitarian benefits, individual impact, and satisfaction. Trust significantly impacts behavioral intention. Utilitarian benefits and individual impact had a positive effect on satisfaction, behavioral intention, and WOM. Personal innovativeness was significantly associated with behavioral intention. Behavioral intention significantly affected usage and WOM, while usage was not significantly associated with WOM. Among control variables, only age affected behavioral intention. This study also confirmed the indirect effects and conducted a multi-group analysis (MGA) between students and workers. MGA results show that there are significant differences in three relationships (personalization-satisfaction, utilitarian benefits-WOM, and behavioral intention-WOM) between students and workers. This research extends the understanding of AI tool usage and provides theoretical and practical insights for researchers, practitioners, and policymakers in AI and related fields.
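The survey data and full structural model are not reproduced here; purely as a simplified stand-in for the PLS-SEM analysis described above, the sketch below builds composite scores from hypothetical Likert items and bootstraps a single structural path (trust toward behavioral intention) with ordinary least squares. All data, variable names, and parameters are invented for illustration, and real PLS-SEM estimates indicator weights rather than using simple item means.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 645  # same sample size as the survey, but the data below are synthetic

# Hypothetical 7-point Likert indicator items (3 items per construct).
trust_items = rng.integers(1, 8, size=(n, 3)).astype(float)
intention_items = np.clip(
    trust_items.mean(axis=1, keepdims=True) + rng.normal(0.0, 1.5, size=(n, 3)), 1, 7)

# Composite scores by simple item averaging (PLS-SEM would estimate weights instead).
trust = trust_items.mean(axis=1).reshape(-1, 1)
intention = intention_items.mean(axis=1)

# Bootstrap the trust -> behavioral intention path coefficient.
coefs = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)
    coefs.append(LinearRegression().fit(trust[idx], intention[idx]).coef_[0])
low, high = np.percentile(coefs, [2.5, 97.5])
print(f"path estimate ~ {np.mean(coefs):.3f}, 95% bootstrap CI [{low:.3f}, {high:.3f}]")
```

Bootstrapping the path coefficient mirrors how PLS-SEM assesses the significance of structural relationships, even though the estimation method here is deliberately simplified.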
Article
Full-text available
Artificial intelligence makes it easier to find precise and accurate information and even to solve problems with complex models. One AI-based breakthrough is ChatGPT by OpenAI in 2020, followed by the latest version in 2023, namely GPT-3. Since then, several similar mobile AI technologies have begun to appear, one of which is AicoGPT. However, the performance of these similar applications cannot yet be relied upon, so user responses still need to be analyzed to see whether they are just as impressive or not. Motivated by this problem, this study analyzes 1443 user reviews of the AicoGPT application on the Google Play Store using sentiment analysis with TF-IDF and a comparison of LR and SVM classifiers. Of the two experiments, the SVM algorithm produced the best accuracy, at 92%, while LR produced an accuracy of 89%. From this research, it can be briefly concluded that the TF-IDF method with SVM classification is suitable for sentiment analysis of the studied dataset.
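To make the pipeline described above concrete, here is a minimal sketch of TF-IDF feature extraction with an SVM versus logistic regression comparison in scikit-learn; the handful of review strings are invented placeholders and do not come from the study's 1443-review dataset, so the accuracies printed here say nothing about the reported 92% and 89%.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Invented placeholder reviews (Indonesian text with translations); not the study's data.
reviews = [
    ("aplikasi sangat membantu", "positive"),        # "the app is very helpful"
    ("jawaban cepat dan akurat", "positive"),        # "fast and accurate answers"
    ("sering error dan lambat", "negative"),         # "often crashes and is slow"
    ("tidak bisa login sama sekali", "negative"),    # "cannot log in at all"
] * 50
texts = [text for text, _ in reviews]
labels = [label for _, label in reviews]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42, stratify=labels)

# TF-IDF weighting over unigrams and bigrams, then two linear classifiers.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X_train_tfidf = vectorizer.fit_transform(X_train)
X_test_tfidf = vectorizer.transform(X_test)

for name, clf in [("SVM", LinearSVC()), ("LR", LogisticRegression(max_iter=1000))]:
    clf.fit(X_train_tfidf, y_train)
    print(f"{name} accuracy: {accuracy_score(y_test, clf.predict(X_test_tfidf)):.2f}")
```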
Article
Full-text available
Empowered by blockchain and Web3 technologies, decentralized autonomous organizations (DAOs) are able to redefine resources, production relations, and organizational structures in a revolutionary manner. This article aims to reanalyze DAOs from the perspectives of organization and operation, and provide a more precise definition of DAOs as DAOs and Operations. Based on this, the fundamental principles and requirements of DAOs are explained, while the infrastructure based on cyber–physical–social system (CPSS) and parallel intelligence, as well as the supporting technologies, such as digital twins, metaverse, and Web3, are discussed. Besides, a five-layer intelligent architecture is presented, and the closed-loop equation and new function-oriented intelligent algorithms are also proposed. Moreover, the governance mechanisms from the individual, organizational and social perspectives are discussed, and the incentive mechanisms for the human, robot, and digital human are analyzed. This article can be regarded as a stepping stone for further research and developments of DAOs.
Article
Full-text available
In the future, management in smart societies will revolve around knowledge workers and the works they produce. This article is committed to exploring new management frameworks, models, paradigms, and solutions for organizing, managing, and measuring knowledge works. First, the parallel management framework is presented, which would allow the virtual-real interactions of humans in social space, robots in physical space, and digital humans in cyberspace to realize descriptive, predictive, and prescriptive intelligence for management. Then, the management foundation models are proposed by fusing scenarios engineering with artificial intelligence foundation models and cyber-physical-social systems. Moreover, the new management paradigm driven by decentralized autonomous organizations and operations is formulated for the advancement of smart organizations and intelligent operations. On this basis, the management operating systems that highlight the features of simple intelligence, provable security, flexible scalability, and ecological harmony are finally put forward as a new management solution.
Article
Full-text available
This article discusses the impact and significance of the autonomous science movement and the role and potential uses of intelligent technology in DAO-based decentralized science (DeSci) organizations and operations. What is DeSci? How does it relate to the science of team science? What are its potential contributions to multidisciplinary, interdisciplinary, and/or transdisciplinary studies? Does it have any correspondence to the social movement organizations in traditional social sciences or the cyber movement organizations in the new digital age? In particular, the issues DeSci raises for current professional communities, such as IEEE and its societies, conferences, and publications, are addressed, and the effort toward a framework and process for DAO-based DeSci for free, fair, and responsibility-sensitive science is reviewed.
Article
Full-text available
Data sharing, research ethics, and incentives must improve
Article
Full-text available
In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
Article
Full-text available
Social computing represents a new computing paradigm and an interdisciplinary research and application field. Undoubtedly, it will strongly influence system and software development in the years to come. We expect social computing's scope to continue expanding and its applications to multiply. From both theoretical and technological perspectives, social computing moves beyond social information processing toward emphasizing social intelligence. As we have discussed, the move from social informatics to social intelligence is achieved by modeling and analyzing social behavior, by capturing human social dynamics, and by creating artificial social agents and generating and managing actionable social knowledge.
Article
The well-known ancient Chinese philosopher Lao Tzu (老子) or Laozi (6th∼4th century BC during the Spring and Autumn period) started his classic Tao Teh Ching 《道德经》 or Dao De Jing (see Fig. 1) with six Chinese characters: “道(Dao)可(Ke)道(Dao)非(Fei)常(Chang)道(Dao)”, which has been traditionally interpreted as “道可道,非常道” or “The Dao that can be spoken is not the eternal Dao”. However, modern archaeological discoveries in 1973 and 1993 at Changsha, Hunan, and Jingmen, Hubei, China, have respectively indicated a new, yet more natural and simple interpretation: “道,可道,非常道”, or “The Dao, The Speakable Dao, The Eternal Dao”.
Article
Researchers are excited but apprehensive about the latest advances in artificial intelligence.
Article
First of all, I would like to take this opportunity to express my sincere and deep thanks to our Editor-in-Chief, Professor MengChu Zhou, who took over my position after I was drafted to rejuvenate IEEE Transactions on Computational Social Systems in 2017. During the past five years, MengChu's professional leadership and dedication have transformed IEEE/CAA Journal of Automatica Sinica (JAS) from its infancy into a young, high-impact publication that is full of vitality and actively engaged by a group of talented and charged associate EiCs and editors, as clearly demonstrated in MengChu's farewell editorial [1]. I am very glad that Professor Qing-Long Han from Australia, a world-class and influential scientist in AI, control, automation, and intelligent science and technology, as well as a staunch supporter and great leader of this journal from its beginning, will take over the EiC torch from MengChu next year. I am extremely confident that our journal will reach a new high in service and quality under his new leadership.
Article
Crypto management is proposed to tackle the management decision-making challenges under data asymmetry and trust asymmetry that cannot be solved merely by technical means. It emphasizes a novel management model for the real-time generation of reliable, trustworthy, and usable management decisions based on blockchain and blockchain-driven technologies. First, the framework model of crypto management is introduced with detailed descriptions of each technique: blockchain is the underlying technology, the decentralized autonomous organization (DAO) is the management structure, federated data is the decision basis, the smart contract is the decision method, and the non-fungible token (NFT) is the main decision incentive. Then, its collaboration mechanisms between on-blockchain DAO and off-blockchain organization, as well as between intra-organization and extra-organization nodes, are discussed. Moreover, the potential applications of crypto management are addressed, and a case of task-oriented performance management is given to illustrate how crypto management generates real-time management decisions. Finally, future research directions in this emerging area are pointed out.
Article
Briefing: An investigation and outline of MetaControl and DeControl in Metaverses for control intelligence and knowledge automation are presented. Prescriptive control with prescriptive knowledge and parallel philosophy is proposed as the starting point for the new control philosophy and technology, especially for computational control of metasystems in cyber-physical-social systems. We argue that circular causality, the generalized feedback mechanism for complex and purposive systems, should be adapted as the fundamental principle for control and management of metasystems with metacomplexity in metaverses. Particularly, an interdisciplinary approach is suggested for MetaControl and DeControl as a new form of intelligent control based on five control metaverses: Meta Verses, MultiVerses, InterVerses, TransVerse, and Deep Verses.
Article
Decentralized science (DeSci) is a hot topic emerging with the development of Web3 or Web 3.0 and decentralized autonomous organizations (DAOs) and operations. DeSci fundamentally differs from centralized science (CeSci) and the Open Science (OS) movement, which are built in a centralized way with centralized protocols. It changes the basic structure and legacy norms of current scientific systems by reshaping the cooperation mode, value system, and incentive mechanism. As such, it can provide a viable path for solving bottleneck problems in the development of science, such as oligarchy and silos, and make science more fair, free, responsible, and sensitive. However, DeSci itself still faces many challenges, including scaling, balancing the quality of participants, suboptimal system loops, the lack of accountability mechanisms, and so on. Taking these into consideration, this article presents a systematic introduction to DeSci, proposes a novel reference model with a six-layer architecture, addresses potential applications, and outlines key research directions in this emerging field. This article is committed to providing helpful guidance and reference for future research efforts on DeSci.
Article
The rapid development of artificial intelligence (AI) has produced a variety of state-of-the-art models and methods that rely on network architectures and feature engineering. However, some AI approaches achieve highly accurate results only at the expense of interpretability and reliability. These problems can easily lead to bad experiences, lower trust levels, and systematic or even catastrophic risks. This article introduces the theoretical framework of scenarios engineering for building trustworthy AI techniques. We propose six key dimensions, including intelligence and index, calibration and certification, and verification and validation, to achieve more robust and trustworthy AI, and we address issues and directions for future research and applications along this line.
Article
Welcome to the third issue of IEEE Transactions on Computational Social Systems (TCSS) of 2022. According to the latest update of CiteScoreTracker from Elsevier Scopus released on April 6, 2022, the CiteScore of IEEE TCSS has reached a historical high of 8.4. Many thanks to all for your great effort and support.
Article
This article outlines a journey toward the TRUE DAO System of Intelligent Systems based on parallel intelligence, with the help of digital twins, metaverses, Web 3.0, and blockchain technology. It argues for a HANOI approach, i.e., integrated Human, Artificial, Natural, and Organizational Intelligence, for achieving knowledge automation for sustainable and smart societies.
Article
Welcome to the first issue of the IEEE Transactions on Computational Social Systems (TCSS) of 2022. We would like to take this opportunity to express our sincere thanks to our associate editors, reviewers, authors, and readers for your great support and effort devoted to IEEE TCSS. Happy New Year to you all, and cheers to health, happiness, and high productivity in 2022!
Article
Welcome to the last issue of IEEE Transactions on Computational Social Systems (TCSS) this year. This is also the last time I am serving as the Editor-in-Chief of this great journal, and I would like to take this opportunity to thank you all for your great help and support during the last three and a half years. A special thank-you must go to my “Executive Editorial Task Force”: Professor Yong Yuan of People’s University of China; Drs. Rui Qin, Xiao Wang, and Xueliang Zhao of The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences; and Dr. Juanjuan Li of Beijing Institute of Technology, for their hard work and dedication during my term as EiC. We have rejuvenated TCSS in a very short period and maintained its healthy growth, and this is a great team I will remember and be proud of.
Article
Welcome to the new issue of IEEE Transactions on Computational Social Systems (TCSS). First, I would like to share the news of my resignation from the position of Editor-in-Chief (EiC) due to my new position as the Vice President (VP) for Human Machine Systems of the IEEE Systems, Man, and Cybernetics Society (SMCS). SMCS requires its Transactions EiCs to have no membership in its Executive Committee or Board of Governors (BoG). In 2017, I offered my resignation from the BoG in order to become the EiC of TCSS, and now I need to resign my EiC position since I was elected as SMCS’s VP at the end of 2019. Thank you all for this extremely valuable three-and-a-half-year experience in a field I truly love and enjoy, but it is time to let new leadership lead TCSS to a new level of quality and excellence. A call for a new EiC was announced by the VP for Publications at the end of May, and I am happy to inform you that, by the deadline of June 10, seven scholars from North America, Europe, and Asia have been recommended for the EiC position. A decision will be made by the Selection Committee in July, and I will announce the result in the next issue.
Article
After decades of debate on the feasibility of open access (OA) to scientific publications, we may be nearing a tipping point. A number of recent developments, such as Plan S, suggest that OA upon publication could become the default in the sciences within the next several years. Despite uncertainty about the long-term sustainability of OA models, many publishers who had been reluctant to abandon the subscription business model are showing openness to OA (1). Although more OA can mean more immediate, global access to scholarship, there remains a need for practical, sustainable models, for careful analysis of the consequences of business model choices, and for “caution in responding to passionate calls for a ‘default to open’” (2). Of particular concern for the academic community, as subscription revenues decline in the transition to OA and some publishers prioritize other sources of revenue, is the growing ownership of data analytics, hosting, and portal services by large scholarly publishers. This may enhance publishers' ability to lock in institutional customers through combined offerings that condition open access to journals upon purchase of other services. Even if such “bundled” arrangements have a near-term benefit of increasing openly licensed scholarship, they may run counter to long-term interests of the academic community by reducing competition and the diversity of service offerings. The healthy functioning of the academic community, including fair terms and conditions from commercial partners, requires that the global marketplace for data analytics and knowledge infrastructure be kept open to real competition.
Chapter
The connected world contains an abundant volume of natural language text that carries a large amount of knowledge, but it is becoming increasingly difficult for a human to digest it and discover the knowledge/wisdom in it, especially within any given time limit. Automated NLP aims to do this job effectively and accurately, the way a human does (for a limited amount of text). This chapter presents the challenges of NLP, the progress made so far in this field, NLP applications, the components of NLP, and the grammar of the English language as a machine requires it. In addition, it covers specific areas such as probabilistic parsing, ambiguities and their resolution, information extraction, discourse analysis, NL question-answering, commonsense interfaces, commonsense thinking and reasoning, causal-diversity, and various tools for NLP. Finally, a chapter summary and a set of relevant exercises are presented.
Article
Welcome to the last issue of the IEEE Transactions on Computational Social Systems (TCSS) of this year, with a special focus on “blockchain-based secure and trusted computing for IoT.” Here, we have 18 regular articles and a brief discussion on social intelligence. I would like to take this opportunity to thank and congratulate everyone, especially our editorial board, for a great job well done. Looking forward to working with you all in 2020!
Article
Welcome to the fourth issue of the IEEE Transactions on Computational Social Systems (TCSS), which includes 16 regular papers and a brief discussion on social computing. We would also like to inform you that IEEE will conduct its regular five-year review of TCSS at its TAB meeting in November in Boston. Any suggestions for our review report are welcome!
Article
The origin of artificial intelligence is investigated, based on which the concepts of hybrid intelligence and parallel intelligence are presented. The paradigm shift in intelligence indicates the “new normal” of cyber-physical-social systems (CPSS), in which system behaviors are guided by Merton's Laws. Thus, ACP-based parallel intelligence, consisting of Artificial societies, Computational experiments, and Parallel execution, is introduced to bridge the big modeling gap in CPSS.
Article
In order to describe fuzzy sets whose intension varies with time, we present the theory of time-varying universes of discourse and dynamic fuzzy rules by synthesizing fuzzy sets, linguistic dynamic systems (LDS), and dynamic programming. The time-varying universe of discourse is divided into two types, discrete and continuous, and each type is sorted into incremental, decremental, and mixed classes. Then, how to build dynamic fuzzy rules and how to compute with words on a time-varying universe of discourse are discussed. Finally, the linguistic dynamic orbits on the time-varying universe of discourse are given.
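To illustrate the notion of a universe of discourse that changes with time, the sketch below evaluates a triangular membership function defined relative to an interval that grows linearly with t (roughly the incremental continuous case named above); the bound and membership formulas are illustrative assumptions rather than the paper's actual formulation.

```python
def universe_bounds(t: float) -> tuple:
    """Hypothetical 'incremental' continuous universe: the interval grows linearly with t."""
    return 0.0, 10.0 + 2.0 * t

def mu_medium(x: float, t: float) -> float:
    """Triangular fuzzy set 'medium' defined relative to the universe at time t."""
    lo, hi = universe_bounds(t)
    a, b, c = lo, (lo + hi) / 2.0, hi  # left foot, peak, right foot
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# The same crisp value x = 8 gets a different membership degree as the universe expands.
for t in (0.0, 5.0, 10.0):
    print(f"t = {t:4.1f}   mu_medium(8) = {mu_medium(8.0, t):.2f}")
```

The point of the toy example is that the linguistic term "medium" keeps its shape but shifts its meaning as the underlying universe of discourse evolves, which is exactly what dynamic fuzzy rules must account for.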
Article
This article presents a measure of semantic similarity in an IS-A taxonomy based on the notion of shared information content. Experimental evaluation against a benchmark set of human similarity judgments demonstrates that the measure performs better than the traditional edge-counting approach. The article presents algorithms that take advantage of taxonomic similarity in resolving syntactic and semantic ambiguity, along with experimental results demonstrating their effectiveness.
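As a minimal from-scratch sketch of the shared-information-content measure described above, the code below computes the information content of concepts in a toy IS-A taxonomy and takes the most informative common subsumer as the similarity score; the taxonomy and frequency counts are invented for illustration (the article's own evaluation uses WordNet and human similarity judgments).

```python
import math

# Toy IS-A taxonomy: child -> parent (None marks the root).
parent = {
    "entity": None,
    "animal": "entity", "vehicle": "entity",
    "dog": "animal", "cat": "animal", "car": "vehicle",
}

# Hypothetical corpus frequencies; a concept's count includes its descendants.
counts = {"entity": 1000, "animal": 400, "vehicle": 300, "dog": 150, "cat": 120, "car": 200}
total = counts["entity"]

def information_content(concept: str) -> float:
    """IC(c) = -log p(c), where p(c) is the concept's relative corpus frequency."""
    return -math.log(counts[concept] / total)

def ancestors(concept: str) -> list:
    chain = [concept]
    while parent[chain[-1]] is not None:
        chain.append(parent[chain[-1]])
    return chain

def shared_ic_similarity(c1: str, c2: str) -> float:
    """Similarity = IC of the most informative common subsumer in the IS-A hierarchy."""
    common = set(ancestors(c1)) & set(ancestors(c2))
    return max(information_content(c) for c in common)

print(shared_ic_similarity("dog", "cat"))  # share 'animal' -> higher similarity
print(shared_ic_similarity("dog", "car"))  # share only the root 'entity' -> IC = 0
```

Because rarer subsumers carry more information, pairs that meet low in the taxonomy (dog/cat under animal) score higher than pairs that meet only at the root, which is the intuition the edge-counting baseline misses.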
Parallel management: The DAO to smart ecological technology for complexity management intelligence
  • Wang
Modeling, analysis and synthesis of linguistic dynamic systems: A computational theory
  • F.-Y. Wang
Towards a rigorous science of interpretable machine learning
  • F Doshi-Velez
  • B Kim
Theory of mind may have spontaneously emerged in large language models
  • M Kosinski