November 2024
·
13 Reads
·
4 Citations
November 2024
·
25 Reads
In recent years, the application of generative artificial intelligence (GenAI) to financial analysis and investment decision-making has gained significant attention. However, most existing approaches rely on single-agent systems, which fail to fully exploit the collaborative potential of multiple AI agents. In this paper, we propose a novel multi-agent collaboration system designed to enhance decision-making in financial investment research. The system incorporates agent groups with configurable group sizes and collaboration structures, leveraging the strengths of each group type. By utilizing a sub-optimal combination strategy, the system dynamically adapts to varying market conditions and investment scenarios, optimizing performance across different tasks. We focus on three sub-tasks (fundamentals, market sentiment, and risk analysis) by analyzing the 2023 SEC 10-K filings of the 30 companies in the Dow Jones Industrial Average. Our findings reveal significant performance variations depending on how the AI agents are configured for each task. The results demonstrate that our multi-agent collaboration system outperforms traditional single-agent models, offering improved accuracy, efficiency, and adaptability in complex financial environments. This study highlights the potential of multi-agent systems to transform financial analysis and investment decision-making by integrating diverse analytical perspectives.
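As a minimal sketch of how such configurable agent groups might be assembled, consider the Python outline below. It is illustrative only: the ask_llm stub, the role names, and the two collaboration structures ("horizontal" for independent agents whose outputs are merged, "vertical" for a pipeline in which each agent refines its predecessor's output) are assumptions standing in for whatever configurations the paper actually evaluates.

from dataclasses import dataclass
from typing import List

# Hypothetical stand-in for a real LLM API call.
def ask_llm(role: str, prompt: str) -> str:
    return f"[{role}] analysis of: {prompt[:40]}"

@dataclass
class Agent:
    role: str

    def run(self, task: str) -> str:
        return ask_llm(self.role, task)

class AgentGroup:
    # A group is parameterized by its size (number of roles) and structure.
    def __init__(self, roles: List[str], structure: str = "horizontal"):
        self.agents = [Agent(r) for r in roles]
        self.structure = structure

    def solve(self, task: str) -> str:
        if self.structure == "vertical":
            # Each agent refines the previous agent's output in sequence.
            out = task
            for agent in self.agents:
                out = agent.run(out)
            return out
        # Horizontal: agents work independently and outputs are merged.
        return "\n".join(agent.run(task) for agent in self.agents)

# One group per sub-task studied in the paper (roles are invented here).
groups = {
    "fundamentals": AgentGroup(["accountant", "auditor"], "vertical"),
    "market sentiment": AgentGroup(["news analyst", "social analyst"]),
    "risk": AgentGroup(["risk officer"]),
}
for name, group in groups.items():
    print(name, "->", group.solve("Analyze the 2023 10-K filing of company X"))

Varying the group sizes and the structure argument per sub-task is the kind of configuration choice whose performance effect the study reports.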
March 2024
·
243 Reads
·
59 Citations
Large language models (LLMs) have the potential to transform our lives and work through the content they generate, known as AI-Generated Content (AIGC). To harness this transformation, we need to understand the limitations of LLMs. Here, we investigate the bias of AIGC produced by seven representative LLMs, including ChatGPT and LLaMA. We collect news articles from The New York Times and Reuters, both known for their dedication to providing unbiased news. We then apply each examined LLM to generate news content using the headlines of these articles as prompts, and evaluate the gender and racial biases of the resulting AIGC by comparing it with the original news articles. We further analyze the gender bias of each LLM under biased prompts by adding gender-biased messages to prompts constructed from these news headlines. Our study reveals that the AIGC produced by each examined LLM exhibits substantial gender and racial biases. Moreover, the AIGC generated by each LLM shows notable discrimination against women and Black individuals. Among the LLMs, the AIGC generated by ChatGPT demonstrates the lowest level of bias, and ChatGPT is the only model capable of declining to generate content when given biased prompts.
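The comparison methodology lends itself to a small illustration. The sketch below is a toy version under stated assumptions: it measures a gender gap with tiny hand-picked word lists and a stubbed generator, and does not reproduce the paper's actual bias metrics or LLM calls.

import re
from collections import Counter

# Illustrative lexicons; a real study would use far richer resources.
FEMALE = {"she", "her", "hers", "woman", "women", "female"}
MALE = {"he", "him", "his", "man", "men", "male"}

def gender_counts(text: str) -> Counter:
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(
        female=sum(t in FEMALE for t in tokens),
        male=sum(t in MALE for t in tokens),
    )

def male_share(text: str) -> float:
    # Share of male mentions among all gendered mentions; 0.5 is balanced.
    c = gender_counts(text)
    total = c["female"] + c["male"]
    return c["male"] / total if total else 0.5

# Hypothetical stand-in for prompting an LLM with a news headline.
def generate_from_headline(headline: str) -> str:
    return "He said the economy grew while she noted risks remained."

original = "The senator said she would support the bill her colleagues drafted."
generated = generate_from_headline("Senator backs new bill")
print("original male share:", male_share(original))
print("generated male share:", male_share(generated))

Comparing the two shares per article, aggregated over many headlines, is the general shape of the AIGC-versus-source comparison the abstract describes.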
January 2024
·
7 Reads
·
1 Citation
ACM Transactions on Information Systems
The tagging system has become a primary tool for organizing information resources on the Internet, benefiting both users and platforms. Building a successful tagging system requires automatic tagging methods. As society evolves, new tags keep emerging, and tagging items with these emerging tags is an open challenge for automatic tagging systems that has not been well studied in the literature. We define this as a tag-centered cold-start problem and propose NTFSL, a novel few-shot learning method based on neural topic modeling, to solve it. Our method fuses the topic modeling task with the few-shot learning task, endowing the model with the capability to infer effective topics for the tag-centered cold-start problem while remaining interpretable. Meanwhile, we propose a novel neural topic model to improve the quality of the inferred topics, which in turn enhances tagging performance. Furthermore, we develop a novel inference method based on the variational auto-encoding framework. We conducted extensive experiments on two real-world datasets, and the results demonstrate the superior performance of our model compared with state-of-the-art machine learning methods. Case studies also illustrate the model's interpretability.
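To make the variational auto-encoding idea concrete, here is a toy NumPy sketch of one forward pass of a VAE-style neural topic model: an encoder maps bag-of-words counts to a Gaussian over a latent space, the reparameterization trick draws a sample that is normalized into topic proportions, and a decoder reconstructs word distributions. The dimensions and random weights are arbitrary, and nothing here reproduces NTFSL's fusion with few-shot learning.

import numpy as np

rng = np.random.default_rng(0)
V, K, D = 50, 5, 8  # vocabulary size, number of topics, number of documents

# Toy bag-of-words corpus and randomly initialized encoder/decoder weights.
X = rng.poisson(1.0, size=(D, V)).astype(float)
W_mu = rng.normal(0.0, 0.1, (V, K))
W_logvar = rng.normal(0.0, 0.1, (V, K))
beta = rng.normal(0.0, 0.1, (K, V))  # topic-word decoder matrix

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Encoder: each document is mapped to a Gaussian over the latent topic space.
mu, logvar = X @ W_mu, X @ W_logvar
# Reparameterization trick: sample z, then normalize into topic proportions.
z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)
theta = softmax(z)  # document-topic proportions (rows sum to 1)
# Decoder: mix topic-word distributions to reconstruct each document.
recon = theta @ softmax(beta)
log_lik = (X * np.log(recon + 1e-10)).sum()
kl = 0.5 * (np.exp(logvar) + mu**2 - 1.0 - logvar).sum()
print("ELBO:", log_lik - kl)  # training would maximize this objective

Interpretability comes from inspecting softmax(beta): each row is a distribution over words, i.e. a topic that can be read directly.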
January 2024
·
6 Reads
·
1 Citation
SSRN Electronic Journal
November 2023
·
208 Reads
·
1 Citation
January 2023
·
28 Reads
·
25 Citations
SSRN Electronic Journal
... In the academic literature, [14] explores computationally intelligent agents in finance, while [15] introduces the FinVision multi-agent framework for stock market prediction. [16] optimizes AI-agent collaboration in financial research, and [17] discusses the impact of AI traders in financial markets. [18] presents FinRobot, an open-source AI agent platform for financial applications, and [19] introduces FinCon, a synthesized LLM multi-agent system for enhanced financial decision-making. ...
November 2024
... This work adopts in-context learning and instruction learning, and it expresses different abilities as different tasks with domain-specific prompts. [11] tackles the New Community Cold-Start (NCCS) problem by proposing a novel recommendation method that leverages the extensive knowledge and powerful inference capabilities of Large Language Models. It selects In-Context Learning (ICL) as the prompting strategy and designs a coarse-to-fine framework to efficiently choose demonstration examples for creating effective ICL prompts. ...
January 2024
SSRN Electronic Journal
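The coarse-to-fine demonstration selection described in the context above can be sketched as a two-stage filter over a demonstration pool. In the assumed sketch below, a cheap keyword-overlap count narrows the pool before a finer Jaccard re-ranking picks the final in-context examples; the actual criteria of the cited method may differ.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def select_demos(query: str, pool: list, k_coarse: int = 4, k_fine: int = 2) -> list:
    q = set(query.lower().split())
    # Coarse stage: a cheap overlap count narrows the candidate pool.
    coarse = sorted(pool, key=lambda d: -len(q & set(d["q"].lower().split())))
    # Fine stage: a costlier similarity re-ranks the survivors.
    return sorted(coarse[:k_coarse],
                  key=lambda d: -jaccard(q, set(d["q"].lower().split())))[:k_fine]

pool = [  # hypothetical demonstration pool of question-answer pairs
    {"q": "recommend forums for a new photography community", "a": "<answer 1>"},
    {"q": "recommend groups for a new cooking community", "a": "<answer 2>"},
    {"q": "translate this sentence into French", "a": "<answer 3>"},
]
query = "recommend channels for a new gaming community"
demos = select_demos(query, pool)
prompt = "\n\n".join(f"Q: {d['q']}\nA: {d['a']}" for d in demos)
prompt += f"\n\nQ: {query}\nA:"
print(prompt)

The resulting prompt string would then be sent to the LLM, whose completion serves as the recommendation for the new community.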
... AIGC "is considered biased if it exhibits systematic and unfair discrimination against certain population groups, particularly underrepresented population groups" (Fang et al., 2024, p. 5224). For example, in a comprehensive scientific report, Fang et al. (2024) collected news articles from two outlets known for their unbiased content and used their headlines as prompts to examine gender and racial biases in AIGC by comparing the AI generated texts to the original news articles. Their study found notable discrimination against female and black identities. ...
March 2024
... 25. Yu (2019): deep learning approach for mobile health analytics to assess senior citizens' risks and health conditions. 26. Che et al. (2024): ML models and learning algorithms for tagging. 27. Rad et al. (2024): review of datasets for data-driven design approaches. ...
January 2024
ACM Transactions on Information Systems
... Finally, most foundational models upon which GAI tools are built have been shown to exhibit gender and racial bias, with most instances of harm affecting Black women in particular [25,26]. Indeed, the rapid implementation of novel GAI applications has already caused harm outside medicine. ...
January 2023
SSRN Electronic Journal