Article

Real-GPT: Efficiently Tailoring LLMs for Informed Decision-Making in the Real Estate Industry

... However, Gloria et al. (2024) highlight that general language models, while effective across a wide range of tasks, are limited in advancing specialized knowledge in specific domains or fields. Their training on generalized databases restricts their precision and depth in targeted areas, which may prove insufficient for critical applications or industries with unique complexities, such as the real estate sector. ...
Article
Full-text available
This study explored the integration of Artificial Intelligence (AI) tools in finance education, focusing on student perceptions, emotional reactions, and educator experiences. Quantitative data were gathered using the Synthetic Index of Use of AI Tools (SIUAIT) instrument, administered over three semesters. The findings revealed that finance students perceived AI tools as essential for enhancing their learning experience. Notably, Financial Engineering students exhibited higher proficiency and more positive perceptions of AI tools compared to students in other disciplines, such as Engineering and Business. An observational study utilizing eye tracker technology and facial expression analysis highlighted the emotional dynamics between AI-enhanced and traditional lecture-based classes. Positive emotions, such as joy and surprise, were significantly more prevalent in AI-enhanced environments, contributing to a more engaging and emotionally positive learning experience. However, an increase in fear was also observed, which could be considered a negative activating emotion that, ultimately, still fostered learning. Semi-structured interviews with educators revealed both the opportunities and challenges of AI integration. Educators acknowledged AI’s benefits in enhancing pedagogy but expressed concerns about over-reliance and ethical implications. Thematic analysis identified key dimensions: knowledge, usage, and ethics in AI. The study concluded that AI tools could significantly transform finance education, offering enhanced learning experiences and better preparing students for future careers. However, a balanced approach, addressing ethical and psychological impacts, was essential to maximize benefits and minimize potential drawbacks. Future research should explore AI’s long-term effects and its correlation with academic performance.
Article
Full-text available
The Generative Pre-trained Transformer (GPT) represents a notable breakthrough in the domain of natural language processing, propelling us toward the development of machines that can understand and communicate using language in a manner that closely resembles that of humans. GPT is based on the transformer architecture, a deep neural network designed for natural language processing tasks. Due to their impressive performance on natural language processing tasks and their ability to converse effectively, GPT models have gained significant popularity among researchers and industrial communities, making them among the most widely used and effective models in natural language processing and related fields, which motivated this review. This review provides a detailed overview of GPT, including its architecture, working process, training procedures, enabling technologies, and its impact on various applications. We also explore the potential challenges and limitations of GPT, and we discuss potential solutions and future directions. Overall, this paper aims to provide a comprehensive understanding of GPT, its enabling technologies, their impact on various applications, emerging challenges, and potential solutions.
Article
Full-text available
This contribution analyzes the self-perception and political biases of OpenAI’s Large Language Model ChatGPT. Considering the first small-scale reports and studies that have emerged, claiming that ChatGPT is politically biased towards progressive and libertarian points of view, this contribution is aimed at providing further clarity on this subject. Although the concept of political bias and affiliation is hard to define, lacking an agreed-upon measure for its quantification, this contribution attempts to examine this issue by having ChatGPT respond to questions on commonly used measures of political bias. In addition, further measures for personality traits that have previously been linked to political affiliations were examined. More specifically, ChatGPT was asked to answer the questions posed by the political compass test as well as similar questionnaires that are specific to the respective politics of the G7 member states. These eight tests were repeated ten times each and indicate that ChatGPT seems to hold a bias towards progressive views. The political compass test revealed a bias towards progressive and libertarian views, supporting the claims of prior research. The political questionnaires for the G7 member states indicated a bias towards progressive views but no significant bias between authoritarian and libertarian views, contradicting the findings of prior reports. In addition, ChatGPT’s Big Five personality traits were tested using the OCEAN test, and its personality type was queried using the Myers-Briggs Type Indicator (MBTI) test. Finally, the maliciousness of ChatGPT was evaluated using the Dark Factor test. These three tests were also repeated ten times each, revealing that ChatGPT perceives itself as highly open and agreeable, has the Myers-Briggs personality type ENFJ, and is among the test-takers with the least pronounced dark traits.
Article
Full-text available
We investigate the political bias of a large language model (LLM), ChatGPT, which has become popular for retrieving factual information and generating content. Although ChatGPT assures that it is impartial, the literature suggests that LLMs exhibit bias involving race, gender, religion, and political orientation. Political bias in LLMs can have adverse political and electoral consequences similar to bias from traditional and social media. Moreover, political bias can be harder to detect and eradicate than gender or racial bias. We propose a novel empirical design to infer whether ChatGPT has political biases by requesting it to impersonate someone from a given side of the political spectrum and comparing these answers with its default. We also propose dose-response, placebo, and profession-politics alignment robustness tests. To reduce concerns about the randomness of the generated text, we collect answers to the same questions 100 times, with question order randomized on each round. We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK. These results translate into real concerns that ChatGPT, and LLMs in general, can extend or even amplify the existing challenges involving political processes posed by the Internet and social media. Our findings have important implications for policymakers, media, politics, and academia stakeholders.
Conference Paper
Full-text available
Prosus is one of the largest technology investors in the world, and it is important for us to follow the news, reports, and commentary about the sectors and companies of interest. To create a dashboard overview from an overwhelming flow of text data, we built an NLP system that organizes unstructured text from multiple sources by sector, company, release date, and sentiment. Most of the text we harvest has a financial context, and sentiment analysis in the financial domain turned out to be a challenging task because of domain-specific language. Transfer learning has been shown to be successful in adapting to new domains without large training data sets. In this paper, we explore the effectiveness of NLP transfer learning in financial sentiment classification and introduce FinBERT, a language model based on BERT. FinBERT improved the state-of-the-art performance by 15 percentage points on a financial sentiment classification task on the Financial PhraseBank dataset.
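A minimal sketch of how such a fine-tuned financial sentiment model can be applied in practice, assuming the Hugging Face transformers library and the publicly released "ProsusAI/finbert" checkpoint (an assumption; substitute whichever fine-tuned checkpoint you actually use):

```python
# Hedged sketch: scoring financial headlines with a BERT-based sentiment model.
# "ProsusAI/finbert" is assumed to be the released FinBERT checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="ProsusAI/finbert")

headlines = [
    "Company X reports record quarterly earnings, beating analyst estimates.",
    "Regulator opens investigation into accounting practices at Company Y.",
]

for text in headlines:
    result = classifier(text)[0]  # e.g. {'label': 'positive', 'score': 0.97}
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```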
Article
Full-text available
The real estate industry is currently undergoing a digital transformation that not only changes its nature in terms of markets and work environments but is also influencing its growth. What are the main trends and concerns related to this transformation? To what extent is the real estate industry already prepared for it? This paper reviews the situation in terms of the emergence of a phenomenon known as PropTech. PropTech is characterized by the massive implementation of emerging technologies such as home matching tools, drones, virtual reality, building information modelling (BIM), data analytics tools, artificial intelligence (AI), the Internet of Things (IoT), blockchain, smart contracts, crowdfunding in the real estate sector, fintechs related to real estate, smart cities and regions, smart homes, and the shared economy. This survey of changes in the real estate industry due to PropTech covers four areas: (1) PropTech applications in the real estate industry; (2) implications of PropTech for real estate market transparency; (3) how PropTech could give a region or a company a competitive advantage; and (4) concerns about the wider implications of these changes for the labour market and education. In a plausible scenario, changing real estate technologies could change system dynamics and improve real estate market transparency. Moreover, it can be asserted that, in a broader sense, PropTech is beneficial for territorial competition and territorial growth strategies. And lastly, under different institutional arrangements, PropTech can affect the changing structure of the real estate market […]
Article
Full-text available
Based on the analysis of 190 studies (18,573 participants), we estimate that the average silent reading rate for adults in English is 238 words per minute (wpm) for non-fiction and 260 wpm for fiction. The difference can be predicted by taking into account the length of the words, with longer words in non-fiction than in fiction. The estimates are lower than the numbers often cited in scientific and popular writings. The reasons for the overestimates are reviewed. The average oral reading rate (based on 77 studies and 5,965 participants) is 183 wpm. Reading rates are lower for children, older adults, and readers with English as a second language. The reading rates are in line with maximum listening speed and do not require the assumption of reading-specific language processing. Within each group/task there are reliable individual differences, which are not yet fully understood. For silent reading of English non-fiction most adults fall in the range of 175 to 300 wpm; for fiction the range is 200 to 320 wpm. Reading rates in other languages can be predicted reasonably well by taking into account the number of words these languages require to convey the same message as in English.
Chapter
Chapter 8 of this book delves into the transformative potential of AI, particularly the generative AI model ChatGPT, in the real estate sector. The chapter begins by exploring the various ways in which AI is enhancing user experiences, streamlining processes, and fostering innovative solutions in real estate. It further elaborates on the specific applications of ChatGPT in the industry, including property listing and search, customer service, marketing, legal support, home staging, investment analysis, appraisal, home inspection, and property management. However, the incorporation of AI and ChatGPT in real estate does pose certain challenges, which the chapter also addresses. These encompass issues related to data quality and bias, the need for transparency and privacy in handling real estate data, the balance between automation and human intervention, integration with existing real estate systems, and data storage and management. The chapter concludes with an exploration of the synergy between ChatGPT and Web3 in reimagining the real estate sector. It elucidates the intersection of AI and blockchain in real estate, presents potential use cases, and discusses the strategies to overcome related challenges. The final discussion points towards the future, stressing the need for real estate professionals to adapt to a landscape where AI and blockchain become integral parts of the business model.
Article
We examined the productivity effects of a generative artificial intelligence (AI) technology, the assistive chatbot ChatGPT, in the context of midlevel professional writing tasks. In a preregistered online experiment, we assigned occupation-specific, incentivized writing tasks to 453 college-educated professionals and randomly exposed half of them to ChatGPT. Our results show that ChatGPT substantially raised productivity: The average time taken decreased by 40% and output quality rose by 18%. Inequality between workers decreased, and concern and excitement about AI temporarily rose. Workers exposed to ChatGPT during the experiment were 2 times as likely to report using it in their real job 2 weeks after the experiment and 1.6 times as likely 2 months after the experiment.
Article
Purpose: This viewpoint article explores the transformative capabilities of large language models (LLMs) like the Chat Generative Pre-training Transformer (ChatGPT) within the property valuation industry. It particularly accentuates the pivotal role of prompt engineering in facilitating valuation reporting and advocates for adopting the “Red Book” compliance Chain-of-thought (COT) prompt engineering as a gold standard for generating AI-facilitated valuation reports.
Design/methodology/approach: The article offers a high-level examination of the application of LLMs in real estate research, highlighting the essential role of prompt engineering for future advancements in generative AI. It explores the collaborative dynamic between valuers and AI advancements, emphasising the importance of precise instructions and contextual cues in directing LLMs to generate accurate and reproducible valuation outcomes.
Findings: Integrating LLMs into property valuation processes paves the way for efficiency improvements and task automation, such as generating reports and drafting contracts. AI-facilitated reports offer unprecedented transparency and elevate client experiences. The fusion of valuer expertise with prompt engineering ensures the reliability and interpretability of valuation reports.
Practical implications: Delineating the types and versions of LLMs used in AI-generated valuation reports encourages the adoption of transparency best practices within the industry. Valuers, as expert prompt engineers, can harness the potential of AI to enhance efficiency, accuracy and transparency in the valuation process, delivering significant benefits to a broad array of stakeholders.
Originality/value: The article elucidates the substantial impact of prompt engineering in leveraging LLMs within the property industry. It underscores the importance of valuers training their unique GPT models, enabling customisation and reproducibility of valuation outputs. The symbiotic relationship between valuers and LLMs is identified as a key driver shaping the future of property valuations.
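As an illustration of the prompt engineering discussed above, the following is a minimal, hypothetical chain-of-thought prompt template for an AI-assisted valuation summary; the field names and reasoning steps are illustrative assumptions, not the article's actual "Red Book" compliance prompt:

```python
# Hypothetical chain-of-thought prompt template for an AI-assisted valuation
# summary; the steps and fields are illustrative, not the article's prompt.
PROMPT_TEMPLATE = """You are assisting a RICS-registered valuer.
Property: {address}
Valuation basis: {basis}
Comparable evidence: {comparables}

Reason step by step:
1. Summarise the property and its location.
2. Adjust each comparable for size, condition and transaction date.
3. Derive an indicative value range and state the key assumptions.
4. Draft a short, Red Book-style reasoned conclusion for the report.
"""

prompt = PROMPT_TEMPLATE.format(
    address="12 Example Street, Manchester",
    basis="Market Value",
    comparables="three recent sales of similar two-bed flats within 0.5 miles",
)
print(prompt)  # send this string to the LLM of your choice
```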
Article
Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.
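A brief, hedged sketch of how a domain-specific generative model of this kind can be queried, assuming the Hugging Face transformers library and the "microsoft/biogpt" checkpoint (an assumption; substitute the checkpoint you actually use):

```python
# Hedged sketch: free-text generation with a domain-specific GPT.
# "microsoft/biogpt" is assumed to be the released BioGPT checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="microsoft/biogpt")
out = generator("Bicalutamide is", max_new_tokens=40, num_return_sequences=1)
print(out[0]["generated_text"])
```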
Article
Purpose: Although many theories aim to explain initial public offering (IPO) underpricing, initial-day returns of US Real Estate Investment Trust (REIT) IPOs remain a “puzzle”. The literature on REIT IPOs has focused on indirect quantitative proxies for information asymmetries between REITs and investors to determine IPO underpricing. This study, however, proposes textual analysis to exploit the qualitative information, revealed through one of the most important documents during the IPO process – Form S-11 – as a direct measure of information asymmetries.
Design/methodology/approach: This study determines the level of uncertain language in the prospectus, as well as its similarity to recently filed registration statements, to assess whether textual features can solve the underpricing puzzle. It assumes that uncertain language makes it more difficult for potential investors to price the issue and thus increases underpricing. Furthermore, it is hypothesized that a higher similarity to previous filings indicates that the prospectus provides little useful information and thus does not resolve existing information asymmetries, leading to increased underpricing.
Findings: Contrary to expectations, this research does not find a statistically significant association between uncertain language in Form S-11 and initial-day returns. This result is interpreted as suggesting that uncertain language in the prospectus does not reflect the issuer's expectations about the company's future prospects, but rather is necessary because of forecasting difficulties and litigation risk. Analyzing disclosure similarity instead, this study finds a statistically and economically significant impact of qualitative information on initial-day returns. Thus, REIT managers may reduce underpricing by voluntarily providing more information to potential investors in Form S-11.
Practical implications: The results demonstrate that textual analysis can in fact help to explain underpricing of US REIT IPOs, as qualitative information in Forms S-11 decreases information asymmetries between US REIT managers and investors, thus reducing underpricing. Consequently, REIT managers are incentivized to provide as much information as possible to reduce underpricing, while investors could use textual analysis to identify offerings that promise the highest returns.
Originality/value: This is the first study which applies textual analysis to corporate disclosures of US REITs in order to explain IPO underpricing.
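A simplified sketch of the two textual measures described above: the share of uncertain words in a filing and its cosine similarity to earlier filings. The word list and documents are toy placeholders, not the study's dictionary or data:

```python
# Toy illustration: uncertainty share and similarity to prior filings.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

UNCERTAIN_WORDS = {"may", "might", "could", "approximately", "uncertain", "risk"}

def uncertainty_share(text: str) -> float:
    tokens = re.findall(r"[a-z]+", text.lower())
    return sum(t in UNCERTAIN_WORDS for t in tokens) / max(len(tokens), 1)

new_filing = "The company may face risks that could affect future results."
prior_filings = [
    "We intend to acquire income-producing commercial properties.",
    "The company may face risks that could affect results of operations.",
]

tfidf = TfidfVectorizer().fit(prior_filings + [new_filing])
vectors = tfidf.transform(prior_filings + [new_filing])
max_similarity = cosine_similarity(vectors[-1], vectors[:-1]).max()

print(f"uncertainty share: {uncertainty_share(new_filing):.3f}")
print(f"max similarity to prior filings: {max_similarity:.3f}")
```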
Conference Paper
Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely manipulate knowledge is still limited, and hence on knowledge-intensive tasks, their performance lags behind task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge remain open research problems. Pre-trained models with a differentiable access mechanism to explicit non-parametric memory can overcome this issue, but have so far been only investigated for extractive downstream tasks. We explore a general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) -- models which combine pre-trained parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. We compare two RAG formulations, one which conditions on the same retrieved passages across the whole generated sequence, the other can use different passages per token. We fine-tune and evaluate our models on a wide range of knowledge-intensive NLP tasks and set the state-of-the-art on three open domain QA tasks, outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art parametric-only seq2seq baseline.
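A minimal retrieval-augmented generation sketch in the spirit of the recipe above: retrieve the passages most relevant to a query, then condition a generator on them. The TF-IDF retriever and the `call_llm` stub are simplifying assumptions standing in for a dense retriever and a pre-trained seq2seq model:

```python
# Minimal RAG-style sketch: retrieve, then generate conditioned on the context.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

PASSAGES = [
    "REITs are companies that own or finance income-producing real estate.",
    "The cap rate is net operating income divided by property value.",
    "PropTech refers to technology-driven innovation in real estate.",
]

vectorizer = TfidfVectorizer().fit(PASSAGES)
index = vectorizer.transform(PASSAGES)

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = cosine_similarity(vectorizer.transform([query]), index)[0]
    return [PASSAGES[i] for i in scores.argsort()[::-1][:k]]

def call_llm(prompt: str) -> str:
    # Placeholder for whatever generative model you actually use.
    return f"[answer generated from a prompt of {len(prompt)} characters]"

query = "How is a cap rate defined?"
context = "\n".join(retrieve(query))
print(call_llm(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"))
```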
Article
In this study, we examine the asymmetric effect of positive and negative real estate news on REIT market returns. While findings in the general stock market show a greater market reaction to negative news, we find that the REIT market reacts predominantly to positive rather than to negative real estate news. We show that the content of positive real estate news becomes significantly related to REIT market returns in the modern REIT era and in recession periods. We further find that the asymmetric market reaction to positive real estate news is apparent only in REITs with high institutional ownership and REITs that report high rental income. Our findings imply that the REIT market’s diversification benefits and its unique institutional features could be the driving factors for the asymmetric market responses. In additional analysis, we show that there exists little market reversal in subsequent periods, implying that REIT investors respond to the information contained in the news content and do not solely act on their sentiment. Lastly, we show that our findings are robust to alternative news and return measures.
Article
Purpose: The purpose of this paper is to identify and analyse the news coverage and sentiment of real estate-related trends in Germany. Trends are considered as being stable and long-term. If the news coverage and sentiment of trends are subject to cyclicality, this could impact investors’ behaviour. For instance, in the case of increased reporting on sustainability issues, investors may be inclined to invest more in sustainable buildings, assuming that this is of growing importance to their clients. Hence, investors could expect higher returns when a trend topic goes viral.
Design/methodology/approach: With the help of topic modelling, incorporating seed words partially generated via word embeddings, almost 170,000 newspaper articles published between 1999 and 2019 by a major German real estate news provider are analysed and assigned to real estate-related trends. Through applying a dictionary-based approach, this dataset is then analysed based on whether the tone of the news coverage of a specific trend is subject to change.
Findings: The articles concerning urbanisation and globalisation account for the largest shares of reporting. However, the shares are subject to change over time, both in terms of news coverage and sentiment. In particular, the topic of sustainability illustrates a clearly increasing trend with cyclical movements throughout the examined period. Overall, the digitalisation trend has a highly positive connotation within the analysed articles, while regulation displays the most negative sentiment.
Originality/value: To the best of the authors’ knowledge, this is the first application to explore German real estate newspaper articles regarding the methodologies of word representation and seeded topic modelling. The integration of topic modelling into real estate analysis provides a means through which to extract information in a standardised and replicable way. The methodology can be applied to several further fields like analysing market reports, company statements or social media comments on real estate topics. Finally, this is also the first study to measure the cyclicity of real estate-related trends by means of textual analysis.
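A much-simplified stand-in for the seeded topic-tagging step described above, with purely illustrative seed lists (the study itself uses topic modelling with embedding-derived seed words):

```python
# Toy seed-word topic tagging; seed lists are illustrative only.
SEED_WORDS = {
    "sustainability": {"sustainable", "green", "energy", "esg"},
    "digitalisation": {"digital", "proptech", "platform", "data"},
    "urbanisation": {"city", "urban", "housing", "migration"},
}

def tag_topics(article: str) -> dict[str, int]:
    tokens = article.lower().split()
    return {topic: sum(t in seeds for t in tokens)
            for topic, seeds in SEED_WORDS.items()}

print(tag_topics("Green energy retrofits make urban housing more sustainable"))
```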
Article
We report on optimized molecular geometries and electronic properties calculated by the PM6 method for 94.0% of the 91.6 million molecules cataloged in PubChem Compounds retrieved on August 29, 2016. In addition to neutral states, we also calculated those for cationic, anionic, and spin flipped electronic states of 56.2%, 49.7%, and 41.3% of the molecules, respectively. Thus, the grand total of the PM6 calculations amounted to 221 million. We compared the resulting molecular geometries with B3LYP/6-31G* optimized geometries for 2.6 million molecules. The root-mean-square deviations in bond length and bond angle were approximately 0.016 Å and 1.7°, respectively. Then, using linear regression to examine the HOMO energy levels E(HOMO) in the B3LYP and PM6 calculations, we found that E_B3LYP(HOMO) = 0.876 E_PM6(HOMO) + 1.975 (eV) and calculated the coefficient of determination to be 0.803. Likewise, we examined the LUMO energy levels and found E_B3LYP(LUMO) = 1.069 E_PM6(LUMO) - 0.420 (eV); the coefficient of determination was 0.842. We also generated four subdata sets, each of which was composed of molecules with molecular weights less than 500. Subdata set i contained C, H, O and N, ii contained C, H, N, O, P, and S, iii contained C, H, N, O, P, S, F, and Cl, and iv contained C, H, N, O, P, S, F, Cl, Na, K, Mg, and Ca. The data sets are available at http://pubchemqc.riken.jp/pm6_datasets.html under a Creative Commons Attribution 4.0 International license.
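As a worked instance of the regressions quoted above, the reported coefficients can be applied directly to map PM6 orbital energies (in eV) to B3LYP/6-31G* estimates:

```python
# Worked example of the reported PM6-to-B3LYP regressions (energies in eV).
def b3lyp_homo_estimate(e_pm6_homo: float) -> float:
    return 0.876 * e_pm6_homo + 1.975

def b3lyp_lumo_estimate(e_pm6_lumo: float) -> float:
    return 1.069 * e_pm6_lumo - 0.420

print(b3lyp_homo_estimate(-9.0))  # approx. -5.91 eV
print(b3lyp_lumo_estimate(0.5))   # approx.  0.11 eV
```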
Article
Purpose: This study examines whether language disclosed in the Management Discussion and Analysis (MD&A) of US Real Estate Investment Trusts (REITs) provides signals regarding future firm performance and thus generates a market response.
Design/methodology/approach: This research conducts textual analysis on a sample of approximately 6,500 MD&As of US REITs filed with the SEC between 2003 and 2018. Specifically, the Loughran and McDonald (2011) financial dictionary, and a custom dictionary for the real estate industry created by Ruscheinsky et al. (2018), are employed to determine the inherent sentiment, that is, the level of pessimistic or optimistic language for each filing. Thereafter, a panel fixed-effects regression enables investigating the relationship between sentiment and future firm performance, as well as the market's reaction.
Findings: The empirical results suggest that higher levels of pessimistic (optimistic) language in the MD&A predict lower (higher) future firm performance. In this regard, the use of a domain-specific real estate dictionary, namely that developed by Ruscheinsky et al. (2018), leads to superior results. Corresponding to the notion that the human psyche is affected more strongly by negative than positive news (Rozin and Royzman, 2001), the market responds solely to pessimistic language in the MD&A.
Practical implications: The results suggest that the market can benefit from textual analysis, as investigating the language in the MD&A reduces information asymmetries between US REIT managers and investors.
Originality/value: This is the first study to analyze, exclusively for US REITs, whether language in the MD&A is predictive of future firm performance and whether the market responds to textual sentiment.
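A hedged sketch of the dictionary-based tone scoring described above; the word lists are tiny illustrative stand-ins for the Loughran and McDonald (2011) and Ruscheinsky et al. (2018) dictionaries:

```python
# Toy dictionary-based tone score for an MD&A passage.
import re

POSITIVE = {"growth", "improved", "strong", "favorable", "record"}
NEGATIVE = {"decline", "impairment", "weak", "loss", "vacancy"}

def tone(text: str) -> float:
    tokens = re.findall(r"[a-z]+", text.lower())
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / max(pos + neg, 1)  # +1 optimistic, -1 pessimistic

mdna = "Occupancy improved and rental growth was strong, despite a small impairment."
print(f"tone score: {tone(mdna):+.2f}")
```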
Article
Purpose: The purpose of this paper is to determine systematically the broader relationship between news media sentiment, extracted through textual analysis of articles published by leading US newspapers, and the securitized real estate market.
Design/methodology/approach: The methodology is divided into two stages. First, roughly 125,000 US newspaper article headlines from Bloomberg, The Financial Times, Forbes and The Wall Street Journal are investigated with a dictionary-based approach, and different measures of sentiment are created. Second, a vector autoregressive framework is used to analyse the relationship between media-expressed sentiment and REIT market movements over the period 2005–2015.
Findings: The empirical results provide significant evidence for a leading relationship between media sentiment and future REIT market movements. Furthermore, applying the dictionary-based approach for textual analysis, the results exhibit that a domain-specific dictionary is superior to a general dictionary. In addition, better results are achieved by a sentiment measure incorporating both positive and negative sentiment, rather than just one polarity.
Practical implications: In connection with fundamentals of the REIT market, these findings can be utilised to further improve the understanding of securitized real estate market movements and investment decisions. Furthermore, this paper highlights the importance of paying attention to new media and digitalization. The results are robust for different REIT sectors and when conventional control variables are considered.
Originality/value: This paper demonstrates for the first time that textual analysis is able to capture media sentiment from news relevant to the US securitized real estate market. Furthermore, the broad collection of newspaper articles from four different sources is unique.
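A hedged sketch of the vector autoregressive step, assuming the statsmodels library; the two series below are simulated noise, used purely to show the mechanics of testing whether a sentiment series leads REIT returns:

```python
# VAR sketch on simulated data: does sentiment lead (Granger-cause) REIT returns?
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 200
sentiment = rng.normal(size=n)
returns = 0.3 * np.roll(sentiment, 1) + rng.normal(scale=0.5, size=n)

data = pd.DataFrame({"sentiment": sentiment, "reit_return": returns}).iloc[1:]
results = VAR(data).fit(maxlags=4, ic="aic")
print(results.test_causality("reit_return", ["sentiment"], kind="f"))
```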
Article
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
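For readers unfamiliar with the attention mechanism at the core of the Transformer, here is a minimal numpy sketch of scaled dot-product attention, Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, with randomly generated matrices standing in for learned projections:

```python
# Minimal scaled dot-product attention (single head, no masking).
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)   # (4, 8)
```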
LongRoPE: Extending LLM context window beyond 2 million tokens
  • Y Ding
  • L L Zhang
  • C Zhang
  • Y Xu
  • N Shang
  • J Xu
  • M Yang
Retrieval-augmented generation for large language models: A survey
  • Y Gao
  • Y Xiong
  • X Gao
  • K Jia
  • J Pan
  • Y Bi
  • H Wang
Language models are few-shot learners
  • T Brown
  • B Mann
  • N Ryder
  • M Subbiah
  • J D Kaplan
  • P Dhariwal
  • D Amodei
Extending context window of large language models via positional interpolation
  • S Chen
  • S Wong
  • L Tian
Scaling instruction-finetuned language models
  • H W Chung
  • L Hou
  • S Longpre
  • B Zoph
  • Y Tay
  • W Fedus
  • J Wei
QLoRA: Efficient finetuning of quantized LLMs
  • T Dettmers
  • A Pagnoni
  • A Holtzman
  • L Zettlemoyer
MoDS: Model-oriented data selection for instruction tuning
  • Q Du
  • C Zong
  • J Zhang
LoRA: Low-rank adaptation of large language models
  • E J Hu
  • Y Shen
  • P Wallis
  • Z Allen-Zhu
  • Y Li
  • S Wang
  • W Chen
CAMEL: Communicative agents for “mind” exploration of large language model society
  • G Li
  • H Hammoud
  • H Itani
  • D Khizbullin
  • B Ghanem
Mamba: Linear-time sequence modeling with selective state spaces
  • A Gu
  • T Dao
Efficiently modeling long sequences with structured state spaces
  • A Gu
  • K Goel
  • C Ré
Evaluating large language models: A comprehensive survey
  • Z Guo
  • R Jin
  • C Liu
  • Y Huang
  • D Shi
  • Supryadi
  • D Xiong
Bias testing and mitigation in LLM-based code generation
  • D Huang
  • Q Bu
  • J Zhang
  • X Xie
  • J Chen
  • H Cui
Decoupled weight decay regularization
  • I Loshchilov
  • F Hutter
WizardCoder: Empowering code large language models with evol-instruct
  • Z Luo
  • C Xu
  • P Zhao
  • Q Sun
  • X Geng
  • W Hu
  • D Jiang
The economic potential of generative AI: The next productivity frontier
  • McKinsey Global Institute
OpenAssistant conversations—Democratizing large language model alignment
  • A Köpf
  • Y Kilcher
  • D Von Rütte
  • S Anagnostidis
  • Z R Tam
  • K Stevens
  • A Mattick
Is your code generated by ChatGPT really correct? Rigorous evaluation of large language models for code generation
  • J Liu
  • C S Xia
  • Y Wang
  • L Zhang
Visual instruction tuning
  • H Liu
  • C Li
  • Q Wu
  • Y J Lee
Stanford Alpaca: An instruction-following Llama model
  • R Taori
  • I Gulrajani
  • T Zhang
  • Y Dubois
  • X Li
  • C Guestrin
  • T B Hashimoto
Training language models to follow instructions with human feedback
  • L Ouyang
  • J Wu
  • X Jiang
  • D Almeida
  • C Wainwright
  • P Mishkin
  • R Lowe
RICS Valuation - Global Standards
  • RICS
LLaMA: Open and efficient foundation language models
  • H Touvron
  • T Lavril
  • G Izacard
  • X Martinet
  • M.-A Lachaux
  • T Lacroix
  • G Lample