TITLE PAGE
Title of the article
ChatGPT and future AI chatbots: what may be the impact on Registered Dietitian
Nutritionists?
Author Information
Angeline Chatelan, Ph.D., M.Sc., RD, Assistant Professor, Department of Nutrition and
Dietetics, Geneva School of Health Sciences, HES-SO University of Applied Sciences and
Arts Western Switzerland, Rue des Caroubiers 25, 1227 Carouge, Geneva, Switzerland, +41
22 558 51 16, angeline.chatelan@hesge.ch (corresponding author)
ORCID-profile: https://orcid.org/0000-0003-4326-4789
Aurélien Clerc, M.Sc., RD, Lecturer, Department of Nutrition and Dietetics, Geneva School of
Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland,
Geneva, Switzerland, and Clinical Dietitian, HFR Fribourg University Training Hospital,
Fribourg, Switzerland
Pierre-Alexandre Fonta, M.Sc. Eng., Independent Senior Research Engineer in Natural
Language Processing, France
Author’s Contribution
AnC and PAF conceptualized the manuscript. AnC wrote the original draft of the manuscript.
AuC and PAF critically reviewed and edited the manuscript. ChatGPT was used only for
demonstration purposes.
Sources of Support
The authors received no financial support for this article.
Conflicts of interest
The authors declare no conflicts of interest.
Acknowledgments
We thank Celine Pabion, BSc, RD, Research Assistant, Department of Nutrition and Dietetics,
Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western
Switzerland, Geneva, Switzerland for proofreading this commentary.
ABSTRACT
Launched in late November 2022, ChatGPT is an unprecedented, publicly available AI chatbot with a simple web interface that can “write” human-like texts. It can be used in a variety of settings, including the practice of nutrition and dietetics. This article describes ChatGPT and discusses the possible opportunities and risks of using it in the practice of Registered Dietitian Nutritionists working in public health, clinical settings, and academia.
Keywords
ChatGPT, chatbot, artificial intelligence, Registered Dietitian Nutritionists, healthcare
professionals.
MAIN TEXT
Introduction
ChatGPT is a conversational service using artificial intelligence (AI), also known as an AI chatbot. It is accessible through a simple web interface and generates natural-language answers in reply to human prompts (i.e., sentences asking a question or raising an issue). Since its launch on November 30, 2022, it has made media headlines and stirred the scientific community (1, 2). This is the first time the public has engaged with an AI chatbot with such enthusiasm: ChatGPT attracted 100 million users just two months after launching (3). It is being integrated into Microsoft applications (e.g., Office, Bing) (4). According to Meta's chief AI scientist, Yann LeCun, the impact of ChatGPT lies more in society’s perception of, and access to, the technology than in technical breakthroughs (5). Accordingly, other AI chatbots using similar technologies, such as Bard by Google, are being launched (6).
AI chatbots such as ChatGPT can be used in a variety of settings and in multiple languages. Optimistic Registered Dietitian Nutritionists (RDNs) will see an opportunity to use AI chatbots for assistance in their daily work, whereas pessimistic ones will view them as tools that could harm our profession and partly replace us. This article helps RDNs working in public health, clinical settings, and academia recognize the possible opportunities and risks of using ChatGPT in their practice.
What’s ChatGPT?
The AI company OpenAI developed ChatGPT, a conversational interface based on GPT-3.5 (GPT-4 as of March 14, only for paid users). GPT-3.5 is a large language model. In other words, ChatGPT is a statistical model that aims to predict the next word based on the previous words, following the content and style expected by humans (natural language) (7). Consecutive words are generated probabilistically using a neural network with billions of parameters. Like a parrot, ChatGPT does not consider the meaning of words; it “reacts” to them. With GPT-3.5, users can write prompts with a maximum length of about 3,000 words (8).
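The next-word mechanism can be illustrated with a toy example. The sketch below is a hypothetical, drastically simplified bigram model that counts word transitions in a tiny made-up corpus; ChatGPT instead uses a neural network with billions of parameters, but the principle of predicting the next word from the previous ones is the same.

```python
from collections import defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then predict the most frequent follower. This is a didactic sketch, not
# how GPT-3.5 is implemented (it uses a neural network, not raw counts).
corpus = "eat more vegetables and eat more fruits and drink more water".split()

follower_counts = defaultdict(lambda: defaultdict(int))
for prev_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[prev_word][next_word] += 1

def predict_next(word):
    """Return the most frequent word observed after `word`, or None."""
    followers = follower_counts[word]
    return max(followers, key=followers.get) if followers else None

print(predict_next("eat"))  # -> more ("more" follows "eat" twice in the corpus)
```

A real large language model replaces the raw counts with learned probabilities over a vocabulary of tens of thousands of tokens, conditioned on thousands of preceding words rather than one.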
To predict words, ChatGPT was trained on a vast amount of text data from websites all around the Internet (training data). ChatGPT, GPT-3.5, and GPT-4 are not open-access models: no data, code, or design details have been released. Therefore, it is not fully transparent which datasets ChatGPT has “digested”; they probably include news articles, forum posts, social media conversations, and the English-language Wikipedia (9). At present, the last training of the model occurred in September 2021 (8). ChatGPT is thus unaware of information appearing afterward, such as the recent controversy about the American Academy of Pediatrics' guidelines for the evaluation and treatment of obesity (10).
ChatGPT can generate texts on a wide variety of topics. Usually, the tone is well adapted to that of the prompt (e.g., lay vs. expert). In addition, OpenAI has put safeguards in place so that as few answers as possible are given to medical, illegal, and morally problematic questions (7, 11). ChatGPT can also generate tables and computer code. More recently, digital images can be generated using ChatGPT from Bing (Microsoft’s search engine) thanks to a text-to-image model (DALL·E), also developed by OpenAI (12).
You can access ChatGPT via the link https://chat.openai.com/chat, sign up, start a “new chat”, and test the AI chatbot. It is in its testing phase and will improve over time. There are currently a free (7) and a paid (20 USD/month) (13) version of ChatGPT. Removal of the free version in the future cannot be excluded.
When RDNs challenge ChatGPT …
ChatGPT can tell users how to eat healthily, prescribe diets, plan recipes, or provide a shopping list based on users’ descriptions of the foods in their pantry or fridge. To test ChatGPT’s abilities, we asked it to define the ideal diet for a person with type 2 diabetes (Figure 1).
Figure 1. ChatGPT answer about the ideal diet for patients/clients with type 2 diabetes
(Version February 13, 2023).
The answer was clear and relatively in line with the information provided on the Internet by the American Diabetes Association (14, 15). In addition, it included a warning about the importance of consulting a registered dietitian or a healthcare provider. In a follow-up prompt, we asked ChatGPT to define a 1-day menu for this type of diet (without repeating “type 2 diabetes”; Figure 2).
Our prompt:
Define the ideal diet for a person with type 2 diabetes.
ChatGPT answer:
The ideal diet for a person with type 2 diabetes should aim to regulate blood sugar levels and
promote overall health. Here are some general guidelines to consider:
1. Carbohydrate intake: Carbohydrates can raise blood sugar levels, so it's important to monitor and
control their intake. Complex carbohydrates, such as whole grains, vegetables, and fruits, are
generally better choices than simple carbohydrates, such as white bread and sugary foods. Fiber-
rich foods can also help regulate blood sugar levels.
2. Protein intake: Proteins are important for building and repairing tissues and maintaining muscle
mass. Good sources of protein include lean meats, fish, beans, nuts, and low-fat dairy products.
3. Fat intake: While fat intake should be moderated, healthy fats, such as those found in nuts, seeds,
and fish, can have beneficial effects on blood sugar and cholesterol levels.
4. Portion control: Eating smaller, more frequent meals throughout the day can help regulate blood
sugar levels. It's also important to monitor portion sizes and avoid overeating.
5. Avoid processed and sugary foods: Processed and sugary foods can cause spikes in blood sugar
levels and should be avoided as much as possible.
6. Consult with a registered dietitian: A registered dietitian can provide personalized
recommendations and help design a meal plan that meets individual needs and preferences.
7. Regular monitoring: It's important for people with diabetes to monitor their blood sugar levels
regularly to ensure that they are staying within their target range.
Remember, dietary changes alone may not be enough to control blood sugar levels, and medication
or insulin may be necessary. It's important to work closely with a healthcare provider to develop an
individualized plan for managing type 2 diabetes.
Figure 2. ChatGPT answer about the recommended 1-day menu for a person with type 2
diabetes (Version February 13, 2023).
Our prompt:
Define a 1-day menu for this type of diet.
ChatGPT answer:
Sure! Here is an example of a one-day menu that follows the dietary guidelines for people with type
2 diabetes:
Breakfast
• 1 small sweet potato, sliced and roasted
• 2 scrambled eggs
• 1/2 small avocado, sliced
• 1 small orange
• 1 cup unsweetened almond milk
Mid-Morning Snack
• 1 small apple
• 1 ounce of mixed nuts (almonds, walnuts, and cashews)
Lunch
• 2 cups mixed greens
• 1/2 cup cherry tomatoes
• 1/2 cup sliced cucumber
• 3 ounces grilled chicken breast
• 1/4 cup chickpeas
• 2 tablespoons balsamic vinaigrette dressing
Afternoon Snack
• 1 small pear
• 1 string cheese
Dinner
• 3 ounces grilled salmon
• 1/2 cup cooked quinoa
• 1/2 cup steamed broccoli
• 1/4 cup roasted red pepper
• 1 tablespoon olive oil and lemon juice dressing
Dessert
• 1 small serving of mixed berries with 1/4 cup unsweetened Greek yogurt
Remember that this is just an example, and it's important to customize your meal plan to your
individual needs and preferences. You may want to consult with a registered dietitian to help you
design a meal plan that meets your nutritional requirements while also helping you manage your
blood sugar levels.
Overall, the menus were in accordance with the Diabetes Plate Method (15). Main meals contained 1) non-starchy vegetables, 2) lean protein foods, and 3) carbohydrate foods, although the suggested amount of carbohydrate foods (i.e., sweet potatoes, chickpeas, and quinoa) was small and seemed inspired by ketogenic diets. When asked, ChatGPT could also provide a step-by-step recipe for the lunch menu with the appropriate cooking time for the chicken in the salad. However, it is important to note that ChatGPT’s answers to the same prompt (opened in a new chat each time) vary over time due to the non-deterministic nature of the underlying model and the regular optimizations of ChatGPT (8). Therefore, it is difficult to know which information a patient/client would receive, especially if they provide other requirements related to sex, age, food preferences, available budget, or cultural background, for instance.
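This non-determinism comes from sampling: at each step, the model draws the next word from a probability distribution rather than always taking the single most likely word. The sketch below is a hypothetical illustration of temperature-scaled sampling over a made-up score table (the words, scores, and function are ours, not OpenAI’s); at temperature 0 the output is deterministic, while higher temperatures produce varying answers.

```python
import math
import random

def sample_next(logits, temperature=1.0, rng=random):
    """Sample a next word from unnormalized scores (a didactic sketch)."""
    if temperature == 0:
        # Temperature 0: always pick the highest-scoring word (deterministic).
        return max(logits, key=logits.get)
    # Softmax with temperature: higher temperature flattens the distribution.
    weights = {w: math.exp(score / temperature) for w, score in logits.items()}
    threshold = rng.random() * sum(weights.values())
    for word, weight in weights.items():
        threshold -= weight
        if threshold <= 0:
            return word
    return word  # numerical safety net

# Made-up scores for the word following "Eat more ...":
logits = {"vegetables": 2.0, "fruit": 1.5, "candy": 0.2}
print(sample_next(logits, temperature=0))    # always "vegetables"
print(sample_next(logits, temperature=1.0))  # varies from run to run
```

Because deployed chatbots typically run with a non-zero temperature, identical prompts can legitimately yield different answers across sessions.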
We repeated the test with the ideal diet for people undergoing hemodialysis. The answer was also relatively detailed and accurate (i.e., lowering the intake of potassium and phosphorus, limiting fluids, increasing the intake of high-quality protein, monitoring carbohydrates, and consulting a renal dietitian). Yet, when we asked for one-week menus, they included foods that are not optimal for hemodialysis patients/clients (e.g., spinach, avocado), without any warning. The menus were also repetitive and similar to those for type 2 diabetes patients/clients, featuring, for instance, the same “trendy” foods (e.g., grilled chicken, salmon).
Finally, we provided ChatGPT with a case study of a patient/client with type 1 diabetes that we usually use with students in nutrition and dietetics, and asked it to generate nutritional diagnoses according to the Nutrition Care Process Terminology (NCPT) (16). It generated six problems relatively in line with the nutritional situation of the patient/client but also included inaccurate statements (e.g., a lack of understanding of food labels, whereas we wrote in the case study that this was not assessed). The nutritional diagnoses tended to be repetitive and generic (lacking uniqueness). Furthermore, they were not stated according to the NCPT and were not generated in the form of Problem, Etiology, and Signs/Symptoms (PES statements): the etiology was often defined, but signs and symptoms were never stated. We made similar observations in another case study we tested, of a patient/client with type 2 diabetes.
A useful tool for getting information?
ChatGPT makes online information about nutrition and dietetics more accessible. Indeed,
before ChatGPT, the public and patients/clients had to type keywords in search engines (e.g.,
Google, Bing) to identify Internet sources, open one or several webpages to find the relevant
information, and synthesize the information themselves. By contrast, ChatGPT provides direct and definite answers to users’ prompts, without the need to open webpages and sort out information. Still, the ability to define an issue, ask related questions, refine prompts, and evaluate the accuracy of the answers remains crucial for AI chatbot users.
Applications combining ChatGPT and grocery shopping, such as Instacart (17), may also
enable communities and patients/clients to ask about recipes and order foods according to
their personal needs (e.g., age, sex, and dietary requirements of household members). This
could be seen as an inexpensive way to access tailored nutrition and dietetics advice, albeit one potentially influenced by the grocery retailer. Because ChatGPT can summarize texts in lay terms, it could also help popularize complex subjects around diseases, nutrition, and dietetics, and it has the potential to enhance education by providing interactive learning experiences. Furthermore, its crucial warnings to users about working with RDNs or healthcare providers may increase the number of people who are aware of our profession and consult us.
Whereas ChatGPT provides quick nutrition and dietetics answers to communities and
patients/clients, it will not remove the numerous well-known social, cultural, economic,
emotional, and psychological barriers hindering healthy eating daily, e.g., price of healthy
foods, lack of time to cook, and lack of social support (18). One of the added values of RDNs
is the ability to assess and consider these barriers influencing the act of eating to build tailored
nutritional counseling for each patient/client. For RDNs, ChatGPT may lead to missed opportunities for in-person contact and care if patients/clients think that AI chatbot advice is good enough. It could also push users toward unqualified nutrition professionals if they are unaware of the difference between certified and uncertified professions in nutrition and dietetics.
RDNs should know that ChatGPT does not reference the sources of information it draws on to generate answers. Therefore, it is impossible to know whether those sources are scientific in nature. When asked to provide peer-reviewed citations, ChatGPT produces references (probably ones frequently mentioned in the webpages included in the training data), but these references do not always exist. For instance, when asked about the consumption of ultra-processed food in Switzerland, ChatGPT made up this reference: Fardet A, et al. (2021). Ultra-Processed Food Consumption and Dietary Quality in a Swiss Population-Based Sample: Evidence from the Swiss National Nutrition Survey. Nutrients, 13(2), 610. The first author is well known in the field but has never published a paper with this title in this journal.
ChatGPT could be seen as a useful tool for RDNs to obtain, day or night, a quick second opinion about nutrition and dietetics issues, provided its limits are clearly understood. However, ChatGPT must not, at the moment, be relied on alone to provide advice or affirm statements; even the OpenAI CEO has said as much (19). First, ChatGPT may “lie”, i.e., generate plausible-sounding but inaccurate answers (7), as seen above with the menus planned for patients/clients undergoing hemodialysis and the case study of the patient/client with type 1 diabetes. Second, ChatGPT makes up or distorts facts, such as the fabricated reference, a phenomenon known in AI as the hallucination problem. Third, ChatGPT does not rank sources, let alone by scientific consensus. Indeed, for nutritional topics with numerous non-scientific viewpoints expressed on the Internet, widespread falsehoods and non-scientific opinions might be spread by ChatGPT. For example, when asked “Are low carb diets good for weight loss?” three times, ChatGPT was rather conservative twice and in favor once (with warnings at the end, though). Fourth, ChatGPT has difficulty establishing causal relations, for instance in clinical reasoning (20). Of note, large language models are improving; these four challenges are active research areas, and solutions might arise in the future.
A useful tool for writing information?
ChatGPT can quickly draft texts without spelling mistakes, although they still need to be proofread and very often amended. This could be useful for generating drafts of clinical notes, e.g., summaries of nutrition care processes in electronic health records or discharge letters (20, 21), which could save RDNs time and increase efficiency.
ChatGPT could also help communicate with communities and patients/clients. It could
formulate answers in email exchanges. Of note, if patients/clients and colleagues also use
ChatGPT themselves, they might send more emails! RDNs should, however, be aware that
ChatGPT may produce prejudiced and offensive texts despite being trained not to (11).
ChatGPT might also be useful to provide tailored, relatively simple, and summarized
explanations around nutrition and dietetics to communities and patients/clients. Furthermore,
it could generate texts for flyers, posters, or tweets to promote healthy nutrition.
RDNs should know that ChatGPT tends to provide verbose answers (7), which may not be suited to communities and patients/clients with limited literacy. Thus, it is worth specifying in the prompts the need for concision and/or the expected number of sentences or lines, asking for bullet points, or simplifying and synthesizing the texts manually. Finally, for privacy and data-protection reasons, RDNs must not enter identifiable patient/client data into ChatGPT prompts, because prompts can be reviewed and used by OpenAI (22).
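These prompting habits can be made systematic. The sketch below is a hypothetical helper (the function name, parameters, and wording are ours, purely illustrative) that assembles a prompt with explicit concision constraints; per the privacy caveat above, no identifiable patient/client data should ever be interpolated into such a prompt.

```python
def build_prompt(topic, max_sentences=5, bullet_points=True, audience="a lay audience"):
    """Assemble a prompt with explicit concision constraints (illustrative only).

    Never include identifiable patient/client data in `topic` or `audience`:
    submitted prompts can be reviewed and used by OpenAI.
    """
    constraints = [f"Answer in at most {max_sentences} sentences."]
    if bullet_points:
        constraints.append("Use bullet points.")
    constraints.append(f"Use simple language suitable for {audience}.")
    return topic + "\n" + "\n".join(constraints)

print(build_prompt("Explain why dietary fiber matters for blood sugar control.",
                   max_sentences=4))
```

Keeping such constraints in a reusable template makes the chatbot’s output length and register more predictable across repeated use.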
A useful tool in academia?
The content and style of texts generated by ChatGPT are often so human-like that it can be difficult for academics to determine whether texts were generated by AI or written by students (23) or researchers (1, 24). For instance, ChatGPT passed a multiple-choice medical examination in Korea, although its score (60.8% of correct answers) was the lowest compared with the 76 “real” medical students (2). ChatGPT can also fool experts in their own research topics. In one study, ChatGPT was asked to generate 50 abstracts based on the titles and journals of abstracts recently published in five high-impact-factor medical journals. Blinded human reviewers correctly identified only 68% of the generated abstracts as not human-written and incorrectly flagged 14% of the original abstracts as generated by ChatGPT (24).
Academics training students in nutrition and dietetics should quickly adapt the way they assess students’ knowledge and competencies. For instance, combining essays with oral examinations is essential to test whether students also master orally what they wrote (when ChatGPT cannot assist them). Giving more weight to grades obtained in practical courses (e.g., workshops) and internships is another option. In scientific essays, the assessors’ expert knowledge is also key to evaluating the quality of the content and of the reference list. Academics must also inform students about plagiarism and academic dishonesty, and warn them that using ChatGPT to do homework potentially decreases their ability to pass exams or practice effectively in their internships. Educating students about information literacy, scientific integrity, and the value of gaining knowledge and skills from in-depth reading, critical thinking, and scientific writing (e.g., defining questions and structuring thoughts) is of utmost importance. Finally, international academic rules to regulate the use of AI in scientific writing are also essential (25). In this sense, plagiarism software that identifies AI-generated texts (e.g., GPTZero) is needed to help academics rapidly spot such texts (26).
ChatGPT can also assist RDNs working in academia in generating test questions and quizzes, as well as in brainstorming pedagogic objectives while developing or revising courses and programs. ChatGPT raises high expectations for helping write scientific papers or grant proposals (e.g., summarizing information, brainstorming hypotheses, suggesting experiments, drafting abstracts), despite the multiple downsides (e.g., persuasive inaccuracies, unknown sources, hidden biases, reasoning errors) (20, 25, 27). Therefore, supervision by a domain expert for proper guidance and verification is always required. In addition, by design (parroting), ChatGPT is unlikely to create original ideas or new knowledge, not to mention that the most recent scientific articles in the field are currently “unknown” to it, as they were not used in the training data.
What is the main use of ChatGPT in RDNs’ practice?
Since many people can experiment with ChatGPT on a large set of tasks and AI chatbots evolve quickly, it is hard to predict future usage. At present, we believe that RDNs should be aware of this technology and of the potential opportunities and risks of using it in practice (Table 1).
Regarding potential opportunities, ChatGPT and other forthcoming AI chatbots can be useful AI assistants for forming a quick second opinion (if used advisedly), summarizing information, and brainstorming. ChatGPT is also useful for drafting texts, which may in turn reduce administrative workload and increase in-person time spent with communities and patients/clients. For communities and patients/clients, ChatGPT may also provide an opportunity to gain awareness of nutrition and dietetics, which may, in turn, strengthen our profession.
At the same time, we should keep in mind that ChatGPT does not always provide accurate
and unbiased information, especially if combined with commercial applications (e.g., ChatGPT
integration in Bing, with advertisements guiding the generated answers). Furthermore, texts
generated by ChatGPT are often verbose, tend to be circular, and lack originality. Using AI
chatbots might also increase the dependence on technology, potentially leading to decreased
critical thinking and clinical judgment. Finally, some students and researchers in nutrition and
dietetics may use it dishonestly to fool readers in essays, tests, and research articles.
Table 1. Potential opportunities and risks of using ChatGPT in RDNs’ practice

Potential opportunities

For RDNs:
• Obtaining a quick second opinion day and night (if used advisedly).
• Brainstorming ideas (e.g., research hypotheses, test questions, pedagogic objectives).
• Summarizing texts quickly.
• Drafting texts with an adaptable tone and no spelling mistakes (e.g., emails, clinical notes, health-promotion material, paper/grant abstracts).
• Increasing in-person time thanks to a reduced administrative workload.
• Getting public recognition and being referred more patients/clients, when ChatGPT warns about consulting an RDN.

For communities and patients/clients:
• Getting direct and interactive answers to questions about healthy eating, nutrition, dietetics, and diseases for free (for now).
• Gaining awareness of RDNs as resources for nutrition and dietetics issues, when ChatGPT warns about consulting an RDN.
• Easing the planning of menus, which could be tailored to personal needs (if specified in the prompts).

Potential risks

For everyone:
• Being unable to get relevant information if unable to correctly define issues, ask related questions, refine prompts, and evaluate the accuracy of answers.
• Relying on made-up, unsourced, and outdated information.
• Relying on incomplete or inaccurate summaries that nonetheless sound plausible.
• Spreading marketed, biased, or dubious information.
• Drafting verbose, generic, repetitive, inaccurate, offensive, biased, or unoriginal texts, especially if not proofread and edited.
• Not respecting privacy and data protection (in submitted prompts).
• Depending on this technology, which may lead to a decrease in critical thinking and clinical judgment.

For RDNs:
• Missing opportunities for in-person contact and care because patients/clients rely only on AI-generated advice.
• Facilitating academic dishonesty.

For communities and patients/clients:
• Turning to uncertified nutrition professionals, if unaware of the difference between them and RDNs, when ChatGPT warns about consulting an RDN.
Conclusions
ChatGPT and other future AI chatbots are neither search engines that retrieve relevant documents nor encyclopedias that compile knowledge. They react to the user’s prompts in a human-like way. AI chatbots can assist RDNs but cannot replace their expertise, judgment, and soft skills. We, as RDNs, should be aware of this technology and promote our profession and competencies over ChatGPT to the general public, communities, and patients/clients. In parallel, we should reinforce our profession to provide high-quality, accurate, and tailored nutritional counseling and care, and to show empathy, attentiveness, and the ability to motivate communities and patients/clients. This can be achieved notably through high-quality training programs and continuing education.
References
1. Else H. Abstracts written by ChatGPT fool scientists. Nature. 2023;613(7944):423.
2. Huh S. Are ChatGPT's knowledge and interpretation ability comparable to those of
medical students in Korea for taking a parasitology examination?: a descriptive study. J
Educ Eval Health Prof. 2023;20:1.
3. Milmo D. ChatGPT reaches 100 million users two months after launch. The Guardian. Feb
2, 2023.
4. Official Microsoft Blog. Introducing Microsoft 365 Copilot – your copilot for work. Available
from: https://blogs.microsoft.com/blog/2023/03/16/introducing-microsoft-365-copilot-your-
copilot-for-work/ (accessed on Mar 26, 2023).
5. Ray T. ChatGPT is 'not particularly innovative,' and 'nothing revolutionary', says Meta's
chief AI scientist. ZDNet. Available from: https://www.zdnet.com/article/chatgpt-is-not-
particularly-innovative-and-nothing-revolutionary-says-metas-chief-ai-scientist/
(accessed on Mar 26, 2023).
6. Google. Try Bard and share your feedback. Available from:
https://blog.google/technology/ai/try-bard/ (accessed on Mar 26, 2023).
7. OpenAI. ChatGPT: Optimizing Language Models for Dialogue. Available from:
https://openai.com/blog/chatgpt (accessed on Mar 26, 2023).
8. OpenAI. Explore the OpenAI API. Documentation. Models. Available from:
https://platform.openai.com/docs/models (accessed on Mar 26, 2023).
9. Brown TB, Mann B, Ryder N, Subbiah M, Kaplan J, Dhariwal P, et al. Language Models
are Few-Shot Learners. 2020. Preprint at arXiv. doi:
https://doi.org/10.48550/arXiv.2005.14165.
10. Hampl SE, Hassink SG, Skinner AC, Armstrong SC, Barlow SE, Bolling CF, et al. Clinical
Practice Guideline for the Evaluation and Treatment of Children and Adolescents With
Obesity. Pediatrics. 2023;151(2).
11. Lamy C. ChatGPT: Testing the moral limits of AI content-generators. Le Monde. Feb 19,
2023.
12. OpenAI. DALL·E: Creating images from text. Available from:
https://openai.com/research/dall-e (accessed on Mar 26, 2023).
13. OpenAI. Introducing ChatGPT Plus. Available from: https://openai.com/blog/chatgpt-plus
(accessed on Mar 26, 2023).
14. American Diabetes Association. What is the Best Diet for Diabetes? Available from:
https://www.diabetesfoodhub.org/articles/what-is-the-best-diet-for-diabetes.html
(accessed on Mar 26, 2023).
15. American Diabetes Association. What is the Diabetes Plate Method? Available from:
https://www.diabetesfoodhub.org/articles/what-is-the-diabetes-plate-
method.html#:~:text=The%20Diabetes%20Plate%20Method%20is,you%20need%20is%
20a%20plate! (accessed on Mar 26, 2023).
16. Academy of Nutrition and Dietetics. Nutrition terminology reference manual (eNCPT):
Dietetics language for nutrition care. Available from: https://www.ncpro.org//default.cfm?
(accessed on Mar 26, 2023).
17. Instacart. Order groceries for delivery or pickup in San Francisco Bay Area. Available from:
https://www.instacart.com/ (accessed on Mar 26, 2023).
18. Chatelan A, Bochud M, Frohlich KL. Precision nutrition: hype or hope for public health
interventions to reduce obesity? Int J Epidemiol. 2019;48(2):332-42.
19. Altman S. [...] it's a mistake to be relying on it for anything important right now. [...]. In:
@sama, editor. Twitter. 2022, Dec 11. Available from:
https://twitter.com/sama/status/1601731295792414720 (accessed on Mar 26, 2023).
20. Cascella M, Montomoli J, Bellini V, Bignami E. Evaluating the Feasibility of ChatGPT in
Healthcare: An Analysis of Multiple Clinical and Research Scenarios. J Med Syst.
2023;47(1):33.
21. Patel SB, Lam K. ChatGPT: the future of discharge summaries? Lancet Digit Health. 2023.
22. OpenAI. ChatGPT FAQ. Available from: https://help.openai.com/en/articles/6783457-
chatgpt-general-faq (accessed on Mar 26, 2023).
23. Graham F. Daily briefing: Will ChatGPT kill the essay assignment? Nature. 2022.
24. Gao CA, Howard M, Markov NS, Dyer EC, Ramesh S, Luo Y, et al. Comparing scientific
abstracts generated by ChatGPT to original abstracts using an artificial intelligence output
detector, plagiarism detector, and blinded human reviewers. 2022. Preprint at bioRxiv. doi:
https://doi.org/10.1101/2022.12.23.521610.
25. van Dis EAM, Bollen J, Zuidema W, van Rooij R, Bockting CL. ChatGPT: five priorities for
research. Nature. 2023;614(7947):224-6.
26. GPTZero. The World's #1 AI Detector with over 1 Million Users. Available from:
https://gptzero.me/ (accessed on Mar 26, 2023).
27. Hutson M. Could AI help you to write your next paper? Nature. 2022;611(7934):192-193.