NATURE HUMAN BEHAVIOUR 1, 0028 (2017) | DOI: 10.1038/s41562-016-0028 | www.nature.com/nathumbehav 1
comment
PUBLISHED: 1 MARCH 2017 | VOLUME: 1 | ARTICLE NUMBER: 0028
Towards artificial intelligence-based assessment systems
Rose Luckin
‘Stop and test’ assessments do not rigorously evaluate a student’s understanding of a topic. Artificial
intelligence-based assessment provides constant feedback to teachers, students and parents about how
the student learns, the support they need and the progress they are making towards their learning goals.
Decades of research have shown that knowledge and understanding cannot be rigorously evaluated through a series of 90-minute exams. The prevailing exam paradigm is stressful, unpleasant, can turn students away from education, and requires that both students and teachers take time away from learning. And yet, globally, we persist in relying on these blunt instruments, sending students off to universities and the workplace ill-equipped for their futures.
Perhaps one reason for the long-lasting persistence of ‘stop and test’ forms of assessment is that the alternatives available so far have been unattractive and equally, or even more, unreliable than current examination systems. For example, within the school education system, marks from work that students complete as part of their course have formed part, or all, of their exam result. Fears about the extent to which such coursework is truly the sole work of the student have reduced the attractiveness of this option and we have moved back towards exams. In higher education, ‘open book exams’ have been used to reduce the pressure on students to remember large amounts of information. This type of approach can help, but it tackles only a small part of the overall problem, in this case, the pressure on memory. Other stressful and unreliable features remain, such as the exam conditions, the very limited range of the assessment, and the accuracy of marking.
However, the situation is now different and a realistic and economically attractive alternative lies at our fingertips. We have the technology to build a superior assessment system — one based on artificial intelligence (AI) — but we now need to see if we have the social and moral appetite to disrupt tradition.
AI is everywhere
AI can be defined as the ability of computer systems to behave in ways that we would think of as essentially human. AI systems are designed to interact with the world through capabilities, such as speech recognition, and intelligent behaviours, such as assessing a situation and taking sensible actions towards a goal1. The use of AI in our day-to-day life has increased exponentially: we use the intelligent search behind Google, the AI voice recognition and knowledge management in the iPhone’s personal assistant, Siri, and navigation tools such as Citymapper to help us travel effectively in cities. Clever AI has penetrated general use to become so useful that it is not labelled as AI anymore2. We trust it with our personal, medical and financial data without a thought, so why not trust it with the assessment of our children’s knowledge and understanding?
AI and assessment
The application of AI to education has been the subject of academic research for more than 30 years, with the aim of making “computationally precise and explicit forms of educational, psychological and social knowledge which are often left implicit”3.

Figure 1 | A simple Open Learner Model for tracking how a child is using the help facilities of a piece of science software. The map in the dialogue box entitled ‘Activities’ depicts the area of the curriculum that the child is studying, with each node representing a curriculum topic. When the user clicks on a node in this map, the bar chart below and to the left of the map indicates the level of difficulty of the work that the child has completed while working on this topic, and the dots on the ‘dice’ below and to the right of the map indicate how much help the child has received. Figure courtesy of Ecolab (Luckin, 2016).
The evidence from existing AI systems that assess learning as well as provide tutoring is positive with respect to their assessment accuracy4. AI is a powerful tool to open up the ‘black box of learning’, by providing a deep, fine-grained understanding of when and how learning actually happens.
In order to open this black box of
learning, AI assessment systems need
information about: (1) the curriculum,
subject area and learning activities that each
student is completing; (2) the details of the
steps each student takes as they complete
these activities; and (3) what counts as
success within each of these activities
and within each of the steps towards the
completion of each activity.
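A minimal sketch of how these three kinds of information might be represented, before any AI technique is applied to them. The class and field names below are illustrative assumptions, not taken from any published system:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One step a student may take towards completing an activity."""
    description: str
    is_correct: bool  # (3) what counts as success, at the step level

@dataclass
class Activity:
    """(1) A learning activity tied to a curriculum topic."""
    topic: str
    steps: list[Step] = field(default_factory=list)

    def success(self) -> bool:
        # (3) what counts as success for the whole activity:
        # here, simply that every recorded step was correct.
        return bool(self.steps) and all(s.is_correct for s in self.steps)

@dataclass
class StudentRecord:
    """(2) the detailed steps each student takes as they work."""
    student_id: str
    completed: list[Activity] = field(default_factory=list)

# Example: one activity with two steps, only one of them correct.
activity = Activity(topic="fractions",
                    steps=[Step("find common denominator", True),
                           Step("add numerators", False)])
record = StudentRecord("s001", [activity])
# activity.success() -> False, because one step was incorrect
```

A real system would hold far richer success criteria per step; the point is only that all three categories of information must be made explicit before they can be analysed.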
AI techniques, such as computer modelling and machine learning, are applied to this information and the AI assessment system forms an evaluation of the student’s knowledge of the subject area being studied. AI assessment systems can also be used to assess students’ skills, such as collaboration and persistence, as well as students’ characteristics, such as confidence and motivation. The information collection and processing carried out by an AI assessment system to form an evaluation of each student’s progress takes place over a period of time. Unlike the 90-minute exam, this period of time may be a whole school semester, a year, several years or more.
The output from AI assessment software provides the ingredients that can be synthesized and interpreted to produce visualizations (Fig. 1). These visualizations, referred to as Open Learner Models (OLMs), represent a student’s knowledge, skills or resource requirements and they help teachers and students understand their performance and its assessment5. For example, an AI assessment system collects data about students’ achievements, their emotional state, or motivation. This data can be analysed and used to create an OLM to: (1) help teachers understand their students’ approach to learning so that they can shape their future teaching appropriately; and (2) help motivate students by enabling them to track their own progress and encouraging them to reflect on their learning.
AIAssess (Box1) is a generic AI
assessment system that exemplies just one
approach to assessing how much a student
knows and understands. e system is
suitable for subjects such as mathematics
or science and is based on existing research
tools6,7. However, there are many dierent
AI techniques — such as natural language
processing, speech recognition and semantic
analysis — that can be used to evaluate
student learning, and an appropriate mix of
tools would be required for other subjects,
such as spoken language or history, and
skills such as collaborative problem-solving.
The cost of AI assessment
Building AI systems is not cheap and a large-scale project would certainly need extremely careful management. There is no reliable estimate of the cost of a scaled-up AI assessment system that could assess multiple school subject areas and skills.

One way of getting a glimpse of the scale of initial investment needed to develop a national AI assessment system would be to look at the costs of other large AI projects. In January 2016, the Obama administration announced that it planned to invest US$4 billion over a decade (US$400 million per year) to make autonomous vehicles viable8, and in November 2015, Toyota committed to an initial investment of US$1 billion over the next five years (US$200 million per year) to establish and staff two new AI and robotics research and development operation centres9. If we add
AIAssess is intelligent assessment software designed for students learning science and mathematics: it assesses as students learn. AIAssess was developed by researchers at UCL Knowledge Lab through multiple evaluated implementations5,6. Specifically, AIAssess provides activities that assess and develop conceptual knowledge by offering students differentiated tasks of increasing levels of difficulty as the student progresses. In order to ensure that the student keeps persevering, AIAssess provides different levels of hints and tips to help the student complete each task. It assesses each student’s knowledge of the subject matter, as well as their metacognitive awareness (knowledge of their own ability and learning needs), which is a key skill possessed by effective students and a good predictor of future performance.
To assess each student’s progress, AIAssess uses: a Knowledge Component that stores AIAssess’s knowledge about science and mathematics so that it can check if each student’s work is correct; an Analytics Component that collects and analyses data about each student’s interactions with the software; and a Student Model Component that constantly calculates and stores what AIAssess judges to be each student’s subject knowledge and metacognitive awareness.
The AIAssess Knowledge Component is fine-grained so that it can generate correct and incorrect steps toward a solution, not just correct and incorrect answers. For any given task that the student is required to perform, AIAssess can generate all possible steps that a student might take as they complete each task.
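The article does not publish AIAssess’s knowledge representation, but a toy knowledge component for a single task type, solving ax + b = c, might enumerate fine-grained correct steps, and plausible incorrect ones, like this (the misconceptions listed are invented examples):

```python
from fractions import Fraction

def solution_steps(a, b, c):
    """Correct fine-grained steps for solving a*x + b = c."""
    return [
        ("subtract b from both sides", f"{a}x = {c - b}"),
        ("divide both sides by a", f"x = {Fraction(c - b, a)}"),
    ]

def buggy_steps(a, b, c):
    """Plausible incorrect steps, so that a student's work can be
    matched against known misconceptions rather than just marked wrong."""
    return [
        ("added b instead of subtracting it", f"{a}x = {c + b}"),
        ("forgot to divide by a", f"x = {c - b}"),
    ]

# 2x + 3 = 11  ->  2x = 8  ->  x = 4
steps = solution_steps(2, 3, 11)
# steps[1] -> ('divide both sides by a', 'x = 4')
```

Matching a student’s submitted step against both lists is what lets a system assess the route to a solution, not only the final answer.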
The AIAssess Analytics Component collects each student’s interactions with the software. Specifically, it collects data about each step the student takes towards a task solution, the number of hints or tips that the student requires to successfully complete each step and each task, and the difficulty level of each task the student completes.
The AIAssess Student Model Component uses outputs from the Analytics Component to strengthen or weaken its judgement about every student’s:

• Knowledge and understanding of each concept in a mathematics or science curriculum, by assessing each student’s ability to complete a solution step, or entire task, correctly without any hints or tips.

• Potential for development in their knowledge and understanding of each concept in a mathematics or science curriculum, by assessing each student’s ability to complete a solution step, or entire task, correctly with a particular level of hints or tips.

• Metacognitive awareness of their knowledge and understanding, and the extent to which they need to use hints and tips to succeed, by assessing each student’s accuracy in determining the level of hints or tips they need in order to complete a solution step correctly, and in evaluating the level of difficulty at which they can succeed.
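The article does not specify the update rule behind ‘strengthen or weaken’. One simple possibility is an exponential moving average over step outcomes, in which an unaided correct step counts for more than a heavily hinted one; the evidence weighting and learning rate below are illustrative assumptions:

```python
def update_mastery(mastery, correct, hint_level, max_hints=3, rate=0.2):
    """Move a mastery estimate towards the evidence from one step.

    A correct step counts fully when completed unaided (evidence 1.0),
    shades towards 0.0 as more hints are needed, and an incorrect step
    contributes no positive evidence at all.
    """
    evidence = (1.0 - hint_level / max_hints) if correct else 0.0
    return (1 - rate) * mastery + rate * evidence

m = 0.5  # prior estimate for one concept
for correct, hints in [(True, 0), (True, 1), (False, 0), (True, 0)]:
    m = update_mastery(m, correct, hints)
# m ends near 0.59: two unaided successes outweigh one failure
```

Tracking the same quantity separately for hinted and unhinted steps would correspond to the ‘potential for development’ judgement; comparing the hints a student requests with the hints they turn out to need would correspond to the metacognitive one.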
At any point in time, AIAssess can produce a visualization (Fig. 1) that illustrates its judgements about a student’s performance on a particular task, across a set of tasks, and across all tasks completed. This Open Learner Model can be interrogated so that teachers and learners can trace the evidence that supports each judgement the software makes.
Box 1 | AIAssess.
the estimated costs of making autonomous vehicles viable, this suggests an annual budget of US$600 million for a complex AI project. It therefore seems reasonable to suggest that a country, such as England, might need to spend the equivalent of US$600 million (£500 million) per year to make AI assessment a reality for a set of core subjects and skills, at least to start with until the upfront system development costs have been covered and the focus could shift to maintenance and improvement.
It is also hard to estimate the cost of the current exam system to make any comparison. There are no publicly available up-to-date data about the costs of the existing English exam system. The most recent information is in a 2005 report, which was prepared by PricewaterhouseCoopers for the then exam regulator, the Qualifications and Curriculum Authority (QCA)10. This report estimated the cost of the English school exam system as £610 million per annum (Table 1).
If we use Bank of England historical inflation rate data to convert this to a figure for 2015, then the figure is about £845 million (US$1.03 billion). Although the English examination system is not the same in 2016 as it was in 2005, it is not simpler and is unlikely to be any less expensive, so a figure of £845 million as an estimate of the cost of the English exam system in 2016 seems conservative. Although designing a nationwide learning assessment system may well be more complex than designing autonomous vehicles, comparing the level of investment in an existing complex AI project to the cost of the current examination system in England puts the enterprise of building such a system within a realistic context.
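The arithmetic behind these figures can be checked in a few lines; the cumulative inflation factor (about 1.385) and the exchange rate (about 1.22 USD per GBP) are reverse-engineered from the figures quoted above, not official Bank of England values:

```python
cost_2005_gbp_m = 610            # PwC/QCA estimate, in £ millions
inflation_2005_to_2015 = 1.385   # approximate cumulative factor
gbp_to_usd = 1.22                # approximate 2015/16 exchange rate

cost_2015_gbp_m = round(cost_2005_gbp_m * inflation_2005_to_2015)
cost_2015_usd_bn = round(cost_2015_gbp_m * gbp_to_usd / 1000, 2)
# cost_2015_gbp_m -> 845, cost_2015_usd_bn -> 1.03
```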
We also need to bear in mind that the initial outlay for an AI assessment system would be much greater than the ongoing development and maintenance costs. This is in contrast to the human-resource-heavy exam systems, for which costs inevitably rise each year with the increasing number of students, and therefore of examiners, and with inflation.
Social equality
e benets of developing an AI assessment
approach go beyond economics. Education
is the key to changing people’s lives, and
yet the changes that education makes to
people’s lives are not always for the better.
e less able and poorer students in society
are generally least well served by education
systems. Wealthier families can aord to
pay for the coaching and tutoring that can
help students access the best schools and
pass exams. AI would provide a fairer, richer
assessment system that would evaluate
students across a longer period of time
and from an evidence-based, value-added
perspective. It would not be possible for
students to be coached specically for an AI
assessment, because the assessment would
be happening ‘in the background’ over
time, without necessarily being obvious to
the student. AI assessment systems would
be able to demonstrate how a student deals
with challenging subject matter, how they
persevere and how quickly they learn when
given appropriate support. In addition,
national AI assessment systems would also
oer support and formative feedback to help
students improve.
Ethical concerns
The ethical questions around AI in general are equally, if not more, acute when it comes to education. For example, the sharing of data introduces a host of challenges, from individual privacy to proprietary intellectual property concerns. If we are to build scaled AI assessment systems that will be welcomed by students, teachers and parents, it will be essential to work with educators and system developers to specify data standards that prioritize both the sharing of data and the ethics underlying data use. It is also essential that we use the older AI approaches that involve modelling as well as the more modern machine-learning techniques. The modelling approach to AI can make transparent the AI system’s reasoning in a way that machine-learning techniques cannot, and it will be essential to be able to explain the assessment decisions made by any AI assessment system and constantly provide informative feedback to students, teachers and parents.
Looking forward
How do we progress from the current system to achieve a step change in assessment using AI? We need to advance on three fronts. Socially, we need to engage teachers, learners, parents and other education stakeholders to work with scientists and policymakers to develop the ethical framework within which AI assessment can thrive and bring benefit. Technically, we need to build international collaborations between academic and commercial enterprise to develop the scaled-up AI assessment systems that can deliver a new generation of exam-free assessment. And politically, we need leaders to recognize the possibilities that AI can bring to drive forward much-needed educational transformation within tightening budgetary constraints. Initiatives on these three fronts will require financial support from governments and private enterprise working together. Initially, it may be more tractable to focus on a single subject area as a pilot project. This approach would enable us to firm up the costs and demonstrate the benefits so that we can free teachers and students from the burden of examinations.
Rose Luckin is Professor of Learner Centred Design,
UCL Knowledge Lab, Institute of Education,
University College London, 23–29 Emerald Street,
London WC1N 3QS, UK.
e-mail: r.luckin@ucl.ac.uk
References
1. Luckin, R., Holmes, W., Griffiths, M. & Forcier, L. B. Intelligence Unleashed: An Argument for AI in Education (Pearson, 2016); http://go.nature.com/2jwF0zx
2. Bostrom, N. & Yudkowsky, E. in Cambridge Handbook of Artificial Intelligence (eds Frankish, K. & Ramsey, W. M.) 316–334 (Cambridge Univ. Press, 2011).
3. Self, J. Int. J. Artif. Intell. Educ. 10, 350–364 (1999).
4. Hill, P. & Barber, M. Preparing for a Renaissance in Assessment (Pearson, 2014).
5. Mavrikis, M. Int. J. Artif. Intell. Tools 19, 733–753 (2010).
6. Luckin, R. & du Boulay, B. Int. J. Artif. Intell. Educ. 26, 416–430 (2016).
7. Bull, S. & Kay, J. Int. J. Artif. Intell. Educ. 17, 89–120 (2007).
8. Spector, M. & Ramsey, M. U.S. proposes spending $4 billion to encourage driverless cars. The Wall Street Journal (14 January 2016); http://go.nature.com/2jZePEM
9. Toyota will establish new artificial intelligence research and development company. Toyota http://bit.ly/2jRt1gW (5 November 2015).
10. Memorandum Submitted by Association of School and College Leaders (ASCL) (UK Parliament, 2007); http://go.nature.com/2jpIBBN
Competing interests
The author declares no competing interests.
Table 1 | The cost of the English examination system (2005).

                                      Direct costs   Time costs   Total
QCA core costs                        £8m            –            £8m
QCA NCT costs                         £37m           –            £37m
Awarding body costs                   £264m          –            £264m
Exam centres: invigilation            –              £97m         £97m
Exam centres: support and sundries    £61m           £9m          £70m
Exam centres: exams officers          –              £134m        £134m
Total costs                           £370m          £240m        £610m

Source: a memorandum submitted by the Association of School and College Leaders (ASCL) to the House of Commons Select Committee on Children, Schools and Families10. NCT, national curriculum tests.
... Research indicates a wealth of studies focusing on teachers' experience of using AI-driven tools in the educational domain. Some researchers have delved into teachers' attitudes towards the incorporation of AI in English language teaching (Chounta et al., 2022;Sumakul et al., 2022), while others have scrutinized the application of AI-based tools for evaluation purposes (Luckin, 2017). Moreover, some empirical studies have concentrated on teachers' conceptualizations of AI's role in education (Yau et al., 2023), whereas others have measured teachers' trust in using AI technologies (Nazaretsky et al., 2022). ...
... Comprehension of these intertwined concepts is critical to the potential development of AI-driven education. Luckin (2017) state that AI tools are beneficial in terms of their usage in personalized assessment. According to Luckin (2017), AI can create evaluating instruments based on students' individual needs and provide instant feedback as was the case in the study conducted by Karpova (2020) in relation to AI tool Write & Improve. ...
... Luckin (2017) state that AI tools are beneficial in terms of their usage in personalized assessment. According to Luckin (2017), AI can create evaluating instruments based on students' individual needs and provide instant feedback as was the case in the study conducted by Karpova (2020) in relation to AI tool Write & Improve. Moreover, Luckin (2017) claims that AI assessing tools can replace formal, traditional high-stake exams with more formative and continuous assessment. ...
Article
Full-text available
This study investigates the effectiveness of feedback provided by teachers versus feedback generated by the Write & Improve platform in enhancing the writing skills of senior undergraduate students enrolled in a “two foreign language” program at a private university in Kazakhstan. The quasi-experimental design involved four teachers, each teaching one control and one experimental class, totaling eight groups of students. Pre- and post-tests were conducted over a period of five weeks, focusing on task achievement, coherence and cohesion, lexical resource, grammar and accuracy, and overall score. Data analysis included descriptive statistics, Mann-Whitney U tests for pre-test comparisons, and MANCOVA analyses for post-test comparisons. Results show no significant difference in the impact of Write & Improve feedback compared to traditional teacher feedback across multiple dimensions of the writing test, both within individual teachers’ classes and when combined. Longitudinal analysis reveals fluctuating scores over time with no consistent improvement. Thus, the study concludes that the Write & Improve tool is equally effective as teacher feedback in improving students’ writing skills. This implies that educational institutions can potentially integrate technology-based feedback systems like Write & Improve alongside traditional teaching methods to enhance student learning outcomes in writing proficiency.
... These points suggest that AI technology could revolutionize assessment practices, especially in marking and providing feedback. Despite the traditional view that formative assessment relies heavily on personal, text-based, or verbal feedback (Børte et al., 2023;Sadler, 1989), AI has shown success in formative assessments, particularly in the USA (Attali, 2013;Bridgeman, 2013;Luckin, 2017;Zhai & Nehm, 2023). As Whithaus (cited in Shermis & Burstein, 2013: vii) notes, ignoring the role of software systems and feedback-providing agents in students' writing processes is unrealistic in the twenty-first century. ...
... To synthesize this section, integrating AI into educational assessments has gained momentum, with studies showing that AI enhances grading efficiency, reduces bias, and offers accurate feedback (Bridgeman, 2013;Moon & Pae, 2011;Pinot de Moira, 2013). AI tools have proven successful in formative assessments, particularly in the US (Attali, 2013;Luckin, 2017), revolutionizing teaching, learning, and assessment practices (Halaweh, 2023;Mena-Guacas et al., 2023). AI's benefits in education include increased engagement, motivation, learner interaction (Hawes & Arya, 2023;Lin et al., 2021;Nazari et al., 2021), and improved academic performance (Khan et al., 2021). ...
Article
There has been a surge in employing artificial intelligence (AI) in all areas of language pedagogy, not the least among them language testing and assessment. This study investigated the effects of AI-powered tools on English as a Foreign Language (EFL) learners' speaking skills, psychological well-being, autonomy, and academic buoyancy. Using a concurrent mixed-methods design, the study included 28 upper-intermediate EFL students from an Ethiopian university. We gave the Michigan Language Proficiency Test to evaluate degrees of proficiency before the TOEFL iBT speaking section, which used ChatGPT for scoring and feedback. Speaking abilities were assessed using pretests, immediate posttests, and delayed posttests. Furthermore, we evaluated the impacts on psychological well-being, autonomy, and academic buoyancy using narrative frames. We used one-way repeated measurements to examine the quantitative data and thematically evaluated the qualitative data. According to the results, speaking abilities, psychological well-being, learner autonomy, and academic buoyancy showed notable increases. The results suggest that by improving skill development , offering individualized feedback, and meeting students' emotional and psychological needs, AI systems like ChatGPT have the capacity to transform language assessment and pedagogy. Encouraging the incorporation of AI technologies to enhance educational outcomes and provide a more flexible and adaptable learning environment, the study presents important implications for various stakeholders.
... Using machine learning algorithms to learn from a system and improve over time, AI can generally be defined as the capacity of a computer (or a robot controlled by a computer) to demonstrate reasoning, learning, and expression akin to that of humans (e.g., Luckin, 2017). ...
Article
Full-text available
This study explores the generative artificial intelligence (GAI) prompting practices of Australian primary and secondary educators in the first year following the advent of OpenAI's ChatGPT. Following a brief training workshop on prompting GAI for pedagogical purposes, 38 teachers uploaded 252 prompts they deemed 'pedagogically useful' and 19 rated as 'pedagogically poor' via an online form. Out of the participating teachers, 35 also joined a semi-structured interview discussing how they integrated GAI into their daily teaching. The prompt data reveal teachers' GAI use cases span the development of instructional tasks, enhancing creativity, defining, explaining and/or summarising concepts and texts, differentiation of instructional materials and content for students of varying proficiency, assessment-related tasks and administrative and organisational tasks. Thematic analysis of interview data reveals teachers' perceptions of prompting GAI for expanding ideas and approaches, designing assessment-related tasks, modelling students' use of GAI, and personal (non-teaching) applications. Overall, the findings provide empirical insights into what primary and secondary educators are prompting GAI for, how and why they are doing so, and how this is linked to teaching in schools in Australia.
Chapter
In the dynamic 21st-century educational landscape, technology is transforming teacher education and professional development. This chapter explores how cutting-edge digital solutions are revolutionizing traditional approaches, enhancing how educators acquire new skills, refine pedagogical practices, and advance professionally. Online learning platforms and learning management systems (LMS) provide flexible, accessible continuous learning opportunities. These platforms support diverse learning modalities, including microlearning, gamification, and social learning, fostering a culture of lifelong learning among teachers. This chapter also addresses challenges such as ensuring data privacy, overcoming resistance to change, and maintaining human interaction in a digital environment. Strategies for successful implementation are discussed, offering a comprehensive guide for educational institutions seeking to optimize teacher education and professional development in a digital world.
Chapter
The chapter aims to investigate the complex relationship between the exploitation of artificial intelligence (AI) technologies in the educational sector and certain human rights, highlighting concerns regarding the rights to privacy and non-discrimination. The proposed analysis focuses on the K-12 education context, prioritizing children's rights and risks related to student data exploitation. The chapter begins with an overview of the development of learning analytics (LA) and AI technologies, proceeding to an analysis of their costs and benefits. This is followed by an in-depth examination of the right to privacy and data protection, investigating the specifics in the European Union (EU) and United States (US) contexts. It then addresses the issue of algorithmic non-discrimination, especially in the use of student assessment techniques, and investigates the right to education, particularly inclusive education.
Chapter
Technology has become an integral part of the present time. Advancements in technology have drastically changed the working, learning, and interaction style of the people. With the help of innovations, life became easier and faster. Artificial intelligence (AI), a recent development, has caught the attention of people by its capacity to work like humans. Although there are various advantages of AI in education, but it is not free from adverse effects on learning. It was observed that AI may encourage dishonesty and jeopardize academic integrity. The main goals of the chapter are to highlight the benefits and drawbacks of artificial intelligence in education. The chapter will highlight AI's continued significance in education.
Article
This paper investigates the effects of large language model (LLM) based feedback on the essay writing proficiency of university students in Hong Kong. It focuses on exploring the potential improvements that generative artificial intelligence (AI) can bring to student essay revisions, its effect on student engagement with writing tasks, and the emotions students experience while undergoing the process of revising written work. Utilizing a randomized controlled trial, it draws comparisons between the experiences and performance of 918 language students at a Hong Kong university, some of whom received generated feedback (GPT-3.5-turbo LLM) and some of whom did not. The impact of AI-generated feedback is assessed not only through quantifiable metrics, entailing statistical analysis of the impact of AI feedback on essay grading, but also through subjective indices, student surveys that captured motivational levels and emotional states, as well as thematic analysis of interviews with participating students. The incorporation of AI-generated feedback into the revision process demonstrated significant improvements in the caliber of students’ essays. The quantitative data suggests notable effect sizes of statistical significance, while qualitative feedback from students highlights increases in engagement and motivation as well as a mixed emotional experience during revision among those who received AI feedback.
Chapter
Full-text available
Childhood is the period in which individuals' physical, emotional and mental development is most rapid. This report presents a panorama of change and continuity by reviewing the statistical transformations of children in Turkey since 2000, from demographic data and living conditions, to their situation in core areas such as health, education and social welfare, through to their access to digital technologies. Titled "Sayılarla Türkiye'nin 2000 Sonrası Çocuk Karnesi" (Turkey's Post-2000 Child Report Card in Numbers), the study aims to document developments in this field by compiling key data on children's quality of life. For anyone interested in the subject, the report offers a starting point for understanding children's current situation and planning for their future needs. Children's well-being is one of the most fundamental indicators of a society's social and economic development. Under evolving technology and changing socio-economic conditions, research on children's living standards, education, access to health services and overall quality of life is of vital importance to policymakers and practitioners. Academic research has shown that childhood is a formative period that shapes individuals' adult lives and their participation in society (Heckman, 2006). For this reason, ensuring that children grow up in a healthy environment from an early age, enjoy equal opportunities in education and are protected in line with principles of social justice is critical for both individual and societal development. Moreover, as emphasized in the United Nations Convention on the Rights of the Child, children's health and education standards, welfare levels, access to social services and socio-economic conditions are fundamental human rights (UNICEF, 1989). This study analyses children's basic needs, such as education, health, housing, security and justice, through numerical data, and examines both the progress society has made in these areas and the challenges it still faces.
In this context, the chapter titled "Geçmişten Günümüze Sayısal Çocuk Karnesi" (A Numerical Child Report Card from Past to Present) comprehensively addresses the various dimensions of children's living conditions in Turkey and reveals long-term trends.
Article
This paper on artificial intelligence in education (AIEd) has two aims. The first: to explain to a non-specialist, interested reader what AIEd is: its goals, how it is built, and how it works. The second: to set out the argument for what AIEd can offer teaching and learning, both now and in the future, with an eye towards improving learning and life outcomes for all. Computer systems that are artificially intelligent interact with the world using capabilities (such as speech recognition) and intelligent behaviours (such as using available information to take the most sensible actions toward a stated goal) that we would think of as essentially human. At the heart of artificial intelligence in education is the scientific goal to make knowledge, which is often left implicit, computationally precise and explicit. In other words, in addition to being the engine behind much ‘smart’ ed tech, AIEd is also designed to be a powerful tool to open up what is sometimes called the ‘black box of learning,’ giving us more fine-grained understandings of how learning actually happens. Although some might find the concept of AIEd alienating, the algorithms and models that underpin ed tech powered by AIEd form the basis of an essentially human endeavour. Using AIEd, teachers will be able to offer learners educational experiences that are more personalised, flexible, inclusive and engaging. Crucially, we do not see a future in which AIEd replaces teachers. What we do see is a future in which the extraordinary expertise of teachers is better leveraged and augmented through the thoughtful deployment of well-designed AIEd. We have available, right now, AIEd tools that could support student learning at a scale previously unimaginable by providing one-on-one tutoring to every student, in every subject.
Existing technologies also have the capacity to provide intelligent support to learners working in a group, and to create authentic virtual learning environments where students have the right support, at the right time, to tackle real-life problems and puzzles. In the near future, we expect that teaching and learning will increasingly be supported by the thoughtful application of AIEd tools. For example, by lifelong learning companions powered by AI that can accompany and support individual learners throughout their studies - in and beyond school - and new forms of assessment that measure learning while it is taking place, shaping the learning experience in real time. If we are ultimately successful, we predict that AIEd will help us address some of the most intractable problems in education, including achievement gaps and teacher retention. AIEd will also help us respond to the most significant social challenge that AI has already brought - the steady replacement of jobs and occupations with clever algorithms and robots. It is our view that this provides a new innovation imperative in education, which can be expressed simply: as humans live and work alongside increasingly smart machines, our education systems will need to achieve at levels that none have managed to date. True progress will require the development of an AIEd infrastructure. This will not, however, be a single monolithic AIEd system. Instead, it will resemble the marketplace that has developed for smartphone apps: hundreds and then thousands of individual AIEd components, developed in collaboration with educators, conformed to uniform international data standards, and shared with researchers and developers worldwide. These standards will also enable system-level data collation and analysis that will help us to learn much more about learning itself – and how to improve it. 
Moving forward, we will need to pay close attention to three powerful forces as we map the future of artificial intelligence in education, namely pedagogy, technology, and system change. Paying attention to the pedagogy will mean that the design of new edtech should always start with what we know about learning. It also means that the system for funding this work must be simultaneously opened up and refocused, moving away from isolated pockets of R&D and toward collaborative enterprises that prioritise areas known to make a real difference to teaching and learning. Paying attention to the technology will mean creating smarter demand for commercial-grade AIEd products that work. It also means the development of a robust, component-based AIEd infrastructure, similar to the smartphone app marketplace, where researchers and developers can access standardised components that have been developed in collaboration with educators. Paying attention to system change will mean involving teachers, students, and parents in co-designing new tools, so that AIEd will appropriately address the inherent “messiness” of real classroom, university, and workplace learning environments. It also means the development of data standards that promote the safe and ethical use of data. Said succinctly, we need intelligent technologies that embody what we know about great teaching and learning, packaged in enticing consumer-grade products, which are then used effectively in real-life settings that combine the best of human and machine. We do not underestimate the new thinking, inevitable wrong turns, and effort required to realise these recommendations. However, if we are to properly unleash the intelligence of AIEd, we must do things differently - via new collaborations, sensible funding, and (always) a keen eye on the pedagogy. The potential prize is too great to act otherwise.
Article
In 1999 we reported a study that explored the way that Vygotsky’s Zone of Proximal Development could be used to inform the design of an Interactive Learning Environment called the Ecolab. Two aspects of this work have subsequently been used for further research. Firstly, there is the interpretation of the ZPD and its associated theory that was used to operationalize the ZPD so that it could be implemented in software. This interpretation has informed further research about how one can model context and its impact on learning, which has produced a design framework that has been successfully applied across a range of educational settings. Secondly, there is the Ecolab software itself. The software has been adapted into a variety of versions that have supported explorations into how to scaffold learners’ metacognition, how to scaffold learners’ motivation and the implications of a learner’s goal orientation upon their use of the software. The findings from these studies have informed our understanding of learner scaffolding and have produced consistent results to demonstrate the importance of providing learners with appropriately challenging tasks and flexible support. Vygotsky’s work is as relevant now as it was in 1999: it still has an important role to play in the development of educational software.
Article
Human-Computer Interaction modelling can benefit from machine learning. This paper presents a case study of the use of machine learning to develop two interrelated Bayesian Networks for modelling student interactions within Intelligent Learning Environments. The models predict (a) whether a given student's interaction is effective in terms of learning and (b) whether a student can correctly answer questions in an intelligent learning environment without requesting help. After discussing the requirements for these models, the paper presents the particular techniques used to pre-process and learn from the data. The case study discusses the models learned from data collected as students interacted in their own time and location. The paper concludes by discussing the application of the models and directions for future work.
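The kind of prediction the abstract describes can be sketched as a toy Bayesian network queried by enumeration. Everything here is illustrative: the variable names (prior knowledge K, effective interaction E, correct unaided answer C) and all conditional probabilities are made up for the sketch, not taken from the paper's learned models.

```python
# Toy two-layer Bayesian network: K -> E, and (K, E) -> C.
# K: student's prior knowledge is high; E: the interaction is effective;
# C: the student answers correctly without requesting help.
# All probabilities below are invented for illustration only.
p_k = {True: 0.4, False: 0.6}                      # P(K)
p_e_given_k = {True: 0.8, False: 0.3}              # P(E=1 | K)
p_c_given_ke = {(True, True): 0.9, (True, False): 0.6,
                (False, True): 0.5, (False, False): 0.1}  # P(C=1 | K, E)

def joint(k, e, c):
    """Joint probability P(K=k, E=e, C=c) from the chain rule."""
    pe = p_e_given_k[k] if e else 1 - p_e_given_k[k]
    pc = p_c_given_ke[(k, e)] if c else 1 - p_c_given_ke[(k, e)]
    return p_k[k] * pe * pc

def posterior_c(evidence_e):
    """P(C=1 | E=evidence_e), marginalising out the hidden variable K."""
    num = sum(joint(k, evidence_e, True) for k in (True, False))
    den = sum(joint(k, evidence_e, c)
              for k in (True, False) for c in (True, False))
    return num / den

print(round(posterior_c(True), 3))   # → 0.756
print(round(posterior_c(False), 3))  # → 0.18
```

Observing that an interaction was effective raises the predicted probability of an unaided correct answer from 0.18 to about 0.76 under these invented numbers; real systems learn such conditional probability tables from interaction logs rather than writing them by hand.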
Toyota will establish new artificial intelligence research and development company. Toyota http://bit.ly/2jRt1gW (5 November 2015).
Bull, S. & Kay, J. Int. J. Artif. Intell. Educ. 17, 89–120 (2007).
Luckin, R. & du Boulay, B. Int. J. Artif. Intell. Educ. 26, 416–430 (2016).
Self, J. Int. J. Artif. Intell. Educ. 10, 350–364 (1999).
Hill, P. & Barber, M. Preparing for a Renaissance in Assessment (Pearson, 2014).
Spector, M. & Ramsey, M. U.S. proposes spending $4 billion to encourage driverless cars. The Wall Street Journal (14 January 2016).
Bostrom, N. & Yudkowsky, E. in Cambridge Handbook of Artificial Intelligence (eds Frankish, K. & Ramsey, W. M.) 316–334 (Cambridge Univ. Press, 2011).