Questions related to Instructional Design
CALL FOR CHAPTERS
The Evolution of Artificial Intelligence in Higher Education: Challenges, Risks, and Ethical Considerations
Editors: Miltiadis D. Lytras, Afnan Alkhaldi, Sawsan Malik
Publisher: Emerald Publishing
Emerald Studies in Active and Transformative Learning in Higher Education
Scope, Strategy and Topics Covered
This volume serves as a reference edition on the challenges, opportunities, risks, and adoption of Artificial Intelligence in all aspects of Higher Education. With emphasis on the diverse facets of AI, namely the procedural, methodological, technical, and ethical, this book covers the evolution of AI in Higher Education in a holistic way. Case studies, lessons learnt, and research and development projects on the utilization of AI in Higher Education are also covered, promoting the debate on the future of AI in Higher Education.
The aim of this volume is to meet the extensive needs of faculty, administrators, policy makers, and stakeholders in the Higher Education industry for timely and trusted knowledge on the impact of Artificial Intelligence on Higher Education institutions and the wider ecosystem. Our volume is one of the first efforts to accommodate in a single volume the diverse aspects of the phenomenon with a constructive, progressive approach, aiming to investigate the positive footprint of the application of AI in HE.
The selected structure of our volume is also representative of our own unique strategy. The following are the sections that will host the chapters:
- Section 1. AI as a Catalyst for the Higher Education Ecosystem and Value proposition
- Section 2. Threats, Opportunities, Challenges and Risks on the Adoption of AI in HE
- Section 3. A new era of AI-enabled instructional and learning strategies, engagement and interactivity in Higher Education
- Section 4. Enrichment of Learning Experience and Social Impact through AI in Higher Education
- Section 5. Administrative and Ethical issues: Managing AI as a core function of Higher Education Process
- Section 6. ChatGPT, Generative AI, OpenAI special focus: Hype, Functional and Strategic Perspectives on their use in HE institutions
This book provides integrated coverage of the significant items on the relevant agenda, offering a unique value proposition for the area. The following list is indicative, not exhaustive.
- AI as a catalyst for the Higher Education Ecosystem
- AI as an enabler of Digital Transformation in Higher Education: Threats, Promises, implementation strategies and intended impact
- AI as an innovative approach to next-generation instructional methods and learning strategies
- The context of Disrupting Education: How Artificial Intelligence revolutionizes the higher education landscape
- Deployment of Artificial Intelligence in the value chain of Higher Education
- The role of the AI in the implementation of Active Learning in STEAM courses in Higher Education
- Promoting interactivity, inclusion and engagement in Higher Education
- Enhancing interactive learning experiences and student engagement through ChatGPT, Generative AI, OpenAI
- The role of ChatGPT, Generative AI, OpenAI in Revolutionizing Lifelong Learning in Higher Education
- Innovative methodological frameworks for the integration of AI in Higher Education
- Ethical issues on the use of AI in Higher Education
- Using ChatGPT, Generative and Open AI in teaching HE courses
- Enhanced Decision Making in Higher Education Administration with Artificial Intelligence
- Technology Literacy and adoption of AI and ChatGPT, Generative AI, OpenAI in Higher Education
- Adopting ChatGPT, Generative AI, OpenAI for designing training modules for HE Courses
- ChatGPT, Generative AI, OpenAI in Higher Education: Hype, or a new pillar for developing next-generation skills in students?
This edition can serve as a reference edition as well as a teaching book for postgraduate studies on the relevant domain.
- 25 November 2023: Submission of abstracts (use the Emerald template attached)
- 15 February 2024: Submission of full chapters
- 15 March 2024: Final chapters due (review comments incorporated)
- November 2024: Publication
- There is no charge for contributing authors
- Emerald Publishing is offering a complimentary ebook to all contributors
- The volume is Scopus Indexed (typically 3-5 months after publication)
Editors (send your abstract to any of them)
Miltiadis D. Lytras, Effat University, Saudi Arabia, email@example.com
Afnan Alkhaldi, Arab Open University, Kuwait Branch, Kuwait, firstname.lastname@example.org
Sawsan Malik, Arab Open University, Kuwait Branch, Kuwait, email@example.com
I genuinely think that we are on the cusp of a major revolution as I open the floor for a conversation on the changing landscape of education. The rise of artificial intelligence (AI), chatbots, and future technologies should not be viewed as a danger, but rather as catalysts for change, comparable to historically disruptive innovations such as television, radio, and the printing press. These technologies did not replace education; rather, they supplemented and transformed it, prompting educators to investigate, adapt, and incorporate these pedagogical innovations. AI and chatbots have the potential to revolutionize education by providing highly tailored learning experiences, quick feedback, and round-the-clock support. However, their effectiveness depends on how well they are incorporated into the educational continuum.
In this context, I allude to research on Attention-Driven Design, as articulated in "Attention-Driven Design: How Instructional Designers Design to Capture the Learner's Attention." Attention-driven design principles can serve as a guiding framework to harness the transformative power of AI in education. Just as the printing press democratized access to knowledge and transformed information dissemination, AI can enhance our ability to provide tailored education experiences.
We stand at a crucial juncture where we must actively embrace the inevitable evolution of education through AI, rather than passively watching students navigate these changes independently. It is akin to a child growing up without parental guidance, still gaining knowledge about life but lacking the crucial structure and mentorship that adults can offer. As educators and education professionals, it is our responsibility to leverage the potential of AI and guide its integration into our educational systems. We should view artificial intelligence as a bridge connecting humans and technology, not as a challenge to our roles as educators. Rather than fearing AI, we can utilize it to enhance our teaching methodologies, streamline administrative tasks, and place greater emphasis on personalized learning.
The transformation of education is inevitable in light of the introduction of AI and related technologies. However, this transformation need not be disruptive; it can be a catalyst for positive change. Let us encourage our colleagues to unleash their creativity and explore innovative AI integration strategies. This may entail developing new curricula, reevaluating assessment methods, or creating immersive learning experiences achievable through Attention-Driven Design methodologies, as I have detailed in my dissertation and with the assistance of AI. Together, we can shape the future of education in a way that maximizes the benefits of these technological advancements while preserving the core values of effective teaching and learning.
Taking as a guide the 'Berlin Model' ('Berliner Modell' in German) and the 'ADDIE Model' of the instructional process, please share your opinion, experience, or interesting references on any of the following questions.
Which factors of the instructional process (2 conditional and 4 decisional, according to the 'Berlin Model'), and which of its design phases (5, according to the 'ADDIE Model'), have been most impacted by technology?
How has technology impacted each of these factors & design phases?
You may consult (SAMR model), [4, § Introduction], [5, § Features of Online Learning Environment], (Trialogical learning model), (TPACK model), (R2D2 model).
Which model could be more suitable than the 'Berlin Model', as regards the correlated factors, & the 'ADDIE Model', as regards the design phases, of the instructional process?
Featured references:
9. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3725874/ (Russell)
CALLA is a cognitive approach introduced by Anna Uhl Chamot, PhD, and Jill Robins, PhD. I need good references from you about it.
I am conducting an exploratory mixed-methods PhD-level investigation that focuses on situating management in online learning and comprises two phases. The concluded qualitative phase (QUAL) yielded a testable framework for the quantitative phase (quan). I need to test the factors that explain learning success when learners study partially or fully in an online environment. The link below gives access to the measurement instrument, with items requiring scale responses. https://docs.google.com/forms/d/e/1FAIpQLSe2xERY-IX6Cx8aOs5hJw82Xe4f0eoeNYLycS9IQy13Gy8mFA/viewform?usp=sf_link
Given the insular nature of educational institutions, I need the help of research practitioners to reach people who have studied fully or partly online or are doing so. All persons around the world aged 13 and above who have completed all or part of their academic or professional units, courses, or programs online, and those who are self-taught using online repositories, are eligible to participate in this study. Respondents' identities will be kept secret, and their responses will not be divulged to third parties. Only the aggregated data from the entire study will be analyzed to answer the research questions.
Thank you for your willingness to support this cause.
Victor Avasi Ph.D. Candidate Instructional Design and Open Learning Euclid University, Gambia
I've recently published the following article in the Korea Times.
I am curious, how does your university deal with online technology during a pandemic? It would be nice to hear various opinions in this discussion.
Thanks in advance.
Virtual learning environments during pandemic
Nobody could predict the impact that COVID-19 would have on our world; it changed so much of our daily routines. The stigma of living in the new normal is haunting. As unhinged, to some extent, as it might seem, and maybe even unrealistic, the world doesn't stop revolving and functioning even as such a detrimental health crisis falls upon its shoulders.

A lot of changes have had to take place within the ongoing duration of the COVID-19 virus outbreak. One of those changes made online learning a must, but it's quite a win-win situation. It's common knowledge that today's generation is greatly knowledgeable when it comes to technology due to living in a highly digitized world. It's rational to utilize the wonders that technology brings into continuing the practices adopted and implemented by the education sector globally, but now through online learning programs. Its implementation makes sense and was rightful because of schools physically closing due to health restrictions. The world of academia is not the only one switching its methods in such precarious times, but it is a very major change nonetheless.

Online education is primarily centered on internet-powered platforms, and not every teacher and student has equal access to such services. More so, a bigger concern faced by teachers has to do with teaching methods in the online learning set-up. Virtual classes can be intimidating and seem to limit the ways that teachers can impart knowledge to students, forcing them to be creative in the teaching methods they use in order to promote a collaborative and interactive learning environment.

Also, the online education set-up entails a higher average of screen time, which leads to health issues such as poor vision and posture. Similarly, excessive screen time and long video-conference hours can lead to "Zoom fatigue." In simple terms, Zoom fatigue, or fatigue from any video platform, is the feeling of tiredness that a person encounters after a conference call.
Related to this, online teaching can take a toll on the mental health of teachers. Like students, teachers can also feel burnt out from the constant exposure and workload that happens through screens and technological systems.

Stanford University has published an article identifying four factors that contribute to such fatigue. Namely, those are: 1) the overwhelming amount of screen time, 2) the uncomfortable ability to see one's self during conference calls, 3) typical motion exaggeratedly decreasing due to video chats, and 4) video calls increasing the difficulty of cognitive tasks. The first one, when explained, has to do with the stress and social anxiety that comes with the awareness of people staring at you during video chats. The second reason is likened to being constantly followed in a real-life scenario but, instead of actually being followed, people are constantly interacting with you through a screen for a lengthy period of time. Thirdly, it's typical that people stay in one place during a video call, and research is continuously coming up with evidence that cognitive performance is better when an individual performs motion. Lastly, in order to get a message clearly delivered during video calls, more effort is exerted because methods of interaction are limited to the screen and technology in use, unlike in face-to-face interactions.

Virtual learning environments (VLEs), such as Virbela and vAcademia, allow vast opportunities in educational collaboration through the means of virtual environments. Through such VLEs, we are given the chance to use advanced teaching methods by means of voice-powered technology, presentation options, 3D recording, and academic environment simulations. Through the use of VLEs, teachers are able to promote inclusivity and accessibility for their students. VLEs are highly flexible. Also, little by little, the more students engage with them, the more they become attuned to and at ease with online classes.
Another way that educational organizations benefit from VLEs is by providing an environment that fosters in students a wider view of the world. VLEs are highly marketable to students across the globe. The San Diego Times released an article in 2020 showing how the 3D technology of Virbela had a new user growth rate of 78 percent, with half of that coming from the international market. It has proven how 3D technology has paved the way for an upgrade of virtual gatherings, meetings, and events.

At the same time, the platform allows a virtual space where students can interactively meet while still being remote; a virtual campus of sorts like that of Stanford University. This virtual campus offers an interface similar to games such as Club Penguin and The Sims, with similar features allowing users to choose outfits, interact, and go to classes. Through buttons, users are given the options to let their avatar perform simple actions like nodding and waving, all made possible through Virbela. Davenport University also did the same thing, using Virbela to conduct its online classes through a virtual university, as featured in a Detroit Free Press article.

In a 2017 study conducted by Alves et al., it was found that students who have access to virtual learning environments are equipped to achieve good academic performance, and that the higher the rate of access to virtual learning environments, the better their performance will be.

Rushan Ziatdinov (firstname.lastname@example.org) is a professor in the Department of Industrial Engineering at Keimyung University, Daegu.
I am looking for research on the situation where cognitive load, and fatigue, increase due to presenting information the wrong way, i.e., duplicating it (e.g., reading aloud the text on a PowerPoint slide) instead of using the dual-coding method. (In cognitive load theory this is often discussed as the redundancy effect.)
Although there are different models of instructional design, for face-to-face or e-learning modalities, and research has been carried out to evaluate the results of the design in terms of learning or student satisfaction, it is also necessary to know the mastery that a higher-education teacher has of the different aspects that constitute instructional design.
Dear fellow researchers,
while drafting an article about the importance and interplay of previous knowledge and learning with (multiple) external representations (combinations of text + pictures or diagrams, etc.), I stumbled numerous times over cases of Expertise-Reversal Effects that seem not to be explainable in the conventional terms of the Cognitive Load Theory (CLT) so far.
So, I would like to share these findings with you and to invite you to think about alternative explanations.
What is the Expertise-Reversal-Effect (ERE)?
The core idea behind the CLT is that the better one's previous knowledge is organized (as chunks), the less one's working memory is loaded when solving problems or learning new content. This holds true for most of the experimental observations. However, in some cases, high previous knowledge (HPK) leads to lower performance outcomes than those of participants with low previous knowledge (LPK). This effect is called the Expertise-Reversal Effect (ERE): HPK learners profit less, or not at all, from a specific treatment compared with their LPK counterparts.
How is the ERE explained in terms of the Cognitive Load Theory (CLT)?
To explain this contradiction, CLT also proposes an executive function of previous knowledge that guides search-and-find processes. Those cognitive procedures can conflict with the instructional format, just as previous knowledge can conflict with the presented content. So, as Slava Kalyuga states, "if external guidance is provided to learners who have sufficient knowledge base for dealing with the same units of information, learners would have to relate and reconcile the related components of available long-term memory base and externally provided guidance. Such integration processes may impose an additional working memory load and reduce resources available for learning new knowledge."
The fact that previous knowledge may induce additional cognitive load would explain lesser (absolute) learning outcomes of HPK learners with a specific treatment in comparison to their HPK counterparts without treatment. It would also explain lesser learning gains compared to their LPK counterparts with treatment (in case ceiling effects can be excluded).
However, it is difficult to follow this explanation for the case that HPK learners with treatment show lesser (absolute) learning outcomes than their LPK counterparts as this implies (by the interpretation of the CLT) that the instructional treatment must have had an enormous effect on cognitive load, overcompensating any advantages of previous knowledge.
What evidence for, and limitations of, the explanation given by the CLT have been observed?
There are convincing examples that undoubtedly trigger a cognitive conflict between the mental models of the participants and the presented information like in Schnotz & Bannert:
However, these experiments heavily (and intentionally) manipulated previous or presented knowledge to yield their effects. Most treatments are much less pervasive, and therefore their effects in terms of interference between previous knowledge and presented content (including the treatment) should be milder. Furthermore, the ability to ignore treatments such as signaling by color coding is not taken into account by CLT; it has, however, been demonstrated by eye-tracking studies of Richter and Scheiter:
In this study, the recall performance of HPK and LPK learners with that simple treatment is equal, while that of participants without treatment differs significantly, as expected (cf. Fig. 3). The same holds for the comprehension measures in Richter, Wehrle & Scheiter (cf. Fig. 3):
Even more intriguing are findings by Kragten, Admiraal & Rijlaarsdam, who report, from an analysis of diagram difficulties without any treatment, that diagrams with low cognitive demands (i.e., diagrams with low complexity) are even slightly better understood by LPK than by HPK learners. (Diagrams with high complexity instead show the expected characteristics.) Moreover, diagrams that have unfamiliar conventions AND pose high cognitive demands are understood significantly better than those that are either merely complex or merely have unfamiliar conventions (cf. Fig. 2):
These are some of the ERE findings that are particularly surprising and, in my humble opinion, cannot be explained in a plausible way within the framework of CLT.
Is a Dual Processing hypothesis a sufficient candidate for explaining these findings?
Reading the book "Thinking, Fast and Slow" by Daniel Kahneman, I came across the hypothesis (to my knowledge originated by Stanovich and West) that two cognitive processes are postulated to govern problem solving and decision making in economics. According to that theory, most cognitive processes in daily life (and learning) are carried out on an automated basis, relying on previously acquired cognitive schemata (System 1). These processes require minimal mental effort but are prone to errors. However, if System 1 processes do not lead to a sufficient solution, or attention is intentionally shifted to the given problem, System 2 kicks in and starts deeper elaboration processes. So, perceiving problems as hard to solve, or being forced to shift focused attention to a given problem, should significantly decrease the error rate. Also see:
This theory has recently been applied to several fields, but to my knowledge not yet to learning and teaching, and especially not to multimedia instructional design and external representations.
So my Questions for Discussion:
- In your opinion, is there a need for an alternative explanation of the Expertise-Reversal-Effect? (And why do you think so?)
- In your opinion, are Dual Processing Theories a good candidate to explain the given data, or are there even better ways to do so?
- In your opinion, how could one predict an ERE before the experiment, based on CLT or any other theory?
What is the best free software for manually drawing experimental schematic diagrams and crystal structures?
(1) I want to draw different crystal structures like cubic, tetragonal, hexagonal etc. manually, and
(2) I want to draw a schematic diagram of a whole experiment, for example, the schematic diagram of the preparation of thin-film Perovskite Solar Cells or else.
Thanks for your precious time and suggestion in advance.
This large international conference will be held online this year due to COVID. I am submitting a paper to the Technology Enhanced Language Learning track (8). It is about a topic I have never seen in scholarly literature & I am proud of my idea. We'll see if the reviewers like it.
The 21st IEEE International Conference on Advanced Learning Technologies, Online
July 12-15, 2021
Conference Time: GMT (https://time.is/GMT)
We invite submission of papers reporting original academic or industrial research in the area of Advanced Learning Technologies and Technology-enhanced Learning according to the following tracks:
- Doctoral Consortium
- Track 1. Technologies for Open Learning and Education (i-OPENLearn)
- Track 2. Adaptive and Personalised Technology-Enhanced Learning (APTeL)
- Track 3. Wireless, Mobile, Pervasive and Ubiquitous Technologies for Learning (WMUTE)
- Track 4. Digital Game and Intelligent Toy Enhanced Learning (DIGITEL)
- Track 5. Computer Supported Collaborative Learning (CSCL)
- Track 6. Big Data in Education and Learning Analytics (BDELA)
- Track 7. Technology-Enhanced Science, Technology, Engineering and Math Education (TeSTEM)
- Track 8. Technology-Enhanced Language Learning (TELL)
- Track 9. Technology-Enhanced Learning of Disciplinary Practices (TELeaD)
- Track 10. Technology-supported education for people with disabilities (TeDISABLE)
- Track 11. Artificial Intelligence and Smart Learning Environments (AISLE)
- Track 12. Augmented Reality and Virtual Worlds in Education and Training (ARVWET)
- Track 13. Motivational and Affective Aspects in Technology-enhanced Learning (MA-TEL)
All papers will undergo double-blind peer review. Author guidelines and formatting templates can be accessed on the ICALT Author Guidelines webpage. Complete papers are required for review. The expected types of submissions include:
- Full paper: 5 pages
- Short paper: 3 pages
- Discussion paper: 2 pages
- Doctoral Consortium paper (posters): 3 pages
There are many classical fatigue life assessment methods, but most of them have their own limitations. For example, methods for assessing the fatigue life of a material are often not suitable for complex engineering components. If we want to accurately predict the fatigue life of an engineering structure, which aspects should we start from? You are welcome to leave a message.
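As one concrete starting point for the discussion, a minimal stress-life estimate can be sketched with Basquin's relation, sigma_a = sigma_f' * (2N)^b. The material constants below are illustrative values I chose for a generic steel, not data for any specific component; this is exactly the kind of material-level method that, as the question notes, may not transfer to complex components without local stress analysis:

```python
# Minimal stress-life (S-N) sketch using Basquin's relation,
#     sigma_a = sigma_f' * (2N)^b,
# solved for the number of reversals 2N.
# sigma_f' and b below are ASSUMED illustrative constants for a
# generic steel, not measured data for any real component.

sigma_f = 900.0   # fatigue strength coefficient, MPa (assumed)
b = -0.09         # fatigue strength exponent (assumed)

def cycles_to_failure(sigma_a):
    """Estimated cycles to failure at constant stress amplitude sigma_a (MPa)."""
    reversals = (sigma_a / sigma_f) ** (1.0 / b)  # 2N from Basquin's relation
    return reversals / 2.0                        # N, cycles

print(cycles_to_failure(300.0))  # on the order of 1e5 cycles for these constants
```

For a complex structure, such a curve is typically only the last step: one would usually start from the local stress or strain state (e.g., from FEA at notches and welds), apply mean-stress corrections, and accumulate damage over the load spectrum (e.g., Miner's rule).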
Number theory is among the foundations of cryptography, but students sometimes find the theory hard to understand, mostly due to their lack of prior skills and knowledge of that mathematical theory.
Have you dealt with that problem? Have you faced other problems while teaching number theory? How to overcome them?
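One tactic that can help (my own suggestion, not from the thread) is to anchor the abstract theorems in a tiny, fully traceable example before introducing proofs. A toy RSA with the classic textbook primes lets students compute every step (gcd, totient, modular inverse, modular exponentiation) by hand and then check it in code:

```python
# Toy RSA with deliberately small primes, so students can trace every
# number-theoretic step (gcd, Euler's totient, modular inverse,
# Fermat/Euler theorems).  For classroom illustration only: real RSA
# uses primes hundreds of digits long.

from math import gcd

p, q = 61, 53               # two small primes
n = p * q                   # modulus, n = 3233
phi = (p - 1) * (q - 1)     # Euler's totient of n, phi = 3120

e = 17                      # public exponent, must be coprime to phi
assert gcd(e, phi) == 1

d = pow(e, -1, phi)         # private exponent: inverse of e mod phi (Python 3.8+)

m = 65                      # a "message" smaller than n
c = pow(m, e, n)            # encrypt: c = m^e mod n
assert pow(c, d, n) == m    # decrypt recovers m, by Euler's theorem
print(d, c)                 # prints 2753 2790
```

Working backwards from why `gcd(e, phi) == 1` is needed, or why decryption recovers `m`, motivates Bezout's identity, the extended Euclidean algorithm, and Euler's theorem in a context students care about.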
I am working on a project that includes the creation of two technologies for learning languages. I decided that adapting a new instructional design model from scratch would help implement these technologies: the model will be followed to develop the instruction needed for learning languages.
When our institution had to switch rapidly to remote instruction, professors who had not taught remotely before wondered what the best blend of synchronous and asynchronous activities would be. Our Office of Instructional Design developed an instrument to assist them: professors completed a relatively short questionnaire with questions about their course and themselves. These questions led to a recommended model for online instruction, along with a second model also likely to be a good fit. The recommendations are not intended to be binding in any way.
We were unable to locate a similar extant instrument, and are seeking critique or resources that would help us refine the instrument.
I used a framework (incorporated from instructional design theory and instructional design model) to design and develop LMOOC for attitudinal learning.
My question is about appropriate ways to measure the effectiveness of the design and development of the LMOOC. In other words, how effective is this framework in designing an LMOOC for attitudinal learning?
Thank you in advance!
Here's an article about a research project saying that students learn more when taught using active learning strategies:
What do you think?
Most learning management systems still use a similar structure of threading discussions for asynchronous student-to-student and student-to-instructor dialogue. Working from the assumption that robust dialogue is a necessary component of instruction designed within a social constructivist environment, are there any viable, existing or emerging alternatives to the threaded structure of discussions?
Dear RG Colleagues,
As a teacher, I want to know what the consequences are (the general academic policies on cheating) for a student caught in the act of cheating on exams, in your institution or university.
It is perceived that most people are unable to complete their programs on MOOCs, some because the courses tend to be passive and students do not get clear clarifications when they encounter problems. It is also said that the courses are limited to theory, with no practical work.
The following is taken from https://educationaltechnology.net/instructional-design-models-and-theories/:
"An instructional design model provides guidelines to organize appropriate pedagogical scenarios to achieve instructional goals. Instructional design can be defined as the practice of creating instructional experiences to help facilitate learning most effectively. Driscoll & Carliner (2005) states that “design is more than a process; that process, and resulting product, represent a framework of thinking” (p. 9). "
The ID model used in my research is to design and plan for introducing computational thinking concepts to children in primary school.
For the last few years we have been working with several data sets that contain emotion measurements at different levels (self-report, facial action coding, eye tracking, physiological data like heart rate and skin conductance, lexical and sentiment analysis of texts).
We do not naively believe that the components of emotions match all the time, or over longer or shorter time spans. But still we search for correspondences between different emotional components.
Our aim is to measure and to integrate emotions in technology based learning designs.
Are there new ideas how to bring different components of emotions together?
What about micro-emotions, peak values, and long- or short-term patterns; how can they help to identify correspondences?
From a design perspective, when you create an item in Blackboard that may contain many artifacts, the analytics sees all of the artifacts as one. So if you select a content holder called Week 1, which has articles, web links, and videos, Blackboard analytics does not count the individual clicks on each link, article, and video; it registers them all as one click.
I think this not only affects design but, statistically, it definitely decreases the number of accesses recorded within the course.
Any solutions or suggestions?
I am starting an investigation into the feasibility of a set of authoring tools that would allow more experienced performers to develop job aids to be used by less experienced performers. I am looking for pointers that would allow me to bootstrap my literature review.
Instructional design has a number of "prescriptions" that suggest useful strategies to apply to promote learning of various outcome types (i.e., if you are trying to teach someone to use a procedure, apply this instructional sequence). Are there similar prescriptions for job aids and/or cognitive aids? I'd appreciate any pointers to appropriate guidelines that you can provide!
E-portfolios can facilitate teaching and learning processes. At the Faculty of Psychology and Educational Sciences (Ghent University), we noticed that our students often have difficulties reflecting on their learning in a meaningful way. When faced with more complex tasks, such as a Master's thesis or an internship, students also lack the ability to use their developed competences in an integrated manner. An e-portfolio can help students gain insight into their learning process in a more holistic way.
Through an action research design, we used the platform ‘Pebblepad’ as the medium to re-design the ‘Educational Design’ course. The teacher provided support (scaffolds) and formative feedback to guide the group assignment. Students also started an individual reflection portfolio in which they could use their creativity to work out given reflection assignments.
Now we are looking for similar (free) options. Feel free to share your experiences/research with e-portfolios.
Dear respected colleagues,
Kindly share your great views and references. I would be very grateful. Thanks in advance. Best regards
The entire world is witnessing technological revolutions every day. Learning Analytics (LA) is a growing area in the field of Data Analytics, and technology plays a vital role in today's higher-education campuses. Various tools and technologies are used for data collection, data analytics, developing predictive models, etc. It is very necessary for every higher-education institution to think about improving its IT and data-analytics infrastructure to win the global race. How can Learning Analytics be applied to education design?
I need to construct a 3D wooden bicycle trailer for kids. The trailer must also be light!
I don't know where I should start. I thought I'd start with the chassis, but what criteria should I use to design it? The children's weight? Anything more?
The task says that 90% of the material must be wood.
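One common first criterion for the chassis is a rough static strength check of a load-bearing rail. The sketch below models a side rail as a simply supported beam with the child's weight applied at midspan; all dimensions, loads, and the allowable stress for pine are illustrative assumptions, and a real design must also account for dynamic loads, joints, and a safety factor.

```python
# Rough static check for a wooden chassis rail, modelled as a simply
# supported beam with the load (child + cargo) applied at midspan.
# All numbers are illustrative assumptions, not design values.

def max_bending_stress(load_n, span_m, width_m, height_m):
    """Peak bending stress (Pa) for a central point load on a simply
    supported rectangular beam: sigma = M / S = (F*L/4) / (b*h^2/6)."""
    moment = load_n * span_m / 4.0                  # max bending moment, N*m
    section_modulus = width_m * height_m**2 / 6.0   # b*h^2/6 for a rectangle
    return moment / section_modulus

# Example: ~40 kg child (~400 N) shared by two side rails,
# 1.0 m span, 40 mm x 60 mm softwood rail
stress = max_bending_stress(400 / 2, 1.0, 0.040, 0.060)
allowable = 7e6 / 2.0   # assumed ~7 MPa design stress for softwood, safety factor 2
print(f"stress {stress/1e6:.2f} MPa vs allowable {allowable/1e6:.2f} MPa")
```

If the computed stress exceeds the allowable value, the usual levers are a deeper rail (stress falls with the square of the height), a shorter span, or load-sharing members — which is also how you keep the trailer light instead of simply making everything thicker.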
There are people who would like to see high-school-to-industry and business integration in apprenticeship programs.
Are any of our RG community working to bring apprenticeships to the US that are as coordinated with industry and business as those are in Germany?
How can this be achieved in the US?
What obstacles are there to creating such apprenticeships?
Do you have research in this area to share?
More than accessibility, more than Universal Design for Learning, how do you integrate social justice into your approach as an instructional designer or curriculum developer?
I'm thinking about integration into your approach to all content design, not just content related to social justice.
For example, do you provoke your content experts ("SMEs") or wait for them to bring up social justice ideas? Do you even think about the relationship of social justice to your role as an instructional designer or curriculum developer?
Thank you for your thoughts!
We are planning to conduct a training for elementary science teachers on making improvised instructional materials to improve the quality of science teaching. Most of the science teachers in our city are non-science majors, which prompted us to organize a training workshop. What improvised science instructional materials can you suggest that we could make?
Here is an example of what we want to do: an improvised microscope.
Standards-based education has received much attention in many education systems. What is your reflection on standards-based textbooks? What are some practical ideas for developing quality standards-based textbooks, particularly social studies textbooks? What are the differences between standards-based and traditional textbooks?
While I am aware of many ways in which websites and online services can fail, or appear uncomfortable to use, I am looking for specific manifestations of unpleasant design in the digital domain. My definition of unpleasant design is that which promotes social control through discomfort, pain, and persuasion. It raises the value of a product or its surroundings by preventing specific use scenarios such as sleeping on a park bench or loitering in a shopping mall. Unpleasant Design is not about the failure to make beautiful products but about successfully excluding certain social groups and restricting certain uses of objects.
Taking these into account, can there be such a thing as digital unpleasant design? Does anyone have an experience that would be relevant to share in this context? Which websites and online services come to mind when thinking of unpleasant design? Any feedback or suggestion is welcome!
I am planning to conduct a research study entitled "How Teachers in Middle Schools Design Technology Integration Activities". The purpose of the study is to explore the factors influencing how middle school teachers design technology integration activities, how they design the activities, and the challenges teachers face while designing materials for them. The study will focus on a one-to-one technology environment in middle schools and on the teacher as a designer of technology activities. I am confused about the framework: should it come from Human-Computer Interaction or from the instructional technology field?
Moore and Kearsley (2012) state that "Most designers believe that courses should be organized into short, self-contained segments, with frequent summaries and overviews." Are there best practices for determining this measure of time? I suppose using ADDIE, one would eventually arrive at the right mix of content and time. Considering the big data available today, I was just wondering if metrics had been gathered to this end.
I teach with another faculty member in an "Introduction to Interprofessional Health Care" class with over 200 students from 10 different professional programs.
Hi RG community,
While working on a project to support technological uptake in Higher Education I discovered that there is a lot of information on:
1- The difficulties to adopt technology-enhanced learning in Higher education;
2- The strategies for Faculty development as a general problem (connected to the several academics' tasks, from research to pedagogical practices).
My question could be placed at the crossroads of the two issues mentioned above. The thing is: what are the specific faculty-development strategies that would support quality eLearning adoption?
I know there must be plenty of literature, but your opinion and concrete cases would be of great help!
I recently piloted an assignment in my teaching class that required students to present their message in an infographic. I am looking for a systematic/standardized strategy for assessment.
Marty Nystrand noted that the best predictor for a dialogic spell was a student question. Gordon Wells reminds us that the question that most needs answering is one students want to answer. (Apologies for my poor paraphrasing). I want to look at student questioning and am interested in identifying classroom based studies that included a look at student questions/questioning patterns... I welcome your responses...thanks
I am sure that everyone would agree that the students' experiences are different in the classroom compared to online learning. These experiences, of course, depend greatly on the pedagogy and instructional design.....
But can we generalize that one is better than the other? If so, what can we do to improve the one that is not as good?
Could anyone please provide a list of the substantial differences between the two, as they mostly overlap? Also, is there any substantial difference between instructional design and learning design?
I'm planning to look into Augmented Reality in Higher Education. Can the experts here share some tried-and-tested examples? I know there are a lot of free and paid tools online, but it would be nice to hear from experience. By the way, I would like to experiment at the HE level, not the school level. Aurasma is a good tool; is there anything better than Aurasma in terms of interactivity?
So my school-of-education reviewer is concerned about the deans choosing the participants. Now, in a Delphi study, defining "expert" is important. Many people allow the participants to decide for themselves, but for extra care, I'm going through the deans. To decide whom to recommend, they must apply all of the following criteria:
· educators with at least 5 years of teaching experience and have been actively teaching with direct contact with learners in the past 6 months (Pilcher, 2015),
· educators who teach an oral communication course (oral communication, public speaking, business and professional communication, small group discussion, speech communication, or other communication classes that have a learning objective of increasing oral communication competency) with more than 30 people (Russ, 2009),
· educators who, according to their dean (or department chair), have demonstrated the use of effective teaching methods in the large section lecture format (Nworie, 2011; Pilcher, 2015) and
· educators who are willing to contribute to the Delphi exercise and can contribute the time necessary to complete the study (Ziglio, 1997, p. 14).
· educators with effective communication skills.
So, in my humble opinion, these are very clear criteria. However, the reviewer is concerned that the selection process introduces a confounding element into the research. He suggested a rubric. I can't figure out what it would do except restate the above and offer (fits the criteria / does not fit the criteria).
Can you think of another option?
My reviewer has said "Your study has a percentage goal for participation (participant response rate). An important consideration for this study is the non-response bias survey (an error analysis) that should be conducted at the end of any three-round Delphi Study. This should include an investigation of the non-response error by conducting some type of contingency table analysis (e.g., Fischer’s Exact Test). Essentially, this should hopefully show no statistical difference between the responders and non-responders in your research. This is an important component of any doctoral level Delphi study."
I'm not sure how to address this. This study is qualitative (according to the requirements of my institution). Any thoughts?
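For what it's worth, the contingency-table analysis the reviewer describes (Fisher's exact test on responders vs. non-responders) can be run even without a statistics package. Below is a pure-Python sketch for a 2×2 table; the counts are entirely hypothetical.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for a 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed table."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    denom = comb(n, c1)
    def prob(k):  # P(top-left cell == k) with fixed margins
        return comb(r1, k) * comb(r2, c1 - k) / denom
    observed = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(prob(k) for k in range(lo, hi + 1)
               if prob(k) <= observed * (1 + 1e-9))

# Hypothetical table: agreement with a Round-1 item among 12 responders
# (9 agree, 3 disagree) vs. 8 non-responders reached later (5 vs. 3)
p = fisher_exact_two_sided(9, 3, 5, 3)
print(f"p = {p:.3f}")
```

A large p-value is consistent with (though does not prove) an absence of non-response bias; in practice you would build one such table per key characteristic or response variable and report each result. This does not conflict with a qualitative design — it is an error analysis on participation, not on the content of the responses.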
I'm looking for studies interpreting answers of open response tasks from large-scale or classroom assessment for instructional purposes or for the development of distractors for next generation closed-format assessments.
I got a task to design a "chairless chair", also known as the invisible chair, which helps workers rest their legs after standing for too long. My problem is a limited budget, so I plan to replace the hydraulic system with a mechanical system using a hinge and bolt, but I'm worried it might not work. Does anyone have any suggestions about this?
If you don't know what a chairless chair is, I have attached a link with information about it.
This is an invitation to participate in the next special issue of EDUCAR titled “Emerging technology for formative assessment in pre-professional practice”
End date: 26 February 2016
Pre-professional practices have assumed a relevant role in curricula, with great concern for continuous evaluation linked to processes for improving the development of competences. This monograph aims to inquire into the opportunities Information and Communication Technology offers to evaluate learning, for learning, and from the authentic learning of professional practice. In recent years, new emerging technologies have been consolidating: technologies oriented to the evaluation of the learning process (digital portfolios, blogs, and audio-visual or virtual resources) and of collaborative work (multimedia annotations and digital rubrics).
Contents could include themes such as:
- New opportunities for learning in pre-professional practice
- Evaluation of processes with ICT
- Audiovisual and virtual supports
- Evaluation of collaborative work
Antonio Bartolomé (UB)
Manuela Raposo (Uvigo)
I'm looking for a free instrument that can replace the Learning and Study Strategies Inventory (LASSI). Does anyone have an evaluation tool, or know of one, that can be used to gauge the success of a course or intervention?
In negative scoring, for example, after the total score for the student's submission is tallied, penalties are then applied by deducting marks for language mechanics. Does this method benefit the student, or should these penalties be included in the rubric when tallying the student's total score instead of being applied afterwards? Your comments would be greatly appreciated.
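To make the two tallying approaches concrete, here is a small sketch with made-up numbers contrasting (a) a post-hoc deduction for language mechanics with (b) mechanics included as an explicit rubric criterion.

```python
# Sketch of the two marking approaches in the question, with made-up
# numbers: (a) deduct language-mechanics penalties after the content
# total is tallied, vs. (b) build mechanics into the rubric as its own
# criterion so its weight is explicit to the student.

def post_hoc_penalty(content_scores, mechanics_errors, penalty_per_error=1):
    """Tally content first, then deduct penalties (floored at zero)."""
    total = sum(content_scores)
    return max(0, total - mechanics_errors * penalty_per_error)

def rubric_integrated(content_scores, mechanics_score):
    """Mechanics is simply one more rubric criterion in the total."""
    return sum(content_scores) + mechanics_score

content = [18, 16, 14]   # e.g. argument, structure, evidence, each out of 20
print(post_hoc_penalty(content, mechanics_errors=6))   # 48 - 6 = 42 (of 60)
print(rubric_integrated(content, mechanics_score=7))   # 48 + 7 = 55 (of 70)
```

One argument for (b) is transparency: the student sees in advance exactly how much weight mechanics carries and can earn marks for good mechanics, rather than only losing marks after the fact; (a) also needs a floor, or an error-laden but content-rich paper can score below a weak one.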
I do not wish to be presumptuous, as I do not teach students actively, I teach teachers how to teach! I do have a certification as a teacher trainer, and I use some out-of-the-box methodology to help teachers and schools (and school systems) to cope with (very) unruly/risky behavior students in K-12. As you all know, or should, if you do not have the individuals’ attention from the first moment you open your mouth, you cannot communicate, thus you cannot teach. Moreover, the real goal of a teacher should be to “intrinsically motivate” each student, thus they will begin to “teach themselves through natural curiosity” and the organic in-born need to please.
The first issue with Mr. Faize's question is that a good percentage of any class will be full of “introverted” students who are not active verbal participators. Learn to recognize them on the first day, and direct questions to them that encourage participation. Give your students the Myers-Briggs personality test (or a similar instrument), and tell them that if the class (as a team) seriously applies itself to assure the best results, a reward will follow. Stress that there are no wrong answers and no time limit, and that they should be completely truthful to receive the most accurate results. The reward is usually something like getting out of class 5 minutes early (last class of the day) or bringing in fruit for everyone; be creative! (Seriously, tie the class reward system to team participation from the start, and they will begin to self-monitor and, verbally only, call each other out on non-participation and bad behavior as a team unit.)
If anyone is interested in other ideas, please read the response to Mr. Faize’s post I submitted.
Edward J. Files, AAS, AGS, BSM, MBA (SAPM, CTT, CGPW)
Doctoral Student, Education, Org Development
I am currently developing the course shell for an Internship course that has an organisational work process report as an assignment. I need to post up some research resources that students can use in completing this assignment. I'll appreciate any research resources you can share.
Both models, as far as I know, work on the trinity of learning outcomes, assessment evidence, and learning activities.
I have developed a real-time observation metric that produces a proportional breakdown of class activity across a number of domains. Can anyone suggest an approach/method for validating the metric (e.g., interrater agreement across multiple observers, multiple sites, etc.)?
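One common starting point is chance-corrected agreement between pairs of observers coding the same sessions. Below is a minimal Cohen's kappa sketch (the activity labels and ratings are illustrative); for more than two observers you would move to Fleiss' kappa or an intraclass correlation coefficient.

```python
# Chance-corrected interrater agreement: a minimal Cohen's kappa for two
# observers coding the same class segments. Labels are illustrative.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    chance = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - chance) / (1 - chance)

# Two observers classifying 10 one-minute segments into activity domains
a = ["lecture"] * 4 + ["groupwork"] * 3 + ["qa"] * 3
b = ["lecture"] * 4 + ["groupwork"] * 2 + ["qa"] * 4
print(round(cohens_kappa(a, b), 3))
```

Conventionally, values above roughly 0.6 are read as substantial agreement, though any threshold should be justified for your coding scheme; reporting kappa per domain (not just overall) will show whether particular categories are hard to distinguish in real time.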
How can one show that a design solution based on the 4C/ID model is effective? Is there any framework, set of tips, or questionnaire that learners could use to evaluate the 4C/ID design produced by their teacher or trainer?
This is an area I have studied and wish to continue pursuing.
What's the difference between "Instructional Design" and "Learning Design"?
Definitions are hard to find. Two examples:
- "The term instructional design refers to the systematic and reflective process of translating principles of learning and instruction into plans for instructional materials, activities, information resources, and evaluation. An instructional designer is somewhat like an engineer." (Smith, Patricia L., and Tillman J. Ragan. Instructional design. New York, NY: Wiley, 1999.) vs.
- A learning design “documents and describes a learning activity in such a way that other teachers can understand it and use it in their own context. Typically a learning design includes descriptions of learning tasks, resources and supports provided by the teacher”(Donald, Blake, Girault, Datt, & Ramsay, 2009).
At first I thought the perspective was the main difference: instructional design focuses on teachers and their plans, while learning design focuses on tasks/learning opportunities for students. But no definition backed this differentiation. Some definitions also indicate that learning design is for online courses.
Also, the term "design" made me wonder whether learning design and instructional design are about concrete plans for a lesson or teaching sequence, or about re-usable patterns that could be adapted for concrete lessons or sequences. The latter would cast experts as the creators of these designs, whereas the former would cast teachers as the users. So, who are the planners/designers?
Do you know about research that uses dance or music to enhance learning of programming?
On YouTube I got to know what Erik Stern and Karl Schaffer do within math (see http://www.youtube.com/watch?v=Ws2y-cGoWqQ). I found it fascinating, as they pinpoint the importance of the instructional design.
In programming education I like dance-programming, where students act as a programmer and a robot that has to execute the instructions given. The interplay between man and machine depends on the distinctiveness of the instructions.
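A toy version of the same idea in code: one student writes the instructions, the other executes them literally as the robot, and anything ambiguous is rejected outright, which makes the cost of vague instructions visible. The command set here is my own illustrative assumption.

```python
# Toy "dance-programming" interpreter: the robot executes instructions
# literally on a grid and rejects anything outside its command set,
# mirroring the classroom exercise where a student-robot only performs
# unambiguous instructions. Commands here are illustrative.

def run_robot(program):
    """Execute (command, argument) pairs on a grid robot facing north."""
    headings = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W
    x, y, h = 0, 0, 0
    for command, arg in program:
        if command == "forward":
            dx, dy = headings[h]
            x, y = x + dx * arg, y + dy * arg
        elif command == "turn":                    # arg: +1 right, -1 left
            h = (h + arg) % 4
        else:
            raise ValueError(f"ambiguous instruction: {command!r}")
    return x, y

print(run_robot([("forward", 2), ("turn", 1), ("forward", 3)]))  # → (3, 2)
```

Debugging the program when the robot ends up in the wrong place is exactly the reflective step the physical exercise provokes: was the instruction wrong, or was it merely underspecified?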
I have divided my data set into a training set, a validation set, and a test set, but during the training process the validation data does not work. I mean that my network is overtrained and overfitting: it is a 100% match with the training set, but the result does not closely match the test set. I have divided my data set as follows.
% Inputs: 24 features per sample; 40 samples in total
p = sample_d(1:24,:);
% Data division: 30 train / 5 validation / 5 test, by index
net = feedforwardnet(10);            % hidden-layer size: a value to tune down
net.divideFcn = 'divideind';
net.divideParam.trainInd = 1:30;
net.divideParam.valInd   = 31:35;
net.divideParam.testInd  = 36:40;
net.performFcn = 'sse';
[net1, tr] = train(net, p, t);       % t = target matrix for all 40 samples
a = sim(net1, p(:, tr.testInd));     % evaluate on the test partition only
% If overfitting persists: use fewer hidden units, or try regularization