Science topics: Quantitative Social Research › Evaluation
Evaluation - Science topic
Evaluation in all areas.
Questions related to Evaluation
Hello professor,
I am a postgraduate student at the School of Physical Education of Dalian University. I am currently working on my master's thesis, "Research on the Construction of the Technical and Tactical Evaluation System for Male U12 School Football Players - Taking the Football Featured Schools in Jinpu New District, Dalian City as an Example". To build a technical and tactical evaluation system for male U12 school football players, we have converted the various indicators into questions and formed the final expert survey scale. This questionnaire is the first round of the Delphi expert survey and an important part of my master's thesis. Based on the needs of this research, the Delphi questionnaire will be administered two to three times. Please evaluate the importance of the indicators represented by the items in the scale, drawing on your professional knowledge and your understanding of the research content. Your answers are crucial to the successful completion of this research, and your suggestions will add real value to the study. Sincere thanks for your support and cooperation. I wish you good health and smooth work!
Experts in Finance, Islamic Finance, Environmental Economics, and related fields are invited to share their credentials for the evaluation of Ph.D. theses. My WhatsApp is 0092-3215866992.
1. Name:
   Ph.D. specialization area:
2. Position/designation:
3. Field of specialization (Finance, world economy, international relations, etc.):
4. Mailing address (institutional preferred):
5. Official email address:
6. Contact number (landline, WhatsApp, or mobile preferred):
7. Institution and country:
Thank you in advance for your consideration.
Hello everyone, I am a grad student currently taking a course on research and evaluation in education. I would like to know your thoughts on research vs. evaluation. Do you feel there are distinguishing characteristics between them, or do you feel they overlap? Is there one that you prefer? As a high school algebra teacher, I believe evaluation has the biggest influence in education, especially when I conduct formative and summative assessments with my students. Thank you so much for taking time out of your day to answer my question, and I look forward to hearing back from you.
Today, we shall make great attempts to parse through academic language for words that seem simple enough yet require special attention.
Analyzing the text entitled "Research and Evaluation in Education and Psychology: Integrating Diversity With Quantitative, Qualitative, and Mixed Methods" by Donna M. Mertens, I am attempting to define and delineate between the words research and evaluation.
Research is a means of gauging the effectiveness, presence, quality, or absence of something, using a mapped-out plan or strategy to answer a question or test a hypothesis.
Evaluation is what happens after results are gathered; it is a means of assessing the data amassed through research.
Research thus appears to be a preliminary step, a prerequisite to the evaluation that ultimately answers the question that prompted the inquiry in the first place.
As a graduate student at Arizona State University in the Mary Lou Fulton College of Education, I am beginning my Capstone process. In one of our classes, we have been asked to develop definitions of "Research" and "Evaluation" in our own words.
Research and Evaluation
· Research is the collection of information with the intent of increasing knowledge and understanding on a specific subject.
· Evaluation is the practice of judging performance against a specific set of criteria or standards for accomplishment.
In comparing and contrasting "Research" and "Evaluation," I noticed these specific items.
Compare and Contrast
· Similarities – Both Research and Evaluation should be grounded in empirical evidence. In each, reliable and valid evidence is collected for analysis.
· Differences – The purpose of research is to collect information to explain existing bodies of knowledge or generate new theories. The purpose of evaluation is to assess the understanding or performance against a specified standard.
In your experience as educators or professionals, are there marked differences between these concepts, or have they become synonymous?
Hello! I am a graduate student at Arizona State University taking an introduction to educational research course. In an effort to explore research and evaluation, our instructors have asked us to seek input from the academic community on the differences between the two practices. How do you think the two practices contrast? Is there any overlap between them, or are they entirely separate from one another? Thanks so much for your response!
Subject: Invitation for Ph.D. Thesis Evaluation in Marketing
Dear Professors and Associate Professors,
Greetings!
I am writing to invite esteemed academicians in the discipline of Commerce, particularly those affiliated with the ResearchGate forum both in India and abroad, to serve as external examiners for the evaluation of Ph.D. theses under my supervision. The research work is in the field of Marketing, and we are seeking experienced professionals to provide critical and valuable assessments.
If you are interested in participating as an examiner, we kindly request you to share your bio-data and contact profile at your earliest convenience. Please send the details to my email: kes7brinda@gmail.com.
Your contribution will be invaluable in enhancing the quality and rigor of our research, and we truly appreciate your support in this academic endeavor.
Thank you for considering this request, and I look forward to your positive response.
Warm regards,
Dr. N. Kesavan
Associate Professor,
Department of Commerce,
Annamalai University,
Annamalai Nagar – 608 002. Tamil Nadu, Republic of India.
Email: kes7brinda@gmail.com
🚀 Build a Simple Linear Regression Model | Step-by-Step Guide Using Real-World Data 📊
Are you looking to strengthen your understanding of Linear Regression? Look no further! In this step-by-step guide, I walk you through building a Simple Linear Regression Model from scratch using real-world data. 🎯
🔍 In this video, you'll learn:
- The fundamentals of Linear Regression and how it works
- How to preprocess real-world data for modeling
- Hands-on implementation using Python and its libraries
- Evaluating model performance with key metrics like R-squared and MSE
👨💻 Whether you're a beginner or brushing up on your skills, this tutorial offers practical insights and code walkthroughs to help you get started in data science and machine learning.
🎥 Watch the full video here: https://youtu.be/CMbjN913mg8
#LinearRegression #MachineLearning #DataScience #AI #Python #ML #RegressionModel #RealWorldData #TechTutorials #ProfessorRahulJain #AIForEveryone
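For readers who want the gist without the video, here is a minimal "from scratch" sketch of the workflow it describes, using synthetic data (the video's own dataset is not reproduced here): fit y = a·x + b by least squares, then score the fit with MSE and R-squared.

```python
# Minimal simple-linear-regression sketch on synthetic data.
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 200)
y = 3.0 * x + 5.0 + rng.normal(0, 1, 200)        # true slope 3, intercept 5, noise sd 1

# Closed-form least-squares estimates
a = np.cov(x, y, bias=True)[0, 1] / np.var(x)    # slope = cov(x, y) / var(x)
b = y.mean() - a * x.mean()                      # intercept

# Evaluate the fit
y_hat = a * x + b
mse = np.mean((y - y_hat) ** 2)                  # mean squared error
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)  # R-squared

print(f"slope={a:.2f} intercept={b:.2f} MSE={mse:.3f} R^2={r2:.3f}")
```

With real-world data you would swap the synthetic `x`, `y` for your preprocessed columns; the estimation and the two metrics stay the same.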
I've recently come across several journals that raise serious concerns regarding their legitimacy, yet some of them have managed to secure a spot in the Scopus database. A prime example is the International Journal of Chemical and Biochemical Sciences (ISSN 2226-9614), which has a notably poor-quality website that clearly doesn't meet the standards expected of reputable academic journals. Although this journal has since been delisted from Scopus, the fact that it was ever included is alarming.
Another example is Kexue Tongbao/Chinese Science Bulletin (https://www.kexuetongbao-csb.com/), which, despite its unprofessional presentation, holds a Q2 ranking in Scopus.
This brings up several important questions:
- How do journals like these manage to bypass Scopus' evaluation standards and achieve high rankings?
- Is there a possibility that these journals are engaging in unethical practices to manipulate their inclusion and ranking in Scopus?
- Could there be political or other non-academic factors influencing these decisions within the Scopus community?
- What measures should be taken to prevent such journals from misleading researchers and degrading the integrity of academic publishing?
I’m interested in hearing the community's thoughts, particularly from those with experience in academic publishing, journal evaluation, and Scopus indexing.
This should help stimulate a discussion on the practices and potential issues within the academic publishing world.
Evaluating the high impact of cybercrime on the telecommunication sector in the South African market.
Evaluation of the fisheries resources of selected project areas
Sample collection and preparation for stock assessment
Evaluation is a part of the teaching-learning process. Many teachers, however, fall into the habit of listening to themselves, concerning themselves mainly with what they are saying instead of listening to and evaluating what learners are saying.
In any professional education, students' competency is evaluated and used as the measure of the outcome of the teaching-learning process.
How, then, are these two differentiated in curriculum development?
What are the differences between research and evaluation?
As a graduate student at ASU, I was given the task to answer and ask this question. In your experience, what are the similarities and differences between research and evaluation? How do you separate one from the other?
H-index and citation evaluations of academicians: how reliable are they?
I am looking for any corresponding research topic related to the question above.
I’m currently learning about the similarities and differences between research and evaluation in my graduate course at ASU.
As an instructional designer, I conduct informal research to learn about new projects, and I create structured evaluation plans to identify the success of projects. So, my experience with research and evaluation is dichotomous; they are mutually exclusive concepts.
I’m curious about others' experiences where research is a subset of evaluation, or vice versa. Would you share examples from your perspective?
I am a graduate student in Arizona State University's Learning Design and Technologies program currently taking a course on research and evaluation in education. Based on your experience, how would you describe the most significant distinctions between the practices of "research" and "evaluation"? Do you find there is meaningful overlap between the two? How do you see both practices fitting into your work?
I have built a hybrid model for a recognition task that involves both images and videos. However, I am encountering an issue: precision, recall, and F1-score all show 100%, while the accuracy is reported as 99.35%-99.9%. I have tested the model on various videos and images (related to the experiment data, including separate data), and it seems to perform well. Nevertheless, I am confused about whether this level of accuracy is acceptable. In my understanding, if precision, recall, and F1-score are all 100%, the accuracy should also be 100%.
I am curious if anyone has encountered similar situations in their deep learning practices and if there are logical explanations or solutions. Your insights, explanations, or experiences on this matter would be valuable for me to better understand and address this issue.
Note: an ablation study was conducted over different combinations. In the model I am confused about, accuracy, precision, recall, and F1-score are very low without these additional combinations, and the loss and validation accuracy are very high on the other combinations.
Thank you.
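A small numpy-only sketch (with hypothetical labels, not the questioner's model) confirms the logic in the question's last sentence: precision and recall of exactly 1.0 for every class leave no false positives or false negatives anywhere, so accuracy is forced to 1.0 as well; conversely, any misclassification drags macro-averaged precision and recall below 1.0 along with accuracy. Reported values of 100% next to 99.35% accuracy therefore usually point to rounding in the printout, or to the metrics being computed on a different split or subset than the accuracy.

```python
import numpy as np

def metrics(y_true, y_pred, n_classes):
    """Accuracy plus macro-averaged precision and recall from scratch."""
    acc = np.mean(y_true == y_pred)
    prec, rec = [], []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec.append(tp / (tp + fp) if tp + fp else 1.0)  # convention: 1.0 if class never predicted
        rec.append(tp / (tp + fn) if tp + fn else 1.0)
    return acc, np.mean(prec), np.mean(rec)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, 1000)

# A few mistakes: accuracy drops below 1.0, and so must macro precision/recall.
y_pred = y_true.copy()
y_pred[:5] = (y_pred[:5] + 1) % 3                # 5 errors out of 1000

acc, p, r = metrics(y_true, y_pred, 3)
print(acc, p, r)                                 # accuracy 0.995; precision and recall also < 1.0
```

Running the same function with `y_pred = y_true` returns exactly 1.0 for all three, which is the only way the three can display 100% without rounding.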
Evaluation of star formation
Please note that I have already read the basic information. Don't send Wikimedia links.
How should I write the research proposal "Evaluating the Feasibility and Strategies for the Entry of a Tech Innovator into the Indian Market"?
Critical Evaluation of the Evolution of 2 Traditional Leadership Theories and 2 Traditional Management Theories to the 2023/2024 Context and Drivers
Good day all,
I'm currently working on "Evaluating the Feasibility of Using Earth Observation Technology to Monitor Soil Organic Carbon Quantity and Quality in Comparison to Traditional Laboratory Analysis" using EnMAP and Sentinel 2.
I need help with creating tiles for my study area.
Thank you
I haven't been able to find any scholarly sources that explain or even mention this question. However, in the field, I've noticed that the community development organisations I have worked for have preferred to use more traditional evaluation methods. I just want to find a paper that has noticed the same thing!! Please help! Thank you!
- Distinguish between the choice of crops and varieties in organic farming versus conventional farming.
- Evaluate the factors influencing crop selection in organic agriculture and its impact on biodiversity and sustainability.
The aim of this research is to investigate the impact of compensation and reward systems on employees' willingness to remain in a particular job.
The study group is working postgraduate students at the University of Sunderland.
The research method will be interviews with prospective students.
Dear colleagues,
A research team is conducting a study, via a small survey, to evaluate the emotions evoked by works of art automatically generated using GANs. The survey presents 20 works of art in four different versions, each aimed at evoking one of the following emotions: amusement, delight, dread, and melancholy.
Your opinion is highly valued. Kindly access the form via the link provided and indicate the emotion you perceive each of the 20 works of art to evoke. We suggest increasing your screen brightness for a better view of the images.
Thank you for participating in this research. Your responses will be greatly appreciated. Feel free to share with your contacts.
Distinguish between short-term challenges and potential long-term benefits. Assess the risks, including the potential for introducing harmful pathogens or unintended ecological consequences, and weigh them against the long-term benefits of sustainable agricultural practices.
I need experts in the field of Measurement and Evaluation.
When you read an epidemiological research paper, what are some of the red flags you encounter in the phrasing, the statistical tests used, or the glossing over of controls for confounding? For example, when you evaluate COVID reports or vaccine research, what key elements, if absent, call the research into question, or, if included, raise doubts?
Dear Admin,
You uploaded a study as a PDF, and the system automatically entered the title of the PDF file as the title of the article. I changed the PDF, but the old title remained.
How can I change this so that the title of the study, "What Does a Tourist See, or, an Environmental–Aesthetic Evaluation of a Street View in Szeged (Hungary)", is displayed?
thanks,
Ferenc
Could someone please guide me on how to write this project?
Dear colleagues,
I am reaching out to you for assistance in finding an approach that will allow me to evaluate the academic profiles of researchers, taking into account quantitative indicators and conducting an analysis of collaborations and funding.
I would greatly appreciate your responses and suggestions.
Best regards,
Sabina
I am writing to invite you to submit a chapter to an edited monograph, titled The End is Nigh: Climate Anxiety in the Classroom, that explores the multiple ways in which climate anxiety permeates and serves to disrupt students' and teachers' mental health within kindergarten to grade 12 classrooms.
The monograph is a contemporary examination of the state of climate anxiety within the field of education. Climate change is one of the most pressing issues of our time. While some continue to deny its existence and question humans' contribution to its effects, climate change is an undeniable fact (e.g., IPCC, 2018; IPCC, 2022). The media address climate change using doomsday language such as catastrophic, urgent, irreversible, and devastating. Popular climate change advocate Greta Thunberg (2019) reinforces the fear by stating, "I don't want you to be hopeful. I want you to panic. I want you to feel the fear I feel every day. And then I want you to act. I want you to act as you would in a crisis. I want you to act as if our house is on fire. Because it is." (para. 20)
With extensive exposure to the negative impact climate change can have on individuals, their families, communities, and the world, it is not surprising that individuals are experiencing climate anxiety (Albrecht, 2011; Clayton, 2020; Maran & Begotti, 2021; Ojala, 2015; Reyes et al., 2021; Weintrobe, 2019). The impact of climate change on mental health is not limited to those who have lived through a natural disaster associated with climate change (Howard-Jones et al., 2021). Within schools, classroom discussion and analysis of the effects of climate change on one's country and across the globe may affect students' and teachers' mental health in the form of climate anxiety (Helm et al., 2018; Maran & Begotti, 2021). As schools play a key role in educating students about climate change, it is essential that we understand the presence of climate anxiety within our classrooms and its impact on teachers and their students.
As such, this book will offer a global dialogue, critically scrutinizing academic and practical approaches to the universal challenges associated with climate anxiety within elementary, middle, and high schools. Authors from a variety of nations will illustrate that climate anxiety is a worldwide phenomenon that is often neglected in climate change dialogue.
Within our call for chapters, we invite contributions that explore the following three themes:
Theme 1: Climate Anxiety within Schools
• Theoretical foundations of climate change education and anxiety
• Intersectionality of culture and climate anxiety within the classroom
• Principles of sustainable education, mental health, and climate anxiety
• Pedagogical perspectives of anxiety, sustainable education, and climate change education
Theme 2: The Impact of Climate Anxiety on Students and Teachers
• Evaluation of student and teacher experiences related to climate anxiety.
• Exploration of the psychological manifestation of climate anxiety in students and teachers.
• Critical examination of how climate anxiety impacts students’ learning and development.
• Description of how climate anxiety occurs within the classroom.
• Critical examination of how curriculum generates climate anxiety.
• Critical examination of the impact of climate anxiety on teaching praxis
Theme 3: Addressing Climate Anxiety
• Description of innovative and creative approaches to address climate anxiety in school settings.
• Description of pedagogical strategies to address students’ climate anxiety.
• Exploration of how climate anxiety should be addressed within schools.
• Rebuilding a cohesive learning environment after climate change induced disasters.
• Lessons learned from the challenges and successes of combating climate anxiety.
• Examining the need for policy and administrative support in addressing climate anxiety.
The editors are interested in a range of submissions and encourage proposals from a variety of practitioners within the field of education, including academics, educators, administrators, and graduate students. Submissions should include theoretical stances and practical applications.
Audience:
The book will be useful in both academic and professional circles. The intended audience includes school administrators, educators, and advocates of climate change education and reform, all of whom may find this book a useful teaching resource. In addition, the book can be used in a variety of graduate and undergraduate courses, including, but not limited to: educational psychology, curriculum development, current issues in education, methods and pedagogy, international education, and education law.
Proposals:
All submissions must be written in English.
Please submit your proposal as a PDF file for compatibility.
Prospective contributors should submit a 1000-word overview (excluding abstract) of their proposed chapter, including:
• Title
• Abstract – 250 words
• Contact information including name(s), institutional affiliation(s); email and phone number.
• A description of the chapter’s central argument that includes how your chapter addresses one of the central themes of the book.
• A clear explanation of the research underpinning any assertions, as well as the main argument, purpose and outcomes presented in the chapter.
• Where chapters will draw on specific research projects, we’d expect some detail in relation to the type of research, period, data set and size, and of course, the findings.
• 3-5 key words/phrases.
Font: Times New Roman size 12 font, double-spaced.
Please adhere to APA, 7th edition formatting standards.
Contributors will be sent chapter format and guidelines upon acceptance. Full manuscripts will be sent out for blind peer review.
Final Chapters:
Final papers should be approximately 7000 words, not including references.
Review Process:
Each author will be asked to review one chapter from the book and provide feedback to the author(s) and editors.
Important dates
Submission of title, abstract, and author(s) to editors - June 1, 2023
Notification of acceptance to authors - Sept 1, 2023
Submission of full manuscript to editors - January 8, 2024
Feedback from editors to authors - March 1, 2024
Submission of revised manuscripts to editors - May 1, 2024
Please send your submissions to: juliec@nipissingu.ca
Please feel free to contact the editors directly with any questions/queries:
Dr. Julie K. Corkett juliec@nipissingu.ca
Dr. Wafaa Abdelaal w.abdelaal@squ.edu.om
References:
Albrecht, G. (2011). Chronic environmental change: Emerging 'psychoterratic' syndromes. In Climate Change and Human Well-being (pp. 43-56). Springer.
Clayton, S., & Karazsia, B. (2020). Development and validation of a measure of climate anxiety. Journal of Environmental Psychology, 69, 101434. https://doi.org/10.1016/j.jenvp.2020.101434
Helm, S.V., Pollitt, A., Barnett, M.A., Curran, M.A., & Craig, Z.R. (2018). Differentiating environmental concern in the context of psychological adaption to climate change. Global Environmental Change, 48, 158–167. https://doi.org/10.1016/j.gloenvcha.2017.11.012
IPCC (2018). Annex I: Glossary In Masson-Delmotte, V., P. Zhai, H.-O. Pörtner, D. Roberts, J. Skea, P.R. Shukla, A. Pirani, W. Moufouma-Okia, C. Péan, R. Pidcock, S. Connors, J.B.R. Matthews, Y. Chen, X. Zhou, M.I. Gomis, E. Lonnoy, T. Maycock, M. Tignor, and T. Waterfield (eds.) Global Warming of 1.5°C. An IPCC Special Report on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty. In Press https://www.ipcc.ch/sr15/chapter/glossary/
IPCC. (2022). Climate Change 2022 Impacts, Adaptation and Vulnerability: Summary for Policymakers. Working Group II contribution to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change. [H.-O. Pörtner, D.C. Roberts, M. Tignor, E.S. Poloczanska, K. Mintenbeck, A. Alegría, M. Craig, S. Langsdorf, S. Löschke, V. Möller, A. Okem, B. Rama (eds.)]. Cambridge University Press. https://www.ipcc.ch/report/ar6/wg2/downloads/report/IPCC_AR6_WGII_FinalDraft_FullReport.pdf
Maran, D. A. & Begotti, T. (2021). Media exposure to climate change, anxiety and efficacy beliefs in a sample of Italian university students. International Journal of Environmental Research and Public Health, 18, 1-11. https://doi.org/10.3390/ijerph1879358
Ojala, M. (2015). Hope in the face of climate change: associations with environmental engagement and student perceptions of teachers’ emotion communication style and future orientation. The Journal of Environmental Education, 46(3), 133-148. https://doi.org/10.1080/00958964.2015.1021662
Reyes, M. E. S., Carmen, B. P. B., Luminarias, M. E. P., Mangulabnan, S. A. N. B., & Ogunbode, C. A. (2021). An investigation into the relationship between climate anxiety and mental health among Gen Z Filipinos. Current Psychology, 1-9. https://doi.org/10.1007/s12144-021-02099-3
Thunberg, G. (2019, January 25). 'Our house is on fire': Greta Thunberg, 16, urges leaders to act on climate. The Guardian. https://www.theguardian.com/environment /2019/jan/25/our-house-is-on-fire-greta-thunberg16-urges-leaders-to-act-on-climate
Weintrobe, S. (2012). The difficult problem of anxiety in thinking about climate change. In S. Weintrobe (Ed.). Engaging with Climate Change: Psychoanalytic and Interdisciplinary Perspectives (pp 33-47). Routledge.
My question falls under the game workshop topic.
I am a graduate student, and my class is currently looking at the differences (and similarities) between research and evaluation. We are also currently looking at the work of Mertens’ Research and Evaluation in Education and Psychology (2020) when examining four educational research paradigms (I’ve attached a picture from that book that shows labels commonly associated with different paradigms as a quick descriptor).
I am wondering: What do you believe to be the differences and/or similarities of research and evaluation? Which of the four educational research paradigms (Postpositivism; Constructivism; Transformative; Pragmatic) do you most align with?
Thank you in advance for sharing your thoughts!
Greetings,
I am currently a graduate student taking Introduction to Research and Evaluation in Education. I have been tasked with posing the question: "How does one define research vs. evaluation?"
When I was a special education teacher, I completed many evaluations of students' abilities, both academic and cognitive, and I see evaluation as a means of determining a path for a student's education.
Research, on the other hand, entails posing a question and then determining possible answers while searching scholarly publications and journals.
Please comment on my question at your earliest convenience.
Mertens, D. M. (2020). Research and Evaluation in Education and Psychology (5th ed.). Sage Publications.
Hello. I am currently a graduate student at Arizona State University in the Mary Lou Fulton College of Education. We have been tasked with defining research and evaluation and exploring the differences and similarities between them. The text we are using is Research and Evaluation in Education and Psychology by Donna Mertens (2020), which presents different models and paradigms for these two subjects. From my understanding of the text, research is the exploration of topics and the development of theories about them, while evaluation is the methodology that ensures you are properly investigating, documenting, and enhancing the world around you. I am interested in other people's viewpoints and would like to hear what you think.
Thank you.
I have developed a new technique. Although I have performed multiple experiments and obtained various "meaningful" results, I am unsure about how to evaluate the technique's performance and ensure the confidence of the obtained results since there are no other techniques available for comparison. What are the best practices for evaluating and validating a new technique in the absence of a benchmark tool or dataset?
Compare and contrast the different methods for assessing soil compaction, such as bulk density measurements, penetrometer tests, and visual assessments. Evaluate the strengths and limitations of each method, and their suitability for different soil types and land uses.
Hello there! In my PhD I proposed mechanisms to support the development of the Metaverse focused on software engineering education (coding, modeling, project management, etc.).
If you are an XR developer, researcher, or professor/teacher who uses XR technologies for education, I would be very grateful if you would contribute to my research.
Evaluation link: https://forms.gle/871TyJapysdfKdmTA
How do we evaluate an opportunity in a business? Is there any model specific to this issue? I would appreciate your input.
Hi,
We have a few AI (artificial intelligence) solutions for different problems in public health. A few of the problems are binary in nature, while the rest are continuous. We need help in calculating the sample size for measuring the accuracy of the AI (i.e., how reliably it predicts the problem).
For example, we developed an AI solution to estimate the weight of a baby. We expect the AI to predict the weight reliably in 90% of babies, with an error of less than 10% of the actual weight measured by gold-standard equipment. I can calculate the sample size in two ways, I think:
- Assuming the variable of interest is binary - the reliability of the AI prediction (yes/no)
- Assuming the variable of interest is continuous - the actual error of the AI prediction (in grams, or as a percentage)
What should we choose? In the second option, which SD should we choose for the sample size calculation?
Thanks for reading and suggesting in advance.
PS: both methods are applied to the same study participants.
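For what it is worth, here is a hedged sketch of both options using the standard single-proportion and single-mean precision formulas. The plugged-in numbers (expected reliability p = 0.90, 3% absolute precision, an assumed error SD of 120 g, and a 20 g margin on the mean error) are placeholders to be replaced with your own pilot estimates; the SD in option 2 should come from a pilot study or prior data on the prediction error.

```python
# Sample-size sketches for estimating (1) a proportion and (2) a mean
# with specified absolute precision at 95% confidence.
import math

z = 1.96  # two-sided 95% confidence

# Option 1: binary outcome (prediction reliable yes/no).
# n = z^2 * p * (1 - p) / d^2
p, d = 0.90, 0.03                     # expected reliability, absolute precision
n_binary = math.ceil(z**2 * p * (1 - p) / d**2)

# Option 2: continuous outcome (error in grams).
# n = (z * SD / d)^2
sd, d_g = 120.0, 20.0                 # placeholder SD of error, margin in grams
n_cont = math.ceil((z * sd / d_g) ** 2)

print(n_binary, n_cont)               # 385 and 139 with these placeholders
```

Since both metrics come from the same participants, a defensible choice is to size for the larger of the two so each analysis is adequately powered.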
I have submitted my paper to an SSCI journal, and the status "Evaluating Reviews" has lasted more than 20 days. What does this mean?
Hello, I am a graduate student at Arizona State University in the Mary Lou Fulton Teachers College, pursuing an M.Ed. in Learning Design and Technology, and I am currently enrolled in Intro to Research and Evaluation in Education. After completing this week's reading, formulating definitions of both research and evaluation, and comparing the two, I have a question to pose.
How can empirical data best be used in research, and, more importantly, how can subjective data benefit evaluations that aim to measure worth? Is evaluation as cut-and-dried as it seems, or is there room for subjectivity?
I have been reading about the SSI scale (Gadzella, 1991) and saw a revised version available (Gadzella, 2012). However, I am unable to find the scale containing 53 items. Does anyone know how to obtain the inventory?
Gadzella, B. M., Baloglu, M., Masten, W. G., & Wang, Q. (2012). Evaluation of the student life-stress inventory-revised. Journal of Instructional Psychology, 39(2).
I do not want to analyze the text inside the book; I just want to explore the existence of the main components.
Does anyone know how to access the software "Similarity Evaluation System for Chromatographic Fingerprint of Traditional Chinese Medicine (Version 2004A)"?
Otherwise, which other software can I use to analyze HPLC-UV fingerprint similarities? We are using a Thermo Fisher UHPLC system with Chromeleon.
Thanks.
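In case it helps while looking for software: the similarity metrics commonly used for chromatographic fingerprints (the congruence/cosine coefficient and the Pearson correlation coefficient) are simple enough to compute directly on intensity vectors exported from the chromatography software, assuming they are already aligned on retention time. A minimal Python sketch with made-up intensities:

```python
import math

def cosine_similarity(a, b):
    """Congruence (cosine) coefficient between two aligned intensity vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def pearson_correlation(a, b):
    """Pearson r: cosine similarity of the mean-centred vectors."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return cosine_similarity([x - ma for x in a], [y - mb for y in b])

# Made-up peak intensities at matched retention times:
sample    = [0.0, 1.2, 5.8, 0.3, 2.1]
reference = [0.1, 1.0, 6.0, 0.2, 2.0]

print(round(cosine_similarity(sample, reference), 3))
print(round(pearson_correlation(sample, reference), 3))
```

Values close to 1 indicate a fingerprint very similar to the reference; peak alignment across runs is the hard part and still has to be done beforehand.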
To date, two Red Lists of fishes have been published in Bangladesh by the IUCN, in 2000 and 2015. Both focus mainly on inland fishes. Is there any work on the marine fishes of Bangladesh? We had a publication in which we listed the threatened marine fishes of Bangladesh based on the global evaluation by the IUCN.
Hossain, M.A.R. and Hoq, M.E. 2018. Threatened Fishes and other aquatic animals of Bay of Bengal. FAN – Fisheries & Aquaculture News 5: 37-39.
I am looking for other works, publications and reports in this regard.
I started reading the textbook Research and Evaluation in Education and Psychology this week. In Chapter 2: Evaluation, Mertens (2020) asserts that evaluation and research share similarities; however, the differences between the two types of systematic inquiry are clearly distinguishable (p. 86). After learning more about research and evaluation, I have come to the conclusion that the two practices do not overlap as much as you may think they do.
Research and evaluation have different goals:
- Research is a system used for knowing or understanding
- Evaluation is a system used for discovering merit, worth, or quality
Research and evaluation have different processes:
- Research is made up of 8 main steps: identify a worldview, establish the focus, review the literature, identify the design, identify data sources, choose collection methods, analyze the data, and identify future directions
- Evaluation is made up of 3 main steps: focusing the evaluation, planning the evaluation, and implementing the evaluation
Research and evaluation use different terminology:
- The jargon used in research is similar to terminology you might be familiar with from science courses and experiments; variables, control groups, sample, subject
- The jargon used in evaluation is more similar to terms that you might be familiar with from business or management courses; merit, worth, monitoring, assessment, internal evaluators, stakeholders.
Taking into consideration the stark differences between the goals, processes, and terminology used in research vs. evaluation, I would argue that these two practices are not intertwined with one another. That is not to say that they can never be used together; rather, they can be used independently and with separate goals in mind.
I would love to hear some different perspectives on research vs. evaluation and how you may be applying them in your own work.
References:
Mertens, D. M. (2020). Research and evaluation in education and psychology: Integrating diversity with quantitative, qualitative, and mixed methods (5th ed.). SAGE.
In the ScholarOne system, after peer review is completed, the status changes to "Evaluating Recommendation". How long does this status typically last before hearing back from the journal editor?
I am looking for a study that dealt with differences between institutions in students' evaluations of faculty. What I am interested in knowing is whether there is a difference between students at prestigious institutions and those at ordinary universities and colleges; one might assume that at private and competitive institutions the students will be more critical and demanding. But I can't find any evidence, or even a comparative study on the subject, and I've been searching Google Scholar for a few days now.
English language centers in the non-English-speaking world assess the English of their teachers and professors using tests designed for U.S., Canadian, British, or Australian environments. These specific contexts at times do not match the academic needs of language centers outside the U.S. or Great Britain, for instance.
Evaluate the epistemological and ontological differences between research methodologies, and
evaluate the strengths and weaknesses of a variety of business and management research methods.
I have created and validated a Campus Climate Identity Survey as part of my doctoral work at NYU, dealing with my home institution, and am now looking for collaborators. The survey was validated with the pilot and is really designed as a way to get comprehensive data on all the schools in academic health science centers, not just the medical school component. If you are looking to gain a comprehensive view of the plight of your staff, students, and faculty at an academic health science center, then I'd love to chat with you.
I am on a state oral health department fellowship, and I am severely frustrated with picking an evaluation project. It has been months of literature research, brainstorming, planning, and talking with stakeholders. The topic I want to pursue is a Sugar-Sweetened Beverage intervention guide (similar to the tobacco cessation 5A's) at the state level. However, I cannot formulate a question, a target audience, or the data to back up the evaluation. Has anyone evaluated a state-level intervention method? Or any pointers? Thanks.
Hello,
I am a grad student at Arizona State University earning my degree in Learning & Curriculum in Gifted Education. I am enrolled in Introduction to Research and Evaluation. The major assignment in this course is to write a Research Proposal with Literature Review. We are currently discussing the differences in Research and Evaluation.
As a 5th grade teacher, I believe that research is a process to gain knowledge and information, whereas evaluation assesses the success of a program, organization, etc.
What is your educational role, and how do you differentiate between Research and Evaluation?
I am a graduate student at Arizona State University taking a course in research and evaluation in education. In our class, we are comparing and contrasting research and evaluation. Our text, Research and Evaluation in Education and Psychology (Mertens, 2020), discusses the differences and parallels between the two. I had previously considered them interchangeable terms, or at least going hand-in-hand; however, now there are evident distinctions that I can identify. The two do have overlap, but to me, research seems to be more of a process of uncovering and collecting new information in order to determine the "why" of a problem, scenario, or phenomenon. Evaluation, on the other hand, presents to me as a thorough process through which already available information is compiled to identify the "how well", or worth/value, of an existing program or practice.
I am curious as to others' opinions on this topic. Do research and evaluation overlap, or are they singular and distinct? How are they used together? Must they be?
We are also discussing four paradigms that frame research and evaluation. Mertens (2020) describes them as post-positivism, constructivism, transformative and pragmatic. Do you feel that one paradigm would be more useful than another in carrying out research dealing with the efficacy of teachers of gifted populations based on their understanding of those students?
Hello everyone!
I am a graduate student at Arizona State University and we are focusing on the difference between research and evaluation. I teach Kindergarten and am working toward my Literacy Education graduate degree. In my opinion, research focuses on gaining new knowledge about a topic or purpose, while evaluation focuses on the program or purpose already used and then asking questions about it to understand its effectiveness. In your opinion, what is the major difference between research and evaluation?
As a classroom teacher, how do you think this could be utilized or defined in a classroom, especially at the primary level?
As part of my fellowship, I want to evaluate the oral health surveillance system. I have already read the CDC's guidelines for evaluating surveillance systems, but I am still confused about how to assess one. Does anyone have examples of work or reviews done for this type of evaluation?
I need to statistically analyse the speed-accuracy trade-off for a reaction time task.
The design of my study is: 2*2*3 (group, task difficulty, valence condition)
I want to check whether there is a speed-accuracy trade-off between the two groups under low and high task difficulty. I came across this paper but the statistical analysis given here is quite confusing to me.
Could someone tell me the stepwise process in SPSS?
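Not an SPSS click-path, but one widely used way to fold speed and accuracy into a single number is the inverse efficiency score (IES = mean correct RT / proportion correct; Townsend & Ashby, 1983), computed per participant and condition and then entered into the same 2*2*3 ANOVA. A minimal Python sketch of the computation, with made-up trial data:

```python
def inverse_efficiency(trials):
    """IES for one participant/condition.

    trials: list of (rt_ms, correct) tuples; IES = mean correct RT / accuracy.
    A lower IES means more efficient (fast AND accurate) performance.
    """
    correct_rts = [rt for rt, ok in trials if ok]
    accuracy = len(correct_rts) / len(trials)
    if accuracy == 0:
        raise ValueError("IES is undefined at 0% accuracy")
    mean_rt = sum(correct_rts) / len(correct_rts)
    return mean_rt / accuracy

# Made-up trials for one participant in one condition: (RT in ms, correct?)
trials = [(520, True), (480, True), (610, False), (455, True), (530, True)]
print(round(inverse_efficiency(trials), 1))
```

In SPSS the same idea amounts to computing this score per cell (Transform > Compute Variable) and running the repeated-measures ANOVA on it; IES is only recommended when accuracy is reasonably high (roughly above 90%), so check your error rates first.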
What is the best and simplest tool (other than Excel) for making comparison charts such as line charts for algorithms comparison and evaluation purposes?
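For what it's worth, matplotlib (free, scriptable, and reproducible) is a common alternative to Excel for this. A minimal sketch of an algorithm-comparison line chart; the runtime numbers below are made up for illustration:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

# Made-up benchmark results: runtime (s) at increasing input sizes.
sizes = [100, 1_000, 10_000, 100_000]
runtime = {
    "Algorithm A": [0.02, 0.15, 1.8, 21.0],
    "Algorithm B": [0.05, 0.20, 1.1, 9.5],
}

fig, ax = plt.subplots()
for name, ys in runtime.items():
    ax.plot(sizes, ys, marker="o", label=name)  # one line per algorithm
ax.set_xscale("log")                            # sizes span orders of magnitude
ax.set_xlabel("Input size")
ax.set_ylabel("Runtime (s)")
ax.set_title("Algorithm comparison")
ax.legend()
fig.savefig("comparison.png", dpi=150)
```

A few lines like this regenerate the chart whenever the numbers change, which is the main advantage over hand-edited spreadsheets.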
Dear colleagues,
I’m conducting a study that is intended to identify determinants of evaluation use in evaluation systems embedded in public and non-profit sectors. I’m planning to conduct a survey on a representative sample of organizations that systematically evaluate the effects of their programs and other actions in Austria, Denmark, Ireland and the Netherlands. And here comes my request: can anyone of you, familiar with evaluation practice in these countries, suggest what types of organizations I should include in my sample? Are there any country-specific organizations active in the evaluation field that I should not omit?
It is obvious to me that in all these countries evaluation is present in central and local government (ministries, municipalities, etc.) as well as institutions funding research or development agencies, but I also suspect that there might be some country-specific, less obvious types of organisations which are important “evaluation players”.
Thanks for any hints.
Through multiple empirical studies, I have collected user needs for an ICT intervention. In this study, I intend to design a prototype and then evaluate it to check whether the user needs are captured in the proposed design.
What is the most suitable approach: quantitative, qualitative, or mixed?
Are we evaluating the features of the prototype, or whether it meets the user requirements?
propolis (Bee Glue) and Evaluate Its Antioxidant Activity
I am now working on the project "Evaluate the Impact of the Implementation of the GDPR on the Role of the European Court". Before conceptualizing it for discussion, I need to collect some data and gather some ideas for the discussion. Do you have any articles or research to recommend on this topic?
Dear colleagues, dear participatory-action research practitioners,
I would like to open the discussion on the criteria for evaluating participatory research (whether it is action-research, participatory action research, CBPR, etc.).
How do you evaluate participatory research projects that are submitted for research grants and/or publications (papers)? Do you apply the same criteria as when you evaluate non-participatory research projects? Or have you developed ways to evaluate non-scientific dimensions, such as the impact of this research on communities or the quality of connections between co-researchers? And if so, how do you proceed?
Thank you in advance for sharing your experiences and thoughts.
For French-speaking colleagues, feel free to answer in French! What criteria do you use to evaluate participatory research projects? Do you use the scientific evaluation criteria that you apply to other types of research, or do you have specific criteria, and if so, which ones?
Baptiste GODRIE, Quebec-based social science researcher & participatory action research practitioner
Comparative Evaluation of Selected High and Low Molecular Weight Antioxidant Activity in Rats.