
Evaluation - Science topic

Evaluation in all areas.
Questions related to Evaluation
  • asked a question related to Evaluation
Question
1 answer
Hello professor,
I am a postgraduate student at the School of Physical Education of Dalian University. I am currently working on my master's thesis, titled "Research on the Construction of the Technical and Tactical Evaluation System for Male U12 School Football Players - Taking the Football Featured Schools in Jinpu New District, Dalian City as an Example". To build a technical and tactical evaluation system for male U12 school football players, we have converted the various indicators into questions and formed the final expert survey scale. This questionnaire is the first round of the Delphi expert questionnaire and an important part of my master's thesis; based on the needs of this research, the Delphi questionnaire will be administered two to three times. Please rate the importance of the indicator represented by each item in the scale, based on your professional knowledge and your understanding of the research content. Your answers are crucial for the successful completion of this research, and your suggestions will add great value to this study. I sincerely thank you for your support and cooperation in this project, and I wish you good health and smooth work!
Relevant answer
Answer
The construction of a Technical and Tactical Evaluation System for male U12 school football players should focus on assessing both technical skills and tactical understanding. For example, the technical evaluation could include drills like passing accuracy, where players are required to pass the ball through cones or to a teammate at varying distances. A player’s success rate, speed, and ability to control the ball would be recorded. Another technical drill might assess dribbling skills, where the player dribbles through a set of obstacles (cones or defenders) to measure their ball control, agility, and ability to maintain possession under pressure.
On the tactical side, the system could involve scenarios that test a player’s ability to make quick, smart decisions during a match. For instance, during a small-sided game, the coach might observe whether a player can correctly position themselves defensively, recognize when to press the ball, or when to pass to a teammate in space. The player’s ability to adjust to the game’s flow, such as choosing when to attack or defend, would be evaluated. An example of this might be a tactical scenario where the team is instructed to defend a counter-attack. Evaluating how each player responds to maintaining shape, covering space, and communicating with teammates during these moments would be critical.
The evaluation system could also include periodic reviews where coaches provide feedback on both strengths and areas for improvement. For example, if a player consistently shows good dribbling but struggles with passing accuracy, the coach would focus on drills to improve their passing. Over time, the system would allow coaches to track players' progress by documenting improvements in both technical skills (like shooting accuracy or dribbling speed) and tactical awareness (like positioning or decision-making). This continuous, individualized feedback loop helps develop well-rounded football players who understand both the skills and the game strategies essential to succeed.
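For anyone implementing a Delphi round like the one described in the question above, the usual quantitative summary per indicator is the mean importance rating together with a consensus measure such as the coefficient of variation. The sketch below is illustrative only: the retention thresholds (mean ≥ 3.5 on a 5-point scale, CV ≤ 0.25) are common choices in the Delphi literature, not values taken from this thesis, and the two indicator names are made up.

```python
from statistics import mean, stdev

def summarize_indicator(ratings):
    """Summarize one Delphi round of expert importance ratings (1-5 Likert scale)."""
    m = mean(ratings)
    cv = stdev(ratings) / m  # coefficient of variation: lower = stronger consensus
    return m, cv

def retain(ratings, min_mean=3.5, max_cv=0.25):
    """Illustrative retention rule for deciding which indicators survive to the next round."""
    m, cv = summarize_indicator(ratings)
    return m >= min_mean and cv <= max_cv

# Example: ratings from 8 hypothetical experts for two hypothetical indicators.
passing_accuracy = [5, 4, 5, 4, 5, 4, 4, 5]  # high mean, tight agreement -> keep
juggling_count = [2, 5, 1, 4, 3, 2, 5, 1]    # low mean, wide disagreement -> drop
print(retain(passing_accuracy))  # True
print(retain(juggling_count))    # False
```

Between rounds, dropped or borderline indicators are typically fed back to the panel with the group statistics so experts can revise their ratings.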
  • asked a question related to Evaluation
Question
3 answers
Experts in Finance, Islamic Finance, Environmental Economics, etc. are kindly requested to share their credentials for the evaluation of Ph.D. theses. My WhatsApp is 0092-3215866992.
1. Name:
2. Ph.D. specialization area:
3. Position/designation:
4. Field of specialization (finance, world economy, international relations, etc.):
5. Mailing address (institutional preferred):
6. Official email address:
7. Contact number (landline, WhatsApp, or mobile; WhatsApp/mobile preferred):
8. Institution and country:
Thank you in advance for your consideration.
Relevant answer
Answer
1. Name: Dr. CMA Alagesan M V
2. Ph.D. specialization area: Non-Performing Loans of Indian Banks
3. Position/designation: Assistant Professor
4. Field of specialization: Finance and Accounting, Cost and Management Accounting
5. Mailing address: alagesan100@yahoo.co.in and mvalagesan100@gmail.com
6. Official email address: alagesan@xime.org
7. Contact number (WhatsApp/mobile): +91 9538677040
8. Institution and country: Xavier Institute of Management and Entrepreneurship, Bangalore, India, 560100
  • asked a question related to Evaluation
Question
4 answers
Hello everyone, I am a grad student currently taking a course on research and evaluation in education. I would like to know your thoughts on research vs. evaluation. Do you feel there are distinguishing characteristics between the two, or do you feel they overlap? Is there one that you prefer? As a high school algebra teacher, I believe evaluation has the biggest influence in education, especially when I conduct formative and summative assessments with my students. Thank you so much for taking time out of your day to answer my question, and I look forward to hearing back from you.
Relevant answer
Answer
Dear Saily Valadez, you raise an important but not entirely simple question.
The distinction between 'research' and 'evaluation' lies in their objectives, methods, and areas of application.
Research aims to generate new knowledge or expand existing knowledge. It is often theory-driven and exploratory, seeking to discover or understand fundamental principles. Research investigates open questions and tests hypotheses to systematically gain new insights. In contrast, evaluation focuses on assessing the value or effectiveness of an existing practice, program, or intervention. It seeks to evaluate outcomes, processes, or structures to determine whether specific goals have been met.
In terms of methods, research employs a wide variety of approaches, including qualitative, quantitative, experimental, and non-experimental methods, often aimed at developing new theories or models. It can be exploratory and is often foundational or applied in nature. Evaluation, on the other hand, tends to use more standardized or pragmatic methods (such as surveys, interviews, or data analysis) to answer specific questions about a project or program. Its goal is to provide measurable results based on concrete criteria or benchmarks.
Regarding areas of application, research is conducted in various fields, such as science, technology, social sciences, and medicine, and is typically long-term and broad in scope. Its primary goal is to contribute to general knowledge, without necessarily aiming for immediate practical application. Evaluation, however, is often carried out in practical fields like education, public administration, healthcare, development aid, or project management. It is more practice-oriented and seeks to provide actionable insights to improve programs or decision-making processes.
In summary, research seeks to uncover new knowledge and theoretical foundations, whereas evaluation assesses existing measures or processes to determine their effectiveness or efficiency.
  • asked a question related to Evaluation
Question
2 answers
Today, we shall make great attempts to parse through academic language for words that seem simple enough yet require special attention.
Analyzing the text entitled "Research and Evaluation in Education and Psychology: Integrating Diversity With Quantitative, Qualitative, and Mixed Methods" by Donna M. Mertens, I am attempting to define and delineate between the words research and evaluation.
Research is a means of gauging the effectiveness, presence, or quality of something (or the lack thereof), using a mapped-out plan or strategy to answer a question or respond to a hypothesis.
Evaluation is what happens after results are gathered: it is a means of assessing the data amassed from research.
Research appears to be a preliminary step that serves as a prerequisite to evaluation, which brings to fruition the answer to the question that needed to be evaluated in the first place.
Relevant answer
Answer
What is commonly known as "evaluation research" focuses on the effectiveness of programs, i.e., did a program make a difference for its participants? As such, it is a specific type of research.
  • asked a question related to Evaluation
Question
5 answers
As a graduate student at the Arizona State University in the Mary Lou Fulton College of Education, I am beginning my Capstone process. In one of our classes, we have been asked to develop a definition for "Research" and "Evaluation" in our own words.
Research and Evaluation
· Research is the collection of information with the intent of increasing knowledge and understanding on a specific subject.
· Evaluation is the practice of judging performance against a specific set of criteria or standards for accomplishment.
In comparing and contrasting "Research" and "Evaluation," I noticed these specific items.
Compare and Contrast
· Similarities – Both Research and Evaluation should be grounded in empirical evidence. In each, reliable and valid evidence is collected for analysis.
· Differences – The purpose of research is to collect information to explain existing bodies of knowledge or generate new theories. The purpose of evaluation is to assess the understanding or performance against a specified standard.
In your experience as educators or professionals, are there marked differences between these concepts, or have they become synonymous?
Relevant answer
Answer
Academic research and evaluation both involve systematic inquiry, but they differ in purpose, scope, and methods. Academic research aims to generate new knowledge, explore theories, or address unanswered questions, often contributing to broader scholarly discourse. It emphasizes theory-building, hypothesis testing, and rigorous methodologies, with findings typically shared through peer-reviewed publications. Evaluation, on the other hand, focuses on assessing the effectiveness, efficiency, or impact of a specific program, policy, or intervention to inform decision-making. While both use qualitative and quantitative methods, evaluation is more applied, context-specific, and results-driven, offering actionable insights for stakeholders. A key similarity lies in their reliance on data collection and analysis to draw conclusions, but academic research prioritizes knowledge creation, whereas evaluation emphasizes practical application.
  • asked a question related to Evaluation
Question
3 answers
Hello! I am a graduate student at Arizona State University taking an introduction to educational research course. In an effort to explore research and evaluation, our instructors have asked us to seek input from the academic community regarding the differences between the two practices. How do you think the two practices contrast? Is there any overlap between the two, or are they entirely separate from one another? Thanks so much for your response!
Relevant answer
Answer
All evaluations are research but not all research is evaluation. Evaluation examines the effectiveness and efficiency of a practice--standard research tools can be used. A judgment about values is the end goal. Research does not have to be evaluative: DEEP is a general framework of four research purposes: descriptive, exploratory, explanatory, and predictive.
  • asked a question related to Evaluation
Question
4 answers
Subject: Invitation for Ph.D. Thesis Evaluation in Marketing
Dear Professors and Associate Professors,
Greetings!
I am writing to invite esteemed academicians in the discipline of Commerce, particularly those affiliated with the ResearchGate forum both in India and abroad, to serve as external examiners for the evaluation of Ph.D. theses under my supervision. The research work is in the field of Marketing, and we are seeking experienced professionals to provide critical and valuable assessments.
If you are interested in participating as an examiner, we kindly request you to share your bio-data and contact profile at your earliest convenience. Please send the details to my email: kes7brinda@gmail.com.
Your contribution will be invaluable in enhancing the quality and rigor of our research, and we truly appreciate your support in this academic endeavor.
Thank you for considering this request, and I look forward to your positive response.
Warm regards, Dr. N. Kesavan Associate Professor, Department of Commerce, Annamalai University, Annamalai Nagar – 608 002. Tamil Nadu, Republic of India. Email: kes7brinda@gmail.com
Relevant answer
Answer
Dear respected sir, kindly share your CV at my email: kes7brinda@gmail.com
  • asked a question related to Evaluation
Question
1 answer
🚀 Build a Simple Linear Regression Model | Step by Step Guide Using Real World Data 📊
Are you looking to strengthen your understanding of Linear Regression? Look no further! In this step-by-step guide, I walk you through building a Simple Linear Regression Model from scratch using real-world data. 🎯
🔍 In this video, you'll learn:
  • The fundamentals of Linear Regression and how it works
  • How to preprocess real-world data for modeling
  • Hands-on implementation using Python and its libraries
  • Evaluating model performance with key metrics like R-squared and MSE
👨‍💻 Whether you're a beginner or brushing up on your skills, this tutorial offers practical insights and code walkthroughs to help you get started in data science and machine learning.
🎥 Watch the full video here: https://youtu.be/CMbjN913mg8
#LinearRegression #MachineLearning #DataScience #AI #Python #ML #RegressionModel #RealWorldData #TechTutorials #ProfessorRahulJain #AIForEveryone
Relevant answer
Answer
There are hundreds (if not thousands) of books, papers, textbooks, and online resources in which a simple linear regression and all its aspects are covered in detail. What makes you think a short 6.5-minute video tutorial contributes to anything not already covered in numerous other sources?
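For readers who would rather see the computation than watch a video, simple linear regression with the two metrics named above (R-squared and MSE) fits in a few lines of plain Python. This is a minimal closed-form OLS sketch, not the tutorial's code, and the toy data are made up to lie exactly on a line.

```python
def fit_simple_linear_regression(xs, ys):
    """Closed-form ordinary least squares for y = b0 + b1*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    b0 = my - b1 * mx
    return b0, b1

def evaluate(xs, ys, b0, b1):
    """Mean squared error and coefficient of determination (R^2)."""
    preds = [b0 + b1 * x for x in xs]
    my = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - my) ** 2 for y in ys)
    mse = ss_res / len(ys)
    r2 = 1 - ss_res / ss_tot
    return mse, r2

# Toy data lying exactly on y = 1 + 2x, so the fit should be perfect.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [3.0, 5.0, 7.0, 9.0, 11.0]
b0, b1 = fit_simple_linear_regression(xs, ys)
mse, r2 = evaluate(xs, ys, b0, b1)
print(b0, b1)   # ≈ 1.0, 2.0
print(mse, r2)  # ≈ 0.0, 1.0
```

With noisy real-world data the same functions apply unchanged; MSE rises above zero and R² falls below one in proportion to the unexplained variance.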
  • asked a question related to Evaluation
Question
3 answers
I've recently come across several journals that raise serious concerns regarding their legitimacy, yet some of them have managed to secure a spot in the Scopus database. A prime example is the International Journal of Chemical and Biochemical Sciences (ISSN 2226-9614), which has a notably poor-quality website that clearly doesn't meet the standards expected of reputable academic journals. Although this journal has since been delisted from Scopus, the fact that it was ever included is alarming.
Another example is Kexue Tongbao/Chinese Science Bulletin (https://www.kexuetongbao-csb.com/), which, despite its unprofessional presentation, holds a Q2 ranking in Scopus.
This brings up several important questions:
  1. How do journals like these manage to bypass Scopus' evaluation standards and achieve high rankings?
  2. Is there a possibility that these journals are engaging in unethical practices to manipulate their inclusion and ranking in Scopus?
  3. Could there be political or other non-academic factors influencing these decisions within the Scopus community?
  4. What measures should be taken to prevent such journals from misleading researchers and degrading the integrity of academic publishing?
I’m interested in hearing the community's thoughts, particularly from those with experience in academic publishing, journal evaluation, and Scopus indexing.
This should help stimulate a discussion on the practices and potential issues within the academic publishing world.
  • asked a question related to Evaluation
Question
2 answers
Evaluating the high impact of cybercrime on the telecommunications sector in the South African market.
Relevant answer
Answer
It is a good question. However, you need to define the limits of the telecommunications sector, because in the modern era the components of the digital economy are intertwined, and it is very difficult to treat each one completely in isolation.
  • asked a question related to Evaluation
Question
3 answers
Evaluation of the fisheries resources of selected project areas.
Sample collection and preparation for stock assessment.
Relevant answer
Answer
A number of recent developments have made assessment methodologies more readily accessible and less data-hungry. Rainer Froese has been spearheading some particularly successful ones with a group of international scientists. See e.g.
Assessment in support of management should be particularly alert to listening to the experiences of practitioners, particularly small-scale fishers, out on the water every day, and be in line with a few rather simple principles. See
Some initial elements can be derived even from length-weight relationships - support parameters and routines are regularly added in www.fishbase.org
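The length-weight relationship mentioned above is conventionally written W = a·L^b, with a and b estimated by linear regression on log-log transformed data. A minimal sketch follows; the parameter values used to generate the toy data are purely illustrative, not values for any real species (species-specific parameters are what FishBase tabulates).

```python
import math

def fit_length_weight(lengths_cm, weights_g):
    """Estimate a and b in W = a * L**b via least squares on
    log10(W) = log10(a) + b * log10(L)."""
    xs = [math.log10(l) for l in lengths_cm]
    ys = [math.log10(w) for w in weights_g]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    log_a = my - b * mx
    return 10 ** log_a, b

# Toy data generated from W = 0.01 * L**3 (illustrative parameters),
# so the fit should recover a ≈ 0.01 and b ≈ 3.
lengths = [10.0, 15.0, 20.0, 25.0, 30.0]
weights = [0.01 * l ** 3 for l in lengths]
a, b = fit_length_weight(lengths, weights)
print(a, b)  # ≈ 0.01, 3.0
```

A fitted b near 3 indicates roughly isometric growth; values well above or below 3 indicate fish getting relatively plumper or thinner as they grow, which is itself a useful preliminary assessment signal.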
  • asked a question related to Evaluation
Question
3 answers
Evaluation is a part of the teaching-learning process. Many teachers, however, develop the habit of listening to themselves and being mainly concerned with what they themselves are saying, instead of listening to and evaluating what learners are saying.
Relevant answer
Answer
Evaluation is a recursive, reflective, 360° cycle. The evaluation carried out on students is therefore an input for the reflection and self-evaluation of the teacher and of the other actors in the educational context (parents, school leaders, support staff, among others), who also influence this process of learning and teaching. As for the summative and formative types of evaluation, it is not conceivable to treat them as mutually exclusive; both form part of the evaluative process, if we understand evaluation as a process of continuous improvement.
  • asked a question related to Evaluation
Question
25 answers
In any professional education, students' competency is evaluated and used as the measurement of the outcome of the teaching-learning process.
How, then, are these two differentiated in curriculum development?
Relevant answer
Answer
CBE is more effective than OBE. CBE makes one competent to perform anywhere; OBE is a product of CBE. In some institutions it is observed that the results are marvelous, yet the graduates are not competent, even though they show the outcomes. For example, not all A-grade accredited institutions are competent enough. Hence, a combination of both approaches is more meaningful and is the need of the hour.
  • asked a question related to Evaluation
Question
1 answer
As a graduate student at the Arizona State University in the Mary Lou Fulton College of Education, I am beginning my Capstone process. In one of our classes, we have been asked to develop a definition for "Research" and "Evaluation" in our own words.
Research and Evaluation
· Research is the collection of information with the intent of increasing knowledge and understanding on a specific subject.
· Evaluation is the practice of judging performance against a specific set of criteria or standards for accomplishment.
In comparing and contrasting "Research" and "Evaluation," I noticed these specific items.
Compare and Contrast
· Similarities – Both Research and Evaluation should be grounded in empirical evidence. In each, reliable and valid evidence is collected for analysis.
· Differences – The purpose of research is to collect information to explain existing bodies of knowledge or generate new theories. The purpose of evaluation is to assess the understanding or performance against a specified standard.
In your experience as educators or professionals, are there marked differences between these concepts, or have they become synonymous?
Relevant answer
Answer
This question seems to get asked here at least twice a year. You can use the search function at the top of the screen to see previous discussions of this issue.
  • asked a question related to Evaluation
Question
1 answer
What are the differences between research and evaluation?
As a graduate student at ASU, I was given the task to answer and ask this question. In your experience, what are the similarities and differences between research and evaluation? How do you separate one from the other?
Relevant answer
Answer
This question seems to be a class assignment, and as such it has already been addressed several times here. Use the search function at the top of the page to see those previous answers.
  • asked a question related to Evaluation
Question
1 answer
How reliable are h-index and citation evaluations of academicians?
Relevant answer
Answer
The opinion that the first author always did the most for the results and for writing an article is not correct. Sometimes the leader of a group of authors is in first place; sometimes all authors are listed in alphabetical order.
  • asked a question related to Evaluation
Question
1 answer
Looking for any corresponding research topic in relation to the topic above
Relevant answer
Answer
Cooperatives are effective tools for advancing sustainable livelihoods in rural areas globally. Cooperatives serve a vital role in promoting economic empowerment, social inclusion, and environmental stewardship by utilizing the combined strength and resources of their members. They play a crucial role in promoting sustainable livelihoods by offering rural community members opportunities to access markets, loans, and technical help. By engaging in collective action, small-scale producers can surmount obstacles to entering markets, secure more favourable prices for their goods, and gain access to financial services that empower them to invest in their enterprises and enhance their productivity. In addition, cooperatives frequently give priority to environmentally conscious techniques and sustainable management of resources, such as organic farming, agroforestry, and water conservation. This ensures the long-term sustainability of natural resources and ecosystems, which are crucial for rural livelihoods. Furthermore, cooperatives enhance social unity and integration by encouraging underrepresented demographics such as women, youth, indigenous communities, and individuals with disabilities to engage in economic endeavours and decision-making procedures.
  • asked a question related to Evaluation
Question
8 answers
I’m currently learning about the similarities and differences between research and evaluation in my graduate course at ASU.
As an instructional designer, I conduct informal research to learn about new projects, and I create structured evaluation plans to identify the success of projects. So, my experience with research and evaluation is dichotomous; they are mutually exclusive concepts.
I’m curious about others' experiences where research is a subset of evaluation, or vice versa. Would you share examples from your perspective?
Relevant answer
Answer
A simple (but hopefully not simplified) explanation, research generally aims to explore while evaluation aims to improve. But there can be a lot of overlap between the two so we can also have "evaluative research."
  • asked a question related to Evaluation
Question
3 answers
I am a graduate student in Arizona State University's Learning Design and Technologies program currently taking a course on research and evaluation in education. Based on your experience, how would you describe the most significant distinctions between the practices of "research" and "evaluation"? Do you find there is meaningful overlap between the two? How do you see both practices fitting into your work?
Relevant answer
Answer
I think of research as a broad set of activities that involve systematically collecting and analyzing data in order to answer questions. In contrast, I think of evaluation as a specific type of research that is defined by the goal of assessing the effectiveness of programs and interventions.
  • asked a question related to Evaluation
Question
19 answers
I have built a hybrid model for a recognition task that involves both images and videos. However, I am encountering an issue: precision, recall, and F1-score all show 100%, while the accuracy is reported as 99.35%-99.9%. I have tested the model on various videos and images (related to the experiment data, including separate data), and it seems to be performing well. Nevertheless, I am confused about whether this level of accuracy is acceptable. In my understanding, if precision, recall, and F1-score are all 100%, the accuracy should also be 100%.
I am curious if anyone has encountered similar situations in their deep learning practices and if there are logical explanations or solutions. Your insights, explanations, or experiences on this matter would be valuable for me to better understand and address this issue.
Note: an ablation study was conducted with different combinations of components. In the model I am confused about, accuracy, precision, recall, and F1-score are all very low without these additional combinations. Also, the loss and validation accuracy are very high for the other combinations.
Thank you.
Relevant answer
Answer
These are the results after correcting some mistakes I had previously made in the code.
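On the arithmetic point in the question above: in binary classification, precision = recall = 1 forces FP = FN = 0, which in turn forces accuracy = 1, so exact 100% precision/recall/F1 alongside 99.35% accuracy is impossible. In a multiclass setting, however, one class can have perfect precision and recall while errors occur between the other classes; if the reported metrics come from one such class (or from a rounded report), the numbers are consistent. A sketch with made-up counts:

```python
def per_class_metrics(cm, cls):
    """Precision, recall, and F1 for one class, given a confusion
    matrix with rows = true labels and columns = predicted labels."""
    tp = cm[cls][cls]
    fp = sum(cm[r][cls] for r in range(len(cm))) - tp  # column sum minus TP
    fn = sum(cm[cls]) - tp                             # row sum minus TP
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def accuracy(cm):
    correct = sum(cm[i][i] for i in range(len(cm)))
    total = sum(sum(row) for row in cm)
    return correct / total

# Hypothetical 3-class confusion matrix: class 0 is classified perfectly,
# but classes 1 and 2 are occasionally confused with each other.
cm = [
    [3000,    0,    0],
    [   0, 3468,   32],
    [   0,   33, 3467],
]
print(per_class_metrics(cm, 0))  # (1.0, 1.0, 1.0)
print(accuracy(cm))              # 0.9935
```

So the first things to check are whether the 100% figures are per-class rather than averaged, and whether the reporting tool rounds before printing.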
  • asked a question related to Evaluation
Question
2 answers
Evaluation of star formation.
Please note that I have already read the basic information.
Please don't send Wikimedia links.
Relevant answer
Answer
Probably just random chance. There is no reason that binary stars can't have habitable planets (though there are some constraints on the kind of orbits such planets can have), and if we happened to live on one of them, then the Sun would be part of a binary system. But there are many planets around single stars, and we just happen to live on one of them, so it naturally follows that the Sun isn't a binary star.
  • asked a question related to Evaluation
Question
1 answer
How should one write a research proposal evaluating the feasibility of, and strategies for, the entry of a tech innovator into the Indian market?
Relevant answer
Answer
I recommend AnswerThis, an AI research tool to facilitate the writing. https://answerthis.io/signup.
  • asked a question related to Evaluation
Question
5 answers
Critical evaluation of the evolution of two traditional leadership theories and two traditional management theories in the context and drivers of 2023/2024.
Relevant answer
Answer
The evolution of traditional leadership and management theories has undergone critical evaluation over time, reflecting shifts in organizational dynamics and societal paradigms. Two prominent traditional leadership theories, trait theory and contingency theory, have experienced significant transformation under the scrutiny of contemporary perspectives. Trait theory, once focused on inherent characteristics of effective leaders, has expanded to include the recognition of situational influences and the development of leadership skills. Similarly, contingency theory, initially emphasizing the match between leadership styles and situational factors, has evolved to encompass a broader understanding of contextual complexities and the need for adaptive leadership approaches.
In contrast, traditional management theories such as scientific management and bureaucratic management have faced scrutiny regarding their applicability in modern organizational settings. Scientific management, championed by Frederick Taylor, emphasized efficiency through task specialization and standardized processes. However, criticisms have emerged regarding its potential to stifle creativity and innovation in today's dynamic workplaces. Likewise, bureaucratic management, advocated by Max Weber, has been challenged for its hierarchical structure and rigid bureaucratic procedures, which may hinder agility and responsiveness in rapidly changing environments.
Overall, the critical evaluation of traditional leadership and management theories underscores the importance of adapting these frameworks to contemporary organizational contexts, emphasizing flexibility, innovation, and a deeper understanding of human behavior and organizational dynamics.
  • asked a question related to Evaluation
Question
1 answer
Good day all,
I'm currently working on "Evaluating the Feasibility of Using Earth Observation Technology to Monitor Soil Organic Carbon Quantity and Quality in Comparison to Traditional Laboratory Analysis" using EnMAP and Sentinel 2.
Please, I need help with creating tiles for my study area.
Thank you
  • asked a question related to Evaluation
Question
5 answers
I haven't been able to find any scholarly sources that explain or even mention this question. However, in the field, I've noticed that the community development organisations I have worked for have preferred to use more traditional evaluation methods. I just want to find a paper that has noticed the same thing!! Please help! Thank you!
  • asked a question related to Evaluation
Question
1 answer
Which journal is the best?
Relevant answer
Dear fellow,
Check the following journals where you can submit your research.
The publication standards for the following journals are very high.
- Journal of Agricultural Economics
- Development Studies Research
- African Journal of Agricultural Research
- Journal of Agricultural and Applied Economics
- Food Policy
In case you can't publish in the above journals, you can consider the journals below:
- East African Journal of Science, Technology, and Innovation
- Journal of Agribusiness in Developing and Emerging Economies
- African Journal of Agricultural Marketing
- International Journal of Agricultural Economics and Extension
Keep it up!
  • asked a question related to Evaluation
Question
1 answer
  • Distinguish between the choice of crops and varieties in organic farming versus conventional farming.
  • Evaluate the factors influencing crop selection in organic agriculture and its impact on biodiversity and sustainability.
Relevant answer
Answer
I understood your question in two ways, so I will respond to both interpretations from my own point of view.
First, crop selection in the context of organic agriculture means identifying the most competitive crop: the one that will survive among others, grow and disseminate, and be the most productive and resistant to climatic conditions and to abiotic and biotic stresses. This will most probably be the most vigorous species, species with mutualistic microbiomes such as arbuscular mycorrhizal fungi (AMF) or rhizobia in the case of Fabaceae, and species resistant to particular climatic conditions, above all plants acclimatized and adapted to those conditions (for example, Opuntia ficus-indica is adapted to semi-arid and arid zones and is drought-resistant because it is a xerophytic CAM plant, while olive trees tolerate calcareous soils). Depending on the soil type, you have to choose plants that fit your soil composition (N-P-K and micro-element content), and choose the optimum combination in the case of intercropping; for example, alongside olive trees you can plant Fabaceae, since they enrich the soil in nitrogen. To reduce biotic and abiotic stress, you have to optimize your cultural practices, namely pruning, disinfecting tools, aeration, and so on.
The second interpretation concerns which type of plant you should select. The selection depends on the soil type, the climatic conditions, the target productivity, the history of the parcel and the neighbouring parcels (the kinds of pests attacking them and the plants grown there), the available irrigation water, and the pests common in the region.
This will let you optimize your choice of plant.
  • asked a question related to Evaluation
Question
4 answers
The aim of this research is to investigate the impact of compensation and reward systems in determining employees' willingness to continue staying in a particular job.
The study group is working postgraduate students at the University of Sunderland.
The research method will be interviews with prospective students.
Relevant answer
Answer
These two variables show a significant correlation with the criterion variable, in that employees will be more committed and motivated, and consequently production will increase.
  • asked a question related to Evaluation
Question
2 answers
Dear colleagues,
A research team is conducting a study, through a small survey, to evaluate emotions evoked by artworks that have been automatically generated using GANs. The survey presents 20 works of art in four different versions, each aimed at evoking one of the following emotions: amusement, delight, dread, and melancholy.
Your opinion is highly valued. Kindly access the form provided via the link and indicate the emotion you perceive each one of the 20 works of art to evoke. We suggest increasing the screen brightness to have a better view of the images.
Thank you for participating in this research. Your responses will be greatly appreciated. Feel free to share with your contacts.
Relevant answer
Answer
I'm not sure, as we will need to analyse the data and write the paper over the next year.
  • asked a question related to Evaluation
Question
1 answer
Distinguish between short-term challenges and potential long-term benefits. Assess the risks, including the potential for introducing harmful pathogens or unintended ecological consequences, and weigh them against the long-term benefits of sustainable agricultural practices.
Relevant answer
Answer
Some of these impacts include algal blooms that deplete oxygen in surface waters, pathogens and nitrates in drinking water, and the emission of odors and gases into the air. For long-term restoration of soil fertility, biofertilizers are necessary. The use of chemical fertilizers over a lengthy period damages the soil and reduces crop output; biofertilizers, on the other hand, improve the soil's ability to hold water while also adding vital nutrients such as nitrogen, vitamins, and proteins. Many factors affect the efficiency of biofertilizers, the most important being the C/N ratio, aeration, soil moisture content, soil temperature, soil pH, and the biofertilizer source and species.
  • asked a question related to Evaluation
Question
1 answer
I need experts in the field of Measurement and Evaluation.
Relevant answer
Answer
I teach Measurement and Evaluation as a subject at the M.Phil level. If any guidance or help is required, it will be my pleasure to assist.
  • asked a question related to Evaluation
Question
3 answers
When you read an epidemiological research paper what are some of the red flags you encounter in phrasing, statistical tests used, and glossing over controlling for confounding? For example, when you evaluate the COVID reports or vaccine research what are key elements that if not present call into question the research or if included raise doubts?
Relevant answer
Answer
So basically you're discussing the use of observational data to answer causal questions (which are often best answered using randomized trials). In that case, the most important thing will be bias (selection bias, measurement error, and confounding bias). I will specifically look for the details of the study design and statistical analyses on this point, i.e., how the authors came up with measures to mitigate bias when they recruited and followed up participants, or, if it is a retrospective study, how they can confirm the validity of the measurements.
Confounding is often seen as the most important problem in answering causal questions, so of course I'll look for the methods used to control for confounding. As you said in your comment, it may be multiple regression where the potential confounders are selected based on a DAG (or DAGs), some causal inference method such as propensity score or instrumental variable, or the confounders are controlled by design (restriction, matching, etc). However, I believe the use of one method is not sufficient; I'll read the description, too, to make sure that their actual implementation is valid. Unfortunately, you'll see that not all papers described their DAG in detail (e.g., how they came up with the variables and the relationships between variables); some didn't even publish their DAG. Or for propensity score methods, they didn't show the diagnostic results (SMDs, Love plot, histogram of weights).
It's hard to summarize everything I'll look for in a few keywords, but yeah, probably "did they handle bias properly?".
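The balance diagnostics mentioned above (SMDs after propensity-score adjustment) are simple to compute. A minimal sketch, with entirely hypothetical covariate data; the usual rule of thumb is that an absolute SMD below about 0.1 indicates acceptable balance:

```python
import numpy as np

def standardized_mean_difference(x_treated, x_control):
    """SMD: difference in group means divided by the pooled standard deviation.
    Absolute values below ~0.1 are commonly taken to indicate acceptable balance."""
    m1, m0 = np.mean(x_treated), np.mean(x_control)
    v1, v0 = np.var(x_treated, ddof=1), np.var(x_control, ddof=1)
    return (m1 - m0) / np.sqrt((v1 + v0) / 2.0)

# Hypothetical covariate (e.g., age) in treated vs. control groups
rng = np.random.default_rng(0)
treated = rng.normal(55, 10, 200)  # treated group drawn to be older on average
control = rng.normal(50, 10, 200)
print(f"SMD for age: {standardized_mean_difference(treated, control):.2f}")
```

In a real propensity-score analysis one would report the SMD for every covariate before and after weighting or matching (the Love plot mentioned above is exactly this comparison, one point per covariate).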
  • asked a question related to Evaluation
Question
1 answer
Dear Admin,
You uploaded a study in pdf and the system automatically entered the title of the pdf as the title of the article. I changed the pdf, but only the title remained.
How can I change this so that the title of the study is displayed? "What Does a Tourist See, or, an Environmental–Aesthetic Evaluation of a Street View in Szeged (Hungary)"
thanks,
Ferenc
Relevant answer
Answer
Ohh, sorry! I got it! :-)
F
  • asked a question related to Evaluation
Question
2 answers
Could someone please guide me on how to write this project?
Relevant answer
Answer
The following is a proposal template for evaluating the phytochemical characteristics of plants used in the treatment of malaria. Please tailor it to fit the specific requirements and guidelines provided by your institution or funding agency.
Title: Evaluating the Phytochemical Characteristics of Plants Used in the Treatment of Malaria
  1. Introduction: Malaria remains a significant public health challenge worldwide, particularly in regions where resources are limited. Traditional medicine, including the use of plant-based remedies, has been a vital source of healthcare for many communities. Various plants have been traditionally employed in the treatment of malaria symptoms, and their therapeutic potential is of great interest. This research proposal aims to evaluate the phytochemical characteristics of plants known for their anti-malarial properties, with the ultimate goal of discovering potential new sources of anti-malarial drugs.
  2. Objectives: The primary objectives of this study are as follows: a. Identify and collect plant species commonly used in traditional medicine for malaria treatment. b. Extract and analyze the phytochemical composition of the selected plants. c. Determine the potential anti-malarial activity of the plant extracts through in vitro assays. d. Investigate the safety profile of promising extracts using relevant toxicity tests.
  3. Methodology: a. Plant Collection: A comprehensive survey will be conducted to identify and collect plant species commonly used in the treatment of malaria in the target region. Ethnobotanical studies and consultations with traditional healers will guide the selection of plants for analysis.
b. Phytochemical Extraction: The selected plant samples will undergo a sequential extraction process using various solvents of increasing polarity (e.g., hexane, ethyl acetate, methanol). These extracts will be concentrated to obtain crude extracts for further analysis.
c. Phytochemical Analysis: The crude extracts will be subjected to phytochemical screening to identify the presence of various secondary metabolites, including alkaloids, flavonoids, terpenoids, phenolics, and saponins. Modern analytical techniques such as High-Performance Liquid Chromatography (HPLC) and Gas Chromatography-Mass Spectrometry (GC-MS) will be utilized to identify and quantify the individual bioactive compounds.
d. In vitro Anti-malarial Activity: The extracts will be evaluated for their anti-malarial potential against Plasmodium falciparum strains using established in vitro assays. The efficacy of the extracts will be compared to standard anti-malarial drugs.
e. Toxicity Assessment: The safety profile of promising extracts will be assessed through cytotoxicity studies using human cell lines. Additionally, acute toxicity studies on small animal models will be conducted to gauge potential adverse effects.
  4. Data Analysis: Quantitative data obtained from the phytochemical analysis and anti-malarial assays will be subjected to appropriate statistical analysis using software such as R or SPSS. The results will be interpreted, and correlations between phytochemical constituents and anti-malarial activity will be explored.
  5. Implications: This research is expected to provide valuable insights into the phytochemical composition of plants used in malaria treatment and their potential as sources of anti-malarial agents. The discovery of new bioactive compounds can contribute to the development of novel and affordable anti-malarial drugs. Moreover, the study may also highlight the importance of preserving traditional knowledge related to medicinal plants.
  6. Ethical Considerations: Ethical approvals will be obtained from relevant authorities and institutions before conducting the research. Informed consent will be obtained from participants, and proper measures will be taken to respect the intellectual property and cultural rights of the local communities involved.
  7. Budget: A detailed budget will be prepared, including costs for plant collection, laboratory analyses, research materials, and personnel.
  8. Timeline: A realistic timeline will be developed to ensure the smooth execution of the project, with appropriate milestones for each phase.
  9. Conclusion: By evaluating the phytochemical characteristics of plants used in the treatment of malaria, this research aims to contribute to the advancement of scientific knowledge in the field of anti-malarial drug discovery. The potential findings could have a significant impact on global health, especially in malaria-endemic regions.
  10. References: A list of relevant references and sources will be provided to support the proposal's background and methodology.
Remember, the format and specific content of the proposal may vary based on the requirements of the funding agency or institution you are submitting it to.
  • asked a question related to Evaluation
Question
4 answers
Dear colleagues,
I am reaching out to you for assistance in finding an approach that will allow me to evaluate the academic profiles of researchers, taking into account quantitative indicators and conducting an analysis of collaborations and funding.
I would greatly appreciate your responses and suggestions.
Best regards,
Sabina
Relevant answer
Answer
You are interested in finding an approach to evaluate the academic profiles of researchers, including quantitative indicators, collaborations, and funding. One possible approach is to use bibliometric analysis, which is a quantitative approach to analyze the characteristics of publications, authors, and citations within a particular field. Bibliometric analysis can help you identify the most productive researchers, the most impactful publications, and the patterns of collaborations and funding within your field of interest.
There are several tools and databases that you can use for bibliometric analysis, such as Web of Science, Scopus, and Google Scholar. These tools can provide a range of metrics and indicators, such as citation counts, h-index, and collaboration networks, that can help evaluate the academic profiles of researchers.
It's also important to note that bibliometric analysis has some limitations, such as potential biases in the selection of databases and metrics. Therefore, it's important to carefully consider the research question and the data available before conducting bibliometric analysis.
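As a concrete illustration of one of the metrics mentioned above, the h-index can be computed directly from a researcher's list of citation counts; a minimal sketch with made-up counts:

```python
def h_index(citations):
    """Largest h such that h of the papers each have at least h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4 (four papers have >= 4 citations each)
```

Databases such as Web of Science, Scopus, and Google Scholar report this value precomputed, but each counts a different set of citations, which is one source of the database-selection bias noted above.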
  • asked a question related to Evaluation
Question
1 answer
I am writing to invite you to submit a chapter to an edited monograph, titled The End is Nigh: Climate Anxiety in the Classroom, that explores the multiple ways in which climate anxiety permeates and disrupts students' and teachers' mental health within kindergarten to grade 12 classrooms.
The monograph is a contemporary examination of the state of climate anxiety within the field of education. Climate change is one of the most pressing issues of our time. While some continue to deny its existence and question humans' contributions to its effects, climate change is an undeniable fact (e.g., IPCC, 2018; IPCC, 2022). The media address climate change by describing it with doomsday language such as catastrophic, urgent, irreversible, and devastating. Popular climate change advocate Greta Thunberg (2019) reinforces the fear by stating, "I don't want you to be hopeful. I want you to panic. I want you to feel the fear I feel every day. And then I want you to act. I want you to act as you would in a crisis. I want you to act as if our house is on fire. Because it is." (para. 20)
With extensive exposure to the negative impact climate change can have on individuals, their families, communities, and the world, it is not surprising that individuals are experiencing climate anxiety (Albrecht, 2011; Clayton, 2020; Maran & Begotti, 2021; Ojala, 2015; Reyes et al., 2021; Weintrobe, 2019). The impact of climate change on mental health is not limited to those who have lived through a natural disaster associated with climate change (Howard-Jones et al., 2021). Within schools, classroom discussions and analysis of the effects of climate change on one's country and across the globe may affect students' and teachers' mental health in the form of climate anxiety (Helm et al., 2018; Maran & Begotti, 2021). As schools play a key role in educating students about climate change, it is essential that we understand the presence of climate anxiety within our classrooms and its impact on teachers and their students.
As such, this book will offer a global dialogue, critically scrutinizing academic and practical approaches to the universal challenges associated with climate anxiety within elementary, middle, and high schools. Authors from a variety of nations will illustrate that climate anxiety is a worldwide phenomenon that is often neglected in climate change dialogue.
Within our call for chapters, we invite contributions that explore the following three themes:
Theme 1: Climate Anxiety within Schools
• Theoretical foundations of climate change education and anxiety
• Intersectionality of culture and climate anxiety within the classroom
•  Principles of sustainable education, mental health, and climate anxiety
•  Pedagogical perspectives of anxiety, sustainable education, and climate change education
Theme 2: The Impact of Climate Anxiety on Students and Teachers
•  Evaluation of student and teacher experiences related to climate anxiety.
•  Exploration of the psychological manifestation of climate anxiety in students and teachers.
•  Critical examination of how climate anxiety impacts students’ learning and development.
•  Description of how climate anxiety occurs within the classroom.
•  Critical examination of how curriculum generates climate anxiety.
•  Critical examination of the impact of climate anxiety on teaching praxis
Theme 3: Addressing Climate Anxiety
•  Description of innovative and creative approaches to address climate anxiety in school settings.
•  Description of pedagogical strategies to address students’ climate anxiety.
•  Exploration of how climate anxiety should be addressed within schools.
•  Rebuilding a cohesive learning environment after climate change induced disasters.
•  Lessons learned from the challenges and successes of combating climate anxiety.
•  Examining the need of policy and administrative support for addressing climate anxiety.
The editors are interested in a range of submissions and encourage proposals from a variety of practitioners within the field of education including, academics, educators, administrators, and graduate students. Submissions should include theoretical stances and practical applications.
Audience:
The book will be useful in both academic and professional circles. The intended audience includes school administrators, educators, and advocates of climate change education and reform, all of whom may find it a useful teaching resource. In addition, the book can be used in a variety of graduate and undergraduate courses, including, but not limited to: educational psychology, curriculum development, current issues in education, methods and pedagogy, international education, and education law.
Proposals:
All submissions must be written in English.
Please submit as a PDF file for compatibility.
Prospective contributors should submit a 1000-word overview (excluding abstract) of their proposed chapter, including:
• Title
• Abstract – 250 words
• Contact information including name(s), institutional affiliation(s); email and phone number.
• A description of the chapter’s central argument that includes how your chapter addresses one of the central themes of the book.
•  A clear explanation of the research underpinning any assertions, as well as the main argument, purpose and outcomes presented in the chapter.
•  Where chapters will draw on specific research projects, we’d expect some detail in relation to the type of research, period, data set and size, and of course, the findings.
•  3-5 key words/phrases.
Font: Times New Roman size 12 font, double-spaced.
Please adhere to APA, 7th edition formatting standards.
Contributors will be sent chapter format and guidelines upon acceptance. Full manuscripts will be sent out for blind peer review.
Final Chapters:
Final papers should be approximately 7000 words, not including references.
Review Process:
Each author will be asked to review one chapter from the book and provide feedback to the author(s) and editors.
Important dates
Submission of title, abstract, and author(s) to editors - June 1, 2023
Notification of acceptance to authors - Sept 1, 2023
Submission of full manuscript to editors - January 8, 2024
Feedback from editors to authors - March 1, 2024
Submission of revised manuscripts to editors - May 1, 2024
Please send your submissions to: juliec@nipissingu.ca
Please feel free to contact the editors directly with any questions/queries:
Dr. Julie K. Corkett juliec@nipissingu.ca
Dr. Wafaa Abdelaal w.abdelaal@squ.edu.om
References:
Albrecht, G. (2011). Chronic environmental change: Emerging 'psychoterratic' syndromes. In Climate Change and Human Well-being (pp. 43-56). Springer.
Clayton, S. & Karazsia, B. (2020). Development and validation of a measure of climate anxiety. Journal of Environmental Psychology, 69, 101434. https://doi.org/10.1016/j.jenvp.2020.101434
Helm, S.V., Pollitt, A., Barnett, M.A., Curran, M.A., & Craig, Z.R. (2018). Differentiating environmental concern in the context of psychological adaption to climate change. Global Environmental Change, 48, 158–167. https://doi.org/10.1016/j.gloenvcha.2017.11.012
IPCC (2018). Annex I: Glossary In Masson-Delmotte, V., P. Zhai, H.-O. Pörtner, D. Roberts, J. Skea, P.R. Shukla, A. Pirani, W. Moufouma-Okia, C. Péan, R. Pidcock, S. Connors, J.B.R. Matthews, Y. Chen, X. Zhou, M.I. Gomis, E. Lonnoy, T. Maycock, M. Tignor, and T. Waterfield (eds.) Global Warming of 1.5°C. An IPCC Special Report on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty. In Press https://www.ipcc.ch/sr15/chapter/glossary/
IPCC. (2022). Climate Change 2022 Impacts, Adaptation and Vulnerability: Summary for Policymakers. Working Group II contribution to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change. [H.-O. Pörtner, D.C. Roberts, M. Tignor, E.S. Poloczanska, K. Mintenbeck, A. Alegría, M. Craig, S. Langsdorf, S. Löschke, V. Möller, A. Okem, B. Rama (eds.)]. Cambridge University Press. https://www.ipcc.ch/report/ar6/wg2/downloads/report/IPCC_AR6_WGII_FinalDraft_FullReport.pdf
Maran, D. A. & Begotti, T. (2021). Media exposure to climate change, anxiety and efficacy beliefs in a sample of Italian university students. International Journal of Environmental Research and Public Health, 18, 1-11. https://doi.org/10.3390/ijerph1879358
Ojala, M. (2015). Hope in the face of climate change: associations with environmental engagement and student perceptions of teachers’ emotion communication style and future orientation. The Journal of Environmental Education, 46(3), 133-148. https://doi.org/10.1080/00958964.2015.1021662
Reyes, M. E. S., Carmen, B. P. B., Luminarias, M. E. P., Mangulabnan, S. A. N. B., Ogunbode, C. A. (2021). An investigation into the relationship between climate anxiety and mental health among Gen Z Filipinos. Current Psychology. 1-9. https://doi.org/10.1007/s12144-021-02099-3
Thunberg, G. (2019, January 25). 'Our house is on fire': Greta Thunberg, 16, urges leaders to act on climate. ​The Guardian.​ ​https://www.theguardian.com/environment /2019/jan/25/our-house-is-on-fire-greta-thunberg16-urges-leaders-to-act-on-climate
Weintrobe, S. (2012). The difficult problem of anxiety in thinking about climate change. In S. Weintrobe (Ed.). Engaging with Climate Change: Psychoanalytic and Interdisciplinary Perspectives (pp 33-47). Routledge.
Relevant answer
Answer
Hi Professor,
Is it still acceptable to submit the chapter proposal?
  • asked a question related to Evaluation
Question
1 answer
my question is under game workshop
Relevant answer
Answer
Are you using a particular model such as the Kirkpatrick four-level model for program evaluation? I would like to engage your question, but need a reference point for specific discussion on evaluation and program success.
  • asked a question related to Evaluation
Question
6 answers
I am a graduate student, and my class is currently looking at the differences (and similarities) between research and evaluation. We are also currently looking at the work of Mertens’ Research and Evaluation in Education and Psychology (2020) when examining four educational research paradigms (I’ve attached a picture from that book that shows labels commonly associated with different paradigms as a quick descriptor).
I am wondering: What do you believe to be the differences and/or similarities of research and evaluation? Which of the four educational research paradigms (Postpositivism; Constructivism; Transformative; Pragmatic) do you most align with?
Thank you in advance for sharing your thoughts!
Relevant answer
Answer
Your question has been asked a million times. Here are three brief thoughts.
1. Colloquially (and often pragmatically), research and evaluation mean the same thing. In a technical sense, there is a difference. All evaluations are research, but not all research is evaluation.
2. Evaluation means to judge or to assign a value. Most likely one ranks practices and programs. In contrast, research without evaluation can be non-judgmental and descriptive, causal, etc. One's research aims dictate the purpose; one would no more evaluate one's religious belief as correct/incorrect than one hired to evaluate an educational intervention would purely describe it without a judgment of worth.
3. One's purpose influences one's paradigm, but don't get hung up on the indefensible idea one is connected with a solo paradigm throughout a study. I like the marriage of pragmatism plus critical realism (think Joseph Maxwell). We act with what tools we have, though we should be optimal versus satisficing. Critical realism fits qualitative research and cuts across most paradigms; we investigate a sample because each has a story to tell, informed by a perspective. Still, everything isn't reduced to unknowable and completely individuated.
  • asked a question related to Evaluation
Question
2 answers
Greetings,
I am currently a graduate student taking Introduction to Research and Evaluation in Education. I've been tasked with posing the question of, "How does one define Research vs Evaluation?"
When I was a special education teacher, I completed many evaluations of student abilities both academically and cognitively and I see evaluation as a means to determine a path for a student's education.
Research on the other hand, entails posing a question and then determining possible answers while searching scholarly posts and journals.
Please comment on my question at your earliest convenience.
Mertens, D. M. (2020). Research and Evaluation in Education and Psychology (5th ed.). Sage Publications.
Relevant answer
Answer
Thank you. I appreciate your insight.
  • asked a question related to Evaluation
Question
1 answer
Hello. I am currently a student at Arizona State University in the Mary Lou Fulton College of Education as a graduate student. We have been tasked to define research and evaluation and explore the differences and similarities between them. The text we are using is Research and Evaluation in Education and Psychology by Donna Mertens (2020) which gives different models and paradigms for these two subjects. From my understanding of the text, research is exploration of topics and developing theories about that topic. Evaluation is the methodology to ensure you are properly investigating, documenting, and enhancing the world around you. I am interested in other people's viewpoints and would like to hear what you think.
Thank you.
Relevant answer
Answer
Research and evaluation serve slightly different purposes. Research is the process of investigating a topic in order to gain new insights or discover new knowledge. This may involve collecting and analyzing data through various methods, such as qualitative interviews or quantitative surveys. The focus of research is often on generating new ideas or theories.
In contrast, evaluation is the process of determining the value or quality of a program, policy, or intervention. Evaluation may involve assessing the effectiveness of a program in achieving its objectives, identifying areas for improvement, or determining whether the program is worth continuing or funding. The focus of evaluation is often on making decisions about the future of a program or policy.
While there is some overlap between research and evaluation, evaluation tends to be more focused on decision-making and program improvement, whereas research is more focused on generating new knowledge or theories. However, both research and evaluation are important for improving our understanding of the world and for making informed decisions about policies and programs.
  • asked a question related to Evaluation
Question
6 answers
I have developed a new technique. Although I have performed multiple experiments and obtained various "meaningful" results, I am unsure about how to evaluate the technique's performance and ensure the confidence of the obtained results since there are no other techniques available for comparison. What are the best practices for evaluating and validating a new technique in the absence of a benchmark tool or dataset?
Relevant answer
Answer
I would argue you can always compare to some state-of-the-art techniques. Then you can build up the comparison as "To evaluate our technique, we compare it to <state-of-the-art technique>. Although <state-of-the-art technique> works inherently different than our technique, comparing results enables us to evaluate how we measure up to state-of-the-art."
In the discussion part you can then discuss the results of your technique and how it relates to state-of-the-art. Is your performance better? Do you make a better trade-off between criteria? Are you approximately as good as state-of-the-art but easier to apply? etc.
As long as you explain why you set up a certain comparison and why you select specific state-of-the-art techniques, the evaluation has value.
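One practical way to build confidence in the obtained results even without a benchmark is to report uncertainty around your own performance estimates rather than a single number. A minimal percentile-bootstrap sketch, with invented per-trial scores:

```python
import random
import statistics

def bootstrap_ci(scores, n_resamples=10_000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for the mean of per-trial scores."""
    rng = random.Random(seed)
    means = sorted(
        statistics.fmean(rng.choices(scores, k=len(scores)))
        for _ in range(n_resamples)
    )
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Invented per-trial accuracy scores from repeated experiments
scores = [0.82, 0.79, 0.85, 0.81, 0.88, 0.77, 0.84, 0.80]
lo, hi = bootstrap_ci(scores)
print(f"mean={statistics.fmean(scores):.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
```

A narrow interval across repeated experiments supports the claim that the results are "meaningful" and not an artifact of one lucky run, which complements the state-of-the-art comparison suggested above.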
  • asked a question related to Evaluation
Question
1 answer
Compare and contrast the different methods for assessing soil compaction, such as bulk density measurements, penetrometer tests, and visual assessments. Evaluate the strengths and limitations of each method, and their suitability for different soil types and land uses.
Relevant answer
Answer
Find the doc attached
  • asked a question related to Evaluation
Question
1 answer
Hello there! In my PhD I proposed mechanisms to support Metaverse development focused on software engineering education (coding, modeling, project management, etc.). If you're an XR developer, researcher, or professor/teacher who uses XR technologies for education, I would be very grateful if you would contribute to my research. Evaluation link: https://forms.gle/871TyJapysdfKdmTA
Relevant answer
Answer
I have an avatar-based virtual world where you can code, terraform, and build.
It is a platform equivalent to Second Life.
Having server access means total control.
You are welcome to have a look if you want. It is a good platform for bringing people in as avatars and for education.
  • asked a question related to Evaluation
Question
3 answers
How do we evaluate an opportunity in a business? Is there a model specific to this issue? I would appreciate your input.
Relevant answer
  • asked a question related to Evaluation
Question
4 answers
Hi,
We have a few AI (Artificial Intelligence) solutions for different problems in public health. Some of the problems are binary in nature, while the rest are continuous. We need help calculating the sample size for measuring the accuracy of the AI (how reliably it predicts the outcome).
For example, we developed an AI solution to estimate weight of a baby. We expect the AI to predict the weight reliably in 90% of babies - error to be less than 10% of actual weight by gold standard equipment. I can calculate sample size in two ways, I think:
  1. Assuming that variable of interest is binary - reliability of the AI prediction (yes/no)
  2. Assuming that variable of interest is continuous - actual error of AI prediction (grams - or %)
What should we choose? For the second option, which SD should we use for the sample-size calculation?
Thanks in advance for reading and for your suggestions.
PS - both methods are applied to the same study participants.
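For reference, both framings of the question can be sized with the standard normal-approximation formulas (estimating a proportion within a margin, or estimating a mean given an assumed SD). The 400 g SD and the margins below are illustrative assumptions, not values from the study:

```python
from math import ceil
from statistics import NormalDist

def n_for_proportion(p, margin, conf=0.95):
    """Sample size to estimate a proportion p to within +/- margin."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return ceil(z ** 2 * p * (1 - p) / margin ** 2)

def n_for_mean(sd, margin, conf=0.95):
    """Sample size to estimate a mean to within +/- margin, given an SD guess."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return ceil((z * sd / margin) ** 2)

# Option 1: binary framing -- expected reliability ~90%, +/- 5% precision
print(n_for_proportion(0.90, 0.05))  # -> 139
# Option 2: continuous framing -- assumed SD of error 400 g, +/- 50 g precision
print(n_for_mean(400, 50))           # -> 246
```

For the continuous case, the SD is usually taken from a pilot study or from published data on the same measurement; whichever framing gives the larger n is the conservative choice when both analyses are run on the same participants.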
Relevant answer
Answer
Dear sir,
Download the Aidos-x system for free from the site http://lc.kubagro.ru/aidos/_Aidos-X.htm . It is an excellent intelligent system for diagnosing and classifying human diseases.
Methods for using the Aidos-x system to diagnose human diseases are presented in narrated lectures ("Using automated system-cognitive analysis for the classification of human organ tumors" and "Intelligent system for diagnosing early stages of chronic kidney disease"), which can be downloaded right now from the website https://www.patreon.com/user?u=87599532 . After subscribing to this site, you will receive databases for medical research to identify the diseases discussed in the lectures. The skills acquired working with the Aidos-x system will allow you to apply for grants to perform scientific research in medicine.
Sincerely, Vladimir Ryabtsev, Doctor of Technical Sciences, Professor of Information Technology
  • asked a question related to Evaluation
Question
2 answers
I have submitted my paper to an SSCI journal, and the status "Evaluating Reviews" has lasted more than 20 days. What does this mean?
Relevant answer
Answer
I agree with the interpretation by Marina Bazhutina about what this means - that the editor is considering the reviews which have been sent back. At this stage, you can only wait. Considering it is 20 days, this is a lot less than what some other people have said on RG!!
  • asked a question related to Evaluation
Question
4 answers
Hello, I am a graduate student at Arizona State University in the Mary Lou Fulton Teachers College, pursuing an M.Ed in Learning Design and Technology, and I am currently enrolled in Intro to Research and Evaluation in Education. After completing this week's reading, formulating definitions of both research and evaluation, and comparing the two, I have a question to pose.
How can empirical data best be used in research, and, more importantly, how can subjective data benefit evaluations that aim to measure worth? Is evaluation as cut-and-dried as it seems, or is there room for subjectivity?
Relevant answer
Answer
I'm glad to see comments questioning the distinction between empirical and subjective data. In my view, all data are empirical. And quantitative data are not as "objective" as we are often led to believe. In fact, quantitative data are also prone to interpretation based on theoretical frameworks, designs and methods, which often reflect researchers' subjective views about the nature of "reality" and how we access it.
  • asked a question related to Evaluation
Question
3 answers
I have been reading about the SSI scale (Gadzella, 1991) and saw a revised version available (Gadzella, 2012). However, I am unable to find the scale containing 53 items. Does anyone know how to obtain the inventory?
Gadzella, B. M., Baloglu, M., Masten, W. G., & Wang, Q. (2012). Evaluation of the student life-stress inventory-revised. Journal of Instructional Psychology, 39(2).
Relevant answer
I think the following paper will help you out:
Stress and depression in undergraduate students during the COVID-19 pandemic: Nursing students compared to undergraduate students in non-nursing majors
LMB Thomas - Journal of Professional Nursing, 2022 - Elsevier
  • asked a question related to Evaluation
Question
5 answers
I do not want to analyze the text inside the book I just want to explore the existence of the main components.
Relevant answer
Answer
Can you look for themes? Sure. Thematic analysis can be used. Obviously your data can be "thin," but a well-structured sample can improve validity and reliability. We often think there is only Braun and Clarke, but Fereday and Muir-Cochrane provide a competing perspective, and Boyatzis also applied thematic analysis to documents. My recommendation: be pragmatic. Thematic analysis is adaptable to almost any situation.
Fereday, J., & Muir-Cochrane, E. (2006). Demonstrating rigor using thematic analysis: A hybrid approach of inductive and deductive coding and theme development. International journal of qualitative methods, 5(1), 80-92.
  • asked a question related to Evaluation
Question
2 answers
Does anyone know how to access the software "Similarity Evaluation System for Chromatographic Fingerprint of Traditional Chinese Medicine (Version 2004A)"?
Otherwise, which other software can I use to analyze HPLC-UV fingerprint similarities? We are using a Thermo Fisher UHPLC system with Chromeleon.
Thanks.
Relevant answer
Answer
This software seems to be available here:
but I haven't tried it yet.
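While tracking down that package, note that the core computation is straightforward to reproduce: fingerprint similarity is typically the cosine (congruence) coefficient between the peak-area vectors of a sample and a reference chromatogram. A minimal Python sketch, with illustrative peak areas:

```python
import math

def cosine_similarity(x, y):
    """Cosine (congruence) coefficient between two aligned peak-area vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    norm = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y))
    return dot / norm

reference = [12.0, 3.5, 8.1, 0.9, 4.4]  # areas of the common peaks (reference)
sample    = [11.2, 3.9, 7.6, 1.1, 4.0]  # same peaks in the test sample

similarity = cosine_similarity(reference, sample)  # close to 1.0 for similar profiles
```

The peaks must first be aligned by retention time across chromatograms; correlation-based variants of the coefficient are also common.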
  • asked a question related to Evaluation
Question
4 answers
To date, two Red Lists of fishes have been published in Bangladesh by IUCN, in 2000 and 2015. Both focus mainly on inland fishes. Is there any work on the marine fishes of Bangladesh? We had a publication in which we listed the threatened marine fishes of Bangladesh based on the global evaluation by the IUCN.
Hossain, M.A.R. and Hoq, M.E. 2018. Threatened Fishes and other aquatic animals of Bay of Bengal. FAN – Fisheries & Aquaculture News 5: 37-39.
I am looking for other works, publications and reports in this regard.
Relevant answer
Answer
Sorry, I was hoping to help you
  • asked a question related to Evaluation
Question
6 answers
I started reading the textbook Research and Evaluation in Education and Psychology this week. In Chapter 2 (Evaluation), Mertens (2020) asserts that evaluation and research share similarities, but that the differences between the two types of systematic inquiry are clearly distinguishable (p. 86). After learning more about research and evaluation, I have come to the conclusion that the two practices do not overlap as much as one might think.
Research and evaluation have different goals:
  • Research is a system used for knowing or understanding
  • Evaluation is a system used for discovering merit, worth, or quality
Research and evaluation have different processes:
  • Research is made up of 8 main steps: identify worldview, establish focus, literature review, identify design, identify data sources, collection methods, analysis, and future direction
  • Evaluation is made up of 3 main steps: focusing the evaluation, planning the evaluation and implementing the evaluation
Research and evaluation use different terminology:
  • The jargon used in research is similar to terminology you might be familiar with from science courses and experiments; variables, control groups, sample, subject
  • The jargon used in evaluation is more similar to terms that you might be familiar with from business or management courses; merit, worth, monitoring, assessment, internal evaluators, stakeholders.
Taking into consideration stark differences between the goals, processes and terminology used in research vs. evaluation, I would argue that these two practices are not intertwined with one another. That is not to say that they can never be used together, rather that they can be used independently and with separate goals in mind.
I would love to hear some different perspectives on research vs. evaluation and how you may be applying them in your own work.
References:
Mertens, D. M. (2020). Research and Evaluation in Education and Psychology: Integrating diversity with quantitative, qualitative, and mixed methods (5th ed.). SAGE.
Relevant answer
Answer
I would treat evaluation as a specific kind of research, better known as "Evaluation Research." In particular, I think Evaluation Research is distinguished by its goals and its research questions, rather than anything about its methods.
  • asked a question related to Evaluation
Question
16 answers
In the ScholarOne system, after peer review is completed, the status changes to "Evaluating Recommendation". How long does this status typically take before hearing back from the journal editor?
Relevant answer
Answer
I agree with Avishag Gordon's comment that it should not take long; Faraed Salman's suggestion of "a few weeks" already seems long to me. My experience is that it is faster, perhaps 1-2 weeks, and I have noticed that some journals (and their editors) are quite fast and some are less so.
  • asked a question related to Evaluation
Question
9 answers
Evaluation metrics needed
Relevant answer
Answer
Dear Muralidhar Patruni, the best research is that which helps the community more than other research does, in all directions.
  • asked a question related to Evaluation
Question
6 answers
I am looking for a study on differences between institutions in student evaluations of faculty. What I am interested in knowing is whether there is a difference between students at prestigious institutions and those at ordinary universities and colleges. One might assume that at private, competitive institutions the students will be more critical and demanding, but I can't find any evidence or even a comparative study on the subject; I've been searching Google Scholar for a few days now.
Relevant answer
Answer
Visit also the following useful RG link:
  • asked a question related to Evaluation
Question
6 answers
English language centers in the non-English-speaking world assess the English of their teachers and professors using tests designed for U.S., Canadian, British, or Australian environments. These specific contexts at times do not match the academic needs of language centers outside the U.S. or Great Britain, for instance.
Relevant answer
Answer
I think the best of these tests is the IELTS, because of its ability to distinguish between the language proficiency levels of test-takers.
  • asked a question related to Evaluation
Question
3 answers
Evaluate the epistemological and ontological differences between different research methodologies, and
evaluate the strengths and weaknesses of a variety of business and management research methods.
Relevant answer
Answer
To study this topic, I think the comparative research method is useful.
  • asked a question related to Evaluation
Question
1 answer
I have created and validated a Campus Climate Identity Survey as part of my doctoral work at NYU, dealing with my home institution, and am now looking for collaborators. The survey was validated with the pilot and is really designed as a way to get comprehensive data on all the schools in academic health science centers, not just the medical school component. If you are looking to gain a comprehensive view of the plight of your staff, students, and faculty at an academic health science center, I'd love to chat with you.
Relevant answer
Answer
Thanks for the great information. Where does it take place?
  • asked a question related to Evaluation
Question
1 answer
Evaluation metrics in fuzzy systems.
Relevant answer
Answer
Any of the usual error metrics, such as MSE, IAE, ISE, ITAE, MAE, etc.
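For concreteness, discrete-time versions of these metrics take only a few lines each; a minimal Python sketch for an error signal e[k] sampled with step dt (signal values illustrative):

```python
def mse(e):
    """Mean squared error."""
    return sum(x * x for x in e) / len(e)

def mae(e):
    """Mean absolute error."""
    return sum(abs(x) for x in e) / len(e)

def iae(e, dt):
    """Integral of absolute error (rectangular approximation)."""
    return sum(abs(x) for x in e) * dt

def ise(e, dt):
    """Integral of squared error."""
    return sum(x * x for x in e) * dt

def itae(e, dt):
    """Integral of time-weighted absolute error; weights late errors more."""
    return sum(k * dt * abs(x) for k, x in enumerate(e)) * dt

error = [1.0, -0.5, 0.25, -0.1, 0.0]  # e.g. a decaying step-response error
metrics = {"MSE": mse(error), "MAE": mae(error),
           "IAE": iae(error, 0.1), "ISE": ise(error, 0.1),
           "ITAE": itae(error, 0.1)}
```

ISE/MSE penalize large transient errors most, while ITAE favors responses that settle quickly, which is why the choice of metric matters for controller tuning.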
  • asked a question related to Evaluation
Question
1 answer
I am on a state oral health department fellowship, and I am severely frustrated with picking an evaluation project. It has been months of literature research, brainstorming, planning, and talking with stakeholders. The topic I want to pursue is the Sugar-Sweetened Beverage intervention guide (similar to the tobacco-cessation 5A's) at the state level. However, I cannot formulate a question, target audience, and data to back up the evaluation. Has anyone done a state-level evaluation of an intervention? Any pointers? Thanks.
Relevant answer
Answer
Clare Mendioro it is important to identify the state of the program (e.g., early stages, well-established, etc.) so that you can propose the proper evaluation form (e.g., implementation study, outcomes-based, impact, etc.). Once those have been established, then you can frame your evaluation questions based on what others who have completed that evaluation form have asked. From the evaluation questions will emerge the necessary target audience and data you will need.
  • asked a question related to Evaluation
Question
5 answers
Hello,
I am a grad student at Arizona State University earning my degree in Learning & Curriculum in Gifted Education. I am enrolled in Introduction to Research and Evaluation. The major assignment in this course is to write a Research Proposal with Literature Review. We are currently discussing the differences in Research and Evaluation.
As a 5th grade teacher, I believe that research is a process to gain knowledge and information, whereas evaluation assesses the success of a program, organization, etc.
What is your educational role, and how do you differentiate between Research and Evaluation?
Relevant answer
Answer
I see no practical difference. However, evaluation typically identifies a single causal path from treatment to effect. Tools like random assignment, instrumental variables, regression discontinuity, and difference-in-differences are especially useful. “Research” may do the same, but often it takes a broader perspective where causal paths are entangled. Good research is often exploratory, but good evaluation always attempts to establish causality.
  • asked a question related to Evaluation
Question
5 answers
I am a graduate student at Arizona State University taking a course in research and evaluation in education. In our class, we are comparing and contrasting research and evaluation. After having read our text, (Mertens, 2020) Research and Evaluation in Education and Psychology, the author discusses the differences and parallels between the two. I had previously considered the two as interchangeable terms, or at least going hand-in-hand, however now there are evident distinctions that I can identify. The two do have overlap, but to me, research seems to be more of a process of uncovering and collecting new information in order to determine the "why" of a problem, scenario, or phenomenon. Evaluation, on the other hand, presents to me as a thorough process through which already available information is compiled to identify the "how well" or worth/value of an existing program or practice.
I am curious as to others' opinions on this topic. Do research and evaluation overlap, or are they singular and distinct? How are they used together? Must they be?
We are also discussing four paradigms that frame research and evaluation. Mertens (2020) describes them as post-positivism, constructivism, transformative and pragmatic. Do you feel that one paradigm would be more useful than another in carrying out research dealing with the efficacy of teachers of gifted populations based on their understanding of those students?
Relevant answer
Answer
Dear Ms. Hunt!
You raised a very important issue to consider. May I argue that research is a "collective platform" for working with science while evaluation (measuring) is a toolset to advance science and education. I searched for resources to support my claim:
1) Valle, N., Brishke, J., Shenkman, E. et al. Design, Development and Evaluation of the Citizen Science Cancer Curriculum (CSCC): a Design and Development Case Study. TechTrends (2022). https://doi.org/10.1007/s11528-022-00737-6, Open access:
2) Blankenberger, B., Gehlhausen Anderson, S. & Lichtenberger, E. Improving Institutional Evaluation Methods: Comparing Three Evaluations Using PSM, Exact and Coarsened Exact Matching. Res High Educ 62, 1248–1275 (2021). https://doi.org/10.1007/s11162-021-09632-0, Open access:
3) Álex Escolà-Gascón, Josep Gallifa, How to measure soft skills in the educational context: psychometric properties of the SKILLS-in-ONE questionnaire, Studies in Educational Evaluation, Volume 74, 2022, https://doi.org/10.1016/j.stueduc.2022.101155. Available at: https://www.sciencedirect.com/science/article/abs/pii/S0191491X22000323
4) Daraio, C., Vaccari, A. How should evaluation be? Is a good evaluation of research also just? Towards the implementation of good evaluation. Scientometrics (2022). https://doi.org/10.1007/s11192-022-04329-2, Open access:
Yours sincerely, Bulcsu Szekely
  • asked a question related to Evaluation
Question
7 answers
Hello everyone!
I am a graduate student at Arizona State University, and we are focusing on the difference between research and evaluation. I teach Kindergarten and am working toward my Literacy Education graduate degree. In my opinion, research focuses on gaining new knowledge about a topic or purpose, while evaluation focuses on a program or purpose already in use, asking questions about it to understand its effectiveness. In your opinion, what is the major difference between research and evaluation?
As a classroom teacher, how do you think this could be utilized or defined in a classroom, especially at the primary level?
Relevant answer
Answer
Research is based on evaluating reality and then making proposals for its development, just as evaluation is based on the results of assessing actual performance and then suggesting procedures for development; both serve as evaluative tools.
There is, however, an essential difference between scientific research and evaluation: scientific research is carried out according to the steps of the scientific method; it begins with defining the problem and ends with providing better solutions to it, according to a scientific methodology.
  • asked a question related to Evaluation
Question
4 answers
As part of my fellowship, I want to evaluate the oral health surveillance system. I have already read CDC's guidelines for evaluating surveillance systems, but I am still confused about how to assess one. Does anyone have examples of work or reviews done for this type of evaluation?
Relevant answer
Answer
Dear Mendioro,
The CDC guideline has 9 components of evaluation. Some of these are quantitative, like sensitivity, PPV and data quality, so they are easily calculated. However, some of the indicators are qualitative, and these have to be ascertained through interviews with the relevant stakeholders.
We have used the CDC guideline; it is easy and user-friendly. I will share my evaluation report, which covers all these CDC indicators for the evaluation of public health programs.
Please mail me at drsandeepguriro@gmail.com
Thanks
  • asked a question related to Evaluation
Question
3 answers
I need to statistically analyse the speed-accuracy trade-off for a reaction time task.
The design of my study is: 2*2*3 (group, task difficulty, valence condition)
I want to check whether there is a speed-accuracy trade-off between the two groups under low and high task difficulty. I came across this paper but the statistical analysis given here is quite confusing to me.
Could someone tell me the stepwise process in SPSS?
Relevant answer
Answer
I do not recommend the approach used in the linked study. Truth be told, I cannot figure out why the authors used the statistical procedures they did, and I'm not convinced the results support the claims they're trying to make.
I suggest keeping things simple. Do two separate analyses - one with the reaction time data - and one for participant accuracy. In SPSS, you could conduct a multiple regression for the RT data. You could then conduct a binomial logistic regression for the accuracy data, coding responses as 0 = error, 1 = correct.
I presume the task difficulty manipulation is intended to create variability in participant accuracy. If participants are using a speed-accuracy trade-off strategy, you should observe the following pattern across the two tests: as accuracy decreases, response time also decreases and vice versa.
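As a quick descriptive companion to those two analyses (not a substitute for them), one can correlate per-condition mean RT with per-condition accuracy: a strong positive correlation, i.e. slower cells are more accurate, is the pattern consistent with a speed-accuracy trade-off. A minimal Python sketch with illustrative numbers:

```python
import math
import statistics

def pearson(x, y):
    """Pearson correlation coefficient, written out for clarity."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# (mean RT in ms, proportion correct) per condition cell; illustrative values
conditions = [(420, 0.78), (455, 0.84), (490, 0.88), (515, 0.93)]
rts = [float(c[0]) for c in conditions]
accs = [c[1] for c in conditions]

r = pearson(rts, accs)
trade_off_suspected = r > 0  # slower but more accurate across cells
```

With real data you would compute these cell means per participant; the regression analyses above then test the pattern inferentially.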
  • asked a question related to Evaluation
Question
3 answers
What is the best and simplest tool (other than Excel) for making comparison charts such as line charts for algorithms comparison and evaluation purposes?
Relevant answer
Hi,
It depends on which operating system you use on your desktop. Some suggestions I have for you are:
1 - Graph
2 - Graphmatics
3 - R Studio
These programs are for building line graphs; hope I helped.
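If Python is an option on your system, matplotlib also covers this use case well; a minimal sketch of an algorithm-comparison line chart (algorithm names and runtimes are illustrative):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; safe for scripts without a display
import matplotlib.pyplot as plt

problem_sizes = [100, 200, 400, 800]
runtime_a = [0.8, 1.9, 4.1, 8.7]  # algorithm A, seconds (illustrative)
runtime_b = [1.2, 2.2, 4.0, 7.1]  # algorithm B, seconds (illustrative)

fig, ax = plt.subplots()
ax.plot(problem_sizes, runtime_a, marker="o", label="Algorithm A")
ax.plot(problem_sizes, runtime_b, marker="s", label="Algorithm B")
ax.set_xlabel("Problem size")
ax.set_ylabel("Runtime (s)")
ax.legend()
fig.savefig("comparison.png")  # or fig.show() in an interactive session
```

The same data can be exported and styled for publication without manual chart editing, which is the main advantage over spreadsheet tools.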
  • asked a question related to Evaluation
Question
3 answers
Dear colleagues,
I’m conducting a study that is intended to identify determinants of evaluation use in evaluation systems embedded in public and non-profit sectors. I’m planning to conduct a survey on a representative sample of organizations that systematically evaluate the effects of their programs and other actions in Austria, Denmark, Ireland and the Netherlands. And here comes my request: can anyone of you, familiar with evaluation practice in these countries, suggest what types of organizations I should include in my sample? Are there any country-specific organizations active in the evaluation field that I should not omit?
It is obvious to me that in all these countries evaluation is present in central and local government (ministries, municipalities, etc.) as well as institutions funding research or development agencies, but I also suspect that there might be some country-specific, less obvious types of organisations which are important “evaluation players”.
Thanks for any hints.
Relevant answer
Answer
Austria is conducting such an evaluation through the AUVA. You can contact him and use my name:
DI Georg Effenberger
Austrian Workers’ Compensation Board
Head of Prevention Department
Vienna, Austria
  • asked a question related to Evaluation
Question
7 answers
Through multiple empirical studies, I have collected user needs for an ICT intervention. In this study, I intend to design a prototype and then evaluate it to check whether the user needs are captured in the proposed design.
What is the most suitable approach: quantitative, qualitative or mixed?
Are we evaluating the features of the prototype or the user requirements?
Relevant answer
Answer
Hello Manoja Weerasekara. The think-aloud protocol is considered by some authors a widely used method for usability testing of software, interfaces, websites, and (instructional) documents. It makes it possible to perceive users' satisfaction through their expressions, frustrations, frictions and comments; an interview or Likert scale focused on user satisfaction could be added after the think-aloud procedure.
The basic principle of this method is that potential users are asked to complete a set of tasks with the tested artefact and to constantly verbalize their thoughts while working on the tasks.
Of all the possibilities for prototype testing, the think-aloud protocol is the most suitable choice for observing usability and interaction problems from the users' point of view. According to Villanueva [4], the technique consists of a researcher observing 1-4 users doing specific tasks within a controlled environment. The users' actions and thoughts are described aloud by the users themselves in real time, and the researcher records them in written notes, video or voice recordings.
The participants perform a simulation of tasks using the prototype. The think-aloud protocol has to be executed individually, with the steps of the task closely observed and the participants verbalizing actions, thoughts and confusion: do users understand what the app is? Do they cognitively understand the icons and the actions to take? What is their visual perception of colors, type, size, icons and visual impact? Is the process an easy and smooth experience? How do users explore the options?
Nevertheless, I have been finding that for remote research, cooperative evaluation can be more useful and easier to execute.
  • asked a question related to Evaluation
Question
8 answers
propolis (Bee Glue) and Evaluate Its Antioxidant Activity
Relevant answer
Answer
Propolis can interact with ACE2 and TMPRSS2, potentially blocking or reducing SARS-CoV-2 invasion of the host cell. Propolis has also shown promise as an aid in the treatment of various comorbidities that are particularly dangerous in COVID-19 patients, including respiratory diseases, hypertension, diabetes, and cancer.
  • asked a question related to Evaluation
Question
3 answers
I am now working on the project "Evaluate the Impact of the Implementation of GDPR on the Role of the European Court". Before conceptualizing it for discussion, I need to collect some data and develop some ideas for the discussion. Do you have any articles or research to recommend on this topic?
Relevant answer
Answer
If you search on the European Court website, what exactly are you looking for?
  • asked a question related to Evaluation
Question
4 answers
Dear colleagues, dear participatory-action research practitioners,
I would like to open the discussion on the criteria for evaluating participatory research (whether it is action-research, participatory action research, CBPR, etc.).
How do you evaluate participatory research projects that are submitted for research grants and/or publications (papers) ? Do you apply the same criteria as when you evaluate non-participatory research projects? Or have you developed ways to evaluate non-scientific dimensions such as the impact of this research on communities, the quality of connections between co-researchers? And if so, how do you proceed ?
Thank you in advance for sharing your experiences and thoughts.
For French-speaking colleagues, feel free to reply in French! What criteria do you use to evaluate participatory research projects? Do you use the scientific evaluation criteria that you apply to other types of research, or do you have specific criteria and, if so, which ones?
Baptiste GODRIE, Quebec-based social science researcher & participatory action research practitioner
Relevant answer
Answer
The Health Research Board in Ireland has adopted the following approach to its evaluation of grant applications:
...Until recently, public reviews have been used solely to provide direct feedback to applicant teams so they could take that feedback on board, and thereby gain experience of incorporating Public and Patient Involvement (PPI) into their research proposals.
From now on, integrating the public reviews into panel decision making will be the norm for calls which undergo public review, and this step is in line with our published plans to strengthen PPI input into HRB decision-making processes.
In addition to feedback on the scientific aspects from the international peer-reviewers, the HRB receives written feedback on the quality of Patient and Public Involvement (PPI) from two public reviewers for each application ahead of the panel meeting.
All of the reviewers’ comments (both public and scientific) are passed on to the applicants, who have the opportunity to respond. The reviews and the related applicant responses are made available to the panel before they meet....
  • asked a question related to Evaluation
Question
5 answers
Comparative Evaluation of Selected high and Low Molecular Weight Antioxidant Activity in Rat.
Relevant answer
Answer