Evaluation - Science topic
Evaluation in all areas.
Questions related to Evaluation
The aim of this research is to investigate the impact of compensation and reward systems on employees' willingness to remain in a particular job.
The study group is working-class postgraduate students at the University of Sunderland.
The research method will be interviews with prospective students.
Dear colleagues,
A research team is conducting a small survey to evaluate the emotions evoked by works of art that were automatically generated using GANs. The survey presents 20 works of art in four different versions, each aimed at evoking one of the following emotions: amusement, delight, dread, and melancholy.
Your opinion is highly valued. Kindly access the form provided via the link and indicate the emotion you perceive each of the 20 works of art to evoke. We suggest increasing the screen brightness for a better view of the images.
Thank you for participating in this research; your responses are greatly appreciated. Feel free to share the survey with your contacts.
Distinguish between short-term challenges and potential long-term benefits. Assess the risks, including the potential for introducing harmful pathogens or unintended ecological consequences, and weigh them against the long-term benefits of sustainable agricultural practices.
I need experts in the field of Measurement and Evaluation.
When you read an epidemiological research paper, what are some of the red flags you encounter in the phrasing, in the statistical tests used, and in glossing over the control of confounding? For example, when you evaluate COVID reports or vaccine research, what key elements, if absent, call the research into question, or, if present, raise doubts?
Dear Admin,
I uploaded a study as a PDF, and the system automatically entered the title of the PDF as the title of the article. I changed the PDF, but only the old title remained.
How can I change this so that the title of the study is displayed? "What Does a Tourist See, or, an Environmental–Aesthetic Evaluation of a Street View in Szeged (Hungary)"
thanks,
Ferenc
Could someone please guide me on how to write this project?
Dear colleagues,
I am reaching out to you for assistance in finding an approach that will allow me to evaluate the academic profiles of researchers, taking into account quantitative indicators and analyzing collaborations and funding.
I would greatly appreciate your responses and suggestions.
Best regards,
Sabina
I am writing to invite you to submit a chapter to an edited monograph, titled The End is Nigh: Climate Anxiety in the Classroom, that explores the multiple ways in which climate anxiety permeates and disrupts students' and teachers' mental health within kindergarten to grade 12 classrooms.
The monograph is a contemporary examination of the state of climate anxiety within the field of education. Climate change is one of the most pressing issues of our time. While some continue to deny its existence and question humans' contributions to its effects, climate change is an undeniable fact (e.g., IPCC, 2018; IPCC, 2022). The media address climate change by describing it in doomsday language such as catastrophic, urgent, irreversible, and devastating. Popular climate change advocate Greta Thunberg (2019) reinforces the fear by stating, "I don't want you to be hopeful. I want you to panic. I want you to feel the fear I feel every day. And then I want you to act. I want you to act as you would in a crisis. I want you to act as if our house is on fire. Because it is." (para. 20)
With extensive exposure to the negative impact climate change can have on individuals, their families, communities, and the world, it is not surprising that individuals are experiencing climate anxiety (Albrecht, 2011; Clayton & Karazsia, 2020; Maran & Begotti, 2021; Ojala, 2015; Reyes et al., 2021; Weintrobe, 2012). The impact of climate change on mental health is not limited to those who have lived through a natural disaster associated with climate change (Howard-Jones et al., 2021). Within schools, classroom discussions and analysis of the effects of climate change on one's country and across the globe may affect students' and teachers' mental health in the form of climate anxiety (Helm et al., 2018; Maran & Begotti, 2021). As schools play a key role in educating students about climate change, it is essential that we understand the presence of climate anxiety within our classrooms and its impact on teachers and their students.
As such, this book will offer a global dialogue, critically scrutinizing academic and practical approaches to addressing the universal challenges associated with climate anxiety within elementary, middle, and high schools. Authors from a variety of nations will illustrate that climate anxiety is a worldwide phenomenon that is often neglected in climate change dialogue.
Within our call for chapters, we invite contributions that explore the following three themes:
Theme 1: Climate Anxiety within Schools
• Theoretical foundations of climate change education and anxiety
• Intersectionality of culture and climate anxiety within the classroom
• Principles of sustainable education, mental health, and climate anxiety
• Pedagogical perspectives of anxiety, sustainable education, and climate change education
Theme 2: The Impact of Climate Anxiety on Students and Teachers
• Evaluation of student and teacher experiences related to climate anxiety.
• Exploration of the psychological manifestation of climate anxiety in students and teachers.
• Critical examination of how climate anxiety impacts students’ learning and development.
• Description of how climate anxiety occurs within the classroom.
• Critical examination of how curriculum generates climate anxiety.
• Critical examination of the impact of climate anxiety on teaching praxis.
Theme 3: Addressing Climate Anxiety
• Description of innovative and creative approaches to address climate anxiety in school settings.
• Description of pedagogical strategies to address students’ climate anxiety.
• Exploration of how climate anxiety should be addressed within schools.
• Rebuilding a cohesive learning environment after climate change induced disasters.
• Lessons learned from the challenges and successes of combating climate anxiety.
• Examining the need for policy and administrative support in addressing climate anxiety.
The editors are interested in a range of submissions and encourage proposals from a variety of practitioners within the field of education, including academics, educators, administrators, and graduate students. Submissions should include theoretical stances and practical applications.
Audience:
The book will be useful in both academic and professional circles. The intended audience includes school administrators, educators, and advocates of climate change and reform, all of whom may find this book a useful teaching resource. In addition, the book can be used in a variety of graduate and undergraduate courses, including, but not limited to: educational psychology, curriculum development, current issues in education, methods and pedagogy, international education, and education law.
Proposals:
All submissions must be written in English.
Please submit as a PDF file for compatibility.
Prospective contributors should submit a 1000-word overview (excluding abstract) of their proposed chapter, including:
• Title
• Abstract – 250 words
• Contact information, including name(s), institutional affiliation(s), email, and phone number.
• A description of the chapter’s central argument that includes how your chapter addresses one of the central themes of the book.
• A clear explanation of the research underpinning any assertions, as well as the main argument, purpose and outcomes presented in the chapter.
• Where chapters will draw on specific research projects, we'd expect some detail on the type of research, period, data set and size, and, of course, the findings.
• 3-5 key words/phrases.
Font: Times New Roman, size 12, double-spaced.
Please adhere to APA, 7th edition formatting standards.
Contributors will be sent chapter format and guidelines upon acceptance. Full manuscripts will be sent out for blind peer review.
Final Chapters:
Final papers should be approximately 7000 words, not including references.
Review Process:
Each author will be asked to review one chapter from the book and provide feedback to the author(s) and editors.
Important dates
Submission of title, abstract, and author(s) to editors - June 1, 2023
Notification of acceptance to authors - Sept 1, 2023
Submission of full manuscript to editors - January 8, 2024
Feedback from editors to authors - March 1, 2024
Submission of revised manuscripts to editors - May 1, 2024
Please send your submissions to: juliec@nipissingu.ca
Please feel free to contact the editors directly with any questions/queries:
Dr. Julie K. Corkett juliec@nipissingu.ca
Dr. Wafaa Abdelaal w.abdelaal@squ.edu.om
References:
Albrecht, G. (2011). Chronic environmental change: Emerging 'psychoterratic' syndromes. In Climate Change and Human Well-Being (pp. 43-56). Springer.
Clayton, S., & Karazsia, B. (2020). Development and validation of a measure of climate anxiety. Journal of Environmental Psychology, 69, 101434. https://doi.org/10.1016/j.jenvp.2020.101434
Helm, S.V., Pollitt, A., Barnett, M.A., Curran, M.A., & Craig, Z.R. (2018). Differentiating environmental concern in the context of psychological adaption to climate change. Global Environmental Change, 48, 158–167. https://doi.org/10.1016/j.gloenvcha.2017.11.012
IPCC. (2018). Annex I: Glossary. In V. Masson-Delmotte, P. Zhai, H.-O. Pörtner, D. Roberts, J. Skea, P.R. Shukla, A. Pirani, W. Moufouma-Okia, C. Péan, R. Pidcock, S. Connors, J.B.R. Matthews, Y. Chen, X. Zhou, M.I. Gomis, E. Lonnoy, T. Maycock, M. Tignor, & T. Waterfield (Eds.), Global Warming of 1.5°C: An IPCC Special Report on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty. In press. https://www.ipcc.ch/sr15/chapter/glossary/
IPCC. (2022). Climate Change 2022 Impacts, Adaptation and Vulnerability: Summary for Policymakers. Working Group II contribution to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change. [H.-O. Pörtner, D.C. Roberts, M. Tignor, E.S. Poloczanska, K. Mintenbeck, A. Alegría, M. Craig, S. Langsdorf, S. Löschke, V. Möller, A. Okem, B. Rama (eds.)]. Cambridge University Press. https://www.ipcc.ch/report/ar6/wg2/downloads/report/IPCC_AR6_WGII_FinalDraft_FullReport.pdf
Maran, D. A., & Begotti, T. (2021). Media exposure to climate change, anxiety and efficacy beliefs in a sample of Italian university students. International Journal of Environmental Research and Public Health, 18, 1-11. https://doi.org/10.3390/ijerph18179358
Ojala, M. (2015). Hope in the face of climate change: associations with environmental engagement and student perceptions of teachers’ emotion communication style and future orientation. The Journal of Environmental Education, 46(3), 133-148. https://doi.org/10.1080/00958964.2015.1021662
Reyes, M. E. S., Carmen, B. P. B., Luminarias, M. E. P., Mangulabnan, S. A. N. B., & Ogunbode, C. A. (2021). An investigation into the relationship between climate anxiety and mental health among Gen Z Filipinos. Current Psychology, 1-9. https://doi.org/10.1007/s12144-021-02099-3
Thunberg, G. (2019, January 25). 'Our house is on fire': Greta Thunberg, 16, urges leaders to act on climate. The Guardian. https://www.theguardian.com/environment/2019/jan/25/our-house-is-on-fire-greta-thunberg16-urges-leaders-to-act-on-climate
Weintrobe, S. (2012). The difficult problem of anxiety in thinking about climate change. In S. Weintrobe (Ed.), Engaging with Climate Change: Psychoanalytic and Interdisciplinary Perspectives (pp. 33-47). Routledge.
My question is filed under the Game Workshop topic.
I am a graduate student, and my class is currently looking at the differences (and similarities) between research and evaluation. We are also reading Mertens' Research and Evaluation in Education and Psychology (2020) while examining four educational research paradigms (I've attached a picture from that book that shows labels commonly associated with the different paradigms as a quick descriptor).
I am wondering: What do you believe to be the differences and/or similarities of research and evaluation? Which of the four educational research paradigms (Postpositivism; Constructivism; Transformative; Pragmatic) do you most align with?
Thank you in advance for sharing your thoughts!

Greetings,
I am currently a graduate student taking Introduction to Research and Evaluation in Education. I've been tasked with posing the question, "How does one define research vs. evaluation?"
When I was a special education teacher, I completed many evaluations of students' abilities, both academic and cognitive, and I see evaluation as a means to determine a path for a student's education.
Research, on the other hand, entails posing a question and then determining possible answers while searching scholarly publications and journals.
Please comment on my question at your earliest convenience.
Mertens, D. M. (2020). Research and Evaluation in Education and Psychology (5th ed.). Sage Publications.
Hello. I am currently a graduate student at Arizona State University in the Mary Lou Fulton College of Education. We have been tasked to define research and evaluation and to explore the differences and similarities between them. The text we are using is Research and Evaluation in Education and Psychology by Donna Mertens (2020), which presents different models and paradigms for these two subjects. From my understanding of the text, research is the exploration of topics and the development of theories about them, while evaluation is the methodology for ensuring you are properly investigating, documenting, and enhancing the world around you. I am interested in other people's viewpoints and would like to hear what you think.
Thank you.
I have developed a new technique. Although I have performed multiple experiments and obtained various "meaningful" results, I am unsure about how to evaluate the technique's performance and ensure the confidence of the obtained results since there are no other techniques available for comparison. What are the best practices for evaluating and validating a new technique in the absence of a benchmark tool or dataset?
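For what it's worth, one common way to attach uncertainty to a reported result when no reference method exists is bootstrap resampling over the experimental units. A minimal Python sketch (the scores are made up; replace them with your own per-item measurements):

```python
import numpy as np

# Made-up per-item scores from the new technique; replace with your
# own measurements (one value per experimental unit).
rng = np.random.default_rng(42)
scores = rng.normal(loc=0.8, scale=0.1, size=50)

# Bootstrap: resample items with replacement and recompute the summary
# statistic many times to estimate its sampling variability.
n_boot = 10_000
boot_means = np.array([
    rng.choice(scores, size=scores.size, replace=True).mean()
    for _ in range(n_boot)
])

# 95% percentile confidence interval for the mean score.
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {scores.mean():.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```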
Compare and contrast the different methods for assessing soil compaction, such as bulk density measurements, penetrometer tests, and visual assessments. Evaluate the strengths and limitations of each method, and their suitability for different soil types and land uses.
Hello there! In my PhD, I proposed mechanisms to support Metaverse development focused on software engineering education (coding, modeling, project management, etc.).
If you're an XR developer, researcher, or professor/teacher who uses XR technologies for education, I would be very grateful if you contributed to my research.
Evaluation link: https://forms.gle/871TyJapysdfKdmTA
How do we evaluate an opportunity in a business? Is there any model specific to this issue? I would appreciate your input.
Hi,
We have a few AI (artificial intelligence) solutions for different problems in public health. A few of the problems are binary in nature, while the rest are continuous. We need help calculating the sample size for measuring the accuracy of the AI (to reliably predict the problem).
For example, we developed an AI solution to estimate the weight of a baby. We expect the AI to predict the weight reliably in 90% of babies, with an error of less than 10% of the actual weight measured by gold-standard equipment. I can calculate the sample size in two ways, I think:
- assuming that the variable of interest is binary: the reliability of the AI prediction (yes/no);
- assuming that the variable of interest is continuous: the actual error of the AI prediction (in grams, or %).
What should we choose? In the second option, which SD should we choose for the sample size calculation?
Thanks in advance for reading and suggesting.
PS: both methods are applied to the same study participants.
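For reference, the two options correspond to two standard closed-form calculations: n = z^2 p(1-p) / d^2 for estimating a proportion, and n = (z * SD / d)^2 for estimating a mean. A minimal sketch (the 90% expected reliability, the precision targets, and the SD are placeholder assumptions, not recommendations; the SD for the continuous option is exactly the quantity you would need to take from a pilot study or the literature):

```python
from math import ceil

z = 1.96  # two-sided 95% confidence

# Option 1: binary outcome: the proportion of babies whose predicted
# weight falls within 10% of the gold standard.
p = 0.90        # expected reliability (assumption)
d_prop = 0.05   # absolute precision wanted for that proportion
n_binary = ceil(z**2 * p * (1 - p) / d_prop**2)

# Option 2: continuous outcome: the mean prediction error in grams.
sd = 150.0      # SD of the error in grams (placeholder; take it from
                # a pilot study or the literature)
d_mean = 25.0   # desired margin of error in grams
n_continuous = ceil((z * sd / d_mean) ** 2)

print(n_binary, n_continuous)
```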
Does anyone know how to access the software "Similarity Evaluation System for Chromatographic Fingerprint of Traditional Chinese Medicine (Version 2004A)"?
Otherwise, which other software can I use to analyze HPLC-UV fingerprint similarities? We are using a Thermo Fisher UHPLC system with Chromeleon.
Thanks.
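If no dedicated tool turns up: the similarity measures commonly reported for chromatographic fingerprints (the correlation coefficient and the cosine of the angle between aligned chromatograms) are straightforward to compute directly. A minimal numpy sketch, with synthetic vectors standing in for fingerprints already aligned on a common retention-time grid:

```python
import numpy as np

# Synthetic "chromatograms": absorbance sampled on a common
# retention-time grid (peak alignment must be done beforehand).
rng = np.random.default_rng(0)
reference = rng.random(500)
sample = reference + rng.normal(0.0, 0.05, 500)

# Cosine of the angle between the two fingerprint vectors.
cosine = reference @ sample / (np.linalg.norm(reference) * np.linalg.norm(sample))
# Pearson correlation coefficient between the two fingerprints.
pearson = np.corrcoef(reference, sample)[0, 1]

print(f"cosine similarity = {cosine:.4f}, correlation = {pearson:.4f}")
```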
I submitted my paper to an SSCI journal, and the status "Evaluating Reviews" has lasted more than 20 days. What does this mean?
Hello, I am a graduate student at Arizona State University in the Mary Lou Fulton Teachers College, pursuing an M.Ed. in Learning Design and Technology, and I am currently enrolled in Intro to Research and Evaluation in Education. After completing this week's reading, formulating a definition of both research and evaluation, and comparing the two, I have a question to pose.
How can empirical data best be used in research, and, more importantly, how can subjects and data benefit evaluations in measuring worth? Is evaluation as cut-and-dried as it seems, or is there room for subjectivity?
I have been reading about the SSI scale (Gadzella, 1991) and saw that a revised version is available (Gadzella et al., 2012). However, I am unable to find the scale containing 53 items. Does anyone know how to obtain the inventory?
Gadzella, B. M., Baloglu, M., Masten, W. G., & Wang, Q. (2012). Evaluation of the student life-stress inventory-revised. Journal of Instructional Psychology, 39(2).
I do not want to analyze the text inside the book; I just want to explore the existence of the main components.
To date, two Red Lists of fishes have been published in Bangladesh by the IUCN, in 2000 and 2015. Both focus mainly on inland fishes. Is there any work on the marine fishes of Bangladesh? We have a publication in which we listed the threatened marine fishes of Bangladesh based on the global evaluation by the IUCN.
Hossain, M.A.R. and Hoq, M.E. 2018. Threatened Fishes and other aquatic animals of Bay of Bengal. FAN – Fisheries & Aquaculture News 5: 37-39.
I am looking for other works, publications and reports in this regard.
I started reading the textbook Research and Evaluation in Education and Psychology this week. In Chapter 2 (Evaluation), Mertens (2020) asserts that evaluation and research share similarities; however, the differences between the two types of systematic inquiry are clearly distinguishable (p. 86). After learning more about research and evaluation, I have come to the conclusion that the two practices do not overlap as much as you may think they do.
Research and evaluation have different goals:
- Research is a system used for knowing or understanding
- Evaluation is a system used for discovering merit, worth, or quality
Research and evaluation have different processes:
- Research is made up of 8 main steps: identify your worldview, establish a focus, review the literature, identify a design, identify data sources, choose collection methods, analyze, and set future directions
- Evaluation is made up of 3 main steps: focusing the evaluation, planning the evaluation, and implementing the evaluation
Research and evaluation use different terminology:
- The jargon used in research is similar to terminology you might know from science courses and experiments: variables, control groups, samples, subjects
- The jargon used in evaluation is more similar to terms you might know from business or management courses: merit, worth, monitoring, assessment, internal evaluators, stakeholders
Taking into consideration the stark differences between the goals, processes, and terminology used in research vs. evaluation, I would argue that these two practices are not intertwined with one another. That is not to say that they can never be used together, but rather that they can be used independently and with separate goals in mind.
I would love to hear some different perspectives on research vs. evaluation and how you may be applying them in your own work.
References:
Mertens, D. M. (2020). Research and Evaluation in Education and Psychology: Integrating diversity with quantitative, qualitative, and mixed methods (5th ed.). SAGE.
In the ScholarOne system, after peer review is completed, the status changes to "Evaluating Recommendation". How long does this status typically take before one hears back from the journal editor?
I am looking for a study that dealt with differences between institutions in students' evaluations of faculty. What I am interested in knowing is whether there is a difference between students at prestigious institutions and those at ordinary universities and colleges. One can perhaps assume that in private, competitive institutions the students will be more critical and demanding, but I can't find any evidence or even a comparative study on the subject, and I've been searching Google Scholar for a few days now.
English language centers in the non-English-speaking world assess the English of their teachers and professors using tests that are appropriate for U.S., Canadian, British, or Australian environments. These specific contexts at times do not match the academic needs of language centers outside the U.S. or Great Britain, for instance.
Evaluating the epistemological and ontological differences between different research methodologies, and evaluating the strengths and weaknesses of a variety of business and management research methods.

I have created and validated a Campus Climate Identity Survey as part of my doctoral work at NYU, dealing with my home institution, and I am now looking for collaborators. The survey was validated in the pilot and is designed to gather comprehensive data across all the schools in an academic health science center, not just the medical school component. If you are looking to gain a comprehensive view of the plight of your staff, students, and faculty at an academic health science center, I'd love to chat with you.
I am on a state oral health department fellowship, and I am severely frustrated with picking an evaluation project. It has been months of literature research, brainstorming, planning, and talking with stakeholders. The topic I want to pursue is a Sugar-Sweetened Beverage intervention guide (similar to the tobacco cessation 5 A's) at the state level. However, I cannot formulate a question, a target audience, and the data to back up the evaluation. Has anyone done a state-level evaluation of an intervention method? Any pointers? Thanks.
Hello,
I am a grad student at Arizona State University earning my degree in Learning & Curriculum in Gifted Education, and I am enrolled in Introduction to Research and Evaluation. The major assignment in this course is to write a research proposal with a literature review. We are currently discussing the differences between research and evaluation.
As a 5th-grade teacher, I believe that research is a process for gaining knowledge and information, whereas evaluation assesses the success of a program, organization, etc.
What is your educational role, and how do you differentiate between Research and Evaluation?
I am a graduate student at Arizona State University taking a course in research and evaluation in education. In our class, we are comparing and contrasting research and evaluation. In our text, Research and Evaluation in Education and Psychology (Mertens, 2020), the author discusses the differences and parallels between the two. I had previously considered the two interchangeable terms, or at least as going hand-in-hand; however, there are now evident distinctions that I can identify. The two do overlap, but to me, research seems to be more of a process of uncovering and collecting new information in order to determine the "why" of a problem, scenario, or phenomenon. Evaluation, on the other hand, presents to me as a thorough process through which already available information is compiled to identify the "how well" or worth/value of an existing program or practice.
I am curious as to others' opinions on this topic. Do research and evaluation overlap, or are they singular and distinct? How are they used together? Must they be?
We are also discussing four paradigms that frame research and evaluation. Mertens (2020) describes them as post-positivism, constructivism, transformative and pragmatic. Do you feel that one paradigm would be more useful than another in carrying out research dealing with the efficacy of teachers of gifted populations based on their understanding of those students?
Hello everyone!
I am a graduate student at Arizona State University, and we are focusing on the difference between research and evaluation. I teach kindergarten and am working toward my Literacy Education graduate degree. In my opinion, research focuses on gaining new knowledge about a topic or purpose, while evaluation focuses on a program or practice already in use, asking questions about it to understand its effectiveness. In your opinion, what is the major difference between research and evaluation?
As a classroom teacher, how do you think this would be utilized or defined in a classroom, especially at the primary level?
As part of my fellowship, I want to evaluate our oral health surveillance system. I have already read the CDC's guidelines for evaluating surveillance systems, but I am still confused about how to assess one. Does anyone have examples of work or reviews done for this type of evaluation?
I need to statistically analyse the speed-accuracy trade-off in a reaction-time task.
The design of my study is 2x2x3 (group x task difficulty x valence condition).
I want to check whether there is a speed-accuracy trade-off between the two groups under low and high task difficulty. I came across a paper on this, but the statistical analysis given there is quite confusing to me.
Could someone walk me through the stepwise process in SPSS?
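Not an SPSS recipe, but a common screening step for a speed-accuracy trade-off is the inverse efficiency score (IES = mean correct RT / proportion correct), computed per cell of the 2x2x3 design before any ANOVA. A minimal Python sketch with made-up data and hypothetical column names:

```python
import numpy as np
import pandas as pd

# Made-up trial-level data; column names are hypothetical.
rng = np.random.default_rng(0)
n = 1200
df = pd.DataFrame({
    "subject": rng.integers(1, 31, n),
    "difficulty": rng.choice(["low", "high"], n),
    "valence": rng.choice(["neg", "neu", "pos"], n),
    "rt": rng.normal(600, 80, n),
    "correct": rng.integers(0, 2, n),
})
df["group"] = np.where(df["subject"] <= 15, "A", "B")  # between-subjects

# Per-subject cell means: mean RT on correct trials, plus accuracy.
cells = (df.assign(rt_correct=df["rt"].where(df["correct"] == 1))
           .groupby(["subject", "group", "difficulty", "valence"], as_index=False)
           .agg(mean_rt=("rt_correct", "mean"), accuracy=("correct", "mean")))

# Inverse efficiency score: mean correct RT divided by accuracy.
# If a group's RT advantage shrinks or reverses in IES, a
# speed-accuracy trade-off is likely.
cells["ies"] = cells["mean_rt"] / cells["accuracy"]
print(cells.groupby(["group", "difficulty"])["ies"].mean())
```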
What is the best and simplest tool (other than Excel) for making comparison charts, such as line charts, for algorithm comparison and evaluation purposes?
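One free option is matplotlib in Python; a minimal line-chart sketch for comparing algorithms (the scores are made up for illustration):

```python
import matplotlib.pyplot as plt

# Made-up evaluation scores for three algorithms at growing input sizes.
sizes = [100, 200, 400, 800, 1600]
results = {
    "Algorithm A": [0.71, 0.74, 0.78, 0.80, 0.81],
    "Algorithm B": [0.65, 0.72, 0.79, 0.83, 0.85],
    "Algorithm C": [0.60, 0.63, 0.65, 0.66, 0.66],
}

for name, scores in results.items():
    plt.plot(sizes, scores, marker="o", label=name)

plt.xlabel("Dataset size")
plt.ylabel("Accuracy")
plt.title("Algorithm comparison")
plt.legend()
plt.grid(True)
plt.savefig("comparison.png", dpi=150)
```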
Dear colleagues,
I’m conducting a study that is intended to identify determinants of evaluation use in evaluation systems embedded in public and non-profit sectors. I’m planning to conduct a survey on a representative sample of organizations that systematically evaluate the effects of their programs and other actions in Austria, Denmark, Ireland and the Netherlands. And here comes my request: can anyone of you, familiar with evaluation practice in these countries, suggest what types of organizations I should include in my sample? Are there any country-specific organizations active in the evaluation field that I should not omit?
It is obvious to me that in all these countries evaluation is present in central and local government (ministries, municipalities, etc.) as well as institutions funding research or development agencies, but I also suspect that there might be some country-specific, less obvious types of organisations which are important “evaluation players”.
Thanks for any hints.
Through multiple empirical studies, I have collected user needs for an ICT intervention. In this study, I intend to design a prototype and then evaluate it to check whether the user needs are captured in the proposed design.
What is the most suitable approach: quantitative, qualitative, or mixed?
Are we evaluating the features of the prototype or the user requirements?
propolis (Bee Glue) and Evaluate Its Antioxidant Activity
I am now working on the project "Evaluate the Impact of the Implementation of GDPR on the Role of the European Court". Before conceptualizing it for discussion, I need to collect some data and develop some ideas for the discussion. Do you have any articles or research to recommend on this topic?
Dear colleagues, dear participatory-action research practitioners,
I would like to open a discussion on the criteria for evaluating participatory research (whether it is action research, participatory action research, CBPR, etc.).
How do you evaluate participatory research projects that are submitted for research grants and/or publication (papers)? Do you apply the same criteria as when you evaluate non-participatory research projects? Or have you developed ways to evaluate non-scientific dimensions, such as the impact of the research on communities and the quality of connections between co-researchers? And if so, how do you proceed?
Thank you in advance for sharing your experiences and thoughts.
For French-speaking colleagues, feel free to reply in French! What criteria do you use to evaluate participatory research projects? Do you use the scientific evaluation criteria that you apply to other types of research, or do you have specific criteria, and if so, which ones?
Baptiste GODRIE, Quebec-based social science researcher & participatory action research practitioner
Comparative Evaluation of Selected High and Low Molecular Weight Antioxidant Activity in the Rat.
Hello, my fellow scientists!
I'm a psychology student (4th-year Bachelor's) writing my final thesis on problematic gaming (gaming disorder) among students and its correlation with attachment style (Bowlby's theory). I was hoping to find fellow scientists who could help me by sharing (or directing me to) a questionnaire on problematic gaming and attachment style (anxious/avoidant/secure) evaluation scales.
Also, how can I get in contact with the right people who have such information?
What is the impact of drip irrigation on water use and crop production? What percentage of water does drip irrigation save compared to flood irrigation? By how much does drip irrigation increase crop production compared to flood irrigation?
Could you please also share any relevant publications?
Does anyone have any idea how to evaluate a supercapacitor with a 10-watt solar PV system?
In order to create a new procedure for performance evaluation studies, I need the ECCLS document titled "Guidelines for the Evaluation of Diagnostic Kits: Part 2: General Principles and Outline Procedures for the Evaluation of Kits for Qualitative Tests" (1990, no. 1). Unfortunately, I could not find the document. If anyone has it, please share it with me. Many thanks.
I am looking for an evaluation tool for qualitative research on promoting values education using different episodes.
Dear colleagues
I have a query regarding the most appropriate experimental design and statistical analysis for a research project. The study area is a high-altitude lagoon (Los Andes, Peru). The study subject is an endangered frog species (the Lake Junín frog).
The research question is: what is the impact of heavy metals, eutrophication, and water level variation on the abundance and biomass of the Telmatobius macrostomus and T. brachydactylus populations?
After many field visits and a literature review, we have identified the three main environmental pressures on the frog population: (i) heavy metals from mining activities, (ii) eutrophication produced by untreated urban sewage discharge, and (iii) water level variation to ensure enough water for hydropower downstream. We have monitoring data (from secondary sources) on heavy metal concentrations and some eutrophication indicators (N, P, BOD). For now, we only have the resources to collect field data on water level variation and the frogs' biomass and abundance.
Currently, we don't have the resources to collect more data on heavy metal pollution or nutrient content in the water. Therefore, with the available data, we want to get some idea of which environmental pressures are most relevant in order to:
- know where to allocate more monitoring resources, and
- evaluate some remediation techniques to improve the frog's habitat.
Thanks in advance for your comments.
PS: Feel free to contact me if any of you are interested in helping design the study.
I need to evaluate a pure content-based recommender system for document retrieval (it may also be seen as a search engine) that returns the top N results based on a similarity score. I know there are metrics such as HR@k, accuracy@k, NDCG@k, CTR, etc. However, if I understand correctly, all of those metrics require prior relevance judgments from expert coders, rating scores for documents (e.g., on a scale from 1 to 5), or click patterns from users.
This content-based recommender system has no users (yet) to rate or click on query results, and I cannot see how expert coders could provide ratings for every document against every possible query.
Are there any means of evaluating such content-based recommender systems?
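For reference, once even a small, manually labeled sample of queries exists, NDCG@k itself is simple to compute; a minimal sketch with hypothetical relevance labels:

```python
import numpy as np

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k ranked results."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rel.size + 2))
    return float((rel / discounts).sum())

def ndcg_at_k(relevances, k):
    """DCG normalized by the best possible ordering of the same labels."""
    idcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / idcg if idcg > 0 else 0.0

# Hypothetical relevance labels (0-5) for the results one query returned,
# listed in the order the system ranked them.
print(ndcg_at_k([3, 5, 2, 0, 1], k=5))
```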
I am working on an EEG classification task. I segmented each hour into 30-second windows, and I want to calculate the FPR/hour. I found the formula FPR/h = fp / [((fp + tn) * 30) / (60 * 60)], but I didn't understand it, so I used my own approach: FPR = fp / (fp + tn), then I divide the FPR by the number of hours, where number of hours = ((tn + fp) * 30) / (60 * 60), giving FPR/h = FPR / number of hours.
I want to be sure that my formula is correct and appropriate to use.
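The two computations are easy to compare numerically; a minimal sketch (30-second windows as stated, with made-up counts) that prints both so any difference is visible:

```python
# Made-up counts of 30-second windows for one recording.
fp, tn = 12, 468                    # 480 negative windows = 4 hours

hours = (fp + tn) * 30 / 3600       # total duration in hours

# The paper's formula: false positives per hour of recording.
fpr_per_hour_paper = fp / hours

# The two-step version: window-level FPR divided by the hours.
fpr = fp / (fp + tn)
fpr_per_hour_mine = fpr / hours

print(fpr_per_hour_paper, fpr_per_hour_mine)
```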
I am working on a binary classification task. I want to calculate evaluation metrics (sensitivity, FPR, and accuracy) for each patient. I used a threshold method to calculate the metrics, trying multiple threshold values (from 0.4 to 0.75 in steps of 0.05) to choose the best threshold. My question is: can I use a different threshold for each patient?
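A minimal sketch of the per-patient threshold sweep (scikit-learn, with made-up labels and scores); note that if a threshold is tuned per patient, it should be tuned on data separate from the data on which the metrics are reported:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def metrics_at_threshold(y_true, y_score, thr):
    """Sensitivity, FPR and accuracy at one probability threshold."""
    y_pred = (y_score >= thr).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sens = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    acc = (tp + tn) / (tp + tn + fp + fn)
    return sens, fpr, acc

# Made-up labels and prediction scores for one patient.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_score = np.clip(y_true * 0.3 + rng.random(200) * 0.7, 0.0, 1.0)

for thr in np.arange(0.40, 0.80, 0.05):
    sens, fpr, acc = metrics_at_threshold(y_true, y_score, thr)
    print(f"thr={thr:.2f}  sens={sens:.2f}  fpr={fpr:.2f}  acc={acc:.2f}")
```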
Kindly provide me with the link.
1. The role of monitoring and evaluating finances in enhancing performance: compare the two and explain their usage.
- Monitoring finances
- Evaluating finances
2. What procedure do you think should be followed?
3. Which of the approaches mentioned above is best for ensuring that monitoring and evaluating finances help overcome performance problems?
When applying for a postdoc position, is it the novelty of the research idea that matters or the impact factor of the journal in which the research article is published? Impact factors are regularly updated and keep changing. How does the impact factor truly evaluate the quality of a research article?
Evaluation Metrics
RMSE - Root Mean Square Error
RMSLE - Root Mean Squared Log Error
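For reference, minimal implementations of both metrics (made-up values in the example calls):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def rmsle(y_true, y_pred):
    """Root mean squared log error; requires values > -1 and
    penalizes under-prediction more than over-prediction."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2)))

print(rmse([3, 5, 2.5], [2.5, 5, 4]))
print(rmsle([3, 5, 2.5], [2.5, 5, 4]))
```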
- In general, there are types of data that interact with different types of policies, whatever the thematic area involved.
- It is important to identify the types of data that cut across the steps of public policy evaluation, so that they can be reused several times.
I am trying to work out how variable characteristics can help determine variable structure and pattern. I am halfway into the work.
I am currently working on my master's thesis and have run into some problems in designing a survey. The goal is to analyze a transition from ordinary offline retailing towards physical showrooms that fulfill product orders through an online shop.
I use customer satisfaction (ranging from 1 to 10) as the dependent variable, and the following independent variables:
F = fulfillment (1/0): 1 = now, 0 = in 3 days
A = assortment (from 10 to 20 units per shop)
P = price (from 25 down to 25 * 0.7 with discount = 17.5)
Is it possible to design a survey/experiment in a way that yields the data needed for this equation?
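The implied model is linear, satisfaction = b0 + b1*F + b2*A + b3*P, and randomly assigning the (F, A, P) combinations across respondents is what makes the coefficients estimable. A minimal sketch of fitting it with statsmodels on made-up responses:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Made-up survey responses: each row is one respondent rating one
# randomly assigned (F, A, P) scenario.
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "F": rng.integers(0, 2, n),        # 1 = now, 0 = in 3 days
    "A": rng.integers(10, 21, n),      # assortment size
    "P": rng.uniform(17.5, 25.0, n),   # price after discount
})
df["satisfaction"] = (5 + 1.5 * df["F"] + 0.1 * df["A"]
                      - 0.15 * df["P"] + rng.normal(0, 1, n))

model = smf.ols("satisfaction ~ F + A + P", data=df).fit()
print(model.summary())
```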
I would like to start a discussion on which index is more reliable, the h-index or the i10-index. Both are usable; however, their methods of calculation differ. There is also the g-index. I am not asking about the differences but about their reliability. Any comments are welcome.
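For concreteness, all three indices are simple functions of a researcher's per-paper citation counts; a minimal sketch (made-up counts):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cites, start=1) if c >= i)

def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

def g_index(citations):
    """Largest g such that the top g papers have >= g^2 citations in total."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(cites, start=1):
        total += c
        if total >= i * i:
            g = i
    return g

cites = [42, 18, 11, 9, 7, 4, 1, 0]   # made-up citation counts
print(h_index(cites), i10_index(cites), g_index(cites))  # 5 3 8
```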
I'm excited to be taking on a secondment role with the University's Student Engagement, Evaluation and Research (STEER) team and am building up my reading list!
I am training on a custom dataset (RarePlanes) with DeepLab V3+ using Detectron2 (an open-source library written on top of PyTorch).
The custom dataset has a fixed image size of 512x512. When I trained for 100,000 iterations, I got the mIoU values below.
[05/10 06:13:49] d2.evaluation.sem_seg_evaluation INFO: OrderedDict([('sem_seg', {'mIoU': 48.263697089435894, 'fwIoU': 93.17537826963293, 'IoU-a': nan, 'IoU-i': 0.0, 'IoU-r': nan, 'IoU-c': nan, 'IoU-f': nan, 'IoU-t': nan, 'mACC': 50.0, 'pACC': 96.52739417887179, 'ACC-a': nan, 'ACC-i': 0.0, 'ACC-r': nan, 'ACC-c': nan, 'ACC-f': nan, 'ACC-t': nan})])
[05/10 06:13:49] d2.engine.defaults INFO: Evaluation results for custom_dataset_test in csv format:
[05/10 06:13:49] d2.evaluation.testing INFO: copypaste: Task: sem_seg
[05/10 06:13:49] d2.evaluation.testing INFO: copypaste: mIoU,fwIoU,mACC,pACC
[05/10 06:13:49] d2.evaluation.testing INFO: copypaste: 48.2637,93.1754,50.0000,96.5274
I'm looking for advice on configuring the DeepLab code with Detectron2 and on how to increase the mIoU values.
Thanks.
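One thing visible in the log: most per-class IoUs are nan (classes absent from the test ground truth) and IoU-i is 0.0, so the reported mean rests on very few classes. For intuition about the metric itself, a minimal numpy sketch of mIoU from a class confusion matrix (a made-up matrix; this is not the Detectron2 implementation):

```python
import numpy as np

def miou(conf):
    """Mean IoU from a KxK confusion matrix (rows = ground truth,
    columns = prediction). Classes absent from both ground truth and
    prediction give 0/0 and are excluded from the mean, which mirrors
    the nan entries in the log above."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)
    denom = conf.sum(axis=0) + conf.sum(axis=1) - tp
    with np.errstate(invalid="ignore", divide="ignore"):
        iou = tp / denom
    return float(np.nanmean(np.where(denom > 0, iou, np.nan)))

# Made-up 3-class matrix; class 2 never occurs, so its IoU is excluded.
conf = [[50,  5, 0],
        [10, 35, 0],
        [ 0,  0, 0]]
print(miou(conf))
```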
I am conducting an evaluation of professional development using Guskey's Five Levels of Evaluation. I am trying to decide whether it is an incorrect application of his model to use the same evaluation question at level 3 and level 4.
My simulation is stuck at 'evaluating n1-dvs.cmd'.
Does anyone know what causes this? The simulation does not error out but remains at this point.
I am simulating an AlInN/GaN stack. Is anyone willing to review my structure file?
What is the best superimposition software for comparing two similar virtual 3D objects?
I'm working on generative models for medical image synthesis, specifically GANs for CT image synthesis. Which evaluation metrics are best suited for evaluating a proposed model?
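A common choice for GAN-based synthesis is the Fréchet Inception Distance (FID), often alongside paired measures such as SSIM/PSNR when ground truth exists. A minimal sketch with torchmetrics (random uint8 tensors stand in for real and synthetic CT slices replicated to 3 channels; requires the torch-fidelity backend):

```python
# pip install torchmetrics torch-fidelity
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)

# Random uint8 tensors stand in for batches of real and generated CT
# slices, shape (N, 3, H, W); grayscale CT is replicated to 3 channels.
real = torch.randint(0, 256, (32, 3, 299, 299), dtype=torch.uint8)
fake = torch.randint(0, 256, (32, 3, 299, 299), dtype=torch.uint8)

fid.update(real, real=True)
fid.update(fake, real=False)
print(fid.compute())
```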
Please help me by providing code to solve the following problem.
Problem: "Semantic segmentation of humans and vehicles in images".
The following information is given for solving this problem:
Experimental study:
using a machine learning model: SVM, KNN, or another model
using a deep learning model:
either semi-DL: ResNet, VGG, Inception (GoogLeNet), or others
or fully DL: YOLO, U-Net, the CNN family (CNN, R-CNN, Faster R-CNN), or others
Evaluation of the two models in the learning phase
Evaluation of both models with test data
Exploration, description, and analysis of the results obtained (confusion matrix, specificity, accuracy, FNR)
Why is Green-Gauss node-based gradient evaluation preferred over the default Green-Gauss cell-based evaluation in ANSYS FLUENT?
I am looking for advice concerning a (supposedly) well-known practical issue: article overload. While doing my PhD, I was convinced that everything that made it through publication was worth reading and understanding. My opinion has evolved since then for very practical reasons: lack of time to read the literature and the absolute necessity to pre-screen a paper before deciding whether it's worth reading.
With scientific papers, pre-screening can be tricky. Since the format is very standardized, as is the wording (nothing sounds more like a paper than a paper), I often end up reading half a dozen pages of a paper, annotating parts, and spending time, before deciding I shouldn't spend time on it.
Do you have some tricks to share to reduce that waste of time? These tricks might be completely non-scientific, of course; I would still enjoy them.
I am looking for researchers in educational measurement and evaluation, with interests in teaching, learning, academic performance, and test validation.
I just completed my doctorate, I live in Massachusetts, and I have two years of experience with evaluation in the social sciences. If I were to create an evaluation plan for an organization, how much should I charge per hour of work? An acquaintance of mine has recently started a business, and neither of us knows how much to charge for evaluation. If possible, please leave a rough numerical range, even if it is just a guess. Thank you so much in advance!