Science topics: Quantitative Social Research > Evaluation
Evaluation - Science topic
Evaluation in all areas.
Questions related to Evaluation
I am looking for a study that dealt with the differences between institutions in student evaluation of faculty. What I am interested in knowing is whether there is a difference between students from prestigious institutions and those from ordinary universities and colleges; one can perhaps assume that in private and competitive institutions the students will be more critical and demanding. But I can't find any evidence, or even a comparative study on the subject, and I've been searching Google Scholar for a few days now.
English language centers in the non-English-speaking world assess the English of their teachers and professors by using tests that are appropriate for U.S., Canadian, British, or Australian environments. These specific contexts at times do not match the academic needs of language centers outside the U.S. or Great Britain, for instance.
Evaluation of the epistemological and ontological differences between different research methodologies, and
evaluation of the strengths and weaknesses of a variety of business and management research methods.
In the ScholarOne system, after peer review is completed, the status changes to "Evaluating Recommendation". How long does this status typically take before hearing back from the journal editor?
I have created and validated a Campus Climate Identity Survey as part of my doctoral work at NYU, dealing with my home institution, and am now looking for collaborators. The survey was validated with the pilot and is really designed as a way to get comprehensive data on all the schools in academic health science centers, not just the medical school component. If you are looking to gain a comprehensive view of the plight of your staff, students, and faculty at an academic health science center, I'd love to chat with you.
As part of my fellowship, I want to evaluate the oral health surveillance system. I have already read the CDC's guidelines for evaluating surveillance systems, but I am still confused about how to assess one. Does anyone have examples of work or reviews done for this type of evaluation?
I am on a state oral health department fellowship, and I am severely frustrated with picking an evaluation project. It has been months of literature research, brainstorming, planning, and talking with stakeholders. The topic I want to work on is a Sugar-Sweetened Beverage intervention guide (similar to the tobacco cessation 5 A's) at the state level. However, I cannot formulate a question, a target audience, and the data to back up the evaluation. Has anyone evaluated a state-level intervention of this kind? Or any pointers? Thanks.
Hello, I am a graduate student at Arizona State University with the Mary Lou Fulton Teachers College. I am pursuing an M.Ed. in Learning Design and Technology and am currently enrolled in Intro to Research and Evaluation in Education. After completing this week's reading, formulating a definition of both research and evaluation, and comparing the two, I have a question to pose.
How can empirical data best be used in research, and, more importantly, how can subjects and data benefit evaluations that aim to measure worth? Is evaluation as cut-and-dried as it seems, or is there room for subjectivity?
Hello,
I am a grad student at Arizona State University earning my degree in Learning & Curriculum in Gifted Education. I am enrolled in Introduction to Research and Evaluation. The major assignment in this course is to write a Research Proposal with Literature Review. We are currently discussing the differences in Research and Evaluation.
As a 5th grade teacher, I believe that research is a process to gain knowledge and information, whereas evaluation assesses the success of a program, organization, etc.
What is your educational role, and how do you differentiate between Research and Evaluation?
I am a graduate student at Arizona State University taking a course in research and evaluation in education. In our class, we are comparing and contrasting research and evaluation. In our text, Research and Evaluation in Education and Psychology (Mertens, 2020), the author discusses the differences and parallels between the two. I had previously considered the two interchangeable terms, or at least ones that go hand in hand; however, now there are evident distinctions that I can identify. The two do overlap, but to me, research seems to be more of a process of uncovering and collecting new information in order to determine the "why" of a problem, scenario, or phenomenon. Evaluation, on the other hand, presents to me as a thorough process through which already available information is compiled to identify the "how well" or worth/value of an existing program or practice.
I am curious as to others' opinions on this topic. Do research and evaluation overlap, or are they singular and distinct? How are they used together? Must they be?
We are also discussing four paradigms that frame research and evaluation. Mertens (2020) describes them as post-positivism, constructivism, transformative and pragmatic. Do you feel that one paradigm would be more useful than another in carrying out research dealing with the efficacy of teachers of gifted populations based on their understanding of those students?
Hello everyone!
I am a graduate student at Arizona State University and we are focusing on the difference between research and evaluation. I teach Kindergarten and am working toward my Literacy Education graduate degree. In my opinion, research focuses on gaining new knowledge about a topic or purpose, while evaluation focuses on the program or purpose already used and then asking questions about it to understand its effectiveness. In your opinion, what is the major difference between research and evaluation?
As a classroom teacher, how do you think this would be utilized or defined in a classroom, especially at the primary level?
I need to statistically analyse the speed-accuracy trade-off for a reaction time task.
The design of my study is: 2*2*3 (group, task difficulty, valence condition)
I want to check whether there is a speed-accuracy trade-off between the two groups under low and high task difficulty. I came across this paper but the statistical analysis given here is quite confusing to me.
Could someone tell me the stepwise process in SPSS?
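Outside SPSS, one common screening step is the inverse efficiency score (IES = mean correct-trial RT / proportion correct) per participant and design cell. Below is a minimal pandas sketch of that step; the column names are hypothetical and this is only an illustration, not the analysis from the paper mentioned above.

```python
import pandas as pd

# Trial-level data; hypothetical columns: subject, group, difficulty, valence, rt, correct (0/1)
df = pd.read_csv("trials.csv")

keys = ["subject", "group", "difficulty", "valence"]
mean_rt_correct = (df[df["correct"] == 1].groupby(keys)["rt"]
                   .mean().rename("mean_rt_correct"))        # mean RT on correct trials
accuracy = df.groupby(keys)["correct"].mean().rename("accuracy")  # proportion correct

cells = pd.concat([mean_rt_correct, accuracy], axis=1).reset_index()
cells["ies"] = cells["mean_rt_correct"] / cells["accuracy"]   # inverse efficiency score

# A negative RT-accuracy correlation within a cell (faster goes with less accurate)
# is one indicator of a speed-accuracy trade-off.
print(cells.groupby(["group", "difficulty"])[["mean_rt_correct", "accuracy"]].corr())
```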
What is the best and simplest tool (other than Excel) for making comparison charts such as line charts for algorithms comparison and evaluation purposes?
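If a free, scriptable option is acceptable, matplotlib covers this kind of algorithm-comparison line chart in a few lines. A minimal sketch with made-up numbers:

```python
import matplotlib.pyplot as plt

problem_sizes = [100, 200, 400, 800, 1600]
results = {                      # illustrative scores only
    "Algorithm A": [0.81, 0.83, 0.86, 0.88, 0.90],
    "Algorithm B": [0.78, 0.82, 0.87, 0.91, 0.93],
}

for name, scores in results.items():
    plt.plot(problem_sizes, scores, marker="o", label=name)

plt.xlabel("Problem size")
plt.ylabel("Evaluation score")
plt.title("Algorithm comparison")
plt.legend()
plt.grid(True)
plt.savefig("comparison.png", dpi=150)
```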
Dear colleagues,
I’m conducting a study that is intended to identify determinants of evaluation use in evaluation systems embedded in public and non-profit sectors. I’m planning to conduct a survey on a representative sample of organizations that systematically evaluate the effects of their programs and other actions in Austria, Denmark, Ireland and the Netherlands. And here comes my request: can anyone of you, familiar with evaluation practice in these countries, suggest what types of organizations I should include in my sample? Are there any country-specific organizations active in the evaluation field that I should not omit?
It is obvious to me that in all these countries evaluation is present in central and local government (ministries, municipalities, etc.) as well as institutions funding research or development agencies, but I also suspect that there might be some country-specific, less obvious types of organisations which are important “evaluation players”.
Thanks for any hints.
Through multiple empirical studies, I have collected user needs for an ICT intervention. During this study, I intend to design a prototype and then evaluate the prototype to check whether user needs are captured in the proposed design.
What is the most suitable approach? Quantitative, Qualitative or mixed?
Are we evaluating the features of the prototype, or evaluating the user requirements?
Propolis (Bee Glue) and Evaluation of Its Antioxidant Activity
I am now working on the project "Evaluate the Impact of the Implementation of GDPR on the Role of the European Court". Before conceptualizing it for the discussion, I need to collect some data and form some ideas for the discussion. Do you have any articles or research papers to recommend on this topic?
Dear colleagues, dear participatory-action research practitioners,
I would like to open the discussion on the criteria for evaluating participatory research (whether it is action-research, participatory action research, CBPR, etc.).
How do you evaluate participatory research projects that are submitted for research grants and/or publications (papers)? Do you apply the same criteria as when you evaluate non-participatory research projects? Or have you developed ways to evaluate non-scientific dimensions, such as the impact of this research on communities or the quality of connections between co-researchers? And if so, how do you proceed?
Thank you in advance for sharing your experiences and thoughts.
For French-speaking colleagues, feel free to reply in French! What criteria do you use to evaluate participatory research projects? Do you use the scientific evaluation criteria that you apply to other types of research, or do you have specific criteria, and if so, which ones?
Baptiste GODRIE, Quebec-based social science researcher & participatory action research practitioner
Comparative Evaluation of Selected High and Low Molecular Weight Antioxidant Activity in Rats.
Hello my fellow Scientists!
I'm a psychology student (Bachelor's, 4th year) and I'm writing my final thesis on problematic gaming (gaming disorder) among students and its correlation with attachment style (Bowlby's theory). I was hoping to find fellow scientists who could help me by sharing (or directing me to) a questionnaire on problematic gaming and evaluation scales for attachment style (anxious/avoidant/secure).
Also how can I get in contact with the right people who have such information?
What is the impact of drip irrigation on water use and crop production? What percentage of water does drip irrigation save compared to flood irrigation? By what amount does drip irrigation increase crop production compared to flood irrigation?
Can you please also share any relevant publication?
Does anyone have any idea how to evaluate a supercapacitor with a 10 W solar PV system?
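One simple starting point is a back-of-the-envelope energy balance: stored energy E = ½CV² against the panel's nominal power. A sketch with hypothetical capacitance, voltage and efficiency values:

```python
# All component values below are illustrative assumptions, not a specific product.
capacitance_F = 100.0        # example supercapacitor, farads
v_max = 2.7                  # rated cell voltage, volts
v_min = 1.35                 # usable lower voltage, volts
pv_power_W = 10.0            # nominal PV power
charge_efficiency = 0.85     # assumed overall charging efficiency

usable_energy_J = 0.5 * capacitance_F * (v_max**2 - v_min**2)   # E = 1/2 C (Vmax^2 - Vmin^2)
charge_time_s = usable_energy_J / (pv_power_W * charge_efficiency)

print(f"Usable energy: {usable_energy_J:.0f} J ({usable_energy_J / 3600:.3f} Wh)")
print(f"Idealised charge time at 10 W: {charge_time_s:.1f} s")
```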
In order to create a new procedure for performance evaluation studies, I need an ECCLS document named "Guidelines for the Evaluation of Diagnostic Kits: Part 2: General Principles and Outline Procedures for the Evaluation of Kits for Qualitative Tests", 1990, no. 1. Unfortunately, I could not find the document. If anyone has it, please share it with me. Many thanks.
Evaluation tool in qualitative research on promoting values education using different episodes.
I need to evaluate a pure content-based recommender system for document retrieval (it may also be seen as a search engine) that returns the top N results based on a similarity score. I know there are metrics like HR@k, accuracy@k, NDCG@k, CTR, etc. However, if I understand correctly, all those metrics require a pre-evaluation from expert coders, rating scores for documents (e.g., on a scale from 1 to 5), or click patterns from users.
This content-based recommender system has no users (yet) to rate or click on query results, and I cannot see how expert coders could provide ratings for every document against every possible query.
Are there any means to evaluate such types of content-based recommender systems?
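One workaround sometimes used when no judgments or users exist is a known-item style test: build pseudo-queries from snippets of the documents themselves and measure how often the source document is returned in the top k. A hedged sketch, where `recommend(query, k)` is a placeholder for your own retrieval call:

```python
import random

def hit_rate_at_k(documents, recommend, k=10, snippet_words=8, n_queries=200, seed=0):
    """documents: dict {doc_id: text}; recommend(query, k) -> list of doc_ids."""
    rng = random.Random(seed)
    items = list(documents.items())
    sampled = rng.sample(items, min(n_queries, len(items)))
    hits, evaluated = 0, 0
    for doc_id, text in sampled:
        words = text.split()
        if len(words) < snippet_words:
            continue  # skip documents too short to form a pseudo-query
        start = rng.randrange(len(words) - snippet_words + 1)
        pseudo_query = " ".join(words[start:start + snippet_words])
        hits += int(doc_id in recommend(pseudo_query, k))  # did the source doc come back?
        evaluated += 1
    return hits / max(evaluated, 1)
```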
Dear colleagues
I have a query regarding the most appropriate experimental design and statistical analysis for a research project. The project study area is located in a high altitude lagoon (Los Andes, Peru). The study subject is an endangered frog species (the Lake Junín frog).
The research question is: What is the impact of heavy metals, eutrophication and water level variation on the abundance and biomass of the Telmatobius macrostomus and T. brachydactylus populations?
After many field visits and a literature review, we have identified the 3 main environmental pressures on the frog population: (i) heavy metals from mining activities, (ii) eutrophication produced by untreated urban sewage discharge and (iii) water level variation to assure enough water for hydropower downstream. We have monitoring data (from secondary sources) on heavy metal concentrations and some eutrophication indicators (N, P, BOD). For now we only have the resources to collect field data on water level variation and on the frog's biomass and abundance.
Currently we don't have resources to collect more data on heavy metal pollution or nutrient content in the water. Therefore, with the available data, we want to get some idea of which environmental pressures are most relevant, in order to:
- Know where to allocate more resources on monitoring and
- Evaluate some remediation techniques to improve the frog's habitat.
Thanks in advance for your comments.
ps. Feel free to contact me if any of you are interested in helping design the study.
I am working on an EEG classification task. I segmented each hour into 30-second windows, and I want to calculate the FPR/hour. I found this formula: FPR/h = FP / [((FP + TN) * 30) / (60 * 60)], but I didn't understand it, so I used my own formula: FPR = FP / (FP + TN), and then I divide the FPR by the number of hours,
where number of hours = ((TN + FP) * 30) / (60 * 60), so FPR/h = FPR / number of hours.
I want to be sure that my formula is correct and appropriate to use.
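For what it's worth, since each window is 30 s, the quoted denominator ((FP + TN) * 30) / 3600 is just the total negative recording time in hours, so the quoted formula reduces to false positives divided by recording hours; dividing FPR by the number of hours again gives a different quantity. A minimal sketch of the quoted formula:

```python
def false_positives_per_hour(fp, tn, window_seconds=30):
    # (fp + tn) windows of `window_seconds` each = total negative time, converted to hours
    hours = (fp + tn) * window_seconds / 3600.0
    return fp / hours

# Example: 12 false positives over 480 half-minute windows (4 hours) -> 3.0 FP/h
print(false_positives_per_hour(fp=12, tn=468))
```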
I am working on a binary classification task. I want to calculate evaluation metrics (sensitivity, FPR and accuracy) for each patient. I used a threshold method to calculate the metrics, taking multiple threshold values (from 0.4 to 0.75 with a step of 0.05) to choose the best threshold. My question is: can I use a different threshold for each patient?
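A minimal sketch of the per-patient threshold scan described above (0.4 to 0.75 in steps of 0.05), here selecting by F1 as an example metric; whether a per-patient threshold is legitimate depends on whether such patient-specific calibration would also be possible at deployment time:

```python
import numpy as np
from sklearn.metrics import f1_score

def best_threshold_per_patient(y_true, y_prob, patient_ids):
    """Return {patient_id: best_threshold} using the 0.40-0.75 grid from the question."""
    thresholds = np.arange(0.40, 0.751, 0.05)
    y_true, y_prob, patient_ids = map(np.asarray, (y_true, y_prob, patient_ids))
    best = {}
    for pid in np.unique(patient_ids):
        mask = patient_ids == pid
        scores = [f1_score(y_true[mask], (y_prob[mask] >= t).astype(int),
                           zero_division=0) for t in thresholds]
        best[pid] = float(thresholds[int(np.argmax(scores))])
    return best
```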
Kindly provide me with the link.
1. The role of monitoring and evaluating finances in enhancing performance: compare the two and explain their usage.
Monitoring finances
Evaluating finances
2. What procedure do you think should be used or followed?
3. Which of the ways mentioned above can best ensure that monitoring and evaluating finances helps overcome performance challenges?
Is it the novelty of the research idea that matters, or the impact factor of the journal in which the research article is published, when applying for a post-doc position? Impact factors are regularly updated and keep changing. How does the impact factor truly evaluate the quality of a research article?
Evaluation Metrics
RMSE - Root Mean Square Error
RMSLE - Root Mean Squared Log Error
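A minimal numpy sketch of the two metrics listed above; RMSLE assumes non-negative targets and predictions because of the log1p transform:

```python
import numpy as np

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def rmsle(y_true, y_pred):
    # log1p requires y >= 0; penalises under-prediction more than over-prediction
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2))

print(rmse([3.0, 5.0, 2.5], [2.5, 5.0, 4.0]))   # ~0.913
print(rmsle([3.0, 5.0, 2.5], [2.5, 5.0, 4.0]))  # ~0.220
```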
- In general, there are types of data that interact with different types of policies, whatever the thematic area involved.
- It is important to identify the types of data that are transversal to the public policy evaluation steps, so that they can be reused several times.
Watershed development is the set of practices to impound the flowing rainwater, and thus help it percolate. But the groundwater recharge is also dependent on many other factors like the amount of rainfall, rainfall pattern, topographic slope, soil type, soil thickness, rock type etc. How do we measure the impact of watershed development on the groundwater?
Please do share publications on this topic.
Thank you.
I am trying to work out how variable characteristics can help to determine variable structure and pattern. I am halfway into the work, though.
I am currently working on my master's thesis and have faced some problems in designing a survey. The goal is to analyze a transition from ordinary offline retailing towards physical showrooms that fulfil product orders through an online shop.
I use customer satisfaction (ranging from 1-10) as the dependent variable and the following independent variables:
F = fulfillment (1/0): 1 = now, 0 = in 3 days
A = assortment (from 10 to 20 units per shop)
P = price (from 25 down to 25 * 0.7 with discount -> 17.50)
Is it possible to design a survey/experiment in a way that yields the data needed for this equation?
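One hedged way to operationalise this is a scenario (conjoint-style) experiment: build a full-factorial set of showroom profiles from the three attributes, have each respondent rate every profile on the 1-10 satisfaction scale, and fit an OLS model. The sketch below uses the levels from the question; the simulated responses are purely illustrative placeholders for real data.

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

fulfillment = [1, 0]             # 1 = now, 0 = in 3 days
assortment = [10, 15, 20]        # units per shop
price = [25.0, 21.25, 17.5]      # 25 down to 25 * 0.7

scenarios = pd.DataFrame(list(itertools.product(fulfillment, assortment, price)),
                         columns=["F", "A", "P"])   # 18 profiles to rate

# Simulated ratings only to illustrate the analysis step (replace with survey data).
rng = np.random.default_rng(0)
responses = scenarios.loc[scenarios.index.repeat(30)].reset_index(drop=True)  # 30 respondents
responses["satisfaction"] = (
    5 + 1.5 * responses["F"] + 0.1 * responses["A"] - 0.2 * responses["P"]
    + rng.normal(0, 1, len(responses))
).clip(1, 10)

model = smf.ols("satisfaction ~ F + A + P", data=responses).fit()
print(model.params)   # estimated attribute weights
```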
Dear Colleague,
Part of my Ph.D. thesis requires a completed questionnaire, but, unfortunately, due to Covid-19 we cannot visit the company under review. For this reason, I ask supply chain experts who are willing to complete the questionnaire to notify me so that I can email it to them. It would be very generous of you to respond to the questionnaire and also distribute it among your colleagues, students, and networks.
Thank you in advance for your help and cooperation.
I would like to start a discussion on which index is more reliable, the H-index or the i10-index. Both are usable; however, their methods of calculation are different. There is also the G-index. I am not asking about the differences but about their reliability. Any comments are welcome.
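To make the comparison concrete, all three indices can be computed from the same list of per-paper citation counts. A minimal sketch:

```python
def h_index(citations):
    # largest h such that h papers have at least h citations each
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

def i10_index(citations):
    # number of papers with at least 10 citations
    return sum(1 for c in citations if c >= 10)

def g_index(citations):
    # largest g such that the top g papers together have at least g^2 citations
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

papers = [45, 30, 22, 15, 12, 9, 7, 3, 1, 0]   # illustrative citation counts
print(h_index(papers), i10_index(papers), g_index(papers))  # 7 5 10
```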
I'm excited to be taking on a secondment role with the University's Student Engagement, Evaluation and Research (STEER) team and am building up my reading list!
I am training a custom dataset (RarePlanes) with DeepLab V3+ using Detectron2 (an open-source library written on top of PyTorch).
The custom dataset has a fixed image size of 512x512. When I trained for 100,000 iterations, I got the mIoU values below.
[05/10 06:13:49] d2.evaluation.sem_seg_evaluation INFO: OrderedDict([('sem_seg', {'mIoU': 48.263697089435894, 'fwIoU': 93.17537826963293, 'IoU-a': nan, 'IoU-i': 0.0, 'IoU-r': nan, 'IoU-c': nan, 'IoU-f': nan, 'IoU-t': nan, 'mACC': 50.0, 'pACC': 96.52739417887179, 'ACC-a': nan, 'ACC-i': 0.0, 'ACC-r': nan, 'ACC-c': nan, 'ACC-f': nan, 'ACC-t': nan})])
[05/10 06:13:49] d2.engine.defaults INFO: Evaluation results for custom_dataset_test in csv format:
[05/10 06:13:49] d2.evaluation.testing INFO: copypaste: Task: sem_seg
[05/10 06:13:49] d2.evaluation.testing INFO: copypaste: mIoU,fwIoU,mACC,pACC
[05/10 06:13:49] d2.evaluation.testing INFO: copypaste: 48.2637,93.1754,50.0000,96.5274
I'm looking for help configuring the DeepLab code with Detectron2 and for ways to increase the mIoU values.
Thanks.
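A hedged sketch (not a guaranteed fix) of where the usual Detectron2/DeepLab knobs live, assuming the DeepLab project code is on the path; the config path and values below are illustrative. The many NaN per-class IoUs in the log above often indicate that those classes never occur in the test annotations, so checking NUM_CLASSES and the label mapping against the RarePlanes masks is a sensible first step.

```python
from detectron2.config import get_cfg
from detectron2.projects.deeplab import add_deeplab_config  # DeepLab project add-on

cfg = get_cfg()
add_deeplab_config(cfg)
# Illustrative config path; use the DeepLab config you actually trained with.
cfg.merge_from_file("configs/deeplab_v3_plus_R_103_os16_mg124_poly_90k_bs16.yaml")

cfg.DATASETS.TRAIN = ("rareplanes_train",)   # your registered dataset names
cfg.DATASETS.TEST = ("rareplanes_test",)
cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES = 2       # must match the labels present in the masks
cfg.MODEL.SEM_SEG_HEAD.IGNORE_VALUE = 255    # label treated as "ignore"
cfg.INPUT.CROP.ENABLED = True
cfg.INPUT.CROP.TYPE = "absolute"
cfg.INPUT.CROP.SIZE = (512, 512)             # matches the 512x512 images
cfg.SOLVER.BASE_LR = 0.005                   # a lower LR often stabilises fine-tuning
cfg.SOLVER.MAX_ITER = 100000
```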
I am conducting an evaluation of professional development using Guskey's Five Levels of Evaluation. I am trying to decide if it is an incorrect application of his model to use the same evaluation question at level 3 and level 4.
My simulation is stuck at 'evaluating n1-dvs.cmd'.
Does anyone know what causes this? The simulation does not error out, but remains at this point.
I am simulating an AlInN/GaN stack. Is anyone willing to review my structure file?
What is the best superimposition software for the comparison of two similar virtual 3D objects?
I'm working on generative models for medical image synthesis, specifically GANs for CT image synthesis. What are the evaluation metrics best suited for evaluating a proposed model?
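When paired ground-truth CT slices exist (e.g., translation-style synthesis), per-slice PSNR and SSIM are widely reported; distribution-level metrics such as FID are typically added on top but need a feature extractor. A minimal scikit-image sketch:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_slices(real, fake, data_range=None):
    """real, fake: arrays of shape (n_slices, H, W) on the same intensity scale."""
    if data_range is None:
        data_range = float(real.max() - real.min())
    psnrs = [peak_signal_noise_ratio(r, f, data_range=data_range)
             for r, f in zip(real, fake)]
    ssims = [structural_similarity(r, f, data_range=data_range)
             for r, f in zip(real, fake)]
    return float(np.mean(psnrs)), float(np.mean(ssims))
```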
Please help me with the code to solve the following problem.
Problem: "Semantic segmentation of humans and vehicles in images".
The following information is given for solving this problem:
Experimental study:
using a machine learning model: SVM, KNN, or another model
Using a deep learning model:
either semi-DL: ResNet, VGG, Inception (GoogLeNet), or others
full DL: YOLO, U-Net, the CNN family (CNN, R-CNN, Faster R-CNN), or others
Evaluation of the two models in the learning phase
Evaluation of both models with test data
Exploration, description and analysis of the results obtained (confusion matrix, specificity, accuracy, FNR); a sketch of this evaluation step is given below.
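For the evaluation step, a minimal scikit-learn sketch of deriving specificity, accuracy and FNR from a confusion matrix, shown here for a binary (one-vs-rest per class) view:

```python
from sklearn.metrics import confusion_matrix

def binary_report(y_true, y_pred):
    """y_true, y_pred: 0/1 labels (e.g., class present vs. absent per pixel or image)."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    fnr = fn / (fn + tp) if (fn + tp) else 0.0
    return {"specificity": specificity, "accuracy": accuracy, "FNR": fnr}

print(binary_report([0, 1, 1, 0, 1, 0], [0, 1, 0, 0, 1, 1]))
```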
Why is Green-Gauss Node-Based gradient evaluation preferred over the default Green-Gauss Cell-Based option in ANSYS FLUENT?
It is natural that employees exert emotional effort as well as physical and cognitive effort. Emotional labor, the effort required to manage one's feelings or emotions at work, plays a significant part in many occupations.
Employees’ emotional efforts that are in harmony with business ethics can be defined as emotional labor. Evaluating emotional labor based on business ethics seeks to enable managers to reduce the negative consequences of emotional labor while preserving the positive ones.
Surface acting does not involve real feelings; it depends on fake emotional presentations. Therefore, surface acting can be evaluated as unethical emotional effort. As a result, these fake emotional presentations cannot be accepted as emotional labor.
Öngöre's findings (2019, 2020) showed that natural emotions do not cause emotional exhaustion (burnout), while surface acting does. Meanwhile, natural feelings cause vigor and dedication (work engagement).
References:
Öngöre, Ö. (2016). A theoretical study about the place and value of emotional labor in working life. Atatürk University Journal of Economics & Administrative Sciences, 30(5), 1161-1177.
Öngöre, Ö. (2019). Determining the effect of emotional labor on work engagement: Service-sector employees in private enterprises. Turkish Journal of Business Ethics, 12(1), 126-134.
Öngöre, Ö. (2020). Evaluating emotional labor: A new approach. Global Business and Organizational Excellence, 39(4), 35-44.
I am looking for advice concerning a (supposedly) well-known practical issue: article overload. While doing my PhD I was convinced that everything that went through publication was worth reading and understanding. My opinion has evolved since then, for very practical reasons: lack of time to read the literature and the absolute necessity to "pre-screen" something before deciding whether it's worth reading or not.
For scientific papers, the pre-screening can be tricky. Since the format is very standardized, as is the wording (nothing sounds more like a paper than a paper), I often end up reading half a dozen pages of a paper, annotating parts, spending time... before deciding I shouldn't spend time on it.
Do you have some "tricks" to share in order to reduce that waste of time? These "tricks" might be completely non-scientific, of course; I would still enjoy them.
I am looking for researchers in Educational Measurement and Evaluation working on teaching, learning, academic performance and test validation.
I just completed my doctorate, I live in Massachusetts, and I have 2 years of experience with evaluation in the social sciences. If I were to create an evaluation plan for an organization, how much should I charge per hour of work? My acquaintance has recently started a business and neither of us know how much to charge for evaluation. If possible, please leave a rough numerical range... even if it is just a guess. Thank you so much in advance!
Hello Everyone,
I have a questionnaire with a 3-point Likert scale for an overall evaluation of a service, plus several detailed attribute evaluations of that service. For example:
Overall Evaluation (1-3), then
Cleanliness (1-3)
Comfort (1-3)
Privacy (1-3)
..etc
I am looking for a way to find what are the most important variables (attributes) that have the highest impact on overall evaluation.
Would ordinary regression help?
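With a 3-point ordinal outcome, an ordered (proportional-odds) logit is usually a better fit than ordinary linear regression, and its coefficients rank the attributes by impact. A hedged sketch using statsmodels' OrderedModel (available in recent versions); the file and column names are hypothetical:

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("survey.csv")   # hypothetical columns: overall, cleanliness, comfort, privacy
attributes = ["cleanliness", "comfort", "privacy"]

# Treat the 1-3 overall rating as an ordered categorical outcome.
df["overall_cat"] = pd.Categorical(df["overall"], categories=[1, 2, 3], ordered=True)

model = OrderedModel(df["overall_cat"], df[attributes], distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())   # larger positive coefficients -> stronger impact on the overall score
```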
The following are well-known evaluation methods for computational ontologies:
1. Evaluation by humans
2. Evaluation using an ontology-based application
3. Data-driven evaluation
4. The gold standard evaluation
We designed and developed a domain ontology and implemented it in the OWL semantic language. How should we evaluate it? (A small sketch of one automated check follows below.)
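One automatable slice of this is a structural and logical check: load the OWL file, run a reasoner for consistency, and report basic statistics. A hedged owlready2 sketch (the file path is illustrative, and the reasoner needs Java); this complements rather than replaces the four methods above:

```python
from owlready2 import get_ontology, sync_reasoner

# Illustrative local path; adjust to your ontology's location or IRI.
onto = get_ontology("file:///path/to/my_domain_ontology.owl").load()

print("Classes:", len(list(onto.classes())))
print("Object properties:", len(list(onto.object_properties())))
print("Data properties:", len(list(onto.data_properties())))
print("Individuals:", len(list(onto.individuals())))

with onto:
    sync_reasoner()   # raises OwlReadyInconsistentOntologyError if the ontology is inconsistent
```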
How do you measure the outcomes of general education (up to 12th grade) and higher education (college and above) in a given country? What are some indicators you would suggest I use?
I have been looking for the following article: Review of Experimental Techniques for Evaluating Unsaturated Shear Strength of Soil. In: Advances in Civil Engineering and Infrastructural Development, Select Proceedings of ICRACEID 2019. DOI: 10.1007/978-981-15-6463-5_57. Is there anyone who would be able to provide it?
I am doing text mining on the performance of research-for-development interventions in low-income countries. I need some standard dictionaries and typologies to map the content of the papers in a standardized way. Can you share the international standards you are aware of? Thanks in advance.
Topic
Influence of Sino-Beninese Cooperation on Structural Adjustment in Benin: The Case of the Textile Industry
PROBLEM STATEMENT
Cooperation is the situation in which two or more nations interact, exchange, and build common undertakings that benefit them. The cooperation between China and Benin began well before 1972, the year in which diplomatic relations were re-established. Since then, China has become an important development partner of Benin. The distance between the two countries and the language barrier did not hamper the cooperation, which has resulted in a number of achievements, especially for Benin. For example, highlights in the areas of government assistance, agriculture and fisheries, industry, public works, public health, energy, telecommunications, trade and human resources illustrate the excellent relationship between the two countries.
Furthermore, from a development perspective, His Excellency Jingtao Peng said that China's cooperation includes five priorities: policy coordination, infrastructure interconnection (the priority area of the "Belt and Road"), trade facilitation, financial integration, and people-to-people cooperation, with emphasis on SITEX (Société Textile du Bénin). Since 1987, the cooperation targeting SITEX has yielded mere production and marketing of 100% cotton greige fabric. According to Josaphat (2018), the country annually imported 15 to 20 million metres of unbleached fabric from abroad. Unfortunately, weakened by its internal underperformance, the company has succumbed to the crisis of the textile sector, as has COTEB (the textile complex of Benin), another purely Beninese textile company.
However, the literature reveals that more needs to be done on textile fabrics and derived products in order to boost the economies of African countries that, like Benin, have cotton as a staple product. Indeed, it is estimated that about 90% of the fibre is exported, with only 10% being processed into yarns and then into textiles by local industries, which is partly due to the demand for foreign exchange from parastatal marketing agencies (ICAC, 2015). By processing and exporting finished products instead of fibre, African countries would be less penalized by subsidies to American producers and Africa would industrialize.
Therefore, it is worth assessing the results of the cooperation and informing decision makers about the impact of the long-standing cooperation between Benin and China. In other words, this research aims to investigate the influence that the partnership between China and Benin could have on Benin through its textile industry, as we address the following main research question: Can cooperation between China and Benin revive Benin's textile industry?
RESEARCH OBJECTIVE
The main objective of this research is to understand why Beninese textile companies are not viable, to the point where almost all cotton production is nowadays processed abroad. The study also intends to analyse the influence and prospects of Sino-Beninese cooperation for the development of the textile industry.
Specifically, the research will :
§ Evaluate the influence of Sino-Beninese cooperation on Benin's economic development and, above all, its textile industry
§ Evaluate the productive capacities of Benin's textile industries
§ Analyse the internal and external factors that affect the performance of textile mills
§ Develop a strategic plan to promote and enhance local processing plants with Chinese expertise
§ Suggest the clauses of a new partnership that will benefit both Benin and the Chinese side for the processing of Benin cotton
Ultimately, the study could contribute to understanding the optimization of local processing, improving the management and performance of existing enterprises as well as developing the added value of cotton through local processing of the product (into by-products).
RESEARCH HYPOTHESES
In the quest to respond to the research question, the following assumptions are formulated:
H1) The new cooperation between China and Benin has a positive impact on Benin's textile industry
H2) The productivity of processing raw cotton into fibre and other derived products is very low compared to that of highly industrialized countries such as China.
H3) The cost-income structure (inputs-outputs), the quality of industrial equipment (machinery), the qualification of personnel, logistics, and lead times are some of the factors affecting the performance of the textile industries in Benin.
H4) Management plans for the cotton industry have a negative impact on the processing of raw cotton into finished products that are competitive on the international market.
H5) The clauses of a new partnership would be more beneficial to Benin than to the Chinese side for the processing of Benin cotton.
Hello,
I am looking to evaluate the quality of a land use plan in my country. However, I am limited by the availability of criteria to use. Do you know of literature I can review, or can you advise on standards used in the planning profession when conducting a plan quality evaluation?
Your response will be much appreciated.
Regards, Malakia
How many of you have had a look at https://arxiv.org/pdf/1810.01605.pdf, where Prof. J. E. Hirsch suggests new, improved "h-indexes" "to quantify an individual’s scientific leadership"? Formal evaluators just love the original h-index; it is easily obtained from WoS, GS, Scopus and other databases. How much can or should this (or some improved parameter) be relied on in evaluating researchers' success, contribution and impact?
Peer assessment helps students to be more objective about their own learning, and it can assist learning, it is argued. Its worth for student evaluation would perhaps be questionable too.
I know there are plenty of evaluation methods that can be used to assess the clustering result for a single data set. I am trying to apply the same clustering technique to two different data sets and then compare the similarity of the resulting clusters.
For example, I want to compare the results of the same clustering algorithm on two consecutive time intervals with different data.
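Label-overlap indices such as ARI do not apply across two different data sets, but one hedged option is to cluster each interval separately and compare the matched cluster centroids (both data sets must share the same feature space). A sketch:

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

def centroid_similarity(X1, X2, k=5, seed=0):
    """Mean distance between optimally matched centroids; smaller = more similar structure."""
    c1 = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X1).cluster_centers_
    c2 = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X2).cluster_centers_
    cost = cdist(c1, c2)                       # pairwise centroid distances
    rows, cols = linear_sum_assignment(cost)   # optimal one-to-one matching (Hungarian)
    return float(cost[rows, cols].mean())
```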
Hello everyone,
In case of balanced classes, what is the best metric to evaluate a supervised binary classifier that predicts if a tweet will be relevant or not to a user: MCC (Matthews correlation coefficient) or F1-Score (F-Measure)?
Thanks.
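For a concrete comparison, both metrics are one call each in scikit-learn; with balanced classes they usually agree in ranking models, though MCC also reflects how well the irrelevant-tweet class is handled. A minimal sketch with toy labels:

```python
from sklearn.metrics import matthews_corrcoef, f1_score

# Toy labels purely for illustration: 1 = relevant tweet, 0 = irrelevant tweet
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print("MCC:", matthews_corrcoef(y_true, y_pred))
print("F1 :", f1_score(y_true, y_pred))
```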
The evaluation of the vulnerability of school children below 10 years of age is being investigated.
Abstract and CV submission deadline – June 30th, 2020
Call details
The John Molson School of Business at Concordia University kindly invites contributions to the forthcoming edited book Beyond the 2ºC - Business and Policy Trajectories to Climate Change Adaptation to be published by Palgrave Macmillan and being considered for the “Palgrave Studies in Sustainable Business: In Association with Future Earth” book series.
ABOUT THE BOOK
Climate change mitigation, understood as an approach to reduce human-induced emissions, has taken centre stage in climate action debates and efforts in the last decades. Currently published reports and studies present scenarios under which we can limit the global temperature rise to a 2°C threshold. However, to stay within the 2°C threshold, we need to move towards net-negative global emissions. This would require mobilization on a global scale and improvements in our approaches to mitigating global warming. After passing the symbolic 400 parts per million (PPM) threshold of carbon dioxide equivalents (CO2-eq) in the atmosphere in 2016, recent studies have highlighted that the current emission trajectory can easily lead to concentrations of up to 1,000 PPM of CO2-eq – leading to an average global warming of up to 5.4°C by the end of this century.
While many governments, businesses and researchers like to believe that a mitigation-focused approach can keep the 2°C threshold within reach, this edited book intends to investigate the business and policy adaptation trajectories beyond what are currently understood to be some of the major tipping points in the climate system. In these scenarios, the planet will be on an accelerated path towards deforestation, biodiversity loss, erosion of inhabited and uninhabited coastal areas, and the possible disappearance of entire island states. These events will be coupled with the possible proliferation of disease, human migration, and increased conflicts over resources. This calls for academics, practitioners, and policymakers to shift their attention away from the almost exclusive focus on climate change mitigation, to also consider adaptation plans.
Beyond the 2ºC - Business and Policy Trajectories to Climate Change Adaptation is an edited collection that will review and critically analyze new and innovative business and policy approaches to climate change adaptation across different economic sectors and for different locations. The edited collection will aim to ignite an academic discussion regarding the necessary, and potentially urgent, adaption strategies that could address the risks induced by the fast-changing climate. The contributions should demonstrate how we can adapt to a world where fresh water is scarce, where extreme weather events are a daily reality, where global sea levels are up to 2.4 m higher than today, and where flooding and wildfires are no longer discrete events. The collection plans to evaluate the readiness of our businesses and policies to adapting to this “new” world and to explore strategies that move beyond the current incremental approaches.
CALL FOR CONTRIBUTIONS
Beyond the 2ºC - Business and Policy Trajectories to Climate Change Adaptation aims to explore and propose business and policy solutions for climate-induced economic, technical, and societal challenges.
The editors are accepting contributions by experts in both the academic and practitioner communities in business and policy, as well as related fields such as economics, management, development studies, finance, and entrepreneurship. The editors are inviting contributions that:
· Shed new light on our understanding of climate-related vulnerabilities and risks
· Explore innovative risk management procedures
· Present new and emerging processes for internalizing adaptation in existing business and policy approaches
· Identify new barriers to large scale and/or local climate change adaptation
· Introduce methodologies for mapping and understanding synergies and trade-offs in adaptation
· Investigate approaches to overcoming conflicts in business and policy adaptation trajectories
The editors are encouraging contributions that move beyond the current disciplinary divides and present novel interdisciplinary approaches, which use scenario building methodologies in their investigations and study the social, economic, environmental, and cultural dimensions of the complex adaptation trajectories. Moreover, the editors will also be accepting chapters that incorporate new concepts or tools beyond the academic fields of business administration and political science. These fields will include the natural and social sciences, which make connections to the business and policy. The editors also encourage contributions that move beyond carbon emissions to focus on emerging challenges and themes regarding adaptation, which includes health, wellbeing, air quality, waste, and biodiversity. In addition, chapters that use case studies or comparative studies (between different solutions, applications in different industries, or variations between regions) are strongly encouraged. Finally, considering the global nature of climate change and its multi-scale consequences, the editors invite authors to critically consider the scalar relevance – local, regional, national, and supranational levels – of their contributions.
The submissions will be reviewed with an open mind and with a particular focus on the relevance of the chapter with respect to adapting to climate change and its consequences beyond the 2ºC threshold. The edited book will serve as an academic reference for senior undergraduate, graduate, and post-graduate scholars in the fields of business, public affairs, social science, environmental studies, and law across the globe. It will also function as a practical guide and a reference for emerging best practices on the topic of climate change adaptation for industry and business leaders, regulators, and policymakers around the world. Although the book can be used as a reference book in academic courses, it will not be specifically organized as a textbook.
POTENTIAL TOPICS FOR CHAPTERS
1. CLIMATE CHANGE HAZARDS AND THEIR MANAGEMENT
a. Understanding the hazards and their management
b. Technological hazards
c. Political hazards
d. Natural hazards (cyclones, storms, floods, droughts)
e. Socio-economic risks
f. Human health risks
g. Planetary health and biodiversity risks
h. Geoengineering and climate management
i. Greenhouse gas management
ii. Solar radiation management
2. THE FUTURE OF FOSSIL FUELS AND EMISSIONS
a. Fossil fuel subsidies
b. Carbon pricing/carbon taxation
c. Biofuel and other alternative fuels
d. Renewable energy (wind, solar, geothermal)
e. The future of nuclear power (challenges and opportunities)
f. Battery electric vehicles (BEVs)
g. Hydrogen fuels
3. ADAPTING CITIES, URBAN SETTLEMENTS, AND CHANGES TO HUMAN BEHAVIOUR
a. Urban planning, urban design, and cities beyond the 2ºC
b. Waterfront settlements, island states, and other high-risk human settlements
c. Buildings and construction (design, materials, codes/standards/certifications, retrofitting)
d. Local modes of transportation (cars and other private transport, public transit, collective passenger transport, human-powered transport, etc.)
e. Intra-continental travel (rail, advanced trains and emerging technologies)
f. Inter-continental travel (aviation fuel, turbofan/turboprop engines, emissions and contrails, emerging technologies, etc.)
g. Global product transport and logistics
4. ADAPTING THE PRODUCTION AND CONSUMPTION PATTERNS
a. Agriculture, soil, and forests
i. Animal and marine farming
ii. Agriculture, agroforestry, reforestation
iii. Soil and its rehabilitation
b. Demand-side management
i. Incentive and financing programs
ii. Change and development in consumption patterns
iii. Consumer behaviour beyond a 2ºC warmer climate
c. Supply-side management
i. Change and development in production patterns
ii. Recycling, upcycling, reuse, and regeneration
iii. Closed-loop production models
iv. Living and biotic natural resources
v. Non-living natural resources (metals, minerals, and stone)
vi. Renewability of resources
d. New and emerging modes of production and consumption
5. FINANCING GLOBAL CLIMATE ADAPTATION
a. Microfinance (micro-credit, micro-insurance, risk, etc.)
b. Philanthropy and venture capital
c. ESG investment (trends, renewable energy investment, partnerships, water, etc.)
d. Climate finance (private climate finance, green funds, adaptation funds, the low carbon market, divestment, etc.)
e. Evaluating and managing the financial risks of adaptation
f. Natural capital accounting (efforts, innovations, and effects)
g. Financial policies
6. LIMITATION AND THE FUTURE OF CLIMATE ADAPTATION
a. The limits to climate change mitigation
b. Political and policy limits
c. Capital limits
d. Technological limits
e. Societal and cultural limits
IMPORTANT DATES
· Abstract and CV submission deadline – June 30th, 2020
· Selection of abstracts and notification to successful contributors – July 31st, 2020
· Full chapter submission – November 30th, 2020
· Revised chapter submission – February 28th, 2021
GUIDELINES FOR CONTRIBUTORS
Submissions should be written in English using a non-technical writing style. The contributions may include diagrams/illustrations in order to present data, or photographs/figures (all in black & white) to better illustrate the topic of discussion. Submitted chapters should be original and exclusively prepared for the present book. No part of the article should be published elsewhere. Chapters must not exceed 7,000 words (including all references, appendices, biographies, etc.), must use 1.5-line spacing and 12 pt. Times New Roman font, and must use the APA 7th edition reference style.
Researchers and practitioners are invited to submit abstracts of no more than 500 words, a bibliography for their proposed chapter, and a CV. Abstract submissions are expected by June 30th, 2020. Submissions should be sent via email to climatechange.adaptation@concordia.ca
Authors will be notified about the status of their proposals and will be sent complete chapter guidelines. Full chapters are expected to be submitted by November 30th, 2020.
Please note there are no submission or acceptance fees for the manuscripts.
ABOUT THE EDITORS
Thomas Walker[1]
Dr. Walker holds an MBA and PhD degree in Finance from Washington State University. Prior to his academic career, he worked for several years in the German consulting and industrial sector at firms such as Mercedes Benz, Utility Consultants International, Lahmeyer International, Telenet, and KPMG Peat Marwick. He has taught as a visiting professor at the University of Mannheim, the University of Bamberg, the European Business School, and the WHU – Otto Beisheim School of Management. His research interests are in sustainability & climate change, corporate governance, securities regulation and litigation, and insider trading and he has published over sixty articles and book chapters in these areas. He is the lead-editor of five books on sustainable financial systems, sustainable real estate, sustainable aviation, emerging risk management, and environmental policy. Dr. Walker has held numerous administrative and research positions during his career. For instance, he served as the Laurentian Bank Professor in Integrated Risk Management (2010-2015), Chair of the Finance Department (2011-2014), Director/Co-director of the David O’Brien Centre for Sustainable Enterprise (2015-2017), and as Associate Dean, Research and Research Programs (2016-2017) at Concordia University. In addition, he has been an active member of various advisory boards and steering committees including, among others, the human resources group of Finance Montréal, the steering committee of the Montreal chapter of the Professional Risk Managers’ International Association (PRMIA), the academic advisory board of the MMI/Morningstar Sustainable Investing Initiative, and the advisory board for Palgrave Macmillan’s Future Earth book series on sustainability.
Stefan Wendt[2]
Dr. Wendt is an Associate Professor and Director of the Graduate Programs in Business at Reykjavik University’s Department of Business Administration. From March 2005 until March 2015 he was Research and Teaching Assistant at the Department of Finance at Bamberg University, Germany, where he received his doctoral degree in 2010. He has taught as a visiting lecturer at École Supérieure de Commerce Montpellier, France, and Baden-Württemberg Cooperative State University (DHBW), Mosbach, Germany. His fields of research include corporate finance and governance, risk management, financial markets and financial intermediation, small and medium-sized enterprises, and behavioural finance.
Sherif Goubran[3]
Sherif is a PhD. candidate in the Individualized Program (INDI) at Concordia University, a Vanier Scholar, and a Concordia Public Scholar. He is conducting interdisciplinary research within the fields of design, architecture, building engineering and real-estate finance. His PhD research investigates the alignment between sustainable building practices and Sustainable Development Goals (SDGs). His research focus includes building sustainability and sustainability assessment, sustainability in architectural design and human approaches in design. Sherif completed a MASc in building engineering in 2016 with a focus on energy efficiency in commercial buildings. Before that, he completed a BSc in Architecture at the American University in Cairo (AUC-Egypt). Today, he is actively engaged in several research laboratories, centers, and groups where he teaches and conducts research in design, engineering, architecture, and business. He is also involved in several sustainability committees and projects at Concordia on the student as well as the administrative levels.
Tyler Schwartz[4]
Tyler is currently a research and book publication assistant in the Department of Finance at Concordia University. He recently completed his undergraduate degree at the John Molson School of Business in which he received an Honours in Finance. As part of his undergraduate degree, he completed a thesis project in which he wrote a paper focusing on the relationship between data breaches, security prices, and crisis communication. He was also presented with the CUSRA scholarship in 2017, which is awarded to undergraduate students who have an interest in pursuing research activities. His research interests include sustainable finance, machine learning, data breaches, and cognitive science.
[1] Concordia University: thomas.walker@concordia.ca
[2] Reykjavik University: stefanwendt@ru.is
[3] Concordia University: sherif.goubran@mail.concordia.ca
[4] Concordia University: tyler.schwartz@mail.concordia.ca
Bartlett H, Westcott L, Hind P, Taylor H (1998) An Evaluation of Pre-Registration Nursing Education: A Literature Review and Comparative Study of Graduate Outcomes. Oxford Centre for Health Care Research and Development, Oxford Brookes University, Oxford.
Thanks
Hey everybody :)
I am writing my master's thesis about the influence of innovation climate on the innovativeness of ideas.
The study was conducted in a company where employees could post their ideas on an online platform. There were two "idea boxes" to sort the ideas: 1. New Working Processes and 2. New Products and Services.
I have a scale to measure product innovativeness based on Schultz et al. (2013), "Measuring New Product Portfolio Innovativeness: How Differences in Scale Width and Evaluator Perspectives Affect its Relationship with Performance".
But with this scale it is hard to measure the innovativeness of working processes; for example, one idea is to establish an "innovation team" and another is to use a "cloud-saving" system. So basically every idea that can improve the processes of teams or the whole organization ends up in this box.
What I need is a scale to externally measure innovativeness of working processes. So experts have to rate the ideas.
I searched for this kind of scale, but did not find any.
Thanks in advance.
Greetings
Christian
Evaluation metric to assess superpixel based algorithms
Educ 6782 Assessment and Evaluation course
Hi everyone, I want to learn as much as possible about survey methodology for evaluation purposes. I'm particularly interested in research on the validity and reliability of: question types (format, types, what's best for what purpose), Likert-type scales (how many items, what labels to use, etc.), and how to design questionnaires to avoid or minimize biases (e.g. primacy and recency effects, leniency effects, halo effects, etc.). Basically, I'm trying to answer a question like: "Does it matter how you ask the questions when you are evaluating others?".
Could anyone recommend some journals or meta-analyses or authors or specific search terms that could help me learn more?
Let's suppose we want to compare for a given problem several supervised learning algorithms, in terms of accuracy and speed.
If we don't have a lot of data, is it acceptable to duplicate instances to assess the speed performance?
I know it's not recommended when assessing the accuracy, but what about the speed?
Thanks in advance.
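A minimal sketch of the kind of timing experiment described above: tile the small data set to the desired size and time fit/predict with a monotonic clock. The duplication says something about speed and scaling only, not accuracy; the model choice here is just an example.

```python
import time
import numpy as np
from sklearn.linear_model import LogisticRegression

def time_fit_predict(model, X, y, factor=10):
    """Duplicate the data `factor` times row-wise and time training and prediction."""
    X_big = np.tile(X, (factor, 1))
    y_big = np.tile(y, factor)
    t0 = time.perf_counter()
    model.fit(X_big, y_big)
    fit_s = time.perf_counter() - t0
    t0 = time.perf_counter()
    model.predict(X_big)
    predict_s = time.perf_counter() - t0
    return fit_s, predict_s

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))        # small synthetic data set for illustration
y = rng.integers(0, 2, size=200)
print(time_fit_predict(LogisticRegression(max_iter=1000), X, y, factor=50))
```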