Collecting Data - Science method
Explore the latest questions and answers in Collecting Data, and find Collecting Data experts.
Questions related to Collecting Data
One of my well-known contacts is conducting research and collecting data through a survey, with the motto:
Be Part of Our Innovation in Financial Management!
They are working on a cutting-edge, AI-powered financial advisor as part of a capstone project, and your input can make a significant impact! Your valuable data will help design a smarter, more personalized tool to manage finances and achieve financial goals.
Don’t worry, your data will be safe and protected, I promise. The survey is quick and easy, taking just 5–7 minutes of your time.
On their behalf, I kindly request everyone to fill out this form and provide your valuable information.
Thank you so much!
Hi all,
I'm currently trying to get an overview of existing sequences on NCBI, and I would like to look at their collection dates and locations. At the moment I can only manually click into each BioSample page to find this information, and it would take ages to collect this data for 4000 sequences. Is there any way I could download this in a batch for all sequences at once?
Thanks
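One way to avoid clicking through each BioSample page is NCBI's Entrez E-utilities, which can return BioSample records as XML for many accessions per request. The sketch below parses such XML with the standard library; the harmonized attribute names (`collection_date`, `geo_loc_name`) and the accession are assumptions based on typical BioSample records. In practice you would first download the XML in batches, e.g. with Biopython's `Bio.Entrez.efetch` or the efetch URL endpoint, then parse it like this:

```python
# Sketch (assumptions flagged in the lead-in): parsing BioSample XML for
# collection date and location. In practice you would first fetch the XML
# in batches, e.g. via NCBI's efetch endpoint (db=biosample) or Biopython's
# Bio.Entrez; here a small inline record stands in for the downloaded file.
import xml.etree.ElementTree as ET

SAMPLE_XML = """
<BioSampleSet>
  <BioSample accession="SAMN00000001">
    <Attributes>
      <Attribute harmonized_name="collection_date">2021-03-15</Attribute>
      <Attribute harmonized_name="geo_loc_name">China: Wuhan</Attribute>
    </Attributes>
  </BioSample>
</BioSampleSet>
"""

def extract_metadata(xml_text):
    """Return {accession: {'collection_date': ..., 'geo_loc_name': ...}}."""
    records = {}
    root = ET.fromstring(xml_text)
    for sample in root.iter("BioSample"):
        attrs = {}
        for attr in sample.iter("Attribute"):
            name = attr.get("harmonized_name")
            if name in ("collection_date", "geo_loc_name"):
                attrs[name] = attr.text
        records[sample.get("accession")] = attrs
    return records

print(extract_metadata(SAMPLE_XML))
```

Looping this parser over batched efetch downloads should cover 4000 sequences in minutes rather than days.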
The stages of the scientific research methodology include defining the problem, formulating hypotheses, identifying variables, designing the study, collecting data, analyzing data, and presenting conclusions.
I want to collect data from 1000 admitted cardiac patients within 3 months. I will collect data from 4 male wards and 4 female wards. Daily patient turnover in these wards is as follows: (Male ward 1 = 113, Male ward 2 = 56, Male ward 3 = 60, Male ward 4 = 83) and (Female ward 1 = 88, Female ward 2 = 69, Female ward 3 = 87, Female ward 4 = 71). Which sampling technique will be best to collect data from these male and female wards?
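If the goal is a sample that mirrors the wards' patient flow, proportional stratified sampling is a common choice: allocate the 1000 patients across the eight wards in proportion to their turnover. A rough sketch (largest-remainder rounding is just one way to make the allocations sum exactly to 1000):

```python
# Sketch: proportional allocation of a total sample of n = 1000 across the
# eight wards, using the daily turnover figures as stratum sizes and
# largest-remainder rounding so the allocations sum exactly to n.
from math import floor

turnover = {
    "M1": 113, "M2": 56, "M3": 60, "M4": 83,
    "F1": 88, "F2": 69, "F3": 87, "F4": 71,
}

def proportional_allocation(sizes, n):
    total = sum(sizes.values())
    exact = {k: n * v / total for k, v in sizes.items()}
    alloc = {k: floor(x) for k, x in exact.items()}
    # Hand out the remaining units to the largest fractional parts.
    leftover = n - sum(alloc.values())
    for k in sorted(exact, key=lambda k: exact[k] - alloc[k], reverse=True)[:leftover]:
        alloc[k] += 1
    return alloc

alloc = proportional_allocation(turnover, 1000)
print(alloc, sum(alloc.values()))
```

Within each ward, patients would then be selected by simple random or systematic sampling until that ward's quota is reached.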
Recently, I have published in the RINP and SBSR journals. Both journals show a quartile rank of Q1 in the main Scopus database, but Q2 on the Scimago webpage. Since Scimago collects data periodically from Scopus, which metric system is more reliable?
Hello everyone,
I am conducting a study on the emotional well-being of students before and after participating in a bibliotherapy intervention. The study is based on a hypothesis, and I plan to collect data using the Barbara Fredrickson Positive and Negative Emotions Questionnaire. Each question in this questionnaire offers five response options based on a Likert scale: very much, a lot, medium, little, and not at all.
Given that the collected data will be ordinal in nature, I would like to ask for your advice on the best approach for analysis. Should I convert the responses to scale data, or would it be more appropriate to use tests designed for ordinal data, such as the Wilcoxon Signed-Rank Test?
I would appreciate any guidance or insights you may have on this topic. Thank you in advance for your help!
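For paired ordinal scores, tests designed for ordinal data are usually the safer choice, and the Wilcoxon signed-rank test is a standard option. As a rough sketch of what the test computes, here is the W statistic in plain Python (zero differences dropped, tied absolute differences given average ranks, hypothetical pre/post scores); for actual analysis, including p-values, use an established implementation such as scipy.stats.wilcoxon:

```python
# Sketch: the Wilcoxon signed-rank statistic for paired pre/post scores.
# Zero differences are dropped and tied |differences| get average ranks,
# following the usual convention; for p-values use an established
# implementation (e.g. scipy.stats.wilcoxon) rather than this sketch.

def wilcoxon_w(pre, post):
    diffs = [b - a for a, b in zip(pre, post) if b - a != 0]
    absd = sorted((abs(d), i) for i, d in enumerate(diffs))
    ranks = [0.0] * len(diffs)
    j = 0
    while j < len(absd):
        k = j
        while k < len(absd) and absd[k][0] == absd[j][0]:
            k += 1
        avg_rank = (j + 1 + k) / 2  # average of ranks j+1 .. k
        for _, idx in absd[j:k]:
            ranks[idx] = avg_rank
        j = k
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_plus, w_minus)

pre = [2, 3, 1, 4, 2, 3]   # hypothetical pre-intervention scores
post = [4, 3, 2, 5, 3, 2]  # hypothetical post-intervention scores
print(wilcoxon_w(pre, post))
```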
Data analysis is a fundamental aspect of academic research, enabling researchers to make sense of collected data, draw meaningful conclusions, and contribute to the body of knowledge in their field. This article examines the critical role of data analysis in academic research, discusses various data analysis techniques and their applications, and provides tips for interpreting and presenting data effectively.
Overview of Data Analysis in Research
Data analysis involves systematically applying statistical and logical techniques to describe, summarize, and evaluate data. It helps researchers identify patterns, relationships, and trends within the data, which are essential for testing hypotheses and making informed decisions. Effective data analysis ensures the reliability and validity of research findings, making it a cornerstone of academic research.
Descriptive vs. Inferential Statistics
1. Descriptive Statistics:
• Purpose: Descriptive statistics summarize and describe the main features of a dataset. They provide simple summaries about the sample and the measures.
• Techniques: Common techniques include measures of central tendency (mean, median, mode), measures of variability (range, variance, standard deviation), and graphical representations (histograms, bar charts, scatter plots).
• Applications: Descriptive statistics are used to present basic information about the dataset and to highlight potential patterns or anomalies.
2. Inferential Statistics:
• Purpose: Inferential statistics allow researchers to make inferences and predictions about a population based on a sample of data. They help determine the probability that an observed difference or relationship is due to chance.
• Techniques: Common techniques include hypothesis testing (t-tests, chi-square tests), confidence intervals, regression analysis, and ANOVA (analysis of variance).
• Applications: Inferential statistics are used to test hypotheses, estimate population parameters, and make predictions about future trends.
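As a minimal illustration of the two kinds of statistics above, the sketch below computes descriptive measures for one hypothetical group and a Welch two-sample t statistic with the Python standard library; degrees of freedom and p-values are left to a proper statistics package:

```python
# Sketch: descriptive statistics with the standard library, plus a Welch
# two-sample t statistic. Only the statistic is computed here; for degrees
# of freedom and p-values, use a statistics package. Data are hypothetical.
import statistics as st
from math import sqrt

group_a = [12, 15, 11, 14, 13, 16, 12]
group_b = [18, 17, 19, 16, 20, 18, 17]

# Descriptive statistics for group_a
print("mean:", st.mean(group_a))
print("median:", st.median(group_a))
print("stdev:", st.stdev(group_a))   # sample standard deviation

def welch_t(x, y):
    """Welch's t statistic for two independent samples."""
    se = sqrt(st.variance(x) / len(x) + st.variance(y) / len(y))
    return (st.mean(x) - st.mean(y)) / se

print("t:", welch_t(group_a, group_b))
```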
Qualitative Data Analysis Methods
1. Content Analysis:
• Purpose: Content analysis involves systematically coding and categorizing textual or visual data to identify patterns, themes, and meanings.
• Applications: Used in fields such as sociology, psychology, and media studies to analyze interview transcripts, open-ended survey responses, and media content.
2. Thematic Analysis:
• Purpose: Thematic analysis focuses on identifying and analyzing themes or patterns within qualitative data.
• Applications: Commonly used in social sciences to analyze interview data, focus group discussions, and qualitative survey responses.
3. Grounded Theory:
• Purpose: Grounded theory involves generating theories based on data collected during the research process. It is an iterative process of data collection and analysis.
• Applications: Used in fields such as sociology, education, and health sciences to develop new theories grounded in empirical data.
4. Narrative Analysis:
• Purpose: Narrative analysis examines the stories or accounts provided by participants to understand how they make sense of their experiences.
• Applications: Used in psychology, anthropology, and literary studies to analyze personal narratives, life histories, and case studies.
Tools and Software for Data Analysis
1. Statistical Software:
• SPSS: Widely used for statistical analysis in social sciences. It offers a range of statistical tests and data management tools.
• R: A powerful open-source software for statistical computing and graphics. It is highly extensible and widely used in academia.
• SAS: A comprehensive software suite for advanced analytics, multivariate analysis, and data management.
2. Qualitative Data Analysis Software:
• NVivo: A popular software for qualitative data analysis, offering tools for coding, categorizing, and visualizing qualitative data.
• ATLAS.ti: Another widely used software for qualitative research, providing tools for coding, memoing, and network visualization.
3. Data Visualization Tools:
• Tableau: A powerful data visualization tool that helps create interactive and shareable dashboards.
• Microsoft Power BI: A business analytics tool that provides interactive visualizations and business intelligence capabilities.
Tips for Interpreting and Presenting Data
1. Understand Your Data: Before analyzing data, ensure you have a thorough understanding of its source, structure, and limitations. This helps in selecting appropriate analysis techniques and interpreting results accurately.
2. Use Clear Visualizations: Visual representations such as charts, graphs, and tables can make complex data more accessible and understandable. Choose the right type of visualization for your data and ensure it is clear and well-labelled.
3. Contextualize Findings: Interpret your data in the context of existing literature and theoretical frameworks. Discuss how your findings align with or differ from previous research.
4. Report Limitations: Be transparent about the limitations of your data and analysis. Discuss potential sources of bias, measurement errors, and the generalizability of your findings.
5. Communicate Clearly: Present your data and findings in a clear and concise manner. Avoid jargon and technical language that may confuse readers. Use straightforward language and provide explanations for complex concepts.
In conclusion, data analysis plays a crucial role in academic research, enabling researchers to draw meaningful conclusions and contribute to their field. By understanding different data analysis techniques, utilizing appropriate tools, and following best practices for interpreting and presenting data, researchers can enhance the quality and impact of their work.
In recent correspondence, my use of the PVGIS software to collect data for subsequent analysis was questioned, on the view that this may not represent the most professional approach. Given that the software provides all the requisite data for calculating electricity production from photovoltaics, and that other scientific studies have previously used it, I am interested to hear your views on this matter.
I am doing research on decreasing the waiting time in the Emergency Department.
I collected data from more than 1000 patients (pre).
Then I made some changes in the procedure to improve the waiting time, and collected the same data variables from another 1000 patients (post).
What is the statistically suitable test?
In other words, are these data dependent (paired) or independent?
What is the justification for focusing on questionnaires as the mechanism for collecting data? Is it better than other methods? Do you trust that the culture of the respondents encourages them to read the questionnaire carefully before answering?
Hi! I am looking for data for my psychological research. Can anyone suggest how I can collect data in 1 month? My sample size is 80. Another question: is there any website or online platform for data collection suitable for my research? I want to collect data from West Bengal, India. Please guide me.
I am conducting experiments using a 1-g shake table and have collected data from both accelerometers and strain gauges. However, I am uncertain about the appropriate filtering method for the data. Should I apply a low-pass filter or a band-pass filter for optimal results? The shake table has a maximum frequency of 50 Hz, while the excitation frequency is 2 Hz.
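With a 2 Hz excitation, a low-pass filter with a cutoff somewhat above 2 Hz is a common starting point (a band-pass is mainly useful if you also need to remove DC drift, e.g. in strain-gauge signals). The sketch below uses an illustrative 5 Hz cutoff and synthetic 2 Hz + 40 Hz data to show the effect of a simple first-order low-pass; in practice a zero-phase Butterworth filter (e.g. scipy.signal.butter with filtfilt) is the usual choice:

```python
# Sketch: a first-order (RC-style) low-pass filter, cutoff 5 Hz (illustrative),
# applied to a synthetic signal mixing the 2 Hz excitation with 40 Hz noise.
# In practice a zero-phase Butterworth filter (scipy.signal.butter with
# filtfilt) is the usual choice; this stdlib version just shows the effect.
from math import sin, cos, pi, sqrt

fs = 1000.0          # sampling rate, Hz
fc = 5.0             # cutoff frequency, Hz
dt = 1.0 / fs
rc = 1.0 / (2 * pi * fc)
alpha = dt / (rc + dt)

n = 4000             # 4 s of data
t = [i * dt for i in range(n)]
x = [sin(2 * pi * 2 * ti) + sin(2 * pi * 40 * ti) for ti in t]

# One-pole low-pass: y[i] = y[i-1] + alpha * (x[i] - y[i-1])
y = [x[0]]
for xi in x[1:]:
    y.append(y[-1] + alpha * (xi - y[-1]))

def amplitude(sig, freq):
    """Amplitude of the `freq` component via correlation (skip transient)."""
    seg = sig[1000:]
    ts = t[1000:]
    a = 2 * sum(s * sin(2 * pi * freq * ti) for s, ti in zip(seg, ts)) / len(seg)
    b = 2 * sum(s * cos(2 * pi * freq * ti) for s, ti in zip(seg, ts)) / len(seg)
    return sqrt(a * a + b * b)

print("2 Hz amplitude after filtering:", amplitude(y, 2))
print("40 Hz amplitude after filtering:", amplitude(y, 40))
```

The 2 Hz component passes through nearly intact while the 40 Hz content is strongly attenuated, which is the behaviour you want for an excitation well below the noise band.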
I am working on a thesis and a dissertation. I am at the stage of collecting data, and I aim to do it online. How can I perform this task?
QUERY REGARDING ETHICAL APPROVAL FROM INSTITUTION
Dear connections,
I collected data using questionnaires; respondents were well informed of the purpose of collecting the data, and data were collected only after obtaining their consent to participate. In this case, do we still require ethical approval from our institutes?
Any clarification/insights regarding this topic would be appreciated.
Dear esteemed colleagues
This must be a very silly question. But I thought it would be interesting to ask you and hear your thoughts in this basic time management issue.
I am not finding enough time to pursue the many research projects that I started in recent years due to my current academic commitments. How do you manage your time to prepare for your classes, review academic articles, supervise your master's students, plan and conduct your research, write papers on already collected data, and so on? I have to do all these things on a daily basis, which keeps me too busy to play with my little son. If possible, please share your time management strategies, such as how many hours you allocate to work each day.
I am currently validating a tool assessing patients' perspectives on primary care. Initial research showed me that using a reverse-order Likert scale (1 = "I completely agree", 5 = "I completely disagree", 3 = "I don't know") would avoid response bias, so I collected data with this tool. I have now passed to the analysis stage. What are the important aspects to consider during scale validation? What is the impact on the descriptive features of the tool and its scoring? What precautions should I follow?
You are probably collecting data from tools such as interview guides where every respondent is giving different answers/opinions
This dataset, available at https://zenodo.org/records/11711230, contains the data of 4011 videos about the ongoing outbreak of measles published on 264 websites on the internet between January 1, 2024, and May 31, 2024. These websites primarily include YouTube and TikTok, which account for 48.6% and 15.2% of the videos, respectively. The remainder of the websites include Instagram and Facebook as well as the websites of various global and local news organizations. For each of these videos, the URL of the video, title of the post, description of the post, and the date of publication of the video are presented as separate attributes in the dataset.
After developing this dataset, sentiment analysis (using VADER), subjectivity analysis (using TextBlob), and fine-grain sentiment analysis (using DistilRoBERTa-base) of the video titles and video descriptions were performed. This included classifying each video title and video description into (i) one of the sentiment classes i.e. positive, negative, or neutral, (ii) one of the subjectivity classes i.e. highly opinionated, neutral opinionated, or least opinionated, and (iii) one of the fine-grain sentiment classes i.e. fear, surprise, joy, sadness, anger, disgust, or neutral. These results are presented as separate attributes in the dataset for the training and testing of machine learning algorithms for performing sentiment analysis or subjectivity analysis in this field as well as for other applications. The paper associated with this dataset (please see the following citation) also presents a list of open research questions that may be investigated using this dataset.
Please cite the following paper when using this dataset:
N. Thakur, V. Su, M. Shao, K. Patel, H. Jeong, V. Knieling, and A. Bian “A labelled dataset for sentiment analysis of videos on YouTube, TikTok, and other sources about the 2024 outbreak of measles,” Proceedings of the 26th International Conference on Human-Computer Interaction (HCII 2024), Washington, USA, 29 June - 4 July 2024. (Accepted as a Late Breaking Paper, Preprint Available at: https://doi.org/10.48550/arXiv.2406.07693)

I have seen studies using platforms like Prolific to collect data from customers or potential customers. However, I haven't yet seen studies in human resources using Prolific for data collection. There are filters on Prolific that allow you to screen participants according to employment-related variables. I am wondering whether Prolific is suitable for studies in this field. Thank you.
My research plan:
1. Via literature review and on-site investigation, summarize public space elements.
2. From the literature, place attachment theory has two factors: place dependence (shallow level, emphasizing visual and functional use) and place identity (deep level, emphasizing related historical, cultural, and personal experiences). The hypothesis is that, through questionnaire data and statistical analysis, a positive correlation can be found between public space elements and place dependence, and that perception of public space elements can indirectly impact place identity via place dependence, thereby affecting the level of place attachment.
3. To study this correlation, encode and quantify the public space elements, since they cannot be researched directly as quantitative data (they are affected by many factors).
4. Conduct correlation analysis between the quantified public space elements and people's perception data to analyze the relationship, and, by optimizing the design, enhance people's place attachment level.
Performing Structural Equation Modeling (SEM) involves several steps. Here’s a detailed guide:
1. Define the Model
- Specify the Theoretical Model: Determine the relationships among variables based on theory or prior research. This includes identifying latent variables (unobserved constructs) and observed variables (measured indicators).
- Draw a Path Diagram: Create a visual representation of the model showing latent variables, observed variables, and the hypothesized relationships between them.
2. Collect Data
- Design the Survey/Experiment: Develop a questionnaire or an experiment to collect data for the observed variables.
- Sample Size: Ensure an adequate sample size. SEM typically requires a large sample size to provide reliable estimates. A common rule of thumb is at least 200-400 respondents.
3. Estimate the Model
- Select the Software: Choose SEM software like AMOS, LISREL, Mplus, or R (using packages such as lavaan).
- Input Data: Load your dataset into the software.
- Specify the Model in Software: Define the model structure in the chosen software, including latent variables, observed variables, and their relationships.
- Run the Analysis: Use the software to estimate the model parameters.
4. Evaluate the Model
- Model Fit Indices: Check various fit indices such as Chi-square, RMSEA, CFI, TLI, and SRMR to evaluate how well the model fits the data.
- Modify the Model: If the model does not fit well, consider modifications based on theoretical justification and modification indices provided by the software.
5. Interpret Results
- Parameter Estimates: Look at the path coefficients, factor loadings, and other parameter estimates to understand the relationships between variables.
- Statistical Significance: Check the p-values to determine which paths are statistically significant.
- Effect Sizes: Consider the magnitude of the relationships.
6. Report the Findings
- Write the Report: Present the model, data collection method, analysis procedure, and results. Include path diagrams and fit indices.
- Discuss Implications: Discuss the theoretical and practical implications of your findings.
- Limitations and Future Research: Acknowledge the limitations of your study and suggest directions for future research.
Example Workflow in R using lavaan:
- Install and Load the lavaan Package:
install.packages("lavaan")
library(lavaan)
- Specify the Model:
model <- '
# Measurement model
latent1 =~ observed1 + observed2 + observed3
latent2 =~ observed4 + observed5 + observed6
# Structural model
latent2 ~ latent1
'
- Fit the Model (here your_data is a placeholder for your dataset):
fit <- sem(model, data = your_data)
- Evaluate the Model Fit:
summary(fit, fit.measures = TRUE)
- Interpret the Results:
parameterEstimates(fit)
Key Considerations
- Theory-Driven: Always base your model on strong theoretical foundations.
- Model Identification: Ensure your model is identified (the number of known values should be at least as great as the number of unknown parameters).
- Multicollinearity: Check for multicollinearity among observed variables.
- Missing Data: Handle missing data appropriately, using techniques like multiple imputation if necessary.
By following these steps, you can effectively perform Structural Equation Modeling to test complex relationships among variables.
Reference:
Singha, R. (2024). How do you perform Structural Equation Modeling (SEM)? Retrieved from https://www.researchgate.net/post/How_do_you_perform_Structural_Equation_Modeling_SEM
Developing a new multidimensional psychometric tool involves several key steps to ensure the tool is valid, reliable, and useful for its intended purpose. Here's an overview of the process:
1. Conceptualization
a. Define the Purpose: Identify the specific psychological constructs or dimensions you want to measure and the context in which the tool will be used.
b. Literature Review: Conduct a thorough review of existing literature to understand how these constructs have been previously defined and measured.
c. Theoretical Framework: Develop a theoretical framework that outlines the relationships between the constructs and guides the development of the tool.
2. Item Generation
a. Generate Items: Create a pool of items (questions or statements) that reflect the constructs you intend to measure. This can be done through brainstorming sessions, expert consultations, and reviewing existing tools.
b. Initial Item Review: Have experts review the items for clarity, relevance, and comprehensiveness. Revise items based on their feedback.
3. Pilot Testing
a. Preliminary Testing: Administer the initial item pool to a small, representative sample. Collect data to evaluate the items' performance.
b. Item Analysis: Perform item analysis to determine which items are functioning well. This may include examining item difficulty, item-total correlations, and response distributions.
4. Item Refinement
a. Refine Items: Based on the pilot test data, refine or eliminate items that do not perform well. This process may involve rewording items, removing ambiguous items, or adding new items.
b. Second Round of Testing: Administer the refined items to another sample, preferably larger than the pilot sample, to further test their performance.
5. Factor Analysis
a. Exploratory Factor Analysis (EFA): Use EFA to identify the underlying factor structure of the items. This helps in understanding how items group together to form dimensions.
b. Confirmatory Factor Analysis (CFA): After establishing a factor structure, use CFA on a different sample to confirm the structure. This step tests the hypothesis that the items fit the proposed model.
6. Reliability and Validity Testing
a. Reliability: Assess the reliability of the tool using measures such as Cronbach’s alpha for internal consistency, test-retest reliability, and inter-rater reliability (if applicable).
b. Validity: Evaluate the validity of the tool through various methods:
- Content Validity: Ensure the items comprehensively cover the construct.
- Construct Validity: Confirm the tool measures the theoretical constructs it claims to measure. This includes convergent and discriminant validity.
- Criterion-related Validity: Assess how well the tool correlates with other established measures of the same construct (concurrent validity) or predicts future outcomes (predictive validity).
7. Standardization
a. Norming: Administer the tool to a large, representative sample to establish normative data. This helps in interpreting individual scores relative to a population.
b. Scoring: Develop a scoring system that is easy to use and interpret. Ensure that the scoring method aligns with the theoretical framework.
8. Finalization and Documentation
a. Final Revisions: Make any final adjustments based on the testing and analysis phases.
b. User Manual: Create a comprehensive manual that includes instructions for administration, scoring, interpretation, and evidence of reliability and validity.
c. Training: Develop training materials for practitioners who will administer the tool.
9. Implementation and Ongoing Evaluation
a. Implementation: Roll out the tool for use in real-world settings.
b. Ongoing Evaluation: Continuously collect data to monitor the tool's performance. Make updates and refinements as necessary based on user feedback and new research findings.
By following these steps, developers can create a psychometric tool that is both scientifically sound and practically useful.
Reference:
Singha, R. (2024). What are the processes involved in developing a new multidimensional psychometric tool? Retrieved from https://www.researchgate.net/post/What_are_the_processes_involved_in_developing_a_new_multidimensional_psychometric_tool
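For step 6a above, Cronbach's alpha can be computed directly from a respondents-by-items score matrix using alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch with hypothetical Likert data:

```python
# Sketch: Cronbach's alpha from a respondents-by-items score matrix,
# using the standard formula
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
# The data below are hypothetical Likert responses (rows = respondents).
import statistics as st

scores = [
    [4, 5, 4, 4],
    [3, 4, 3, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 3, 3, 2],
]

def cronbach_alpha(rows):
    k = len(rows[0])
    items = list(zip(*rows))                      # columns = items
    item_var = sum(st.variance(col) for col in items)
    total_var = st.variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_var / total_var)

print(round(cronbach_alpha(scores), 3))
```

Values above roughly 0.7 are conventionally taken as acceptable internal consistency, though the threshold depends on the tool's purpose.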
I need to know the stages and steps of the stratified random sampling process.
In addition, what random sampling process will best suit collecting data from mobile financial service users and non-users?
Can anybody help me outline a correlational research proposal for the teaching and learning of English as a Second Language (ESL) and Arabic as a Second Language (ASL) in Muslim countries? Can he/she also provide examples of both qualitative and quantitative questionnaires that can be used to collect data for my research?
As I am working on my research, I want to use some scales to collect data from the population.
Respected researchers, scholars, teachers,
This doubt has been in my mind for the past couple of days. I went through several articles and book chapters but couldn't find an answer to it. While deploying a CONCURRENT MULTIPLE BASELINE DESIGN, should I collect data from each participant, for example, during the baseline phase, at precisely the same point in time? Or, is it that it should be on the same day? Is there a stipulation that the data points of each participant should coincide/be at the same moment? If it's so, it would require a team of therapists/researchers to carry out the intervention and collect data, which is going to be difficult in my case. Hence this query. Thank you for sparing your time to go through this.
These days, the use of AI has become ubiquitous, and AI has been used for collecting data as well. Hence, the statistical analysis of such data has become significantly interesting, especially in the absence of a gold standard. I have therefore been working on data extracted from an AI tool in order to estimate its accuracy. In addition, as this data focused on studying a non-native insect pest in India, the next step was prevalence estimation under imperfect accuracy. If any university or research team is interested in this work, contact me via direct message to organise an online seminar.
Dear Colleagues,
Because this is not exactly a technical question, I decided to open this issue as a discussion to comply with the ResearchGate rules. The purpose of this discussion is quite personal: I would like to get in touch with researchers who work with the Biolog EcoPlate technique. I have several years of experience with the technique but still encounter difficulties analysing and interpreting the data, so I need to exchange my current knowledge and expand it further.
Sometimes not all the collected data are suitable for publication, but they are important for the perception of the preciseness of the method and its advantages and disadvantages, and such information can be exchanged in personal communication rather than through formal channels. If you are interested in the topic, you can also send me a personal message at katia_dimitrova@yahoo.com
Kind regards,
Katya Dimitrova
I am planning a study around a sensitive topic where I may personally be acquainted with certain participants. I am also embedded in the empirical context of the research, in that I could also be a potential participant of the study. We are a team of two researchers and are currently wondering how we can collect data from interviews and analyse them without knowing any identifiable information, to the largest possible extent.
Thanks in advance!
Can we conduct an interview and administer a questionnaire to the same participant in a mixed methods study? If yes or no, why? Kindly provide a reference article if any. Thank you.
Hello all,
I am seeking to collect data from students taking the Enterprise module. Kindly suggest methods (survey) that are in use to assess students' intention to become entrepreneurs. Thank you for your assistance.
Thanks
I'm a teacher of EFL (English as a foreign language) in Turkey. I have used a new teaching model in my face-to-face classes this semester, and now I'd like to see how much it has affected my secondary school students' motivation towards English (perhaps along with their attitude towards English lessons, their engagement with English lessons, and any other possible areas of study). What is the easiest way of collecting data for descriptive research on this matter (I don't have time to develop a scale and cannot find one that would perfectly fit my context), and what questions do you think would be better to ask: open-ended, yes/no/undecided, or Likert-scale style? What about reliability and validity issues? Can't I simply come up with some questions by myself and use them?
Thank you so much for taking time to have a look at this. Any sort of answers are highly appreciated 🙏
Yusuf
I have collected ten years of data on recovery (%) for four channels of recovery from NPAs of banks. I want to know if I can use ANOVA to test my hypotheses relating to the recovery (%) of any two channels or of all four channels.
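One-way ANOVA is appropriate for comparing mean recovery (%) across all four channels at once (for just two channels, a t-test is the usual choice, and a significant ANOVA needs post-hoc pairwise tests). As a sketch of what the test computes, here is the F statistic assembled by hand; the channel names are the usual four Indian NPA recovery channels, but the figures are invented for illustration, and p-values should come from a statistics package:

```python
# Sketch: the one-way ANOVA F statistic for k groups (here, hypothetical
# recovery-percentage figures for four channels). The p-value requires the
# F distribution, so use a statistics package for inference; this only
# shows how the statistic is assembled.
import statistics as st

channels = {
    "Lok Adalats": [22.1, 18.5, 20.3, 19.8],
    "DRTs":        [30.2, 28.7, 31.5, 29.9],
    "SARFAESI":    [25.4, 27.1, 24.8, 26.3],
    "IBC":         [41.0, 39.2, 43.5, 40.8],
}

def anova_f(groups):
    data = [x for g in groups for x in g]
    grand = st.mean(data)
    k, n = len(groups), len(data)
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (st.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - st.mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

f_stat = anova_f(list(channels.values()))
print(round(f_stat, 2))
```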
Hello RG Family! In my transition to qualitative research, I’m confronted with the challenge of validating qualitative interviews.
From my knowledge of quantitative research, I’m well aware that Principal Component Analysis and Cronbach’s Alpha methods are popular for validity and reliability of Likert-scaled questionnaires. But in the case of qualitative interviews, the arena is different. That’s why I need your help.
From your wealth of experience with qualitative research, please describe the most effective methods for carrying out validity and reliability of qualitative interviews. And which software is suitable for this procedure?
Your contributions will be immensely appreciated. Thank you.

Is it usual to use closed-ended questions in a Google sheet before carrying out semi-structured interviews for collecting data in a phenomenological study?
If so, how can it be analyzed, and which comes first?
Can you please suggest such studies, if there are any?
I'm a new student and I need ideas that will help me in the future in Hematology or Blood Transfusion, but I need them in a lab field so I can collect data easily.
As we know, a massive amount of data is available online, which can sometimes be reachable, but determining the truth behind the actual research is very difficult, which leads to misinformation for some researchers. A survey is a tool to collect information from specialists and professionals in order to connect to the reality of the data. In addition, it will help us improve the research information by using primary collected data. However, why do researchers post surveys to collect data and then not receive any answers for years?
Can artificial intelligence help optimize remote communication and information flow in a corporation, in a large company characterized by a multi-level, complex organizational structure?
Are there any examples of artificial intelligence applications in this area of large company operations?
In large corporations characterized by a complex, multi-level organizational structure, the flow of information can be difficult. New ICT and Industry 4.0 information technologies are proving helpful in this regard, improving the efficiency of the flow of information between departments and divisions in the corporation. One of the Industry 4.0 technologies that has recently found various new applications is artificial intelligence. The implementation of artificial intelligence, machine learning, and other Industry 4.0 technologies into the various business fields of companies, enterprises, and financial institutions is associated with the increase in digitization and automation of the processes carried out in business entities. For several decades, in order to refine and improve the flow of information in a corporation with a complex organizational structure, integrated information systems have been implemented that connect the applications and programs operating within specific departments, divisions, plants, etc. of a large enterprise, company, or corporation. Nowadays, a technology that can help optimize remote communication and information flow in a corporation is artificial intelligence, including the flow of information and data transfer within a corporation's intranet.
Besides, Industry 4.0 technologies, including artificial intelligence, can help improve cyber-security techniques for data transfer, including that carried out in email communications.
In view of the above, I address the following question to the esteemed community of researchers and scientists:
Can artificial intelligence help optimize remote communication and information flow in a corporation, in a large company characterized by a multi-level, complex organizational structure?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz

What are the best steps to conduct a cross-sectional study? I'm planning to collect data using an electronic questionnaire.
I collected data using a constant sum scale (forced choice scale) and I need guidance on how to analyse the data.
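With constant-sum data, where each respondent allocates a fixed total (say 100 points) across the options, one common first step is to verify the allocations and compare the mean points given to each option. A minimal Python sketch, with invented attribute names and responses:

```python
from statistics import mean

# Hypothetical constant-sum data: each respondent allocated 100 points
# across four attributes (attribute names are invented for illustration).
responses = [
    {"price": 40, "quality": 30, "brand": 20, "service": 10},
    {"price": 25, "quality": 45, "brand": 10, "service": 20},
    {"price": 50, "quality": 25, "brand": 15, "service": 10},
]

# Sanity check: every allocation should sum to the fixed total (100).
assert all(sum(r.values()) == 100 for r in responses)

# Mean allocation per attribute, ranked from most to least important.
attributes = responses[0].keys()
mean_alloc = {a: mean(r[a] for r in responses) for a in attributes}
ranking = sorted(mean_alloc, key=mean_alloc.get, reverse=True)
print(mean_alloc)
print(ranking)
```

Because the allocations are compositional (they sum to a constant), the usual caveat is that the scores are not independent, which matters for any inferential tests applied afterwards.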
Good afternoon,
I am thinking about how to present the data cleaning stage of my research project. I am hesitating between two formats: (1) a summary table (potentially complex, because it covers 16 different datasets) accompanied by a paragraph describing the overall steps, e.g., n rows removed due to missing values or duplicates, spelling corrected for n rows; or (2) a list of items for each issue encountered or check performed, e.g., days of the week were inspected to verify that all data were recorded on school days, which may appear redundant to the reader. In the article format this stage is usually not developed much, except in a supplementary data appendix, due to length constraints. I want to develop this section in my thesis report, but I am not sure which format is clearest, most concise and most interesting for readers. As a reader, would you prefer a table, descriptive paragraphs, or more visual elements such as charts to understand how a research team cleaned its data?
Thanks in advance for all your feedback on the data cleaning presentation.
I am completing an evaluation using pre-collected data from a training group that has been running for a few years. The data were collected with a questionnaire that allows short to medium-length answers.
I am unsure whether thematic analysis is the best way forward and, if so, whether there is a particular way I should carry it out.
Thank you in advance
I have collected survey data on a 5-point Likert scale, where 1 = Strongly Disagree and 5 = Strongly Agree. I have 20 questions in total across 4 constructs: economic, psychological, political and social. Of the 20 questions, around 8 are negatively worded statements. After entering all the data into an SPSS file, I reverse-coded those 8 negatively worded statements. Now I am confused about the next step: should I also rewrite these 8 negative statements as positive statements? If so, what should the sentence format be?
For example,
negative statement: I am a bad student.
positive statement: I am not a bad student. (option 1)
or: I am a good student. (option 2)
If I have to change the sentences, which option would be appropriate?
Thanks and regards.
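For reference, the reverse-coding arithmetic mentioned above can be sketched as follows: on a 1-5 scale the reversed score is 6 minus the raw score (this is a generic illustration in Python, not SPSS syntax):

```python
# Reverse-scoring on a Likert scale: reversed = (max + min) - score,
# i.e. 6 - score for a 1-5 scale. Only the numeric codes are flipped,
# so that a higher number always means a more positive attitude.
def reverse_score(score, low=1, high=5):
    if not low <= score <= high:
        raise ValueError(f"score {score} outside {low}-{high}")
    return (high + low) - score

raw = [1, 2, 3, 4, 5]  # answers to a negatively worded item
print([reverse_score(s) for s in raw])  # -> [5, 4, 3, 2, 1]
```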
Dear Academics/Experts,
I am writing to seek your guidance and validation for the questionnaire that I have developed as a crucial component of my PhD thesis research in management.
I kindly request your expertise in assessing and validating the questionnaire I have designed, intended to collect data for my research study, and I am committed to ensuring its validity and reliability. Your valuable insights and feedback will significantly contribute to the robustness of the data collected and enhance the overall quality of my research.
Thank you for your support and guidance.
Sincerely,
Taher Ali Othman
Contact Information: 0060146731995
Hi All,
I'm an Assistant Professor from HKBU (Zhuhai campus) who is looking for one or two collaborators to work on a piece related to environmental risks (e.g., the release of radioactive water). My research mainly focuses on consumer information behavior.
Since the study is intended to be cross-cultural, I'd like to invite a scholar fluent in Korean with the resources to gather data from Korea. Ideally, I'd also like to ask another scholar who is currently collecting, or has the resources to collect, data from the States. Of course, you can be both.
Also, it'd be fantastic if you are familiar with SEM.
I plan to finish the project by the end of this year, so I hope we can work efficiently. If you are interested in this opportunity, please don't hesitate to message me via ResearchGate or xiaoshanli@uic.edu.hk.
Thank you, and I look forward to hearing from you!
X -> Y: X negatively affecting Y.
Y -> Z: Y negatively affecting Z.
X -> Z: X positively affecting Z.
Y as a mediator.
I have completed collecting data and uploaded it to PLS-SEM software. During analysis, all of the path coefficients became positive. Should I continue to the next step or not?
I have a question about estimating sample size. After obtaining the theoretical sample size, the response rate and the invalid response rate must be taken into account when collecting data in a study based on an online survey. So, how can the invalid response rate be determined before collecting data?
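One common approach, assuming the two rates can be estimated from a pilot study or from comparable published surveys, is to inflate the theoretical sample size by dividing by the product of the expected response rate and the expected valid-response rate. A sketch with assumed rates:

```python
import math

# Inflating a theoretical sample size for anticipated non-response and
# invalid responses. The rates below are assumptions for illustration;
# in practice they usually come from a pilot study or similar surveys.
def required_invitations(n_theoretical, response_rate, valid_rate):
    return math.ceil(n_theoretical / (response_rate * valid_rate))

# e.g. 300 valid cases needed, expecting a 60% response rate
# and 90% of returned questionnaires to be valid
print(required_invitations(300, 0.60, 0.90))  # -> 556
```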
Qualitative research involves non-random sampling techniques and qualitative data-collection methods such as in-depth interviews. Can I use a mixed form here when collecting data, for example, to gather demographic data on the sample? Is demographic data not crucial in qualitative research?
For example, I want to capture on Facebook all pages or accounts that, three months ago, communicated about a particular event, say an earthquake.
Where can we collect data on swarm intelligence algorithms and optimisation work, and what software is used to simulate them?
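Any general-purpose language can simulate these algorithms. As a minimal illustration, here is a plain-Python particle swarm optimisation (PSO) sketch minimising the sphere function; the parameter values (w, c1, c2, swarm size) are common textbook defaults, not tuned recommendations:

```python
import random

# Minimal PSO minimising the sphere function f(x) = sum(x_i^2).
def sphere(x):
    return sum(v * v for v in x)

def pso(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's best position
    pbest_val = [f(p) for p in pos]
    gbest = min(pbest, key=f)[:]         # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # velocity update: inertia + cognitive + social terms
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < f(gbest):
                    gbest = pos[i][:]
    return gbest, f(gbest)

best, best_val = pso(sphere)
print(best_val)  # should be close to 0
```

For benchmarking, researchers typically test such implementations on standard suites of optimisation test functions rather than on collected field data.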
In the case of standing in public and attempting to stop passers-by to complete a short questionnaire, what is your basic approach? In a previous life I was a commission salesman, so I know how important it is to make the first utterance out of your mouth the right one if you want people to give you even two seconds of their time. So what tends to work best for collecting data for research? A yes-no question? An open-ended question? A statement about the weather? Asking whether they have a couple of minutes versus avoiding mentioning time? Here are some of my attempts, but it's hard to gauge which works best from the few tries I've made:
- Excuse me, do you have a moment to talk (about ...)?
- Excuse me, what do you think about ...?
- How's it going? Hot today, isn't it?
- Hi, can you spare three minutes for ...?
(For specifics on my current survey, it's a bunch of Likert scale questions aimed at getting opinions about bilingual signage in a small town. I'm standing in front of grocery stores to encounter people as there really aren't any other places with regular foot traffic.)
Suppose a researcher has a hypothesis such as "There is no significant difference in the perception of Ph.D., Master's and Bachelor's students in higher education towards internet usage", develops a Likert scale, collects data using a Google Form sent to various WhatsApp groups through a circle of friends and colleagues, and receives one hundred and fifty responses. Can this procedure meet the assumptions of ANOVA for testing the above null hypothesis?
Another thing: nowadays, online data collection has become very popular due to advances in technology, and Google Forms is very common and useful. So what kind of sampling procedure does it follow when we share a form person to person and collect data? Is it scientific to use inferential statistics on online-collected data, given that it may violate the assumption of random sampling?
I request experts, experienced people and scholars to share their experiences related to this query.
Thank you
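For what it's worth, the mechanics of one-way ANOVA itself are straightforward; the sketch below computes the F statistic from first principles on made-up perception scores for three groups. It illustrates only the computation, not whether a convenience sample satisfies ANOVA's independence and sampling assumptions:

```python
from statistics import mean

# One-way ANOVA F statistic from first principles, on invented scores.
groups = {
    "phd":       [4, 5, 4, 3, 5],
    "masters":   [3, 4, 3, 4, 3],
    "bachelors": [2, 3, 3, 2, 4],
}
all_scores = [s for g in groups.values() for s in g]
grand = mean(all_scores)
k, n = len(groups), len(all_scores)

# Between-groups and within-groups sums of squares.
ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups.values())
ss_within = sum((s - mean(g)) ** 2 for g in groups.values() for s in g)

# F = mean square between / mean square within.
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(round(f_stat, 3))  # -> 4.353
```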
In research, survey-type studies are very popular and widely used. But I sometimes feel that collecting data with closed-ended questionnaires may elicit socially desirable responses. For example, if someone is conducting research on "The Current Status of Professional Development of Secondary Teachers" and asks questions about activities such as "Are you enrolling in refresher courses?" or "Are you participating in seminars, workshops or faculty development programmes?",
then the respondent may answer positively regardless of whether he or she actually does these activities (this kind of socially desirable responding does happen). It greatly affects the authenticity of the data we obtain from our sample, so what we report for the development of society may be wrong, and we end up providing, to some extent, wrong data. How can we mitigate, limit or reduce this effect?
I need a new tool to collect data on the opinions of a sample of customers about a particular product. I would like to hear your views and experiences.
I am using an OpenBCI 21-channel cap with Cyton and Daisy boards. For my PhD thesis, I am collecting data for emotion classification using random visual stimuli, and I have developed MATLAB software to separate and label the collected data for classification. My problem is that the data I am getting from the OpenBCI device is too noisy. I have set everything in the GUI (smoothing on, notch filter at 50 Hz, bandpass 5-50 Hz). All channels are NOT RAILED, and impedance is green on all channels; the values are high, but the channels are green as per the vendor's recommendations.
Can anyone advise what to do? I am also attaching images of 2 minutes of eyes-closed data. Applying filters in the MATLAB code makes no difference.
This paper (https://ieeexplore.ieee.org/abstract/document/9862906) talks about cancelling the gain of the differential amplifier. Can anyone tell me whether we subtract the differential gain using this formula:
% DC offset removal: subtract each channel's mean from every sample
for i = 1:16
    final(:, i) = data(:, i) - mean(data(:, i));
end
If a researcher has collected data using a Google Form from different universities scattered across the country, how can he/she describe the demographic structure of the sample, and what type of sampling procedure should he/she report?
I'm carrying out a longitudinal study.
For the analysis, we collect data several times for each variable on the same patient.
I use SPSS for the statistical analysis, and I am finding it difficult to work out how to enter the data for each patient.
Any suggestions to resolve this problem, please?
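One way to think about the data-entry problem is the wide-versus-long distinction: repeated measures can be stored either as one row per patient with one column per time point ("wide"), or as one row per patient-time pair ("long"). A minimal Python sketch of the restructuring, with invented variable names (SPSS offers a similar operation through its Restructure Data Wizard under the Data menu):

```python
# "Wide" layout: one row per patient, one column per measurement occasion.
wide = [
    {"patient_id": 1, "bp_t1": 140, "bp_t2": 135, "bp_t3": 130},
    {"patient_id": 2, "bp_t1": 150, "bp_t2": 148, "bp_t3": 141},
]

# "Long" layout: one row per patient-time pair, which many repeated-measures
# procedures (e.g. mixed models) expect.
long_rows = [
    {"patient_id": row["patient_id"], "time": t, "bp": row[f"bp_t{t}"]}
    for row in wide
    for t in (1, 2, 3)
]
for r in long_rows:
    print(r)
```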
I am trying to carry out a simulation of the HTL process, and I am hoping to find fluid properties based on experimental data, including heat of reaction, the rheology of the non-Newtonian behavior, etc.
Alternatively, actual collected data that I could use to feed my simulation.
Thank you.
Arnold Keller
I read rings on birds at a landfill over two winters. By reading the codes on the rings, I obtained the birds' ages from their life histories. For each species, I'd like to compare the mean age between the first winter and the second winter, to look for a significant difference. Which test should I use? And should I include the rings that I read in both winters?
Here is an example graph; the p-values and data are invented. The x-axis shows species names, the y-axis the ages, and the colours indicate the two winters in which I collected data.
Thanks in advance!
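As an illustration of one common nonparametric option for comparing two independent samples of ages, here is the Mann-Whitney U statistic computed directly from its definition, on invented ages. (Whether the two winters can be treated as independent, given rings read in both, is a separate question for the analysis design.)

```python
# Mann-Whitney U statistic: the number of (x, y) pairs with x < y,
# counting ties as 0.5. Ages below are invented for illustration.
def mann_whitney_u(xs, ys):
    u = 0.0
    for x in xs:
        for y in ys:
            if x < y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

winter1 = [1, 2, 2, 3, 5]
winter2 = [2, 3, 4, 4, 6]
u = mann_whitney_u(winter1, winter2)
print(u)  # out of len(winter1) * len(winter2) = 25 pairs
```

In practice the p-value would come from a statistics package (e.g. `scipy.stats.mannwhitneyu` in Python or `wilcox.test` in R) rather than from this hand-rolled count.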

Metabolomics has emerged as an invaluable tool for prognostic and diagnostic purposes, the last in the cascade of the other omics: genomics, transcriptomics and proteomics. Omics training usually covers experiment design, data generation and collection, data preparation, data analysis and, last but not least, data interpretation.
At the end of this meticulous path, which consumes energy, time and money, it would be nonsense to fail to put your results into their broader biological context.
For those like me who have never been trained to interpret metabolomics data, how can we make sure we do not miss important points? Is basic knowledge of biochemistry, physiology or the pathophysiology of the disease of interest enough to harness the full potential of metabolomics technologies for biomarker screening, among other applications?
I would like to discuss with the experts out there the most important assets for a correct and successful interpretation of metabolomics data.
Thank you for sharing your experience of the metabolomics journey as well.
Hi. Has anyone used an online workforce to collect data from adolescents? Something akin to Prime Panel, Prolific, etc. but with adolescent respondents? Any recommendations will be appreciated!
Suppose I have collected data on customer churn rates and various customer attributes. How can I use regression to predict the likelihood of customer churn based on these attributes?
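As a sketch of the idea: logistic regression is the usual regression model for a binary outcome such as churn. The example below fits one by plain gradient descent on made-up, pre-scaled attributes; a real analysis would normally use a library such as scikit-learn or statsmodels instead.

```python
import math

# Invented churn data: features are [monthly_charges_scaled, tenure_scaled],
# label 1 = churned. Names and values are assumptions for illustration.
X = [[0.9, 0.1], [0.8, 0.2], [0.7, 0.1], [0.2, 0.9], [0.1, 0.8], [0.3, 0.7]]
y = [1, 1, 1, 0, 0, 0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_proba(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Stochastic gradient descent on the log-loss.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(2000):
    for xi, yi in zip(X, y):
        err = predict_proba(w, b, xi) - yi  # gradient term of the log-loss
        for j in range(len(w)):
            w[j] -= lr * err * xi[j]
        b -= lr * err

probs = [round(predict_proba(w, b, xi), 2) for xi in X]
print(probs)  # high probabilities for churners, low for the rest
```

The fitted probabilities are then thresholded (e.g. at 0.5) or used directly as churn-risk scores for ranking customers.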
Dear colleagues, I am doing a study on the role that Ubuntu can play in research, especially in the data collection process. Studies have shown that the response rates from Africa are higher than in other areas. Our study aims to show a connection between Ubuntu and the willingness of people to participate in research projects.
If you have collected data from any region, please consider participating in this research. The survey will take less than 10 minutes to complete (15 questions only). Please see the link below, and thank you in advance.
Hello everyone. I'm a Brazilian student in the Master's programme in Teaching, Learning and Media Education at Tampere University, Finland. I am investigating teachers' and education professionals' curiosity about applying board games in education, and the potential benefits of this strategy. I am collecting data on state-related curiosity and educators' perceptions of the benefits of using board games in educational settings. I would much appreciate your help in providing data to make my research more meaningful.
To take part in this study, just click the link below and fill in the questionnaire. The approximate time to answer is 5 minutes. Thank you! Obrigado!
I'm trying to build a webpage that collects data from a person more efficiently than pen-and-paper methods. What should I do? Is there a way to do this without having a sample, or should I just do developer testing?
Hi, I've collected data from a strain of E. coli that has been transformed several times to produce different proteins. The strains were then exposed to a growth-limiting factor. I intend to compare the strains' abilities by determining the time it takes each strain to pass the exponential-growth-phase threshold. How do I calculate this using OD absorbance?
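Under the usual assumption of exponential growth, OD(t) = OD0 * exp(mu * t), so ln(OD) is linear in time: a least-squares line through log-transformed OD readings from the exponential phase gives the growth rate (slope), and the threshold-crossing time follows by inverting the line. A sketch with invented OD600 readings and an assumed threshold:

```python
import math

# Invented OD600 readings from the exponential phase; the 0.5 threshold
# is likewise an assumption for illustration.
times = [0, 30, 60, 90, 120]            # minutes
ods   = [0.05, 0.08, 0.13, 0.21, 0.34]  # OD600

# Least-squares fit of ln(OD) = intercept + slope * t.
logs = [math.log(od) for od in ods]
n = len(times)
t_mean = sum(times) / n
l_mean = sum(logs) / n
slope = (sum((t - t_mean) * (l - l_mean) for t, l in zip(times, logs))
         / sum((t - t_mean) ** 2 for t in times))
intercept = l_mean - slope * t_mean

# Time at which the fitted curve crosses the threshold OD.
threshold = 0.5
t_cross = (math.log(threshold) - intercept) / slope
print(round(t_cross, 1), "minutes to reach OD", threshold)
```

The slope here is the specific growth rate mu; ln(2)/mu gives the doubling time, which is another common way to compare strains.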
I need to collect data but minimize contact between interviewer and interviewee
To give some context: I have a background in acoustics but am now conducting research in biomedical engineering. I'm guessing the practices and requirements when measuring human participants are different, so I thought I would ask the biomedical community for recommendations on doing it right.
I will be measuring and collecting heart-rate (HR) and heart-rate-variability (HRV) data from participants while they are exposed to sound. I am ensuring that everything is safe, and I am in the process of obtaining ethics approval for these experiments. No personal information will be acquired from the participants other than gender and age.
Still, what considerations should I keep in mind when acquiring this sort of data, given that I would like to publish the results? Number of participants? What sort of pre-screening? The way the data are presented? Statistical approaches for analyzing the data?
Thank you in advance!
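On the HRV side, one concrete example of how such data are often summarised is RMSSD (root mean square of successive differences), a standard time-domain HRV measure computed from RR intervals. A minimal sketch with invented intervals:

```python
import math

# RMSSD from RR intervals in milliseconds: the root mean square of the
# differences between successive intervals. Interval values are invented.
def rmssd(rr_ms):
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [812, 790, 804, 831, 796, 810]
print(round(rmssd(rr), 1), "ms")
```

Reporting conventions for such measures (recording length, artifact handling, breathing control) are covered in published HRV measurement guidelines, which are worth consulting before the study design is fixed.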
I am doing qualitative research with the primary purpose of developing a marketing model. For this, I have collected data via semi-structured interviews. How should I proceed now with the analysis? Can anyone recommend any relevant work?