Observation - Science topic
The act of regarding attentively and studying facts and occurrences, gathering data through analyzing, measuring, and drawing conclusions, with the purpose of applying the observed information to theoretical assumptions. Observation as a scientific method in the acquisition of knowledge began in classical antiquity; in modern science and medicine its greatest application is facilitated by modern technology. Observation is one of the components of the research process.
Questions related to Observation
Fellow Researchers, I need some wisdom.
I am currently performing an experiment involving Caco-2 cells seeded on a Transwell. I don't have experience with such assays, so please forgive me if my questions are somewhat ignorant. I've searched ResearchGate and found some answers in this discussion:
Can transwell-cultured Caco-2 monolayers be imaged with BD Pathway confocal microscopy while keeping them in transwell? (researchgate.net)
However, I wonder: is it possible to observe the Transwells while they are on a plastic plate, without having to transfer them to a glass dish? Additionally, I use the type of Transwells that stand on the bottom of the well rather than hanging from its rim, so I am not sure whether that solution would work in my case. My idea was: if I used a non-inverted microscope, could I perhaps see my cells in the Transwell? I understand that after seeding my cells need 21 days to differentiate and form a monolayer, but how can I check that they have in fact attached to the Transwell and are growing if I have no way of seeing them? Is staining with a viability dye the only way? I am afraid of a scenario in which I wait 21 days for my cells to differentiate when in fact they never even attached.
[ These criticisms may apply more to studies in the behavioral sciences, those being the ones I know about. ]
There is a big tendency for researchers to do research that [supposedly] TRIES to "build on" previous research. AND, there is a belief that such studies will lead to better understanding of (/definition-of) core concepts in a "field". AND, ALSO, other even less related (less concretely or physically interrelated) studies, such as interdisciplinary studies, are believed to lead to better understanding as well.
I believe neither of these is necessarily the case or even likely true (and, to a notable extent, never true with some research as it is). I believe it is more often NOT true that progress is being made these ways, since the unit of analysis and its aspects are not clear, OR real (proven) developed relations have not been found. Given the present research ways (many having long, numerous historical/philosophical roots), I believe that more often than yielding good, real desired results (from findings), the results will NOT be interpretable in any reliable or valid way. This is an area where some good scientific analytic philosophers could be of help (thus the reason for the existence of this discussion question).
My view is that if you do not have well-guided/well-justified and WELL-related studies, specifically, with all phenomena involved or of present interest RELATING AS MUCH AS POSSIBLE TO DIRECT OBSERVATIONS OF essentially FOR-SURE FOUNDATIONAL OVERT PHENOMENA __AND__/__OR__ a clear case or clear reflection of such actual phenomena (and, here too, CONCRETE LINKS at some time were shown and INVOLVED), then you are "off-track". Such is needed for science advancement ITSELF (<-- this being key to science and a MAJOR indication OF REAL SCIENCE itself). [ (In Psychology, the subject and aims of studies and findings should be to illuminate KEY behavior PATTERNS, by clearly relating all of them to directly observable overt behavior patterns that ARE reliably and validly seen (with clear concrete foundations) OR to such "things" THAT WERE (and, ideally: have been) once so clearly and reliably seen during development (i.e. ontogeny). (Yet notice: STILL there is plenty of latitude left for many types of concepts to be involved in proposed explanations, given development and the demonstrated possibilities of the huge capacities of the Memories.) ]
I was required to find a research problem (or problem statement) regarding the student-teacher peer observation method (a method of observing, analyzing, and evaluating teaching in the classroom).
Do you have any ideas?
Can you provide me with appropriate references that can help me read and learn more about this topic?
This morning (December 2, 2019) at about 6 UTC+1, at a geographic location of about 52° N 13° E, I saw a chain of 12 or more satellites in two groups (5 or more + 7), flying with equal spacing within each group from west to east over the zenith (i.e. far from the equator). Does anyone know this constellation of satellites, or a website that can be searched for orbits of satellites visible to the naked eye?
I have a regression problem with two objective variables, or outputs (named E & r). I made a model for each objective separately, using Gaussian process regression.
I obtained predictions for both objectives, as can be seen in the attached images (error bars show the standard deviation).
The titles of the plots show the R2 & RMSE of the predictions. There is a categorical variable in the dataset (Mixer) which has two values (50L, 2400L), shown by different colors on the plot.
Next, I calculated R2 & RMSE separately for each Mixer (shown in the legend in the attached images).
As you can see, for objective "E", the RMSE of Mixer 2400L (blue) is less than the RMSE of Mixer 50L (orange). But its R2 is very low, which is surprising to me. I expected that when RMSE is lower, R2 should be higher.
And for objective "r", the RMSEs of both Mixers are almost the same, but the R2 of Mixer 2400L is much lower.
I have only one assumption about this phenomenon: the low R2 is due to the smaller number of samples for Mixer 2400L.
No. of Observations:
- Total : 119
- Mixer 50L : 106
- Mixer 2400L : 13
If you have any ideas, please let me know.
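One reason a lower RMSE can coincide with a much lower R2 is that R2 for a subgroup is computed against that subgroup's own mean: if the 2400L targets cluster tightly, their total variance is small, so even modest errors push R2 toward zero or below. A minimal pure-Python sketch with made-up numbers (not your data) to illustrate the effect:

```python
from math import sqrt

def rmse(y_true, y_pred):
    return sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r2(y_true, y_pred):
    # R^2 compares prediction errors to the variance of y_true around its OWN mean
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# "Mixer 50L": targets spread widely -> large total variance
y50_true = [1.0, 3.0, 5.0, 7.0, 9.0]
y50_pred = [1.5, 2.5, 5.5, 7.5, 8.5]   # RMSE = 0.5, but R^2 is high

# "Mixer 2400L": targets clustered tightly -> tiny total variance
y24_true = [5.0, 5.1, 5.2, 5.3]
y24_pred = [5.2, 4.9, 5.5, 5.1]        # smaller RMSE, yet R^2 is negative
```

So the small sample size of the 2400L mixer may contribute, but the narrower spread of its target values alone is enough to produce this pattern.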
I recently posted an observation on this question: why does most research use wrought alloys when researching cast Al-MMCs? Most of the papers I see coming to IJMC on Al-MMCs use wrought alloys as the host metal. The common ones are: 6061, 6063, 7045, 7075, and the 2000 series. These are very difficult to cast, which is why most of the researchers use very simple shapes just to get test coupons. They also usually conduct only limited mechanical tests and focus on wear testing, which is of limited value to design engineers. These are mostly academic studies, and they tend to cite each other's work. I understand trying to limit compositional interactions, but this approach doesn't develop alloys that can be cast into useful commercial shapes. The early Al-MMCs were based upon A356 (Al-Si) and also Al-Si-Cu (like 319 and 383) as a means to improve stiffness (modulus) and high-temperature properties. There were collaborative efforts to see what material was needed to solve problems and whether it worked.
Prof. Rohatgi gave an AFS Golden Anniversary Lecture in 2019 that was published in the IJMC:
Ajay Kumar, P., Rohatgi, P. & Weiss, D. 50 Years of Foundry-Produced Metal Matrix Composites and Future Opportunities. Inter Metalcast 14, 291–317 (2020). https://doi.org/10.1007/s40962-019-00375-4
See the online version: https://rdcu.be/cB8IH
Unfortunately, this trend of using wrought alloys as the base metal is why a significant number of papers get transferred to other, more materials-oriented journals or to Open Access. Very few examine whether the MMCs developed can be cast into complex shapes and whether the material can be re-processed. This is a BIG PROBLEM!!!! Our universities and researchers must address this issue if we want to advance the use of materials that can see practical applications. Don't just seek solutions in search of a problem.
I have monthly data, and I found seasonality in it. I want to apply SARIMA, but I am not sure whether I can use SARIMA on only 28 observations. Kindly share any reference on whether SARIMA can be applied to 28 monthly observations.
I have secondary data on which I am to carry out a spatial price transmission analysis for my master's thesis. I have 33 missing observations in the urban series and 23 missing observations in the rural series, out of a total of 100 weekly observations. I planned to use a VAR model, but the large number of missing observations compared to the total data makes me a little confused; even if I interpolate, I am afraid the research might not really make sense.
Has anyone experienced a similar challenge? If so, what did you do?
What do you suggest I do with the missing observations?
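If you do decide to fill the interior gaps before fitting the VAR, linear interpolation is the simplest option (though with a third of one series missing, any interpolation should be reported as a limitation). A minimal pure-Python sketch, with hypothetical weekly values:

```python
def interpolate_missing(series):
    """Linearly interpolate interior None gaps in a list of weekly values.
    Leading/trailing gaps are left as None (they cannot be interpolated)."""
    out = list(series)
    n = len(out)
    i = 0
    while i < n:
        if out[i] is None:
            start = i - 1                 # last known index (may be -1)
            j = i
            while j < n and out[j] is None:
                j += 1                    # find end of the gap
            if start >= 0 and j < n:      # interior gap: fill linearly
                lo, hi = out[start], out[j]
                span = j - start
                for k in range(i, j):
                    out[k] = lo + (hi - lo) * (k - start) / span
            i = j
        else:
            i += 1
    return out
```

For example, `interpolate_missing([10.0, None, None, 16.0])` fills the two gaps evenly between the known endpoints.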
Suppose we have a system designed to deliver services to customers arriving during weekdays. The arrival process is modeled as a Poisson process with arrival rate λ, and we use agent-based modeling with NetLogo to study the behavior of customers. After multiple observations and replications of the model, the first 8 hours were selected as the warm-up period and the remaining time as the steady state. If we consider the average length of stay (ALOS) as the crucial output, how should we handle the initialization bias in this case?
As a workaround, is removing those data sufficient if we take into account the effects of the warm-up period on the ALOS?
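Warm-up deletion usually does mean discarding the customers who arrived during the warm-up window and computing ALOS only over the steady-state remainder (with the warm-up length itself validated across replications, e.g. via Welch's method). A minimal sketch with hypothetical records, each a pair of (arrival time in hours, length of stay in hours):

```python
# Assumed 8-hour warm-up, matching the setup described above.
WARMUP_H = 8.0

def steady_state_alos(records, warmup=WARMUP_H):
    """Drop customers who arrived during the warm-up period, then
    average the length of stay (ALOS) of the remaining customers."""
    kept = [los for arrival, los in records if arrival >= warmup]
    if not kept:
        raise ValueError("no observations survive the warm-up truncation")
    return sum(kept) / len(kept)

# Hypothetical (arrival_time_h, los_h) observations from one replication
records = [(1.0, 0.4), (5.0, 0.6), (9.0, 1.2), (12.0, 1.0), (15.0, 1.4)]
```

Note how including the warm-up customers (arrivals at 1.0 and 5.0, with short stays while the system was still empty) would bias the ALOS downward; truncation removes exactly that bias.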
I have a dataset of 850 subjects and 6 potential raters. Each subject can be classified into 4 nominal categories. My idea is to split the dataset equally among the raters but holding out a subset for an initial calculation of agreement.
I wonder what the size of this subset should be, so that all raters can evaluate it and agreement can be calculated. From what I have read, the proper calculation would be Light's kappa (since there are more than two raters with a fully-crossed design [Hallgren, 2012]).
Is there any software package that can help me estimate the size of that subset?
Hallgren, K. A. (2012). Computing Inter-Rater Reliability for Observational Data: An Overview and Tutorial. Tutorials in Quantitative Methods for Psychology, 8(1), 23–34.
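Light's kappa is simply the mean of Cohen's kappa over all rater pairs, so once the calibration subset has been rated by everyone it is only a few lines of code. A minimal pure-Python sketch (assuming complete, fully-crossed ratings; nominal categories as strings):

```python
from itertools import combinations

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' nominal labels (equal-length lists)."""
    n = len(a)
    cats = sorted(set(a) | set(b))
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    pe = sum((a.count(c) / n) * (b.count(c) / n)        # chance agreement
             for c in cats)
    return (po - pe) / (1 - pe)

def lights_kappa(ratings):
    """Light's kappa: mean of Cohen's kappa over all rater pairs.
    `ratings` is a list of per-rater label lists (fully crossed design)."""
    pairs = list(combinations(ratings, 2))
    return sum(cohens_kappa(a, b) for a, b in pairs) / len(pairs)
```

For the subset size itself there is no single formula, but this makes it easy to simulate: generate hypothetical ratings at your expected agreement level for various subset sizes and check how stable the resulting kappa is.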
My question is:
Q. Is it sensible/possible to use Quantile Autoregressive Distributed Lag (QARDL) approach to assess the association among the variables, when there are only 27 observations?
Some of the common assumptions for parametric tests include normality, randomness, absence of outliers, homogeneity of variances, independence of observations, and linearity.
Is random sampling (a data collection method) included in this list or not?
Please reply with references.
I am currently working on an ed-tech project where we are evaluating an Android-based learning application. We are working with primary students and are in the process of designing an observational tool for measuring student engagement. We aim to measure behavioral, cognitive, and emotional engagement, but have not been able to find any observational tool that measures all three dimensions.
Can anyone suggest observational tools that have been used in Pakistan, India, or developed countries to measure student engagement in primary grades?
Thanks in advance.
Is there any reading resource available as a reference for conducting research trials in the field? I am looking for step-by-step guidance, for example: 1. site selection, 2. soil sampling (composite or other), 3. experimental design (RCBD, factorial, or other), 4. observations (in a plant pathology trial), 5. appropriate analysis, and finally conclusions.
I have a big dataset which contains 4,787 observations and almost 100 variables. The questionnaire has some nested questions: for example, respondents are asked to answer Q2 only if they answered YES to Q1, and Q8 is answered only by those who answered YES to Q4. In this way the data shrink and the missing values increase. So, how do I handle this kind of missing data in R, which is systematically missing, not user-missing?
Firstly, if I delete all observations with NA, I lose 75% of the data, including good data points.
Secondly, the mice package in R is for user-missing data (situations in which the respondent failed to answer the question).
Kindly help in this regard.
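Whatever imputation you choose, one pragmatic first step is to separate structural (skip-pattern) missingness from genuine nonresponse, since only the latter is a candidate for imputation; structurally skipped answers are "not applicable", not missing. A minimal sketch of that classification in Python (the Q1/Q2 names mirror the description above; the gating logic is an illustrative assumption):

```python
def classify_missing(record, gate="Q1", nested="Q2"):
    """Classify a nested question's value for one respondent.
    Returns 'not_applicable' when the nested question was legitimately
    skipped (gate answered NO), 'missing' when it should have been
    answered but wasn't, and 'observed' otherwise."""
    if record.get(nested) is not None:
        return "observed"
    if record.get(gate) == "NO":
        return "not_applicable"   # structural: do not impute
    return "missing"              # genuine nonresponse: candidate for imputation
```

Recoding "not_applicable" cases to an explicit category (rather than NA) before running any imputation routine avoids both the 75% listwise deletion and imputing values that logically cannot exist.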
Locally derived surveillance data can track resistance patterns and improve understanding of the burden of AMR on patients there.
Observations from mined resources now show that the Pfizer Inc.-sponsored Antimicrobial Testing Leadership and Surveillance (ATLAS) is an online platform that provides widespread access to data on both emerging bacterial and fungal resistance patterns.
This supports public health and, invariably, steps that back up health promotion.
Much of this is quoted from elsewhere, but I think it deserves its own thread:
Kuhn, whom I have always seen as having only a partial (that is: just a "some-parts") understanding of a paradigm, still seems at least in the direction of being correct in some noteworthy ways. According to Kuhn: an immature science is preparadigmatic -- that is, it is still in its natural-history phase of competing schools. Slowly, a science matures and becomes paradigmatic. (End of short summary of some of his views.) [ It will be clear I do not fully agree with these views, in particular: the " 'natural' history" part. ]
I would say that preparadigmatic is not yet science at all and characterized by flailing and floundering UNTIL a paradigm is found (and RATHER: actually, this should be done NOW and with any necessary efforts: FORMULATED). Preparadigmatic is nothing good, clear or even "natural"; it is a state of insufficiency, failing to provide for making for clear sustained integrated progress (and even, as indicated, I would say this situation is: unnecessary -- see my delineation of the characteristics of a paradigm * to see why this situation in Psychology is unnecessary and INEXCUSABLE, because clearly you MUST be doing paradigm definition the best you can, clearly and respectably). _AND_ we are not talking about progress in one vein (sub-"area"), but some interpretable, agreeable findings for the whole field -- a necessary condition of HAVING ANY sort of general SCIENCE AT ALL; obviously Psychology does not have that and should not be considered a science just because people in that field want to say that and supposedly aspire in that way [ ("aspire" somehow -- usually essentially mythologically, irrationally, and just "hoping beyond hope" (as people say)) ] In short: that state of preparadigmatic should not be tolerated; major efforts should be clearly going on to improve from this state immediately ("if not sooner", as they say -- i.e. this SHOULD HAVE BEEN DONE SOONER).
Since I think I DO KNOW at least many of the characteristics of a paradigm (presented elsewhere, for one: in the description of the "... Ethogram Theory" Project *) AND since mine is the only paradigm being "offered up", Psychology people should damn well take full note of that and fully read and come to a reasonable understanding of my perspective and approach -- all that leading to clear, testable hypotheses that, IF SHOWN CORRECT, would be of general applicability and importance and very reliable (in the formal sense) and, thus (as I say): agreeable. In short, I OFFER THE ONLY FULL-FLEDGED GENERAL PSYCHOLOGY PARADIGM, and if someone is in the Psychology field and really cares about science, they must take note (and fully assess it) (no reason for any exception): minimally, all must "see" AND READ:
Barring any "competition", my paradigm should be studied and fully understood -- NO REASONABLE SCIENCE CHOICE ABOUT IT. It stands alone in Psychology, as a proposal for a NECESSARY "ingredient" for SCIENCE for Psychology.
* FOOTNOTE (this footnote is referenced twice in the essay above): The characteristics of a paradigm are presented in the Project referred to: https://www.researchgate.net/project/Human-Ethology-and-Development-Ethogram-Theory-A-Full-Fledged-Paradigm-Shift-for-PSYCHOLOGY (in particular, in its description)
I am analyzing data collected using two instruments, i.e. a questionnaire and an observation checklist. The questionnaire asked about teaching strategies used by the teachers, who rated themselves on a Likert scale. But I need to confirm whether they really use those teaching strategies in the classroom, so I used the observation checklist to verify it.
So I have two datasets: one from the questionnaire, in the form of Likert-scale ratings, and one from the observation checklist.
What statistical tool should I use to compare the results and produce concrete findings for my research?
I have read the literature and came to the conclusion that the chi-square test would be appropriate. I would value suggestions on what I should do.
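If you do tabulate self-reported use against observed use (e.g. per strategy: reported yes/no vs observed yes/no), the chi-square statistic on the resulting contingency table is straightforward to compute. A minimal pure-Python sketch with hypothetical counts (whether chi-square is the right test for your design is a separate question to settle first):

```python
def chi_square(table):
    """Pearson chi-square statistic for a contingency table (list of rows).
    Hypothetical layout: rows = self-reported use (yes/no),
    columns = observed use in classroom (yes/no)."""
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat
```

For example, `chi_square([[20, 5], [5, 20]])` (strong agreement between self-report and observation) gives a large statistic, while a table with identical rows gives 0; the statistic is then compared against the chi-square distribution with (rows-1)(cols-1) degrees of freedom.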
Hello, I'm working on a terminology project with the Regenstrief Institute for LOINC terms (Logical Observation Identifiers Names and Codes). They've already created terms for the CDC and WHO assays; commercial vendors are now applying for terms, introducing signal combinations of ORF, E, N, RdRp, S, etc. We're trying to proactively determine where data analytics on population health is going to be best served. Is it enough to know that nucleic acid of SARS-CoV-2 was detected, or might the greater good be served by recording the individual signals of samples in databases? Thank you in advance.
Our data is a yearly time series of macroeconomic variables, with 20 years of observations. The dependent variable is GDP growth. The independent variables are government terms (dummies), consumption, net trade, FDI, foreign reserves, population growth, and agricultural production. The main objective is to examine the impact of government terms on economic development.
How do I reference the NIH Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies (in text and in the reference list)?
Thank you so much for your help,
There are, of course (correspondingly), "idiotic" Questions. And ridiculous speculations by analogy (and argument by analogy is always suspect). (An example of "Answers" by foundation-less and link-less analogy is " 'quantum' explanations" of several psychological things; it is a bad joke that any such person could have a graduate degree!) Sorry: that is my perception of common things here on RG. And, after years of nonsense, I have decided to state my observation.
It is likely very nearly pointless that I am here on RG (and there have been only so many years I have been able to try to fool myself).
The focus of this discussion is software for football. According to Chang (2018), citing Carling (2005), performance analysis can generally be classified into two main categories: notational analysis and motion analysis. The two systems have different focuses. Notational analysis provides a factual record of the position of the ball, the players involved, the action concerned, the time, the outcome of the activity, etc. Motion analysis focuses on raw features of an individual's activity and movement, for example, identifying fatigue and measuring work rate.
The two systems contribute to performance analysis, which has three main aims:
- Observing one's own team's performance to identify strengths and weaknesses
- Analysing the opposition's performance by using data, trying to counter opposing strengths and exploit weaknesses
- Evaluating whether a training programme has been effective in improving match performance
Performance analysis is not just about analysing matches and games. It is essential in the training programme to help coaches improve players' performance. The following figure shows the coaching cycle, in which performance analysis plays a key role.

Starting from the top, "Performance" means the performance in the game or training. "Observation" can come from the coach or a video camera. Since research indicates that coaches are able to recall fewer than half of the key incidents that arise during a game, a video camera is the better option, as it can record all the key events (actions and movements) for further analysis.

"Analysis" means the analysis of data, including data management: for example, using performance analysis software to code the game, editing footage from the camera, extracting data from a data provider, etc. These are the areas in which the performance analyst spends most of the time. The products of this stage can be statistical analyses and video recordings.

"Interpretation" can be done in two ways, in my experience: by the coach or by the performance analyst. Some analysts have authorisation from the coach to interpret the data and then write a report or make a presentation to the coach or team. Some coaches just want the data from the performance analysts and will interpret it themselves. It really depends on the coach's preference and the partnership between the analyst and the coach.

After that, "Planning" means the coach plans what to do after learning what went wrong or which parts the team did well. The coach has to evaluate the performance prior to this planning stage; otherwise, he doesn't know how to improve the team's performance in the next match. In most cases, this means planning the coaching session using the results of the performance analysis. "Preparation" means executing those coaching sessions in training to prepare the team for the coming game.
The cycle then returns to the "Performance" stage, and the whole cycle keeps going.
What kind of software or app are you using for performance analysis in football? Can you share the positive and negative aspects according to your experience?
I am aware of articles that have considered 10, 15, 20, and 25 degrees for the elevation mask. Which one is reliable for equatorial and low-latitude regions, and why? I would be glad if a citable reference were also included.
Hi everyone, I am doing a systematic review on the assessment of the spine with a specific assessment tool. Most of the included studies assess subjects at a single time point, and no intervention is done; they only assess subjects' kinematics or other variables. Do we agree that this study design is observational? Do you know of quality assessment tools for this type of study? I already found the "National Heart, Lung, and Blood Institute (NIH) Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies", but I don't know if this one is the best for this topic. (link: https://www.nhlbi.nih.gov/health-topics/study-quality-assessment-tools) Thank you, Alexandre Luc
I need to assess the quality of studies included in a systematic review of the literature based on the tool developed by the National Heart, Lung, and Blood Institute (NIH): the Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies.
I'm having difficulty answering question #5: "Was a (...) power description, or variance and effect estimates provided?"
In the cohort studies included in my systematic review (N = between 564 and 50,000 participants), how do I know whether a description of power or variance and effect estimates was provided? How is this information usually formulated?
Many thanks in advance.
Hello everyone. I have yearly observation and raster-value data, and I need to calculate monthly Pearson correlation coefficients. I started to do it, but it took a lot of time; I realize it will take too much time if I calculate it manually.
Is there any way (code) to calculate it programmatically, or is calculating it by hand the only way?
Thanks in advance.
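Yes, per-month grouping plus Pearson's r is only a few lines in any language. A minimal pure-Python sketch (the record layout of (month, station value, raster value) is an assumption; adapt it to however your pairs are stored):

```python
from math import sqrt
from collections import defaultdict

def pearson(xs, ys):
    """Pearson correlation coefficient for two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def monthly_pearson(records):
    """records: iterable of (month, station_value, raster_value) tuples.
    Groups the pairs by month and returns {month: r}."""
    groups = defaultdict(lambda: ([], []))
    for month, obs, ras in records:
        groups[month][0].append(obs)
        groups[month][1].append(ras)
    return {m: pearson(xs, ys) for m, (xs, ys) in groups.items()}
```

With real raster data you would typically extract the raster values at the station coordinates first (e.g. with a GIS library), then feed the resulting (month, observation, raster) triples into a function like this.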
I don't know what the criteria are for classifying a study as "good, fair, or poor" when using the Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies. Could anyone please help me with this question?
I want to correct thirty years of daily and monthly wind output from two ECMWF models using station observation data, to obtain suitable patterns for different levels in the region. What is the best way to do this?
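One of the simplest bias-correction approaches is mean/variance scaling against the station observations over a common calibration period; quantile mapping is a common refinement for wind speeds. A minimal pure-Python sketch of the scaling step (a simplified illustration, not a full bias-correction workflow):

```python
from math import sqrt

def variance_scaling(model, obs_mean, obs_std):
    """Rescale model wind values so their mean and (population) standard
    deviation match the station observations from a calibration period.
    `model` is a list of model values; obs_mean/obs_std come from the
    co-located station series over the same period."""
    n = len(model)
    m_mean = sum(model) / n
    m_std = sqrt(sum((v - m_mean) ** 2 for v in model) / n)
    scale = obs_std / m_std
    return [obs_mean + (v - m_mean) * scale for v in model]
```

The corrected series then has exactly the observed mean and spread over the calibration period; the same transformation is applied to the remaining model years, separately per level and per month if the bias is seasonal.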
If Psychology continues (even thoughtlessly) with its baseless, unproven, and (actually) UNLIKELY-true (i.e. likely false) core assumptions, won't just a lot of very poor research be done and none good? Here is something so you can just see the "tip of the iceberg":
Psychology theorists/researchers do not find behavior patterns of a biological nature (showing biological patterning); even more telling is that they VERY RARELY even use the phrase "behavior PATTERNS" -- which absolutely MUST be the way it is. THIS ALONE MAKES THE CASE OF THE CLOSED-OFF AND ARTIFICIAL NATURE OF PSYCHOLOGY AND HOW IT IS NOT A SCIENCE.
[( By the way : If you want to see what a real paradigm shift looks like -- THE paradigm shift -- then see the papers and writings on Ethogram Theory (under my Profile). (Beyond Kuhn.))]
[(As Psychology continues its extreme negligence, I can provide equally extreme well-founded criticism (and put it all down in writing, with all the reasoning and justification -- much better assumptions and arguments than they can mount). I guess it's a "standoff": but it's me vs [who-knows-who, the heck, or their numbers] -- they certainly might be characterized as cowards, at least in "these parts" (MT).)]
Observation (participant/direct) is one of the data collection methods in qualitative research (e.g. ethnographic study). I am just wondering how consent is sought in such a way that the behaviors of those being observed will not be affected? What are the common requirements (or comments) of IRBs regarding this method of data collection?
My data are I(2), so after first differencing they become I(1). My optimal lag length is 5. I ran the Johansen co-integration test, and the result shows they are co-integrated. After that, I ran a VECM. As the procedure suggests, I used a lag length of 4 in the VECM calculation, but it returned the message "Insufficient Number of Observation". It should be noted that I have 28 observations, from 1990 to 2017.
Hey guys. I've been trying to get financial support for my research here in Brazil, but without success. I've tried four times with Brazilian agencies and am still waiting for some answers. I've got some money by means of crowdfunding here, but not enough.
My student Thibault Mlg started another campaign on a French website: https://www.leetchi.com/c/comment-les-organismes-se-fossilisent-projet-de-recherche
For those who can contribute or spread the word: thank you! For those who want to know a little about my work, see my papers here on ResearchGate, or send me an email: email@example.com
Here follows an abstract of my project:
"How do fossils preserve?" Although many physicists consider traveling to the past practically impossible, paleontologists have long known how to access past events. The temporal ordering of the paleontological record in rocks allows us to travel to the past and revisit ecosystems and environments that no longer exist. More than telling the tale of extinct organisms, fossils and rocks can provide information on past biological factors, environmental conditions, and other natural processes that contribute to our knowledge of the origin and evolution of life on Earth, and maybe even on other planets. Observation and experimentation are the methods that dominate science. Paleontology excels by testing hypotheses based mostly on observation of the fossil record, and it continues to grapple with conflicting conceptual and explanatory questions. Clays are a broad spectrum of mineral varieties with different chemical properties. They are possibly related to life at its origin, and certainly involved in contexts of exceptionally preserved paleobiotas (such as the Marizal, Tamengo, and Corumbataí formations, Brazil). In this project we expect to understand the effect of the mineralogy and chemical properties of sediments on the preservation of soft tissues. For this purpose, we will experimentally test different processes and environments of fossilization, using analytic techniques to access the data.
I have all my field notes (observation and semi-structured interview questions and answers); I just need to analyze them so I can write my findings, analysis, and evaluation.
I have been reading about the observer effect as conceptualised in Physics. However, I would like to know how this concept has been applied in the social sciences.
Some lecture reports say that zonal flow stands for the mode with toroidal/poloidal mode number n = m = 0 in plasma physics. By that definition, the n = 0, m != 0 Geodesic Acoustic Mode (GAM) is not a zonal flow. But other papers say the GAM is also a kind of zonal flow. So who is right and who is wrong? What exactly is zonal flow in plasma physics?
L.W. Yan et al. (2007). Three-dimensional features of GAM zonal flows in the HL-2A tokamak. Nucl. Fusion 47, 1673.
L. Lachhvani, J. Ghosh, P. K. Chattopadhyay, N. Chakrabarti, and R. Pal (2017). Observation of geodesic acoustic mode in SINP-tokamak and its behaviour with varying edge safety factor. Phys. Plasmas 24(11).
I have pairs of data for two different methods (i.e., biometric data taken by two different sensors). However, those data are repeated for each human subject (i.e., each sensor captures biometric data several times for each subject).
Human Subject | Method x | Method y
1             | 4.231    | 5.53344
1             | 2.112    | 4.111
2             | 1.432    | 2.473
2             | 7.666    | 3.234
Since we have the following:
1) two groups, where the dependent variable is the biometric variable and the independent variable refers to the sensor (i.e., the method),
2) the data are continuous,
3) the data follow a normal distribution,
this would allow us to use the paired t-test. The only thing I am not sure about is whether we can simply take the two columns, Method x and Method y, and run the t-test. My understanding is that we cannot, and that we need to take the mean for each subject and then use those means for the t-test instead of the actual data.
Let me know what you think.
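Your understanding matches the usual advice: the repeated measurements within a subject are not independent, so one common approach is to average within subject first and run the paired t-test on one pair per subject. A minimal pure-Python sketch (the numbers echo the toy table above; for a real analysis a mixed-effects model is the more general alternative):

```python
from math import sqrt
from collections import defaultdict

def subject_means(rows):
    """rows: (subject_id, method_x, method_y) triples.
    Average the repeated measurements per subject, so each subject
    contributes exactly one paired observation."""
    acc = defaultdict(lambda: [0.0, 0.0, 0])
    for sid, x, y in rows:
        acc[sid][0] += x
        acc[sid][1] += y
        acc[sid][2] += 1
    return [(sx / n, sy / n) for sx, sy, n in acc.values()]

def paired_t(pairs):
    """t statistic for paired samples (df = n - 1)."""
    diffs = [x - y for x, y in pairs]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / sqrt(var / n)
```

Running the t-test on the raw rows instead would inflate the effective sample size and understate the p-value, because within-subject repeats are correlated.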
In a paper published this week in Nature, researchers at XENON1T reported that they had observed the radioactive decay of xenon-124: xenon-124 decayed to tellurium-124, and the atomic number went from 54 to 52.
I am working on a panel dataset (220 observations (countries, years)), and after running an outlier test in Stata (interquartile test) I found that I have 25 outliers across different countries:
11 in one country, and the rest in the others. I made sure it is not a human error.
I don't want to remove them, as they could reflect the genuine behavior of the data, since my dataset covers the MENA region.
thanks in advance.
Observation/data plays a crucial role in any kind of research study in general, and in scientific research in particular. But some people claim that the real skill lies in analyzing those data statistically, to interpret the trend of the research toward a meaningful conclusion.
What is your view in this regard?
Yep. RG is equating science with experiments. There may be those who like this, but experimentation is NOT THE ENTIRE SCIENTIFIC METHOD (and I would argue that experimentation is THE LEAST OF IT -- especially if one is developing a new perspective and approach). RG appears to have no appreciation for "just" verified observations -- even though that may be exactly what really new discovery looks like . Those observations may, in time (but not right away), be followed by experimentation. Verified observations by themselves may be very important and all we have for some time (in some new areas/kinds of investigation).
The outrageous bias of RG is so great that they now hide the Project Updates (of the Log) with multiple queries about one's experiments and hypotheses -- as if all good, clear hypotheses could be put "in a nut shell" (in a small "blank", with little context) AND that experiments are all that matter (or at least all that deserves several special headings). How about a heading for: "Verified Observations"?
I would ask: What experiments did Einstein do to come up with his understanding of the universe? Did he start with experiments? NO!! He started with observation and MATH (which is basically verified observation). True, eventually some experiments were done to VERIFY HIS IMPORTANT OBSERVATIONS -- but all this did NOT begin with experiments.
And, all of this is not to mention major swathes of Biology. Come on, give us a break.
Observation is a vital scientific method that helps greatly in collecting reliable primary information, since it involves direct study of the situation.
Observations can be scientific when used carefully by researchers in their work, but it should be noted that not all observations are scientific in nature.
It is important that the investigator (observer) understands the functions of observation; otherwise the gathered data may not be accurate.
So the prime requirement is that the observer first plans out properly 'what has to be observed'.
Therefore, I am looking for the correct procedure for working out the details of an observation before I start observing.
"Perception affects perspective and vice versa."
Looking for a short and concise answer.
We are working on reflection-in-action -- more precisely, on how junior doctors engage in reflection during the action -- and we are using shadowing as a data collection method. Of course, this is different from reflection-on-action (Schon), in which practitioners reflect after they have taken decisions and actions. There is much more evidence on reflection-on-action.
The question is self-explanatory. The observations in the common region are few (about 5-10%).
Excuse my questions presented as statements. I actually mean that I have an idea but want others' thoughts.
I have a strong argument that verifiability in science does not carry an "axiomatic" value in science, but that it is there to reduce uncertainty (equiv: increase certainty). When we extrapolate too far, we cannot be that certain of our theory. How do we reduce it? Observation.
Bedford and Cooke: "In practical scientific and engineering contexts, certainty is achieved through observation, and uncertainty is that which is removed by observation. Hence in these contexts uncertainty is concerned with the results of possible observations."
Ref: Bedford, Tim; Cooke, Roger. Probabilistic Risk Analysis: Foundations and Methods (Page 19). Cambridge University Press. Kindle Edition.
In my study, video recordings (360° video) of different rooms were made. Several people watched these videos and each drew a sketch of the room shown in the video. Are there any established techniques for comparing such self-drawn floor plans?
To what extent are there observational indications, from gravitational waves emitted by neutron-star mergers, of a non-negligible contribution from vacuum energy to their total mass?
In China, data collection from rivers, lakes and wetlands is crucial to the global freshwater biodiversity observation network.
I am planning research on the application of certain theories using case studies. Since this is considered a qualitative approach, I decided to use the structured observation technique. Please advise whether structured observation can be used in deductive research.
It is stated in some interpretations of QM (notably the Copenhagen interpretation) that an object stays in superposition (SP) as long as no human (a good physicist's -- just kidding) brain observes it, the observation projecting it onto a definite state (dead/alive). But I think the following very simple example disproves this claim. Consider a working clock in a closed room. It was left showing 2 o'clock and is not observed (it is even placed in vacuum, to avoid decoherence induced by the medium) -- say, shut in there for 2 hours -- and then, on entering the room, one of course sees it showing 4 o'clock instead of 2 o'clock. But according to the interpretations above, at every instant it must have been in a superposition of showing the next second or not. If observation is what causes it to advance, then there was no observation, and hence it must still be showing 2 o'clock (analogously to the cat being both dead and alive). So I think this disproves the claim that a system stays in SP until a brain or the environment projects it. Do you agree?
Although it's a very basic question, I think it must be discussed in a forum like this: before we start studying any property of a sample -- electrical, optical, thermo-mechanical, etc. -- what must be kept in mind regarding material characterization? For example:
Electrical and magnetic techniques
Spectroscopic techniques, etc.
For instance, if one is studying electrical properties, which characterization techniques -- Scanning Electron Microscopy (SEM), Transmission Electron Microscopy (TEM), FTIR, XRD, EDS, EDXA, EDXMA, XPS, etc. -- must one go through?
And similarly for other properties?
Hospitals are required to comply with various laws and bylaws. An investigator wants to study whether a particular hospital is complying with green-building laws; he will measure the dimensions of the patient areas and other areas in the hospital and compare them with green-building standards. What would this study design be? A case study? An observational descriptive study? Or some other type of design? Please help.
In performing transient simulations with Ansys Fluent (Workbench), the results file that is
created is huge (148 GB), and I have 27 cases to simulate.
I am looking for any way to reduce the size of the results file.
- In all cases, I am interested only in the inlet and outlet fluid temperatures and velocities.
- In all cases I have to simulate 93 days; for each day I want to record the results of the first 11 hours, and I am not interested in recording the last 13 hours.
While counterstaining the alcian blue-stained cells with fast red, the alcian blue gets washed off.
The protocol I followed was: stain the fixed cells with alcian blue (1% prepared in 3% acetic acid, pH 2.5), 30 min incubation; wash with distilled water; stain with fast red (0.1 g fast red + 5 g aluminium sulfate in 100 ml distilled water), 5 min incubation; observe under a light microscope.
Kindly suggest a modification to this protocol, or point out its flaw.
I have a numerical dataset made up of feature sets extracted from brain MRI images. The observations in the dataset represent two classes, tumor or non-tumor. I want to train a deep learning model on these features and test its performance on unseen data. Can anybody help me find a MATLAB implementation of such a deep learning model? As a researcher, it would be a big relief.
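The asker wants MATLAB, but the general workflow -- train a small neural network on tabular features, then score it on a held-out split -- can be sketched quickly in Python with scikit-learn. The feature matrix and labels below are synthetic stand-ins, not real MRI features:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-in: 200 observations x 10 numeric features
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy labels: 1 = "tumor", 0 = "non-tumor"

# Hold out unseen data for the final evaluation
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale features, then fit a small multilayer perceptron
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
acc = model.score(X_test, y_test)  # accuracy on the unseen split
```

The same structure (standardize, train, evaluate on a held-out set) carries over directly to MATLAB's own tools; only the function names differ.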
How therapeutic are levels of observation in the management of suicide or self-harm?
For my research sampling I chose 8 people from the population; however, I cannot find a justification for that specific number. The sample is for observation and interviews.
While evaluating a general circulation model (GCM) against observations, we can see that the trends obtained from some GCMs, though similar in sign to the observed trend (say, both negative), are not always statistically significant (for example, at the 5% level). How should we interpret trends in a GCM that are not significant?
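A minimal sketch of the significance test the question refers to, using an ordinary least-squares slope on a toy series (the series here is synthetic, not GCM output):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
years = np.arange(1980, 2020)

# Toy annual series with a clear negative trend plus noise
series = -0.02 * (years - years[0]) + rng.normal(scale=0.1, size=years.size)

# linregress returns the slope and a two-sided p-value for slope != 0
res = stats.linregress(years, series)
significant = res.pvalue < 0.05
```

A negative but insignificant slope means the sign agrees with the observations, but sampling variability cannot be ruled out as the source of the trend; the non-parametric Mann-Kendall test is a common alternative when normality of the residuals is doubtful.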
I am having trouble finding sources that explicitly discuss this, but I get the sense that highly polymorphic species rarely occur at high latitudes. Does anyone have anecdotes or papers refuting or supporting this observation?
We are three students and need your support for our project about equipment for nature observation and hunting in Scandinavia.
It is mostly about binoculars and scopes.
As we are doing it in Scandinavia, it has turned out to be pretty hard for us to find respondents.
That's why we would be so pleased if you could support us! It would help us a lot!!
Just fill out this survey:
Thanks a lot!
Dear scientific community,
Could someone tell me the relationship between all the reactivity descriptors?
For example, I have a system with:
E(HOMO) ---> -7.1351
E(LUMO) ---> -2.5527
Gap energy ---> 4.5824
Electronegativity (χ) ---> 4.8439
Hardness (ƞ) ---> 2.2912
Chemical Poten. (µ) ---> -4.8439
Softness (Ѕ) ---> 0.4364
Electrophilicity (ω) ---> 11.731
Note: all values are in eV.
So, with these reactivity descriptors, what observations can I make?
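With the Koopmans-type definitions commonly used (conventions vary between authors, e.g. softness is sometimes defined as 1/(2η) rather than 1/η), all of the descriptors above follow from E(HOMO) and E(LUMO) alone. A quick Python check against the quoted values:

```python
e_homo, e_lumo = -7.1351, -2.5527  # eV, from the question

gap = e_lumo - e_homo          # HOMO-LUMO gap
chi = -(e_homo + e_lumo) / 2   # electronegativity (Mulliken)
mu = -chi                      # chemical potential
eta = (e_lumo - e_homo) / 2    # chemical hardness
s = 1 / eta                    # softness (this matches the quoted 0.4364)
omega = mu**2 / (2 * eta)      # electrophilicity index (Parr's definition)
```

Note that Parr's ω = μ²/(2η) gives about 5.12 eV with these inputs, whereas μ²/2 gives about 11.73 eV, so it is worth checking which convention a quoted electrophilicity value follows.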
Hi fellow researchers,
For a descriptive analysis of capital flows and interest-rate differentials, I construct a variable for net private capital flows from quarterly BOPS data. However, of course not all quarterly observations include all categories of flows (e.g. for certain quarters, portfolio flows are missing completely). How do you tend to deal with these incomplete observations? Do you use them when constructing your series, or do you apply a certain rule (e.g. include a quarter only if at least net FDI, net portfolio and net other flows are available)?
Thanks very much in advance!
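One common rule -- compute the net total only when every required component is reported -- can be expressed in pandas with `min_count`; the column names and values below are hypothetical, not real BOPS data:

```python
import numpy as np
import pandas as pd

# Toy quarterly BOP components; NaN marks a missing category
df = pd.DataFrame({
    "net_fdi":       [1.0, 0.5, np.nan, 0.8],
    "net_portfolio": [0.2, np.nan, 0.1, -0.3],
    "net_other":     [-0.1, 0.4, 0.2, 0.6],
}, index=pd.period_range("2020Q1", periods=4, freq="Q"))

required = ["net_fdi", "net_portfolio", "net_other"]
# min_count=len(required): the row sum is NaN unless all components are present
df["net_private"] = df[required].sum(axis=1, min_count=len(required))
```

Quarters missing any component then come out as NaN and can be excluded from (or interpolated in) the constructed series explicitly, rather than being silently summed over fewer categories.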
I have annual data for one station, with values for 50 years:
i) observed recorded values = 200, 150, 300, 123, 154, 102, 154, ........., 123, 201, 361 (50 values)