Observation - Science topic
The act of regarding attentively and studying facts and occurrences, gathering data through analyzing, measuring, and drawing conclusions, with the purpose of applying the observed information to theoretical assumptions. Observation as a scientific method in the acquisition of knowledge began in classical antiquity; in modern science and medicine its greatest application is facilitated by modern technology. Observation is one of the components of the research process.
Questions related to Observation
Hello,
I am having a serious overload (shown in figures) in the rig when I insert the bath electrode in the perfusion chamber.
I tried:
1) I checked the amplifier using a model cell, which worked perfectly fine.
2) Then I checked the connections from the headstage to the reference electrode by swapping in other electrodes, yet the problem persisted.
3) I have two channel units, and neither of them is working.
4) I checked all the parameters, which looked fine, and the grounding from the electrodes was also removed.
Interestingly, my colleague had been able to patch the previous Friday, and I started patching on the following Monday. But today my colleague also observed the same problem in their rig.
1) What could be the reason for this? Could it be a fault with the electrical supply to the room? Our rigs are supplied from different sockets in the same room. Could someone help explain this? Even after unplugging from the socket, the reference electrode showed overload.
2) Then how did the model cell work? When I inserted the model cell, the resistance was only 500 MΩ.
Observation: the voltage shown in MultiClamp is low when the electrode is not immersed, and when it is immersed it moves in the negative direction.
I use a MultiClamp 700B amplifier. Any advice is appreciated.
Thank you, Sooraj.




Weather stations are the key facilities for recording long-term change in surface air temperature (SAT) over land. However, some weather stations are located in or near cities, and they may undergo relocations due to the expansion of the cities and the resulting deterioration of observing conditions. This is especially true in developing regions such as mainland China, where most of the national stations have been relocated at least once.
This practice has led to breakpoints in the SAT data series, so researchers have to apply an adjustment called homogenization before analyzing long-term change in SAT. Different homogenization procedures produce different SAT trends, with some effectively restoring the urbanization effect in the historical SAT records. Homogenization can thus introduce a new bias into the site and regional SAT data series.
Each country or region may handle things differently when weather stations are engulfed by buildings. What are you seeing in your countries or regions? Have observational stations also been moved away from urban areas? What do you think about the possible influence of urbanization on the historical SAT records, whatever strategy has been practiced?
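As a rough illustration of what homogenization does at a relocation breakpoint, the sketch below estimates and removes a step change around an assumed relocation date. The series is synthetic and the date, window length, and imposed step are placeholders; operational procedures (SNHT, pairwise comparison with reference series, etc.) are far more careful, and adjusting the older segment to the level of the current site is exactly the step that can move an urbanization signal into or out of the record.

```python
import numpy as np
import pandas as pd

# Synthetic monthly SAT series standing in for a station record; replace with
# your own data. A documented relocation in mid-2005 is assumed for illustration.
rng = np.random.default_rng(0)
idx = pd.date_range("1990-01-01", "2019-12-01", freq="MS")
season = 10 * np.sin(2 * np.pi * idx.month / 12)
sat = pd.Series(15 + season + rng.normal(0, 1, len(idx)), index=idx)
relocation = pd.Timestamp("2005-06-01")
sat[sat.index >= relocation] -= 0.8          # imitate a step change at the move

window = 5 * 12                               # five whole years on each side
before = sat[sat.index < relocation].iloc[-window:]
after = sat[sat.index >= relocation].iloc[:window]

# Crude mean-shift estimate at the breakpoint; real homogenization methods
# separate the step from trend and seasonality much more carefully.
step = after.mean() - before.mean()
print(f"Estimated step at the relocation: {step:.2f} K")

# Adjust the pre-relocation segment to the level of the current site.
homogenized = sat.copy()
homogenized[homogenized.index < relocation] += step
```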
Has it occurred to any of you AI/AGI people that my writings may be of a science of truly empirical psychology, even if just an outline with just the clear cases or clear types needed for such?
If you make that input central whenever it is (or would be) relevant, that would be good material for a generalized artificial machine.
Also see my Answer (to this same Question) below for more stimulation of insight. (Click the Question's title to see it and my answer.)
Dear Researchers and Scientists,
I conducted zeta potential measurements on several metal oxides, specifically focusing on magnesium oxide samples prepared during my research work. I encountered an intriguing trend that I would appreciate your insights on.
Experimental Method:
- I prepared dispersions by adding 1, 5, 10, 15, and 20 mg of magnesium oxide to deionized water.
- Each dispersion was ultrasonicated to ensure uniform particle distribution.
- I obtained samples from these dispersions for zeta potential measurements using a DLS instrument (Malvern Zetasizer, ZS-90, U.K.).
- All experimental conditions, such as temperature and ultrasonication time, were kept consistent for each sample.
Observation: I observed a peculiar trend: as the concentration of magnesium oxide increased, the zeta potential values consistently decreased. Notably, there was a variation of approximately 4-5 mV between individual samples.
Research Efforts: Despite all my efforts to find an explanation for this trend, I was unable to find a satisfactory answer. Therefore, I kindly request researchers and scientists working in the field of zeta potential to comment and share their opinions regarding this phenomenon.
Potential Considerations:
- Surface charge: Could changes in surface charge properties of the particles at higher concentrations of magnesium oxide contribute to the observed trend?
- Aggregation: Is it possible that particle aggregation or interaction influences the zeta potential, particularly at higher concentrations?
I would sincerely appreciate any insights, theories, and suggestions that could help me understand the observed trend in zeta potential values.
Thank you in advance.
Regards
Amid
According to James C. Keith (see appended PDF files), two components of gravitational drag are to be expected on rotating systems: a drag of order 1/c⁵, observable preferably on systems of astronomical size, and a drag of order 1/c³, preferably observable on millimeter-size systems. Observation is limited in either case by the experimental resolution of the rotational frequency shift. In the early 1970s, Hulse and Taylor [1] determined the relative deceleration of a binary neutron star system at 1.5 × 10⁻¹² per second, in agreement with theoretically accepted gravitational quadrupole radiation. A relative deceleration of about 2 × 10⁻¹¹ per second was observed around the same time [2] on a 2.5 mm diameter steel ball at 75 kHz rotational speed. This observation has not yet been accepted by established theoretical science as being due to gravitational interaction. The access of experimental results to serious analysis seems to be largely a matter of decision by representatives of theoretical rather than engineering science.
[1] Astrophysical Journal, Vol. 195, p. L51-L53 (1975)
[2] Physical Review Letters, Vol. 30 (16), p. 753-757 (1973)
I'm interested in identifying the plankton species seen in the observation photos I uploaded. The photos show a microscopic organism that resembles a gastropod, with a soft body and a small brown shell on top. Can anyone identify this organism and provide further information on the species seen in the photos? Thank you.

GENERAL DISCUSSION: ALL can keep "playing a game" AS their lives, but that will NOT work; our absolute BEST PROBABLY WILL NOT work (in my view) -- for survival of the species, the ultimate criterion. What does doing our best look like? : https://www.researchgate.net/post/With_Climate_Change_and_all_I_will_tell_you_what_I_think_the_minimum_needed_for_survival_is
[ I am childless. I have absolutely NO PERSONAL REASON TO CARE ABOUT YOU -- I guess it is, in good part, for "sound logic"'s sake -- we can DO with that IFF [ (two "f"s are not a typo) ] we actually DO with THAT. Be an animal, forget any and all religion. (I am, by the way, a Theravada Buddhist -- an atheist, believing IN NO supernatural at all ; we have enough to do with the natural, the actual actual. We very much must be concerned with, as-much(and well)-as-possible, educating ourselF (as much as we can)(all of us, doing this) . BUT, do it in a NO-SELF WAY (to have no clutter) in YOUR [own] way, i.e. to not falsely "connect" to YOU (your own "self") in any way not necessary (or, if you must, (as you may have to) : temporarily) (AND expel any processing where you are BELIEVING something, via essentially NOTHING or nothing clear) Don't be clingy; and verify ALL YOU CAN, for yourselF (and others), and thereby come to something closer to reality [(as much as we can)], reality (or realities, if you like [(but now all at one time)]) as it really is -- this is a WAY (i.e. cross-contexts), it does not come automatically. But, it is as simple as it need be. Good luck. (For a bit more guidance, see https://mynichecomp.com .) I have NOT BEEN PAID AT ALL for half a life (the latter (later?) half ); there is really no reason not to trust me, unless you're very confused and/or [must] see me as insane.. I have no vested interest(s) (in any conventional sense) AT ALL (we all do have some interests).
I DO MEAN : much of psychology should be reconsidered in order to have CLEAR EMPIRICAL FOUNDATIONS, FOR ALL NECESSARY CONCEPTS -- for concepts to clearly correspond to some demonstrably important directly observable phenomena (like in all true sciences; another way to say this is : THE SUBJECT DEFINES ALL). This does NOT mean throwing findings out, but putting them in better contexts. Likely empirical realities (including possible observations of a concrete nature; i.e. such , at times, showing as clear OBSERVABLE bases , in clear, agreeable and reliable ways, and seen by the relationships to established PATTERNS : valid; and, that is, in really HARD FACTS -- the concrete bases at least SEEN at some points in ontogeny) . SUCH phenomena have not been discovered and are not sufficiently represented in Psychology (AND nothing much is even "begging" for what is needed, showing needed thought is not being given (in the dictatorships of the universities)).
And, they will not be as long as the group or grouped stuff (know it by p<.05 etc) is thought to be meaningful FOR THE INDIVIDUAL ORGANISM (THE unit-of analysis , always -- if you want a science). AND NOW IT IS NOT clear that THAT is, in the essential ways, usual (when such clear connections are not made and clear justifications (in THAT empiricism) cannot be given). In fact, it is totally clear that the essential features are NOT THERE.
On the positive side, I do like quite a lot of the Memories research, because some good "chunk" of it does fulfill the needed empirical foundations.
Again, as some have seen me say before, another way you can tell that most "psychology" is "OFF", is by the failure to see BEHAVIOR **PATTERNS** PER SE as a type of BIOLOGICAL (organismic) patterning. If behavior is not seen as Biological in nature, it is not seen well.
Teachers can use multiple methods to assess character development in their classrooms by selecting and combining different types of measures that suit their learning objectives, students' needs, and available resources⁴. For example, teachers can use:
- **Surveys or questionnaires** to measure students' self-reported character traits or skills at the beginning and end of a unit, course, or school year. Surveys can be standardized and objective measures that are easy to administer and score².
- **Observations or behavioral measures** to monitor how students demonstrate character in their actions and interactions in the classroom or school environment. Observations can be subjective and interpretive measures that require clear criteria and training for raters.
- **Portfolio or performance assessments** to showcase students' character development through various products or tasks that they create or complete. Portfolios or performances can be authentic and creative measures that require clear rubrics and feedback for students².
- **Interviews or focus groups** to elicit students' perspectives and experiences on character development. Interviews or focus groups can be qualitative and exploratory measures that require careful planning and analysis.
By using multiple methods, teachers can capture a more comprehensive and accurate picture of students' character growth over time and across different domains. Teachers can also involve students in the assessment process by asking them to self-assess, peer-assess, or co-create the assessment criteria or tools⁴. This can help students become more aware, reflective, and responsible for their own character development.
Source:
(1) Methods of assessment: An Introduction - Teaching and Learning. https://teaching.usask.ca/articles/methods-of-assessment.php.
(2) A Tool to Help Students Assess and Improve Their Character - Edutopia. https://www.edutopia.org/article/tool-help-students-assess-and-improve-their-character.
(3) 61 Effective Assessment Strategies for Teachers To Use. https://www.indeed.com/career-advice/career-development/assessment-strategies.
(4) How to Incorporate Character Development in the Classroom. https://www.northcentralcollege.edu/news/2022/03/10/how-incorporate-character-development-classroom.
'Organismic' is a word that should be used WAY more often (as is 'PATTERNS'). ( I am writing for psychology here.) [ One could either say a LOT more here to make it clearer, but the words (would-be terms) give one sufficient guidance. ]
If psychology cannot do BOTH (and more, read on), it will never ever be a science (the main sign of something that is NOT science: p<.01, p<.05, and the like). Real things are not group things (as statistical things often are) AND are beyond probabilistic (e.g. Piaget didn't need statistics). Unit of analysis: the individual human/organism; any other view is HOPELESSLY doomed NOT to be a science.
(<-- If you cannot see and document this unit-of-analysis, you are off in your own universe(s) (yet may have many friends and professors with you).) SEE AND READ MY LAST DISCUSSION POSTED to understand "the problem" more. AND: NO, I do not accept actuarial "science" work -- which most of psychology actually is, today.
[ ( I used to tell you my writings are THE way, BUT still no listeners/readers among the lazy (which is about all OTHERS or IS all others) -- my writings still show the way. Things could hardly be worse under a dictatorship. Hear that professors who profess ???.) ]
Psychology People :
I have a hard time believing that, in effect, few (if any) believe there might be a bit of "conditioning" needed to see a new perspective and approach. (Reflect on the fact that Buddha needed to use much repetition, and that in several different contexts, for people to "see" what he was talking about -- that is a fact.) See my next post (Discussion) for more.
The study will contain a triangulation of data:
Quantitative Questionnaire,
Semi-structured Interviews,
Staff Observations/questionnaire, and
Institutional Records.
Any advice would be much appreciated.
I do not find this decently sensible in any way. They (RG site persons) imply that they just want to have members go to (or back to) what THEY 'see' as THE major research (so, ALL should be reading just largely out-of context and esoteric published research (or pre-published articles) and, that is all).
Somehow they, apparently, think Projects are needlessly distracting and of lesser value; this outlook will make what is presented by each and every member LIMITED (not to mention boring and with many of the Articles' presentations poorly-founded and esoteric). And, these very poorly based or poorly contextualized studies ARE OFTEN unintelligible as presented in the revered peer-reviewed Articles, no matter how learned a reader may be.. (Citations or multiple citations for just about EVERY sentence, surely does not mean high quality NOR well-integrated OR useful for ANY true science OR for any supposedly developing science.) Researchgate wants to recreate that which is top of mind for many :Psychology pseudo-'scientists' ; publications, publications, publications !! -- peer-reviewed, but often RUINED. And for academics used to or highly rewarded for such, this place (RG), simply stated, will be just what was present before the Internet.
I, myself, in a shortly-upcoming post, will provide everyone a way to my major Projects and ALL the Updates to them. ALL THAT will be available through my OWN DOMAIN AND WEBSITE. (Watch for new post, a post with the needed address information.)
I have been trying to optimize gene expression qPCR assays that are already set up in my lab for my genes. However, when I run the PCRs, I am getting later Ct values than expected, and thus my standard curve suffers from non-linearity and low efficiency. I also see that the Ct values for the same set of primers and the same dilution series of the positive control are increasing day by day.
Things I have tried:
- Used fresh reagents and plasticware: primers, SYBR, positive control (human reference RNA converted to cDNA), DEPC-treated water, filter tips, vials; fumigated the working area/lab.
- Recent calibration of qPCR instrument.
- Reagents have minimal freeze-thaw cycles.
Observations:
- Single melting temp peak, mostly.
- R2 of >0.9 during standard curve generation.
- No NTC contamination.
- Cts for the higher concentrations (10, 5 and 1 ng/µl) are usually as expected. It is mostly the dilutions below 0.5 ng/µl that come up very late, giving a steeper slope (< -4.0) and low efficiency (50-60%) in the standard curve.
TIA,
Nikhil
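For readers checking their own curves, the slope, R², and implied amplification efficiency of a standard curve can be computed in a few lines; the dilution series and Ct values below are hypothetical placeholders. A slope of about -3.32 corresponds to 100% efficiency, and steeper (more negative) slopes give the kind of low efficiencies described above.

```python
import numpy as np

# Hypothetical dilution series (ng/µL) and mean Ct values; replace with your data.
conc = np.array([10, 5, 1, 0.5, 0.1, 0.05])
ct   = np.array([22.1, 23.2, 25.6, 27.9, 31.8, 34.0])

x = np.log10(conc)
slope, intercept = np.polyfit(x, ct, 1)

# R^2 of the linear fit of Ct against log10(input)
pred = slope * x + intercept
ss_res = np.sum((ct - pred) ** 2)
ss_tot = np.sum((ct - ct.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# Amplification efficiency from the slope: E = 10^(-1/slope) - 1
# (a slope of -3.32 gives E = 1.0, i.e. 100% efficiency)
efficiency = 10 ** (-1 / slope) - 1

print(f"slope = {slope:.2f}, R^2 = {r2:.3f}, efficiency = {efficiency:.0%}")
```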
As we know, in many references (Farmer et al., 1975; Schmidt et al., 2018; Austin, 2019), harmonic analysis has been applied to the individual thermistor temperature records, especially for high-frequency water temperature data. I think this method is helpful for water temperature analysis, but I still do not fully understand its physical significance. Can anyone give a clear explanation of this?
After a harmonic analysis we obtain a signal, and it is easy to understand that the magnitude of this signal decreases with water depth. But some researchers assume that this decrease can be fitted with an offset exponential equation (Austin, 2019), and this is the part I cannot understand. I hope someone can kindly help explain it.
Thanks very much!
Reference:
[1] Farmer D M. Penetrative convection in the absence of mean shear. Quarterly Journal of the Royal Meteorological Society, 1975, 101(430): 869-891.
[2] Austin J A. Observations of radiatively driven convection in a deep lake. Limnology and Oceanography, 2019, 64(5): 2152-2160.
[3] Schmidt S R, Gerten D, Hintze T, et al. Temporal and spatial scales of water temperature variability as an indicator for mixing in a polymictic lake. Inland Waters, 2018, 8(1): 82-95.
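Regarding the offset exponential fit of harmonic amplitude against depth, a minimal sketch of such a fit with scipy is shown below. The depth and amplitude arrays are hypothetical stand-ins for the output of the harmonic analysis; the fitted e-folding depth describes how quickly the periodic signal decays with depth, and the constant offset absorbs a roughly depth-independent background, which is one plausible reading of why an offset term is added.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical amplitudes of a diurnal harmonic extracted from thermistors
# at different depths (m, °C); replace with your own harmonic-analysis output.
depth = np.array([1, 2, 5, 10, 15, 20, 30, 40])
amp   = np.array([0.60, 0.45, 0.28, 0.15, 0.10, 0.07, 0.05, 0.045])

def offset_exp(z, a0, d, c):
    """Amplitude decaying exponentially with depth plus a constant offset."""
    return a0 * np.exp(-z / d) + c

popt, pcov = curve_fit(offset_exp, depth, amp, p0=(0.6, 5.0, 0.05))
a0, d, c = popt
print(f"surface amplitude = {a0:.2f} °C, e-folding depth = {d:.1f} m, offset = {c:.3f} °C")
```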
I have a data table which contains some gaps (missing values) for time series analysis. It is a monthly table of ocean water quality parameters covering 15 years. Which imputation method should I use for the time series analysis?
Kalman smoothing, interpolation, Last Observation Carried Forward, or a weighted moving average?
How to decide?
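One practical way to decide is to try the candidate methods side by side and see how they behave across the gaps, or, better, mask some observed months, impute them, and compare the errors. A minimal pandas sketch of three of the candidates is below on a synthetic series; Kalman-based imputation is not shown because it needs a state-space model (available, for instance, via statsmodels in Python or packages such as imputeTS in R) rather than a one-liner.

```python
import pandas as pd
import numpy as np

# Synthetic monthly water-quality series with gaps; replace with your own data.
idx = pd.date_range("2008-01-01", periods=180, freq="MS")
y = pd.Series(np.sin(np.arange(180) * 2 * np.pi / 12) + 20, index=idx)
y.iloc[[15, 16, 40, 90, 91, 92]] = np.nan   # introduce some gaps

# 1) Time-aware linear interpolation
interp = y.interpolate(method="time")

# 2) Last Observation Carried Forward (LOCF)
locf = y.ffill()

# 3) Fill gaps with a centred moving average of neighbouring months
roll = y.fillna(y.rolling(window=5, center=True, min_periods=1).mean())

# Compare the candidates at the positions that were originally missing
print(pd.DataFrame({"interp": interp, "locf": locf, "rolling": roll})[y.isna()])
```

One thing to keep in mind for a 15-year seasonal series is that LOCF tends to flatten the seasonal structure, which may matter more than the small differences between the other methods.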
Update 29.1.2023:
I updated the work with new findings. It turned out that all three, the speed of light in vacuum, the electric field constant, and the magnetic field constant, are derived from the rotating body of the Earth. Therefore, it becomes much clearer how the 12-Dimensional Universe is constructed.
I updated also the draft paper with the reasoning about Pyramids of Giza:
Now it seems even more convincing that the pyramids were used for astronomical observations.

Fellow Researchers, I need some wisdom.
I am currently performing an experiment involving CaCo2 cells seeded on a Transwell. I don't have experience with such assays, therefore please forgive me, if my questions are somewhat ignorant. I've searched ResearchGate and found some answers in this discussion:
Can transwell-cultured Caco-2 monolayers be imaged with BD Pathway confocal microscopy while keeping them in transwell? (researchgate.net)
However, I wonder: is there a possibility to observe the Transwells while they are on a plastic plate, without the necessity of transferring them to a glass dish? Additionally, I use the type of Transwell that stands on the bottom of the well rather than hanging on the rim of the well, so I am not sure whether that solution would work in my case. My idea was: if I used a non-inverted (upright) microscope, could I maybe see my cells in the Transwell? I understand that after seeding, my cells need 21 days to differentiate and form a monolayer, but how can I check that they are in fact attached to the Transwell and growing if I have no way of seeing them? Is staining with some viability dye the only way? I am afraid of a scenario in which I wait 21 days for my cells to differentiate while in fact they didn't even attach.
I'm doing a structured observation, tallying how many times a teacher does a certain practice in 10-minute intervals. How would you analyse this data?
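One simple starting point is descriptive: treat each 10-minute interval as one observation of a count, report the mean rate and its variability, and (if expected counts are not too small) test whether the practice is spread evenly across intervals. A sketch with placeholder tallies:

```python
import numpy as np
from scipy import stats

# Hypothetical tallies of the target practice in consecutive 10-minute intervals.
counts = np.array([3, 5, 2, 4, 6, 1, 3, 4])

print(f"mean = {counts.mean():.1f} per 10 min, "
      f"SD = {counts.std(ddof=1):.1f}, total = {counts.sum()}")

# Chi-square goodness-of-fit: are the tallies spread evenly across intervals?
# (The approximation is rough when expected counts per interval are small.)
chi2, p = stats.chisquare(counts)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```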
[ These criticisms may apply more to studies in the behavioral sciences, those being the ones I know about. ]
There is a big tendency for researchers to do research that [supposedly] TRIES to "build on" previous research. AND, there is a belief that such studies will lead to better understanding of (/definition-of) core concepts in a "field". AND, ALSO, other even less related (less concretely or physically interrelated) studies, such as interdisciplinary studies, are believed to lead to better understanding as well.
I believe neither of these is necessarily the case or even likely true (and, to a notable extent, never true with some research as it is). I believe it is more often NOT true that progress is being made in these ways, since the unit of analysis and its aspects are not clear, OR real (proven) developed relations have not been found. Given the present research ways (many having long, numerous historical/philosophical roots), I believe that, more often than yielding good, real desired results (from findings), the results will NOT be interpretable in any reliable or valid way. This is an area where some good scientific analytic philosophers could be of real help (thus the reason for the existence of this discussion question).
My view is that if you do not have well-guided/well-justified and WELL-related studies, specifically with all phenomena involved or of present interest RELATING AS MUCH AS POSSIBLE TO DIRECT OBSERVATIONS OF essentially FOR-SURE FOUNDATIONAL OVERT PHENOMENA AND/OR a clear case or clear reflection of such actual phenomena (and here too, CONCRETE LINKS at some time were shown and INVOLVED), then you are "off-track". Such is needed for science advancement ITSELF (<-- this being key to science and a MAJOR indication OF REAL SCIENCE itself). [ (In Psychology, the subject and aims of studies and findings should be to illuminate KEY Behavior PATTERNS, by clearly relating all of them to directly observable overt behavior patterns that ARE reliably and validly seen (with clear concrete foundations) OR to such "things" THAT WERE (and, ideally: have been) once so clearly and reliably seen during development (i.e. ontogeny). (Yet notice: STILL there is plenty of latitude left for many types of concepts to be involved in proposed explanations, given development and the demonstrated possibilities of the huge capacities of the Memories.) ]
I was required to find a research problem (or problem statement) regarding the student-teacher peer observation method (a method of observing, analysing, and evaluating teaching in the classroom).
Do you have any ideas?
Can you provide me with appropriate references that would help me read and learn more about this topic?
This morning (December 2, 2019) at about 6 UTC+1, at a geographic location of about 52° N, 13° E, I saw a chain of 12 or more satellites in two groups (5 or more + 7), flying with equal spacing within each group from west to east over the zenith (i.e. far from the equator). Does anyone know this constellation of satellites, or a website that can be searched for the orbits of satellites visible to the naked eye?
I have a regression problem with two objective variables or outputs (named E & r). I made a model for every objective separately. I used Gaussian processes regression.
I obtained prediction for both objectives as can be seen in the attached images (error bar shows standard deviation).
Title of the plots shows R2 & RMSE of prediction. There is a categorical variable in dataset (Mixer) which has two values (50L, 2400L), shown by different colors on the plot.
Next, I calculated R2 & RMSE separately for every Mixer (shown in the legend in the attached images).
As you can see, for objective "E", the RMSE of Mixer 2400L (blue) is lower than the RMSE of Mixer 50L (orange). But its R² is very low, which is surprising to me: I expected that when RMSE is lower, R² should be higher.
And for objective "r", the RMSEs of both Mixers are almost the same, but the R² of Mixer 2400L is much lower.
I have only one hypothesis about this phenomenon: the low R² is due to the smaller number of samples for Mixer 2400L.
No. of Observations:
- Total : 119
- Mixer 50L : 106
- Mixer 2400L : 13
If you have any ideas, please let me know.
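A likely part of the explanation is that R² is computed relative to each group's own target variance, so a subgroup whose targets span a narrow range can have a low (even negative) R² despite a small RMSE, and with only 13 samples the estimate is noisy anyway. The sketch below reproduces the effect on synthetic data (all names and values are placeholders):

```python
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error

def group_metrics(y_true, y_pred, groups):
    """R^2 and RMSE per level of a categorical variable (e.g. Mixer)."""
    for g in np.unique(groups):
        m = groups == g
        rmse = np.sqrt(mean_squared_error(y_true[m], y_pred[m]))
        r2 = r2_score(y_true[m], y_pred[m])
        print(f"{g}: n = {m.sum():3d}, RMSE = {rmse:.3f}, R2 = {r2:.3f}, "
              f"target SD = {y_true[m].std(ddof=1):.3f}")

# Synthetic data: the small group has little spread in its targets,
# so even a small RMSE gives a low R^2.
rng = np.random.default_rng(0)
y_big   = rng.normal(10, 3.0, 106); pred_big   = y_big   + rng.normal(0, 1.0, 106)
y_small = rng.normal(10, 0.8, 13);  pred_small = y_small + rng.normal(0, 0.7, 13)

y_true = np.concatenate([y_big, y_small])
y_pred = np.concatenate([pred_big, pred_small])
groups = np.array(["50L"] * 106 + ["2400L"] * 13)

group_metrics(y_true, y_pred, groups)
```

Here the small group ends up with the lower RMSE but also the much lower R², purely because its targets have little spread.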




I recently posted an observation on this question of why most research uses wrought alloys when researching cast Al-MMCs. Most of the papers I see coming to IJMC on Al-MMCs use a wrought alloy as the host metal. The common ones are the 6061, 6063, 7045, 7075, and 2000 series. These are very difficult to cast, which is why most of the researchers use very simple shapes just to get test coupons. They also usually conduct only limited mechanical tests and focus on wear testing, which is of limited value to design engineers. These are mostly academic studies, and they tend to cite each other's works. I understand trying to limit compositional interactions, but this approach doesn't develop alloys that can be made into useful commercial shapes. The early Al-MMCs were based upon A356 (Al-Si) and also Al-Si-Cu alloys (like 319 and 383) as a means to improve stiffness (modulus) and high-temperature properties. There were collaborative efforts to see what material was needed to solve problems and to see whether it worked.
Prof. Rohatgi gave an AFS Golden Anniversary Lecture in 2019 that was published in the IJMC:
Ajay Kumar, P., Rohatgi, P. & Weiss, D. 50 Years of Foundry-Produced Metal Matrix Composites and Future Opportunities. Inter Metalcast 14, 291–317 (2020). https://doi.org/10.1007/s40962-019-00375-4
See the online version: https://rdcu.be/cB8IH
Unfortunately, this trend of using wrought alloys as the base metal is why a significant number of these papers get transferred to other, more materials-oriented journals or to Open Access. Very few look at whether the MMCs developed can be cast into complex shapes and whether the material can be re-processed. This is a BIG PROBLEM!!!! Our universities and researchers must address this issue if we want to advance the use of materials that can see practical applications. Don't just seek solutions in search of a problem.
I have monthly data and I found seasonality in it. I want to apply SARIMA, but I am not sure whether I can use SARIMA on only 28 observations. Kindly share any reference on whether SARIMA can be applied to 28 monthly observations.
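Mechanically, a seasonal model can be fitted to 28 monthly points, for example with statsmodels as sketched below on synthetic data, but with barely more than two seasonal cycles the seasonal parameters are poorly identified, so keeping the specification minimal (or using seasonal dummies instead) seems prudent. Whether the results are trustworthy is the substantive question, not whether the software runs.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical 28-month series standing in for the real data.
idx = pd.date_range("2021-01-01", periods=28, freq="MS")
y = pd.Series(100 + 10 * np.sin(np.arange(28) * 2 * np.pi / 12)
              + np.random.default_rng(1).normal(0, 2, 28), index=idx)

# Minimal specification: with barely more than two seasonal cycles,
# anything beyond one seasonal term is unlikely to be estimable.
model = SARIMAX(y, order=(1, 0, 0), seasonal_order=(1, 0, 0, 12))
res = model.fit(disp=False)
print(res.summary())
```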
I have secondary data on which I am to carry out a spatial price transmission analysis for my master's thesis. I have 33 missing observations in the urban series and 23 in the rural series, out of 100 weekly observations in total. I planned to use a VAR model, but the large number of missing observations relative to the total data makes me unsure; even if I interpolate, I am afraid the research might not really make sense.
Has anyone experienced a similar challenge? What did you do?
What do you suggest I do with the missing observations?
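Before deciding between interpolation and dropping the series, it may help to look at how the missing weeks are distributed: a few isolated gaps are usually safe to interpolate, while long consecutive runs are hard to fill credibly for a VAR. A small pandas sketch for profiling the gaps (the series below is synthetic; replace it with your own data):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the weekly price series with missing values.
rng = np.random.default_rng(0)
idx = pd.date_range("2022-01-03", periods=100, freq="W-MON")
prices = pd.DataFrame({
    "urban": rng.normal(250, 20, 100),
    "rural": rng.normal(200, 15, 100),
}, index=idx)
prices.loc[prices.sample(33, random_state=1).index, "urban"] = np.nan
prices.loc[prices.sample(23, random_state=2).index, "rural"] = np.nan

for col in ["urban", "rural"]:
    missing = prices[col].isna()
    # length of every consecutive run of missing weeks
    runs = missing.groupby((~missing).cumsum()).sum()
    runs = runs[runs > 0]
    print(f"{col}: {int(missing.sum())} missing weeks, {len(runs)} gaps, "
          f"longest gap = {int(runs.max()) if len(runs) else 0} weeks")
```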
Suppose we have a system designed to deliver services to customers arriving during weekdays. The arrival process is modeled as a Poisson process with arrival rate λ, and we use agent-based modeling with NetLogo to study the behavior of customers. After multiple observations and replications of the model, the first 8 hours were selected as the warm-up period and the remaining time as steady state. If we consider the average length of stay (ALOS) as the crucial output, how should we handle the initialization bias in this case?
As a workaround, is removing the warm-up data sufficient, taking into account the effect of the warm-up period on the ALOS?
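A common treatment is the replication/deletion approach: discard everything that falls in the warm-up of each replication, compute the ALOS from the steady-state portion only, and average across replications. A minimal sketch, assuming each replication provides arrival and departure times in hours (the column names and the synthetic data are placeholders):

```python
import numpy as np
import pandas as pd

WARMUP_HOURS = 8.0

def alos_steady_state(df, warmup=WARMUP_HOURS):
    """ALOS using only customers who arrive after the warm-up period.
    Assumes columns 'arrival_time' and 'departure_time' in hours."""
    steady = df[df["arrival_time"] >= warmup]
    return (steady["departure_time"] - steady["arrival_time"]).mean()

# Synthetic replications standing in for NetLogo output files.
rng = np.random.default_rng(0)
replications = []
for _ in range(10):
    arrivals = np.sort(rng.uniform(0, 40, 200))   # 40-hour run
    stays = rng.exponential(1.5, 200)             # hours in system
    replications.append(pd.DataFrame({"arrival_time": arrivals,
                                      "departure_time": arrivals + stays}))

alos = [alos_steady_state(rep) for rep in replications]
print(f"ALOS over {len(alos)} replications: "
      f"{np.mean(alos):.2f} ± {np.std(alos, ddof=1):.2f} hours")
```

One detail to settle is the convention for customers who arrive during the warm-up but leave after it; the sketch simply excludes anyone arriving before the 8-hour mark.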
Hello,
I have a dataset of 850 subjects and 6 potential raters. Each subject can be classified into 4 nominal categories. My idea is to split the dataset equally among the raters but holding out a subset for an initial calculation of agreement.
I wonder what the size of this subset should be so that all raters can evaluate it and the agreement can be calculated. From what I have read, the proper calculation would be Light's kappa (since there are more than two raters with a fully-crossed design [Hallgren, 2012]).
Is there any software package that can help me estimate the size of that subset?
Best,
Bruno
Hallgren, K. A. (2012). Computing Inter-Rater Reliability for Observational Data: An Overview and Tutorial. Tutorials in Quantitative Methods for Psychology, 8(1): 23–34.
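Light's kappa itself is just the mean of the pairwise Cohen's kappas over the fully-crossed subset, so once a candidate subset size is chosen it is easy to compute and to examine its stability; a sketch with placeholder ratings (6 raters, 4 categories) is below. For choosing the subset size itself, dedicated kappa sample-size methods based on the expected kappa and the desired confidence-interval width are the usual route (implemented, as far as I know, in R packages such as kappaSize).

```python
from itertools import combinations
import numpy as np
from sklearn.metrics import cohen_kappa_score

def lights_kappa(ratings):
    """Mean pairwise Cohen's kappa; ratings has shape (n_subjects, n_raters)."""
    pairs = combinations(range(ratings.shape[1]), 2)
    kappas = [cohen_kappa_score(ratings[:, i], ratings[:, j]) for i, j in pairs]
    return np.mean(kappas)

# Placeholder ratings: 100 subjects, 6 raters, 4 nominal categories,
# with each rater agreeing with a latent "truth" about 80% of the time.
rng = np.random.default_rng(0)
truth = rng.integers(0, 4, 100)
ratings = np.column_stack([np.where(rng.random(100) < 0.8, truth,
                                    rng.integers(0, 4, 100))
                           for _ in range(6)])

print(f"Light's kappa on the calibration subset: {lights_kappa(ratings):.2f}")
```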
Hello Researchers,
My question is:
Q. Is it sensible/possible to use Quantile Autoregressive Distributed Lag (QARDL) approach to assess the association among the variables, when there are only 27 observations?
thanks
Observation of the cell wall of a bacterium with an electron microscope.
Some of the common assumptions for the parametric tests include Normality, Randomness, Absence of Outliers, Homogeneity of Variances, Independence of Observations, and Linearity.
Is random sampling (a data collection method) included in this list or not?
Please reply with references.
I am currently working on an ed-tech project where we are evaluating an Android-based learning application. We are working with primary students and are in the process of designing an observational tool for measuring student engagement. We aim to measure behavioral, cognitive, and emotional engagement, but we have not been able to find any observational tool that has been used to measure all three dimensions.
Can anyone suggest observational tools that have been used in Pakistan, India, or developed countries to measure student engagement in the primary grades?
Thanks in advance.
Is there any reading resource available as a reference for conducting research trials in the field? I am looking for step-by-step guidance, for example: 1. site selection, 2. soil sampling (composite or otherwise), 3. experimental design (RCBD, factorial, or other), 4. observations (in a plant pathology trial), 5. appropriate analysis, and finally the conclusion.
I have a big dataset which contains 4,787 observations and almost 100 variables. The questionnaire has some nested questions: for example, respondents are asked to answer Q#2 only if they answered Q#1 as YES, and Q#8 is answered only by those who answered Q#4 as YES. In this way the usable data shrink and the missing values increase. So how can I handle this kind of missing data in R, where the values are systematically missing rather than user-missing?
Firstly, if I delete all observations with NA, I lose 75% of the data, including good data points.
Secondly, the mice package in R is for user-missing data (situations in which a respondent failed to answer a question).
Kindly help in this regard
Locally derived surveillance data to track resistance patterns and better understand the burden of AMR on patients there.
Observations from mined resources now show that the Pfizer Inc.-sponsored Antimicrobial Testing Leadership and Surveillance (ATLAS) programme is an online platform that provides widespread access to data on both emerging bacterial and fungal resistance patterns.
This supports public health and, in turn, the steps that back up health promotion.
Much of this is quoted from elsewhere, but I think deserves its own thread:
Kuhn, whom I have always seen as having only a partial (that is: just a "some-parts") understanding of a paradigm, still seems at least in the direction of being correct in some noteworthy ways. According to Kuhn:
An immature science is preparadigmatic -- that is, it is still in its natural history phase of competing schools. Slowly, a science matures and becomes paradigmatic. (End of short summary of some of his views.) [ It will be clear I do not fully agree with these views, in particular: the " 'natural' history" part. ]
I would say that preparadigmatic is not yet science at all and characterized by flailing and floundering UNTIL a paradigm is found (and RATHER: actually, this should be done NOW and with any necessary efforts: FORMULATED). Preparadigmatic is nothing good, clear or even "natural"; it is a state of insufficiency, failing to provide for making for clear sustained integrated progress (and even, as indicated, I would say this situation is: unnecessary -- see my delineation of the characteristics of a paradigm * to see why this situation in Psychology is unnecessary and INEXCUSABLE, because clearly you MUST be doing paradigm definition the best you can, clearly and respectably). _AND_ we are not talking about progress in one vein (sub-"area"), but some interpretable, agreeable findings for the whole field -- a necessary condition of HAVING ANY sort of general SCIENCE AT ALL; obviously Psychology does not have that and should not be considered a science just because people in that field want to say that and supposedly aspire in that way [ ("aspire" somehow -- usually essentially mythologically, irrationally, and just "hoping beyond hope" (as people say)) ] In short: that state of preparadigmatic should not be tolerated; major efforts should be clearly going on to improve from this state immediately ("if not sooner", as they say -- i.e. this SHOULD HAVE BEEN DONE SOONER).
Since I think I DO KNOW at least many of the characteristics of a paradigm (presented elsewhere, for one: in the description of the "... Ethogram Theory" Project *) AND since mine is the only paradigm being "offered up", Psychology people should damn well take full note of that and fully read and come to a reasonable understanding of my perspective and approach -- all that leading to clear, testable hypotheses that, IF SHOWN CORRECT, would be of general applicability and importance and very reliable (in the formal sense) and , thus (as I say): agreeable. IN short, I OFFER THE ONLY FULL-FLEDGED GENERAL PSYCHOLOGY PARADIGM and if someone is in the Psychology field and really cares about science, they must take note (and fully assess it) (no reason for any exception): Minimally, all must "see" AND READ:
Barring any "competition", my paradigm should be studied and fully understood -- NO REASONABLE SCIENCE CHOICE ABOUT IT. It stands alone in Psychology, as a proposal for a NECESSARY "ingredient" for SCIENCE for Psychology.
* FOOTNOTE (this footnote is referenced twice in the essay above): The characteristics of a paradigm are presented in the Project referred to: https://www.researchgate.net/project/Human-Ethology-and-Development-Ethogram-Theory-A-Full-Fledged-Paradigm-Shift-for-PSYCHOLOGY
(in particular, in its description)
I am analyzing data collected with two instruments: a questionnaire and an observation checklist. The questionnaire asked teachers about the teaching strategies they use, and they rated themselves on a Likert scale. But I need to confirm whether they really use these teaching strategies in the classroom, so I used an observation checklist to verify it.
So I have two data sets: one is the questionnaire in the form of a Likert scale, and the second is the observation checklist.
What statistical tool should I use to compare the results and produce concrete findings for my research?
I have read the literature and came to the conclusion that a chi-square test would be appropriate. I would value suggestions on what I should do.
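If the chi-square route is taken, one simple setup is to pair each teacher's questionnaire response with the matching checklist record for the same strategy, cross-tabulate, and test. A sketch with hypothetical labels is below (the Likert responses are collapsed to two levels so the expected cell counts stay reasonable). Note, though, that because the same teachers provide both measures, a paired test such as McNemar's test on the 2×2 agreement table may be more defensible than an ordinary chi-square test of independence.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical paired data for one teaching strategy: what each teacher
# reported on the (collapsed) Likert scale vs. what the checklist recorded.
df = pd.DataFrame({
    "self_report": ["agree", "agree", "disagree", "agree", "disagree",
                    "agree", "disagree", "agree", "agree", "disagree"],
    "observed":    ["used",  "used",  "not used", "not used", "not used",
                    "used",  "not used", "used",  "used",  "used"],
})

table = pd.crosstab(df["self_report"], df["observed"])
chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```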
Hello, I'm working on a terminology project with the Regenstrief Institute for LOINC terms (Logical Observation Identifiers Names and Codes). They've already created terms for the CDC and WHO assays; commercial vendors are now applying for terms, introducing signal combinations of ORF, E, N, RdRp, S, etc. We're trying to proactively determine where data analytics on population health is going to be best served. Is it enough to know that nucleic acid of SARS-CoV-2 was detected, or might the greater good be served by recording the individual signals of samples in databases? Thank you in advance.
Our data are a yearly time series of macroeconomic variables. Observations = 20 years. The dependent variable is GDP growth. The independent variables are government terms (dummies), consumption, net trade, FDI, foreign reserves, population growth, and agricultural production. The main objective is to examine the impact of government terms on economic development.
Hello all,
How do I reference the NIH Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies (in text and in the reference list).
Thank you so much for your help,
Yossi
There are, of course (correspondingly), "idiotic" Questions. And ridiculous speculations by-analogy (and by-analogy is always suspect). (An example of "Answers" by foundation-less and link-less analogy is " 'quantum' explanations" to several psychological things; it is a bad joke any such person could have a graduate degree !) Sorry: my perception of common things here on RG. And, after years of nonsense, I have decided to state my observation.
It is likely very nearly pointless that I am here on RG (and there have been only so many years I have been able to try to fool myself).
The focus of this discussion is software for football. According to Chang (2018), citing Carling (2005), performance analysis can generally be classified into two main categories: notational analysis and motion analysis. The two systems have different focuses. Notational analysis provides a factual record of the position of the ball, the players involved, the action concerned, the time, the outcome of the activity, etc. Motion analysis focuses on raw features of an individual's activity and movement, for example identifying fatigue and measuring work rate.
The two systems contribute for the performance analysis which has three main aims:
- Observing one’s own team’s performance to identify strengths and weakness
- Analysing opposition performance by using data and trying to counter opposing strengths and exploit weaknesses
- To evaluate whether a training programme has been effective in improving match performance
Performance analysis is not just about analysing matches and games. It is essential in the training programme to help coaches improve players' performance. The following figure shows the coaching cycle, in which performance analysis plays a key role. Starting from the top, "Performance" means the performance in the game or training. "Observation" can come from the coach or from a video camera. Since research indicates that coaches are able to recall fewer than half of the key incidents that arise during a game, a video camera is the better option, as it can record all the key events (actions and movements) for further analysis. "Analysis" means the analysis of data, which includes data management: for example, using performance analysis software to code the game, editing footage from the camera, extracting data from a data provider, etc. These are the areas in which the performance analyst spends most of the time. The product of this "analysis" stage can be statistical analyses and video recordings. "Interpretation" can work in two ways, in my experience: it can be done by the coach or by the performance analyst. Some analysts have the coach's authorisation to interpret the data and then write a report or make a presentation to the coach or team. Some coaches just want the data from the performance analysts and will interpret it themselves. It really depends on the coach's preference and the partnership between the analyst and the coach. After that, "Planning" means the coach plans what to do after knowing what went wrong or which part the team did well in. The coach has to evaluate the performance prior to this planning stage; otherwise, he does not know how to improve the team's performance in the next match. In most cases this means planning the coaching sessions using the results of the performance analysis. "Preparation" means the execution of those coaching sessions in training to prepare the team for the coming game. The cycle then returns to the "Performance" stage and keeps going.
What kind of software or app are you using for performance analysis in football? Can you share the positive and negative aspects with us, according to your experience?





I am aware of articles that have considered 10, 15, 20, and 25 degrees for the elevation mask. Which one is reliable for equatorial and low-latitude regions, and why? I would be glad if a citable reference were also included.
Hi everyone,
I am doing a systematic review on the assessment of the spine with a specific assessment tool. Most of the included studies assess subjects at a single time point, and no intervention is performed; they only assess the subjects' kinematics or other variables.
Do we agree that this study design is observational?
Do you know some quality assessment tools for this type of studies?
I already found the "National Heart, Lung and Blood Institute (NIH) Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies", but I don't know if this one is the best for this topic.
(link: https://www.nhlbi.nih.gov/health-topics/study-quality-assessment-tools)
Thank you,
Alexandre Luc
Hello,
I need to assess the quality of studies included in a systematic review of the literature based on the tool developed by the National Heart, Lung, and Blood Institute (NIH): the Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies.
I'm having difficulty answering question #5: "Was a (...) power description, or variance and effect estimates provided?"
In the cohort studies included in my systematic review (N= between 564 and 50,000 participants), how do I know if a description of power or variance and effect estimates was provided? How is this information usually formulated?
Many thanks in advance.
Murielle
Hello everyone. I have yearly observation and raster value data, and I need to calculate monthly Pearson correlation coefficients. I started to do it, but it took a lot of time, and I realize it will take far too long if I calculate it manually.
Are there any ways (code) to calculate it programmatically, or is the only way to calculate it by hand?
Thanks in advance.
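Once the observation and raster values are paired by date, a per-month Pearson correlation is a few lines of code rather than a manual exercise. A sketch with synthetic data (the column names and layout are assumptions about how the table is organised):

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

# Synthetic paired data standing in for the real table: one row per date with
# the ground observation and the raster value extracted at that point.
rng = np.random.default_rng(0)
dates = pd.date_range("2000-01-01", "2019-12-31", freq="MS")
obs = rng.normal(0, 1, len(dates))
raster = 0.8 * obs + rng.normal(0, 0.5, len(dates))
df = pd.DataFrame({"date": dates, "obs": obs, "raster": raster})

# Pearson r computed separately for each calendar month across all years.
df["month"] = df["date"].dt.month
for month, grp in df.groupby("month"):
    r, p = pearsonr(grp["obs"], grp["raster"])
    print(f"month {month:2d}: n = {len(grp):3d}, r = {r:.2f}, p = {p:.3f}")
```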
I don't know what the criteria are for classifying a study as "good, fair or poor" when using the Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies. Could anyone please help me with this question?
Best regards
Irismar Gonçalves
I want to correct thirty years of daily and monthly wind output from two ECMWF models using station observation data, in order to obtain suitable patterns for different levels in the region. What is the best way to do this?
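One simple option, sketched below on synthetic data, is a regression-based (linear) bias correction fitted separately for each level against the collocated station observations; quantile mapping is a common alternative when the shape of the wind-speed distribution, not just its mean, is biased. All names and numbers in the sketch are placeholders for your own collocated model/station pairs.

```python
import numpy as np
import pandas as pd

# Synthetic matched daily samples standing in for station observations and
# model wind at two levels; replace with your own collocated data.
rng = np.random.default_rng(0)
frames = []
for level in ["10m", "850hPa"]:
    obs = rng.gamma(2.0, 2.5, 2000)                      # "observed" wind speed
    model = 0.8 * obs + 1.0 + rng.normal(0, 1.0, 2000)   # biased model wind
    frames.append(pd.DataFrame({"level": level, "obs": obs, "model": model}))
df = pd.concat(frames, ignore_index=True)

# Linear bias correction fitted per level: obs ≈ a * model + b on a
# calibration sample, then applied to the model series.
for level, grp in df.groupby("level"):
    a, b = np.polyfit(grp["model"], grp["obs"], 1)
    df.loc[grp.index, "model_corrected"] = a * grp["model"] + b
    print(f"level {level}: slope = {a:.2f}, offset = {b:.2f} m/s")
```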
If Psychology continues (even thoughtlessly) with its baseless, unproven, and (actually) UNLIKELY-true (i.e. likely false) core assumptions, won't just a lot of very poor research be done and none good ? Here is something so you can just see the "tip of the iceberg":
Psychology theorists/researchers do not find behavior patterns of a biological nature (showing biological patterning); even more telling is that they VERY RARELY even use the phrase "behavior PATTERNS" -- which absolutely MUST be the way it is. THIS ALONE MAKES THE CASE FOR THE CLOSED-OFF AND ARTIFICIAL NATURE OF PSYCHOLOGY AND HOW IT IS NOT A SCIENCE.
[( By the way : If you want to see what a real paradigm shift looks like -- THE paradigm shift -- then see the papers and writings on Ethogram Theory (under my Profile). (Beyond Kuhn.))]
[(As Psychology continues its extreme negligence, I can provide equally extreme well-founded criticism (and put it all down in writing, with all the reasoning and justification -- much better assumptions and arguments than they can mount). I guess it's a "standoff": but it's me vs [who-knows-who, the heck, or their numbers] -- they certainly might be characterized as cowards, at least in "these parts" (MT).)]
Observation (participant/direct) is one of the data collection methods in qualitative research (e.g. ethnographic studies). I am just wondering how consent is sought in such a way that the behaviors of those being observed will not be affected. What are the common requirements (or comments) of IRBs regarding this method of data collection?
My data are I(2), so after first differencing they become I(1). My optimal lag length is 5. I ran the Johansen co-integration test, and the result shows the series are co-integrated. After that, I ran the VECM. As the procedure suggests, I used a lag length of 4 in the VECM calculation, but it returned the message "Insufficient Number of Observations". It should be noted that I have 28 observations, from 1990 to 2017.
Hey guys. I've been trying to get financial support for my research here in Brazil, but without success. I've tried four times with Brazilian agencies and am still waiting for some answers. I've got some money by means of collective funding here, but not enough.
My student Thibault Mlg started another campaign on a French website: https://www.leetchi.com/c/comment-les-organismes-se-fossilisent-projet-de-recherche
For those who can contribute or spread the word: thank you! For those who want to know a little about my work, see my papers here on ResearchGate, or send me an email: forancelli.ufscar@gmail.com
Here follows an abstract of my project:
"How do fossils preserve?"
Although many physicists consider traveling to the past practically impossible, paleontologists have long known how to access past events. The temporal ordering of the paleontological record in rocks allows us to travel to the past and revisit ecosystems and environments that no longer exist. More than telling the tale of extinct organisms, fossils and rocks can provide information on past biological factors, environmental conditions, and other natural processes that advance our knowledge of the origin and evolution of life on Earth, and maybe even on other planets. Observation and experimentation are the methods that dominate science. Paleontology excels by testing hypotheses based mostly on observation of the fossil record, and it continues to face conflicting conceptual and explanatory questions about it. Clays are a broad spectrum of mineral varieties with different chemical properties. They are possibly related to life at its origin, and they are certainly involved in contexts of exceptionally preserved paleobiotas (such as the Marizal, Tamengo, and Corumbataí formations, Brazil). In this project we expect to understand the effect of the mineralogy and chemical properties of sediments on the preservation of soft tissues. For this purpose, we will test different fossilization processes and environments experimentally, using analytical techniques to acquire the data.
I have all my field notes (observation and semi-structured interview questions and answers); I just need to analyse them so I can write my findings, analysis, and evaluation.
I have been reading about the observer effect as conceptualised in Physics. However, I would like to know how this concept has been applied in the social sciences.
In some lecture reports [1], it is said that a zonal flow is the mode with toroidal/poloidal mode numbers n = m = 0 in plasma physics, so the n = 0, m ≠ 0 geodesic acoustic mode (GAM) is not a zonal flow. But other papers [2][3] say that the GAM is also a kind of zonal flow. So who is right and who is wrong? What exactly is a zonal flow in plasma physics?
[2] L.W. Yan et al., "Three-dimensional features of GAM zonal flows in the HL-2A tokamak," Nucl. Fusion 47, 1673 (2007).
[3] L. Lachhvani, J. Ghosh, P. K. Chattopadhyay, N. Chakrabarti, and R. Pal, “Observation of geodesic acoustic mode in SINP-tokamak and its behaviour with varying edge safety factor,” Phys. Plasmas, vol. 24, no. 11, 2017.
Hi,
I have pairs of data for two different methods (i.e., biometric data taken for two different sensors). However, those data are repeated for each human subject (i.e., each sensor captures biometric data several times for each subject).
For example,
Human Subject    Method x    Method y
1                4.231       5.53344
1                2.112       4.111
...
2                1.432       2.473
2                7.666       3.234
...
Since we have the following:
1) two groups where the dependent variable is the biometric variable
and independent variables refer to sensors (i.e., different methods),
2) Data are continuous,
3) The data follow the normal distribution
This would allow us to use the paired t-test. The only thing I am not sure about is whether we can simply take the two columns, Method x and Method y, and run the t-test. My understanding is that we cannot, and that we need to take the mean for each subject and then use those means for the t-test instead of the raw repeated data.
Let me know what you think.
Thank you,
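That understanding seems sound: averaging the repeats gives one pair per subject, so the pairs are independent, which is what the paired t-test assumes. A minimal sketch (the first rows reuse the numbers from the example above; the rest are placeholders):

```python
import pandas as pd
from scipy.stats import ttest_rel

# Repeated biometric readings per subject for the two sensors
# (subjects 1 and 2 reuse the example values; subjects 3 and 4 are placeholders).
df = pd.DataFrame({
    "subject":  [1, 1, 2, 2, 3, 3, 4, 4],
    "method_x": [4.231, 2.112, 1.432, 7.666, 3.100, 3.400, 5.000, 4.800],
    "method_y": [5.53344, 4.111, 2.473, 3.234, 3.300, 3.250, 4.900, 5.100],
})

# One mean per subject and method, so each subject contributes a single pair.
per_subject = df.groupby("subject")[["method_x", "method_y"]].mean()

t, p = ttest_rel(per_subject["method_x"], per_subject["method_y"])
print(per_subject)
print(f"t = {t:.2f}, p = {p:.3f} (n = {len(per_subject)} subjects)")
```

A linear mixed model with subject as a random effect would be the usual way to use all the repeated readings without averaging them first.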
In a paper published this week in Nature, researchers at XENON1T reported that they had observed the radioactive decay of xenon-124. Xenon-124 decayed to tellurium-124, and the atomic number went from 54 to 52.
Dear all,
I am working on a panel data set (220 observations (countries × years)), and after running an outlier test in Stata (interquartile test) I found that I have 25 outliers in different countries.
Eleven are in one country and the rest are spread across the others. I made sure this is not a human error.
I don't want to remove them, as this could be the genuine behavior of the data; my data set covers the MENA region.
thanks in advance.
Observation/data play a crucial role in any kind of research study in general, and in scientific research in particular. But some people claim that the real skill lies in analysing those data statistically to interpret the trend of the research towards a meaningful conclusion.
What is your view in this regard?
Yep. RG is equating science with experiments. There may be those who like this, but experimentation is NOT THE ENTIRE SCIENTIFIC METHOD (and I would argue that experimentation is THE LEAST OF IT -- especially if one is developing a new perspective and approach). RG appears to have no appreciation for "just" verified observations -- even though that may be exactly what really new discovery looks like. Those observations may, in time (but not right away), be followed by experimentation. Verified observations by themselves may be very important and all we have for some time (in some new areas/kinds of investigation).
The outrageous bias of RG is so great that they now hide the Project Updates (of the Log) behind multiple queries about one's experiments and hypotheses -- as if all good, clear hypotheses could be put "in a nutshell" (in a small blank, with little context) AND as if experiments are all that matter (or at least all that deserves several special headings). How about a heading for "Verified Observations"?
I would ask: what experiments did Einstein do to come up with his understanding of the universe? Did he start with experiments? NO!! He started with observation and MATH (which is basically verified observation). True, eventually some experiments were done to VERIFY HIS IMPORTANT OBSERVATIONS -- but all this did NOT begin with experiments.
And, all of this is not to mention major swathes of Biology. Come on, give us a break.
Observations and suggestions requested.
Hi,
I want to compare soil parameters and water table data (observed vs model output). Please list all the statistics possible.
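A typical starting set for observed-versus-modelled comparisons is bias, MAE, RMSE, Pearson's r, and the Nash-Sutcliffe efficiency. A small sketch with placeholder numbers:

```python
import numpy as np
from scipy.stats import pearsonr

def agreement_stats(obs, sim):
    """Common observed-vs-modelled statistics for a matched pair of series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    bias = np.mean(sim - obs)
    mae = np.mean(np.abs(sim - obs))
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    r, _ = pearsonr(obs, sim)
    nse = 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    return {"bias": bias, "MAE": mae, "RMSE": rmse, "r": r, "NSE": nse}

# Hypothetical water-table depths (m): observed vs model output.
obs = [1.2, 1.4, 1.1, 0.9, 1.3, 1.6, 1.5]
sim = [1.3, 1.3, 1.0, 1.1, 1.2, 1.7, 1.4]
print(agreement_stats(obs, sim))
```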
Observation is a vital scientific method that helps greatly in the collection of reliable primary information, since it involves direct study of the situation.
Observations can be scientific when used by researchers in various research works, but it should be noted that not all observations are scientific in nature.
It is important that the investigator (observer) understands the functions of observation; otherwise the gathered data may not be accurate.
So it is a prime requirement that the observer first properly plans out 'what has to be observed'.
Therefore, I am looking for the correct procedure for developing the details of an observation before I start observing.
"Perception affects perspective and vice versa."
Looking for a short and concise answer.
We are working on reflection-in-action; more precisely, how junior doctors engage in reflection during action, and we are using shadowing as a data collection method. Of course, this is different from reflection-on-action (Schön), in which practitioners reflect after they have taken decisions and actions. There is much more evidence on reflection-on-action.
The question is self-explanatory. Observations in the common region are not numerous (5-10%).
Excuse my questions presented as statements. I actually mean that I have an idea but want others' thoughts.
I have a strong argument that verifiability does not carry an "axiomatic" value in science, but that it is there to reduce uncertainty (equivalently: to increase certainty). When we extrapolate too far, we cannot be that certain of our theory. How do we reduce the uncertainty? Observation.
Bedford and Cooke: "In practical scientific and engineering contexts, certainty is achieved through observation, and uncertainty is that which is removed by observation. Hence in these contexts uncertainty is concerned with the results of possible observations."
Agree? Comments?
Ref: Bedford, Tim; Cooke, Roger. Probabilistic Risk Analysis: Foundations and Methods (Page 19). Cambridge University Press. Kindle Edition.
In my study, video recordings (360° video) of different rooms were made. Several people watched these videos and created a sketch of the room shown in each video. Are there any established techniques with which such self-drawn floor plans can be compared?