
# Measurement and Metrology - Science topic

Measurement and Metrology is the science of measurement, assessment, and quantification.

Questions related to Measurement and Metrology

In the early 1960s, James C. Keith proposed an experiment, based on relativistic theories of gravity, in which a small steel sphere at maximum rotational velocity would lose rotational energy at measurable rates due to gravitational interaction with distant masses of the universe (see appended PDF files). Experiments conducted in the early 1970s appear to confirm Keith's predictions:

The above question emerges from a parallel session [1] on the basis of two examples:

1. Experimental data [2] that apparently indicate the validity of *Mach's Principle* stay out of the discussion after the mainstream consensus declared Mach to be out; see also the appended PDF files.

2. The negative outcome of gravitational wave experiments [3] apparently does not affect the mainstream acceptance of the claimed discoveries.

If so, experimental results and related theory might also be helpful ...

I would like to do a project on laser measurement systems. My objective is to eliminate or reduce cosine error and Abbe error in a laser measurement system. I have a Renishaw XL80 system.

Please tell me the procedure to eliminate those particular errors, and help me to complete my project.

Has anybody used AmeriFlux data before and applied it to the Penman-Monteith equation or to calculating aerodynamic resistance?

When I need to compute the aerodynamic resistance, I first need to know the canopy heights and measurement heights of each AmeriFlux site. But I could not find descriptions of these on the FluxNet website. I can only find some of them in reference papers, but just a part, and this takes much time.

So I wondered: if somebody knows where I can easily find canopy heights and measurement heights for FluxNet sites (like AmeriFlux), can you tell me or send me a link? I know I should be able to "find" them, but I have not actually found them. Thanks for your time.
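On the aerodynamic-resistance part of the question: when site-specific roughness data are missing, the FAO-56 approximations (zero-plane displacement d = 2h/3, momentum roughness length 0.123h, and a heat/vapour roughness length one tenth of that) are often substituted. A minimal Python sketch; the function name is mine, and the approximations are generic FAO-56 assumptions, not AmeriFlux site values:

```python
import math

def aerodynamic_resistance(u_z, canopy_h, zm, zh, k=0.41):
    """FAO-56 style aerodynamic resistance (s/m).
    u_z: wind speed (m/s) at height zm; canopy_h: canopy height (m);
    zm, zh: measurement heights (m) of wind and of humidity/temperature."""
    d = 2.0 * canopy_h / 3.0    # zero-plane displacement height
    z_om = 0.123 * canopy_h     # roughness length for momentum
    z_oh = 0.1 * z_om           # roughness length for heat and vapour
    return (math.log((zm - d) / z_om)
            * math.log((zh - d) / z_oh)) / (k ** 2 * u_z)

# For the FAO reference grass (h = 0.12 m, sensors at 2 m) this reduces
# to the well-known ra = 208/u2:
print(round(aerodynamic_resistance(2.0, 0.12, 2.0, 2.0)))  # → 104
```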

As recently concluded in a parallel discussion, see reference below, LIGO is unable to exclude that mirror displacements as observed along their interferometer arms in fact result from much larger mirror displacements of similar profile along the vertical.

This is because mirror suspensions act along the local vertical, which over a distance of 4 km varies by an angle of about 2 arc minutes (a nautical mile, 1,852 m by definition, corresponds to 1 arc minute of angular distance at sea level). So every vertical mirror displacement will exhibit a displacement component about three orders of magnitude smaller along the connecting interferometer tube.

As LIGO is unable to directly measure vertical mirror displacements with adequate sensitivity, it cannot distinguish whether horizontal displacements such as those assigned to gravitational-wave interaction are due to horizontal excitation or to vertical excitation at amplitudes three orders of magnitude larger.
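The size of the projection factor claimed above can be checked in a few lines; the sin(angle/2) projection is a small-angle sketch of the geometry as described in the question, not a LIGO-published figure:

```python
import math

arm_km = 4.0
nautical_mile_m = 1852.0  # corresponds to 1 arc minute at sea level
angle_arcmin = arm_km * 1000.0 / nautical_mile_m
angle_rad = math.radians(angle_arcmin / 60.0)
print(round(angle_arcmin, 2))  # → 2.16

# A purely vertical displacement at one end station projects onto the arm
# direction with a factor of roughly sin(angle/2), i.e. about 3e-4,
# "three orders of magnitude" as stated above:
print(f"{math.sin(angle_rad / 2):.1e}")  # → 3.1e-04
```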

I have a dynamic force of ±4000 N with a frequency of 1.5 Hz, and I want to use a load cell to measure this force.

Some people told me that an S-type load cell can handle such a low frequency.

I'm not sure, so if anyone knows, please guide me.

Kindly list the advanced functions you want and elaborate on why you need them.

Dear Sirs,

Newton's 1st, 2nd, and 3rd laws require a closed system (zero net external force). How do we practically realize or create such a closed system?

One example. Let us look at the motion of a body. One can say that if the body's velocity is constant, e.g. zero, then no forces act on it. Is that true? I think not. According to Newton's 1st law, the constancy of the velocity is the CONSEQUENCE of F = 0.

So are there precise ways to construct a closed system? Or is all physical theory just a means of generating hypotheses that have a higher probability of being true than other random thoughts?

How can I identify and remove multiple seasonal components from an hourly time series spanning several years, with at least two seasonal components (daily and annual) and one trend component, using a function such as triple exponential smoothing (Holt-Winters) in R?

There might also be some unknown seasonal effects, which I want to identify. Is there an R function to do this?
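For identifying unknown seasonal periods, a periodogram of the detrended series is a common first step before fitting any smoother. A minimal sketch, in Python rather than R purely for illustration (in R the analogous tool is `spec.pgram`); the function name and the synthetic daily/weekly series are my own:

```python
import numpy as np

def dominant_periods(x, top=3):
    """Return the most prominent periods (in samples) of a series,
    found from the largest peaks of its FFT periodogram."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                       # remove the mean (zero-frequency bin)
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0)
    idx = np.argsort(power[1:])[::-1] + 1  # rank bins, skipping frequency zero
    return [1.0 / freqs[i] for i in idx[:top]]

# Synthetic hourly series: a daily (24 h) and a weekly (168 h) cycle plus noise.
rng = np.random.default_rng(0)
t = np.arange(24 * 7 * 52)  # one year of hourly samples
y = (2.0 * np.sin(2 * np.pi * t / 24)
     + 1.0 * np.sin(2 * np.pi * t / 168)
     + 0.3 * rng.standard_normal(t.size))
print(sorted(round(p) for p in dominant_periods(y, top=2)))  # → [24, 168]
```

Once the periods are known, they can be passed to a multi-seasonal decomposition of your choice.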

Greetings,

Carina

I have a water container surrounded by an electrical heater with insulation. There are a temperature sensor and an opening for the compressed-air supply at the bottom of the container. We were interested in measuring the humidity at different temperatures. We measured the humidity with an FTIR instrument, which is supposed to be accurate. The FTIR readings agreed with our calculated humidity over the temperature range up to 90 degrees, but beyond that they deviated largely from our calculated values. Meanwhile, to be sure, we measured the humidity again with silica gel by the gravimetric method. The silica gel readings show the same trend as the FTIR: they agree with our calculated values up to 90 degrees but not at higher temperatures.

Does anyone have an idea what the reason might be? I believe my equation for humidity is not valid at higher temperatures. I would like to hear feedback or suggestions for an equation that is valid over the whole temperature range.
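One candidate explanation, offered as an assumption rather than a diagnosis: Magnus-type saturation-vapour-pressure formulas are typically specified only up to roughly 60 °C, while the Arden Buck equation tracks the steam-table value (1013.25 hPa at 100 °C) much more closely. A quick Python comparison of the two:

```python
import math

def magnus(t_c):
    """Magnus form (WMO coefficients), hPa; intended for about -45..60 C."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def buck(t_c):
    """Arden Buck equation (1996 form), hPa; better behaved near 100 C."""
    return 6.1121 * math.exp((18.678 - t_c / 234.5) * (t_c / (257.14 + t_c)))

# The two formulas agree at moderate temperatures and drift apart above ~90 C:
for t in (25.0, 60.0, 90.0, 100.0):
    print(t, round(magnus(t), 1), round(buck(t), 1))
```

If your calculation used a Magnus-type fit, swapping in a wider-range formulation may remove most of the high-temperature deviation.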

Dear Sirs,

I am trying to estimate ETP via the Penman-Monteith equation. My wind data are expressed in km/24 hours, since they are measured with a run-of-wind anemometer. I would like to convert them into m s-1 in order to use them in the equation. Is it okay to multiply by 1000 to convert km into m and then divide the result by 86,400? I presume I may be underestimating the daily mean wind speed if I do so. I have data from another station in m s-1; is there any way to correct or improve the conversion using those data?
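The conversion described is exactly right and introduces no bias by itself: a run-of-wind total is a time integral of the speed, so dividing by the elapsed time gives the true 24-hour mean speed, which is what the daily Penman-Monteith form uses. A one-line sketch (the function name is mine):

```python
def run_of_wind_to_mps(km_per_day):
    """Convert a daily run-of-wind total (km per 24 h) to a mean speed (m/s)."""
    return km_per_day * 1000.0 / 86400.0

# A run of 86.4 km in 24 h corresponds to a mean speed of exactly 1 m/s:
print(round(run_of_wind_to_mps(86.4), 6))  # → 1.0
```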

Thank you very much indeed.

Regards,

I'm doing a Tauc plot of my NPs, and I'm getting a value of around -1.5E-19 eV.

This clearly makes no sense. I guess I'm making some mistake in the units of measure, but I can't find which.

Can anybody help me?

Thanks in advance.
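For what it's worth, values of order 10^-19 are what photon energies look like when expressed in joules rather than eV, so one possible slip (an assumption, not a diagnosis of this particular plot) is a missing division by the elementary charge. A quick check in Python, with an illustrative wavelength not taken from the question:

```python
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
e = 1.602176634e-19  # elementary charge, J per eV

wavelength_nm = 800.0  # illustrative value only
E_joule = h * c / (wavelength_nm * 1e-9)
print(f"{E_joule:.2e} J = {E_joule / e:.2f} eV")  # → 2.48e-19 J = 1.55 eV
```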

The aim of a measurement is the estimation of the measurand's true value and the respective uncertainty. When the measurand is obtained indirectly (through a functional relationship), what is the true value of the measurand: mu(Y) or f(mu(X1), mu(X2), ..., mu(Xn))? Why?

Where:

Y = f(X1, X2,..., Xn)

Y is the measurand

f is the functional relationship

X1, X2,..., Xn are the input quantities

mu(*) is the population mean of *

If f(X1, X2, ..., Xn) is a nonlinear function then, in general, mu(Y) is not equal to f(mu(X1), mu(X2), ..., mu(Xn)); and if the true value of the measurand is mu(Y), then the approximation mu(Y) ≈ f(mu(X1), mu(X2), ..., mu(Xn)) is another source of uncertainty that is not considered in the ISO GUM.
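The inequality mu(Y) ≠ f(mu(X1), ..., mu(Xn)) for nonlinear f is easy to demonstrate numerically. A minimal Monte Carlo sketch with the illustrative choice f(x) = x² and X ~ N(0, 1), for which mu(Y) = 1 while f(mu(X)) = 0:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=1.0, size=1_000_000)

f = lambda v: v ** 2        # a nonlinear measurement model Y = f(X)
mean_of_f = f(x).mean()     # estimates mu(Y) = E[f(X)]
f_of_mean = f(x.mean())     # f(mu(X))

print(round(mean_of_f, 2), round(f_of_mean, 2))  # → 1.0 0.0
```

The gap between the two numbers is exactly the effect described above; it shrinks as f becomes closer to linear over the spread of X.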

So far I have found MIDSS.org and Incamresearch.ca as the only helpful resources. If you know of any others, please let me know.

Hello

I have to use a hydrological model for which I require daily maximum and minimum temperature data, so can you please suggest links from which I can obtain these?

I contacted the Indian Meteorological Department (IMD) for the same, but they don't have the data for this location either.

I am interested, first, in how to fabricate a setup in which a wire moves at high speed, and then in how to measure the outside diameter of the wire using a laser scan micrometer.

I'm doing a study on the Chennai flood, and for this I need the annual rainfall for the past 20 years to understand the rainfall pattern.

I want to buy both micrometer slides. Where can I get them, and what might they cost?

With regards,

Shiva Nedle

**Thermal break system of windows and doors.** I want to know the temperature on the outer surface and on the inner surface of the glass. That can help me to determine the efficiency of the window.

An inline pH probe and an offline desktop pH probe, calibrated with the same buffers, show different pH values for the same solution at the same time. What can be the reason?

Inside a closed chamber, how can relative humidity be maintained using a bubbling technique?

**IUPAC kindly welcomes comments on its Provisional Recommendation "Guidelines for the use of atomic weights" by 31 Aug 2016.**

Standard atomic weights are widely used in science, yet the uncertainties associated with these values are not well-understood. This recommendation provides guidance on the use of standard atomic weights and their uncertainties. Furthermore, methods are provided for calculating standard uncertainties of molecular weights of substances. Methods are also outlined to compute material-specific atomic weights whose associated uncertainty may be smaller than the uncertainty associated with the standard atomic weights.
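To make the molecular-weight part concrete, here is a minimal Python sketch of one common treatment (an illustration of the general approach, not the IUPAC recommendation verbatim): an interval atomic weight is taken as a rectangular distribution, so its standard uncertainty is the half-width divided by √3, and element contributions combine in quadrature. The interval values for H and O are quoted from a recent IUPAC table and should be checked against the current one:

```python
import math

# Interval atomic weights (assumed values, for illustration):
H = (1.00784, 1.00811)
O = (15.99903, 15.99977)

def standard_weight(interval):
    """Midpoint and standard uncertainty of an interval atomic weight,
    treating the interval as a rectangular distribution."""
    lo, hi = interval
    return (lo + hi) / 2.0, (hi - lo) / (2.0 * math.sqrt(3))

def molecular_weight(composition):
    """composition: list of (interval, count) pairs. The n atoms of one
    element are fully correlated (factor n), different elements are
    treated as independent (quadrature sum)."""
    w = sum(standard_weight(iv)[0] * n for iv, n in composition)
    u = math.sqrt(sum((n * standard_weight(iv)[1]) ** 2
                      for iv, n in composition))
    return w, u

w, u = molecular_weight([(H, 2), (O, 1)])
print(f"M(H2O) = {w:.5f} +/- {u:.5f} g/mol")
```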

Please see the links below:

What are the pertinent measures of crowdfunding campaign success?

Let's assume I have three phenomena A, B, and C that I can measure. Furthermore, I know that P(A) = 0.7, P(B) = 0.2, P(C) = 0.1. Now I get a measurement which says that the observed phenomenon is C with probability 0.8. The question is: what is the probability that the phenomenon is A, B, or C, given these facts?
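If the 0.8 is read as a likelihood, this is a direct Bayes update. A Python sketch under one assumed sensor model (the 0.1/0.1 split of the remaining likelihood between A and B is my assumption; the answer depends on how the measurement's 0.8 is actually defined):

```python
priors = {"A": 0.7, "B": 0.2, "C": 0.1}

# Assumed sensor model: the reading "C" occurs with probability 0.8 when the
# true phenomenon is C, and (purely for illustration) 0.1 when it is A or B.
likelihood_of_reading_C = {"A": 0.1, "B": 0.1, "C": 0.8}

evidence = sum(priors[k] * likelihood_of_reading_C[k] for k in priors)
posterior = {k: priors[k] * likelihood_of_reading_C[k] / evidence
             for k in priors}
print({k: round(v, 3) for k, v in posterior.items()})
# → {'A': 0.412, 'B': 0.118, 'C': 0.471}
```

Note that despite the strong reading, the large prior on A keeps its posterior comparable to C's.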

The applications of the LSA for adjusting values are very wide. My interest here is limited to the field of quantitative measurement, and to two particular fields of high-accuracy measurement where it is used for computing adjusted values of the so-called "universal constants" (the CODATA task) and of the atomic masses (often called atomic weights) (the AME/AMDC task), for two purposes: to evaluate the mutual consistency of these (large) sets of values with a minimised associated uncertainty, and to provide a set of recommended values. I report at the end some basic references about these two frameworks and their use of the LSA.

The LSA is used for minimising, according to an L2 norm, the standard deviation of a set, by computing new values (or deviations from the original values, called "adjustments") for each member of the set of quantities, and the new uncertainty associated with each member of the set, generally lower thanks to the minimisation.

However, the system cannot provide 'absolute' adjusted values, since none of the values can be assumed to be 'exact'. In fact, at least one of the original values must be kept constant; so, in actuality, all the adjustments are relative to this member, taken as 'reference' (note that this does not generally mean 'exact'). Should another member be chosen as the fixed one, all adjustments would be different, with one peculiar characteristic: the differences between any two members of the set remain the same, irrespective of the choice of reference. Sometimes more than one member is kept fixed; I skip this case here for simplicity.

This ambiguity stands unless an additional assumption is made concerning the 'best' reference, 'best' according to a chosen criterion. This limitation arises directly from the fact that, in measurement, the 'true' value cannot be known; consequently, no objective way exists to state which member carries the correct numerical value, implying that its value should not be adjusted. In the case of the use of fundamental constants for the definition of measurement units, the additional assumption might consist of an independent way to estimate the minimisation of the discontinuity between the units before and after the change in definition, a discontinuity which should strictly be avoided.

In my opinion, the LSA is a sound method for evaluating the consistency of a set of values with the lowest associated uncertainty level, by taking advantage of the statistical properties of the larger overall set. On the contrary, when it comes to obtaining and recommending 'best' values for standard tables of nuclides or of fundamental constants, the fact that the LSA evaluation is biased by the arbitrary choice of the reference member(s) should be carefully taken into consideration: in my opinion this bias makes the method inappropriate for that purpose, compared with statistical means of obtaining the 'best value' for each member of the set. In addition, with the LSA a relationship is construed between all members of the set, which could conflict with the fact that they originally are, at least in part, independent of each other.
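The reference-invariance of the differences described above can be demonstrated with a toy adjustment; the three quantities and their slightly inconsistent difference measurements below are invented for illustration:

```python
import numpy as np

# Toy adjustment: three quantities x0, x1, x2 with measured pairwise
# differences. Rows of A encode d = x_i - x_j; the data are inconsistent
# (2.1 + 3.0 != 5.2), so a least-squares adjustment is needed.
A = np.array([[1., -1., 0.],
              [0., 1., -1.],
              [1., 0., -1.]])
d = np.array([2.1, 3.0, 5.2])

def adjust(fixed_index, fixed_value):
    """Least-squares adjustment with one member held fixed as reference."""
    free = [i for i in range(3) if i != fixed_index]
    rhs = d - A[:, fixed_index] * fixed_value
    sol, *_ = np.linalg.lstsq(A[:, free], rhs, rcond=None)
    x = np.empty(3)
    x[fixed_index] = fixed_value
    x[free] = sol
    return x

xa = adjust(0, 10.0)  # member 0 taken as reference
xb = adjust(2, 4.0)   # member 2 taken as reference

# The adjusted values differ, but the adjusted differences coincide:
print(np.round(xa - xa[0], 3))
print(np.round(xb - xb[0], 3))
```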

Some basic and latest references:

CODATA: http://physics.nist.gov/cuu/Constants/index.html, http://www.bipm.org/extra/codata/. Last adjustment: P.J. Mohr, B.N. Taylor and D.B. Newell, CODATA Recommended Values of the Fundamental Physical Constants: 2010, Rev. Modern Phys. 84 (2012) 1–94. LSA application: Cohen E R, Crowe K M and DuMond J W M 1957 The Fundamental Constants of Physics (Tamworth, UK: G. & J. Chesters); C. Eisenhart Spec.Publ. 300 NBS paper 4.5 (1961); F. Pavese, Metrologia 51 (2014) L1–L4.

AME, AMDC: http://www-csnsm.in2p3.fr/amdc/. Adjustments using LSA: A.H. Wapstra, G. Audi and C. Thibault, Nucl. Phys. A 729 (2003) 129, M. Wang, G. Audi, A.H. Wapstra, F.G. Kondev, M. MacCormick, X. Xu and B. Pfeiffer, Chinese Phys C 36 (2012) 1603.

Can a laser tachometer be used for rpm measurement in the following case?

The rotating device is within a Perspex column, while the rpm has to be measured from outside the column. (The column is not opaque.)

Hello everybody,

I want to know, in the kinematic calibration of e.g. a parallel robot with an external device (such as a laser tracker or vision system) used to measure the pose of the end effector, what accuracy for this device is acceptable?

Thanks to everybody who answers.

Dear Chemists and Physicists, I need a detailed procedure/SOP for the verification/performance monitoring of a calibrated digital balance using calibrated reference weights, to meet the requirements of ISO 17025.

In order to analyze the displacement, equipment that controls the amount of displacement has to be used in the setup, for example an xyz translational stage. But such a stage is limited to millimetre variation. I want to analyze displacement in the micrometre range. What other equipment can be used?

Any lessons learned? Things to prepare?

I am using AMOS to study a moderation effect in my model; however, I see different values for the total and direct effects between the standardised and non-standardised estimates. These values contradict each other and strongly affect the conclusions. Which one should I use, and why?

I want to measure the deflection of a cantilever beam in micrometres.

I want to measure the spectral response of solar radiation and its effect on solar PV technology. A spectroradiometer is a costly instrument for this measurement, so can anyone suggest an alternative method?

What is a common measurement standard for mouse ES cell/embryo (life and medical sciences) research?

DeSimone and the National Research Council committee state the following in “Convergence: Facilitating Transdisciplinary Integration of Life Sciences, Physical Sciences, Engineering, and Beyond”:

“Many believe that life and medical sciences have not focused as extensively as physics and engineering on developing common measurement standards and common guidelines for collecting data from biological samples. In order to move beyond information encoded in individual genomes to translational application, further attention to this challenge of standardization and reproducibility is required. Strategies adapted from the physics and engineering communities can contribute, although the complexity and individual variability of living organisms make measurement challenges in life and medical sciences unique.”

So, is this a legitimate problem that concerns life scientists explicitly or are engineers and physicists simply asking for too much because they can't appreciate the difficulty of measuring what we all should want to know?

Is a common measurement standard for ES cell research, etc., the best someone has already done or is it defined as the best we can achieve for some stated purpose? What is that purpose and the best we can achieve? Can there be one colligated, stated purpose and measurement standard for something like, "in vitro generation of hemangioblasts for rejuvenation and health"?

Would a "stem cell scientist", a "developmental biologist", a "cell biologist", a "materials engineer", a "network physicist", a "computer scientist", a "physiologist", a "tissue engineer", a "geneticist", a "reasonably informed citizen" and a "potential patient" (*i.e.*, all who investigate) agree this is what we want to know? Why bother? That is, what can we know in that future that we can't know now with current standards?

If we achieve this standard, would we even know what we're looking at? For example, if we saw regular flashing behavior associated with a construct that's not supposed to flash, how can we use that information? Who is responsible for assigning meaning if the behavior is novel to everyone investigating?

If we can't achieve that standard, whose problem would that be? Why should it be the biologists' problem? Isn't 21st century biology different in that it encompasses and synthesizes all disciplines?

Speculations are also welcome, so please... :)

Does anybody know any norms, regulations, good practices, scientific articles, or tips for the measurement of small, approximately circular, rough (laser-drilled) holes?

I'm using an imaging system with telecentric lenses and a telecentric light source, and I'm getting shadow images, but the edge is not fully sharp all around, as there are sometimes artefacts (particles, frozen drops of material) deeper in the hole that give me a blurred background. It is difficult even to choose the proper threshold in this situation. It's not a correct measurement when there is no clear edge. What can be done in such a situation?

I'm going to take a couple of such images along the axis of the hole and combine them in order to get an infinite-focus image, and then measure e.g. the maximum inscribed circle. Am I going in a good direction, or are there better solutions?
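For the maximum-inscribed-circle step itself, one common approach, assuming a segmented binary image of the hole, is the Euclidean distance transform: its maximum is the radius of the largest inscribed circle, and the location of the maximum is the circle centre. A self-contained sketch with a synthetic hole (the disc-with-notch geometry is invented to mimic an artefact protruding into the hole):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Synthetic binary image of a rough hole: a disc of radius 20 px with a
# slot-shaped artefact protruding into it from one side.
yy, xx = np.mgrid[0:101, 0:101]
hole = (yy - 50) ** 2 + (xx - 50) ** 2 <= 20 ** 2
hole &= ~((abs(yy - 50) < 2) & (xx > 60))

# The maximum of the distance transform gives the maximum inscribed circle:
dist = distance_transform_edt(hole)
radius = dist.max()
cy, cx = np.unravel_index(dist.argmax(), dist.shape)
print(round(float(radius), 1), (int(cy), int(cx)))
```

Note how the artefact pulls the inscribed circle away from the nominal centre and shrinks it below the nominal radius, which is exactly the robustness you want against blurred in-hole debris.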

I need to focus on clearly defined and measurable dimensions of citizen expectations of public services, in order to develop a questionnaire or an interview scheme.

I would like to record wind speed and direction with a small/medium-sized portable wind data logger (one that can be carried in a backpack). I would like to hear about your experience with portable wind loggers and will be happy to receive recommendations for different types.

Does anyone have experience with scanning rubber surface profiles (such as conveyor belts) using laser scanners (profilers or 3D), such as SICK, Riegl, or Leica? What is the practically achievable accuracy of the profile scanning? I've tried to scan one with a Leica P20, and there is noise of +/-3 mm at a short distance (3 m), much bigger than in the scanner's specification...

I have a thin-walled machined tube whose wall-thickness deviation I want to measure at different positions, but I don't know how. The specs: L = 500 mm, Ri = 58 mm, Ro = 60 mm.

I want to study the surface evolution of a fractured surface. I'm interested in its scaling behavior, so I want to calculate its roughness exponent from the 2D PSD. However, there are too many different expressions. Which one should I use?
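Whichever convention you adopt, it helps to sanity-check it on synthetic data. For an isotropic self-affine surface the full 2D PSD scales as q^(-2-2H) (1D profile PSDs instead scale as q^(-1-2H), which is one source of the conflicting expressions). A self-contained Python sketch that synthesizes a surface with a known Hurst exponent H and recovers it from the PSD slope:

```python
import numpy as np

rng = np.random.default_rng(2)
N, H = 256, 0.7   # grid size and the Hurst (roughness) exponent to recover

# Spectral synthesis of an isotropic self-affine surface: an amplitude
# ~ q^-(1+H) with random phases gives a 2D PSD scaling as q^(-2-2H).
qx = np.fft.fftfreq(N)[:, None]
qy = np.fft.fftfreq(N)[None, :]
q = np.hypot(qx, qy)
q[0, 0] = 1.0                     # avoid division by zero at q = 0
amp = q ** (-(1.0 + H))
amp[0, 0] = 0.0                   # remove the mean
phase = np.exp(2j * np.pi * rng.random((N, N)))
surface = np.fft.ifft2(amp * phase).real

# Recover H from the log-log slope of the 2D PSD over mid-range wavevectors:
psd = np.abs(np.fft.fft2(surface)) ** 2
mask = (q > 4.0 / N) & (q < 0.25)
slope = np.polyfit(np.log(q[mask]), np.log(psd[mask]), 1)[0]
H_est = -(slope + 2.0) / 2.0
print(round(H_est, 2))
```

Running your chosen expression through a test like this quickly shows whether its prefactors and exponent convention match your data pipeline.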

The bearing area curve was obtained when using conventional and wiper geometry in hard turning.

We have developed a novel method for imaging/scanning the surface of sidewall structures using conventional AFM equipment in combination with special AFM tips and new scanning mechanisms. In the attached image you see, as an example, the 3D representation of an AFM scan obtained on the sidewall of a trench that was etched into a Si substrate by means of the Bosch etching process. The sidewall scanning technique could definitely widen the range of applications for AFM technology. But which specific applications come to your mind? Could this be useful for your research, too?

Health Care Quality Measurement may involve making measurement tradeoffs. The attached presentation on AMI quality measurement describes some of these tradeoffs and some approaches to handling them. What has your experience been in dealing with these kinds of measurement challenges?

We are employing salt baths to generate stable humidity environments for the routine verification of RH instrumentation. This technique provides reproducibility within ±0.5% RH if the temperature is controlled within ±2.0 °C. But these salt baths take five to six hours to stabilize and are very cumbersome to operate.

Has anyone come across techniques that are easy to set up and could be used for the verification of RH instrumentation (without employing precision instrumented RH chambers)?

Is there a proper way to determine when and when not to add quantities in quadrature (i.e. taking the square root of the sum of the squares)? Or is it one of those black arts?

What follows is what I actually teach my students. I say it is partly a black art. But am I really correct, or is there a firmer underpinning I can give it?

a) First off, I tell my students that if quantities are vectors at right angles, you add them in quadrature. That's easy, and there is no problem seeing that this is the case simply by looking at the geometry of the situation. No black art yet; so far so good.

b) Second, imagine two Gaussian white-noise signals. Do we add these in quadrature or not? Initially it is hard for the student to see the actual geometry of the situation, but then I write down the formula for the correlation between the two noise signals. I then tell the students that they can imagine these two noise signals as vectors, where the magnitudes represent the rms amplitudes and the angle between the vectors represents the degree of correlation. Now they can see the geometry! So they understand why you add linearly when the two signals are fully correlated and why you add the noise in quadrature when it is uncorrelated: the vectors are at 90 degrees when the signals are uncorrelated. No problem, we are all happy. Perhaps it was a little bit of a stretch with the vector analogy, so this part is only a grey art so far.
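Point (b) can also be demonstrated numerically in a few lines; the rms values 3 and 4 in this Python sketch are arbitrary choices that make the 3-4-5 quadrature sum obvious:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
a = rng.normal(0.0, 3.0, n)         # noise signal with rms 3
b_uncorr = rng.normal(0.0, 4.0, n)  # rms 4, independent of a
b_corr = (4.0 / 3.0) * a            # rms 4, fully correlated with a

rms = lambda x: np.sqrt(np.mean(x ** 2))
print(round(rms(a + b_uncorr), 1))  # → 5.0  (quadrature: sqrt(3^2 + 4^2))
print(round(rms(a + b_corr), 1))    # → 7.0  (linear: 3 + 4)
```

Partial correlation gives every value between these two extremes, which is exactly the cosine-rule picture of the vector analogy.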

c) Third, now for the black art. Note that there are many situations in physics and engineering similar to what I am about to describe. I am giving just one illustrative example here. But I want to make the point that this type of question is ubiquitous. The example is this: imagine a photodiode. The diode has a response time determined not only by the transit time of carriers across the depletion region, but also by its circuit RC time constant. Recall that C is the junction capacitance, and R is any load resistance hanging off the diode. Now my students can easily calculate both the transit time and the RC time, with no problem. But when it comes to calculating the total response time, should they linearly add those two times or should they add the times in quadrature?

Unfortunately, as a lecturer I totally fail here because I have no nice geometric picture to give my students, like I did in cases (a) and (b). When we look in the literature, we are told those two terms must be added in *quadrature*. Does the literature tell us why? No.

So this is what I tell my students. I say that we cannot necessarily know, a priori, if these two quantities should be added in quadrature or not. We can *suspect* that we should add those times in quadrature because they come from two physically different origins that are apparently independent. But we do not really know at first glance if there is any degree of correlation or not. Therefore, we must go away and make empirical measurements to be really sure. I tell them it is a black art. We have no firm theoretical underpinning to decide the correct way, other than just going and doing the experiment to see if the quadrature or linear model fits the measurements better.

Am I right to say this, or can we claim some firmer principle?

Remember, I am looking for a general principle and not simply an answer to the photodiode case, which was only one of many examples.

Is it possible to theoretically predict when and when not to add in quadrature, or am I right that in many cases it can only be finally decided by what a real experiment tells you to do?