Calibration - Science topic
Calibration is a determination, by measurement or comparison with a standard, of the correct value of each scale reading on a meter or other measuring instrument; or determination of the settings of a control device that correspond to particular values of voltage, current, frequency or other output.
Questions related to Calibration
I have a photodetector with calibrated spectral response data. What does it mean to normalize a photocurrent spectrum with respect to the detector's spectral response, and how do I go about doing that?
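In practice, normalizing usually means dividing the measured photocurrent at each wavelength by the detector's calibrated responsivity, which converts the spectrum into one proportional to incident optical power. A minimal Python sketch, where the file names and column layout are assumptions:

```python
import numpy as np

# Hypothetical two-column text files: wavelength (nm), value
wl_meas, photocurrent = np.loadtxt("photocurrent.txt", unpack=True)   # measured photocurrent (A)
wl_cal, responsivity = np.loadtxt("responsivity.txt", unpack=True)    # calibrated response (A/W)

# Interpolate the calibration onto the measurement wavelengths
resp_interp = np.interp(wl_meas, wl_cal, responsivity)

# Dividing the photocurrent by the detector response gives a quantity
# proportional to the incident optical power at each wavelength
corrected = photocurrent / resp_interp

# Optionally rescale so the maximum is 1 for plotting
normalized = corrected / corrected.max()
```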
Hello!
Please, has anyone had trouble calibrating a Gamry PCI4-300 (model DC105) potentiostat on Windows XP using Framework 5.04? My calibration gets stuck and doesn't progress. Does anyone have any insight into the potential causes of this issue and possible solutions?
I'll attach some images.
Thank you.
Hi all,
I am designing a calibration kit for 2-port calibration. To design a 50 ohm load, we have decided on a 23 nm titanium thin-film resistor deposited by electron beam evaporation. In order to design the load with a specified length and width, we need to know the resistivity of the thin-film resistor.
If anyone has information on this, or can point to relevant papers, it would be a great help.
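Once a resistivity value is in hand (e.g., measured by four-point probe on a test deposition), the load geometry follows from R = ρ·L/(W·t). A minimal sketch in which the resistivity is purely a placeholder (bulk Ti is roughly 42 µΩ·cm, and a 23 nm evaporated film will typically be noticeably higher):

```python
# Sketch: size a 50-ohm thin-film load from an assumed resistivity.
# rho is a PLACEHOLDER (bulk Ti ~ 4.2e-7 ohm*m); measure your film's value,
# e.g. with a four-point probe, before committing to a layout.
rho = 4.2e-7          # resistivity, ohm*m (assumed)
t = 23e-9             # film thickness, m
R_target = 50.0       # ohm

R_sheet = rho / t                      # sheet resistance, ohm per square
squares = R_target / R_sheet           # required aspect ratio L/W
print(f"Sheet resistance: {R_sheet:.1f} ohm/sq, need L/W = {squares:.2f}")

# For a chosen width, the required length:
W = 50e-6                              # example width, 50 um
L = squares * W
print(f"For W = {W*1e6:.0f} um, L = {L*1e6:.1f} um")
```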
Thank you for your help!
Cheers,
Jojo
Hello, why would all my absorbances (at 490 nm) be about the same (~0.15 to 0.18) for my blanks and standards in my microplate reader? I have even tried the same blanks and standards on a different reader and got the same results. Also, my QC standards (made from a different stock than my calibration standards) are reading about the same as the blanks and calibration standards.
Dear Sir/Madam,
Kindly help me with this doubt: I am injecting a 50 ppm H2 gas standard for calibration, with an injection volume of 1 mL (1000 microlitres). I need to convert this ppm value to micromoles.
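If the 50 ppm is a mole fraction in the 1 mL injected volume, the conversion goes through the ideal gas law. A minimal sketch assuming roughly 25 °C and 1 atm:

```python
# Convert a 50 ppm (mole fraction) H2 standard in a 1 mL injection to moles,
# assuming ideal gas behaviour at 25 °C and 1 atm.
R = 8.314          # J/(mol*K)
T = 298.15         # K
P = 101325.0       # Pa
V = 1e-6           # 1 mL in m^3

n_total = P * V / (R * T)          # total moles of gas injected (~4.1e-5 mol)
n_H2 = 50e-6 * n_total             # 50 ppm of that is H2
print(f"Total gas: {n_total*1e6:.2f} umol; H2: {n_H2*1e9:.2f} nmol "
      f"({n_H2*1e6:.5f} umol)")
```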
Thank you,
Figure 1:
① First, by inputting hourly wind speed and tidal level data for a specific location over the course of a month into the model, I simulated the hourly wave heights. However, the model initially produced wave heights that were significantly higher than the observed values. Therefore, I would like to ask which parameters can be adjusted to fine-tune the model locally?
② Since one set of parameters cannot be applied to the entire calibration period, how can suitable calibration data and periods be selected? What factors should be considered to ensure the safety and reasonableness of the design wave height for the project?
③ During calibration, is it possible to use different model parameters for different situations?
Figure 2:
④ After that, I used wind field data from the ERA5 database, processed it into a DFS2 file, and applied the model to the site. However, the model results were very unreasonable, showing wave heights only during periods of high wind speed. What is the reason for this?
⑤ Finally, when there is a situation where the model calibration results are good but the design wave height is excessively high, how should this generally be handled? How can the reasonableness of the design wave height be assessed?
Can you recommend some articles on a position and attitude calibration method for the linear laser sensor in gear 3D measurement?
Hi Folks,
In this discussion, I will provide my Python code for everyone interested in working with DEM in PFC software.
The calibration process in DEM modeling is highly time-consuming, requiring researchers to continuously monitor their computers and adjust their models through trial and error.
This process can be significantly streamlined by developing a Python script that runs in both PFC3D and 2D software. This script can manage other model scripts, run the model, save images, export data, and even adjust micro-parameters with new inputs!
All you need to do is define a range of possible inputs and run the Python script. The script then spends a few hours feeding each set of inputs into the model, iteration by iteration. Finally, you can compare the outputs with the expected results.
I hope this code will assist you in your future DEM modeling endeavors.
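As a rough illustration of the sweep-and-compare pattern described above (not the poster's actual script), here is a minimal sketch in which run_model() is a hypothetical placeholder; in PFC it would wrap the model commands through PFC's embedded Python scripting:

```python
import csv
import itertools

def run_model(params):
    """Hypothetical placeholder: push micro-parameters into the DEM model,
    run it, and return the resulting macro-scale responses as a dict."""
    raise NotImplementedError

# Candidate micro-parameter ranges (values are illustrative only)
grid = {
    "effective_modulus": [1e9, 2e9, 4e9],
    "stiffness_ratio":   [1.0, 1.5, 2.0],
    "friction_coeff":    [0.3, 0.5],
}

# Laboratory targets to match (illustrative)
target = {"youngs_modulus": 3.2e9, "ucs": 45e6}

with open("sweep_results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(list(grid) + list(target) + ["misfit"])
    for combo in itertools.product(*grid.values()):
        params = dict(zip(grid, combo))
        result = run_model(params)                      # macro-scale outputs
        misfit = sum(abs(result[k] - v) / v for k, v in target.items())
        writer.writerow(list(combo) + [result[k] for k in target] + [misfit])
```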
We have a Knauer Smartline-1000 pump on our semi-prep HPLC. It suddenly displayed the message 'Calibration values and curves were destroyed' and stopped working. Does anybody have any clue about solving this issue?
Unfortunately we had an accident and the small metallic piece holding the magnet sensor for the plate positioning system broke. I could not find the spare part, and buying a whole fraction collector (even one sold for parts on eBay) is far beyond our budget. Just in case any of you are planning to throw away an old Frac-950, please let me know so we can perhaps fix ours using parts from yours.
Thanks!!!
Hello,
I am making an apparatus to measure soil respiration over a 24-hour period for the Haney Soil Health Test and am having some trouble determining what math needs to be done. The test originally used Solvita gel paddles to measure CO2 and reports values in CO2-C ppm; however, the creator of the test now recommends using NDIR or IRGA sensors for measuring CO2-C. I have made a device that goes into a mason jar lid and measures CO2 in ppm.
I am wondering what conversion I need to do to take the raw data from CO2 in ppm to CO2-C in ppm while accounting for the dimensions of the incubation jar (0.2365 L glass jar) and (I think?) the mass of dry soil that went into the test (40 g). I believe this paper (https://soilfertility.osu.edu/sites/soilf/files/imce/Protocols/Respiration%20Protocol%20-%20OSU%20Soil%20Fertility%20Lab%20%28Oct%202019%29.pdf) is on the right track and covers most of what I need to do, but it converts to a different value while also using a calibration method that I will not be using. I will be using a blank mason jar measuring the ambient CO2 level and subtracting that value from the unknown soil sample CO2 values to correct for background levels. I believe my next step will be to plug the raw CO2 (ppm) values into the ideal gas equation to account for those conditions, as shown in the linked paper, but I need to stay in ppm (a sketch of this conversion follows below).
This paper is what I am basing most of my protocol on because it is by the creator of the test that I am trying to run, Dr. Rick Haney, but I am unsure whether the soil respiration values he uses for his soil health test are just the raw data straight from the sensor or whether they take into account the volume of soil and the incubation jar.
Any suggestions or advice would be appreciated.
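As a rough illustration of the conversion described above, here is a minimal Python sketch assuming ideal gas behaviour; the temperature, pressure, headspace estimate, and CO2 readings are placeholders:

```python
# Convert a blank-corrected CO2 reading (ppmv in the jar headspace) to
# mg CO2-C per kg dry soil, assuming ideal gas behaviour.
R = 8.314          # J/(mol*K)
T = 296.15         # K, incubation temperature (placeholder, ~23 C)
P = 101325.0       # Pa (placeholder)

jar_volume_L = 0.2365
soil_volume_L = 0.030        # placeholder estimate of volume occupied by 40 g soil
headspace_m3 = (jar_volume_L - soil_volume_L) / 1000.0
soil_mass_kg = 0.040

co2_sample_ppm = 1200.0      # illustrative sensor reading
co2_blank_ppm = 420.0        # illustrative ambient blank
delta_ppm = co2_sample_ppm - co2_blank_ppm

n_co2 = delta_ppm * 1e-6 * P * headspace_m3 / (R * T)   # mol CO2 evolved
mg_C = n_co2 * 12.011 * 1000.0                          # mg of carbon
co2_c = mg_C / soil_mass_kg                             # mg C per kg soil ("CO2-C ppm")
print(f"{co2_c:.1f} mg CO2-C per kg soil")
```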
Thank you,
A crop simulation model (e.g., DSSAT, APSIM) was used to predict the long-term impacts of climate change on crop yields. The model was calibrated and validated using field experiment data and historical yield records. Future scenarios were simulated under different RCPs to evaluate potential adaptation strategies.
Hello everyone!
I did a study consisting of three groups of patients: a control group (n=30), treatment group 1 (n=60), and treatment group 2 (n=60). Then I ran qPCR to analyse the target genes and the reference gene. I have already calculated the ΔCt value for all samples.
My question is how to do the next steps, because I have read in some literature that the ΔΔCt calculation cannot be performed in the same way for paired samples from independent repeated experiments and for unpaired samples.
Should I average the ΔCT of all the samples in the control group (n=30) as the calibrator and calculate the ΔΔCt for all samples?
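For the unpaired design described above, one common approach is indeed to use the mean ΔCt of the control group as the calibrator and compute a ΔΔCt (and 2^-ΔΔCt fold change) for every sample. A minimal sketch with illustrative numbers:

```python
import numpy as np

def fold_change(dct_group, dct_control):
    """2^-ΔΔCt relative to the mean ΔCt of the control (calibrator) group."""
    calibrator = np.mean(dct_control)          # average ΔCt of the control group
    ddct = dct_group - calibrator              # ΔΔCt for every sample
    return 2.0 ** (-ddct)                      # per-sample relative expression

# Illustrative data (replace with your computed ΔCt values)
rng = np.random.default_rng(1)
dct_control = rng.normal(3.0, 0.4, 30)
dct_treat1  = rng.normal(2.2, 0.5, 60)

fc_treat1 = fold_change(dct_treat1, dct_control)
fc_control = fold_change(dct_control, dct_control)   # control scatters around 1
print(f"Treatment 1 mean fold change: {fc_treat1.mean():.2f}")
```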
Thanks in advance who will give me some suggestions and feedback.
Hey,
I would like to know more about HRM analysis. Can anyone assist me with analysing the data taken from this experiment?
I used the MeltDoctor reagent and calibrated the QuantStudio 3 with an HRM plate in advance. Attached you can see the results from my samples.
Thank you in advance.
Fatemeh
Is there a rule for ensuring that all sample concentrations fall within the calibration range when using GC-MS? Specifically, how can we determine the appropriate calibration range of standards to ensure that all our sample concentrations are within this range?
Hi, I am trying to perform a radiometric calibration of an ASTER image in ENVI. I watched a YouTube tutorial where they use the Radiometric Calibration tool, but when I try to do it on my computer the Radiometric Calibration tool is not displayed. Does anyone have an idea why? I have already restarted both the program and the computer, but it does not help.
Can you provide examples of specific scenarios where calibrated hemodynamic monitoring techniques would be particularly beneficial?
I have calibrated my GC-MS for the 16 EPA PAHs using available methods. However, the last three compounds (Indeno[1,2,3-c,d]pyrene, Dibenzo[a,h]anthracene, and Benzo[g,h,i]perylene) are not showing the best linearity compared to the others. Additionally, there is a decreasing trend in the response from the earlier compounds, starting at around 107 and decreasing to around 101 for the last compounds. What could be causing these calibration issues, and how can I improve them?
I started running SWAT-CUP with 100 simulations, but the error message (see the attached screenshot) says that the output files do not exist in the directory path, even though the calibration ran successfully.
I have a thermocouple which outputs a voltage level after signal conditioning. I need to convert it to the desired units in centigrade. Below is the formula I am using for the conversion. I need to prove that this formula ensures a uniform conversion of all thermocouple voltage levels to centigrade, such that 0 V corresponds to -200 centigrade and 10 V corresponds to 1500 centigrade.
Maximum voltage and minimum voltage are from DAQ after signal conditioning.
Maximum Reading Range and minimum reading range are values in centigrade.
We need to prove that the voltage range, say 0 V to 10 V, will be uniformly converted to the -200 to 1500 centigrade reading range.
Below is the formula for which we need a proof.
Precision Factor = (Maximum Voltage - Minimum Voltage) / (Maximum Reading Range - Minimum Reading range)
Desired output value in Centigrade = ((Input Voltage level - Minimum Voltage)/ Precision Factor) + Minimum Reading Range
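The formula above is a standard two-point linear mapping, so the proof amounts to checking the endpoints and noting that the slope is constant. A quick numerical check (this shows the mapping itself is uniform; whether the conditioned signal is truly linear in temperature is a separate question):

```python
# Check that the linear mapping sends 0 V -> -200 C and 10 V -> 1500 C,
# and is uniform (equal voltage steps give equal temperature steps).
V_MIN, V_MAX = 0.0, 10.0
T_MIN, T_MAX = -200.0, 1500.0

precision_factor = (V_MAX - V_MIN) / (T_MAX - T_MIN)   # volts per degree C

def to_centigrade(v):
    return (v - V_MIN) / precision_factor + T_MIN

assert abs(to_centigrade(V_MIN) - T_MIN) < 1e-9   # 0 V  -> -200 C
assert abs(to_centigrade(V_MAX) - T_MAX) < 1e-9   # 10 V -> 1500 C

# Uniformity: the slope is constant, so any fixed voltage step maps to the
# same temperature step anywhere in the range.
temps = [to_centigrade(v) for v in (0.0, 0.5, 1.0, 5.0, 5.5)]
assert abs((temps[1] - temps[0]) - (temps[4] - temps[3])) < 1e-9
print(precision_factor, temps)
```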
Hello, friends. Currently I am working on species distribution modelling using MaxEnt. I have run the model using occurrence data and climate data from WorldClim. Where can I define the calibration area (e.g., buffer zones, minimum convex polygons, enclosing rectangles), and how do I assess biases introduced during the calibration process of my model?
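The calibration (accessible) area is normally something you construct from the occurrence records yourself rather than something MaxEnt reports. A minimal sketch with geopandas/shapely, where the buffer distance is an arbitrary placeholder and buffering in geographic degrees is only a rough approximation:

```python
import geopandas as gpd
from shapely.geometry import Point

# Illustrative occurrence coordinates (lon, lat); replace with your records
coords = [(85.3, 27.7), (85.6, 27.9), (86.0, 28.1)]
occ = gpd.GeoDataFrame(geometry=[Point(xy) for xy in coords], crs="EPSG:4326")

# Buffer-based calibration area (0.5 degrees is a placeholder radius;
# reproject to a metric CRS for a proper distance-based buffer)
buffer_area = occ.buffer(0.5).unary_union

# Minimum convex polygon (convex hull of all points)
mcp = occ.unary_union.convex_hull

# Enclosing rectangle
bbox = occ.total_bounds    # (minx, miny, maxx, maxy)

gpd.GeoSeries([buffer_area], crs="EPSG:4326").to_file("calibration_area.gpkg")
```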
I am working in a diagnostic lab and am very confused about the calibrator, control, and standard that we use in our lab for quality assurance in disease diagnosis.
We run a Sciex X500R QTOF and currently tune it once a week. I have been told that negative tune has historically been poor on other systems. The peak intensities and widths are fine; however, the 520.9 m/z precursor ion dips in and out of the ±2 ppm range. I can't pinpoint what in the system is causing this, and is this something I should be concerned about?
The ICP-MS fails to tune every time it is tested and requires mass calibration. Sometimes tuning is not successful even after mass calibration.
Could any kind scientists give me some advice? I would be very grateful!
Hello everyone!
I am new to circular dichroism spectrometry and am running into a problem with our CD spectrometer (Aviv 202). One recommended calibration check is the ratio of the peaks at 192.5 and 290.5 nm for 1 mg/mL CSA in a 0.1 cm pathlength cuvette. Previously (many years ago) our machine recorded a 192.5 nm/290.5 nm CD ratio of 1.97 (29.6 mdeg). Currently I am recording a CD ratio of 1.21 (the 290.5 nm peak matches the original signal, but the 192.5 nm peak has changed).
As I'm reading online, the ratio is expected to be between 1.9 and 2.2.
What could be a source of this error? Lamp alignment issues? Some other calibration needing to occur? Lamp life?
The s/n ratio is acceptable, but a bit worse than the original test.
Thanks!
Danny G.
Second Component of the ElectroMagnetic Field (SCEMF)
The results of research concerning real physical phenomena of nature, adjusted for fundamental and basic calibrations that close the door to knowledge for humanity, are presented. It is suggested that this be recognized and revealed.
Everyone is familiar with the fundamental theorem of the Stokes and Helmholtz field theory, which describes liquid and gas flow. There are different mathematical approaches to describing it. Let us describe one of them, namely the integral-differential representation, which consists of two components: rot H + grad H* = J. The presented theorem is beyond question; it agrees with observations and experiments, and has no objections or contradictions.
Recognizing the difference between fields and their properties, and understanding that the mathematical description of the field theorem corresponds to well-known fields, the Coulomb calibration (gauge) should be considered. Based on experiments with iron filings, this calibration repeals one of the components of the electromagnetic field (described mathematically). As a result, of the two components, namely the CURL and the DIVERGENCE, only the CURL is used in modern science.
I am a fresh electrophysiologist working in brain slices. Previously I have only worked on extracellular signal detection and on patch clamp combined with voltage imaging in a dish.
I have recently joined a group that is working on neurons in the preoptic area near the hippocampus. The electrophysiologists working in this group left more than a year ago, and I am trying to repeat their experiments.
It took some time, but I started to get relatively good mouse brain slices. On the other hand, my neurons are dead within 1 hour. I am trying to patch the more roundish, smooth-membrane cells which do not have a visually exposed nucleus. For the first 10-20 minutes the neurons have nice spontaneous activity, but after 30 minutes they become depolarized. If I keep the cell hyperpolarized, I can evoke action potentials whose amplitude overshoots 0 mV.
The majority of cells have stable spontaneous activity if I keep the cell hyperpolarized. Depolarized cells have a resting potential of about -35 to -25 mV, and all membrane parameters look fine. The pipette offset is a bit high (about 50 mV), but that is just because I am using a low-Cl intracellular solution with an AgCl electrode inside.
I have cleaned the whole setup a few times, increased the speed of brain preparation, and checked pH and osmolarity (extracellular solution 300-305 mOsm; intracellular solution calibrated just before the experiment to be 11-15 mOsm lower than the extracellular).
Brain slices kept in my recording solution for 2-3 hours look relatively fine; cells do not shrink or expand. I have to admit that all cells look a bit roundish after 1 hour.
Based on observation the cells look intact and healthy; these cells just ''give up on life''. I am constantly supplying carbogen to the brain slices. Slicing and recording solutions are made weekly. The digitizer was calibrated a few weeks ago.
I will accept any suggestions.
Has knowledge of driving behaviour simulation, calibration, and model evaluation.
Kind regards
Hello,
I am trying to do a methanol calibration in the gas phase.
What I did was vaporize liquid methanol in a vial, then take some amount with a syringe and inject it. I am getting strange data; my calibration curve is not linear. Any suggestions on better ways to do this?
Good day, could you suggest a reference method for micrometer calibration?
Thanks in advance
Stefano
Respected All,
I actually want to know how to analyse Mössbauer spectroscopy data. We have received only two data files: one is a FOL file and the other is a calibration data file. Could anyone here please help me?
I am a very new ICP-OES and ICP-MS user and have had a problem running my base metal samples on the ICP-MS. For a reason I don't know, the calibration and analytical results are full of poor RSDs. The nebulizer pressure is fine; I have cleaned and swapped the cones several times and cleaned the extraction lens and the humidifier, but could not obtain a good, clean result. Surprisingly, the calibration came out beautifully when I ran it without the humidifier; however, turning on the humidifier resulted in poor output. I assume, and am pretty confident, that there is a He gas leak inside the Faraday box. This is an earnest request to the ICP experts on the platform to suggest a possible solution to this problem and enlighten me.
Thank you in advance.
Short Course: Statistics, Calibration Strategies and Data Processing for Analytical Measurements
Pittcon 2024, San Diego, CA, USA (Feb 24-28, 2024)
Time: Saturday, February 24, 2024, 8:30 AM to 5:00 PM (Full day course)
Short Course: SC-2561
Presenter: Dr. Nimal De Silva, Faculty Scientist, Geochemistry Laboratories, University of Ottawa, Ontario, Canada K1N 6N5
Email: ndesilva@uottawa.ca
Abstract:
Over the past few decades, instrumental analysis has come a long way in terms of sensitivity, efficiency, automation, and the use of sophisticated software for instrument control and data acquisition and processing. However, the full potential of such sophistication can only be realized with the user’s understanding of the fundamentals of method optimization, statistical concepts, calibration strategies and data processing, to tailor them to the specific analytical needs without blindly accepting what the instrument can provide. The objective of this course is to provide the necessary knowledge to strategically exploit the full potential of such capabilities and commonly available spreadsheet software. Topics to be covered include Analytical Statistics, Propagation of Errors, Signal Noise, Uncertainty and Dynamic Range, Linear and Non-linear Calibration, Weighted versus Un-Weighted Regression, Optimum Selection of Calibration Range and Standard Intervals, Gravimetric versus Volumetric Standards and their Preparation, Matrix effects, Signal Drift, Standard Addition, Internal Standards, Drift Correction, Matrix Matching, Selection from multiple responses, Use and Misuse of Dynamic Range, Evaluation and Visualization of Calibrations and Data from Large Data Sets of Multiple Analytes using EXCEL, etc. Although the demonstration data sets will be primarily selected from ICPES/MS and Chromatographic measurements, the concepts discussed will be applicable to any analytical technique, and scientific measurements in general.
Learning Objectives:
After this course, you will be familiar with:
- Statistical concepts, and errors relevant to analytical measurements and calibration.
- Pros and cons of different calibration strategies.
- Optimum selection of calibration type, standards, intervals, and accurate preparation of standards.
- Interferences, and various remedies.
- Efficient use of spreadsheets for post-processing of data, refining, evaluation, and validation.
Access to a personal laptop during the course would be helpful for participants, although internet access during the course is not necessary. However, some sample and worked-out spreadsheets and course material will need to be distributed (emailed) to the participants the day before the course.
Target Audience: Analytical Technicians, Chemists, Scientists, Laboratory Managers, Students
Register for Pittcon: https://pittcon.org/register
I have followed the calibration wizard on a DSC Q200 thermal analyzer twice, but the enthalpy values for an exotherm of the test sample are still 5-6 times lower than expected. What is wrong?
Can the raw indium data be used to assess the problem?
For LC-MS/MS, the setup solution is often used for mass calibration.
The Unscrambler X is a commercial software product for multivariate data analysis. It is used for calibration of multivariate data, often for analytical data such as near-infrared and Raman spectroscopy, and for the development of predictive models for use in real-time spectroscopic analysis of materials.
Hello everyone,
I am working on calibration of a GPC/SEC system with multiple detectors, and it requires known parameters such as the , its concentration, dn/dc, extinction coefficient, and intrinsic viscosity to be input into the software. I am just wondering if anyone has suggestions for which standard to use and where to find this information in the literature.
These parameters are solvent- and temperature-dependent; I have been using 30 or 40 degrees for temperature, and the eluent is either phosphate-buffered saline or water. Could anyone kindly advise me on which standard to use and where to obtain these parameters to perform the system calibration, in order to obtain absolute molecular weights of my samples?
Thank you,
Tina
Currently, I have a global variable array and I can plot it in the calibration window, but I want to plot each data sample with a delay of 2.5 milliseconds, i.e. after one data sample the next should be plotted 2.5 milliseconds later.
Is it possible to do this in CANape only, without external software such as MATLAB/Simulink or Python?
Dear all,
I am using SWAT+ to model a catchment in the Pyrenees, and after running the model, when I start to analyse the results, I get values of flow_out that are far too low (0.0-something m3/s, when the observed flow is 5 to 100 m3/s), so my question is:
How can I improve the output flow? Because with this much difference, calibration won't be enough.
Thank you all
Sample type - Polymer granules
Problem faced - B, Al and Zn not giving a proper response in the standard calibration analysis (5 ppb, 10 ppb, 15 ppb).
Instrument used - ICP-MS
In analytical chemistry, a linear model is developed at multiple concentration levels with the goal of predicting the target analyte concentration in an unknown sample. Will the model prediction favour a concentration if more calibration samples at that concentration level are used in model development? I have not found a literature article on this topic.
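Ordinary least squares minimises the total squared residual, so replicating standards at one level effectively up-weights that level. A small simulation (with an assumed slope and noise model) illustrates the effect on back-calculated concentrations at the low end of the range:

```python
import numpy as np

rng = np.random.default_rng(0)
true_slope, true_intercept = 2.0, 0.0

def noisy_signal(conc):
    # heteroscedastic noise: absolute error grows with concentration (assumed)
    return true_slope * conc + true_intercept + rng.normal(0, 0.02 * conc + 0.05)

levels = np.array([1, 2, 5, 10, 20, 50], dtype=float)
balanced = levels
unbalanced = np.concatenate([levels, np.full(9, 50.0)])   # 50 replicated 10x

def low_end_spread(design, check_conc=2.0, n=5000):
    errs = []
    for _ in range(n):
        slope, intercept = np.polyfit(design, noisy_signal(design), 1)
        est = (noisy_signal(np.array([check_conc]))[0] - intercept) / slope
        errs.append(est - check_conc)
    return np.std(errs)

print("spread at 2 units, balanced design:  ", low_end_spread(balanced))
print("spread at 2 units, unbalanced design:", low_end_spread(unbalanced))
# Replicating the high standard weights the (noisier) top of the range more
# heavily, which tends to worsen back-calculated results at the low end
# unless weighted regression is used.
```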
"Dear ResearchGate Community,
I am currently working on a project that involves camera calibration using OpenCV. My goal is to achieve precise calibration by incorporating physical measurements from a ruler or another measuring tool. Can anyone provide insights, tips, or a step-by-step guide on how to perform camera calibration in OpenCV while incorporating real-world measurements? Your expertise and guidance would be greatly appreciated.
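One common way to tie OpenCV calibration to real-world measurements is to print a checkerboard, measure its square size with the ruler, and express the object points in those units. A minimal sketch where the pattern size, square size, and image path are placeholders:

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)            # inner corners of the checkerboard (placeholder)
square_size = 24.0          # measured square edge in mm (placeholder, from your ruler)

# 3D reference points of the board in real-world units (z = 0 plane)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_size

objpoints, imgpoints = [], []
for fname in glob.glob("calib_images/*.jpg"):        # placeholder path
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        objpoints.append(objp)
        imgpoints.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
# Because objp is expressed in millimetres, the translation vectors (tvecs)
# come out in millimetres too, tying the calibration to physical scale.
```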
Thank you in advance for your assistance.
An aqueous solution has no clear peak in its absorbance spectrum, only an overall proportional increase in absorbance across 200-800 nm with increasing concentration. Is it okay to choose any wavelength for calibration as long as the calibration is linear?
I'm modelling a clamped-clamped beam with a uniform load. In order to see when collapse occurs I'm increasing the load to see when Abaqus aborts. This is working, except that the collapse load is overestimated by 700-800 kN in this case. I have calibrated the time increment and the mesh, so the discrepancy should not be related to these.
Then I looked at the stress distribution in the cross-section and found that the stress actually increases above the yield point at the support (see picture), so I wonder if there is a correlation here. I mean, if the beam takes more stress after the plastic limit, then the collapse load will be overestimated, right? :)
So how do I tell Abaqus not to go over 355 MPa, which is the yield stress?
Hello everyone, I have a question about the thresholds to use in order to calibrate a 5-point Likert scale for an analysis in fsQCA. If you have some references regarding these thresholds applied to a 5-point Likert scale (I found some for 7-point scales), it would be very helpful. Thank you.
I simulated a calibration laboratory using the MCNP code. The source in the irradiator is Co-60, which emits two gamma rays (1.17 and 1.33 MeV) per decay. The output provided the ambient dose equivalent values at the calibration points. Now, I need to determine the ambient dose equivalent rate using the corrected activity of my source. I followed a methodology similar to the one used for Cs-137, which I successfully validated with experimental data from that laboratory. However, for Co-60 it is not yielding the expected results, and I have not yet identified the issue with my analysis.
To obtain H*(10), the chosen tally was F5, and the conversion factors from ICRP 74 were applied using DE/DF cards on the data card. The tally output is H*(10) per source particle (NPS).
Attached are photos of my methodology to aid in understanding my question.
The methodology of the work, whose photo is attached, was also tested; however, it did not yield results that could be validated by experimental data.
(This work is referenced with the attached photo of its cover)
Hi! I need some help with the analysis of AAS results.
We're analysing the Ca and Mg content of soil and water samples. As a means of determining the amount of metals in ppm (mg/L and mg/kg), we conducted both the standard calibration method (SCM) and the standard addition method (SAM). We had already made a standard calibration curve (absorbance vs. concentration) for the SCM beforehand.
Sample details: SCM
1. 15 g soil digested to 100 mL (acid = aqua regia); 500 mL of each water sample digested to 100 mL
2. We took a 50 mL aliquot of each, then diluted it to 100 mL
3. Direct measurement of sample was taken using respective lamp, then recorded.
4. ppm in mg/L of metal in sample was determined using SCM curve with x=(Absorbance-b)/m
5. Conversion to ppm in mg/kg for soil: I'm unsure how to go about this and where to include the dilution factor of 2 (see the sketch below)
Sample details: SAM
1. flask volume = 25mL, solvent = aqua regia, standard conc. = 200 mg/L
2. A sample volume of 0.100 mL was taken from the 50 mL aliquot diluted to 100 mL (we used a micropipette for accuracy; due to very high absorbance upon spiking with standard, our professor did not allow us to use aqua regia in larger quantities because of safety issues, so we were not able to dilute the samples further)
3. sample spiking: 6 solutions with 0, 0.1, 0.2, 0.3, 0.4, 0.5 mL each of 200 mg/L standard
4. We measured the absorbance and plotted standard addition curves (absorbance vs. amount of added standard)
5. overall, the SAM dilution factor is (100/50)*(25/0.1) = 500
6. mg/L of metal in sample = -(-b/m)(dilution factor)
7. Conversion to ppm in mg/kg for soil: I'm also unsure how to go about this and where to include the dilution factor, or if I even need to include it
We had the calculations worked out beforehand; however, since the AAS in our institution broke, we took a pause from the experiment, and some handwritten solutions were lost 😅. We also lost a lot of volume for some water samples since we had to filter some more than once, so in one of our water samples the dilution factor is (100/15.5)*(25/0.1) and the mg/L of Ca is 2000+, which doesn't match up with our other samples.
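For items 5 and 7 above, here is a minimal Python sketch of the mg/L → mg/kg step, assuming the only corrections needed are the aliquot dilution factor and the digest-volume-to-soil-mass ratio (the measured value is illustrative):

```python
# SCM example: 15 g soil digested to 100 mL, 50 mL aliquot diluted to 100 mL
measured_mg_per_L = 1.80      # illustrative AAS reading from the SCM curve
dilution_factor = 100 / 50    # = 2, aliquot dilution
digest_volume_L = 0.100
soil_mass_kg = 0.015          # 15 g

conc_in_digest = measured_mg_per_L * dilution_factor   # mg/L in the digest
total_mg = conc_in_digest * digest_volume_L            # mg of metal extracted from the soil
mg_per_kg = total_mg / soil_mass_kg                    # ppm on a dry-soil basis
print(f"{mg_per_kg:.1f} mg/kg")

# SAM: the overall dilution factor of (100/50)*(25/0.1) = 500 brings the
# extrapolated concentration back to the original digest, after which the
# same two steps (x digest volume, / soil mass) apply.
```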
Thanks in Advance!
I have designed a polymeric hydrogel and run BET on it, degassing at 120 °C for 16 hours, but the surface area comes out negative. Please help me out: what should I do?
(Note that there is no gas leakage and the instrument is calibrated.)
We have tried a couple of companies and they are so bad! Fast turnaround will be a plus, but we want the job done correctly!
Thank you
I am calibrating my Gilson pipette following the manufacturer's instructions - the adjustments are done when the pipette is set to a low volume (i.e. for a P1000 pipette I'm doing the adjustment at 100 µL).
After getting an accurate measurement at low volume, I check the accuracy at high volume and it's not accurate at all. If I try to adjust at high volume and then check the accuracy at low volume, it's again not accurate.
I have read several formal instructions on how to calibrate a pipette, and all of them mention checking high and low volume for accuracy while doing the adjustments at low volume.
I will appreciate any tips or suggestions on how to successfully calibrate my pipettes at both high and low volumes.
In QCA, you sometimes find configurations with 0 unique coverage. I like to think of them as merely artefacts of the data that don't mean anything. For example, I calibrated N=14 cases crisp and got two configurations. I also calibrated the cases fuzzy and then found three additional configurations, each with 0 unique coverage. I interpret this as: Configurations with 0 unique coverage do not add to the explanation and can be (should be) ignored. What do you think?
I am performing some DSC (PerkinElmer 8500) measurements (same materials, same mass, same program). Following calibration, I encounter a different baseline every day; the baseline was flat only on the first day. Any idea what's happening? Does the instrument need to be calibrated every day?
Hi,
I have calibrated a snow/glacier-fed mountainous watershed.
The calibration results are good, with R2 and NSE > 0.80, p-factor 0.77, r-factor 0.71, and PBIAS 7.5.
I have got the following values.
v__SFTMP.bsn -1.750000
v__SMTMP.bsn -5.850000
v__SMFMX.bsn 1.100000
v__SMFMN.bsn 2.100000
v__TIMP.bsn 0.083333
Is it Ok to have SFTMP < SMTMP and similarly SMFMX < SMFMN for a watershed in the Northern Hemisphere?
Hi,
I am trying to calibrate the full-cell OCV of the DFN physics-based model of an NMC532 cell. Even with multiple optimizations, the middle part of the curve doesn't match the experimental data. Can anyone suggest possible causes or solutions?
Attached is the comparison of exp. and simulated OCV curves obtained after optimization.
Hello, I'm trying to calibrate a material I tested in real life for Abaqus, but the program keeps displaying an error whatever I do. I even converted the stress-strain curve from engineering to true stress/strain, but it didn't solve the problem. Please help!
Dear all scientists and researchers,
Does anybody have experience and knowledge of automating distributed HEC-HMS, including automatic calibration?
We can exchange our knowledge to be applied to several river basins with different climatic conditions throughout the world.
Best regards,
Naser Dehghanian
I have reviewed various methods for calibration and their calculations. I would like to confirm the precise differences between the following terms: Error allowance, standard error estimate, tolerance, and accuracy. Is there anyone available for a discussion on this topic? Thank you.
In several reports related to electrocatalysis, reference electrodes (such as Ag/AgCl or Hg/HgO) were used to study the electrochemical activity of the respective materials. The obtained potentials were converted to potentials vs. RHE using the equation E(RHE) = E(Hg/HgO) + E°(Hg/HgO) + 0.0591·pH (for the Hg/HgO electrode). However, some reports prefer calibrating the Hg/HgO electrode in a H2-saturated electrolyte solution (as shown in the attached references). Which of the two methods is more appropriate?
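For reference, the tabulated-E° conversion quoted above can be written as a one-liner; the calibration-in-H2 approach instead measures the offset directly against a hydrogen-evolving Pt electrode and replaces the E° term with that measured value. A small sketch (the E° value used here is the commonly quoted one for roughly 1 M KOH and is an assumption; check your electrode's specification):

```python
def to_rhe(e_measured, ph, e0_ref=0.098):
    """Convert a potential measured vs. Hg/HgO (in volts) to the RHE scale.
    e0_ref: standard potential of the Hg/HgO reference vs. SHE (assumed
    ~0.098 V for a 1 M KOH filling solution; check your electrode's spec,
    or replace it with the offset measured in H2-saturated electrolyte)."""
    return e_measured + e0_ref + 0.0591 * ph

# Example: 0.55 V vs. Hg/HgO at pH 14
print(f"{to_rhe(0.55, 14):.3f} V vs. RHE")   # ~1.475 V
```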
Dear fellows,
Maybe you have done interesting measurements to test some model?
I can always use such data as examples and tests for my regression analysis software, and it's a win-win, since I might give you a second opinion on your research.
It's important that I also get the imprecision (measurement error/confidence interval) on the independent and dependent variables. At the moment, my software only handles one of each, but I'm planning to expand it to more independent variables.
Thanks in advance!
What is the step-by-step procedure for calibrating the APSIM crop model once the crop, soil, management, and weather data have been collected during the experiment?
Hello, I am a student in analytical chemistry. I am supposed to prepare samples, quality controls, and a calibration serial dilution for a forensic project working on larvae and flies (for quantification of benzodiazepines).
Would you please correct what I wrote here, including the specific volumes and concentrations?
Sample prep:
The collected sample with matrix is spiked with the target analytes and RS (recovery standard)
The sample is extracted (prepared for analysis)
IS (internal standard) is spiked before the analysis
Cal prep:
Calibration standards (a mixture of target analytes and RS) are prepared by serial dilution
IS is added before the analysis
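A minimal sketch of the serial-dilution arithmetic (C1·V1 = C2·V2) behind the calibration standards; every concentration, volume, and the number of levels is a placeholder to be replaced by the method's actual values:

```python
# Serial dilution plan for calibration standards (all values are placeholders).
stock_conc = 10_000.0      # ng/mL working stock of target analytes + RS
top_level = 500.0          # ng/mL, highest calibrator
dilution_step = 2.0        # each level is half the previous one
n_levels = 7
final_volume = 1.0         # mL per calibrator

# Volume of stock needed for the top calibrator (C1*V1 = C2*V2)
v_stock = top_level * final_volume / stock_conc
print(f"Top level: {v_stock*1000:.0f} uL stock + "
      f"{(final_volume - v_stock)*1000:.0f} uL blank matrix")

conc = top_level
for level in range(2, n_levels + 1):
    conc /= dilution_step
    # transfer final_volume/dilution_step of the previous level, make up to final_volume
    v_transfer = final_volume / dilution_step
    print(f"Level {level}: {conc:.1f} ng/mL  "
          f"({v_transfer*1000:.0f} uL of previous + "
          f"{(final_volume - v_transfer)*1000:.0f} uL blank matrix)")
# IS is then spiked into every calibrator and sample just before analysis.
```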
I have been trying to optimize gene expression qPCR assays for my genes that are already set up in my lab. However, when I run the PCRs, I am getting later Ct values than expected, and thus my standard curve suffers from non-linearity and low efficiency. I also see that my Ct values are increasing day by day for the same set of primers and the same dilution series of the positive control.
Things I have tried:
- Use fresh reagents and plastic ware - primers, SYBR, positive control (human reference RNA converted to cDNA), DEPC treated water, filter tips, vials, fumigate working area/ lab.
- Recent calibration of qPCR instrument.
- Reagents have minimal freeze-thaw cycles.
Observations:
- Single melting temp peak, mostly.
- R2 of >0.9 during standard curve generation.
- No NTC contamination.
- Cts for the less dilute standards (10, 5 and 1 ng/µL) are usually as expected. Mostly the dilutions below 0.5 ng/µL are very late, and thus the standard curve has a steeper slope (< -4.0) and lower efficiency (50-60%).
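For reference, efficiency follows from the standard-curve slope as E = 10^(-1/slope) - 1, with a slope of about -3.32 corresponding to 100%. A quick sketch with illustrative Ct values shows how late low-end Cts drag the slope and efficiency down:

```python
import numpy as np

# Illustrative standard curve: log10(input cDNA in ng) vs. mean Ct
log_conc = np.log10([10, 5, 1, 0.5, 0.1, 0.05])
ct = np.array([22.1, 23.2, 25.8, 27.4, 30.9, 32.8])     # illustrative values

slope, intercept = np.polyfit(log_conc, ct, 1)
r2 = np.corrcoef(log_conc, ct)[0, 1] ** 2
efficiency = 10 ** (-1 / slope) - 1

print(f"slope = {slope:.2f}, R^2 = {r2:.3f}, efficiency = {efficiency:.0%}")
# A slope much steeper than -3.32 corresponds to efficiency well below 100%,
# which is exactly what late Cts at the low-concentration end produce.
```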
TIA,
Nikhil
Hi, I plan to work on ICP-MS, so I am learning from the basics. Can someone please recommend a standard or reference I can consult for sample preparation (digestion, dilution methods, etc.), the calibration procedure, and other fundamental knowledge? Even if you can explain a bit of it, that would be very helpful to me.
I will be using an Agilent 7900 for testing. Though I found an SOP for the instrument, it was not much help with sample preparation for a novice like me.
I will be working with water samples to determine metal ion concentrations. Would you prefer a single multi-element calibration standard or separate standards for each element? Does it matter anyway?
There are several answers on how to calibrate an Ag/AgCl reference electrode; is the procedure similar to the calibration of Ag/AgCl, or is there any other way to calibrate the reference electrode?
Hi,
After performing my annual cleaning on the QE and calibrating, it passes the positive calibration, but when performing the basic (negative) analyzer accuracy check it fails for mass resolution dependency: 'too high @ R17K'.
I have made a new cal mix, optimized the probe position, increased the flow rate from the syringe, and fitted a new HESI needle, and I am currently trying to optimize gas flows to see if that will help.
Has anyone seen this before/ do you know what I can do to resolve this?