Interpolation - Science topic

Explore the latest questions and answers in Interpolation, and find Interpolation experts.
Questions related to Interpolation
  • asked a question related to Interpolation
Question
1 answer
Dear Scholars,
Assume a mobile air-pollution monitoring strategy in which a network of sensors moves around the city, specifically sensors that quantify PM2.5 at a height of 1.5 meters, with each measurement run lasting about 20 minutes. Clearly, with this strategy we trade temporal resolution for spatial resolution.
If we would like to perform spatial interpolation to "fill" the empty spaces, what would you recommend? What do you think about it? What approach would you take?
Regards
Relevant answer
Hi,
If you expect some local variation, and hence non-stationary behavior in your data, Empirical Bayesian Kriging will probably be the one. This assumes you have plenty of data that is not left-skewed, so that a Gaussian distribution can be assumed.
In any case, I recommend carrying out a cross-validation with a study of RMSE and AMSE, as you can see in Pellicone (2019; 10.1080/17445647.2019.1673840) or in Ferreiro-Lera (2022; 10.1080/17445647.2022.2101949).
I hope I have been helpful.
All the best,
Giovanni.
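As a concrete version of that cross-validation suggestion: a minimal leave-one-out RMSE sketch in Python, assuming point observations in NumPy arrays x, y, pm25. It uses PyKrige's ordinary kriging as a stand-in, since Empirical Bayesian Kriging is an ArcGIS-specific method; the function name is illustrative.

import numpy as np
from pykrige.ok import OrdinaryKriging

def loo_rmse(x, y, z, variogram_model="spherical"):
    # Leave-one-out cross-validation: refit without point i, predict at point i.
    errors = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i
        ok = OrdinaryKriging(x[mask], y[mask], z[mask],
                             variogram_model=variogram_model)
        pred, _ = ok.execute("points", x[i:i + 1], y[i:i + 1])
        errors.append(pred[0] - z[i])
    return np.sqrt(np.mean(np.square(errors)))

The same loop works for comparing variogram models: the model with the lowest leave-one-out RMSE is usually the safer choice.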
  • asked a question related to Interpolation
Question
5 answers
I have two points along a straight-line path. Each point has weather forecast data associated with it in the form:
[forecast time, direction, speed]
I need to generate direction and speed predictions at regular intervals (e.g. 10 m or 10 s) along the route between these points.
I have seen a lot of methods using four points for speed interpolation, but these do not work well with only two points.
Any suggestions?
Relevant answer
Answer
So this has plagued me for a while. The short answer for anyone following this is to use linear interpolation or something like a two-point cosine interpolation (what we settled on).
The problem, as intuitive as it may be, is that this assumes the position you are interpolating lies exactly on the line between points A and B. Secondly, interpolating wind speed may be easy, but wind direction is not. A fairly simple solution is to convert from direction and magnitude to a vector and interpolate that way.
The better interpolation method that we finally settled on was just to hit our wind server with more traffic so we could do four-point interpolation.
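A small sketch of the vector approach in Python; the function name is illustrative, and it treats direction as a compass bearing in degrees (the math is the same for the meteorological "blowing from" convention as long as you stay consistent):

import numpy as np

def interp_wind(f, dir1_deg, spd1, dir2_deg, spd2):
    # f in [0, 1]: fractional position between point A (f=0) and point B (f=1).
    # Convert each (direction, speed) to u/v components, interpolate, convert back.
    th1, th2 = np.deg2rad(dir1_deg), np.deg2rad(dir2_deg)
    u = (1 - f) * spd1 * np.sin(th1) + f * spd2 * np.sin(th2)
    v = (1 - f) * spd1 * np.cos(th1) + f * spd2 * np.cos(th2)
    speed = np.hypot(u, v)
    direction = np.rad2deg(np.arctan2(u, v)) % 360.0
    return direction, speed

Note that component interpolation lets the magnitude dip when the two directions differ; if that is undesirable, interpolate speed separately and take only the direction from the vector.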
  • asked a question related to Interpolation
Question
3 answers
Dear all, I am performing molecular dynamics with NAMD, and in the production stage (100 ns) my script generates trajectory files > 80 GB. How can I avoid producing a huge DCD file in the NAMD production step? Can anyone give a hint?
structure step3_input.psf
coordinates step3_input.pdb
set temp 310;
outputName step5_production; # base name for output from this run
# NAMD writes two files at the end, final coord and vel
# in the format of first-dyn.coor and first-dyn.vel
set inputname step4_equilibration;
binCoordinates $inputname.coor; # coordinates from last run (binary)
binVelocities $inputname.vel; # velocities from last run (binary)
extendedSystem $inputname.xsc; # cell dimensions from last run (binary)
dcdfreq 100;
dcdUnitCell yes; # if yes, the DCD files will contain unit cell
# information in the style of CHARMM DCD files.
xstFreq 100; # XSTFreq: control how often the extended system configuration
# will be appended to the XST file
outputEnergies 100; # 100 steps = every 0.2 ps
# The number of timesteps between each energy output of NAMD
outputTiming 100; # The number of timesteps between each timing output shows
# time per step and time to completion
restartfreq 100; # 100 steps = every 0.2 ps
# Force-Field Parameters
paraTypeCharmm on; # We're using charmm type parameter file(s)
# multiple definitions may be used but only one file per definition
parameters toppar/par_all36m_prot.prm
parameters toppar/par_all36_na.prm
parameters toppar/par_all36_carb.prm
parameters toppar/par_all36_lipid.prm
parameters toppar/par_all36_cgenff.prm
parameters toppar/par_interface.prm
parameters toppar/toppar_all36_moreions.str
parameters toppar/toppar_all36_nano_lig.str
parameters toppar/toppar_all36_nano_lig_patch.str
parameters toppar/toppar_all36_synthetic_polymer.str
parameters toppar/toppar_all36_synthetic_polymer_patch.str
parameters toppar/toppar_all36_polymer_solvent.str
parameters toppar/toppar_water_ions.str
parameters toppar/toppar_dum_noble_gases.str
parameters toppar/toppar_ions_won.str
parameters toppar/toppar_all36_prot_arg0.str
parameters toppar/toppar_all36_prot_c36m_d_aminoacids.str
parameters toppar/toppar_all36_prot_fluoro_alkanes.str
parameters toppar/toppar_all36_prot_heme.str
parameters toppar/toppar_all36_prot_na_combined.str
parameters toppar/toppar_all36_prot_retinol.str
parameters toppar/toppar_all36_prot_model.str
parameters toppar/toppar_all36_prot_modify_res.str
parameters toppar/toppar_all36_na_nad_ppi.str
parameters toppar/toppar_all36_na_rna_modified.str
parameters toppar/toppar_all36_lipid_sphingo.str
parameters toppar/toppar_all36_lipid_archaeal.str
parameters toppar/toppar_all36_lipid_bacterial.str
parameters toppar/toppar_all36_lipid_cardiolipin.str
parameters toppar/toppar_all36_lipid_cholesterol.str
parameters toppar/toppar_all36_lipid_dag.str
parameters toppar/toppar_all36_lipid_inositol.str
parameters toppar/toppar_all36_lipid_lnp.str
parameters toppar/toppar_all36_lipid_lps.str
parameters toppar/toppar_all36_lipid_mycobacterial.str
parameters toppar/toppar_all36_lipid_miscellaneous.str
parameters toppar/toppar_all36_lipid_model.str
parameters toppar/toppar_all36_lipid_prot.str
parameters toppar/toppar_all36_lipid_tag.str
parameters toppar/toppar_all36_lipid_yeast.str
parameters toppar/toppar_all36_lipid_hmmm.str
parameters toppar/toppar_all36_lipid_detergent.str
parameters toppar/toppar_all36_lipid_ether.str
parameters toppar/toppar_all36_carb_glycolipid.str
parameters toppar/toppar_all36_carb_glycopeptide.str
parameters toppar/toppar_all36_carb_imlab.str
parameters toppar/toppar_all36_label_spin.str
parameters toppar/toppar_all36_label_fluorophore.str
parameters ../unk/unk.prm # Custom topology and parameter files for UNK
# Nonbonded Parameters
exclude scaled1-4 # non-bonded exclusion policy to use "none,1-2,1-3,1-4,or scaled1-4"
# 1-2: all atom pairs that are bonded are ignored
# 1-3: 3 consecutively bonded are excluded
# scaled1-4: include all the 1-3, and modified 1-4 interactions
# electrostatic scaled by 1-4scaling factor 1.0
# vdW special 1-4 parameters in charmm parameter file.
1-4scaling 1.0
switching on
vdwForceSwitching on; # New option for force-based switching of vdW
# if both switching and vdwForceSwitching are on CHARMM force
# switching is used for vdW forces.
# You have some freedom choosing the cutoff
cutoff 12.0; # may use smaller, maybe 10., with PME
switchdist 10.0; # cutoff - 2.
# switchdist - where you start to switch
# cutoff - where you stop accounting for nonbond interactions.
# correspondence in charmm:
# (cutnb,ctofnb,ctonnb = pairlistdist,cutoff,switchdist)
pairlistdist 16.0; # stores all the pairs within this distance; it should be
# larger than cutoff (+ 2.)
stepspercycle 20; # steps per cycle (atoms reassigned every 20 steps)
pairlistsPerCycle 2; # 2 is the default
# cycle represents the number of steps between atom reassignments
# this means every 20/2=10 steps the pairlist will be updated
# Integrator Parameters
timestep 2.0; # fs/step
rigidBonds all; # bond constraint: all bonds involving H are fixed in length
nonbondedFreq 1; # nonbonded forces every step
fullElectFrequency 1; # PME every step
wrapWater on; # wrap water to central cell
wrapAll on; # wrap other molecules too
wrapNearest off; # use for non-rectangular cells (wrap to the nearest image)
# PME (for full-system periodic electrostatics)
PME yes;
PMEInterpOrder 6; # interpolation order (spline order 6 in charmm)
PMEGridSpacing 1.0; # maximum PME grid space / used to calculate grid size
# Constant Pressure Control (variable volume)
useGroupPressure yes; # use a hydrogen-group-based pseudo-molecular virial to calculate pressure;
# it has less fluctuation, and is needed for rigid bonds (rigidBonds/SHAKE)
useFlexibleCell no; # yes for anisotropic system like membrane
useConstantRatio no; # keeps the ratio of the unit cell in the x-y plane constant A=B
# Constant Temperature Control
langevin on; # langevin dynamics
langevinDamping 1.0; # damping coefficient of 1/ps (keep low)
langevinTemp $temp; # random noise at this level
langevinHydrogen off; # don't couple bath to hydrogens
# Constant pressure
langevinPiston on; # Nose-Hoover Langevin piston pressure control
langevinPistonTarget 1.01325; # target pressure in bar 1atm = 1.01325bar
langevinPistonPeriod 50.0; # oscillation period in fs. correspond to pgamma T=50fs=0.05ps
# f=1/T=20.0(pgamma)
langevinPistonDecay 25.0; # oscillation decay time. smaller value corresponds to larger random
# forces and increased coupling to the Langevin temp bath.
# Equal or smaller than piston period
langevinPistonTemp $temp; # coupled to heat bath
# run
numsteps 500000; # run stops when this step is reached
run 10000000; # 10,000,000 steps x 2 fs/step = 20 ns
Relevant answer
Answer
Change dcdfreq 100 to dcdfreq 1000 or dcdfreq 5000
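To see why this works: each DCD frame stores three 4-byte floats per atom (plus a small per-frame header), so file size scales inversely with dcdfreq. A rough back-of-the-envelope sketch in Python, where natoms is an assumed system size and the per-frame overhead is approximate:

def dcd_size_gb(natoms, run_steps, dcdfreq, frame_overhead=80):
    # Each frame: 3 coordinates x 4 bytes per atom, plus a small header.
    frames = run_steps // dcdfreq
    return frames * (natoms * 12 + frame_overhead) / 1e9

# e.g. a hypothetical 100k-atom system over 10,000,000 steps:
print(dcd_size_gb(100_000, 10_000_000, 100))   # ~120 GB
print(dcd_size_gb(100_000, 10_000_000, 5000))  # ~2.4 GB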
  • asked a question related to Interpolation
Question
3 answers
How can we interpolate between two very distant cross sections of a river?
Relevant answer
Answer
The character of the river changes as it meanders, or sometimes runs straight, as it flows downstream. The characteristics change with various factors such as geology, substrate, sediment loading, vegetation, access to floodplain, gradient, watershed or basin size, and land-use patterns. The facet sequence of riffle, run, pool, and glide, and whether the channel is braided (typical of too much sediment) or entrenched (lack of floodplain access), can make a difference. Many rivers adjust their pattern with time. The suggestions of Dr. Harris are helpful too.
It may take some training and experience to successfully interpolate between two cross sections, and it may be difficult nonetheless unless some effort went into their placement. If the river is relatively clear, green LiDAR coverage may be helpful in visualizing complex river patterns and depths. If I had to choose two measurement cross sections, I would probably choose a riffle (shoal) and a typical pool. Riffles typically are gradient controls; pools are located at river bends and often have a pronounced point bar, which is suggestive of bankfull flow and of whether there is floodplain access.
Besides geologic and substrate controls, many rivers have a history of hydrologic modification and upstream influences from land instability and/or accelerated erosion caused by land-use practices, which can influence river character and adjustments. Rivers are more dynamic than we are likely to perceive. Using aerial photos (the leaf-off period may be best), possibly selecting photos taken in dry, wet, or flood periods if available, and examining the sequence through time may also help evaluate the river boundaries and the frequency and extent of changes.
  • asked a question related to Interpolation
Question
6 answers
Hello,
We use the commercial eurofins abraxis kit's for the detection of anatoxin-a (toxin produced by cyanobacteria). The test is a direct competitive ELISA based on the recognition of Anatoxin-a by a monoclonal antibody. Anatoxin-a, when present in a sample, and an Anatoxin-a-enzyme conjugate compete for the binding sites of mouse anti-Anatoxin-a antibodies in solution.
The concentrations of the samples are determined by interpolation using the standard curve constructed with each run.
The sample contained a large concentration of cyanobacteria, so we analysed it neat (undiluted) and diluted 1/100 and 1/200 (to be in the linear zone of the standard range). The neat sample was negative; however, the dilutions gave positive results and I don't know why.
Thank you for helping me understand.
Relevant answer
Answer
Andrew Paul McKenzie Pegman I think you're right, I should do that ^^
  • asked a question related to Interpolation
Question
1 answer
I am working on kaolin clay as an adsorbent for wastewater treatment, and I have also carried out XRD characterization. The analyst sent me two types of data: one is a scan of the sample at the 'native' detector resolution of 0.1313° 2Θ; the other contains the same data, but interpolated to a step size of 0.02° 2Θ. So, which one should be used for graphical analysis to interpret the diffraction pattern?
Relevant answer
Answer
Get your XRD data analyzed using this website!
Check out this website:
The service is not free but it's very affordable and definitely worth it.
  • asked a question related to Interpolation
Question
4 answers
Hello, I am trying to run a simulation with the real-gas model from NIST using ch4a as the working fluid. When I try to initialize or run the simulation, it does not converge in Fluent.
In my simulation: inlet temperature 110 K, 0.02 kg/s CH4, pressure outlet 53 bar. I just want to see the phase change and to understand the supercritical fluid. First I tried to fit the curves in MATLAB, but there were problems with the interpolations. I then looked into building an RGP table from NIST data for CFX, but could not manage it. How do we deal with this?
Relevant answer
Answer
The interesting thing about a Mollier chart (enthalpy vs. entropy): it's the only one with isotherms or isobars that are continuous in value and slope.
  • asked a question related to Interpolation
Question
5 answers
Dear all,
I am going to derive precipitation data from NetCDF files of CMIP5 GCMs in order to forecast precipitation after bias correction with quantile mapping as a downscaling method. In the literature (some of the best papers are attached), the nearest-neighbour and inverse distance methods are highly recommended.
The nearest-neighbour method simply assigns each point the value of the grid cell in which it is located. According to the attached paper (drought-assessment-based-on-m...), the author claimed that the NN method is better than other methods such as IDW because:
"The major reason is that we intended to preserve the
original climate signal of the GCMs even though the large grid spacing.
Involving more GCM grid cell data on the interpolation procedure
(as in Inverse Distance Weighting–IDW) may result to significant
information dilution, or signal cancellation between two or more grid
cell data from GCM outputs."
But in my opinion the IDM may be the better choice, since estimates of subgrid-scale values are generally not provided, and the other attached paper (1-s2.0-S00221...) is a good example of its efficiency.
I would appreciate it if someone could answer this question with evidence. Which interpolation method do you recommend for interpolating CMIP5 GCM outputs?
Thank you in advance.
Yours,
Relevant answer
Answer
Can you please refer to the tools and codes used in regridding?
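On the tools/codes question: a minimal Python sketch with xarray showing both extraction styles discussed above; the file name pr.nc and the station coordinates are placeholders.

import xarray as xr

ds = xr.open_dataset("pr.nc")  # hypothetical CMIP5 precipitation file

# Nearest-neighbour: take the value of the GCM cell containing the station.
pr_nn = ds["pr"].sel(lat=35.7, lon=51.4, method="nearest")

# Distance-weighted alternative: bilinear interpolation from surrounding cells.
pr_bil = ds["pr"].interp(lat=35.7, lon=51.4, method="linear")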
  • asked a question related to Interpolation
Question
4 answers
I was wondering if there are any tutorials or scripts that can be used to directly downscale and bias-correct reanalysis data available as NetCDF, such as those provided by CRU or GPCC?
Also, for the downscaling part, does this mean we're just interpolating to a finer mesh, or is there an actual algorithm used to downscale the data in CDO?
Thanks in advance!
Relevant answer
Answer
What Ahmed Albabily mentioned in the query suggests he is interested in downscaling precipitation data. I seriously advise against statistical downscaling, especially for precipitation. Though physics-based numerical models are better for downscaling, the presently employed methods cannot downscale precipitation reliably.
Cheers,
Kishore
  • asked a question related to Interpolation
Question
3 answers
The SRTM 30 m DEM has significantly less data coverage in the Nepal region; even NASADEM, the modified version of SRTM, is not precise there. I have tried a fill function that fills the DEM gaps via IDW interpolation, but since the holes in the DEM extend up to 10 km, it is not scientifically justified to interpolate over such a large region. Even after this kind of gap-filling, further processing of the DEM (slope, aspect, etc.) carries those errors forward. Can anyone suggest a solution for filling those gaps in the SRTM data?
Relevant answer
Answer
Hi,
check the recently released Copernicus DEM (30 m and 90 m versions available; see https://spacedata.copernicus.eu/web/cscda/dataset-details?articleId=394198), which is accessible via OpenTopography (https://portal.opentopography.org/dataCatalog?group=global).
Regards
  • asked a question related to Interpolation
Question
2 answers
I am using heat transfer in solids and fluids model in Comsol Multiphysics. I want to add a heat source term in the solid domain, which is the function of solid domain temperature.
The source term is as follows:
Q=rho*Cp(T)*Tad(T)/dt.
where rho, Cp and Tad are density, specific heat and Adiabatic temperature of the solid domain, which is the function of solid domain temperature.
I have used interpolation for defining Cp and Tad properties of materials in Comsol Multiphysics.
Please see the attached files.
Then I defined the source as, Source= (rho*Cp(T)*Tad(T))/dt in the variables node and used this Source in the Heat transfer in solids and fluids section as a source term.
I am getting a much smaller temperature increase in the simulation (only 1.5 K) than expected (approximately 4.5-5 K).
Can anyone tell me where I am going wrong?
Relevant answer
Answer
Dear Antoni, it is not showing any missing information. Tad and the other properties have been incorporated successfully. The simulation also runs and gives results. But when I compare them with some papers, they show a 4.5-5 K temperature increase while I am getting only about 1.5 K. May I send you the .mph file personally?
  • asked a question related to Interpolation
Question
5 answers
How could one increase data from weekly to daily observations using a mathematical interpolation algorithm?
Relevant answer
Answer
Hello Nasiru,
What's the reason for wanting these values? That may have more to do with what sort of approach, if any, might make sense.
At first glance, doing this doesn't sound like a good idea to me. Among the reasons:
Any method that takes adjacent weekly values and inserts estimates for the six "missing" daily values will bias the estimates of day-to-day variance in scores/values as well as increase the serial correlation at lags of 1 to 6 days. As well, the presumption of a (perfectly) predictable day-to-day change is unlikely to be realized in actual measurements.
Good luck with your work.
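If you do proceed despite the caveats above, here is a minimal linear-interpolation sketch in Python/pandas; the weekly series s is a made-up example:

import pandas as pd
import numpy as np

# Hypothetical weekly observations
idx = pd.date_range("2024-01-07", periods=8, freq="W")
s = pd.Series(np.random.rand(8), index=idx)

# Upsample to daily and fill the six missing days per week linearly.
daily = s.resample("D").asfreq().interpolate(method="time")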
  • asked a question related to Interpolation
Question
2 answers
I did baseline correction with X'Pert HighScore Plus software using manual settings and cubic spline interpolation for bacterial cellulose (BC). Is this baseline correction appropriate for the XRD pattern of cellulose?
Thank you
Relevant answer
Answer
Thank you very much.
How should I draw the baseline?
  • asked a question related to Interpolation
Question
2 answers
I want to learn about solving differential equations based on barycentric interpolation. If someone has hand notes, it would be great if you could share them with me. I need to learn this method within 2 weeks. Thanks in advance.
Relevant answer
Answer
we can see this:
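As a warm-up while you look for notes: a small sketch of barycentric Lagrange interpolation in Python with SciPy, using Chebyshev points (which keep high-degree interpolation stable). Collocation methods for ODEs build on exactly this kind of evaluation; the sample function is arbitrary.

import numpy as np
from scipy.interpolate import BarycentricInterpolator

n = 16
# Chebyshev points on [-1, 1] avoid the Runge phenomenon of equispaced nodes.
x = np.cos(np.pi * np.arange(n + 1) / n)
f = np.exp(x) * np.sin(3 * x)        # sample function

p = BarycentricInterpolator(x, f)
xx = np.linspace(-1, 1, 200)
err = np.max(np.abs(p(xx) - np.exp(xx) * np.sin(3 * xx)))
print(f"max interpolation error: {err:.2e}")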
  • asked a question related to Interpolation
Question
2 answers
Suppose I obtained force constants at 500 K, 1000 K, 1500 K, and 2000 K using TDEP. How do I interpolate the force constants for temperatures in between?
I found it confusing when I first used it, so I am explaining the steps here in detail.
  • asked a question related to Interpolation
Question
1 answer
Hi everyone,
I'm trying to apply the spatial interpolation process for my NetCDF file. However, I keep getting the "segmentation fault (core dumped)" error.
The screenshot of the error and my NetCDF file are in the attachment.
I'd be so glad if you could explain what goes wrong during the interpolation process.
Thanks in advance
Best of luck
Relevant answer
Answer
The problem seemed to be subregional cropping while downloading the climate data from Copernicus website.
  • asked a question related to Interpolation
Question
5 answers
I am setting up an experiment to estimate the accuracy of different interpolation algorithms for generating a spatially continuous rainfall grid for a certain area. The data density (number of points versus the area) and the spatial arrangement of the data (i.e., random versus grid-based) will vary for each run (attempt).
The objective is to understand how each algorithm performs under varying data density and spatial configuration.
Typically, studies have done this using station data of varying density and configuration. In the current context there are few stations (only about 2), so the intent is to execute the experiment using data sampled (extracted) from an existing regional rainfall grid, while varying the data density and the spatial configuration.
Note that I cannot generate random values because the kriging is to be implemented as a (multivariate stuff ) using some rainfall covariates....random values will mess up the relevance of such covariates.
I did a rapid test and found that, despite a wide difference in the density and configuration, there was no significant difference in the accuracy result, based on cross validation results. What's going on? It's not intuitive to me!!
Please, can you identify something potentially not correct with this design? Theoretically, is there anything about dependency that may affect the result negatively? Generally, what may not be fine with the design? how can we explain this in your view??
Thanks for your thoughts.
Sorry for the long text.
Relevant answer
Answer
Hi Elijah,
Here are some suggestions:
If you use rainfall data you need to consider what accumulation you employ. For instance, hourly accumulations have steeper gradients (are less smooth) than monthly accumulations, meaning that any prediction for hourly fields is more difficult than for monthly ones; therefore the interpolation scheme and the data you use matter more the shorter the accumulation is.
The variability of the distances between the chosen points also matters, especially for kriging. Avoid choosing points that all have similar distances from each other: if you do, the semivariogram is not computed correctly, because such a scheme excludes measurements at small distances. You need to inform the semivariogram of how the variability of the field changes over all distances, including short ones. Then you will capture microvariability (variability at short scales).
The choice of skill scores also matters. RMSE, bias, and HK should be included, plus some measure of the scatter (in the scattergram of measurements versus cross-validated predictions).
Finally kriging is providing not only the expected value but also a variance. Check if there is a difference there for different runs.
Best luck.
  • asked a question related to Interpolation
Question
5 answers
Hi,
I have time-series data for precipitation and temperature over a specific region, and I need to regrid it to a different resolution, for instance from 0.25 degree to 0.1 degree, using MATLAB or R. I will be thankful for any help.
Relevant answer
Answer
Hadir Abdelmoneim Bassyouni Hi! Did you find a solution? I am looking for something similar for SWE data.
Thanks
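Since no R/MATLAB snippet was posted in the thread: a minimal regridding sketch in Python with xarray, in case it still helps. The file name precip.nc and coordinate names lat/lon are assumptions; note that refining the grid adds no real information.

import numpy as np
import xarray as xr

ds = xr.open_dataset("precip.nc")  # hypothetical 0.25-degree dataset

# Target 0.1-degree grid spanning the same region.
new_lat = np.arange(float(ds.lat.min()), float(ds.lat.max()) + 0.1, 0.1)
new_lon = np.arange(float(ds.lon.min()), float(ds.lon.max()) + 0.1, 0.1)

# Bilinear interpolation onto the finer grid.
ds_01 = ds.interp(lat=new_lat, lon=new_lon, method="linear")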
  • asked a question related to Interpolation
Question
1 answer
I am using a Python script to run Agisoft Metashape on a Jetson TX2. It takes a lot of time when more images are involved in the creation of the model. I want to increase the speed of the operation by running it on CUDA. Can someone please help me with this?
import os
import Metashape

doc = Metashape.app.document
print("Script started")

chunk = doc.addChunk()
chunk.label = "New Chunk"
path_photos = Metashape.app.getExistingDirectory("main folder:")

sub_folders = os.listdir(path_photos)
for folder in sub_folders:
    folder = os.path.join(path_photos, folder)
    if not os.path.isdir(folder):
        continue
    image_list = os.listdir(folder)
    photo_list = list()
    new_group = chunk.addCameraGroup()
    new_group.label = os.path.basename(folder)
    for photo in image_list:
        if photo.rsplit(".", 1)[1].lower() in ["jpg", "jpeg", "tif", "tiff"]:
            photo_list.append(os.path.join(folder, photo))
    chunk.addPhotos(photo_list)
    for camera in chunk.cameras:
        if not camera.group:
            camera.group = new_group
print("- Photos added")

doc.chunk = chunk
Metashape.app.update()

# Processing:
chunk.matchPhotos(downscale=1, generic_preselection=True, reference_preselection=False)
chunk.alignCameras()
chunk.buildDepthMaps(downscale=4)
chunk.buildDenseCloud()
chunk.buildModel(surface_type=Metashape.Arbitrary, interpolation=Metashape.EnabledInterpolation)
chunk.buildUV()
chunk.buildTexture(texture_size=4096)
doc.save()
print("Script finished")
Relevant answer
Answer
Dear Shivani,
unfortunately I have no answer to your question, but I have one more question: how did you manage to install the Metashape Python module on your Jetson? I have not found any version for ARM on the web.
Have a nice day
Fabrice
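Not an answer to the ARM question, but on the CUDA side: desktop builds of the Metashape Python API expose a GPU bitmask. A short sketch, assuming your build exposes the same attributes (gpu_mask, cpu_enable, enumGPUDevices) as the desktop releases:

import Metashape

# Enable every GPU the application can see (bitmask: 1 bit per device).
gpus = Metashape.app.enumGPUDevices()
Metashape.app.gpu_mask = 2 ** len(gpus) - 1
# Optionally keep the CPU out of GPU-accelerated stages.
Metashape.app.cpu_enable = False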
  • asked a question related to Interpolation
Question
3 answers
What is the best approach to treat the missing data? Interpolation or Imputation? Which one is safe to use?
Relevant answer
Answer
If the available values show a monotonically increasing or decreasing trend, interpolation will work. Remember that you lose a degree of freedom for each data point you fill in. If the surrounding data just show some scatter but no definite trend, use an average of the surrounding values as an estimate. Again, you lose a degree of freedom for each estimated data value.
  • asked a question related to Interpolation
Question
6 answers
I have to prepare spatial maps of various soil properties, and I am confused about whether a semi-variogram is compulsory or not.
Relevant answer
Answer
Most often yes, unless the grid size takes into account the area covered...
  • asked a question related to Interpolation
Question
4 answers
Hello,
Hello, I want to interpolate weather data from measuring stations to my points of interest. Before that I need the data from the measuring stations, but the stations are not well distributed and some data are missing. So my idea is first to interpolate the time series to obtain values for every station at every time point, and then to interpolate from the stations to my points of interest. Has anyone done this before, and what do you think of the idea? Do you know good methods for interpolating the time series? My weather data are temperature, wind speed, wind direction, humidity, and dew point. Thanks.
Relevant answer
Answer
Did you solve this? I was thinking of using kriging, but how do you add the time component? Is there any Python library available?
  • asked a question related to Interpolation
Question
7 answers
I am utilizing Area-to-Area CoKriging (AtACoK) for raster downscaling in R using the atakrig package. For this purpose I have one dependent variable (say x) with a pixel size of 400 m and two covariates (c1 and c2) with a pixel size of 100 m. When I perform AtACoK, the resulting raster has pixel values ranging from 0 to infinity. What might cause the problem? For comparison, when I perform AtAK, that is, using a single independent variable, the downscaling works fine.
Relevant answer
Answer
Nikolaos Tziokas you can look at spBayes package as another option.
  • asked a question related to Interpolation
Question
4 answers
I am trying to compare the accuracy of different interpolation methods in GIS using cross-validation, but I am not sure what the differences are between the same methods in different toolboxes (e.g., IDW in Spatial Analyst Tools versus the Geostatistical Wizard).
Thanks in advance.
Relevant answer
Answer
Thanks for sharing the link. I had a quick look and it seems to be helpful. In the meantime, I asked the question from Esri Customer Care and I got the response below:
"Geostats analyst generally has more optimisations in terms of automatic calibration. It also supports cross-validation. The tools under spatial analyst are older implementations and have fewer parameter choices".
Regards,
Nasrin
  • asked a question related to Interpolation
Question
14 answers
I want to interpolate my data in ArcMap. I found only the Geostatistical Analyst extension for this purpose in ArcMap.
Relevant answer
Answer
They are the same. The difference is that the wizard has more graphical aids to help you through the various steps. Another difference is that with the wizard the resulting map is a geostatistical layer that you can export as a raster at the desired resolution, while with the Spatial Analyst tools you obtain the raster directly.
  • asked a question related to Interpolation
Question
3 answers
How can one estimate the FD wavefield at a receiver position in the FDFD method? Let's say the incident wavefield is u at the source position. Please help and discuss in detail.
MATLAB code for this would be much appreciated.
Relevant answer
Answer
I think you are asking how to simulate a device to calculate the field at the output using finite-difference frequency-domain (FDFD). If you are not familiar with the FDFD method, here is a link to a book I recently wrote that teaches FDFD to the complete beginner. It covers everything from the simplest concepts all the way up to advanced simulations of 3D and anisotropic devices.
The key things about the FDFD method are...
1. Representing the device on a grid. This entails building an array and assigning relative permittivity values to each point, so that if you plotted the array you would see your device. It is also necessary to leave some room around the device for absorbing boundaries and the source.
2. The source -- How do you want to excite your device? The three types of sources covered in the book are plane wave sources, beam sources, and guided-mode sources. You will need to calculate the source and apply it to your device.
3. Run the simulation -- Run the simulation. The FDFD method calculates the field everywhere on the grid. This is great for visualization and no post-processing step is necessary to see the fields.
4. Extract the fields you are interested in -- FDFD calculates the field at every point on the grid. Simply extract the values wherever you are interested in analyzing them. If you are interested in only seeing the scattered fields (source removed), the book teaches a very elegant implementation of the total-field/scattered-field technique where you can simply define some points to be scattered-field quantities. If you extract the field from these points, the source is automatically removed.
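On the narrower question of estimating the field at a receiver that falls between grid nodes: bilinear interpolation of the surrounding nodes is the usual choice. A sketch in Python (the MATLAB equivalent is interp2); the grid, field, and receiver positions are made up for illustration:

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical FDFD grid and complex wavefield
x = np.linspace(0.0, 1000.0, 201)   # m
z = np.linspace(0.0, 500.0, 101)    # m
u = np.random.rand(201, 101) + 1j * np.random.rand(201, 101)

# Bilinear interpolation of the field at arbitrary receiver positions.
# (If your SciPy version rejects complex input, interpolate the real and
# imaginary parts separately and recombine.)
interp = RegularGridInterpolator((x, z), u, method="linear")
receivers = np.array([[312.5, 40.0], [487.1, 40.0]])
u_rec = interp(receivers)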
  • asked a question related to Interpolation
Question
18 answers
I have prepared these two maps of Electrical Conductivity (EC). I used both IDW and Kriging to prepare them. Which one should I choose? The sampling sites are also pointed in the map.
You can see that the Kriging map is quite strange!
I am in a fix. Please help!
Relevant answer
Answer
Kriging is better but consumes more computer resources. The kriging result shown in the question is a failure; I guess the reason was too small a search radius for the interpolation. Try increasing the search radius and you will get a better result.
  • asked a question related to Interpolation
Question
3 answers
Can anyone help me with how to interpolate water-level records using elevation as a reference? My study area is topographically very rugged, and I need to account for surface elevation when interpolating the measured hydraulic heads. I am looking for a technical answer, please.
Relevant answer
Answer
@ Ahmed Moussa, I want to interpolate the groundwater level using surface elevation (DEM) as collateral information. I am thinking of using kriging with external drift (KED), with the topographic information as the external drift. But I could not work out the technical details of obtaining the drift value, or which software is best suited to this task.
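One route that avoids KED-specific tooling is regression kriging, which is closely related to KED: regress the heads on DEM elevation, krige the residuals, and add the two parts back. A rough Python sketch; the arrays (well coordinates x, y, heads, DEM samples) and the function name are all hypothetical:

import numpy as np
from sklearn.linear_model import LinearRegression
from pykrige.ok import OrdinaryKriging

# x, y, head: well locations and hydraulic heads (assumed given)
# elev_obs: DEM elevation at the wells; elev_grid: DEM on the target
# grid, shaped (len(gridy), len(gridx))
def regression_kriging(x, y, head, elev_obs, gridx, gridy, elev_grid):
    # 1) Trend: the part of the heads explained by elevation.
    lr = LinearRegression().fit(elev_obs.reshape(-1, 1), head)
    resid = head - lr.predict(elev_obs.reshape(-1, 1))
    # 2) Spatial structure of what elevation does not explain.
    ok = OrdinaryKriging(x, y, resid, variogram_model="spherical")
    resid_grid, _ = ok.execute("grid", gridx, gridy)
    # 3) Recombine trend and kriged residuals.
    trend = lr.predict(elev_grid.reshape(-1, 1)).reshape(resid_grid.shape)
    return trend + resid_grid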
  • asked a question related to Interpolation
Question
1 answer
Recently, our group collected water samples during several rainfall events, and I then tried to analyze the c-Q relationships. The issue is that there are fewer data on the falling limb than on the rising limb. Could we extend trend lines to interpolate the missing data, or is there some other way to solve this problem?
Relevant answer
Answer
Dear Zeqi
You can apply interpolation techniques to both of your datasets. In this way you would be able to fill in the missing data as well as make a consistent comparison between your datasets.
Regards
  • asked a question related to Interpolation
Question
3 answers
I have individual data for 500 companies for the period 2009-2021. There are many missing values in the data, so I deleted the companies with more than 5 percent of their data missing. For the remaining companies I want to replace the missing values. Is linear interpolation good for this, or should I use mice in R?
Relevant answer
Answer
Hello,
you can also use the "NeuralPower" software. I have used it and it is very reliable. It is free for one month.
good luck,
Azra
  • asked a question related to Interpolation
Question
2 answers
Does anyone have an idea how to reconstruct a nitrate profile, either by interpolation or any other method?
I have a dataset with nitrate concentrations at 20-50 m resolution down to 1000 m.
Is it possible to interpolate the profiles to get a higher-resolution dataset, say at every 5 m?
If you have an idea, please let me know. I will appreciate your help.
Relevant answer
Answer
Thank you Christian for the recommendation.
I don't use python but it I will give it a try doesn't look that difficult to try.
Thanks for the tip with the wording as well.
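Since the recommended code is not shown in the thread: a small Python sketch for refining a profile to 5 m. The depth/concentration values are made up; PCHIP is a shape-preserving spline, which avoids the overshoots plain cubic splines can produce between widely spaced samples (though it cannot add real structure between them):

import numpy as np
from scipy.interpolate import PchipInterpolator

# Hypothetical observed profile (20-50 m spacing, down to 1000 m)
depth = np.array([0, 20, 50, 100, 150, 250, 400, 600, 800, 1000.0])
no3 = np.array([0.1, 0.3, 2.0, 8.0, 15.0, 22.0, 28.0, 31.0, 32.0, 32.5])

fine_depth = np.arange(0, 1001, 5.0)
no3_fine = PchipInterpolator(depth, no3)(fine_depth)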
  • asked a question related to Interpolation
Question
4 answers
An inquiry about Interpolation Model Validation.
I ran IDW, ordinary kriging, and EBK interpolations. But the R² values for all these models (including the semivariograms for OK) rarely exceed 0.1 (sometimes 0.007 is the highest).
Is a model with an R² value of 0.007 good for publication? I think this value indicates very poor prediction, but none of these models shows a decent R² value.
On the other hand, the RMSSE is really close to 1, and the mean error is around 0.009.
What should I do now?
What can be the possible reason? Am I missing something? Should I try more complex models? Is the spatial distribution controlled more by extrinsic factors (e.g. human interference)?
[120 samples were collected randomly from this study area].
I would gladly appreciate any suggestion.
Relevant answer
Answer
Just a suggestion, overlay your data with topography (contours or elevation) and soil type, and perhaps land cover type as forest, grassland, and sometimes even vegetation types might produce some differences, such as pine vs deciduous forest, or areas of study that were severely burned, or severely eroded in past land uses, etc. The ground based history and conditions and spatial differences may help explain some of the differences found.
  • asked a question related to Interpolation
Question
3 answers
Explain why we use RMSV and its Application?
Relevant answer
Answer
If you are referring to root mean square error or briefly RMSE, it is a well-known accuracy indicator that is used for evaluating the goodness of a prediction/interpolation task.
Having the formula in mind, it calculates the squared difference between "the interpolated value and the corresponding expected value". All values are then averaged (summed up and divided by the number of pairs). The result is then subjected to a square root to obtain the RMSE.
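That recipe in one line of Python, with obs and pred as hypothetical arrays of expected and interpolated values:

import numpy as np

obs = np.array([1.0, 2.0, 3.0, 4.0])
pred = np.array([1.1, 1.9, 3.2, 3.7])

rmse = np.sqrt(np.mean((pred - obs) ** 2))  # square root of the mean squared difference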
  • asked a question related to Interpolation
Question
3 answers
Hi,
I am looking for a reference (or a benchmark solution) to check my restriction and interpolation operators for a second-order element in the case of a geometric multigrid method using an FEM discretization.
Relevant answer
  • asked a question related to Interpolation
Question
4 answers
Dear colleagues
I've downloaded a NetCDF file (a climatic variable), measured monthly for 20 years over the country.
I need to extract the value of the variable at all point locations for the entirety of the time series.
So as a result, I would like to obtain monthly data for 20 years for each province of the country (interpolation from the grid to points).
I will be grateful for any help.
I work mostly with R and MATLAB.
Thank you very much for your answers.
Relevant answer
Answer
With a Python script: extract the variable from the NetCDF file and get the dimensions (i.e., time, latitude, and longitude); extract each time step as a 2D pandas DataFrame and write it to a CSV file. Write the data as a table with 4 columns: time, latitude, longitude, value.
Kind Regards
Qamar Ul Islam
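A compact version of that workflow in Python with xarray; the file name climate.nc, variable name tas, and the point coordinates are placeholders:

import xarray as xr

ds = xr.open_dataset("climate.nc")           # hypothetical monthly file
df = ds["tas"].to_dataframe().reset_index()  # columns: time, lat, lon, tas
df.to_csv("climate_points.csv", index=False)

# Value at one province centroid, interpolated from the grid:
point = ds["tas"].interp(lat=34.5, lon=47.1, method="linear")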
  • asked a question related to Interpolation
Question
4 answers
Dear colleagues
Could anybody provide me with trivial code (R or MATLAB) for linear interpolation of all the time series?
I have a NetCDF file with lat, lon, and rainfall data.
Thank you very much
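No answer was posted; in case it helps, a near-trivial sketch in Python with xarray (the same one-liner idea exists in R's zoo::na.approx and MATLAB's fillmissing), assuming the gaps appear as NaNs along the time axis and the file/variable names are placeholders:

import xarray as xr

ds = xr.open_dataset("rainfall.nc")  # hypothetical file with lat, lon, time

# Linearly interpolate every grid cell's time series across NaN gaps.
filled = ds["rainfall"].interpolate_na(dim="time", method="linear")
filled.to_netcdf("rainfall_filled.nc")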
  • asked a question related to Interpolation
Question
3 answers
I am working on v-ADCP data of marine currents in a port. I have data along transects, and I want to create a 2D map, interpolated from the data, that shows the direction and intensity of the currents. I use Surfer 13: I created two grids, for velocity and direction, and then a 2-grid vector layer, resulting in arrows that show the direction of the current with a color scale for velocity. The problem is that the interpolation creates data inside a polygon delimited by my pool of data, so there are new data even in areas where I don't have any, for example on land and docks. How do I set up an interpolation method that interpolates only in an area along the transect? I tried kriging and triangulation with linear interpolation. Thank you
Anna
Relevant answer
Answer
I recommend using "geostatistical simulation" instead of kriging. You may simulate 100 or 200 realizations and then use their average as the final estimate. Moreover, the suggestion from Pietro seems right as well.
The whole procedure above can be done using the "GS+" software, which is a complete package for 2D geostatistical estimation/simulation.
Note that you may adjust the "grid dimension" in GS+, although the software suggests the optimum dimension with the lowest kriging error.
Some example maps are attached.
A trick that I use is to perform geostatistical simulations with GS+ and use ArcMap for visualization.
  • asked a question related to Interpolation
Question
7 answers
I applied interpolation techniques at one of my research sites using R programming to predict long-term mean monthly and annual rainfall and temperature, using the KED, IDW, and OK methods. I found that the predicted values were equal to the actual values for most months and for the annual means, specifically for KED using the 90 m DEM as a covariate, and for OK. The problem was worse in the case of OK; I attach the findings herewith. So, I need your valuable recommendations on this.
Thank you for your time!!
NB: I took the same procedure on the other catchment, and found good results using the same interpolation techniques.
Relevant answer
Answer
Quite possible; ultimately that speaks to the precision of the interpolation. After all, the purpose of any interpolation is to predict values as close to the originals as possible. Note also that kriging is an exact interpolator: at the sample locations themselves it reproduces the observed values, so agreement should be judged at held-out (cross-validation) locations rather than at the samples.
  • asked a question related to Interpolation
Question
8 answers
I have SPI values for each meteorological station, but I need an SPI value for each pixel of a satellite image.
Relevant answer
Answer
In my study area I have 153 stations, with monthly precipitation data for 40 years for all of them. How should I find SPI for the WHOLE area? I know RStudio.
  • asked a question related to Interpolation
Question
10 answers
Hello,
I have a small set of periodic data (about 32 points), and I want to fit it with a Fourier series.
I use the fast Fourier transform from the Python toolbox (numpy.fft.fft). At first I calculate all the complex Fourier coefficients (32 coefficients); however, I get a noisy function, as shown in the attached figure.
My question, is what is the best method to smooth the results?
Should I reduce the number of points (e.g. to 16 or 8) by interpolating from the 32 points?
Should I apply some filter, like a low-pass filter? If yes, how can I choose the frequency threshold for this filter?
Should I apply a filter based on the power of the complex Fourier coefficients (power = ||Ci||^2), keeping only the frequencies corresponding to high power? If yes, how can I choose the power threshold?
Or are there other methods?
Thanks in advance.
Relevant answer
Answer
Saleh Mafi, J. Rafiee, thanks for your answers; I will test the suggested methods.
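For the power-threshold option from the question, a sketch in Python; keep_frac is an assumed tuning knob (in practice, choose it by eye from the power spectrum rather than by a fixed rule):

import numpy as np

def fourier_smooth(y, keep_frac=0.2):
    # Zero all coefficients except the strongest keep_frac by power |C_i|^2.
    C = np.fft.rfft(y)
    power = np.abs(C) ** 2
    k = max(1, int(keep_frac * len(C)))
    thresh = np.sort(power)[-k]
    C[power < thresh] = 0.0
    return np.fft.irfft(C, n=len(y))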
  • asked a question related to Interpolation
Question
3 answers
I have daily rainfall data for the years 2000-2020 for a rainfall station. How do I find the suitable single value to use in ArcGIS/QGIS for the interpolation? Only one value per station is used in the interpolation. Thanks in advance
Relevant answer
Answer
In ArcGIS you can use the Geostatistical Analyst option and fit a surface using any of the available models, such as kriging, polynomial, IDW, etc.
  • asked a question related to Interpolation
Question
2 answers
For example, in the surface window I can define the specific depth of the factors of interest, and define the z factor as, say, the concentration of Si at 10 meters depth, even if I only have 10 points around 10 meters.
Relevant answer
Answer
Hi,
in ODV the interpolation can be done with multiple methods, but the most common, and the one with the highest accuracy, is DIVA (Data-Interpolating Variational Analysis). It is designed to solve 2-D differential or variational problems of elliptic type with a finite-element method. Its aim is to obtain a gridded field from the knowledge of sparse data points.
You can find more details about the method here: http://modb.oce.ulg.ac.be/mediawiki/index.php/DIVA_method
DIVA source code is now hosted on github at https://github.com/gher-ulg/DIVA.
In general it uses a finite-element method to solve a variational principle which takes into account the distance between analysis and data (observation constraint), the regularity of the analysis (smoothness constraint) and the physical laws (behaviour constraint).
The advantage of the method over classic interpolation methods is multiple:
  • the coastlines are taken into account during the analysis, since the variational principle is solved only in the region covered by the sea. This prevents information from traveling across boundaries (e.g., peninsulas, islands) and producing artificial mixing between water masses.
  • the numerical cost is not dependent on the number of data, but on the number of degrees of freedom, itself related to the size of the finite-element mesh.
I hope it helps.
If you have questions feel free to ask anytime.
All the best,
Naomi
  • asked a question related to Interpolation
Question
6 answers
I am a novice researcher working on a project analyzing water-quality data from different water sources such as dams, rivers, and springs, and also from secondary sources such as water treatment plants and households. I have collected the GPS coordinates of the water sampling points (shown in the attached image), but I am having difficulty finding the right methodology to analyse these results spatially. The microbiological parameters to be analysed include E. coli, Salmonella spp., Shigella spp., Giardia spp., and Entamoeba histolytica.
Please help, and be kind :)
I have attached an image showing the sample locations; additionally, the study area covers six quaternary drainage basins.
  • asked a question related to Interpolation
Question
1 answer
Dear scholars and reviewers
What is your opinion of a paper whose analysis is based on interpolation because of missing values? In addition, it sometimes becomes imperative to convert low-frequency data to high-frequency data, especially when mixed-frequency methods can't be used because the dependent variable is high frequency. How do reviewers judge estimates from interpolated and converted data? Can it lead to rejection of the paper?
Best regards
Relevant answer
It depends on the reviewer. If you carefully explain what you did and why, and give a reference to somebody who suggests that it is okay to do so, your paper may be accepted, but you never know.
  • asked a question related to Interpolation
Question
5 answers
We have ERA5 time-series data from which we extracted maximum and minimum temperature. As the spatial resolution was not good, we downscaled it to 90 meters to increase the correlation coefficient against in-situ data.
I wanted to know whether this step is scientifically correct or not.
Relevant answer
Answer
To be sure, you can obtain and compare the data from the meteorological stations for the desired time period.
  • asked a question related to Interpolation
Question
5 answers
I am working on the use of interpolated filters in the channelizer of a software-defined radio. I wanted to know how ripples in the stopband can affect the performance of a digital filter.
Relevant answer
Answer
Yes, ripples in the passband and stopband affect the performance of any digital filter. Only ideal filters have no ripples, so ripples are undesirable (they cannot be eliminated in practical filters, only minimised).
  • asked a question related to Interpolation
Question
1 answer
Dear Researchers,
I would like to interpolate S-values for calculating absorbed self-doses of tumor xenografts from a preclinical study. OLINDA and IDAC are my sources for the sphere-model data. For now, I have fitted the data with a power-law equation y = a*x^b, and the fit is quite okay (figure attached).
I was wondering if anyone has a better approach to derive the s-values for various sphere masses.
Thanks a lot and greetings from Berlin,
Jan
Relevant answer
Answer
Hi Jan. Another way is to separate the data into two parts: training data (about 90% of the data) and test data (about 10%). Then fit polynomial functions of different orders (e.g., y = a0 + a1*x (first order), y = a0 + a1*x + a2*x^2 (second order), and so on) to the training data and predict the test-data values with each polynomial. For each polynomial model, compute the RMSE (root mean square error), where the error is the difference between the predicted and the true value of a test sample; this lets you compare the models. The best model is the one with the minimum RMSE. Note that the training data should be selected at random.
Good luck
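A compact sketch of that procedure in Python; the mass and svalue arrays are synthetic stand-ins for the sphere-model data:

import numpy as np

rng = np.random.default_rng(0)
mass = np.linspace(1, 50, 40)                      # hypothetical sphere masses (g)
svalue = 2.0 * mass ** -0.9 * (1 + 0.02 * rng.standard_normal(40))

idx = rng.permutation(len(mass))
train, test = idx[:36], idx[36:]                   # ~90/10 split

for order in (1, 2, 3):
    coef = np.polyfit(mass[train], svalue[train], order)
    pred = np.polyval(coef, mass[test])
    rmse = np.sqrt(np.mean((pred - svalue[test]) ** 2))
    print(f"order {order}: RMSE = {rmse:.4f}")

For power-law-like data it may work better to fit the polynomial to log(mass) and log(S) instead of the raw values.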
  • asked a question related to Interpolation
Question
20 answers
In mass disaster situations, do you (or your forensic unit) follow Interpol’s disaster victim identification guide?
If other protocol is used, kindly specify.
Relevant answer
Answer
Interpol and ICRC protocols are great guides but one must be cognizant that they are from a Euro-centric perspective that may not be fully applicable or translatable to Latin America.
  • asked a question related to Interpolation
Question
8 answers
Hello,
I am trying to predict daily stock market movements of the German DAX using an SVM. As input features I want to use the daily changes of several stock markets. The problem is that different stock markets have different holidays, which results in missing data (on trading days of the DAX). I get the data from Yahoo Finance. For example, the S&P 500 has similar trading days to the DAX, with just a few days missing (1-2 days in a row).
The trading days of the SSE Composite vary more significantly from those of the European markets; sometimes 5 days in a row are missing.
Studies use different approaches to this topic. Some take the linearly interpolated data between two trading days. This seems problematic because in reality investors do not have this information about market trends while the SSE is closed. Other studies replace the missing data with the changes of the previous day; here a sequence of days would have the same values, which could cause difficulties.
What would you recommend for dealing with this issue? Should I remove all days where at least one stock market is missing from the training and test sets (~8% of the data set), and use each market's change relative to its own previous trading day?
Thank you in advance!
Relevant answer
Answer
If you are handling time-series data in R, you need not remove the missing time points at all; simply fill NA for the times at which data are missing. When plotting the time series, R will skip the line or dot for those time points and proceed with the available ones. Similarly, R provides procedures for handling NAs in any given function.
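For completeness, the "carry the previous close" approach in Python/pandas, assuming sse is a price series indexed by its own trading days and dax_days is the DAX trading calendar (both hypothetical):

import pandas as pd

# sse: pd.Series of SSE closes; dax_days: pd.DatetimeIndex of DAX trading days
def align_to_dax(sse: pd.Series, dax_days: pd.DatetimeIndex) -> pd.Series:
    # Reindex to the DAX calendar and carry the last known close forward,
    # so returns computed over holiday runs collapse to zero rather than NaN.
    return sse.reindex(dax_days).ffill()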
  • asked a question related to Interpolation
Question
2 answers
Hi All:
I'm working on estimation with stochastic frontier analysis (fels) in the context of a panel data set, and I've been having trouble with missing values in some spatial units. To handle this, I interpolated the variables I'm using in the model (nnipolate), and since some units still had missing values, I filtered the data, keeping only observations with information (keep if var!=.) and taking care that the panel remains balanced.
Is this a good approach? If not, what can you recommend?
Thank you!
Relevant answer
Answer
  • asked a question related to Interpolation
Question
4 answers
Hi dears,
I have a dataset with variables from 2015 to 2019, and one variable is available only for 2015, 2017, and 2019. Is it correct to interpolate this one? It was gathered from a survey.
Relevant answer
Answer
Thanks for your suggestions.
  • asked a question related to Interpolation
Question
1 answer
I have computational data consisting of geometric points and their instantaneous stress components. Based on the available data, I have to interpolate the stress values at points where they are not available. Any input on interpolation functions is highly appreciated. I tried the interpolation and scattered-data functions available in MATLAB, but I get an error because the points are not monotonically increasing and the query points do not lie inside a convex hull.
Relevant answer
What about using the mean of the values you have?
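If switching tools is an option: SciPy's griddata handles scattered (non-monotonic) points directly, and a nearest-neighbour fallback covers query points outside the convex hull, which is where linear scattered interpolation refuses to extrapolate. A sketch with made-up arrays pts (N x 3) and stress (N,):

import numpy as np
from scipy.interpolate import griddata

# pts: (N, 3) coordinates; stress: (N,) one stress component (toy data here)
pts = np.random.rand(200, 3)
stress = pts[:, 0] ** 2 + pts[:, 1]

query = np.array([[0.5, 0.5, 0.5], [1.2, 0.1, 0.3]])   # 2nd point lies outside the hull
lin = griddata(pts, stress, query, method="linear")     # NaN outside the hull
near = griddata(pts, stress, query, method="nearest")   # defined everywhere
est = np.where(np.isnan(lin), near, lin)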
  • asked a question related to Interpolation
Question
3 answers
I have a question about resizing complex arrays.
I need to resize a complex-valued array with an interpolation method.
I tried scikit-image, but it doesn't support the complex data type.
I also tried resizing with cv2, and that didn't work either,
even with the real and imaginary parts handled separately.
Is there any solution to this?
Relevant answer
Answer
I would recommend asking this question in the Stack Overflow community too:
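One workaround worth retrying: resize the real and imaginary parts separately with scikit-image and recombine them afterwards. A sketch, assuming arr is a 2-D complex array (the shapes are placeholders):

import numpy as np
from skimage.transform import resize

arr = np.random.rand(64, 64) + 1j * np.random.rand(64, 64)  # hypothetical input

out_shape = (128, 128)
# Bilinear (order=1) resize of each part, then reassemble the complex array.
resized = (resize(arr.real, out_shape, order=1)
           + 1j * resize(arr.imag, out_shape, order=1))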
  • asked a question related to Interpolation
Question
5 answers
I have 90 point observations of soil texture class, for example sandy loam, silty loam, etc. Now I want to prepare a soil texture map by interpolation in GIS. Kindly help me if anyone has a solution or suggestions for this type of (categorical) mapping.
Relevant answer
Answer
There are several methods for producing a thematic map from points, and the right technique depends on what you are looking for. If you need precise boundaries, you will have to blend several layers to accomplish the map (at a minimum a coverage classification layer); but if you only need the data spread over a surface, you can use an interpolation method such as kriging, AI, or kernel methods, or a dominance partition such as Voronoi polygons. People tend to marry one method, but you should consider that every method has particular conditions under which it performs well, and the choice depends on the aims of the research.
  • asked a question related to Interpolation
Question
4 answers
Which method is being used for interpolation in DSM generation using Pix4D Mapper, Metashape and Inpho UASMaster?
Relevant answer
Answer
Normally, these commercial software companies do not reveal the exact interpolation technique or formula they implement to derive UAV products such as DSMs and DTMs. However, Pix4D has clearly stated that a spatial interpolation algorithm is implemented to create the DSM (check this link: https://community.pix4d.com/t/dtm-interpolation/12000), and this algorithm, explained at (link: ), is used to derive the DTM from the DSM.
  • asked a question related to Interpolation
Question
10 answers
I am generating a regression model using daily-frequency data, and most of the independent variables have missing values on non-working days, while the dependent variable is daily with no missing values. What approach should I take?
Relevant answer
Answer
For the weekday non-working days I would choose to interpolate. For student purposes, I think the method is not so important (MA, mean, linear trend, etc.). For weekends, even God proposed to rest, so I suggest working with a 5-day week. Prof. George Stoica's comment above is very useful.
  • asked a question related to Interpolation
Question
6 answers
Hello,
I found an educational attainment dataset that has values every 5 years (1960, 1965, 1970, ...) and I want to interpolate the missing data using Stata. Does anyone know a method for doing this? Also, what are the limitations of interpolating missing data? (I need to mention them in my paper.)
Relevant answer
Dear Kostas Simoglou,
the problem of interpolating missing data is related to the resolution of the original series in terms of accuracy and time. Different results can be obtained with a high resolution of the time series combined with low accuracy of the thematic data, and vice versa with a low resolution of the time series and high accuracy of the thematic data.
Data interpolation between points is usually carried out either by polynomial functions based on the general regularity of the process, or by a set of sinusoids (spectral analysis). The general limitation of spectral analysis is as follows: harmonics whose period is greater than the length of the time interval, or less than twice the minimum sampling interval, cannot be determined. In other words, local extremes between the reference points cannot be recovered by interpolation. A study of the possibilities of spectral analysis of geospatial data in time and space can be found in: Pobedinsky G. G. Boundary conditions for the discreteness of geospatial data // Proceedings of the congress "Great Rivers '2012". Volume 1. Nizhny Novgorod, NNGASU, 2013, p. 402-405. http://www.nngasu.ru/cooperation/2012-tom1.pdf
Sincerely, G. G. Pobedinsky
  • asked a question related to Interpolation
Question
4 answers
Nonlinear mixed-effects models (may) consider data below the limit of quantification (BLQ) in parameter estimation. However, evaluation of the goodness-of-fit plots (observations vs. predictions in particular, using spline interpolation) displays a strong trend (of the spline interpolation, but not of the data) in the region of censored data, as if the model disregarded the BLQ data and the data were the lower limit of quantification itself, as structured in the database. I believe that the database is structured correctly and that the model considered the censored interval. Apparently this plot is the only one exhibiting this behavior.
Is spline interpolation adequately representing the capability of the final model in this case? How should I handle this situation?
Relevant answer
Answer
Interpolation can be thought of in two ways:
1) Interpolation methods approximate some underlying model
2) Interpolation simply approximates a set of numbers continuously
In case 1), interpolation will capture the essence of the underlying function if, and only if, the data are representative of the fundamental model. Noise will definitely throw your results off.
In case 2), you are simply generating an approximation from a set of known numbers, e.g. Lagrange interpolation; forward, centered and backward Newtonian interpolation, etc. The numbers in hand may or may not represent anything in particular, or they may not be accurate enough to enable one to glean the underlying processes. Purely interpolated results can and do grow wild as the power of interpolation is increased.
What I suggest is a low-power interpolation of the database; then look at the curve and compare it to any model you have in mind. If the interpolated results resemble some model strongly enough, you may pursue further investigation along that vein.
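A small numpy illustration of that last point, under the assumption of noisy samples from a smooth curve: a degree-10 polynomial that interpolates all 11 points oscillates far more than a low-degree fit.

import numpy as np

x = np.linspace(-1.0, 1.0, 11)
y = 1.0 / (1.0 + 25.0 * x**2) + np.random.normal(0.0, 0.01, x.size)  # noisy samples

p_high = np.polynomial.Polynomial.fit(x, y, deg=10)  # passes through every point
p_low = np.polynomial.Polynomial.fit(x, y, deg=2)    # low-power approximation

xx = np.linspace(-1.0, 1.0, 401)
print(np.ptp(p_high(xx)), np.ptp(p_low(xx)))  # high-degree range is much wider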
  • asked a question related to Interpolation
Question
12 answers
I was looking into GNSS-derived TEC data, and much of the data was missing, with gaps occurring repeatedly. What is the most precise way to handle such missing data? I suspect interpolation is not good enough for a large number of missing values.
Relevant answer
Answer
There are a lot of websites with GNSS data. It is better to select periods with good, continuous data sets.
Sincerely,
Christine
  • asked a question related to Interpolation
Question
4 answers
I'm in a situation where I need to compare the accuracy of one stress-strain curve with respect to the other, with both curves having different x and y coordinates. If both curves have the same x-coordinates (independent variable) and varying y-coordinates (dependent variable), I could use the R squared value or the Weighted Least Squares (WLS) method.
I'm trying to avoid interpolation, as there are many values and it would be a very tedious task.
Any help is appreciated :)
Relevant answer
Answer
Thank you for all your answers. It is much appreciated :)
I stumbled upon a software package called 'OriginLab' that gets the job done.
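For reference, a minimal numpy sketch of the common-grid approach (placeholder curves; np.interp assumes the strain values are increasing): resample both curves onto one strain grid, then compute RMSE and an R²-style score, so no manual interpolation is needed.

import numpy as np

# placeholder stress-strain curves sampled at different strains
strain_a = np.linspace(0.0, 0.10, 50); stress_a = 200e3 * strain_a
strain_b = np.linspace(0.0, 0.10, 80); stress_b = 198e3 * strain_b

grid = np.linspace(0.0, 0.10, 200)          # common strain grid
sa = np.interp(grid, strain_a, stress_a)
sb = np.interp(grid, strain_b, stress_b)

rmse = np.sqrt(np.mean((sa - sb) ** 2))
r2 = 1.0 - np.sum((sa - sb) ** 2) / np.sum((sa - sa.mean()) ** 2)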
  • asked a question related to Interpolation
Question
3 answers
Hi all,
I am hoping someone with experience in immunoassays will see this. The question I have is this: can one overcome the hook effect by interpolating high antigen concentration from the linear portion of a standard antigen curve?
Thanks in advance
Paul
Relevant answer
Answer
Hi Paul,
My practical experience with this is no. If you want to avoid issues with high analyte concentration, you need to use a competitive-format ELISA or include a sample dilution.
  • asked a question related to Interpolation
Question
11 answers
Hi everyone
Is ordinary kriging an old method? If so, is there a newer method that can be used for geostatistical interpolation?
Regards
Relevant answer
Answer
The interpolation technique one selects must be suitable to answer the specific question being raised. Ordinary kriging is robust under certain conditions; under other conditions it may not be the best technique. The data must support the technique employed, as the "Garbage In, Garbage Out" (GIGO) phrase holds true.
  • asked a question related to Interpolation
Question
3 answers
How can we interpolate the age of marker microfossils onto a new timescale?
E.g., if previous research papers or standard zonation charts used an older timescale, how can we use that particular microfossil on the new timescale?
When working with multiple microfossils we try to follow a single timescale (the most recent one), so this is required.
Relevant answer
Answer
With an example: in Agnini et al. 2014, the LAD of Discoaster lodoensis was calibrated at 48.37 Ma on the 1995 GPTS. It thus lies between the bottom of C21n (47.906 Ma) and the top of C22n (49.037 Ma). On the 2020 GPTS, those chron boundaries are at 47.760 and 48.878 Ma respectively. The mapped age of the LAD of Discoaster lodoensis on the 2020 GPTS is therefore, by linear interpolation: 47.760 + (48.37 - 47.906) * (48.878 - 47.760) / (49.037 - 47.906) = 48.2187 Ma. (We normally have a page on the NSB website allowing this kind of conversion, but it is currently buggy since we updated the website last month, and I am busy correcting it so it can handle this type of conversion again.)
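The general form of that mapping, as a short Python function (boundary ages as above):

def remap_age(age_old, old_top, old_bottom, new_top, new_bottom):
    """Linearly rescale an age from an old GPTS chron interval to a new one."""
    frac = (age_old - old_top) / (old_bottom - old_top)
    return new_top + frac * (new_bottom - new_top)

# LAD of Discoaster lodoensis, 1995 GPTS -> 2020 GPTS
print(remap_age(48.37, 47.906, 49.037, 47.760, 48.878))  # ~48.2187 Ma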
  • asked a question related to Interpolation
Question
22 answers
Dear all,
I know it may also depend on the distribution/behavior of the variable being studied, and that the sample spacing must be able to capture the spatial dependence.
But since kriging depends heavily on the variance computed within each lag distance, with a small number of observations we might fail to capture the spatial dependence, because we would have few pairs of points within a given lag distance and few lags overall. In particular, when the points are very irregularly distributed across the study area, with many observations in one region and sparse observations in another, the accuracy of the variance estimated per lag will also suffer.
Therefore, in such circumstances computing a semivariogram seems of little use. What are the best practices if we still want to use kriging instead of other interpolation methods?
Thank you in advance
PS
Relevant answer
Answer
You need to separate two questions: first, the number and spatial pattern of the data locations used in estimating and modeling the variogram; secondly, the number and spatial pattern of the data locations used in applying the kriging estimator/interpolator. These are two entirely different problems. The system of equations used to determine the coefficients in the kriging estimator only requires ONE data location, but the results will not be very useful or reliable. Now you must decide whether to use a "unique" search neighborhood to determine the data locations used in the kriging equations or a "moving" neighborhood. Most geostatistical software will use a "moving" neighborhood; if you do, about 25 data locations is adequate, and using more may result in negative weights and larger kriging variances. Depending on the total number of data locations and the spatial pattern, there may be interpolation locations with fewer than 25 data locations. Using a "unique" search neighborhood will likely result in a very large coefficient matrix to invert.
With respect to estimating and modeling the variogram you must first consider how you are going to do this. Usually this will include computing empirical/experimental variograms but for a given data set the empirical variogram is NOT unique. It will depend on various choices made by the user such as the maximum lag distance, the width of the lag classes and whether it is directional or omnidirectional. An empirical variogram does not directly determine the variogram model type, e.g. spherical, gaussian, exponential, etc. It also does not directly determine the model parameters such as sill, range.
Silva's question may seem like a reasonable one to ask but it does NOT have a simple answer. Asking it implies a lack of understanding about geostatistics and kriging.
Myers, D.E. (1991) On Variogram Estimation. In: Proceedings of the First International Conference on Statistical Computing, Cesme, Turkey, 30 March - 2 April 1987, Vol. II, American Sciences Press, 261-281.
Warrick, A. and Myers, D.E. (1987) Optimization of Sampling Locations for Variogram Calculations. Water Resources Research 23, 496-500.
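A minimal numpy sketch of an omnidirectional empirical variogram that makes the user choices mentioned above (maximum lag distance, lag-class width) explicit arguments; coords is an (n, 2) array of data locations and values the corresponding observations:

import numpy as np

def empirical_variogram(coords, values, lag_width, max_lag):
    # all pairwise distances and semivariances (each pair counted once)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    g = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)
    d, g = d[iu], g[iu]
    # bin by lag class; empty classes are skipped
    edges = np.arange(0.0, max_lag + lag_width, lag_width)
    lags, semis, counts = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (d >= lo) & (d < hi)
        if m.any():
            lags.append(d[m].mean()); semis.append(g[m].mean()); counts.append(m.sum())
    return np.array(lags), np.array(semis), np.array(counts)

Changing lag_width or max_lag changes the empirical variogram, which is exactly why it is not unique for a given data set; the pair counts per lag show where few observations make the estimate unreliable.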
  • asked a question related to Interpolation
Question
6 answers
Dears,
I would like to interpolate hourly gridded data (e.g. a 10 x 10 km spatial grid) to an irregular set of points. The dataset covers around 100,000 points and, at each point, 1 year of data (i.e. 8760 hourly time steps). I have tried using QGIS, but the system crashes when attempting to load a .txt file with 8,762 columns (e.g. lon, lat, T1, T2, T3, ..., T8760).
Probably Climate Data Operators (CDO) commands could be more useful/efficient in managing such heavy datasets.
Do you have any advice or experience to share on how to load and interpolate heavy datasets?
Thank you in advance for your kind reply.
Best,
Giorgio
Relevant answer
Answer
I wonder whether an R (e.g. interp, interpp, ...) or Python (e.g. scipy interpolate, ...) package may be useful here.
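One pattern that scales, sketched in Python/scipy under the assumption that the hourly fields sit on a regular lon-lat grid: build one RegularGridInterpolator per hourly field and evaluate it at all target points in a single vectorised call, streaming the results to disk rather than holding all 8760 x 100,000 values in memory (about 3.5 GB even in float32).

import numpy as np
from scipy.interpolate import RegularGridInterpolator

lat = np.linspace(40.0, 50.0, 101)   # placeholder ~10 km grid
lon = np.linspace(5.0, 15.0, 101)
pts = np.column_stack([np.random.uniform(40, 50, 100_000),
                       np.random.uniform(5, 15, 100_000)])  # (lat, lon) targets

for t in range(8760):
    field = np.random.rand(lat.size, lon.size)   # replace with the real hourly slice
    interp_fn = RegularGridInterpolator((lat, lon), field, method="linear")
    np.save(f"interp_t{t:04d}.npy", interp_fn(pts))  # one file per time step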
  • asked a question related to Interpolation
Question
22 answers
One of my research students (Ph.D. research) has done extension-type research on representing numerical data on a pair of variables by a mathematical curve. He derived new formulas from Newton's forward interpolation formula, Newton's backward interpolation formula, Newton's divided difference interpolation formula, and Lagrange's interpolation formula, and accordingly stated these four formulas in the thesis to be submitted for the degree. It was necessary to state them, since they are the sources from which the new formulas were derived. However, in the plagiarism check done by the university, they were treated as copied from others, and for this reason the thesis was deemed unfit for submission. The question therefore arises:
"Is it wrong to mention an existing formula in a thesis containing a new formula, if the new one is derived from it?"
Relevant answer
Answer
@Georgeta Vaman, he had cited proper references for those formulas, so I think your view has been complied with.
  • asked a question related to Interpolation
Question
3 answers
I have used the geolocation grid data and interpolated the values in between to get the incidence angles, but I am not able to assign the incidence angles to specific pixels of the image. Is there a way to get a 2-D matrix of the incidence angles, or any other approach that may help me get to this?
Relevant answer
Answer
The points must have x, y coordinates.
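Expanding on that: if the geolocation grid gives incidence angles at tie points indexed by image line and pixel, scipy can interpolate them to every pixel at once, producing the 2-D matrix asked for. The tie-point arrays below are hypothetical placeholders, not a specific product's API.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

tie_lines = np.array([0, 500, 1000, 1500])          # image rows of the tie points
tie_pixels = np.array([0, 400, 800, 1200])          # image columns of the tie points
tie_angle = np.random.uniform(29.0, 46.0, (4, 4))   # incidence angle at each tie point

interp_fn = RegularGridInterpolator((tie_lines, tie_pixels), tie_angle)
rows, cols = np.meshgrid(np.arange(1501), np.arange(1201), indexing="ij")
angle_2d = interp_fn(np.stack([rows.ravel(), cols.ravel()], axis=-1)).reshape(rows.shape)
# angle_2d[i, j] is now the incidence angle assigned to pixel (i, j)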
  • asked a question related to Interpolation
Question
6 answers
I had an XY table that I mapped, and I have successfully interpolated 26 other maps from the same data set. For some reason, a few of the maps are not showing the appropriate variation; rather, they show only a single value for every point.
Relevant answer
Answer
Interpolation means selecting a set of points on the map in order to generate unknown information from the data at those points.
  • asked a question related to Interpolation
Question
2 answers
Dear all:
When dealing with historical single radar station data (the default coordinate system is polar), converting the polar coordinates into a Cartesian system (e.g. WGS84) leaves NaN regions around the four corners of the converted data. How should these NaN regions be dealt with by proper interpolation methods?
Thank you all.
Relevant answer
Answer
One option might be to give these cells with missing values the mean or median value of the surrounding neighborhood of cells; maybe an 8-neighbor rule.
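A minimal scipy sketch of that suggestion, assuming the NaN corners border valid data: repeatedly replace NaN cells with the median of their 3 x 3 (8-neighbour) window, so the NaN regions are filled in from their edges.

import numpy as np
from scipy.ndimage import generic_filter

def fill_nan_by_neighbors(grid):
    filled = grid.copy()
    while np.isnan(filled).any():
        # nanmedian over each 3x3 window; all-NaN windows stay NaN for now
        med = generic_filter(filled, np.nanmedian, size=3, mode="nearest")
        mask = np.isnan(filled)
        filled[mask] = med[mask]
    return filled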