Question
Asked 28th Sep, 2019

Is there a rule for how many parameters I can fit to a model, depending on the number of data points I use for the fitting?

I have a model that needs calibration, but I am afraid that if I calibrate too many model parameters, I will overfit the data, or the calibration will be poorly constrained.
Can anyone suggest a method to determine the maximum number of parameters I should use?

Most recent answer

13th Apr, 2022
Tanay Dey
Homi Bhabha National Institute
I think it is better to guess the functional form instead of using a simple polynomial fit. Of course, in all cases we need to keep the outliers in mind. The advantage of using a physically motivated function is that the parameters will have some physical significance within your model. Also, from the chi-square per degree of freedom (χ²/ndf) of the different fits, you can compare their goodness of fit.
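As an editorial illustration of the χ²/ndf comparison described above, here is a minimal Python sketch; the synthetic data, the noise level, and the assumed exponential form are illustrative assumptions rather than anything from this thread:

```python
# Sketch: compare chi-square per degree of freedom (chi2/ndf) between a
# guessed physical function and a plain polynomial fit. All data are
# synthetic and the exponential form is an assumed example.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 30)
sigma = 0.2                                   # assumed measurement error
y = 2.0 * np.exp(-0.3 * x) + rng.normal(0.0, sigma, x.size)

def physical_model(x, a, b):
    # Guessed functional form: parameters a, b carry physical meaning
    return a * np.exp(-b * x)

popt, _ = curve_fit(physical_model, x, y, p0=[1.0, 0.1])
chi2_phys = np.sum(((y - physical_model(x, *popt)) / sigma) ** 2)
print("exp model  chi2/ndf:", chi2_phys / (x.size - 2))

coeffs = np.polyfit(x, y, deg=3)              # 4-parameter cubic for contrast
chi2_poly = np.sum(((y - np.polyval(coeffs, x)) / sigma) ** 2)
print("cubic poly chi2/ndf:", chi2_poly / (x.size - 4))
```

A reduced chi-square close to 1 indicates a fit consistent with the assumed errors; values well above 1 suggest a poor model, while values well below 1 can indicate overfitting or overestimated errors.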

All Answers (8)

29th Sep, 2019
Debopam Ghosh
Atomic Minerals Directorate for Exploration and Research
For the purpose of calibration, I suggest you first carry out a linear/planar model fit and perform the appropriate statistical analyses, including model diagnostic tests. If you find that the linear/planar model fits your data poorly, you can move sequentially to higher-order models, selecting the one that best fits your data using a stepwise regressor-selection algorithm.
An alternative approach is to use a neural network: design a network with a pre-chosen number of parameters (connection weights, bias values, number of neurons in the hidden layers, etc.), then use experimental data on the process inputs and outputs as a training set with an appropriate optimizer, e.g. the back-propagation algorithm or a genetic algorithm.
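To make the "sequentially go to a higher-order model" idea concrete, here is a minimal Python sketch that raises the polynomial order step by step and compares the fits with AIC; AIC is a swapped-in selection criterion (one common choice, not named in the answer above), and the data are synthetic:

```python
# Sketch: increase polynomial order sequentially and compare fits via AIC
# (Akaike Information Criterion); the true quadratic model is an assumption.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 25)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0.0, 0.1, x.size)

def gaussian_aic(rss, n, k):
    # AIC under Gaussian errors: n*log(RSS/n) + 2k, with k = parameter count
    return n * np.log(rss / n) + 2 * k

for deg in range(1, 6):
    coeffs = np.polyfit(x, y, deg)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    print(f"degree {deg}: AIC = {gaussian_aic(rss, x.size, deg + 1):.1f}")
```

The lowest AIC typically lands near the true order (2 here); higher orders reduce the residuals slightly but are penalized for their extra parameters.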
29th Sep, 2019
Jianjun Qin
Shanghai Jiao Tong University
Normally, the number of parameters should be less than the number of data points.
29th Sep, 2019
El-Sayed Mahmoud El-Rabaie
Faculty of Electronic Engineering, Menouf
The fitting problem starts with how you choose the shape of the fitting equation; you then search for the fitting parameters. If the shape of the data is hard to discern, you can assume a second-order nonlinear equation and raise the order gradually until you reach the accepted accuracy. For some problems the model may be taken as a rational function, with some parameters in the numerator and others in the denominator. You can also consult several published papers on my list of publications that cover this topic.
Good Luck
Prof. S. El-Rabaie
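A minimal sketch of the rational-function (numerator/denominator polynomial) fit suggested above, using scipy.optimize.curve_fit; the first-order numerator and denominator, and the synthetic data, are assumptions for illustration:

```python
# Sketch: fit a rational function with parameters in both the numerator
# and the denominator; raise the orders gradually if the fit is poor.
import numpy as np
from scipy.optimize import curve_fit

def rational(x, a0, a1, b1):
    # First-order numerator and denominator; the denominator is fixed to
    # 1 + b1*x to remove the overall scale ambiguity.
    return (a0 + a1 * x) / (1.0 + b1 * x)

rng = np.random.default_rng(2)
x = np.linspace(0.0, 5.0, 40)
y = (1.0 + 0.5 * x) / (1.0 + 0.8 * x) + rng.normal(0.0, 0.01, x.size)

popt, pcov = curve_fit(rational, x, y, p0=[1.0, 0.5, 0.5])
print("fitted parameters a0, a1, b1:", popt)
```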
1st Oct, 2019
John A Heathcote
I agree with Qin that the number of parameters should not usually exceed the number of data points less one. Deviating from this can be justified only if you have information additional to your data, e.g. a theoretical reason why a certain function is relevant. More parameters, up to one fewer than the number of data points, will give a better fit (i.e. reduced residuals) but not necessarily better understanding; see the comments from others on how to assess this. The more parameters you have, the more poorly behaved the model is likely to be outside the range of the data, or even between data points.
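The point about poor behaviour between and beyond the data points can be demonstrated directly. In this sketch, a degree-7 polynomial through 8 noisy points (as many parameters as data points) is compared with a modest cubic at locations off the data grid; all values are illustrative:

```python
# Sketch: an exact-interpolation fit (as many parameters as data points)
# often oscillates between data points and diverges beyond them.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 8)
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.05, x.size)

exact = np.polyfit(x, y, deg=x.size - 1)   # 8 parameters for 8 points
modest = np.polyfit(x, y, deg=3)           # 4 parameters

x_test = np.array([0.5 * (x[3] + x[4]),    # midway between two data points
                   1.2])                   # just outside the data range
print("degree-7 fit:", np.polyval(exact, x_test))
print("degree-3 fit:", np.polyval(modest, x_test))
print("true values :", np.sin(2.0 * np.pi * x_test))
```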
3rd Oct, 2019
Sanjiv Sharma
University of Bristol
Typically, if the number of parameters in a model is fewer than the number of data points, then some optimisation process (like the least-squares method) can be used to determine the optimal model parameters. However, if the number of parameters is greater than the number of data points, then a unique set of parameters cannot be determined; in certain cases, the Lagrange-multiplier approach has been proposed (see for example: http://people.csail.mit.edu/bkph/articles/Pseudo_Inverse.pdf ).
To illustrate: consider a simple linear model; it has two parameters, the gradient, m, and the offset, c. Two or more data points are needed to estimate the numerical values of m and c. If we had only one data point, then an infinite number of lines could be fitted, all equally viable. However, if prior information about one of the model parameters were known, then the infinite set of solutions would collapse to a single solution!
I hope this helps!
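To make the one-point line example concrete, here is a minimal NumPy sketch of the pseudo-inverse (minimum-norm) solution; the specific numbers are illustrative assumptions:

```python
# Sketch: fitting y = m*x + c from a single data point (x=2, y=5).
# The system is underdetermined: any (m, c) with 2m + c = 5 fits exactly.
import numpy as np

A = np.array([[2.0, 1.0]])        # design matrix row [x, 1] for x = 2
y = np.array([5.0])

m_min, c_min = np.linalg.pinv(A) @ y   # minimum-norm member of the family
print("minimum-norm solution m, c:", m_min, c_min)

c_known = 1.0                     # prior information about the offset
m_unique = (y[0] - c_known) / 2.0 # the family collapses to one line
print("unique slope given known c:", m_unique)
```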

Similar questions and discussions

How do I report the results of a linear mixed models analysis?
Question
45 answers
  • Subina Saini
1) Because I am a novice when it comes to reporting the results of a linear mixed models analysis, how do I report the fixed effect, including the estimate, confidence interval, and p-value, in addition to the size of the random effects? I am not sure how to report these in writing. For example, how do I report the confidence interval in APA format, and how do I report the size of the random effects?
2) How do you determine the significance of the size of the random effects (i.e. how do you determine if the size of the random effects is too large and how do you determine the implications of that size)?
3) Our study consisted of 16 participants, 8 of whom were assigned a technology with a privacy setting and 8 of whom were not. Survey data were collected weekly. Our fixed effect was whether or not participants were assigned the technology. Our random effects were week (for the 8-week study) and participant. How do I justify using a linear mixed model for this study design? Is it accurate to say that we used a linear mixed model to account for missing data (i.e. non-response; technology issues) and participant-level effects (i.e. how frequently each participant used the technology; differences in technology experience; high variability in each individual participant's responses to survey questions across the 8-week period)? Is this a sufficient justification?
I am very new to mixed models analyses, and I would appreciate some guidance. 
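As an editorial sketch of what such an analysis looks like in code (not part of the original question), the following uses statsmodels' MixedLM on synthetic data shaped like the study described above; all column names and effect sizes are assumptions:

```python
# Sketch: linear mixed model with a fixed treatment effect and a random
# intercept per participant, on synthetic data mimicking the 16-person,
# 8-week design. Column names and effect sizes are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
rows = []
for pid in range(16):
    treatment = int(pid < 8)            # half assigned the privacy setting
    u = rng.normal(0.0, 0.5)            # participant-level random intercept
    for week in range(1, 9):
        score = 3.0 + 0.5 * treatment + u + rng.normal(0.0, 1.0)
        rows.append({"participant": pid, "week": week,
                     "treatment": treatment, "score": score})
df = pd.DataFrame(rows)

# A random slope for week could be added with re_formula="~week".
model = smf.mixedlm("score ~ treatment", data=df, groups=df["participant"])
result = model.fit()
print(result.summary())      # fixed-effect estimate, SE, z, p, 95% CI
print(result.conf_int())     # confidence intervals for reporting
```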

Related Publications

Article
This discussion reviews progress in applying conversational programming techniques to obtain parameters of Weibull and double exponential distribution functions for sets of fatigue lives measured at constant stress amplitude. Cases also are described where the techniques have been used to describe distributions of measured maximum corrosion pit dep...
Article
The proposed SBBB distribution allows for a more complete summarization of stand structure data than has been possible in forestry until now. For two data sets constructed out of a large loblolly pine data set to resemble possible plantations, the univariate SB and the trivariate SBBB distributions fit the data reasonably well as measured by χ2. Th...
Article
A conceptual model to predict the threshold shear velocity, which should be overcome to initiate deflation of moist sediment, was recently developed by Cornelis et al. The model relates the threshold shear velocity to the ratio between water content and the water content at a matric potential of -1.5 MPa, and contains one proportionality coefficien...