Science topic

Microeconometrics - Science topic

Explore the latest questions and answers in Microeconometrics, and find Microeconometrics experts.
Questions related to Microeconometrics
  • asked a question related to Microeconometrics
Question
2 answers
Have you ever read this article?
Muñoz, Lucio, 2019. From Traditional Markets to Green Markets: A Look at Markets Under Perfect Green Market Competition, Weber Economics & Finance (ISSN:2449-1662), Vol. 7 (1) 2019, Article ID wef_253, 1147-1156
Relevant answer
Answer
Wendy, thank you for writing. I noticed that in your comment you do not mention a need overlooked even by the WCED in 1987: the transition from pollution-production economies to pollution-less economies, for example from environmentally polluting economies to environmentally clean ones. For that you first need to set up pollution-reduction markets like green markets, which can then be driven towards pollution-less markets by closing the renewable energy technology gap so that environmental pollution reduction becomes a profit-making opportunity.
I appreciate the comment.
Respectfully yours,
Lucio
  • asked a question related to Microeconometrics
Question
2 answers
Have you ever read this article?
Muñoz, Lucio, 2014.  Understanding the Road Towards the Current Dominant Non-Renewable Energy Use Based Economy: Using An Inversegram to Point Out a Step by Step Strategy Towards an Efficient Dominant Renewable Energy Use Based Economy, Boletin CEBEM-REDESMA, No. 11, December 23, La Paz, Bolivia.
Relevant answer
Answer
Osama, since the 1987 WCED report the world has known that there is a need to transition systematically, locally and globally, to clean economies and to leave pollution-production economies, like the coal-based and oil-based economies, behind. No plan has ever been made; even the Paris Agreement avoids going beyond managing externalities a la sustainable development. The paper above is a step-by-step way to do it, if one day the world has to. A sustainability crisis kept under management will sooner or later backfire and force a more painful and faster local and global transition from pollution-production economies to pollution-reduction economies, and then to clean economies. You can see the trend of the crisis going from bad to worse by comparing the WCED data of 1987 with the data of 2024, while the problem has been under management instead of being fixed.
Thank you for taking the time to comment
  • asked a question related to Microeconometrics
Question
3 answers
I am conducting research in which I want to investigate the effect of tax incentives on research and development intensity in a group of firms. I have access to the data of a survey that:
  • It covers only one year (it is cross-sectional in nature).
  • The companies were not chosen randomly; only companies that conduct research and development were surveyed.
  • It includes nearly 3,000 companies, about 20-30% of which have been exposed to tax incentives. (As far as I know, there are about 1,500 further companies that have done research and development but did not cooperate with the surveyors.)
  • My dependent variable, the share of research and development expenses in the total expenses of the company, is a number between 0 and 1.
I'm having a little trouble choosing the right econometric method for this research.
#microeconometrics
Relevant answer
Answer
Dear Sadam Jamaldin and V. Kerry Smith, thank you very much for your answers and tips.
  • asked a question related to Microeconometrics
Question
9 answers
Carbon markets have become popular, locally and nationally, including in Canada, as a way to address carbon pollution. This raises the question: are carbon-price-based markets green markets? Why?
I think not; what do you think?
Relevant answer
Answer
They are not green markets because they are not cleared by a green market price; they are markets based on managing environmental externalities.
  • asked a question related to Microeconometrics
Question
7 answers
Hi, in an RCT I have 3 different treatment groups and one control group. The size of the control group is around 1,000, while the sizes of the other groups are just above 300.
To test balance I used ANOVA and a Welch test, which show that several variables are unbalanced. Can I draw a smaller random sample (400) from the control group so that the sizes of all groups are roughly equal?
After doing that, only one variable is unbalanced. Would it cause any problems if I drew a smaller sample just for better balance? Thanks!
Relevant answer
Answer
My apologies for not noticing sooner that this thread continued with a further question. Consider running a random effects panel regression, with each person who is tracked over time as a panel, person-level demographic and other explanators, and dummy variables denoting the group/village each person was in (with no dummy for the comparison group). Run a Hausman test to confirm that random effects make sense. To look at the effects of the intervention, initially do not code demographic/geographic differences between the villages as explanators. But in a subsequent run, code those to probe whether differences observed (or the lack thereof) resulted from differences between the villages rather than from the intervention. Often, the significance of the intervention only becomes clear when those village-level controls are added. I'm unsure if ANCOVA also allows a panel formulation.
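The subsampling check described in the question can be sketched as follows. The data are simulated and purely hypothetical; `scipy`'s one-way ANOVA stands in for the ANOVA/Welch tests the question mentions:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
# hypothetical baseline covariate (e.g. age) for 1 control and 3 treatment arms
control = rng.normal(40.0, 10.0, 1000)
arms = [rng.normal(40.0, 10.0, 300) for _ in range(3)]

# draw a smaller random control sample so group sizes are comparable,
# then re-run the one-way ANOVA balance test on the four groups
control_sub = rng.choice(control, size=400, replace=False)
stat, p = f_oneway(control_sub, *arms)
print(f"ANOVA F = {stat:.2f}, p = {p:.3f}")
```

Note that discarding control observations only for cosmetic balance throws away precision; the regression-with-controls approach suggested above keeps the full sample.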
  • asked a question related to Microeconometrics
Question
4 answers
Wyższa Szkoła Biznesu - National Louis University is considering tendering for a project financed by the Polish Agency for Enterprise Development (PAED). The main aim of the project is to create a publication on the use of econometric modelling in the evaluation of public programmes, policies, strategies and regulations. The publication (book) must consist of 8 articles in Polish (minimum 25K characters each). The deadline for preparing those 8 papers, including their content editing, is June 3rd, 2019. The deadline for completing the entire book (proof-reading etc.) is July 10th, 2019. PAED (PARP) is supposed to select 8 papers out of the 10 proposed in the tender.
I would be more than happy to arrange 2 more papers as different thematic examples of using microeconometric modelling in the evaluation of public policies (e.g. labour market, firm competitiveness, environment). I would want the authors to share their achievements in the field of the microeconometric evaluation of public policies (e.g. outcomes, methodological advantages and drawbacks, concepts to improve this research method, counterfactual analysis, etc.). What is more, the role of the micro-analysis for the procedure of calculating spill-over elasticities at the macro-level could be described.
Thus, I would like to find out if there are any experts conducting this type of microeconometric research in the EU who would be willing to write a text and be a part of the book. As I mentioned I need two more papers. The analysis of the impacts of the innovation-supporting policies would be especially welcome. However, any other thematic fields are also needed.
The book is targeted at a more general audience of policy makers and others, so the technical/scientific language should not be overdone. Unfortunately, the book is supposed to be in Polish. However, we would bear the translation costs (from English into Polish). Moreover, there are no geographical constraints, hence you are not obliged to write a paper on the Polish economy.
In case of any questions feel free to contact me at mogila.zbigniew@gmail.com
Relevant answer
Answer
Sounds interesting. I would be happy to receive more details.
  • asked a question related to Microeconometrics
Question
3 answers
Hello everyone,
Hope you’re doing well. I’m trying to explore whether the level of human capital investment (e.g. educational expenses) is lower for disabled children in comparison to their siblings. At this point, I’m hoping to get suggestions on how to specify a model that would allow me to restrict the analysis at the intra-household level and between siblings. What I’m thinking so far (using a linear or Poisson specification) and restricting the sample to siblings only:
Edu_expenses ~ f(disabled (yes=1 / no=0); severity of disability; demographic indicators; socioeconomic proxies; household fixed effects; location dummies)
Will this suffice in achieving my objectives?
Thanks in advance for your suggestions!
Best, Wameq
Relevant answer
Answer
1. You need to run a panel regression. Each family is a panel. Since you mentioned HH fixed effects, I think you are aware of that.
2. Poisson is a count (arrival-rate) distribution and does not make sense in this context, although you could try an mlogit or a hazard-rate regression on the number of years of school attended. Since expenses have a long-tailed range, you will probably want a log-linear model. Rather than OLS, most likely run a generalized linear model, possibly experimenting with a gamma link. Especially if your severely disabled children often have no educational expenses, consider a 2-stage model based on Manning, W. G., Basu, A., & Mullahy, J. (2005). Generalized modeling approaches to risk adjustment of skewed outcomes data. Journal of Health Economics, 24, 465–488. doi:10.1016/j.jhealeco.2004.09.011
3. You need a variable to control for birth order (i.e., first child, second child, etc). In that count, exclude children who died before they were school age. So e.g., first child surviving to age 5.
4. I would probably exclude families with a multiple birth.
5. Sex of child obviously is a critical variable and I am assuming all the data come from one country.
6. How will you handle government vs private expenditure, compulsory education rules. (obviously not an issue in some countries) [Extreme example: in the US, severely disabled all are sent to school as the school essentially provides custodial day care or a pupil personnel worker who comes to the home.]
7. If you are collecting data prospectively, use the WHOQOL-BREF with the parents. Get a parental assessment of the child's functional capacity using the Health Utilities Index 3, the PedsQL, or one of the new generation of similar instruments.
8. If you wish to chat further, I'm in the DC suburbs, miller@pire.org
  • asked a question related to Microeconometrics
Question
15 answers
I am currently working on project regarding the location determinants of FDI. I have been reading 'Cameron, A.C. and Trivedi, P.K., 2010. Microeconometrics using stata (Vol. 2). College Station, TX: Stata press.' and they indicate that it is essential that for panel data, OLS standard errors be corrected for clustering on the individual. I have 19 countries over 17 years. I was advised that cluster-robust standard errors may not be required in a short panel like this. Could someone please shed some light on this in a not too technical way ?
Thanks.
Relevant answer
Answer
On theoretical grounds, you can find a good and accessible introduction to the topic in the 8th chapter of Mostly Harmless Econometrics (Angrist & Pischke, 2008).
Intuitively, clustered standard errors allow researchers to deal with two issues:
(1) Correlation of observations in the same group (e.g., students in the same class, who are more likely to be similar or to share unobserved characteristics than students in other classes).
(2) Correlation over time within the same units (e.g., students or classes over time).
This approach allows for any sort of correlation within the clusters and over time, so the researcher does not have to worry about serial correlation.
In your case, only the second motivation applies, since you have countries (with no smaller units inside them that you are considering) and years. Therefore your clusters should, in principle, be countries. This is easily implemented in Stata (command xtreg):
xtreg depvar indepvars, re vce(cluster group)
where group would be country in your case.
The problem is that you only have 19 countries here. It is not clear what the threshold is (most of the literature suggests at least around 50 clusters).
A remedy for this is proposed by Cameron and other authors, and you can implement it through the package clustse in Stata (you have to download it).
Another possibility is to model the serial correlation directly. Depending on the topic, one should have some idea of what the serial correlation might look like; for instance, fertility usually follows an AR(1) process. One could also test for robustness to different forms of serial correlation.
Related to this, it is useful to comment on the level at which you should cluster. You only have to worry about this when you introduce variables other than dummies at a certain level (if they are just dummies and you are not interested in their coefficients, that level is not a problem). Sometimes you have two levels that do not overlap, e.g., firms and workers, with workers moving from one firm to another over time. In that case you must use two-way clustering (in Stata, use the package reghdfe). Your case is not this one, as far as I know. When you have two nested levels (e.g., regions and countries, with countries comprising regions and both region- and country-level variables), you just have to cluster at the highest level (e.g., country).
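For readers working outside Stata, here is a minimal Python sketch of country-clustered standard errors on simulated data shaped like the panel in the question (19 countries, 17 years); the variable names and `statsmodels` are assumptions, not part of the original thread:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
# hypothetical balanced panel: 19 countries x 17 years
df = pd.DataFrame([(c, t) for c in range(19) for t in range(17)],
                  columns=["country", "year"])
alpha = rng.normal(0.0, 1.0, 19)  # country-level shock shared across years
df["openness"] = rng.normal(0.0, 1.0, len(df))
df["fdi"] = (1.0 + 0.5 * df["openness"]
             + alpha[df["country"]] + rng.normal(0.0, 1.0, len(df)))

# OLS with standard errors clustered at the country level
res = smf.ols("fdi ~ openness", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["country"]})
print(res.bse["openness"])
```

With only 19 clusters the asymptotics behind this estimator are shaky, which is exactly the small-number-of-clusters caveat discussed above.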
I hope it helps.
All the best,
Nacho
  • asked a question related to Microeconometrics
Question
4 answers
Where can I get firm-level panel data for Asian countries? What is the approximate cost?
Is there any ready-to-use industry-level panel data for Asian countries, like the ASI 3-digit industry data for India?
Relevant answer
Answer
You can search for it in the Bloomberg database. However, it is a very costly database and may not be available everywhere, so contact any IIM and request access to Bloomberg; you can then get that data for every country in the world.
  • asked a question related to Microeconometrics
Question
2 answers
Whenever I run a cost frontier function in Frontier 4.1c, all the cost efficiency (CE) indices go beyond 1. Please advise.
Relevant answer
Answer
Thank you, Valentina. I later came to understand that cost efficiency indices from the Frontier 4.1c software are greater than one due to the nature of the error term (v + u).
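A minimal numerical illustration of that point, assuming the usual one-sided inefficiency term u ≥ 0: on a cost frontier, observed cost equals frontier cost times exp(u), so the reported cost-efficiency index exp(u) can never fall below one:

```python
import numpy as np

rng = np.random.default_rng(4)
# one-sided (half-normal) inefficiency term of the composed error v + u
u = np.abs(rng.normal(0.0, 0.3, 1000))

# cost-efficiency index on a cost frontier: CE = exp(u) >= 1,
# because inefficiency can only raise cost above the frontier
ce = np.exp(u)
print(ce.min(), ce.mean())
```

(On a production frontier the convention flips: technical efficiency exp(-u) lies between 0 and 1.)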
  • asked a question related to Microeconometrics
Question
3 answers
Hi guys. I have a dataset with two waves. Does it make sense to estimate a logit model with fixed effects? Is there also an ordered logit with FE? Or is it better to pool the data and use it as a pooled cross-section?
Relevant answer
Answer
Dear Sebastian, as you only have two waves, just pool your data and add a time dummy. You can add some time*X interactions where you think appropriate. Cheers, Marc
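Marc's suggestion (pool the two waves, add a wave dummy and optional interactions) can be sketched on simulated data; the variable names and effect sizes below are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 1000
# two hypothetical waves stacked into one pooled cross-section
df = pd.DataFrame({"x": rng.normal(0.0, 1.0, 2 * n),
                   "wave2": np.repeat([0, 1], n)})
p = 1.0 / (1.0 + np.exp(-(-0.2 + 0.8 * df["x"] + 0.3 * df["wave2"])))
df["y"] = rng.binomial(1, p)

# pooled logit with a time dummy and a time interaction:
# wave2 shifts the intercept, x:wave2 lets the slope differ by wave
res = smf.logit("y ~ x + wave2 + x:wave2", data=df).fit(disp=0)
print(res.params)
```

A significant `x:wave2` coefficient would indicate that the effect of x changed between the waves.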
  • asked a question related to Microeconometrics
Question
4 answers
Hello,
I am working on a dissertation about green IT adoption. When I checked discriminant validity (comparing the square root of AVE with the corresponding correlations), I found that 3 latent variables have serious discriminant validity issues. The variables are:
  1. Green Intention in Purchasing/Using IT product (GIP) (3 questions)
  2. Intention to Support Green-imaged business (ISG) (3 questions)
  3. Environmental Concern & Habit (ECH) (5 questions)
From the data I acquired, most respondents answered between 4 and 5 (on a 5-point Likert scale) on all three constructs, which probably caused the discriminant validity issue. GIP has a square root of AVE of 0.808, and ISG of 0.802, but the correlation between them was 0.93. Removing observed variables didn't help much. Thus, as A. M. Farrell (2009) suggested, I combined GIP and ISG.
However, there is a very serious discriminant validity issue between ECH and GIP+ISG. Again, removing some observed variables didn't fix the problem. I tried a rather bizarre thing: I converted ECH from a multiple-item to a single-indicator construct (using the mean score), and it then showed discriminant validity. Without literature to cite, I'm not sure whether that is acceptable. This is a statistical quagmire.
My curiosities are:
Q1: When such a serious discriminant validity issue occurs, is it acceptable to combine latent variables? Is there any other way? Or should I split the model, one for GIP and another for ISG?
Q2: For ECH, is it appropriate to turn a multi-indicator construct into a single indicator to solve the discriminant validity issue, or is it better to eliminate ECH?
Q3: In a cross-sectional study, how does one validate a single-indicator variable? I heard that the test-retest method is the only way, but it is only for longitudinal studies.
Best regards,
Relevant answer
Answer
Seems to me that the initial problem is that many items have no variance, or very little (if everyone answered 4 or 5, then they are essentially endorsing all the items).
In a case like this, you are up against the very limits of science, which attempts to relate variation in one property to variation in another. With no variation to model, the analysis just isn't possible. The distinction between a 4 and a 5 on a Likert scale is driven by factors like response recency, education, etc., so much of the variance in the data is going to be non-construct-related but correlated between items.
I am doubtful if anything can be done in this case, I'm sorry to say.
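For reference, the Fornell-Larcker comparison described in the question (square root of AVE versus latent correlation) can be reproduced with the figures reported above; the individual loadings are hypothetical, chosen only so that the square roots of AVE come out near the reported 0.808 and 0.802:

```python
import numpy as np

# hypothetical standardized loadings for the two constructs (3 items each)
load_gip = np.array([0.80, 0.82, 0.79])
load_isg = np.array([0.78, 0.81, 0.82])

def sqrt_ave(loadings):
    # AVE = mean of the squared standardized loadings
    return float(np.sqrt(np.mean(loadings ** 2)))

r = 0.93  # latent correlation between GIP and ISG, as reported above
# Fornell-Larcker criterion: sqrt(AVE) of each construct must exceed r
ok = sqrt_ave(load_gip) > r and sqrt_ave(load_isg) > r
print(sqrt_ave(load_gip), sqrt_ave(load_isg), ok)
```

With sqrt(AVE) around 0.80 against a correlation of 0.93, the criterion fails, which is the discriminant validity problem the question describes.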
  • asked a question related to Microeconometrics
Question
4 answers
My macroeconomics model has many closed circuits. It contains at least 20 variables and requires the ability to make decisions (or possibly conditional jumps) based on sub-criteria and formulas. The question is: what simulation programs can satisfy this need?
Relevant answer
Answer
Hi,
You can try Minsky software http://www.ideaeconomics.org/minsky/
It is free and can be downloaded from https://sourceforge.net/projects/minsky/files/
The program is similar to Mathcad, Mathematica, MATLAB and other mathematical modelling/simulation tools, but is optimized for accounting-based, flow-of-funds analysis.
  • asked a question related to Microeconometrics
Question
5 answers
Experts in the field of Micro-finance.
Relevant answer
Answer
Thank you, sir.
  • asked a question related to Microeconometrics
Question
13 answers
The question is how to prove / estimate which characteristics large, medium, small and micro companies have an impact on. Supposing large companies have an impact on unemployment and GDP, I could use those two plus the average wage. Do you have some other ideas? A problem is data availability (data from the Czech Statistical Office). What methods would you use? How would you eliminate (minimize) the influence of other factors? Is the only solution to use appropriate statistical methods? What about a qualitative explanation? What about time delays?
Do you know of articles dealing with this, i.e. the impact of companies according to their size, excluding / minimizing other factors?
Thank you very much!
Relevant answer
Answer
Radka,
If your ultimate goal is to be able to assess the overall level of those systemically stable elements of a business's activities which have an enduring 'local impact', and at the same time recognise that business activities are not events occurring at a point in time but processes of which time is an attribute of each of the effects of each of their actions, then you have immediately placed your work outside the CGE orthodoxy of mainstream economic analysis. Not an uncommon place to be for most development economists once they become focussed on delivering real results for real communities rather than marketable policy recommendations for the PR and political rhetoric purposes of central governments.
In the former case your route should take you into the methods, tools and concepts of the evolutionary and institutional approaches within economics.  It should encompass all the behavioural, organisational and legislative institutions whose operation flows through time and create an institutional landscape characteristic of the context of each target community, and of those more remote communities whose landscapes' characteristics are themselves determinant factors in the conduct of the local operations of those firms whose ownership and control is influenced by the norms and values embedded in them and which differ from those of the local target's institutional context (as described by its local institutional landscape).
It sounds complex but, when you have visualised it, the idea is quite simple. With that idea you have a framework inside which you are able more easily to conceive of the factors which you seek and to design measurement methodologies appropriate to their nature. An overall assessment of what will be a profile of disparate metric types might best be approached by another 'landscaping' approach, this time based upon a 'radar diagram' representation rather than the multi-dimensional 'surface analogue' by which institutional landscapes might be compared and assessed.
This part of such a methodology might equip you to see how business size might be correlated with the systemic exportation, through transfer payments and profit attribution, of labour value to the detriment of local goals for long-term development, stability and the social cohesion which changes in income disparities tend to damage.
This approach might also provide the framework within which might be studied the effect of re-purposing and redirecting of local labour into directly creating and evolving local institutions that are conducive to the sustained development, support and maintenance of local social, physical and organisational infrastructure that are conducive to raising the efficiency and effectiveness of the local community in producing and increasing value which feed directly into the 'well-being value' experienced as a lifetime flow for each individual and their households.
The same methodology, applied to the current institutional landscape and the firms operating within it, would identify those behavioural and policy institutions, operating upon and within the local community, whose main function is, in practice, to facilitate the 'export' to more central, and hence remote, communities of value produced by local labour through their local economic activities.
Given the characterisation and detailed understanding of these two states it would create a locally relevant contextual framework of the present and future institutional landscapes and suggest the policy paths, whose evolution and implementation would lead to the evolutionary diminution and, perhaps, abandonment of those institutions of the present which are antagonist to the desired development goal.  Similarly, those policy paths would include elements which lead to the creation and enhancement of those institutions which are then being seen as promoting the local development goals.
Moral and ethical integrity, of global international force and validity, can be embedded within the target developmental institutional landscape for each locality.  This can be done by integrating into the desired institutional landscape those elements which are consistent with that set of institutional elements, that as a set are regarded as necessary and sufficient, for that institutional proto-landscape which is deemed to describe and define the conditions essential for any society or community capable of durably, resiliently and sustainably conducting its life in strict accordance with the aggregate agreed wisdom of humanity as expressed through the normative declarations of those statutory international bodies we have established since 1948 and which are now evolving out of their infantile stage into some form of co-responsibility and credibility.
The same methodology of landscape building applied to the current body of pronouncements as a whole would perhaps be very instructive and salutary for us all to consider.  It would be expected to identify those elements in what has already been said and agreed which are in fact antagonistic to the overall goals and perhaps serve various local interests and have no place in the setting of a holistically relevant, universally applicable vision of a humanitarian future by which humanity may not only survive and evolve but in which every lifetime lived can also take pleasure in doing so.
Radka, the approach you choose to take now cannot employ a methodology that is not yet in existence. However, the birth of that methodology can be facilitated by researchers such as yourself developing, in the course of their own work, small elements and tools, and employing specific conceptual frameworks, in argument, which are consistent with its ideas.
Human behaviour has and continues to evolve.  Societal development, at the social and the economic levels, is one of continual evolution and these elements are completely intertwined and are characterised by flows, and flows within flows.  Some order is imposed upon that river flowing through time by degradable and modifiable conduits of different strength, diameter and length... they are our institutions.  The river has many tributaries.. the isolated communities in our history. But, we can now see that it will always be characterised as being a flow passing through a web of channels around many obstacles... these are our geographic and environmentally determined regions.  They lie in ever changing patterns as they present themselves and we pass them by... but always as one river.
This river of institutions and regions is itself comprised of molecules of water... each one of which is you and I, and every other person.  The river is us.  No molecule is any different from any other, and through the twists and turns as the river runs along its bed of time any molecule may end up in any channel, in any conduit and each remains a part of the whole, entitled to be in the liquid which is the water of the river.  End of metaphor...before I stretch it too far.
The point being that you cannot model the behaviour of any part of a river using a time series of statically calculated equilibria at each moment along the chosen axis of time and extrapolate and predict future states of equilibria beyond the present moment.  Using our metaphor, you know that there are conduits, ending, starting, flexing and cracking.  But you do not know which, where or when.  Just that they will.  The same goes for the obstacles up ahead, the channels opening up and those which are closing ahead of you, even dead-ends forming.  You know that they will occur and again do not know when,where and which.  
Any physicist would die laughing at you if you tried to build such a model, and would throw a tome or two describing basic fluid mechanics at you as he took his last gasping, hilarious breath. You may still be alive. But you are wrong and he was right! At least, demonstrably more so than yourself. Fluid mechanics may only hold a few useful tools, or even just concepts to help our thinking about ourselves. But we can be quite sure that no living creature has been or ever will be able rationally to arrive at optimal solutions to most of life's problems in any useful way at the individual level, and so, when aggregated, their behaviour cannot be expected to be represented in any accurate manner.
NP-hard problems make up most of life's decisions, we do not calculate solutions to them, nor could we for the most part.  We have evolved, and continue to evolve, other means to handle life's complexity.  Much of that methodology is now institutionalised in our newly acquired technologies and tools.
For more discussion of this I refer you back to what I mentioned many words earlier. A seminal textbook, bringing together foundational and coherent argument more suited to handling and studying the realities of life, its societies and its economic activities in particular, is in the making here on RG, Prof. Shiozawa being one of the authors. One might also delve into http://afee.net/
I expect I haven't been much help to you in producing your next paper, but it can sometimes be useful to take your eyes off the ground at your feet and gaze for a while at the horizon.... if only to check that your path is still set towards the sun and not away from it. :-))
Robin
  • asked a question related to Microeconometrics
Question
3 answers
I am looking for model selection criteria for random effects panel estimation models. AIC and BIC do not seem to be appropriate in this case, and are not even provided by Stata. I also did not find an adjusted R^2 for this case.
I cannot use a Fixed or Mixed Effects model as my main regressors are fixed properties of the observational units.
Relevant answer
Answer
It's a bit computationally intensive, but out-of-sample prediction might work. It's a bit tricky with panel data but easily implemented in Stata.
  • asked a question related to Microeconometrics
Question
3 answers
Dear all,
I'm using Latent GOLD Choice to estimate an LC model. The model performs well when up to three classes are considered. However, when I try four classes I get a no-convergence message ("estimation procedure did not converge, 25 gradients larger than 1.0E-3") and the maximum number of iterations is reached without convergence. Does anybody know the possible reasons? Mis-specification of the model? The large number of observations (in my case 250,000)? Something else?
Thank you
Relevant answer
Answer
Thank you Amir for your reply.
I've already tried increasing the number of iterations without success, so I think the problem is related to the first point. Even though I don't have convergence, I obtain values for the parameters and for the BIC and AIC measures. Are these results reliable? In the end it turns out that the lowest BIC and AIC values are obtained with 6 classes (for which I don't have convergence). Do you suggest using only the results for 3 classes, since the results for 6 classes are not reliable? Thank you
  • asked a question related to Microeconometrics
Question
9 answers
I am looking for theories about the relationship between labour mobility (international and domestic) and its economic impacts. Please help by providing some related THEORIES and sources of DATA that you know.
Thank you!
  • asked a question related to Microeconometrics
Question
3 answers
When I read "14A.1 Assumptions for Fixed and Random Effects" in Introductory Econometrics: A Modern Approach, by Jeffrey Wooldridge, it says that the FE estimator is consistent with fixed T as N → ∞.
Does this indicate that the FE estimator will not be consistent when N is fixed but T → ∞? I remember that according to Microeconometrics: Methods and Applications, by Colin Cameron and Pravin Trivedi, either N or T going to infinity is enough. This apparent contradiction confuses me.
Relevant answer
Answer
I understand that if T tends to infinity, the fixed effects estimator is consistent. However, if T is fixed and N tends to infinity (typical in short micro panels), only the estimator β of the fixed effects model is consistent, while the estimators of the individual effects (α + μi) will not be consistent, because the number of these parameters grows as N increases.
Maybe, you can find more information in:
  •  Baltagi, B.H. (2008). Econometric Analysis of Panel Data (4th ed.). Chichester: Wiley.
  •  Arellano, M. (2003). Panel Data Econometrics. New York: Oxford University Press.
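In sketch form, the incidental-parameters point behind this answer is that each fixed effect is estimated from only T observations (assuming the standard one-way model y_it = α_i + x_it'β + ε_it with strict exogeneity):

```latex
\hat{\alpha}_i \;=\; \bar{y}_i - \bar{x}_i'\hat{\beta}_{FE},
\qquad
\operatorname{Var}\!\left(\hat{\alpha}_i \mid X\right) \;=\; O(1/T),
```

so the α̂_i are consistent only as T → ∞, while the within estimator β̂_FE pools all NT observations and is consistent as N → ∞ with T fixed, or as T → ∞ with N fixed. The two textbooks are describing different parameters, not contradicting each other.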
  • asked a question related to Microeconometrics
Question
1 answer
Folks,
Indian Stock exchanges provide two types of EOD volumes: Total Traded Volume (TTD hereafter) and Deliverable Volume (DV hereafter).
As we know, TTD is vulnerable to noise from HFTs and market makers, while DV is a very effective measure of demand/supply, because it measures how many shares changed ownership at EOD.
I have explored but could not find any other stock exchanges providing DV as EOD data.
Can someone please confirm?
Relevant answer
Answer
I think it is important to take into account market expectations, which affect the price. Keep in mind that the price is the average across the negotiations.
Regards
Francisco
  • asked a question related to Microeconometrics
Question
7 answers
I'm finishing my master's thesis and I'm using a simultaneous equations model (SEM) as my econometric strategy. In the process, I've noticed that it is really hard to find a graduate econometrics textbook with relevant coverage of that kind of material, at least among traditional microeconometrics textbooks other than Wooldridge (Cameron & Trivedi, for their part, have very little on it). Other, harder modelling strategies, such as discrete-choice dynamic programming (DCDP) models, don't appear in textbooks at all.
So I was wondering: are there textbooks focused mainly on these kinds of strategies?
Relevant answer
Answer
You can also consult William H. Greene's textbook Econometric Analysis.
  • asked a question related to Microeconometrics
Question
13 answers
Is it possible that technological progress, technical efficiency or TFP has negative growth?
Can I describe better crop varieties as technical efficiency (same input but better output) and better agricultural practices as technological progress?
In the long term, is it possible that technological progress, technical efficiency or TFP exhibits negative growth?
Relevant answer
Answer
Dear Muhamad, this is a question that has been troubling economists for years. Setting the semantics aside, let me focus on a production-function or non-parametric approach. You need to be specific and clear about what you are measuring and how. Assuming some kind of neo-classical production-function TFP approach à la Jorgenson et al., or alternatively a Tornqvist index approximation of the same, TFP is a residual that (in the growth-rate index approach) could contain:
  •  economies of scale;
  •  economies of scope;
  •  reorganisation of economic activity among the many actors/producers (see Gollop, Ball, Hawke and Swinand) — this may be a deviation from perfect competition;
  •  changes in input quality not captured in your current input-quantity measurement programme;
  •  X-efficiency/inefficiency, i.e. non-maximising producer behaviour (this is part of technical efficiency);
  •  slow adoption of new technologies (technical progress — this may be optimal given optimal replacement rates of long-lived capital equipment; see the many articles by Jorgenson et al.).
If you are specifying a functional form, there are potential issues there too. In any case, if any of the factors such as scale overwhelm technical progress and technical efficiency, then one component can be negative while the others are positive.
BTW, I would not call better crop varieties technical efficiency; I would call them technological progress, and adopting best practices technical efficiency (see the Coelli et al. textbook). This is mostly just nomenclature.
To disentangle these, you need a methodology such as those developed by Fare et al., or a translog distance function if you go the parametric estimation route.
  • asked a question related to Microeconometrics
Question
9 answers
Does anyone know how to compute a test of over-identifying restrictions in a system of simultaneous equations? The Sargan and Hansen tests are used for a single equation, but I need a test for the whole system of simultaneous equations. I think there is a Hansen-Sargan test for this, but I have not found any explicit reference giving the formula.
Relevant answer
Answer
All the above answers are correct
  • asked a question related to Microeconometrics
Question
9 answers
Referring to the 10th edition of "Microeconomic Theory: Basic Principles and Extensions", page 92 reads as follows: "Although marginal utility is obviously affected by the units in which utility is measured, ..." The question is how to prove this. You can refer me to an article or book where someone explains it in detail.
Relevant answer
Answer
If a function u(x_1, x_2, ..., x_n) represents preferences, any increasing transformation g(u(x_1, x_2, ..., x_n)) of this function represents the same preferences. The marginal utility, however, changes from u'(x_1, ...) to g'(u(...))·u'(x_1, ...). Therefore, economists care mostly about ratios of marginal utilities, which are invariant to such transformations.
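The invariance of the ratio of marginal utilities can be checked numerically. This is an illustrative sketch (the utility function and transformation are my own choices, not from the book): finite differences approximate the marginal utilities of a Cobb-Douglas utility and of an increasing transformation of it.

```python
from math import log

def mrs(u, x1, x2, h=1e-6):
    """Ratio of marginal utilities MU1/MU2 via central finite differences."""
    mu1 = (u(x1 + h, x2) - u(x1 - h, x2)) / (2 * h)
    mu2 = (u(x1, x2 + h) - u(x1, x2 - h)) / (2 * h)
    return mu1 / mu2

u = lambda x1, x2: x1 ** 0.3 * x2 ** 0.7          # Cobb-Douglas utility
g_of_u = lambda x1, x2: log(u(x1, x2)) + 5        # increasing transformation g(u) = ln(u) + 5

# The marginal utilities themselves change under g, but their ratio does not:
# for Cobb-Douglas, MU1/MU2 = (0.3*x2)/(0.7*x1) under either representation.
print(mrs(u, 2.0, 3.0))
print(mrs(g_of_u, 2.0, 3.0))
```

This is exactly why marginal utility depends on the units of utility while the marginal rate of substitution does not.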
  • asked a question related to Microeconometrics
Question
5 answers
Is there any difference between them?
Most studies of the corporate governance–firm performance relationship use different types of ownership variables as a proxy for a firm's internal governance. I need to know which variable is usually used to construct such data: is it outsider ownership (blockholdings that exceed 5% of the firm's outstanding shares) or the number/percentage of shares held by insiders that exceeds 5% of the outstanding shares?
I think concentrated ownership means a high degree of private control of a firm by insiders (managers or board members). If so, do I have to sum all the insiders' holdings that equal or exceed 5% to obtain the ownership concentration?
Other scholars have studied family-owned firms instead of publicly traded firms. Most of well-known data sources have a category called "private firms".
Does it mean family-owned firms in this case, or unlisted firms?
Any help will be highly appreciated.
Relevant answer
Answer
1. Concentrated ownership simply refers to the case where the majority of shares are held by a few owners. Further, if more than 5%, 10% or 20% (different levels) of shares are held by the state, a group of companies, a family or a foreign investor, then you are in a position to identify the owner or blockholder of the company.
2a. To calculate ownership concentration, you have to define the levels. For instance, you can measure how much ownership is concentrated at the Top-1 level, i.e. what percentage of shares is held by the largest shareholder of the company.
Similarly, in the literature, concentration of ownership at levels 3 and 5 (the percentage of shares held by the 3 largest and 5 largest shareholders) has been calculated.
2b. To calculate family ownership, you have to calculate the percentage of shares held by a family.
Similarly, you can calculate insider shareholding, state ownership, foreign ownership, etc.
3. Yes, concentrated ownership means a high degree of control of the firm.
4. Private firms are unlisted.
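The Top-k and insider-blockholding measures described above can be sketched directly from a shareholder register. The register below is entirely hypothetical (names and stakes are invented for illustration; the remaining shares are assumed to be dispersed free float).

```python
# Hypothetical shareholder register: (holder, percent of outstanding shares, insider?)
holders = [
    ("Family trust",     28.0, True),
    ("Pension fund",     12.5, False),
    ("CEO",               6.0, True),
    ("Foreign investor",  5.5, False),
    ("Board member A",    4.0, True),
]

stakes = sorted((pct for _, pct, _ in holders), reverse=True)

def top_k(k):
    """Ownership concentration at level k: shares held by the k largest holders."""
    return sum(stakes[:k])

# Blockholder-based insider concentration: sum of insider stakes of at least 5%.
insider_blocks = sum(pct for _, pct, ins in holders if ins and pct >= 5.0)

print("Top-1:", top_k(1), "Top-3:", top_k(3), "Top-5:", top_k(5))
print("Insider blockholdings (>= 5%):", insider_blocks)
```

Note how the two measures answer different questions: Top-k ignores who the holders are, while the insider-blockholding sum applies both the 5% disclosure threshold and the insider classification.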
  • asked a question related to Microeconometrics
Question
11 answers
The main idea is to test the hypothesis using a panel data set. We also need to get access to staff files at university level. Who is willing to help me with this research?
Hypothesis: research output positively influences the quality of teaching of those who are classified as researchers.
Relevant answer
Answer
One effect of research on teaching is that it helps one teach a specific topic with confidence and evidence, instead of teaching it vaguely.
  • asked a question related to Microeconometrics
Question
3 answers
Simultaneous (or multiprocess) event history models have been developed in recent years as a very particular and advanced type of duration model. Does anyone know if:
i) there is already some type of multiprocess event history models allowing for competing risks in BOTH processes (e.g., any way of modelling two main transition processes, where each transition may assume two or more different modes or routes)?
ii) there is any package that allows the estimation of these models in STATA?
The idea would be to estimate both processes jointly. One of the processes has been already studied through a discrete-time competing-risks model, but it would be nice if some methodology would allow the joint estimation of this competing-risk model (where transitions may occur through 2 routes) with another one, for another choice problem (precisely, a multinomial choice problem where agents decide among 4 alternative occupations), in order to allow for potential interdependencies (through unobservables) between the two processes.
Relevant answer
Answer
Dear Vera,
For the outcome equation: Competing risks models can be readily estimated for each destination separately, you do not need a special package for that: just stset your data accordingly and re-estimate the same models for each outcome, marking the spells that experience the respective other events as zero until that point and censored afterward. Your risk set will be the same each time, but the number of events will vary.
As for the selection equation I remember the routine had some rigidities in that respect, but maybe there, too, you can work with separate binary response models?
Sorry I cannot be of more help.
Best, Jonas
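The cause-specific approach Jonas describes — re-estimating the same model for each destination while treating exits via the other route as censored — amounts to a simple recoding of the spell data. This is a minimal sketch with invented spell records (the ids, durations and cause labels are hypothetical):

```python
# Hypothetical spells: (id, duration, cause), where cause is "A", "B",
# or None for a right-censored spell.
spells = [(1, 5, "A"), (2, 3, "B"), (3, 7, None), (4, 2, "A"), (5, 6, "B")]

def cause_specific(spells, cause):
    """Recode spells for a cause-specific hazard model: exits via any other
    cause keep their observed duration but are marked censored (event=0)."""
    return [(i, t, 1 if c == cause else 0) for i, t, c in spells]

for c in ("A", "B"):
    print(c, cause_specific(spells, c))
```

The risk set is identical in each recoded dataset; only the event indicator changes, which is exactly why the same model can be re-estimated once per destination.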
  • asked a question related to Microeconometrics
Question
16 answers
Income for a sizable number of households in the survey is too low even after missing values were replaced by imputation.
Relevant answer
Answer
It depends on your research goal.
Normally, there is a relationship between income and consumption. In my research on household consumption, reported income is often less than consumption, in which case I ask the respondents again. Since income is sometimes difficult to obtain, in that situation I often use consumption rather than income to get results. For example, the income elasticity of demand is often replaced by the expenditure elasticity of demand.
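The substitution mentioned above — estimating an expenditure elasticity instead of an income elasticity when income is poorly reported — can be sketched as the slope of a bivariate log-log regression. The household data below are invented for illustration:

```python
from math import log

# Hypothetical household data: total expenditure and quantity demanded of a good.
expenditure = [1200, 1500, 1800, 2200, 2600, 3100]
quantity    = [10.0, 11.8, 13.4, 15.5, 17.4, 19.7]

# In a bivariate log-log regression log(q) = a + e*log(x), the slope e is the
# expenditure elasticity of demand: OLS slope = cov(log q, log x) / var(log x).
lx = [log(v) for v in expenditure]
lq = [log(v) for v in quantity]
mx, mq = sum(lx) / len(lx), sum(lq) / len(lq)
elasticity = (sum((a - mx) * (b - mq) for a, b in zip(lx, lq))
              / sum((a - mx) ** 2 for a in lx))

print(f"expenditure elasticity of demand: {elasticity:.3f}")
```

Using total expenditure as the scale variable sidesteps mismeasured income, at the cost of a slightly different interpretation (expenditure, not income, elasticity).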
  • asked a question related to Microeconometrics
Question
4 answers
I am experimenting with the Oaxaca-Blinder decomposition to examine wage increases in Belgium between two years (2000 and 2010) using wage survey data. I am interested in the part of the wage increase accounted for by differences in socio-demographic factors between the two years (explained difference). My dependent variable is the log of hourly wage. However, there are lots of observations (too many to simply get rid of them) whose workload is very limited either because of part-time work or because they only lasted a short amount of time. Therefore, it seems to me that in addition to the survey extrapolation factors (pweight) I should also do some weighting according to workload. Does this make sense and how do you introduce an additional weight using the Oaxaca procedure in Stata (fweight and aweight do not seem to serve this purpose)?
Relevant answer
Answer
That's a good question. I have never heard anyone raise that objection before, however, and I don't think you need to worry about it -- certainly plenty of authors have ignored the problem! But it is interesting: Suppose a given case had an erroneously high reported number of hours. That would result in (a) an understatement of their hourly wage, and (b) an increase in the weight attached to that observation. That would tend to bias the weighted average hourly wage downwards for the whole sample. But as long as that bias is roughly similar across subgroups of the population and over time it should not have much effect on the Oaxaca decomposition.
Here's another random Oaxaca tip. If you use Stata you have the option of specifying "pooled", which specifies that the coefficients used to evaluate the "explained effects" of differences in X variables between the two periods are the coefficients from a regression that pools the data, and includes a dummy variable to distinguish the two periods. The thing I like about that specification is that it does not rely on the period 1 or the period 2 coefficients, but rather on a kind of average of the two which seems prudent. Also, the total "unexplained effect" will come out exactly the same as the coefficient on the dummy variable in the pooled regression: it is the part that cannot be explained by the differences in X.
Best of luck,
Tom
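The twofold Oaxaca-Blinder decomposition being discussed can be sketched in a few lines. This is a minimal illustration with one covariate and invented data (two wage "years", schooling as the sole X), using the year-2000 coefficients as the reference — not the pooled reference Tom recommends, which would only change which coefficient vector multiplies the X gap:

```python
def ols(x, y):
    """Bivariate OLS of y on x with intercept; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return my - b * mx, b

# Hypothetical data: log hourly wage (y) and years of schooling (x) in two years.
x0, y0 = [10, 12, 12, 14, 16], [2.2, 2.5, 2.4, 2.8, 3.1]   # "year 2000"
x1, y1 = [12, 13, 14, 16, 18], [2.5, 2.7, 2.9, 3.3, 3.6]   # "year 2010"

a0, b0 = ols(x0, y0)
a1, b1 = ols(x1, y1)
gap = sum(y1) / len(y1) - sum(y0) / len(y0)

# Twofold decomposition with the year-2000 coefficients as reference:
xbar0, xbar1 = sum(x0) / len(x0), sum(x1) / len(x1)
explained = (xbar1 - xbar0) * b0              # due to different characteristics X
unexplained = (a1 - a0) + xbar1 * (b1 - b0)   # due to different coefficients

print(f"gap={gap:.3f} explained={explained:.3f} unexplained={unexplained:.3f}")
```

Because each OLS line passes through its group means, explained + unexplained reproduces the raw gap exactly; weighting (the original question) would enter through weighted means and weighted OLS in place of the unweighted ones here.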
  • asked a question related to Microeconometrics
Question
4 answers
Heckman procedures have been widely used in empirical research to correct for selection bias. However, for duration models (survival analysis/time-to-event data), selection correction is still under development. There is an important contribution by Boehmke et al (2006) in American Journal of Political Science, which resulted in the program "DURSEL" for STATA.
Does anyone know any subsequent advance to correct for selection bias in duration models, especially for STATA?
Thanks in advance!
Relevant answer
Answer
Hi Vera, I am not aware of the paper by Boehmke et al. (2006), but with single-observation-per-unit data you can control for unobserved heterogeneity (UH) that is uncorrelated with X, and with multiple observations per unit even for correlated UH (see e.g. Van den Berg, 2001, Handbook). Concerning dynamic selection, you should have a look at the so-called "timing of events" approach by Abbring/Van den Berg (2003). In addition, I remember having seen something on selection and duration models by Denis Fougère or Jean-Pierre Florens. I haven't found it now on the spot, but it might be worthwhile looking at Florens/Fougère/Mouchart (2008) or Fougère/Kamionka (2008). Best, Alfred
  • asked a question related to Microeconometrics
Question
20 answers
I have a count data model with panel data and I would like to decide between fixed and random effects. But the value of the Hausman test statistic is negative (p-value = 1). How is that possible? Might it be due to the existence of outliers?
Relevant answer
Answer
The covariance matrix for the Hausman test is only positive semi-definite under the null. It also does not necessarily have the obvious degrees of freedom.
Take a simple example: consider adding an additional variable to an OLS regression. The initial coefficients are consistent and efficient if the coefficient on the additional variable is 0. They are consistent if the new variable is orthogonal to the original variable. Under the null of a zero coefficient, the difference between the covariance matrix with the additional variable and the covariance matrix without it is positive semi-definite; but if, in fact, the additional variable has a lot of explanatory power, the standard errors on the original coefficients can come down.
Moreover, as this example shows, Hausman tests can often have fewer degrees of freedom than the rank of the covariance matrix and therefore end up being positive semi-definite rather than positive definite under the null. If memory serves me, Hausman discusses this for IV in his original article. I think Paul Ruud had a good review article sometime in the early to mid 1980s, and the general issue is discussed in Davidson and MacKinnon's text.
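The OLS example in the answer can be simulated directly. This sketch (invented data-generating process, pure-Python OLS) compares the estimated variance of the original coefficient with and without a highly relevant second regressor; off the null, the "restricted" variance exceeds the "unrestricted" one, so the Hausman variance difference can turn negative:

```python
import random

random.seed(1)
n = 200
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]   # relevant, roughly orthogonal to x1
y  = [a + 3.0 * b + random.gauss(0, 1) for a, b in zip(x1, x2)]

def demean(v):
    m = sum(v) / len(v)
    return [u - m for u in v]

x1d, x2d, yd = demean(x1), demean(x2), demean(y)
S11 = sum(a * a for a in x1d); S22 = sum(b * b for b in x2d)
S12 = sum(a * b for a, b in zip(x1d, x2d))
S1y = sum(a * c for a, c in zip(x1d, yd)); S2y = sum(b * c for b, c in zip(x2d, yd))

# Short regression: y on x1 only.
b_short = S1y / S11
rss_short = sum((c - b_short * a) ** 2 for a, c in zip(x1d, yd))
var_short = (rss_short / (n - 2)) / S11

# Long regression: y on x1 and x2 (closed-form two-regressor OLS on demeaned data).
det = S11 * S22 - S12 ** 2
b1_long = (S22 * S1y - S12 * S2y) / det
b2_long = (S11 * S2y - S12 * S1y) / det
rss_long = sum((c - b1_long * a - b2_long * b) ** 2
               for a, b, c in zip(x1d, x2d, yd))
var_long = (rss_long / (n - 3)) * S22 / det

print(f"Var(b1) short: {var_short:.4f}  long: {var_long:.4f}")
```

Here adding x2 slashes the residual variance, so Var(b1) falls in the long regression; plugging such estimates into the Hausman formula can then produce a negative statistic.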
  • asked a question related to Microeconometrics
Question
25 answers
Instrumental variables and other econometric methodologies suited to dealing with potential endogeneity in regressors are becoming a hot topic in applied economic work. However, I have not yet found how to "instrument" potentially endogenous regressors and correct endogeneity problems in survival-time data. IV methods are well developed for linear models (both cross-section and panel data) and only some non-linear models (e.g., binary outcome models). Does anyone know a recognized and suitable method for using instrumental variables in duration models (particularly discrete-time duration models)? Any reference and/or program (especially for Stata)?
Relevant answer
Answer
I do not think there is a solution within this non-linear setting without specifying a fully parametric 'first stage'. This is pretty unattractive because (as opposed to the linear model) this first stage most likely has to be literally true to avoid inconsistencies. An alternative is to treat durations as the outcome variable and set up a non-parametric LATE analysis along the lines of Imbens & Angrist. If covariates are needed, the Froelich (2007, Journal of Econometrics) paper would be the appropriate reference for this.
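The Imbens-Angrist approach suggested above reduces, without covariates, to the Wald estimator: the instrument's effect on the duration outcome divided by its effect on treatment take-up. A minimal sketch with invented data (binary instrument z, binary treatment d, fully observed durations y in months — note this simple contrast ignores censoring, which real duration data would require handling):

```python
# Hypothetical data: instrument z, treatment d, duration outcome y (months).
z = [1, 1, 1, 1, 0, 0, 0, 0]
d = [1, 1, 1, 0, 1, 0, 0, 0]
y = [4.0, 5.0, 3.5, 8.0, 4.5, 9.0, 7.5, 8.5]

def mean_where(v, cond):
    sel = [a for a, c in zip(v, cond) if c]
    return sum(sel) / len(sel)

# Wald / LATE estimator: intention-to-treat effect on the outcome divided by
# the first-stage effect of the instrument on treatment take-up.
itt_y = mean_where(y, [w == 1 for w in z]) - mean_where(y, [w == 0 for w in z])
itt_d = mean_where(d, [w == 1 for w in z]) - mean_where(d, [w == 0 for w in z])
late = itt_y / itt_d

print(f"LATE on duration: {late:.2f} months")
```

Under the usual monotonicity and exclusion assumptions, this estimates the average effect of treatment on the duration for compliers; incorporating covariates is where the Froelich (2007) reference comes in.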