Type of Document: Dissertation
Author: Ogunc, Asli K.
URN: etd-0406102-033213
Title: Essays on the Bayesian Inequality Restricted Estimation
Degree: Doctor of Philosophy (Ph.D.)
Department: Economics

Advisory Committee:
- M. Dek Terrell, Committee Co-Chair
- R. Carter Hill, Committee Co-Chair
- Lynn R. LaMotte, Committee Member
- W. Douglas McMillin, Committee Member
- Bogdan S. Oporowski, Dean's Representative

Keywords:
- Bayesian theory
- Gibbs sampler
- qualitative choice models
- Metropolis algorithm
Date of Defense: 2002-03-15
Availability: unrestricted

Abstract

Bayesian estimation has gained ground since Markov Chain Monte Carlo (MCMC) methods made it possible to sample from exact posterior distributions. This research aims to contribute to the ongoing debate about the relative virtues of the Frequentist and Bayesian theories by concentrating on qualitative dependent variable models. Two MCMC methods are used throughout this dissertation to facilitate Bayesian estimation: the Gibbs sampler (1984) and the Metropolis (1953, 1970) algorithm.
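To fix ideas, the Metropolis algorithm mentioned above can be sketched in a few lines: propose a random-walk step and accept it with probability proportional to the posterior ratio. This is a generic illustration, not code from the dissertation; the function and parameter names (`metropolis`, `log_post`, `scale`) are hypothetical, and a standard normal stands in for the posterior.

```python
import numpy as np

def metropolis(log_post, theta0, n_draws=5000, scale=0.5, seed=0):
    """Random-walk Metropolis sampler for a scalar parameter.

    Proposes theta' = theta + N(0, scale^2) and accepts with
    probability min(1, exp(log_post(theta') - log_post(theta))).
    """
    rng = np.random.default_rng(seed)
    draws = np.empty(n_draws)
    theta = theta0
    lp = log_post(theta)
    for i in range(n_draws):
        prop = theta + scale * rng.standard_normal()
        lp_prop = log_post(prop)
        # Accept/reject on the log scale to avoid underflow
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        draws[i] = theta
    return draws

# Toy target: standard normal log-density (up to an additive constant)
draws = metropolis(lambda t: -0.5 * t**2, theta0=0.0)
```

After discarding an initial burn-in, the retained draws approximate samples from the target posterior; sample moments then estimate posterior moments.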
In this research, several Monte Carlo experiments are carried out to better understand the finite-sample properties of the Bayesian estimator and its performance relative to Maximum Likelihood Estimation (MLE) in probit and Poisson models. In addition, the performance of the estimators is compared when inequality restrictions are imposed on the coefficients of the models. The restrictions are imposed within a Monte Carlo experiment for the probit model and applied to real data in the Poisson regression framework. The research demonstrates the ease with which inequality restrictions can be imposed on the coefficients of the probit and Poisson models via the Gibbs sampler and the Metropolis algorithm, respectively.
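The mechanism for imposing an inequality restriction within a Metropolis chain can be illustrated as follows: any proposal that violates the restriction is rejected outright, so the chain samples the posterior truncated to the restricted region (valid for a symmetric random-walk proposal). This is an illustrative sketch, not the dissertation's code; the names (`metropolis_constrained`, `constraint`) and the half-normal toy target are assumptions made for the example.

```python
import numpy as np

def metropolis_constrained(log_post, constraint, theta0,
                           n_draws=5000, scale=0.5, seed=1):
    """Random-walk Metropolis on a posterior truncated to the set
    {theta : constraint(theta) is True}.

    Proposals outside the region are always rejected, which is
    equivalent to giving them log-posterior -inf.
    """
    rng = np.random.default_rng(seed)
    assert constraint(theta0), "start the chain inside the restricted region"
    draws = np.empty(n_draws)
    theta, lp = theta0, log_post(theta0)
    for i in range(n_draws):
        prop = theta + scale * rng.standard_normal()
        if constraint(prop):  # reject proposals violating the restriction
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
        draws[i] = theta
    return draws

# Toy example: N(0, 1) posterior restricted to beta >= 0 (a half-normal)
draws = metropolis_constrained(lambda b: -0.5 * b**2,
                               lambda b: b >= 0,
                               theta0=0.5)
```

Every retained draw satisfies the restriction by construction, so posterior summaries computed from the chain automatically respect the inequality constraint.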
The research shows that sample size has the largest impact on the risk of the parameter estimates under both techniques. Bayesian estimation is very sensitive to prior specification, even with non-informative priors. Lowering the variance of the non-informative prior improves the Bayesian estimates without significantly changing the nature of the distribution. When the Bayesian prior variance is very large, MLE dominates the Bayesian estimator in almost all of the experimental designs; when the prior variance is lowered, however, the improvement in the estimates is remarkable.
In the constrained cases, the Bayesian estimator has lower variance and lower mean squared error (MSE) when the restrictions are correct. As the specification error increases, the Bayesian estimator suffers more than the MLE: the increase in bias outweighs the efficiency gain. The effects of changes in the distribution of the regressors, parameter values, collinearity, and their interactions warrant further investigation.
Filename: Ogunc_dis.pdf (4.24 Mb)

Approximate download time (hours:minutes:seconds):
- 28.8 Modem: 00:19:37
- 56K Modem: 00:10:05
- ISDN (64 Kb): 00:08:49
- ISDN (128 Kb): 00:04:24
- Higher-speed access: 00:00:22