95JCGS04\P0001-----------------------------------------------------
Gauss, Statistics, and Gaussian Elimination
G.W. Stewart
Gaussian elimination is the algorithm of choice for the solution of
dense linear systems of equations. However, Gauss himself originally
introduced his elimination procedure as a way of determining the
precision of least squares estimates and only later described the
computational algorithm. This article tells the story of Gauss, his
algorithm, and its relation to his probabilistic development of
least squares.
Key Words: Gauss; Gaussian elimination; Theory of least squares.
95JCGS04\P0012----------------------------------------------------
Approximations to the Log-Likelihood Function in
the Nonlinear Mixed-Effects Model
Jose C. Pinheiro and Douglas M. Bates
Nonlinear mixed-effects models have received a great deal of
attention in the statistical literature in recent years because
of the flexibility they offer in handling the unbalanced
repeated-measures data that arise in different areas of
investigation, such as pharmacokinetics and economics. Several
different methods for estimating the parameters in nonlinear
mixed-effects models have been proposed. We concentrate here on
two of them --- maximum likelihood and restricted maximum
likelihood. A rather complex numerical issue for (restricted)
maximum likelihood estimation in nonlinear mixed-effects models
is the evaluation of the log-likelihood function of the data,
because it involves the evaluation of a multiple integral that,
in most cases, does not have a closed-form expression. We consider
here four different approximations to the log-likelihood,
comparing their computational and statistical properties. We
conclude that the linear mixed-effects (LME) approximation
suggested by Lindstrom and Bates, the Laplacian approximation,
and Gaussian quadrature centered at the conditional modes of the
random effects are quite accurate and computationally efficient.
Gaussian quadrature centered at the expected value of the random
effects is quite inaccurate for small numbers of abscissas
and computationally inefficient for large numbers of abscissas.
Importance sampling is accurate, but quite inefficient
computationally.
Key Words: Gaussian quadrature; Importance sampling; Laplacian approximation;
Maximum likelihood estimation.
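The Gaussian quadrature approximation discussed in this abstract can be illustrated with a minimal sketch. For a hypothetical one-subject model with a scalar random effect b ~ N(0, sigma_b^2) and a nonlinear mean function (the exponential-decay model and all parameter values below are illustrative assumptions, not the authors' setup), the marginal likelihood integral p(y | b) N(b; 0, sigma_b^2) db can be approximated by Gauss-Hermite quadrature after the change of variables b = sqrt(2) * sigma_b * t:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def cond_density(y, x, b, beta=1.0, tau=0.3):
    """p(y | b): independent Gaussian errors around a hypothetical
    nonlinear mean function mu_j = exp(-(beta + b) * x_j)."""
    mu = np.exp(-(beta + b) * x)
    return np.prod(np.exp(-0.5 * ((y - mu) / tau) ** 2)
                   / (tau * np.sqrt(2.0 * np.pi)))

def marginal_loglik(y, x, sigma_b=0.5, n_nodes=40):
    """Approximate log of integral p(y|b) N(b; 0, sigma_b^2) db by
    Gauss-Hermite quadrature (nodes for integral f(t) exp(-t^2) dt)."""
    nodes, weights = hermgauss(n_nodes)
    b_vals = np.sqrt(2.0) * sigma_b * nodes   # change of variables b = sqrt(2)*sigma_b*t
    vals = np.array([cond_density(y, x, b) for b in b_vals])
    return np.log(np.sum(weights * vals) / np.sqrt(np.pi))
```

Note that this sketch centers the quadrature at the expected value (zero) of the random effect, the variant the abstract reports as needing many abscissas; the variant the authors recommend recenters the grid at the conditional modes of the random effects.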
95JCGS04\P0036-----------------------------------------------------------
Method of Moments Using Monte Carlo Simulation
Andrew Gelman
We present a computational approach to the method of moments using
Monte Carlo simulation. Simple algebraic identities are used so
that all computations can be performed directly using simulation
draws and computation of the derivative of the log-likelihood.
We present a simple implementation using the Newton-Raphson
algorithm with the understanding that other optimization methods
may be used in more complicated problems. The method can be
applied to families of distributions with unknown normalizing
constants and can be extended to least squares fitting in the
case that the number of moments observed exceeds the number of
parameters in the model. The method can be further generalized
to allow ``moments'' that are any function of data and parameters,
including as a special case maximum likelihood for models with
unknown normalizing constants or missing data. In addition to
being used for estimation, our method may be useful for setting
the parameters of a Bayes prior distribution by specifying moments
of a distribution using prior information. We present two
examples---specification of a multivariate prior distribution in
a constrained-parameter family and estimation of parameters in an
image model. The former example, used for an application in
pharmacokinetics, motivated this work. This work is similar to
Ruppert's method in stochastic approximation, combines Monte Carlo
simulation and the Newton-Raphson algorithm as in Penttinen,
uses computational ideas and importance sampling identities of
Gelfand and Carlin, Geyer, and Geyer and Thompson developed for
Monte Carlo maximum likelihood, and has some similarities to the
maximum likelihood methods of Wei and Tanner.
Key Words: Bayesian computation; Compositional data; Estimation;
Importance sampling; Least squares; Maximum likelihood;
Missing data; Newton-Raphson; Prior distribution;
Stochastic approximation; Unnormalized densities.
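The simulation-based moment matching described in this abstract can be sketched in miniature. The toy model below (an exponential distribution with unknown scale, matched on its first moment) and the finite-difference derivative are illustrative simplifications of the article's approach, which works with the derivative of the log-likelihood; the fixed uniform draws (common random numbers) keep the simulated moment a smooth function of the parameter so that Newton-Raphson behaves:

```python
import numpy as np

def simulate(theta, u):
    """Model draws (Exponential with scale theta) via inverse CDF on
    fixed uniforms, so the same randomness is reused at every theta."""
    return -theta * np.log(u)

def mc_moment(theta, u):
    """Monte Carlo estimate of the model moment E_theta[X]."""
    return simulate(theta, u).mean()

def mom_newton(m_obs, u, theta0=1.0, tol=1e-8, max_iter=50):
    """Solve mc_moment(theta) = m_obs by Newton-Raphson, using a
    central finite-difference derivative of the simulated moment."""
    theta, h = theta0, 1e-5
    for _ in range(max_iter):
        g = mc_moment(theta, u) - m_obs
        dg = (mc_moment(theta + h, u) - mc_moment(theta - h, u)) / (2.0 * h)
        step = g / dg
        theta -= step
        if abs(step) < tol:
            break
    return theta
```

With more moments than parameters, the same simulated-moment function would feed a least squares criterion instead of an exact root-finding step, as the abstract describes.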
95JCGS01\P0055-------------------------------------------------------
Generation of Over-Dispersed and Under-Dispersed Binomial Variates
Hongshik Ahn and James J. Chen
This article proposes an algorithm for generating over-dispersed
and under-dispersed binomial variates with specified mean and
variance. The over-dispersed/under-dispersed distributions are
derived from correlated binary variables with an underlying
continuous multivariate distribution. Different multivariate
distributions or different correlation matrices result in
different over-dispersed (or under-dispersed) distributions. The
over-dispersed binomial distributions that are generated from
three different correlation matrices of a multivariate normal are
compared with the beta-binomial distribution for various mean
and over-dispersion parameters by quantile-quantile (Q-Q) plots.
The two distributions appear to be similar. The under-dispersed
binomial distribution is simulated to model an example data set
that exhibits under-dispersed binomial variation.
Key Words: Beta-binomial; Correlated binary; Intracluster correlation;
Monte Carlo; Teratology.
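The construction in this abstract, correlated binary variables obtained from an underlying multivariate normal, can be sketched for one convenient special case: an exchangeable (equicorrelation) latent normal built from a shared factor. This is only one of the correlation structures the article considers, and it produces over-dispersion only for positive correlation; under-dispersion requires negative correlation, which the shared-factor form below cannot represent:

```python
import numpy as np
from statistics import NormalDist

def overdispersed_binomial(n_trials, p, rho, size, rng):
    """Draw `size` variates, each the sum of n_trials correlated
    Bernoulli(p) indicators made by thresholding an exchangeable
    multivariate normal Z_i = sqrt(rho)*W + sqrt(1-rho)*E_i."""
    z_p = NormalDist().inv_cdf(p)              # threshold: P(Z_i <= z_p) = p
    w = rng.standard_normal((size, 1))         # shared factor induces correlation
    e = rng.standard_normal((size, n_trials))  # idiosyncratic noise
    z = np.sqrt(rho) * w + np.sqrt(1.0 - rho) * e
    return (z <= z_p).sum(axis=1)
```

Each Z_i is marginally standard normal, so the mean n*p of the binomial is preserved; for rho > 0 the variance exceeds the binomial variance n*p*(1-p), which is the over-dispersion being modeled.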
95JCGS01\P0065-----------------------------------------------------------
Bayesian Computation of Software Reliability
Lynn Kuo and Tae Yang
Bayesian methods for the Jelinski and Moranda and the Littlewood
and Verrall models in software reliability are studied. A Gibbs
sampling approach is employed to compute the Bayes estimates. In
addition, prediction of future failure times and future
reliabilities is examined. Model selection based on the mean
squared prediction error and the prequential likelihood of the
conditional predictive ordinates is developed.
Key Words: Gibbs sampling; Hierarchical Bayes; Model selection;
Stochastic substitution.
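The Gibbs sampling approach mentioned in this abstract can be sketched for the Jelinski-Moranda model, in which the i-th interfailure time is exponential with rate phi*(N - i + 1) for an initial fault count N and per-fault rate phi. The prior choices below (Gamma on phi, Poisson on N - n) and all hyperparameter values are hypothetical, chosen only so that both full conditionals are easy to sample: phi given N is Gamma, and N given phi is a discrete distribution evaluated on a grid:

```python
import numpy as np
from math import lgamma

def gibbs_jm(t, n_iter=5000, a=1.0, b=1.0, lam=5.0, n_max_extra=200, seed=0):
    """Gibbs sampler for Jelinski-Moranda: t_i ~ Exp(phi*(N - i + 1)),
    with illustrative priors phi ~ Gamma(a, b) and N - n ~ Poisson(lam)."""
    rng = np.random.default_rng(seed)
    t = np.asarray(t, dtype=float)
    n = len(t)
    i = np.arange(1, n + 1)
    sum_t = t.sum()
    shift = ((i - 1) * t).sum()               # sum (N-i+1)*t_i = N*sum_t - shift
    support = np.arange(n, n + n_max_extra)   # candidate values of N
    # log prod_i (m - i + 1) = lgamma(m+1) - lgamma(m-n+1)
    log_fall = np.array([lgamma(m + 1) - lgamma(m - n + 1) for m in support])
    # log Poisson(m - n | lam) prior, additive constants dropped
    log_prior = ((support - n) * np.log(lam)
                 - np.array([lgamma(m - n + 1) for m in support]))
    N, draws = n, []
    for _ in range(n_iter):
        s = N * sum_t - shift
        phi = rng.gamma(a + n, 1.0 / (b + s))           # phi | N is Gamma
        logp = log_prior + log_fall - phi * (support * sum_t - shift)
        logp -= logp.max()
        p = np.exp(logp)
        p /= p.sum()
        N = rng.choice(support, p=p)                    # N | phi is discrete
        draws.append((phi, N))
    return np.array(draws)
```

Posterior draws of (phi, N) then give the Bayes estimates directly, and predictive quantities such as future failure times follow by simulating forward from each draw.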