Part II
Abstracts of Papers Presented in Person
Sydney International Statistical Congress
Sydney, Australia: 8-12 July 1996




Wednesday 10 July: 08:30-10:20


ASC-Q/E Invited: Monitoring and Modelling Air Quality


The Atmosphere as a Stochastic Medium - Implications for Air Quality

Brian L. Sawford, CSIRO, Aspendale, Victoria, Australia

The atmosphere exhibits variability in its flow and meteorology over a huge range of time and space scales. This variability demands a statistical description for many atmospheric phenomena, including air quality. In this paper we discuss the role of variations on time scales from interannual down to milliseconds in determining the fate of air pollutants in the atmosphere. From an operational viewpoint this variability impacts on the specification and quantification of air quality standards, making it necessary to specify the time interval (or averaging period) over which a particular standard or measurement applies. It also tends to mask trends in air quality due to improved control methodology. The variability is also responsible for the dispersion and dilution of pollutants and hence of their impact. Some recent uses of stochastic processes to represent this dispersion will be described.

Brian L. Sawford, CSIRO, Division of Atmospheric Research, Private Bag 1, Aspendale Vic 3195, Australia brian.sawford@dar.csiro.au


A Comparison of the Regional Oxidant Model with Observational Ozone Data

Douglas W. Nychka, North Carolina State University, Raleigh, USA

Using meteorology and precursor emissions as the driving elements, the EPA regional oxidant model (ROM) can be used to infer ozone concentrations for the Eastern United States. This research compares the predictions from this model to observed ozone over the summer 1987 model run. Detailed results for Northern Illinois indicate that the discrepancies between observed ozone and model output are typically less than 20%, but model performance does depend on meteorological conditions, with the model overestimating ozone on cool and cloudy days. Some preliminary work has also been carried out to validate the model output over a much larger region, with a focus on matching the dynamical characteristics of the ozone field. Here a wavelet decomposition is used to focus the comparisons on different spatial and temporal scales.

Douglas Nychka, Department of Statistics, North Carolina State University, Raleigh, NC, 27695-8203, USA nychka@stat.ncsu.edu
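
The scale-by-scale comparison can be illustrated with a toy version of the idea: decompose both the observed and the modelled series with a discrete wavelet transform and compare energy level by level. The series, wavelet and decomposition depth below are invented for illustration; the paper works with spatio-temporal ozone fields rather than a single series.

```python
import numpy as np
import pywt

rng = np.random.default_rng(10)

# Stand-ins for hourly ozone: "observed" and "model" series sharing a seasonal
# cycle, with the model adding a smooth bias and missing some fine-scale variation.
t = np.arange(2048)
observed = 40 + 15 * np.sin(2 * np.pi * t / 512) + 5 * rng.standard_normal(len(t))
model = 45 + 15 * np.sin(2 * np.pi * t / 512) + 2 * rng.standard_normal(len(t))

obs_c = pywt.wavedec(observed, "db6", level=6)
mod_c = pywt.wavedec(model, "db6", level=6)

# Compare the two series scale by scale via the energy of the detail coefficients.
print("level  obs energy  model energy")
for level, (o, m) in enumerate(zip(obs_c[1:], mod_c[1:]), start=1):
    print(f"{level:5d}  {np.sum(o**2):10.1f}  {np.sum(m**2):12.1f}")
```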


Graphics for Looking at Environmental Data

Daniel B. Carr, George Mason University, Virginia, USA

This talk describes a variety of graphical templates for representing environmental data. The templates derive from research in converting EPA tables into plots and from research in representing complex environmental summaries using plot-augmented maps. Size constraints and a revisiting of perceptual principles suggest variations on familiar cdf plots and box plots. Careful attention to sorting and linking brings many multivariate patterns within the realm of human comprehension. For example one row-labeled plot shows U.S. ecoregion by acreage patterns for 159 AVHRR classes. Multivariate sorting and clustering bring out the patterns embedded in the plot of 159 variables. The talk emphasizes templates for static plots and includes methods for representing estimate variability and quality. The examples address air quality and other environmental applications.

Daniel B. Carr, Dept. AES, 4A7, George Mason University, Fairfax, VA 22030, USA dcarr@voxel.galaxy.gmu.edu


ASC Invited: All-day Workshop on SURVEY DESIGN AND ANALYSIS


Design-Based vs Model-Based Sample Survey Design and Inference: A Somewhat Personal View

Ray Chambers, University of Southampton, UK

Sample survey practitioners have always relied, to a greater or lesser extent, on assumptions about the population to be surveyed (i.e. models). In contrast, sample survey theoreticians have, until relatively recently, viewed models as unnecessary for valid inference, and to be avoided if at all possible. Up until the early 1970s, this conflict resulted in model-free randomisation or ``design-based'' sample survey theory becoming something of an ivory tower academic activity, and one which was largely irrelevant to the way survey practitioners went about their business. Beginning with a seminal paper of Brewer (1963), sample survey theoreticians have come to realise the importance of models for both design and inference. In this paper, Ray gives a somewhat personal overview of how design-based, model-based and model-assisted sample survey design and inference relate to one another. Ray hopes, in doing so, to convince the reader that model-based theory has as much to offer a survey practitioner as the traditional design-based theory which underpins most of established ``good practice'' in sample survey work.

Prof Ray Chambers, University of Southampton, UK


A Model-based Design

Ray Lindsay, ABARE, Canberra, Australia

The techniques used by ABARE to obtain estimates from its farm and fisheries surveys will be described. These surveys examine the economic well-being of the principal agricultural and fishery sectors and also have a very important role in forming a research database from which policy alternatives can be examined. Being voluntary, the surveys suffer from non-response, making design-based estimation suspect. The model-based methods which are used are based on those described in Bardsley and Chambers, though the surveys are now designed using a list and design-based techniques. The weights are chosen so that the sample-weighted sums of key physical production variables are as close as possible to the census values. With the constraint that weights be at least unity, this aim cannot be achieved exactly, and the methods used to minimise the bias will be described.

Dr Ray Lindsay, Australian Bureau of Agricultural & Resource Economics, Canberra, Australia


A Design-based Survey: The ABS Monthly Labour Force Survey

Robert Clark, Australian Bureau of Statistics, ACT, Australia

The Labour Force Survey is a multi-stage area-based survey of about 30 000 households, conducted monthly by the Australian Bureau of Statistics. The estimation method is a modified form of post-stratification, using independent population benchmarks. Selection weights are taken into account in the process, as probabilities of selection differ within post-strata. Variances are estimated using a split-halves method, to partially take account of the first stage systematic selection method. The Labour Force Survey can be considered a design-based survey, because models are not explicitly stated and inference is considered to rely on randomisation. In this paper, Robert discusses how randomisation-based inference has affected sample design, estimation and variance estimation for this survey. Some issues which are difficult to address in the randomisation framework will be discussed, for example non-response and variance estimation. Robert will also outline some implicit models underlying the survey, and some difficulties that would arise in a model-dependent approach.

Mr Robert Clark, Australian Bureau of Statistics, ACT, Australia


INT Invited: Classification


First-Order Relational Learning

J. Ross Quinlan, University of Sydney, Sydney, Australia

Most nonparametric classification systems assume that each object or case is described by the values of a fixed set of variables, and predict a case's class as a propositional function of these variables. A recent initiative, relational learning, represents each case by arbitrarily many entries in a set of relations, and constructs first-order classifiers expressed as executable Horn clause definitions. This talk will introduce the relational formalism and illustrate its advantages in applications that are difficult or impossible to address by the propositional approach.

J.R. Quinlan, Basser Department of Computer Science, Madsen Building F09, University of Sydney, Sydney Australia 2006 quinlan@cs.su.oz.au


Tree-structured Classification with Applications to Imaging and HIV Genetics

Richard A. Olshen, Stanford University, Stanford, USA

The talk will begin with a brief review of statistical problems in classification and clustering, with an emphasis on binary tree-structured methods. The clustering is of $k$-means type and is used for ``lossy'' coding of images. We sometimes wish to ``compress'' an image by coding its pixel blocks and also to classify the blocks as to whether they contain something of special interest such as a tumor in the case of a medical image. I will present algorithms that enable both goals to be achieved. The clustering also applies to understanding ``quasi-species'' in the context of HIV genetics, in particular the V3 loop region. Examples will be given to illustrate the applications. These results come from collaborations with many individuals over the past six years.

Richard A. Olshen, Division of Biostatistics, Department of Health Research and Policy, Stanford University School of Medicine, Stanford, CA 94305-5092, USA olshen@playfair.stanford.edu
http://www-isl.stanford.edu/~gray.compression.html.
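
A toy version of the $k$-means clustering used for lossy block coding: 4x4 pixel blocks of an image are clustered, each block is replaced by its nearest codeword, and the per-pixel squared error measures the loss. The synthetic image and codebook size are placeholders; the tree-structured variants and the classification of blocks discussed in the talk are not shown.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(11)

# Synthetic 128x128 "image": a smooth gradient plus a bright square and noise.
side, b = 128, 4
yy, xx = np.mgrid[0:side, 0:side]
image = 0.5 * xx / side + 0.3 * ((yy > 60) & (yy < 90) & (xx > 60) & (xx < 90))
image = image + 0.05 * rng.standard_normal(image.shape)

# Cut the image into non-overlapping 4x4 blocks (one row per block).
blocks = (image.reshape(side // b, b, side // b, b)
               .swapaxes(1, 2)
               .reshape(-1, b * b))

# Vector quantization: a 16-codeword codebook fitted by k-means.
codebook = KMeans(n_clusters=16, n_init=5, random_state=0).fit(blocks)
coded = codebook.cluster_centers_[codebook.predict(blocks)]

rate = np.log2(16) / (b * b)                 # bits per pixel for the block labels
mse = np.mean((blocks - coded) ** 2)
print(f"rate ~ {rate:.2f} bits/pixel, distortion (MSE) = {mse:.4f}")
```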



Classification Rules -- Matching the Solution to the Question

David J. Hand, The Open University, Milton Keynes, UK

Superficially, the aim of supervised classification is to classify correctly as many objects as possible. As a consequence, most methodological studies of the quality of supervised classification techniques are based on minimising error rate. However, this is an oversimplification which often fails to take account of the underlying objectives of the classification problem. We examine a series of supervised classification situations where straightforward minimisation of error rate is inappropriate, and show that it leads to suboptimal classification rules. Even when error rate is appropriate, specific aspects of the problem may mean that supervised classification rules lead to suboptimal solutions. Moreover sometimes the possible error rate is bounded, so that apparently poor performance might be almost the best that can be achieved. Worse still, many simulation studies of error rate estimators are based on inappropriate definitions, so yielding doubtful conclusions.

David J. Hand, Department of Statistics, The Open University, Milton Keynes, MK7 6AA, UK d.j.hand@open.ac.uk


IMS Invited: New Directions in Programming Environments


New Directions in Programming Environments: Extensible Software

Günther Sawitzki, University of Heidelberg, Germany

If we want software that can be adapted to our needs in the long run, extensibility is a main requirement. For a long time, extensibility has been in conflict with stability and/or efficiency. This situation has changed with recent software technologies. The tools provided by software technology, however, must be complemented by a design which exploits their facilities for extensibility. We illustrate this using Voyager, a portable data analysis system based on Oberon.


The R Language

Robert Gentleman, Ross Ihaka, University of Auckland, NZ

The $R$ language has been an experiment in combining what we felt were good ideas from a variety of sources and packaging them together. In particular, we have taken syntax and the notion of lazy function arguments from $S$ and combined them with a Lisp-like run-time system. The result is a relatively fast, portable language which will run on quite small machines and which provides a reasonable amount of the functionality of $S$. This talk will discuss some of the choices and tradeoffs made in implementing $R$ and also look to possible future developments, such as compilation, and the consequent changes they are likely to bring to the language.

Robert Gentleman, Statistics Department, University of Auckland


Evolution of the S Language

John M. Chambers, AT&T Bell Laboratories, NJ USA

The S language and its supporting programming environment provide rapid high-level prototyping for computations with data, featuring interaction, graphics, and universal, self-describing objects. Programming in the current version of S uses function objects, informal classes and methods, and interfaces to other languages and systems. A major revision of S, described as ``Version 4'', has been underway for several years, designed to improve the usefulness of the language, for a wide variety of applications and at every stage of the process of programming with data. This talk describes the design goals of the new version, reviews changes to date, and outlines plans for future work.

John M. Chambers, AT&T Bell Laboratories, Murray Hill, NJ USA jmc@research.att.com


IMS/INT Contributed: Wavelet Methods


Discrimination of High Dimensional Data Using Adaptive Wavelets

Yvette L. Mallet, Danny H. Coomans, Olivier Y. de Vel, James Cook University, Queensland, Australia
Jeroslav Kautsky, Flinders University, South Australia, Australia

A major concern arising from the classification of spectral data is that the number of variables or dimensionality often exceeds the number of available spectra. This leads to a substantial deterioration in performance of traditionally favoured classifiers. It becomes necessary to reduce the number of variables to a manageable size whilst, at the same time, retaining as much discriminatory information as possible. A new and innovative technique based on adaptive wavelets, which aims to reduce the dimensionality and optimize the discriminatory information, is presented. A discrete wavelet transform is utilized to produce wavelet coefficients which are used for classification. Rather than using a predefined wavelet basis from the literature, we perform an automated search for the wavelet which optimizes specified discriminant criteria. Preliminary results indicate that the adaptive wavelet compares well with reference techniques such as linear discriminant analysis and more sophisticated techniques including regularized discriminant analysis and penalized discriminant analysis.

Yvette Mallet, Mathematics and Statistics Department, James Cook University, QLD 4811, Australia yvette.mallet@jcu.edu.au
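
The adaptive search over wavelet filters is the paper's contribution; the sketch below only illustrates the non-adaptive baseline against which such a method is compared: a fixed discrete wavelet transform reduces each spectrum to a handful of coarse coefficients, which are then fed to linear discriminant analysis. The wavelet choice, decomposition level and synthetic data are assumptions made for the example.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic "spectra": 60 observations on 256 wavelengths, two classes that
# differ by a smooth bump -- dimensionality (256) exceeds the training set size.
n, p = 60, 256
x_axis = np.linspace(0, 1, p)
labels = np.repeat([0, 1], n // 2)
bump = np.exp(-((x_axis - 0.4) ** 2) / 0.002)
spectra = rng.normal(scale=0.5, size=(n, p)) + np.outer(labels, bump)

def dwt_features(spectrum, wavelet="db4", level=4):
    """Keep only the coarse approximation coefficients of a DWT."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    return coeffs[0]                      # approximation at the coarsest level

features = np.array([dwt_features(s) for s in spectra])
print("reduced dimension:", features.shape[1])

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, features, labels, cv=5)
print("cross-validated accuracy:", scores.mean().round(2))
```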


Use of Wavelets in the Detection of Underwater Sound Signals

Trevor Bailey, Theofanis Sapatinas, Kenneth Powell, Wojtek Krzanowski, University of Exeter, Exeter, UK

We consider underwater sound data in which acoustic events of interest, or `signals', are superimposed on a background sound environment, or `noise'. Such noise is rarely random error, but contains features dependent upon particular underwater conditions and recording apparatus; furthermore, it may change and evolve over time. Given raw sound data, our objective is to develop a model of this noise which is capable of adapting with time, so forming a basis for detection of intermittent departures considered to be signals. We wish to determine where signals begin and end and characterise their interim behaviour, or `signature'. We propose a background noise model that uses recursive kernel estimation of the multivariate distribution of certain summaries of the coefficients resulting from wavelet decompositions of the original sound. Observations considered to be outliers from this kernel estimate are then flagged as signals. The method is illustrated on various types of dolphin sounds.

Dr Trevor C. Bailey, Department of Mathematical Statistics & Operational Research, University of Exeter, Exeter, EX4 4QE, UK T.C.Bailey@exeter.ac.uk
http://msor0.ex.ac.uk/Staff/TCBailey/HomePage.html.
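
A much-reduced sketch of the detection idea: short windows of the recording are summarised by the log energies of their wavelet detail coefficients, a kernel density estimate is fitted to the summaries of an initial stretch treated as background noise, and later windows whose summaries fall in the low-density tail are flagged. The recursive (time-adaptive) updating of the kernel estimate in the paper is replaced here by a single static fit, and the sound data are simulated.

```python
import numpy as np
import pywt
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(12)

# Simulated recording: coloured background noise with a short tonal burst inserted.
fs, seconds = 4096, 8
noise = np.convolve(rng.standard_normal(fs * seconds), np.ones(5) / 5, mode="same")
signal = noise.copy()
burst = np.sin(2 * np.pi * 900 * np.arange(0, 0.05, 1 / fs)) * 3.0
signal[5 * fs:5 * fs + len(burst)] += burst

def window_features(w, wavelet="db4", level=4):
    """Log energy of each wavelet detail level for one window of sound."""
    details = pywt.wavedec(w, wavelet, level=level)[1:]
    return np.log([np.sum(d ** 2) + 1e-12 for d in details])

win = 512
windows = signal[: (len(signal) // win) * win].reshape(-1, win)
feats = np.array([window_features(w) for w in windows])

# Fit the background model on the first two seconds, then score every window.
background = feats[: 2 * fs // win]
kde = KernelDensity(bandwidth=0.5).fit(background)
scores = kde.score_samples(feats)
threshold = np.quantile(kde.score_samples(background), 0.01)

flagged = np.where(scores < threshold)[0]
print("flagged windows (seconds):", np.round(flagged * win / fs, 2))
```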



On Large-Sample Properties of Wavelet Methods

Peter Hall, Australian National University, Canberra, Australia

Much of the existing theory describing properties of wavelet methods for curve estimation addresses their minimax performance over function classes. Thus, we know that in terms of upper bounds they achieve near-optimal performance uniformly in many different candidates for the target function. However, significantly less information is available about properties when estimating a single function, that function possibly varying with sample size (so that the estimation problem becomes more difficult as the amount of data increases). Results of this nature provide at once information about the limitations of wavelet methods, about their performance relative to other techniques, and about choice of smoothing parameters. Such results will be described in this talk.

Peter Hall, Centre for Mathematics and Its Applications, Australian National University, Canberra, ACT 0200 halpstat@fac.anu.edu.au


Wavelet Thresholding via a Bayesian Approach

Felix Abramovich, Tel Aviv University, Tel Aviv, Israel
Bernard W. Silverman, University of Bristol, Bristol, UK

We discuss a Bayesian formalism which gives rise to a type of wavelet threshold estimation in nonparametric regression. The prior model for the underlying signal $f$ can be adjusted to give functions $f$ falling in any specific Besov space $B^{\sigma }_{p,q}$. It is also of interest in its own right as a way of understanding and demonstrating the meaning of the Besov space parameters. The prior hyperparameters have intuitive meanings and give rise to level-dependent thresholding functions applied to the empirical wavelet coefficients. The performance of the method is investigated and compared to existing thresholding methods. Not surprisingly, incorporating reasonable prior information about the smoothness of the signal can improve the quality of estimates. Several approaches to the choice of the hyperparameters are discussed.

Felix Abramovich, Department of Statistics & Operations Research, Tel Aviv University, Ramat Aviv 69978, Israel felix@math.tau.ac.il
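
The Bayesian level-dependent thresholds of the paper follow from prior hyperparameters tied to the Besov parameters; the sketch below is a generic stand-in that only shows the mechanics of level-dependent soft thresholding of empirical wavelet coefficients, with the per-level thresholds set by an ad hoc decay rule rather than the posterior-based rule of the paper.

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)

# Noisy observations of a piecewise-smooth signal on a dyadic grid.
n = 1024
t = np.linspace(0, 1, n)
signal = np.sin(4 * np.pi * t) + (t > 0.5)
y = signal + 0.3 * rng.standard_normal(n)

coeffs = pywt.wavedec(y, "sym8", level=6)           # [cA6, cD6, ..., cD1]
sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise scale from finest level

# Level-dependent soft thresholds: a simple geometric decay with depth, standing
# in for the hyperparameter-driven thresholds of the Bayesian rule.
den = [coeffs[0]]
for j, d in enumerate(coeffs[1:], start=1):         # j = 1 is the coarsest detail level
    thresh = sigma * np.sqrt(2 * np.log(n)) * 0.8 ** (len(coeffs) - 1 - j)
    den.append(pywt.threshold(d, thresh, mode="soft"))

estimate = pywt.waverec(den, "sym8")[:n]
print("RMSE noisy   :", np.sqrt(np.mean((y - signal) ** 2)).round(3))
print("RMSE denoised:", np.sqrt(np.mean((estimate - signal) ** 2)).round(3))
```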


Ideal Time-Frequency De-Noising

Iain M. Johnstone, Stanford University, USA

We describe methods for removal of noise from signals which are neither smooth nor stationary, but which instead have a sparse time-frequency character. Our methods exploit the best-orthonormal-basis paradigm of Coifman-Meyer-Wickerhauser, and operate by selecting a ``best basis for de-noising'' from a library of time-frequency bases, such as the cosine packet or wavelet packet libraries. In the selected basis we apply thresholding to the coefficients of the noisy signal. We develop two theoretical concepts to explain the challenges of best-basis de-noising. The first, the ideal risk, describes the mean-squared error achievable with the use of an {\it oracle} that selects a basis for us ideally (with complete knowledge of the underlying object). The second, {\it oracle inequality}, describes the mean squared error achievable by realizable procedures, which must select a basis from the library based on noisy data. We exhibit several oracle inequalities, showing that a variety of basis selection procedures attain risk within a logarithmic factor of the ideal risk. Such behavior is best possible. This is joint work with David Donoho.

Iain M. Johnstone, Dept. of Statistics, Stanford University, Stanford, CA 94305 USA imj@playfair.stanford.edu


INT Contributed: Computational and Multivariate Statistics


An Implementation of Canonical Analyses in Omega-Stat

Hanga C. Galfalvy, E. James Harner, West Virginia University, Morgantown, USA

Principal Component Analysis (PCA) and Correspondence Analysis (CA) are useful methodologies for examining the internal structure of continuous and count data, respectively. The relationships between individuals (e.g., sites in ecological studies) and variables (e.g., species) can currently be viewed by text reports and by dynamic projection biplots in Omega-Stat [{\it Learning from Data: AI and Statistics V}, Springer-Verlag (1996):334-336]. It is often useful to relate derived principal (or correspondence) variables to external (e.g., environmental) regressor variables. One approach is to model the external variables directly. The resulting canonical analyses for principal components (termed redundancy analysis) and correspondence analysis allow one or more derived variables to be constrained linearly by the regressor variables and the remaining derived variables to be unconstrained. A further extension is to do the analyses orthogonally to selected regressor variables, which results in partial (canonical) analyses. These partial canonical options have been added to principal component and correspondence analyses in Omega-Stat. The dynamic projection plots have been updated to include constrained axes as well as representations of the regressor variables in the biplots.

E. James Harner, 106 Knapp Hall, West Virginia University, Morgantown, WV 26506, USA ejh@cs.wvu.edu


Monotone Smoothing Made Easy

Xuming He, University of Illinois, USA
Peide Shi, Peking University, China

Data smoothing, that is, fitting a smooth function to data, is one of the basic tools in statistical applications. However, it is generally harder to do smoothing with a constraint on the curve such as monotonicity. Ramsay (1988) proposed using a sub-class of monotone splines. We consider searching over the space of all B-spline functions by representing monotonicity as linear constraints to a linear program. Availability of efficient linear programming algorithms makes the computation of monotone smoothing relatively easy and flexible. The asymptotic properties of the constrained fits are similar to those of unconstrained estimates. Examples will be provided, and some comparison with other monotone smoothing methods will also be made.

Xuming He, 725 S. Wright, Champaign, IL 61820, USA he@stat.uiuc.edu
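
A minimal sketch of the idea of casting monotone spline smoothing as a linear program: an L1 fit over a B-spline basis, with nondecreasing coefficients imposed as linear constraints (a sufficient, not necessary, condition for a nondecreasing fit). It is an illustration under those assumptions, not He and Shi's exact formulation; knot placement, degree and data are invented.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import linprog

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 80))
y = np.log1p(10 * x) + 0.2 * rng.standard_normal(80)     # monotone trend + noise

degree = 3
interior = np.linspace(0, 1, 8)[1:-1]
knots = np.r_[[0] * (degree + 1), interior, [1] * (degree + 1)]
m = len(knots) - degree - 1                               # number of basis functions

# Design matrix: evaluate each B-spline basis function at the data points.
B = np.column_stack([BSpline(knots, np.eye(m)[j], degree)(x) for j in range(m)])

n = len(x)
# Variables: m spline coefficients c, then n absolute residuals e.
# Minimise sum(e) subject to |y - Bc| <= e and c_{j+1} >= c_j.
c_obj = np.r_[np.zeros(m), np.ones(n)]
A_fit = np.block([[B, -np.eye(n)], [-B, -np.eye(n)]])
b_fit = np.r_[y, -y]
D = np.zeros((m - 1, m + n))
D[np.arange(m - 1), np.arange(m - 1)] = 1.0               # c_j - c_{j+1} <= 0
D[np.arange(m - 1), np.arange(1, m)] = -1.0
A_ub = np.vstack([A_fit, D])
b_ub = np.r_[b_fit, np.zeros(m - 1)]
bounds = [(None, None)] * m + [(0, None)] * n

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
fit = B @ res.x[:m]
print("monotone fit is nondecreasing:", bool(np.all(np.diff(fit) >= -1e-8)))
```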


Additive Nonparametric Regression with Autocorrelated Errors

Michael Smith, Robert Kohn, Australian Graduate School of Management, Sydney, Australia
Chi-Mong Wong, Hong Kong University of Science and Technology, Hong Kong

A Bayesian approach is presented for estimating non-parametrically an additive regression model with autocorrelated errors. Each of the potentially nonlinear components is modeled as a regression spline using many knots, while the errors are modeled by a high order stationary autoregressive process parameterized in terms of its partial autocorrelations. Significant knots and partial autocorrelations are selected using variable selection. All aspects of the model are estimated simultaneously using Markov chain Monte Carlo. It is shown empirically that the proposed approach works well on a number of simulated examples.


Construction of Optimal Row-Column Designs by Computer

Nam-Ky Nguyen, CSIRO Clayton, Australia

This paper describes an effective computer-based method for constructing optimal or near-optimal row-column designs (RCDs) with up to 100 treatments. The method generates designs that compare favourably with previously published results whilst at the same time greatly increasing the range of available RCDs.

Nam-Ky Nguyen, CSIRO-IAPP Biometrics Unit, Private Bag 10, Rosebank MDC, Clayton, Vic. 3169, Australia namky@forprod.csiro.au


Application Aspects of Essentially Multivariate Analysis

Vadim I. Serdobolskii

A survey is presented of a new branch of multivariate statistics: the theory of multivariate procedures that remain consistent as the sample size $N$ increases uniformly with respect to the dimension $n$ of the variables, provided the ratio $n/N$ stays bounded. The theory builds on recent developments in the asymptotic theory of large random matrices, initiated by a paper of V.A. Marchenko and L.A. Pastur (1967), developed in monographs by V.L. Girko (1975-1988), and applied recently (1988-1996) to sample covariance matrices (A.N. Kolmogorov's asymptotics). Its most important consequence for multivariate analysis is that, under some restrictions on the dependence between variables, the leading terms of regular spectral functions of sample covariance matrices involve components of order $n/N$ and do not depend on moments higher than the second. Further developments have provided approximately non-improvable, distribution-free solutions to a number of well-known problems: estimation of the expectation vector, stable linear discriminant analysis, and linear regression with random predictors.


Self-Validating Computation of Normal Probabilities Using a Continued Fraction

Trong Wu, Southern Illinois University, Edwardsville, USA

A self-validating method for computing the normal distribution with a continued fraction is introduced. The results of the computation will provide verification of the magnitude of the absolute error. With this method, we obtained 18 digits of accuracy for 80-bit floating-point numbers. We provide two new results, a lemma and a theorem; the proofs are mathematically simple. The lemma and the theorem are the cornerstones of the computation. In this paper, we discuss problems in accurately computing the normal distribution function, report the difficulties of implementing the algorithms and formulas, and present programming techniques to overcome exceptional conditions such as overflow and underflow.

Trong Wu, Department of Computer Science, Southern Illinois University, Edwardsville, IL 62026-1656, USA twu@siue.edu
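
The paper's contribution is the self-validating (interval) evaluation and its error bounds; the sketch below only shows the underlying continued fraction, the classical Laplace expansion of the upper normal tail for x > 0, evaluated by backward recurrence and checked against erfc. The truncation depth is an arbitrary choice for illustration, not a validated bound, and convergence slows as x approaches 0.

```python
import math

def normal_upper_tail_cf(x, terms=300):
    """P(Z > x) for x > 0 via Laplace's continued fraction:
       Q(x) = phi(x) / (x + 1/(x + 2/(x + 3/(x + ...)))),
       evaluated by backward recurrence."""
    if x <= 0:
        raise ValueError("this continued fraction form assumes x > 0")
    tail = 0.0
    for k in range(terms, 0, -1):
        tail = k / (x + tail)
    phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    return phi / (x + tail)

for x in (1.0, 2.0, 4.0, 8.0):
    cf = normal_upper_tail_cf(x)
    exact = 0.5 * math.erfc(x / math.sqrt(2.0))   # reference value
    print(f"x={x:3.1f}  cf={cf:.15e}  |error|={abs(cf - exact):.1e}")
```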


A Mixed Measure Formulation of the EM Algorithm for Huge Data Set Applications

George W. Rogers, Bradley C. Wallet, Naval Surface Warfare Center Dahlgren Division, Virginia, USA
Edward J. Wegman, George Mason University, Virginia, USA

A formulation of the EM algorithm based on multiple measures is presented that is suitable for use with huge data sets. The approach is based on partitioning the observation set and applying an appropriate measure to the observations in each partition. In regions where the observations are sufficiently dense, a discrete measure can be applied with little loss of information, whereas in regions of sparse observations it is preferable to use the standard continuous measure. A generalized version of the EM algorithm is derived for the mixed measure case. With a suitable choice of partitions and measures, this mixed measure version of EM scales nicely even to huge data sets. A Monte Carlo example is presented which uses a sample size of 10 million observations. The example includes a comparison to subsampling.

George W. Rogers, Bradley Wallet, Code: L42, Dahlgren, VA 22448 USA bwallet@nswc.navy.mil
http://farside.nswc.navy.mil/.
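
A toy illustration of the partitioning idea for a univariate two-component Gaussian mixture: observations in the dense central region are replaced by bin centres and counts (a discrete measure), the sparse tails are kept as raw points, and a weighted EM update treats both in the same sums. The partition rule, bin count and mixture are invented for the example; the paper's general mixed-measure derivation is not reproduced.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Large sample from a two-component normal mixture.
n = 1_000_000
z = rng.random(n) < 0.3
x = np.where(z, rng.normal(-2.0, 1.0, n), rng.normal(1.5, 0.7, n))

# Partition: bin the dense middle, keep sparse tail observations exactly.
lo, hi = np.quantile(x, [0.001, 0.999])
dense = (x >= lo) & (x <= hi)
counts, edges = np.histogram(x[dense], bins=400)
support = np.r_[0.5 * (edges[:-1] + edges[1:]), x[~dense]]   # bin centres + raw tails
weights = np.r_[counts, np.ones((~dense).sum())]

# Weighted EM for a two-component mixture on the reduced support.
pi_, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    dens = pi_ * norm.pdf(support[:, None], mu, sd)          # (points, 2)
    resp = dens / dens.sum(axis=1, keepdims=True)            # E-step
    wk = (weights[:, None] * resp).sum(axis=0)               # M-step
    pi_ = wk / weights.sum()
    mu = (weights[:, None] * resp * support[:, None]).sum(axis=0) / wk
    sd = np.sqrt((weights[:, None] * resp * (support[:, None] - mu) ** 2).sum(axis=0) / wk)

print("weights:", pi_.round(3), "means:", mu.round(3), "sds:", sd.round(3))
```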



INT/IMS Contributed: Nonparametric Statistics


Some Recent Developments of U- and V-Statistics

Grace S. Shieh, Academia Sinica, Taipei, Taiwan

The developments of $U$- and $V$-statistics up to 1988 were reviewed by Serfling [{\it Encyclopedia of Statist. Sci.} 9 (1988):436-444] and Riedwyl [{\it Encyclopedia of Statist. Sci.} 9 (1988):1159-1169], respectively. Since then further properties of $U$- and $V$-statistics have been explored, and new classes of $U$- and $V$-related statistics introduced. Four classes of recent developments will be reviewed.

Grace S. Shieh, Institute of Statistical Sci., Academia Sinica, Taipei 11529, Taiwan gshieh@stat.sinica.edu.tw


Permutation Tests for Reflected Symmetry

Georg Neuhaus, University of Hamburg, Germany
Li-Xing Zhu, Chinese Academy of Sciences, China

The paper presents a permutation procedure for testing reflected (or diagonal) symmetry of the distribution of a multivariate variable. The test statistics are based on empirical characteristic functions. The resulting permutation tests are strictly distribution free under the null hypothesis that the underlying variables are symmetrically distributed about a center. Furthermore, the permutation tests are strictly valid if the symmetric center is known and are asymptotically valid if the center is an unknown point. The equivalence, in the large sample sense, between the tests and their permutation counterparts is established. The power behavior of the tests and their permutation counterparts under local alternatives is investigated. Some simulations with small sample sizes ($\leq 20$) are conducted to demonstrate how the permutation tests work. {\em Acknowledgment}: Second author's work supported by the NNSF of China and a fellowship of the Max-Planck Gesellschaft zur Foerderung der Wissenschaften of Germany, while on leave from Institute of Applied Mathematics, Chinese Academy of Sciences, at University of Hamburg.

Li-Xing Zhu, Institute of Applied Mathematics, Chinese Academy of Sciences, China zhu@till.math.uni-hamburg.de
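
A small sketch of the sign-flip permutation idea for a known centre: reflected symmetry of X about θ means X − θ and −(X − θ) have the same distribution, so the imaginary part of the empirical characteristic function should be near zero, and its null distribution can be recovered by randomly reflecting the centred observations. The grid of arguments and the particular statistic below are simplified stand-ins for those in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def imag_ecf_statistic(v, t_grid):
    """Sum of squared imaginary parts of the empirical c.f. over a grid of arguments."""
    proj = v @ t_grid.T                                    # (n, n_t) inner products t'x
    return float((np.sin(proj).mean(axis=0) ** 2).sum())

def reflection_permutation_test(x, centre, n_perm=999, n_t=50):
    v = x - centre
    t_grid = rng.normal(size=(n_t, x.shape[1]))            # random c.f. arguments
    observed = imag_ecf_statistic(v, t_grid)
    perm = np.empty(n_perm)
    for b in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(len(v), 1))  # reflect each point or not
        perm[b] = imag_ecf_statistic(signs * v, t_grid)
    return observed, (1 + np.sum(perm >= observed)) / (n_perm + 1)

# Symmetric example (bivariate normal) versus a skewed alternative, both n = 20.
sym = rng.normal(size=(20, 2))
skew = rng.exponential(size=(20, 2)) - 1.0
for name, data in [("symmetric", sym), ("skewed", skew)]:
    stat, p = reflection_permutation_test(data, centre=np.zeros(2))
    print(f"{name:9s}  statistic={stat:.3f}  p-value={p:.3f}")
```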


On Characterizations of Continuous Distributions by Moments of Order Statistics when Sample Size is Random

Zofia Grudzień, Dominik Szynal, Maria Curie-Sklodowska University, Poland

Let $\{X_{n}, n \geq 1\}$ be a sequence of i.i.d. random variables with distribution $F$, and let $N$ be a positive integer-valued random variable independent of $(X_{n}, n \geq 1)$ such that $E(X^{2}_{k:N} | N \geq k) < \infty$ for some $k \geq 1$, where $X_{k:n}$ stands for the $k$-th order statistic. We characterize continuous distribution functions $F$ when the probability distribution of the sample size $N$ belongs to a class of counting distributions. The characterizing conditions generalize or extend, among other things, those given in Lin and Twoo [{\it Statistics and Probability Letters} 7 (1989):357-359; {\it SEA Bull. Math.} Vol. 15, No 2 (1991):139-144].

Dominik Szynal, Institute of Maths, Maria Curie-Sklodowska University, Pl M Curie-Sklodowskiej 1, 20-031 Lublin Poland szynald.@plumcs11.bitnet


Bayesian Analysis of Order-Statistics Models for Ranking Data

Philip L.H. Yu, The University of Hong Kong, Hong Kong

In modelling ranking data, order-statistics models are commonly used. The idea behind them is that a judgement about the $i$th object (out of $k$ objects) can be represented by a random variable $U_i$ which follows a known distribution. Various distributions, such as the normal and the exponential, have been suggested in the literature. One important problem in estimating the above models is the computational burden of evaluating the multidimensional numerical integration, which can be highly inaccurate when $k$ is large. To overcome this problem, a Bayesian approach is proposed in this paper to fit the order-statistics models via Markov chain Monte Carlo methods. The joint distribution of the utilities is assumed to be a multivariate $t$ distribution. The proposed techniques are demonstrated by simulation studies and an empirical investigation of motor vehicle preferences as illustrated in Dansie (1986) [{\it Applied Statistics}, 269-275].

Dr Philip L.H. Yu, Department of Statistics, The University of Hong Kong, Pokfulam Road, Hong Kong PLHYU@HKUCC.HKU.HK


Extending CCI Beyond the Two-Sample Problem

Paul W. Vos, Thomas C. Chenier, East Carolina University, Greenville, NC, USA

Simulation studies have shown the superiority of CCI (conditional confidence intervals) over the bootstrap and other methods used for the one- and two-sample problems. In this paper we give some theoretical motivation for using conditional confidence intervals. We also consider some theoretical and computational issues for extending CCI beyond the two-sample problem.

Paul Vos, Biostatistics, SAHS, East Carolina University, Greenville, NC 27858, USA bsvos@ecuvax.cis.ecu.edu


Trimmed Conditional Confidence Intervals for a Shift between Two Populations

Paul W. Vos, Thomas C. Chenier, East Carolina University, Greenville, NC, USA

The trimmed conditional confidence interval (CCI) is introduced for estimating the shift between two populations. The trimmed CCI is compared to an untrimmed CCI and to intervals obtained from the trimmed mean, the usual $t$-procedure, and the bootstrap. The simulations show that the trimmed CCI generates the shortest intervals while maintaining coverage rates at the nominal level. The untrimmed CCI is reasonably robust with respect to moderate departures from the procedural assumptions and performs well compared to the nontrimmed procedures. Both versions of the CCI are calculated easily and are suitable for use in an interactive setting. SAS and S code for obtaining the CCI are available from the authors.

Tom Chenier, Biostatistics, SAHS, East Carolina University, Greenville, NC, 27858, USA bschenie@ecuvm.cis.ecu.edu

Wednesday 10 July: 10:30-12:20

ASC Invited: Plenary Address


Bayesian Curves and CARTs

Adrian F.M. Smith, David G.T. Denison, Bani K Mallick, Imperial College, London, UK

Bayesian methods are proposed for curve fitting and generating classification and regression trees. The approach to curve fitting is through piece-wise polynomials for which a joint posterior distribution is set up over both the number and position of the knots and the polynomial coefficients. The approach to CARTs is through a posterior distribution over a space of possible trees. For both curves and CARTs, the posterior distributions are explored using tailored forms of reversible jump Markov chain Monte Carlo methods.

Adrian F.M. Smith, Department of Mathematics, Imperial College, 180 Queen's Gate, London SW7 2BZ UK a.smith@ic.ac.uk


INT/IMS Invited: Plenary Address


Local Learning Based on Recursive Covering

Jerome H. Friedman, Stanford University, USA

Local learning methods approximate a global relationship between an output (response) variable and a set of input (predictor) variables by establishing a set of "local" regions that collectively cover the input space, and modeling a different (usually simple) input-output relationship in each one. Predictions are made by using the model associated with the particular region in which the prediction point is most centered. Two widely applied local learning procedures are near-neighbor methods and decision tree induction algorithms (CART, C4.5). The former induce a large number of highly overlapping regions based only on the distribution of training input values. By contrast, the latter partition the input space into a (relatively) small number of highly customized (disjoint) regions. The approach described here attempts to combine the strengths of both: a large number of highly customized overlapping regions are produced based on both the training input and output values. Moreover, the data structure representing this cover permits rapid search for the prediction region given a set of (future) input values. It can also provide interpretable information on the nature of the global input-output relationship.

Jerome H. Friedman, Stanford University, USA jhf@playfair.stanford.edu


ASC Invited: All-day Workshop on SURVEY DESIGN AND ANALYSIS


Minimising Error in Panel Surveys

Gerry Bardsley, ANOP

Panel surveys are used by both government and private sector agencies. Gerry, with extensive experience across both sectors, will provide a comprehensive overview of the potential sources of non-response in panel surveys. He will discuss some of the traditional and more innovative solutions to minimising non-response in panel surveys. This will include a discussion of MIS procedures, successful communication and relationship management strategies and the use of incentives in panel surveys. Gerry will draw on experience with the ABS Monthly Labour Force Survey, DEET's Longitudinal Survey program and other private sector studies.


Minimising Error in Telephone Surveys

Jim Millwood, Wallis Consulting Group

With telephone surveys increasingly becoming the most efficient means by which data can be collected in a timely manner, it is not surprising that many government agencies and private sector market research organisations rely heavily on this methodology. Against this background, Jim will draw on his significant experience in both public and private sectors to detail the potential sources of non-response and then discuss some of the techniques now being used both here and overseas to minimise non-response in telephone surveys.

Mr Jim Millwood, Wallis Consulting Group


Adjusting for Non-response through Calibration Estimation in ABS Household Surveys

Sadeq Chowdhury, Frank Yu, Australian Bureau of Statistics, ACT, Australia

The Australian Bureau of Statistics has introduced the calibration method (Deville and Sarndal, 1992) for estimation in several household surveys. The approach unifies a number of different estimation procedures which involve auxiliary information for improving the efficiency of estimation. It also helps to reduce biases due to sample undercoverage or non-response. The approach includes, as a special case, the post-stratification estimation method which is used for the Monthly Labour Force Survey. In this talk, Frank will give an overview of the calibration method and discuss its application to a number of ABS surveys. He will present an evaluation of the use of household benchmarks in selected surveys. A reduction in the bias of estimates was made possible by the use of auxiliary information on the number of households by household type in the estimation procedures. The talk will conclude with a list of further research problems that are being investigated.

Mr Sadeq Chowdhury, Australian Bureau of Statistics, ACT, Australia

Wednesday 10 July: 14:00-15:50

ASC-E/INT Invited: The Interface of Computing with Statistics in Environmental Biology and Environmental Protection


Interactive Statistics on the Internet: Applications in Environmental Biology

Walter W. Piegorsch, R. Webster West, University of South Carolina, Columbia, USA

Environmental phenomena generate complex data that often require new forms of computational analysis. This concern is particularly rich in environmental biometry, where many different scientific questions and associated data structures have spawned a wide variety of statistical models and computational methods. Unfortunately, user software constraints often make application of these methods difficult. To collect and disseminate new, advanced computational methods of data analysis, we describe a prototype World Wide Web site for interactive statistics. The site allows environmental biologists to identify and employ targeted statistical methods for their specific data analytic problems. The site is accessible to anyone with an Internet browser, and provides assistance in an automated fashion, at no cost to the user. We illustrate its use with examples from computationally-intensive problems in environmental biology, including change-point analysis and quantitative risk assessment.

Walter W. Piegorsch, Department of Statistics, University of South Carolina, Columbia SC 29208, USA piegorsc@stat.sc.edu
http://www.stat.sc.edu/~west/TR187.ps.



The Interface Between Computing and Statistics in Environmental Protection

Lawrence H. Cox, US Environmental Protection Agency, Research Triangle Park, USA

Environmental data sets typically are large, noisy, and not well characterized by standard statistical distributions. In addition, they frequently exhibit both spatial and temporal structure, possibly different structure at different scales. Environmental pollution is often confounded with other complex phenomena such as meteorology or hydrology. Environmental exposure may be difficult to measure, highly variable and difficult to characterize, particularly for small areas or subgroups. Environmental effects on ecology and human health are dependent on exposure and biological or physiological processes that, too, are understood imperfectly. The paradox of having too much data to analyze but not enough to draw firm conclusions is commonplace. These situations demand computationally intensive statistical methods. A selection of these problems and associated statistical methods involving optimization, modelling, resampling, simulation, and data validation and synthesis will be discussed.

Lawrence H. Cox, US Environmental Protection Agency, National Exposure Research Laboratory (MD-75), Research Triangle Park, NC 27711 USA cox.larry@epamail.epa.gov


Water Quality Monitoring in a River

Glenn Stone, John Donnelly, CSIRO, Sydney, Australia

In this paper we discuss some ideas for the design of a sampling scheme for the long term monitoring of water quality in a ``linear'' system such as a river. Based on data from a pilot study, we build piecewise linear models with knot locations chosen from the data using variable selection techniques. We also consider how to select sampling locations so as to fit these models. The application had an unusual fitting criterion, and we discuss its estimation. The model can be extended to cope with tributaries and small embayments, and we discuss these extensions and some problems with this approach. This work was carried out in collaboration with Australian Water Technologies.

Glenn Stone, CSIRO Division of Mathematics and Statistics, Locked Bag 17, North Ryde, NSW 2113, Australia Glenn.Stone@dms.CSIRO.AU
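
A sketch of one common way to build a piecewise linear model with data-chosen knots: a truncated-line basis (x − k)_+ at candidate knot locations, with forward selection of knots by BIC. The criterion here is ordinary least squares, not the unusual application-specific fitting criterion mentioned in the abstract, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "distance along river" vs. water-quality response with two kinks.
x = np.sort(rng.uniform(0, 100, 200))
truth = 5 + 0.1 * x - 0.3 * np.clip(x - 40, 0, None) + 0.35 * np.clip(x - 70, 0, None)
y = truth + 0.5 * rng.standard_normal(len(x))

def design(x, knots):
    cols = [np.ones_like(x), x] + [np.clip(x - k, 0, None) for k in knots]
    return np.column_stack(cols)

def bic(x, y, knots):
    X = design(x, knots)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + X.shape[1] * np.log(len(y))

candidates = list(np.quantile(x, np.linspace(0.05, 0.95, 19)))
selected, best = [], bic(x, y, [])
improved = True
while improved and candidates:
    improved = False
    scores = [(bic(x, y, selected + [k]), k) for k in candidates]
    score, k = min(scores)
    if score < best:                      # greedy forward selection on BIC
        best, improved = score, True
        selected.append(k)
        candidates.remove(k)

print("selected knots:", np.round(sorted(selected), 1))
```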


ASC Invited: All-day Workshop on SURVEY DESIGN AND ANALYSIS


Implications of Survey Design for Data Analysis

Chris Skinner, University of Southampton, UK

Chris Skinner will review approaches to handling complex sampling designs in the analysis of survey data. A basic distinction is between an aggregated and a disaggregated approach. In an aggregated analysis, the design is treated as a `nuisance' in the definition of parameters and is relevant only to inference. In a disaggregated analysis, the design reflects population structure of relevance to the subject under study, and this structure needs to be reflected in the analysis, for example, via multilevel models. Some issues relevant to aggregated analysis, including weighting and the role of variance estimation software, such as SUDAAN, will be considered. Examples of surveys which have required complex designs to efficiently address their aims are the Australian Workplace Industrial Relations Survey 1995 and the 1991 Queensland Crime Victims Survey, both of which will be discussed during this session.

Prof Chris Skinner, University of Southampton, UK


Australian Workplace Industrial Relations Survey 1995 (AWIRS 95)

Roger Jones, QED Pty Ltd

AWIRS 95 consists of a main survey of 2001 workplaces with 20 or more employees, a small workplace survey of 1055 workplaces with 5-19 employees and an employees survey with 19321 returns. For the main survey, the design aimed to ensure approximately equal standard errors for 18 industry groups and five workplace size bands. Roger will discuss the sample design, fieldwork outcomes and weighting procedures. A particular problem was that the sampling frame included a large number of `dead' businesses and had poor classification of size of the workplaces.


1991 Queensland Crime Victims Survey

Peta Frampton, Queensland Government Statisticians Office, Brisbane, Australia
David Steel, University of Wollongong, NSW, Australia

A model to explain fear of crime in Queensland was developed and fitted to data from the 1991 Queensland Crime Victims Survey which had a stratified multistage clustered design. The overall sample size of 7530 households spread throughout Queensland yielded 6315 personal crime interviews. Fear of crime was measured from the answers to the question about respondents' feelings of safety when walking alone in an area after dark. Issues considered in the analysis, including the selection and categorization of variables, use of weighted or unweighted data, and presentation of results, will be discussed.

Ms Peta Frampton, Queensland Government Statisticians Office, Australia


ASC Invited: Nonparametric Methods for Design and Analysis of Experiments


Nonparametric Methods for the Analysis of Fixed and Mixed Factorial Designs

Edgar Brunner, University of Göttingen, Germany

So far, the analysis of factorial designs in a nonparametric setup has been restricted mainly to the one-way layout. Procedures for higher-way layouts are either restricted to semiparametric models or to special designs (see e.g. Brunner & Puri [{\em Handbook of Statistics}, Vol. 13, (to appear 1996)]). Moreover, the continuity of the underlying distribution functions is generally assumed. No unified theory for the analysis of factorial designs seems to be available. The idea of formulating nonparametric hypotheses by means of the distribution functions, in a similar way as in the theory of linear models, was introduced by Akritas & Arnold [{\it J. Amer. Stat. Ass.} {\em 89} (1994):336-343] in special repeated measurements models, and has been further developed for fixed unbalanced factorial designs by Akritas, Arnold & Brunner [{\it preprint}, (1994)] and Akritas & Brunner [{\it preprint}, (1995)], and for unbalanced mixed models by Brunner & Puri [{\em Handbook of Statistics}, Vol. 13, (to appear 1996)]. The aim of this talk is to provide a unified theory for the analysis of nonparametric fixed and mixed models in factorial designs based on the nonparametric hypotheses. The results of the aforementioned papers are generalized to the case of score functions with a bounded second derivative. It is not assumed that the underlying distribution functions are continuous. This means that data coming from continuous distributions as well as discrete ordinal data are covered by this approach. Within this framework, the question of the {\it rank transform property} of a rank statistic is also briefly addressed. A general method to derive useful approximations for small samples is also considered. The results are applied to special factorial designs including ordered alternatives and longitudinal data.

Prof. Edgar Brunner, Abt. Med. Statistik, Humboldt Allee 32, D-37073 Göttingen, Germany brunner@ams.med.uni-goettingen.de


Fully Non-parametric Hypotheses

Steven F. Arnold, Pennsylvania State University, USA

The traditional approach to rank tests in ANOVA models assumes that the various distributions are all shifted versions of a common distribution. Since this formulation assumes the existence of shift parameters, it is a semi-parametric model. We consider a different, fully non-parametric formulation for hypotheses in ANOVA models. For example, our version of the hypothesis of no interaction in a two-way model is that the distribution function in the $(i,j)$ cell is a mixture of a distribution function representing the row effect ($i$) and a distribution function representing the column effect ($j$). We show that this hypothesis is equivalent to the hypothesis that there is no interaction between the row and column effect on any scale.

Dr Steven F. Arnold, 313 Classroom Building, Department of Statistics, The Pennsylvania State University, University Park, PA 16802, USA sfa@stat.psu.edu
http://www.stat.psu.edu/faculty/arnold.html.



Validation of Nonparametric Linear Models

Axel Munk, Holger Dette, Ruhr-Universität Bochum, Bochum, Germany

A new test is proposed in order to verify that a regression function $f$ has a prescribed parametric form. The test is based on the large sample behaviour of the $L^2$-distance between $f$ and the subspace $U$ spanned by the regression functions to be verified. Up to now, a large $p$-value associated with a test of the hypothesis $H:f\in U$ has been considered a sufficient measure of the evidence for $H$. We illustrate how misleading decisions may become when this approach is followed. In contrast, we propose a test of the hypothesis that $f$ is not in a preassigned $L_2$-neighborhood of $U$, which allows us to `verify' the model $U$ at a controlled type I error rate. The suggested test is very simple to perform; in particular, we do not require nonparametric estimates of the regression function, and hence the test does not depend on the choice of bandwidth or kernel. The ideas transfer to other problems such as the validation of (nonparametric) linear models or multivariate designs.

Dr. A. Munk, Ruhr-Universität Bochum, Fakultät für Mathematik, Universitätsstr. 150, 44780 Bochum, Germany axel.munk@rz.ruhr-uni-bochum.de
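
In the notation above, the usual goodness-of-fit formulation is reversed; schematically, with $\Delta > 0$ the radius of the preassigned $L_2$-neighbourhood of $U$ (the precise distance and asymptotics are those of the paper),
$$
H_0:\ \inf_{g \in U} \| f - g \|_{L^2} \;\ge\; \Delta
\qquad \mbox{versus} \qquad
H_1:\ \inf_{g \in U} \| f - g \|_{L^2} \;<\; \Delta ,
$$
so that rejecting $H_0$ `verifies' the model $U$ at a controlled type I error rate.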


Nonparametric Adaptive Procedures in One-factor Experiments

Shan Sun, Texas Tech University, Lubbock, TX, USA

We provide a class of adaptive nonparametric procedures similar to Hogg's procedures, but for more general situations, including (i) testing for ordered alternatives in $c$-sample location problem ($c>2$), (ii) testing, estimation and multiple comparisons in one-factor experiments. In addition, we will resolve the problem of handling ties. Some applications are provided. The supremacy of these procedures over the usual parametric procedure based on the sample means, and the usual nonparametric procedure (based on ranks) is established.

Shan Sun, Texas Tech University, Lubbock, TX, USA
http://www.math.ttu.edu.



IMS Invited: Bayesian Nonparametrics


Approaches for Semiparametric Bayesian Regression

Alan E. Gelfand, University of Connecticut, USA

Developing regression relationships is a primary inferential activity. For such problems, modern statistical work encourages less presumptive, i.e., nonparametric, specification for at least a portion of the modeling. In the context of hierarchical models we review the various ways in which such specification has been incorporated in the literature. In particular, it can be adopted at a particular hierarchical stage. It can capture error distributions, regression functions, hazards in survival models and calibration functions in measurement error models. Depending upon the application, it can be implemented through countable mixtures, Polya trees (including Dirichlet processes) and independent increment or Lévy processes. We offer a brief survey of all of these approaches using a range of regression examples.

Alan E. Gelfand, Dept. of Statistics, U-120, University of Connecticut, Storrs, CT 06269-312, USA GELFAND@UCONNVM.UCONN.EDU


Issues and Models in Bayesian Non/Semi-parametric Time Series Analysis

Mike West, Duke University, NC, USA

This paper will introduce and review some areas of recent development and current interest in Bayesian non/semi-parametric modelling in time series. The discussion will select from a collection of topics, including: (a) mixture modelling, and other, approaches in non-linear auto-regressive time series; (b) issues and models arising in dealing with timing variabilities and stochastic time "deformations" underlying observed non-linear structure in time series; (c) non-parametric models and methods for problems of detecting and inferring long-range dependence in time series; (d) the treatment of problems of specifying and estimating measurement error structures in state-space modelling of time series, including approaches using wavelet methods; and (e) non-parametric models for time-evolving distributions. Discussion will include applied time series problems that underlie and motivate such research directions.


Bayesian Nonparametrics: From Poincaré (1896) to Berkeley (1996)

Persi Diaconis, Harvard University

I will survey efforts to do Bayesian statistics on infinite dimensional problems. These start with the efforts of Poincaré and Hausdorff, who introduced Gaussian and stick-breaking priors. They culminate in work of Wahba, and of Dubins and Freedman. This work is finally being tried out in practical problems. Issues of consistency and common sense will be addressed, as well as recent joint work with Freedman and Petroni.


IMS/INT Contributed: Resampling Methods


Implementation of Saddlepoint Approximations to Bootstrap Distributions

Angelo J. Canty, Anthony C. Davison, University of Oxford, U.K

In many situations, saddlepoint approximations can be used to replace Monte Carlo simulations to find the bootstrap distribution of a statistic. We explain how bootstrap and permutation distributions can be expressed as conditional distributions and how methods for linear programming and for fitting generalized linear models can be used to find the saddlepoint approximations. If the statistic of interest cannot be expressed in terms of a single estimating equation, then an approximation to the marginal distribution of the statistic is required. This situation arises commonly in finding the bootstrap distribution of a studentized statistic. We discuss two proposed approximations and look at their implementation and performance. The results are illustrated using an example from statistical process control.

Angelo J. Canty, Dept. of Statistics, University of Oxford, 1 South Parks Road, Oxford, U.K., OX1 3TG canty@stats.ox.ac.uk
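
For the simplest case, the bootstrap distribution of the sample mean, the single-estimating-equation setting of the abstract reduces to a textbook saddlepoint calculation: the cumulant generating function of one resampled observation is K(t) = log(n^{-1} Σ exp(t x_i)), and the Lugannani-Rice formula gives a tail probability without any resampling. The sketch below compares this with a Monte Carlo bootstrap; it does not cover the studentized or marginal approximations discussed in the paper, and the data and threshold are invented.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

rng = np.random.default_rng(6)
x = rng.exponential(size=30)          # observed sample
n = len(x)

def K(t):   return np.log(np.mean(np.exp(t * x)))            # CGF of one bootstrap draw
def K1(t):  return np.sum(x * np.exp(t * x)) / np.sum(np.exp(t * x))
def K2(t):
    w = np.exp(t * x) / np.sum(np.exp(t * x))
    return np.sum(w * x**2) - np.sum(w * x) ** 2

def saddlepoint_tail(a):
    """Lugannani-Rice approximation to P*(bootstrap mean >= a)."""
    s = brentq(lambda t: K1(t) - a, -20, 20)                  # saddlepoint K'(s) = a
    w = np.sign(s) * np.sqrt(2 * n * (s * a - K(s)))
    u = s * np.sqrt(n * K2(s))
    return norm.sf(w) + norm.pdf(w) * (1 / u - 1 / w)

a = x.mean() + 0.25
boot_means = rng.choice(x, size=(50_000, n), replace=True).mean(axis=1)
print("saddlepoint :", round(saddlepoint_tail(a), 4))
print("Monte Carlo :", round(float(np.mean(boot_means >= a)), 4))
```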


Resampling Methods for Estimation Following Sequential Tests

Denis Heng-Yan Leung, Memorial Sloan-Kettering Cancer Center, New York, USA
You-Gan Wang, CSIRO Biometrics Unit, Queensland, Australia

It is a long-standing problem in sequential analysis that maximum likelihood estimation following a sequential procedure usually results in biases. The existing methods for obtaining improved estimates are of limited use because of their heavy dependence on analytic approximations which may not be available in many cases. In this paper, we explore the use of resampling techniques for bias reduction. Specifically, we consider parametric bootstrap and stochastic approximations to obtain copies of the original sequential trials. We show how to combine information from the original sequential trial and its copies to obtain improved parameter estimates.

Denis Heng-Yan Leung, Department of Epidemiology and Biostatistics, Memorial Sloan-Kettering Cancer Center, New York, USA leung@biosta.mskcc.org
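
A minimal illustration of the parametric-bootstrap route to bias reduction: a normal-mean trial that stops early when a boundary is crossed, whose MLE (the sample mean at stopping) is biased; replaying the same stopping rule on samples drawn at the estimated mean gives a bias estimate that is subtracted off. The stopping rule and constants are invented for the example, and the stochastic-approximation variant in the paper is not shown.

```python
import numpy as np

rng = np.random.default_rng(7)

def run_trial(mu, boundary=2.0, n_max=100, block=5):
    """Sample in blocks until |sqrt(n) * mean| crosses the boundary, return the MLE."""
    x = np.empty(0)
    while len(x) < n_max:
        x = np.r_[x, rng.normal(mu, 1.0, block)]
        if abs(np.sqrt(len(x)) * x.mean()) > boundary:
            break
    return x.mean()

def bias_corrected(mu_hat, n_boot=300):
    """Replay the stopping rule at the fitted value and subtract the estimated bias."""
    replicates = np.array([run_trial(mu_hat) for _ in range(n_boot)])
    bias = replicates.mean() - mu_hat
    return mu_hat - bias

# Repeat many trials at a true mean of 0.3 to see the bias and its reduction.
true_mu = 0.3
naive = np.array([run_trial(true_mu) for _ in range(300)])
corrected = np.array([bias_corrected(m) for m in naive])
print("mean of naive MLE    :", naive.mean().round(3))
print("mean of corrected MLE:", corrected.mean().round(3))
```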


Combined Empirical Likelihood

Bruce M. Brown, University of Tasmania, Australia
Song Xi Chen, La Trobe University, Australia

In conventional empirical likelihood, there is exactly one structural constraint for every parameter. In some circumstances, additional constraints are imposed to reflect additional and sought-after features of statistical analysis. Such an augmented scheme is called combined empirical likelihood; it uses the implicit power of empirical likelihood to produce very natural adaptive statistical methods, free of arbitrary tuning parameter choices, and does have good asymptotic properties. The price to be paid for such good properties is in extra computational difficulty. To overcome the computational difficulty, we propose a `least-squares' version of the combined empirical likelihood. The method is illustrated by application to the case of combined empirical likelihood for mean and median in one sample location inference.

Bruce M. Brown, University of Tasmania, TAS 7005, Australia brown@hilbert.maths.utas.edu.au


Sequential Linearization of Empirical Likelihood Constraints with Application to U-statistics

Kim-Anh Do, University of Queensland, Brisbane, Australia
Andrew Wood, University of Bath, Bath, United Kingdom
Bradley Broom, Queensland University of Technology, Brisbane, Australia

Empirical likelihood for a mean is straightforward to compute, but for non-linear statistics significant computational difficulties arise because of the presence of non-linear constraints in the underlying optimization problem. These difficulties can be overcome with sufficient time, care and programming effort. However, they do make it difficult to write general software for implementing empirical likelihood, and hinder the widespread use of empirical likelihood in applied work. The purpose of this paper is to suggest an approximate approach which sidesteps the difficult computational issues. The basic idea, which may be described as ``sequential linearization of constraints'', is a very simple one, but we believe it could have significant ramifications for the implementation and practical use of empirical likelihood methodology. One application of the linearization approach is to the problem of constructing empirical likelihood for $U$-statistics. However, the idea can be extended to a broad range of smooth statistical functionals.

Kim-Anh Do, Department of Social and Preventive Medicine, University of Queensland, PA Hospital, QLD 4102, Australia kim@sophocles.pa.uq.oz.au


Asymptotic Comparison of Iterated Bootstrap Confidence Intervals

Stephen M.S. Lee, The University of Hong Kong, Hong Kong

High-order correction to a standard bootstrap confidence interval can be made by adjusting a tuning parameter that parameterizes the interval end points. Past literature has seen different ways of selecting such a parameter: Hall [{\it Ann. Statist.} 14 (1986):1431-1452] and Beran [{\it Biometrika} 74 (1987):457-468]. Adjustment of the tuning parameter is usually performed by an iterated bootstrap procedure, resulting in an interval commonly known as the iterated bootstrap confidence interval. The present paper examines the asymptotic properties of such intervals in the smooth function model setting (Bhattacharya and Ghosh [{\it Ann. Statist.} 6 (1978):434-451]). In particular, general explicit formulae are derived for their end points and asymptotic coverages. A theoretical comparison is made between various types of iterated intervals by direct computation of their asymptotic end points and coverages for some common examples of smooth function models, using the machinery introduced by Lee and Young [{\it Ann. Statist.} 23 (1995):1301-1330]. Both one-sided and two-sided cases are taken into consideration.

Dr S M S Lee, Department of Statistics, The University of Hong Kong, Pokfulam Road, Hong Kong smslee@hkuxa.hku.hk


Linear Model Selection Based on Risk Estimation

J. Louisa J. Snyman, Rand Afrikaans University, Johannesburg, South Africa
Johannes H. Venter, Potchefstroom University, Potchefstroom, South Africa

The problem of selecting one model from a family of linear models to describe a normally distributed observed data vector is considered. The notion of the model of given dimension nearest to the observation vector is introduced on the grounds that nearly all selection criteria limit selection to such models. An estimator of the risk associated with such a nearest model is proposed extending the approach of Breiman [{\it JASA} 87 (1992):738-754]. This leads to a promising new resampling type model selection criterion, called the ``partial bootstrap'', which is an attractive alternative to criteria such as Mallows' $C_p$. The methods are illustrated in a regression variable selection context and the criterion is evaluated by way of simulation studies.

Dr. J.L.J. Snyman, Dept. of Statistics, Rand Afrikaans University, P.O. Box 524, Aucklandpark, Johannesburg, 2006, South Africa JLJS@RAU3.RAU.AC.ZA


On a Resampling Approach to Choosing the Number of Components in Normal Mixture Models

G.J. McLachlan, D. Peel, University of Queensland, Brisbane, Australia

We consider the fitting of a $g$-component normal mixture model to multivariate data. The problem is to test whether $g$ is equal to some specified value versus some specified alternative value. This problem would arise, for example, in the context of a cluster analysis effected by a normal mixture model, where the decision on the number of clusters is undertaken by testing for the smallest value of $g$ compatible with the data. A test statistic can be formed in terms of the likelihood ratio. Unfortunately, regularity conditions do not hold for the likelihood ratio statistic to have its usual asymptotic null distribution. One approach to the assessment of $P$-values with the use of this statistic is to adopt a resampling approach. An investigation is undertaken of the accuracy of $P$-values assessed in this manner.
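The resampling assessment of the likelihood ratio statistic can be illustrated schematically with a parametric bootstrap. The sketch below uses scikit-learn's GaussianMixture purely as a convenient mixture fitter (an assumption of this illustration, not software used by the authors): the statistic is recomputed on samples drawn from the fitted null model.

import numpy as np
from sklearn.mixture import GaussianMixture

def lrt_stat(X, g0, g1, seed=0):
    """-2 log likelihood ratio for g0 versus g1 normal mixture components."""
    m0 = GaussianMixture(g0, n_init=5, random_state=seed).fit(X)
    m1 = GaussianMixture(g1, n_init=5, random_state=seed).fit(X)
    return 2.0 * (m1.score(X) - m0.score(X)) * len(X)   # score() is the mean log-likelihood

def bootstrap_p_value(X, g0=1, g1=2, B=99, seed=0):
    """Parametric-bootstrap P-value: resample from the fitted g0-component
    model and recompute the statistic under the null."""
    observed = lrt_stat(X, g0, g1, seed)
    null_fit = GaussianMixture(g0, n_init=5, random_state=seed).fit(X)
    count = 0
    for b in range(B):
        Xb, _ = null_fit.sample(len(X))
        count += lrt_stat(Xb, g0, g1, seed=b) >= observed
    return (1 + count) / (B + 1)

rng = np.random.RandomState(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
print(bootstrap_p_value(X))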

G.J. McLachlan, Department of Mathematics, The University of Queensland, Queensland 4072, Australia gjm@maths.uq.edu.au


INT/IMS Contributed: Classification and Discrimination


Discrimination with Curves

P.J. Brown, University of Kent, Canterbury, UK
T. Fearn, University College London, London, UK
M.S. Haque, University of Kent, Canterbury, UK

We tackle the problem of discrimination with a high dimensional vector of observations and a relatively small training set. For example there is interest in using the near-infrared spectrum of a wheat sample to identify the wheat as belonging to one of a number of varietal groups. Alternatively it may be important to identify microbiological taxa in the search for new pharmaceutical drugs using their NIR spectra. Standard approaches often break down in these high dimensional settings. We examine a variety of approaches both old and new, and especially Bayesian, to the problem of discrimination with curves.

Philip J. Brown, Institute of Mathematics and Statistics, Cornwallis Building, University of Kent at Canterbury, Canterbury, Kent CT2 7NF UK Philip.J.Brown@ukc.ac.uk


An Alternative Approach to Clustering

D.H. Liyana Arachchige, Royal Melbourne Institute of Technology, Australia

The inability of the commonly used agglomerative hierarchical clustering methods to capture a more complete view of the clusters in a data set compels the user of such methods to search for alternatives. To address this problem, an approach called 3-D Clustering is developed and its implementation is discussed. The method is illustrated with two examples.

D. H. Liyana Arachchige, Dept. of Statistics and Operations Research, Royal Melbourne Institute of Technology, G.P.O. Box 2476V, Melbourne, Australia 3001


Decision Trees for Classifying 2D and 3D Shapes from Intensity Data

Bruno Jedynak, INRIA Rocquencourt, France

We present experiments for classifying 2D and 3D shapes from intensity data using decision trees, and discuss various protocols involving multiple trees, randomization and supervised vs. unsupervised learning. This is joint work with Yali Amit, Donald Geman, Ken Wilder and François Fleuret. The basic methodology, common to all applications, is as follows: The array of pixel values is transformed into an array of codes (``tags'') for small sub-images, capturing the local topography of the image. There are no special or distinguished locations and the tags do not individually convey information about the shape class. Instead, all the discriminating power derives from spatial relationships among the tags, expressed by constraints on relative angles and distances, and hence invariant to many transformations, including translation, scaling, and local nonlinear perturbations. Decision trees are used to access this source of information. Multiple, randomized trees are grown using the queries as candidates for splitting rules. The complexity of the queries increases with tree depth. At each node, a small random sample of queries is investigated and the one which achieves the largest drop in the conditional entropy of the class label is chosen. Randomization leads to weak dependence between the trees. Each terminal node of each tree is labeled by an estimate of the corresponding conditional distribution over the classes. A test image is classified by adding together the terminal node distributions it reaches and taking the mode. Since tree-growing and parameter estimation are separated, the estimates can be refined indefinitely without reconstructing the trees. In addition, the queries may be chosen in an unsupervised mode, or chosen based only on some of the shape classes, and good error rates still maintained for all classes. The proposal and variations are tested and compared on artificially corrupted 2D shapes (LaTeX symbols), isolated handwritten digits (the rates achieved with the NIST database are comparable to the best reported elsewhere) as well as 3D solid objects.
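The final classification rule described above (add the terminal-node class distributions reached by a test image and take the mode) can be written compactly; the array below is a hypothetical stand-in for the distributions produced by a set of grown trees.

import numpy as np

def classify(leaf_distributions):
    """Combine the terminal-node class distributions reached by one test image
    in each of several randomized trees: add them and take the mode (argmax).

    leaf_distributions: array of shape (n_trees, n_classes), each row the
    estimated class distribution at the terminal node reached in one tree.
    """
    aggregate = np.asarray(leaf_distributions).sum(axis=0)
    return int(np.argmax(aggregate))

# Hypothetical example: three trees, four shape classes.
leaves = np.array([[0.1, 0.6, 0.2, 0.1],
                   [0.2, 0.5, 0.2, 0.1],
                   [0.3, 0.3, 0.3, 0.1]])
print(classify(leaves))   # -> 1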

Bruno M. Jedynak, INRIA Rocquencourt , B.P 105 78153, Le Chesnay Cedex France Bruno.Jedynak@inria.fr
http://www-rocq.inria.fr/syntim/research/jedynak.



Selection of Variables in Discriminant Analysis by Means of Cross Model Validation

Nelmarie Louw, University of the Western Cape, Bellville, South Africa
Niel J. le Roux, Sarel J. Steel, University of Stellenbosch, Stellenbosch, South Africa

The cross model validation technique proposed by Hjorth [{\em Computer Intensive Statistical Methods} (1994):24-53] for regression analysis is applied to discriminant analysis. The application of this method in the discriminant analysis context entails using the error rate as the variable selection criterion. To ensure unique model selection, a normally smoothed version of error rate estimation (Snapinn and Knoke [{\em Technometrics} 27 (1985):199-206]) is used. Cross model validation error rate estimates are obtained, as well as other error rate estimates calculated for the usual stepwise selection procedures. These error rates are compared with respect to bias and unconditional mean squared error. It is shown that the cross model validation method performs well.

Nelmarie Louw, Department of Statistics, University of the Western Cape, Private Bag X17, Bellville 7530, South Africa nelmarie@iafrica.com


Visualization and Implementation of Feedforward Neural Networks

Sigbert Klinke, Humboldt-University, Berlin, Germany
Janet Grassmann, DKFZ Heidelberg, Heidelberg, Germany

We first introduce feedforward neural networks (FNNs). These kinds of networks are often used as a black-box method for classification and regression. The aim is to understand how an FNN for a specific dataset works, and especially which variables have the most influence. We use non-metric multi-dimensional scaling (MDS) to visualize the geometry of the network. Since MDS is based on inter-point distances, we suggest two functions to transform the weights of the FNN into distances. We then describe our implementation of FNNs in {\tt XploRe} 3.2. It is based on four commands ({\tt NNINIT}, {\tt NNFUNC}, {\tt NNVISU} and {\tt NNUNIT}) and a macro named {\tt NN}. We apply our technique to the credit scoring data of Fahrmeier and Hamerle [{\it Multivariate statistische Verfahren}, de Gruyter (1981)] and examine different FNNs for this classification problem.
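A rough analogue of the visualization step can be sketched outside {\tt XploRe}: derive a dissimilarity matrix for the network's units from the fitted weights and embed it with non-metric MDS. The weight-to-distance transform used below ($d = \max |w| - |w|$ for connected units) is a hypothetical stand-in for the two transforms proposed in the paper, and the weights themselves are simulated.

import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 3, 1
W1 = rng.normal(size=(n_in, n_hid))        # input  -> hidden weights (illustrative)
W2 = rng.normal(size=(n_hid, n_out))       # hidden -> output weights (illustrative)

n = n_in + n_hid + n_out
absw = np.zeros((n, n))
absw[:n_in, n_in:n_in + n_hid] = np.abs(W1)
absw[n_in:n_in + n_hid, n_in + n_hid:] = np.abs(W2)
absw += absw.T                              # symmetric "strength of connection"

d = absw.max() - absw                       # strong connection -> small distance
np.fill_diagonal(d, 0.0)

coords = MDS(n_components=2, metric=False, dissimilarity="precomputed",
             random_state=0).fit_transform(d)
print(coords)                               # 2-D layout of the network's units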

Sigbert Klinke, Humboldt-University, Department of Economics, Institute of Statistics and Econometrics, Spandauer Str. 1, 10178 Berlin, Germany sigbert@wiwi.hu-berlin.de
http://www.wiwi.hu-berlin.de/~sigbert.



Training Noise in the Hopfield Neural Network

Lipo Wang, Deakin University, Melbourne, Australia

We present a statistical approach to the discrete Hopfield neural network in the presence of training noise. The memory capacity of the network, which is the number of patterns that the network can store and classify, is shown to be significantly reduced by a small amount of noise in training patterns. We have carried out extensive computer simulations to support our analytic theory. This result directly opposes that for the back-propagation neural network, whose classification performance can be enhanced by injecting noise in training patterns.
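A small simulation in the spirit of this result (not the analytic theory itself) stores random $\pm 1$ patterns by the Hebb rule, corrupts a fraction of the training bits, and checks how many stored patterns remain recoverable under a simple recall dynamic; all sizes and noise levels below are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def train_hopfield(patterns):
    """Hebbian weights for a discrete Hopfield network storing +/-1 patterns."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, sweeps=10):
    """Synchronous sign updates (a simple deterministic recall dynamic)."""
    for _ in range(sweeps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

def fraction_recalled(n_units=100, n_patterns=8, train_noise=0.0):
    patterns = rng.choice([-1, 1], size=(n_patterns, n_units))
    flip = rng.random(patterns.shape) < train_noise      # noise in the *training* set
    W = train_hopfield(np.where(flip, -patterns, patterns))
    ok = [np.array_equal(recall(W, p.copy()), p) for p in patterns]
    return np.mean(ok)

for noise in (0.0, 0.05, 0.10, 0.20):
    print(noise, fraction_recalled(train_noise=noise))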

Lipo Wang, School of Computing and Mathematics, Deakin University, 662 Blackburn Road, Clayton, Victoria 3168 Australia lwang@deakin.edu.au


INT/IMS Contributed: Topics in Statistical Inference II


Double Blind Deconvolution via Quasi-Profile Likelihood and Kernel Smoothing

D. S. Poskitt, K. Doǵan, S-H. Chung, Australian National University, Canberra, Australia

This paper is concerned with the analysis of observations made on a system that is being stimulated at fixed time intervals but where the precise nature and impact of any individual stimulus is unknown. The realized values are modelled as a stochastic process consisting of a random signal embedded in noise. The aim of the analysis is to use the data to unravel the unknown structure of the system and ascertain the probabilistic behaviour of the stimuli. A method of parameter estimation based on quasi-profile likelihood is presented and the statistical properties of the estimates are established whilst recognising that there will be a discrepancy between the model and the true data generating mechanism. A method of model validation and determination is also advanced and kernel smoothing techniques are proposed for identifying the amplitude distribution of the stimuli. The data processing techniques described have direct application to the investigation of excitatory postsynaptic currents recorded from nerve cells in the central nervous system and their use in quantal analysis of such data is illustrated.

D. S. Poskitt, Department of Statistics, The Australian National University, Canberra, A.C.T. 0200, Australia Don.Poskitt@anu.edu.au


Local Asymptotic Minimax Risk for Estimating a Density in the Sup-norm, Using Optimal Recovery Theory

Boris Y. Levit, Maarten Schipper, University of Utrecht, The Netherlands

We consider the problem of minimax estimation of a probability density function over the unbounded interval using a sup-norm loss function. Provided that the unknown density is in a given Hölder class, we give the exact asymptotics for the locally minimax risk for this estimation problem. Moreover, we present a locally efficient estimator for this model. This extends, in some respects, recent results by Korostelev and Nussbaum [{\it Technical Report, Weierstrass Institute, Berlin,} 1995]; the results are proved by direct methods which exploit the adaptive estimation of the bandwidth. Both the efficient estimator and the derivation of the lower bound use the solution of a corresponding problem in Optimal Recovery theory. Therefore this work can be viewed as a continuation in establishing the close relation between the theory of Optimal Recovery and nonparametric statistics (see Donoho [{\it Probab. Theory Relat. Fields} 99 (1994):145-170]).

Maarten Schipper, Budapestlaan 6, PO Box 80.010, 3508 TA, Utrecht, The Netherlands schipper@math.ruu.nl


Genetic Algorithms to Estimate Change Points for Non-Homogeneous Markov and Semi-Markov Manpower Models

Erin J. Montgomery, Sally I. McClean, University of Ulster, Jordanstown, N. Ireland

In a manpower environment, leaving patterns change over time. Previous work therefore has sought to develop estimation for non-homogeneous Markov and semi-Markov manpower planning models which allow for time-inhomogeneity and, in the semi-Markov case, for the probability of a transition from a particular grade to be dependent on the duration spent in the grade. Our current focus is on the use of maximum likelihood to estimate the parameters of two models which assume that patterns remain constant for a period of time, initiated and terminated by unknown change points. Maximum likelihood estimation is accomplished by first maximising the parameters over each time period (which all contain statistically incomplete data) by assuming that the change points are known. Genetic Algorithms are then used to estimate the change points. This problem is particularly suited to optimisation using genetic algorithms since the search space is typically multimodal and requires such a robust method.
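The genetic-algorithm step can be illustrated on a toy problem: searching for two change points that maximize a profile log-likelihood. The piecewise-constant Gaussian-mean fitness used below is a deliberately simple stand-in for the Markov and semi-Markov manpower likelihoods of the paper, and all tuning choices are illustrative.

import numpy as np

rng = np.random.default_rng(2)

# Toy data: a series whose mean shifts at two unknown change points (60 and 110).
y = np.concatenate([rng.normal(0, 1, 60), rng.normal(2, 1, 50), rng.normal(-1, 1, 70)])
T = len(y)

def fitness(cp):
    """Profile log-likelihood (up to constants) of a piecewise-constant
    Gaussian-mean model with change points cp = (t1, t2)."""
    t1, t2 = sorted(cp)
    if t1 < 5 or t2 - t1 < 5 or T - t2 < 5:
        return -np.inf
    return -sum(np.sum((seg - seg.mean()) ** 2)
                for seg in (y[:t1], y[t1:t2], y[t2:]))

def ga(pop_size=40, generations=60, mutation=0.3):
    pop = [tuple(sorted(rng.integers(5, T - 5, size=2))) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                      # selection of the fittest
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.choice(len(parents), size=2, replace=False)
            child = [parents[a][0], parents[b][1]]         # one-point crossover
            if rng.random() < mutation:                    # random-walk mutation
                child[rng.integers(2)] += int(rng.integers(-10, 11))
            child = [int(np.clip(c, 1, T - 1)) for c in child]
            children.append(tuple(sorted(child)))
        pop = parents + children
    return max(pop, key=fitness)

print(ga())   # typically lands near the true change points (60, 110)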

Erin Montgomery, School of Information and Software Engineering, Faculty of Informatics, University of Ulster, Jordanstown, Northern Ireland, BT37 0QB ej.montgomery@ulst.ac.uk


Multivariate Growth Curves: Estimation Methods and Covariance Structures

Ioannis G. Vlachonikolis, Vassilios G. S. Vasdekis, University of Loughborough, UK

The analysis of multivariate growth curves requires estimation of many parameters and presents problems associated with the performance of estimators, like Maximum Likelihood (ML) and Restricted Maximum Likelihood (REML). We adopt a model for which the covariance matrix of the repeated observations is generalised from Anderson [{\it Ann. Stat.} 14 (1986):405-417] and comprises two matrices $\Sigma _{1}$ (covariance matrix of measurements at each time point and common to all individuals) and $\Sigma _{2}$ (intraclass covariance component contributing to variability of each individual). Assuming at least positive semidefinite matrices $\Sigma _{1}$ and $\Sigma _{2}$ we obtain analytical expressions for ML and REML estimates of all parameters. While asymptotically equivalent, in small samples ML estimates are shown to be more often deficient and more biased than those of REML. Another aspect of the adopted model is that the number of random effects must be defined in advance. We show simulation results regarding the specification of the number of random effects and the effect it has on the efficiency of the regression estimates of the mean profiles.

Dr I. G. Vlachonikolis, Dept. of Mathematical Sciences, University of Loughborough, Loughborough LE113TU UK I.G.Vlachonikolis@lut.ac.uk


Detecting Multiple Outliers in One-way MANOVA

Wing K. Fung, University of Hong Kong, Hong Kong

The high-breakdown robust {\em S}-estimation method is proposed for the identification of multiple outliers in one-way multivariate analysis of variance. The method however may tend to detect too many observations as extreme. The adding-back approach of Fung [{\it J. Amer. Statist. Assoc.} 88 (1993):515-519] is employed for remedying this swamping effect problem. Some reference values are suggested for choosing between `good' and `bad' observations. The proposed method is used for analysing some real and simulated data sets. Satisfactory results are obtained.

Wing K. Fung, Dept of Stats, University of Hong Kong, Pokfulam Road, Hong Kong HRNTFWK@HKUCC.HKU.HK


A Quasi-likelihood Approach for Ordered Categorical Data with Overdispersion

You-Gan WANG, CSIRO Biometrics Unit, Queensland, Australia

Quasi-likelihood (QL) methods are often used to account for overdispersion in categorical data. This paper proposes a new way of constructing a QL function for dependent data that stems from the conditional-mean-variance relationship. Unlike the traditional QL approach to categorical data, this QL function is, in general, not a scaled version of the ordinary log-likelihood function. The estimating functions are optimal if the conditional-mean-variance relationship is correctly specified, and estimation is computationally more attractive compared with other approaches because the ``working matrix'' is diagonal. A simulation study is carried out to examine its performance, and the fish mortality data from quantal response experiments are used for illustration.

You-Gan Wang, CSIRO Biometrics Unit, P.O. Box 120, Cleveland, Queensland, Australia wan032@qld.ml.csiro.au


Robust Estimation of Extremes

Debbie J. Dupuis, Technical University of Nova Scotia, Halifax, Canada
Christopher A. Field, Dalhousie University, Halifax, Canada

The problem of making inferences about extreme values from a sample is considered. The underlying model distribution is the generalized extreme value distribution and the interest is in estimating the parameters and the quantiles of the distribution robustly. Robust estimation provides weights assigned to each observation and estimates of the parameters based on the data that are well modelled by the generalized extreme value density. It will also identify observations which are not consistent with the model density, giving an assessment of the validity of the model density for the data. The estimation techniques are based on optimal B-robust estimates. Their performance is compared to the probability-weighted moments of Hosking {\it et al.} [{\it Technometrics} 27 (1985):251-261].
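For orientation, both estimation methods target the parameters $(\mu ,\sigma ,\xi )$ of the generalized extreme value distribution, which in one common parameterization (stated here as standard background, not taken from the paper) has distribution function and quantiles
\[
G(x;\mu ,\sigma ,\xi ) = \exp \left\{ -\left[ 1+\xi \,\frac{x-\mu }{\sigma }\right] _{+}^{-1/\xi }\right\} , \qquad \sigma >0,
\]
\[
x_{p} = \mu + \frac{\sigma }{\xi }\left[ \bigl(-\log (1-p)\bigr)^{-\xi }-1\right] , \qquad G(x_{p}) = 1-p,
\]
with the Gumbel limit $G(x)=\exp \{-e^{-(x-\mu )/\sigma }\}$ as $\xi \rightarrow 0$.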

Debbie J. Dupuis, Department of Applied Mathematics, Technical University of Nova Scotia, P.O. Box 1000, Halifax, Nova Scotia, Canada B3J 2X4 dupuis@tuns.ca

Wednesday 10 July: 16:00-17:50

ASC Invited: SSAI Presidential Address


The Diversity of the Species

Helen L. MacGillivray, Queensland University of Technology, Brisbane Australia

Developments and evolution in the statistical sciences and their practice in research, industry, government and education, are linked, whether strongly or weakly, with developments and extensions of other areas, and with changes in aspects of government, industry, universities and schools. Increasing diversity in the work and jobs of statisticians does not, however, necessarily imply increasing differences. In many ways, recent developments and challenges for the future underline common concerns and interests across the ever-broadening spectrum of statistical science, and emphasize the importance of the communality of the statistical profession.

H.L. MacGillivray, School of Mathematics, QUT Gardens Point, GPO Box 2434, Brisbane, Q 4001 h.macgillivray@qut.edu.au


ASC Invited: All-day Workshop on SURVEY DESIGN AND ANALYSIS


Statistical Packages

Michael Adena, Robyn Attewell, Michael Jones, Intstat Australia Pty Ltd, ACT, Australia

Not all statistical software is suitable for analysis of surveys with stratified or clustered designs. This session will begin with an overview of statistical packages, with an emphasis on their capabilities for such surveys. Robyn and Mike will show how two specific packages, GENSTAT and SUDAAN, can be used to analyse such data.

Dr Michael Adena, Intstat Australia Pty Ltd

Thursday 11 July: 08:30-10:20

ASC Invited: Session in Celebration of Ted Hannan's Contributions to Time Series - I


Ted Hannan's Work on Time Series Regression and Adaptive Estimation

Peter M. Robinson, London School of Economics, UK

The work of E.J. Hannan on regression models is discussed. In the 1960s Hannan developed semiparametric estimates of linear regression and more general models that are efficient in the presence of nonparametric error autocorrelation. He proposed a frequency domain technique for reducing the effects of measurement error in regressions. Around the same time he also introduced an elegant class of consistent estimates for lagged regression. Hannan established asymptotic normality of such estimates. Among other work, Hannan gave central limit theory for least squares estimates of linear regression and other models under unusually mild conditions.

Peter M. Robinson, Economics Department, London School of Economics, Houghton Street, London WC2A 2AE, UK P.M.Robinson@lse.ac.uk


The Parametrization of State Space Forms via Balanced Canonical Forms

Dietmar Bauer, Manfred Deistler, Technische Universität Wien, Vienna, Austria

Hannan made an extensive study of identifiability and the algebraic and topological properties of ARMAX and state-space models. A linear state space system is called balanced if its observability Gramian and its controllability Gramian are equal and diagonal with the diagonal entries ordered by size. Balancing was introduced by Moore [{\it IEEE Trans. Autom. Control} AC-26 (1981):17-31] and balanced systems have a number of numerical advantages. Balancing does not uniquely prescribe representatives for the classes of observationally equivalent systems. Balanced canonical forms are described in Ober [{\it SIAM J. of Control and Optimization} 29 (1991):1251-1287]. Here we are concerned with the topological properties of parametrizations corresponding to this canonical form. In particular we show on which subsets the mapping attaching the parameters to the transfer function is continuous. These results are important for estimation, e.g. by the maximum likelihood method.

Professor Manfred Deistler, Institut für Ökonometrie, Operations Research und Systemtheorie, TU Wien, Argentinierstraße 8, A-1040 Vienna, Austria deistler@e119ws1.tuwien.ac.at


Functional Limit Theorems in Time Series Inference

William T.M. Dunsmuir, University of New South Wales, Australia

This paper will review the use of functional limit theorems in time series inference. Applications include estimation and testing for moving average parameters at or on the unit circle, testing the hypothesis that there is a common root in the autoregressive and moving average operators, and in establishing the central limit theorem for least absolute deviation estimation in time series regression. The use of functional limit theorems can provide an advantage over traditional asymptotic methods in time series inference in that they can handle non-standard situations and provide accurate asymptotic approximations for quite small sample sizes.

William T.M. Dunsmuir, School of Mathematics, University of New South Wales, Sydney, NSW, 2052, Australia W.Dunsmuir@unsw.edu.au


ASC-E Invited: All-day Workshop on ENVIRONMENTAL IMPACT ASSESSMENT


Wildlife and Fisheries Environmental Impact Assessment

Kenneth H. Pollock, Raleigh, USA

Assessing environmental impact on wildlife and fisheries is extremely difficult due to the large spatial scales involved. The requirements of traditional experimental design (control, randomisation and replication) are often difficult to meet. Alternatives involve Before-After-Control-Impact designs and modelling approaches. This talk discusses the strengths and weaknesses of the different approaches and illustrates them with examples involving wind farm/eagle interactions and the effect of hunting on wild turkeys.

Kenneth H. Pollock, Dept of Statistics, Box 8203, NCSU, Raleigh, NC 27695,USA


Modelling and Analysing Outliers in Spatial Data (with Particular Reference to Environmental Problems)

Ronit Nirel, The Hebrew University of Jerusalem, Israel
Moira A. Mugglestone, IACR-Rothamsted, Harpenden, U.K
Vic Barnett, Nottingham University, U.K

Environmental data often contain outliers resulting from error, natural variability or some other form of contamination. Outliers can affect estimation of second-order properties such as autocovariance function (ACF) and spectral density function (SDF). This paper concerns environmental data which take the form of continuous measurements on a rectangular grid. Flexible models for different configurations of contamination are introduced, including `isolated' and `patchy' types of outliers. The bias induced by contamination in the estimation of the ACF and SDF will be analysed and a bias-robust (non-parametric) estimation procedure will be introduced. The method is seen to compare favourably with other estimates under no contamination or isolated contamination models, with the advantage of being robust to patchy contamination. The procedure will be illustrated with simulated data and with data of heavy metal pollutants from the National Soil Inventory of England and Wales.

Ronit Nirel, Department of Statistics, The Hebrew University of Jerusalem, Mt. Scopus, Jerusalem 91905, Israel msronit@olive.mscc.huji.ac.il


Non-Gaussian Modelling of Spatial and Spatio-temporal Environmental Processes

Stuart G. Coles, Lancaster University, Lancaster, U.K

The standard problem in spatial smoothing is to estimate the value of an underlying spatial surface, $S(x)$, given a set of noisy observations, $Y(x_1),\ldots ,Y(x_n)$. If the errors are normally distributed and it is a linear function of $S$ which is of interest, then the problem has a simple closed-form solution which is generally referred to as kriging. This solution is optimal in the sense of minimizing expected mean-square-error. Recently, Diggle, Moyeed and Tawn (1996, submitted) have proposed a technique based on Markov Chain Monte Carlo methodology which extends this smoothing procedure to processes which are marginally non-Gaussian, while preserving the notion of an underlying Gaussian process to model spatial dependence. In this paper we will generalize the procedure for two different environmental applications: first to the problem of spatially mapping a number of different radioactive contaminants whose concentrations are correlated; and secondly to the space-time modelling of hurricane pressure fields as an input for extreme value analysis.
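For orientation, the closed-form solution referred to above is, in its simplest form (a known constant mean $\mu $, covariance matrix $C$ of $S$ at the observation sites, covariance vector $c(x_0)$ between $S(x_0)$ and the observed sites, and independent Gaussian noise of variance $\tau ^2$; the notation is illustrative rather than the paper's), the kriging predictor
\[
\hat{S}(x_0) = \mu + c(x_0)^{T}\bigl(C+\tau ^{2}I\bigr)^{-1}\bigl(Y-\mu \mathbf{1}\bigr),
\]
which minimizes the expected mean-square prediction error. The MCMC-based approach extends this by keeping the latent Gaussian surface $S$ while allowing a non-Gaussian observation model for $Y$.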

Stuart Coles, Department of Mathematics and Statistics, Lancaster University, Lancaster, U.K. s.coles@lancs.ac.uk


ASC Invited: Stochastic Networks


Some Recent Developments for Queueing Networks

Ruth J. Williams, University of California at San Diego, La Jolla, USA

Queueing network models are of current relevance for analyzing congestion and delay in computer systems, communication networks and complex manufacturing systems. Early investigations in queueing theory provided detailed analysis of the behavior of a single queue and of networks that could be decomposed into a product of single queues. Whilst insights from these early investigations are still used, more recent investigations have focussed on understanding how network components interact. In the last few years there have been some surprises, especially with regard to the stability and heavy traffic behavior of multiclass queueing networks with feedback. This talk will give some historical perspective on the theory of queueing networks, with the main emphasis on recent developments involving the probabilistic analysis of these networks.

Ruth J. Williams, Department of Mathematics, UCSD, 9500 Gilman Drive, La Jolla CA, 92093-0112, USA williams@math.ucsd.edu
http://math.ucsd.edu/~williams.



Estimating Blocking Probabilities in Telecommunications Networks with Linear Structure

M.S. Bebbington, Massey University, Palmerston North, New Zealand
P.K. Pollett, The University of Queensland, Queensland, Australia
I. Ziedins, The University of Auckland, Auckland, New Zealand

Telecommunications networks can be modelled as collections of links, on which are defined routes, each route being offered traffic at a given rate. The traditional performance measure in these situations is the blocking probability, for which a widely used approximation is the Erlang Fixed Point (EFP). This assumes that links block independently of one another, and performs well in situations where both the link capacities and offered traffics are large, or where the number of routes is much larger than the number of links. Since there are many situations far from either of these ideals, it is important to have a more accurate method for calculating blocking, and some idea of the error in the EFP. We shall consider these issues by taking a simple situation in which the EFP is expected to perform poorly, a ring network with one and two-link traffic, where the link capacities are small. By allowing for dependencies between neighbouring links, we can construct an approximation for the blocking probabilities with error typically $10^{-5}$ of that found using the EFP. This can be extended to cases where admission controls, such as trunk reservation, are used.
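The Erlang Fixed Point baseline against which the improved approximation is measured can be sketched directly: each link is offered a `reduced load' computed as if the other links blocked independently, and its blocking probability is then the Erlang B formula evaluated at that load, iterated to a fixed point. The small ring network below is a hypothetical example; the dependence-aware approximation of the paper is not reproduced.

import numpy as np

def erlang_b(load, capacity):
    """Erlang B blocking probability, computed by the stable recursion."""
    b = 1.0
    for k in range(1, capacity + 1):
        b = load * b / (k + load * b)
    return b

def erlang_fixed_point(capacities, routes, rates, iterations=200):
    """EFP approximation: links are assumed to block independently.
    routes[r] is the list of links used by route r, offered traffic rates[r]."""
    B = np.zeros(len(capacities))
    for _ in range(iterations):
        reduced = np.zeros(len(capacities))
        for r, links in enumerate(routes):
            for j in links:
                thinning = np.prod([1.0 - B[k] for k in links if k != j])
                reduced[j] += rates[r] * thinning     # reduced load offered to link j
        B = np.array([erlang_b(reduced[j], capacities[j])
                      for j in range(len(capacities))])
    return B

# Hypothetical 4-link ring with single-link and adjacent two-link routes.
capacities = [5, 5, 5, 5]
routes = [[0], [1], [2], [3], [0, 1], [1, 2], [2, 3], [3, 0]]
rates = [2.0] * 4 + [1.0] * 4
print(erlang_fixed_point(capacities, routes, rates))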

P.K. Pollett, Department of Mathematics, The University of Queensland, Queensland 4072, Australia pkp@maths.uq.oz.au
http://www.maths.uq.oz.au/~pkp/publist.html.



IMS Invited: Curve Estimation and Modelling - I


Adaptive Selection, Neural Networks, and Information Theory in Nonparametric Estimation

Andrew Barron, Yale University, USA

Increasingly simple tools from statistics and information theory are available to assess the capabilities for inference in nonparametric classes. Armed with these tools we address the following questions. (1) Adaptability: What statistical performance is possible in the absence of prior knowledge of the best approximating class? Can model selection criteria provide optimal inference over multiple function classes? (2) What is the role of neural networks and related nonlinearly parameterized models?


Wavelets in Curve Estimation and Modeling

Mary Ellen Bock, Purdue University, Indiana, USA

This computationally intensive method of fitting curves to data provides both flexibility and insight into underlying patterns. An introduction to some of the simpler wavelet techniques is given with special emphasis on Daubechies D4 wavelet for illustration. Graphical data examples provide insight for the viewer.

Mary Ellen Bock, 1399 Mathematical Sciences Building, Purdue University, West Lafayette, IN, 47907-1399, USA mbock@stat.purdue.edu


Limit Theory for a Survival Function Estimator

Charles Heilig, Hina Malani, Deborah Nolan, University of California, Berkeley, USA

This paper considers the asymptotic properties of a Kaplan-Meier-type estimator based on a modification of the redistribution to the right algorithm proposed by Malani [{\it Biometrika} 82 (1995):515-526]. The algorithm uses disease markers in the redistribution of censored observations. That is, the weight associated with a censored observation, at the time of censoring, is redistributed to those individuals in the risk set who have the same or similar marker values. The large sample properties of this estimator are examined here through functional limit theory for the U-statistic process.

Deborah Nolan, U.C. Berkeley, Statistics Department, 367 Evans Hall \# 3860, Berkeley, CA 94720-3860, USA nolan@stat.berkeley.edu


ASC Contributed: Topics in Genetics and Bayesian Analysis


Interpreting DNA Mixtures

Christopher M. Triggs, University of Auckland, New Zealand

Genetic markers are widely used in forensic science both to eliminate and to provide powerful evidence against suspects. However the interpretation of genetic profiles from recovered samples requires care when the sample contains genetic material from more than one person. This is especially common in rape cases when the sample may contain material from the victim or consensual partners as well as from the perpetrator or perpetrators. The statistician's task is to assign numerical weight to evidence of this kind. This talk discusses some examples of such mixture evidence and the problems that can arise in their interpretation.

Christopher M. Triggs, Department of Statistics, University of Auckland, Private Bag 92019, Auckland, New Zealand triggs@stat.auckland.ac.nz
http://www.stat.auckland.ac.nz/reports/report96/stat9601.html.



Bayesian Spatial Analysis of 2D Binary Data on a Lattice: Applied to Dingoes, Toads, and Kangaroos

Samantha J. Low Choy, A. N. Pettitt, Queensland University of Technology, Brisbane, Australia

In this paper we concentrate on estimation problems for binary data observed on a rectangular lattice. Such data are often recorded as simple absence/presence information, and arise naturally in many application areas, particularly biogeography and image analysis. We use a simple model incorporating information from nearest neighbours which is a reparameterization of the well-known Ising model, and we extend this model to allow for anisotropic spatial relationships. We present a Bayesian approach to analysis as an alternative to coding methods (Besag [{\it J. Roy. Statist. Soc. Ser. B} 36 (1974):192-236]), maximum likelihood (Pickard [{\it J. Amer. Statist. Assoc.} 82 (1987):90-96]), pseudo-likelihood (Besag [{\it J. Roy. Statist. Soc. Ser. B} 48 (1986):259-302]; Heikkinen & Högmander [{\it Appl. Statist.} 43, No. 4 (1994):569-582]), and others. (See Dubes & Jain [{\it J. Appl. Statist.} 16 (1989):131-164] for an overview.) MCMC methods are harnessed within the Bayesian modelling framework to estimate the posterior distributions of the model parameters. Sensitivity of the posteriors to the choice of prior distributions is investigated.

Samantha Low Choy, School of Mathematics, QUT, GPO Box 2434, Brisbane, Qld 4001. Australia s.lowchoy@fsc.qut.edu.au


Information and Clone Mapping of Chromosomes

Bin Yu, University of California, Berkeley, USA

The primary goal of the Human Genome Project is to sequence the entire human genome, which consists of about $3 \times 10^9$ base pairs (bp) of DNA. Current technology only permits sequencing of fragments of the order of a few hundred to 1000 base pairs of DNA in a single reaction. Consequently much effort is devoted to fragmenting large DNA molecules such as chromosomes, in such a way that the sequenced fragments can be readily assembled. Clone maps, which are one form of physical mapping, play a key role in this process, as well as providing a resource permitting the detailed study of chromosomal regions of biological interest. To be more precise, a clone map of part or all of a chromosome is the result of organizing order and overlap information concerning collections of DNA fragments called clone libraries. In this talk, the expected amount of information (entropy) needed to create such a map is discussed. A number of different formalizations of the notion of a clone map are considered, and for each, exact or approximate expressions or bounds for the associated entropy are calculated. Based on these bounds, comparisons are made for four species of the entropies associated with the mapping of their respective cosmid clone libraries. All the entropies have the same first order term ($N \log_2 N$ when the clone library size $N \rightarrow \infty $) as that obtained by Lehrach et al. (1990).

Bin Yu, Department of Statistics, University of California at Berkeley, USA binyu@stat.berkeley.edu


Simulation Methods for the Time Since ``Adam'' and ``Eve''

David J. BALDING, Queen Mary & Westfield College, University of London, UK, and University of New South Wales, Australia

Recent advances in molecular genetic technology have allowed the possibility of using genetic data to estimate the time since the most recent common ancestor of living humans. Although large amounts of data are now available, the underlying processes which have generated the data are very complicated and consequently analysis is difficult. Several analyses which have appeared in the literature are flawed. We present simple simulation methods for approximate inference. These methods are flexible enough to permit the investigation of robustness to some modelling assumptions, in particular those concerning assumptions about parameter values. The complexity of human evolution means that it remains difficult to make precise inferences, but some useful conclusions can be drawn.

Dr David Balding, School of Mathematical Sciences, Queen Mary & Westfield College, Mile End Road, London E1 4NS, UK. d.j.balding@qmw.ac.uk


Selfish Genes or Shuffled Genes?

Graham R. Wood, Central Queensland University, Rockhampton, Australia
D. Ross Boswell, Christchurch School of Medicine, Christchurch, New Zealand

Statistical ideas have recently been used to shed light on a central debate in the field of evolutionary biology. There are two dominant theories of gene formation: the selfish gene theory of Dawkins and the gene shuffling theory of Gilbert. Differing distributions of sizes of coding gene segments, termed exons, are predicted by these theories. The size distribution of exons has been extracted from the gene sequence library and statistical ideas used to show that both theories may in part be correct.

Professor G.R. Wood, Department of Mathematics and Computing, Central Queensland University, Rockhampton, QLD 4702, Australia g.wood@cqu.edu.au


Bayesian Double Sampling Plans with Normal Distributions

Yeh Lam, Chi Van Lam, The Chinese University of Hong Kong, Shatin, Hong Kong

Lam [{\it Scientia Sinica Ser. A} 31 (1988):129-140; {\it Biometrika} 75 (1988):387-391] and Lam and Lau [{\it Commun. Statist. Simulation Comput.} 22 (1993):371-386] studied a Bayesian single variable sampling plan. In this paper, we generalize this work by considering a Bayesian double variable sampling plan. Assume that the quality of an item is measured by a continuous random variable $X$, that $X$ has a normal distribution $N(m,\tau ^{2})$, and that $m$ has a normal prior distribution $N(\mu ,\sigma ^{2})$, where $\tau ^{2}$, $\mu $ and $\sigma ^{2}$ are known. A random sample $\underline {X}_{1} = (X_{1},\ldots ,X_{n_{1}})$ of size $n_{1}$ is taken; the batch is accepted if the sample mean $\overline {X}_{1} = \sum _{i=1}^{n_{1}} X_{i}/n_{1}$ is close to the standard value, and the batch is rejected if $\overline {X}_{1}$ is far away from the standard value; in the intermediate case, a second random sample of size $n_{2}$ is taken. Assume further that the loss function is a polynomial function. An explicit expression for the Bayes risk is derived. Upper bounds for the optimal sizes of the first and second samples are determined, and a finite algorithm is then suggested for determining an optimal double sampling plan in a finite number of search steps.

Yeh Lam, Dept. of Statistics, The Chinese University of Hong Kong, Shatin NT Hong Kong ylam@cuhk.edu.hk


Bayesian Estimation Using Ranked Set Sampling

M. Fraiwan Al-Saleh, Hassen A. Muttlak, Deakin University, Geelong, Australia

Ranked set sampling (RSS), as suggested by McIntyre (1952) and Takahasi and Wakimoto (1968), may be used in Bayesian estimation to reduce the Bayes risk. Bayesian estimation based on a ranked set sample is investigated for a large class of distributions. We examine the Bayes risk of the Bayes estimator using RSS. It appears that the Bayes risk of the Bayes estimator using RSS is smaller than that of the corresponding Bayes estimator using a simple random sample (SRS) of the same size.
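How a balanced ranked set sample is drawn (as opposed to the Bayes-risk calculations themselves, which are not reproduced here) can be sketched as follows; ranking is done on the simulated values themselves, which corresponds to perfect judgment ranking, and the distribution is illustrative.

import numpy as np

rng = np.random.default_rng(3)

def ranked_set_sample(draw, set_size, cycles):
    """Balanced ranked set sample: in each cycle, for each rank i, draw a set
    of `set_size` units, rank them, and keep only the i-th order statistic."""
    rss = []
    for _ in range(cycles):
        for i in range(set_size):
            group = np.sort(draw(set_size))
            rss.append(group[i])
    return np.array(rss)

draw = lambda n: rng.normal(loc=10.0, scale=2.0, size=n)
rss = ranked_set_sample(draw, set_size=4, cycles=25)     # 100 measured units
srs = draw(100)
print(rss.mean(), srs.mean())          # the RSS mean is typically less variable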

Hassen A. Muttlak, School of Computing and Mathematics, Deakin University, Geelong Vic 3217, Australia muttlak@deakin.edu.au


ASC Contributed: Design of Experiments


Designing Experiments for Dependent Processes

Neil T. Diamond, Victoria University of Technology, Melbourne, Australia

Fisher's invention of randomization and blocking made it possible to run valid and informative experiments on a non-stationary process (Box [{\it Qual. Eng.} 2 (1990):497-502]). Without randomization, estimated effects will have heterogeneous variances and will be correlated (Saunders and Eccleston [{\it Austral. J. Statist.} 34 (1992):77-90]). If an IMA(0,1,1) model is appropriate then it is shown that with blocks of size 2 all within-block contrasts are uncorrelated and have the minimum possible variance. Comparisons of variances of within-block and between-block contrasts are made for different blocking arrangements, and methods for determining posterior probabilities that contrasts are real (Box and Meyer [{\it Technometrics} 28 (1986):1-26]) are extended.

Neil Diamond, Victoria University of Technology, P.O. Box 14428, MCMC, Melbourne, 8001 ntd@matilda.vut.edu.au
http://matilda.vut.edu.au/pub/papers/ntd/dep_proc.ps.



Use of Geostatistical Methods in the Experimental Design of Field Trials

Annette K. Ersbøll, The Royal Veterinary and Agricultural University, Denmark

In the classical analysis of field experiments, the significance of a given treatment effect for a fixed design depends largely on the amount of variation within the area used. As the heterogeneity of measurements in the field increases (due to e.g.\ soil variations), the precision of the test statistics decreases. Among different experimental designs, the most appropriate one is usually chosen with respect to the effect under consideration (e.g.\ a split-plot design versus a randomized block design). An optimal design is less often chosen on the basis of the spatial variation in the field due to e.g.\ soil heterogeneity. An alternative approach to the design of field experiments is described, using the spatial variation between plots of different size and shape to compare the residual variation of different designs. The spatial dependence between plots is introduced by using the semivariogram to describe the residual variation. The consequence of changing plot size, shape and orientation in the block can then be estimated. Of particular interest is the application of the approach to plot dimensions which are not constrained by the form of the original uniformity trial.

Annette K. Ersbøll, CSIRO, Biometrics Unit (IPPP), Private Bag, Wembley WA 6014, Australia ae@dina.kvl.dk


Misspecification Robust Design

Donal P. Krouse, Industrial Research Limited, Wellington, New Zealand

Factorial designs are commonly used to estimate a polynomial regression of a response in terms of controllable factors. Assuming no bias and a scalar error covariance, Gauss-Markoff theory shows that ordinary least squares estimators are best linear unbiased. Here we consider the robustness of the ordinary least squares estimator to both model and error misspecification. By modelling bias as a smooth realization of a Gaussian process, both systematic and random departures from the null model can be specified within a class of {\it patterned} covariance matrices. We extend some of the results for two-level fractional factorials in Krouse [{\it Commun. Statist. - Theory Meth.} 23 (1994):3285-3301] to designs with more than two levels.

Donal P. Krouse, The New Zealand Institute for Industrial Research and Development, Box 31-310, Lower Hutt, New Zealand d.krouse@irl.cri.nz


Coverage Designs for Software Testing

Siddhartha R. Dalal, Bellcore, Morristown, USA
Colin L. Mallows, AT&T Bell Labs, Murray Hill, USA

The problem of designing a batch of tests for a large software product is superficially similar to that of designing an experiment to estimate main effects and interactions. In fact classical designs have been proposed for this purpose. However there are two crucial differences: replication is unhelpful, and therefore wasteful, and there is no real-valued response to be measured; the result of a test run is either ``OK'' (in which case nothing needs to be done) or ``failure'' (in which case the run must be analysed to determine the reason for the failure). The usual log-linear models are irrelevant and new design criteria are needed. We suggest that the challenge is simply to cover the relevant design space as completely as possible. We study several classes of designs.
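One elementary way of `covering the design space' -- a greedy heuristic for covering all two-factor level combinations, not the authors' constructions -- is sketched below: repeatedly choose, among a handful of random candidate test runs, the one covering the most still-uncovered pairs. All inputs are hypothetical.

import itertools
import random

def greedy_pairwise_tests(levels, candidates=50, seed=0):
    """Greedily build a test suite covering every pair of factor levels.
    levels[f] is the number of levels of factor f."""
    rnd = random.Random(seed)
    factors = range(len(levels))
    pairs = lambda t: {(f1, t[f1], f2, t[f2])
                       for f1, f2 in itertools.combinations(factors, 2)}
    uncovered = {(f1, l1, f2, l2)
                 for f1, f2 in itertools.combinations(factors, 2)
                 for l1 in range(levels[f1]) for l2 in range(levels[f2])}
    suite = []
    while uncovered:
        best, best_gain = None, -1
        for _ in range(candidates):
            test = tuple(rnd.randrange(levels[f]) for f in factors)
            gain = len(pairs(test) & uncovered)
            if gain > best_gain:
                best, best_gain = test, gain
        if best_gain == 0:                      # force progress on a leftover pair
            f1, l1, f2, l2 = next(iter(uncovered))
            t = [rnd.randrange(levels[f]) for f in factors]
            t[f1], t[f2] = l1, l2
            best = tuple(t)
        suite.append(best)
        uncovered -= pairs(best)
    return suite

tests = greedy_pairwise_tests([3, 3, 4, 2, 2])
print(len(tests), "runs instead of the full", 3 * 3 * 4 * 2 * 2)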

Colin Mallows, AT&T Bell Labs, Murray Hill, NJ, USA 07974 clm@research.att.com


Multitiered Experiments and their Analysis

Christopher J. Brien, University of South Australia, Adelaide, Australia

Multitiered experiments [Brien, {\it Biometrics} 39 (1983):51-9] include two-phase, superimposed and some plant and animal experiments. Their mixed model analysis is examined using an example; the method employed guarantees separate terms in the linear model and separate sources in the analysis of variance table for each of several confounded effects. As Harville [{\it J. Amer. Stat. Assoc.} 86 (1991):812-15] asserts, the custom of representing the sum of confounded effects by a single term or source is overly-restrictive and confusing. To ensure there is a term in the model and a source in the table for each confounded effect, it is essential that the randomization employed in the experiment is displayed in the table. We show that this can be achieved by building up the analysis in stages corresponding to the randomization. It will be demonstrated how the analysis can be achieved with software that performs conventional mixed model analyses, such as Minitab.

Dr. C.J. Brien, School of Mathematics, University of South Australia, North Terrace, Adelaide 5000, Australia chris.brien@unisa.edu.au
http://phoenix.levels.unisa.edu.au/staff/brien.c.j./home.html.



ASC/IMS Contributed: Topics in Statistical Inference III


The Behaviour of the Maximum Likelihood Estimator as a Process and Some Applications

Robert M. Loynes, University of Sheffield, England

Given a set of observations, supposedly either independent and identically distributed or from a stationary AR process, whose distribution contains a fixed-dimension unknown parameter, the behaviour of the maximum-likelihood estimator (MLE) as a function of the number of observations used contains evidence about whether the model assumptions are satisfied, or whether a change of regime or drift is taking place. A weak convergence result for the process of MLEs is given, which allows various tests to be constructed.

Prof R.M. Loynes, University of Sheffield, Probability & Statistics Section, School of Maths & Stats, Sheffield S3 7RH UK r.loynes@sheffield.ac.uk


Comparing Groups with Irregular Longitudinal Data

J. S. Maritz, Medical Research Council, South Africa

Longitudinal data arise when observations of a dependent variable are made at several successive time points. When such data are recorded for a number of subjects it often happens that the time configuration varies from subject to subject, producing irregular longitudinal data. Comparison of two or more groups of subjects is considered using exact permutational methods. This entails choosing appropriate descriptive and test statistics and generating their exact distributions.

J. S. Maritz, MRC-CERSA, PO Box 19070, Tygerberg 7505, South Africa SMARITZ@eagle.mrc.ac.za


Repeated Ordinal Responses

Rory St John Wolfe, Southampton University, UK

An approach to modelling repeated ordinal responses is discussed. This involves using `scaling' terms in a cumulative logit model [McCullagh, {\it J. Roy. Statist. Soc. Ser. B} 42 (1980):109-142]. The approach is applied to data from telecommunication experiments. A new general purpose method of fitting the model in GLIM4 is introduced. Finally the consideration of a random-effects model is discussed.

Rory Wolfe, Maths Department, Southampton University, Highfield, Southampton, SO17 1BJ, UK rw@maths.soton.ac.uk


On Minimum Distance Estimation of Location Parameter for Interval Censored Data

Vasudaven MANGALAM, Curtin University, Perth, Australia

Let $X_1, X_2, \ldots , X_n$ be i.i.d. with distribution function given by $F(x-a)$, where $F$ is an unknown symmetric distribution and $a$ is an unknown location parameter. $T_1, T_2, \ldots , T_n$ are i.i.d. and independent of the $X_i$'s with unknown distribution $G$. We observe $(T_i,d_i)$, $i=1,\ldots ,n$, where $d_i$ is the indicator of whether $X_i$ is less than or equal to $T_i$. A minimum distance estimator of the location parameter is constructed and its properties are studied. A two-sample extension is also considered.

Vasudaven Mangalam, School of Mathematics and Statistics, Curtin University of Technology, GPO Box U1987, Perth WA 6001 vasu@cs.curtin.edu.au


Variance of the MLE of a Survival Function with Interval Censored Data

Qiqing Yu, SUNY at Binghamton
Linxiong Li, University of New Orleans
George Y. Wong, Strang Cancer Preventive Institute

Interval-censored data consist of $n$ pairs of observations $(l_i,r_i)$, $i=1,\ldots ,n$, where $l_i \le r_i$. We either observe the exact survival time $X$ if $l_i=r_i$, or only know that $X \in (l_i,r_i)$ otherwise. We establish the asymptotic normality of the nonparametric MLE of a survival function $S(t)\ (=P(X>t))$ with such interval-censored data and present an estimate of the asymptotic variance of the MLE. We show that the rate of convergence in distribution is $\sqrt{n}$. A simulation study also supports our result. An application to cancer research is presented.
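For background, the nonparametric MLE whose asymptotics are studied here is usually computed by a self-consistency (EM) iteration; the sketch below is that standard iteration in a simplified form (mass restricted to the observed finite endpoints, closed-interval convention, hypothetical data), not the authors' variance estimator.

import numpy as np

def npmle_interval_censored(L, R, n_iter=500, tol=1e-8):
    """Self-consistency (EM) iteration for the NPMLE of the event-time
    distribution from intervals [l_i, r_i] known to contain the event time
    (l_i = r_i for an exact observation)."""
    L, R = np.asarray(L, float), np.asarray(R, float)
    support = np.unique(np.concatenate([L[np.isfinite(L)], R[np.isfinite(R)]]))
    # alpha[i, j] = 1 if support point j is a possible event time for subject i
    alpha = ((support[None, :] >= L[:, None] - 1e-12) &
             (support[None, :] <= R[:, None] + 1e-12)).astype(float)
    p = np.full(len(support), 1.0 / len(support))
    for _ in range(n_iter):
        denom = alpha @ p                      # probability of each observed interval
        mu = alpha * p / denom[:, None]        # E-step: expected mass per subject
        p_new = mu.mean(axis=0)                # M-step
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    surv = 1.0 - np.cumsum(p)                  # S(t) at the support points
    return support, p, surv

# Hypothetical data: exact times have l == r, otherwise only an interval is known.
L = [1.0, 2.0, 0.0, 3.0, 1.5]
R = [1.0, 4.0, 2.5, 3.0, np.inf]
print(npmle_interval_censored(L, R))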

Qiqing Yu, Math Department, SUNY at Binghamton, NY 13902, USA qyu@math.binghamton.edu


On the Relationship Between Power of a Test and Shape of its Critical Region

Daryl W. Tingley, Maureen A. Tingley, University of New Brunswick, Fredericton, NB, Canada

With the Neyman-Pearson Lemma as yard-stick for comparisons, the relationship is investigated between test power and shape of critical region, as test size approaches zero. Measure-theoretic definitions are used to quantify the notion of similar versus dissimilar critical regions. Results are obtained for two extremes of limiting power: power approaching that of Neyman-Pearson, and power negligible when compared with Neyman-Pearson, as test size decreases. Examples illustrate that small test sizes are not practical when sample estimates replace values of nuisance parameters.

Maureen Tingley, Dept of Math and Stat, University of New Brunswick, Box 4400 Fredericton, NB, Canada E3B 5A3 maureen@math.unb.ca


A Generalisation of Cochran's Theorem and Its Applications in the Analysis of Variance of Repeated Measures

Júlia T. Fukushima, São Paulo University
Regina C. C. P. Moran, State University of Campinas
Ioannis G. Vlachonikolis, Loughborough University of Technology, UK

Cochran's theorem and its many corollaries and interrelationships have played a most prominent role in statistics. The present work attempts to formulate a unified approach by means of two theorems on necessary and sufficient conditions under which the sums of squares of the various hierarchical layers in ANOVA are distributed like multiples of chi-square variables. Some new results concerning the standard univariate $F$-tests in the analysis of repeated measurements are derived as a special case.

Ioannis G. Vlachonikolis, University of Loughborough, Department of Mathematical Sciences, Loughborough, LEICS LE113TU, UK I.G.VLACHONIKOLIS@LUT.AC.UK

Thursday 11 July: 10:30-12:20

ASC Invited: Session in Celebration of Ted Hannan's Contributions to Time Series - II


Estimation of Speed, Direction and Structure from Spatial Array Data

David R. Brillinger, University of California, Berkeley, USA

The contributions of E.J. Hannan and his collaborators to the problem of the estimation of speed, direction and structure from spatial array data will be reviewed. One has in mind data such as earthquake signals recorded as the energy passes across an array of seismometers. From such data one can estimate parameters of the earthquake and also of the medium through which the signal is passing. Work on the topic by some other contributors will also be mentioned as well as some personal research.

David R. Brillinger, Statistics Department, University of California, Berkeley, CA 94720 brill@stat.berkeley.edu


Kriging or Splines, Nonparametrics or Time Series: When to Use Each

Victor Solo, Macquarie University, Sydney, Australia

In spatial statistics there has for some time raged a debate between adherents of smoothing splines versus adherents of Kriging. The Krigers give examples where Kriging seems to outperform smoothing splines while the `spliners' give theory showing that additional parameters used in Kriging models cannot be consistently estimated. There has emerged more quietly a one dimensional version of this argument with some researchers using nonparametric regression methods on data traditionally treated as time series. We take a step towards resolving the debate by giving a new kind of asymptotic theory that shows when each method is appropriate.

Prof. Victor Solo, Dept Statistics, Macquarie University, Sydney NSW 2109, Australia vsolo@zen.efs.mq.edu.au


On the Estimation of Trend and Seasonal Patterns

Peter J. Thomson, Victoria University, Wellington, New Zealand

Many time series consist of a seasonal cycle superimposed on a smooth slowly changing trend together with noise. The need to extract these components, particularly the trend, has led to many empirical and model-based seasonal adjustment and trend estimation procedures. This paper considers some trend and seasonal models designed to underpin such procedures. In particular, a flexible family of finite moving average trend filters is developed. These are derived from specified smoothness and fidelity criteria and are based on local dynamic models operating within the span of the filter. Seasonal models originally proposed by Professor E.J. Hannan will also be reviewed as well as some of his other contributions to the seasonal adjustment literature.

Dr Peter Thomson, ISOR, Victoria University, PO Box 600, Wellington, New Zealand peter@isor.vuw.ac.nz


ASC-E Invited: All-day Workshop on ENVIRONMENTAL IMPACT ASSESSMENT


Identification of Large Scale Hydrologic Systems

A.J. Jakeman, ANU, Canberra, Australia

Separation of precipitation incident upon a catchment into evapotranspiration and stream discharge losses, groundwater accessions and change in catchment storage is a central problem in hydrology. The need for appropriate separation procedures has been accentuated by the desire to predict the effects of climatic and land use changes on water supply, water quality and associated ecological responses, as well as the need to improve hydrological components of General Circulation Models and Regional Climate Models. Many of the available models work quite well for a limited range of purposes. For example, most models can fit streamflow data tolerably and forecast a few time steps ahead accurately, but suffer substantial deterioration in performance when used to predict on independent periods not used for the model parameter calibration. Issues that need more attention, and which are discussed here, include: estimation of areal precipitation using knowledge of event climatology and terrain elevation; volume estimation of snow accumulation and melt; accurate prediction of daily stream discharge and evapotranspiration, given an input time series of precipitation, temperature and other available climatic data, and its sensitivity to model parameters; prediction of the diurnal pattern of evapotranspiration; better appreciation of the appropriate scale(s) (for different purposes and in different landscapes) at which to model the heterogeneity in the hydrological response of the land surface to atmospheric forcing; and more informed approaches, including regionalisation with landscape attributes, for separating the water balance in ungauged catchments or predicting balance changes due to variations in land cover and use.

A.J. Jakeman, Centre for Resource & Environmental Studies, Aust National University, Canberra ACT 0200, Australia


Combining Environmental Information: Environmental Monitoring, Measurement and Assessment

Lawrence H. Cox, US Environmental Protection Agency, USA
Walter W. Piegorsch, Univ of South Carolina, Columbia, USA

An increasingly important concern in environmental science is the need to combine information from diverse sources that relate to a common endpoint or effect. We explore the need to combine information in three areas (environmental monitoring, measurement and assessment), review available statistical methods, and discuss opportunities for statistical research. A companion paper (Piegorsch and Cox 1995) explores related issues in environmental epidemiology and toxicology.

Lawrence H. Cox, Senior Statistician, US Environmental Protection Agency, AREAL (MD-75),Research Triangle Park, NC 27711,USA


A Stochastic Global Tracer Transport Model for Studying the Sources and Sinks of Greenhouse Gases: 1980-1995

John A. Taylor, Jay W. Larson, ANU, Canberra, Australia
John E. Mulquiney, MIT, Cambridge, USA

A global 3-dimensional atmospheric tracer transport model, the Australian National University-Chemical Transport Model (ANU-CTM), has been applied to a wide range of problems in tracer transport, atmospheric chemistry and the sources and sinks of the key greenhouse gases. The original version of the model employed bimonthly wind field statistics calculated from European Centre for Medium-Range Weather Forecasts (ECMWF) data at 2.5 degrees resolution with 7 levels in the vertical for 1980. Recently the model has been updated to incorporate new monthly wind field statistics derived from ECMWF observations for the period 1980-1995. New physical parameterisations representing the effects of the atmospheric boundary layer, surface topography and sub-grid scale cloud transport have also been incorporated into the model. A description of the new model formulation will be presented. Results from the application of this model to the study of the sources and sinks of atmospheric carbon dioxide will be discussed.

John A. Taylor, Centre for Resource & Environmental Studies, Aust National University, Canberra ACT 0200, Australia


ASC Invited: Computational Methods in Experimental Design


CYCDESIGN: A Package for Constructing Block and Row-Column Designs

J. A. John, The University of Waikato, Hamilton, New Zealand
E. R. Williams, CSIRO Canberra, Australia

We describe a computer package that provides an experimenter with efficient designs for a range of different block and row-column structures. The main module is a procedure that interchanges treatments in the design with the aim of optimising an appropriate objective function. Simulated annealing techniques are used to overcome local optimality problems. In general, the objective function is a weighted linear combination of a number of objective functions of the individual components of the design. In some cases the weights are necessarily fixed, while in other cases they can be chosen by the user. Two special modules are also provided; one for cyclic block designs and the other for alpha designs. Hence we cater for the majority of the design types discussed in our book {\it Cyclic and Computer Generated Designs}. Row-column designs can either be constructed simultaneously using the main module, or can be obtained in two stages. The first stage constructs a block design, using the main or cyclic design module. With this design as the columns of the row-column design, the second stage then carries out interchanges within columns to obtain the final design. The simultaneous and two-stage approaches can also be used to construct resolvable row-column designs, with the alpha design module used as an option at the first stage. For resolvable designs the replicates can be set out contiguously either in a line or in a two-way array. Rows and/or columns can be grouped to provide other design options important in practice. Factorial, as well as single-factor, treatments can also be accommodated. Some of the features of the new package are discussed and examples of its application to a variety of experimental situations presented. The user is taken through the options available for the choice of block and treatment structures, with extensive use made of on-line help facilities. The final design can be randomised and a record of the design properties obtained.
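
To make the interchange idea concrete, here is a minimal Python sketch of simulated annealing over pairwise treatment interchanges for a block design with $v$ treatments in $b$ blocks of size $k$. It is not the CYCDESIGN implementation: the objective shown (the sum of reciprocals of the nonzero eigenvalues of the treatment information matrix), the equireplicate starting layout and the cooling schedule are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def a_value(layout, v):
        """Sum of reciprocal nonzero eigenvalues of the information matrix
        C = R - N diag(1/k) N', where layout[j] holds the treatment labels in block j."""
        b, k = layout.shape
        N = np.zeros((v, b))
        for j in range(b):
            for t in layout[j]:
                N[t, j] += 1
        R = np.diag(N.sum(axis=1))
        C = R - N @ np.diag(1.0 / N.sum(axis=0)) @ N.T
        ev = np.linalg.eigvalsh(C)[1:]          # drop the structural zero eigenvalue
        return np.inf if np.any(ev < 1e-8) else np.sum(1.0 / ev)

    def anneal(v, b, k, n_iter=20000, temp=1.0, cool=0.9995):
        plots = np.resize(np.arange(v), b * k)  # near-equireplicate starting allocation
        rng.shuffle(plots)
        layout = plots.reshape(b, k)
        best, best_val = layout.copy(), a_value(layout, v)
        cur_val = best_val
        for _ in range(n_iter):
            i, j = rng.integers(b, size=2)
            p, q = rng.integers(k, size=2)
            layout[i, p], layout[j, q] = layout[j, q], layout[i, p]   # interchange
            new_val = a_value(layout, v)
            if new_val <= cur_val or rng.random() < np.exp((cur_val - new_val) / temp):
                cur_val = new_val
                if new_val < best_val:
                    best, best_val = layout.copy(), new_val
            else:                                # reject: undo the interchange
                layout[i, p], layout[j, q] = layout[j, q], layout[i, p]
            temp *= cool
        return best, best_val

    design, crit = anneal(v=7, b=7, k=3)         # e.g. search for a good (7,7,3) block design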

J. A. John, Department of Statistics, The University of Waikato, Hamilton, New Zealand


Construction of Optimal and Near-Optimal Designs for Dependent Observations Using Simulated Annealing

R. J. Martin, University of Sheffield, Sheffield UK

Simulated annealing has recently received much attention in Statistics as a stochastic method of optimisation when there are many local optima. Its use has been suggested for finding optimal or near-optimal block designs and row-column designs. Then, the moves considered are pairwise interchanges of treatment labels. Using an approximation to the criterion is faster, and usually satisfactory. In practice, it is difficult to predetermine good settings for the routine, and the routine rarely finds the global optimum. Suggested improvements include using a descent routine after the annealing, and having various levels in which both the annealing and descent routines are used. With spatial dependence, the situation is more complicated. Some versions of the routine will be discussed. Examples of various applications will be presented, including block designs with linearly or spatially arranged units. Different situations of interest include comparing treatments of equal status and test-control designs.

R. J. Martin, School of Mathematics & Statistics, University, Sheffield S3 7RH UK. r.j.martin@sheffield.ac.uk


Computational Methods For Constructing Orthogonal Main Effect Plans

Leonie Burgess, University of New South Wales, Sydney, Australia
Deborah J. Street, University of Technology, Sydney, Australia

Orthogonal main effect plans (OMEPs) are a special class of fractional factorial designs, in which all main effects can be orthogonally estimated, assuming that interactions between factors are negligible. Let $N_{IJ}$ be the incidence matrix of factors $I$ and $J$, so that the ($x$,$y$) entry of $N_{IJ}$ is the number of times level $x$ of factor $I$ appears with level $y$ of factor $J$. In an OMEP the entries in $N_{IJ}$ can be derived directly from the replication information for each factor separately. In this talk we describe computational methods which use this property to construct OMEPs.
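
As a small illustration of this property (a sketch only; the data layout and function names below are illustrative, not the construction algorithms of the talk), the incidence matrix $N_{IJ}$ and the standard proportional-frequencies condition $n_{xy} = r^I_x r^J_y / N$ can be checked directly from a plan stored as a runs-by-factors array of level codes:

    import numpy as np

    def incidence(plan, i, j):
        """N_IJ for factors i and j of a plan given as an (N runs x F factors)
        integer array of level codes."""
        li, lj = plan[:, i].max() + 1, plan[:, j].max() + 1
        N = np.zeros((li, lj), dtype=int)
        for x, y in zip(plan[:, i], plan[:, j]):
            N[x, y] += 1
        return N

    def is_omep(plan):
        """Check the proportional-frequencies condition n_xy = r_x * r_y / N
        for every pair of factors, so each N_IJ is fixed by the separate
        replication vectors."""
        n, f = plan.shape
        for i in range(f):
            for j in range(i + 1, f):
                Nij = incidence(plan, i, j)
                ri, rj = Nij.sum(axis=1), Nij.sum(axis=0)
                if not np.allclose(Nij, np.outer(ri, rj) / n):
                    return False
        return True

    # example: the 4-run half fraction of a 2^3 factorial
    plan = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]])
    print(is_omep(plan))   # True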

Leonie Burgess, School of Mathematics, University of NSW, Sydney 2052 Australia leonie@maths.unsw.edu.au


IMS Invited: \ Curve Estimation and Modelling - II


Remarks on Improved Density Estimation in the Tails

David W. Scott, Rice University, Houston, USA

Given the general nature of nonparametric density algorithms, reliable estimation in the tails is not expected due to the paucity of data. Bias reduction techniques are an attractive approach in any case. Ordinary higher order kernels can be unsatisfactory due to the possibility of negative density values. Interesting smooth nonnegative higher order techniques can be obtained by taking the ratio of kernel estimates with different smoothing parameters as in Terrell and Scott [{\it Ann. Statist.} 8 (1980):1160-63] or the adaptive procedure of Abramson [{\it Ann. Statist.} 10 (1982):1217-23], for example. However, there exist in theory special bandwidths in the tails that lead to higher order convergence pointwise. The nature of these bandwidths, the possibility of practical application, and some examples are described in this talk. This work is joint with Stephen R. Sain.
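
One simple version of the ratio idea can be sketched in Python as below, assuming the multiplicative form $\hat f_h^{4/3} \hat f_{2h}^{-1/3}$ (chosen so that the leading $O(h^2)$ bias terms cancel while the estimate stays nonnegative); the tail-specific bandwidths discussed in the talk are not reproduced here.

    import numpy as np

    def kde(x, data, h):
        """Ordinary Gaussian kernel density estimate evaluated at the points x."""
        u = (x[:, None] - data[None, :]) / h
        return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

    def ratio_estimate(x, data, h):
        """Nonnegative higher-order estimate formed from two ordinary kernel
        estimates at bandwidths h and 2h; the leading O(h^2) bias terms cancel."""
        f1 = kde(x, data, h)
        f2 = kde(x, data, 2 * h)
        return f1 ** (4.0 / 3.0) * f2 ** (-1.0 / 3.0)

    rng = np.random.default_rng(0)
    data = rng.standard_normal(500)
    x = np.linspace(-4, 4, 81)
    f_tilde = ratio_estimate(x, data, h=0.4)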

David W. Scott, Department of Statistics MS 138, Rice University, POB 1892, Houston, TX 77251-1892, USA scottdw@rice.edu
ftp.stat.rice.edu/pub/scottdw.



Dynamical Systems Trajectory Estimation via Tunable Models with Noisy Data

Grace Wahba, Jianjian Gong, Donald R. Johnson, University of Wisconsin, Madison WI, USA
Joseph Tribbia, National Center for Atmospheric Research, Boulder CO USA

We consider the estimation of a trajectory which represents the evolution of a very complex dynamical system, for which there exists an approximate computer representation with some physical parameters only known approximately, and for which scattered, noisy, incomplete observations on functionals of the trajectory over time and space are available. We consider the simultaneous tuning of weighting, smoothing and physical parameters, which influence the fitted trajectory. Such problems occur in general circulation models describing the atmosphere and the ocean, and elsewhere. Simulation results concerning a `toy' problem based on an equivalent barotropic vorticity equation on a latitude circle are described, and feasible methods for potential implementation in operational systems are discussed.

Grace Wahba, Dept. of Statistics, University of Wisconsin, 1210 W. Dayton St., Madison, WI 53706, USA wahba@stat.wisc.edu
See ftp://ftp.stat.wisc.edu/pub/wahba/talks/sydney.96 for related work.



Exact Risk Analysis of Wavelet Regression

J. S. Marron, University of North Carolina, Chapel Hill, USA

Wavelets have motivated the development of a host of new ideas in nonparametric regression smoothing. Here we apply the tool of exact risk analysis to understand the small-sample behavior of wavelet bases, and comparisons between hard and soft thresholding are given from several viewpoints. Our results provide insight as to why the viewpoints and conclusions of Donoho and Johnstone differ from those of Hall and Patil.
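
For reference, the two threshold rules being compared act coordinatewise on empirical coefficients as in the Python sketch below; the universal threshold $\sqrt{2\log n}$ with unit noise variance is used purely for illustration and is not part of the exact risk analysis itself.

    import numpy as np

    def hard_threshold(coef, t):
        """Keep a coefficient unchanged if it exceeds the threshold, else zero it."""
        return np.where(np.abs(coef) > t, coef, 0.0)

    def soft_threshold(coef, t):
        """Shrink every coefficient towards zero by t, zeroing the small ones."""
        return np.sign(coef) * np.maximum(np.abs(coef) - t, 0.0)

    rng = np.random.default_rng(2)
    theta = np.concatenate([np.array([5.0, -3.0, 2.0]), np.zeros(61)])  # sparse signal
    y = theta + rng.standard_normal(theta.size)                          # noisy coefficients
    t = np.sqrt(2 * np.log(theta.size))                                  # universal threshold, sigma = 1
    mse_hard = np.mean((hard_threshold(y, t) - theta) ** 2)
    mse_soft = np.mean((soft_threshold(y, t) - theta) ** 2)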

J. S. Marron, Department of Statistics, University of North Carolina, Chapel Hill NC 27514, USA marron@stat.unc.edu


ASC Contributed: Multivariate Analysis


Considering the Ordering Constraint for Simple Correspondence Analysis

Eric J. Beh, University of Wollongong, Australia

Correspondence analysis is a multivariate statistical, and graphical, procedure for analysing two-way and multi-way contingency tables. For the analysis of two-way tables, simple correspondence analysis may be applied, while for a multi-way table, multiple correspondence analysis is used by considering either the contingency table's indicator matrix or its Burt matrix. However, the additional constraint of ordered variables has been little studied. Here, a method is presented that caters for the application of correspondence analysis to two-way contingency tables which have either one or both variables ordered. By this method, it can be shown that all properties of simple correspondence analysis still hold when the ordering constraint is applied, while allowing more information to be gained about the variables by isolating location, dispersion, skewness and kurtosis components.

Eric J. Beh, University of Wollongong, Wollongong, NSW, Australia eric_beh@ouw.edu.au


Symbolic Specification of Multivariate Logistic Models

Gary Glonek, The Flinders University, Adelaide, Australia

Multivariate logistic regression models relate the joint distribution of several categorical responses to predictor variables and are based on the multivariate logistic transform of the table of cell probabilities. This transformation decomposes the table of probabilities into main effects and interactions. In situations involving a moderate number of responses, it is computationally feasible to select a parsimonious model by systematically eliminating non-significant interaction terms in a way similar to the analysis of factorial experiments. However, without a suitable system of model formulae to specify the models, such an approach is unduly cumbersome. In this talk, the symbolic representation of multivariate logistic models, proposed by Glonek and McCullagh [{\it J. Roy. Statist. Soc. Ser. B} 57 (1995):533-46], is explained and its application to data is illustrated.

G. Glonek, Department of Mathematics and Statistics, Flinders University, GPO Box 2100, Adelaide SA 5001, Australia gary@stats.flinders.edu.au


Nonparametric Estimation of the Money Demand Cointegration by the Projection Pursuit Method

H.D. Vinod, Fordham University, New York, USA

The money demand equation continues to attract the attention of econometricians, with a new wrinkle provided by cointegration. We use projection pursuit (PP) regressions, pioneered by Friedman and Stuetzle (1981), to suggest new estimates of the partials of the conditional expectations of the regressands with respect to the regressors. Since the usual cointegration methodology involves linear relations, carefully chosen directions in which a generalized additive structure is preserved by PP methods are potentially useful. These methods are computationally demanding, since the bootstrap needs to be used for confidence statements. Our numerical estimates with 18 regressors are plausible and yield narrow confidence intervals.

H.D. Vinod, Fordham University, New York, 10458 vinod@murray.fordham.edu


ML and REML Estimators in Multivariate Growth Curve Models

V. G. S. Vasdekis, University of Crete, Greece
I. G. Vlachonikolis, University of Technology, Loughborough, UK

The Lange and Laird (1989) random-effects model is extended to the multivariate setting where more than one characteristic is measured at each time point. ML and REML estimators are obtained under the restriction that the estimates of the variance matrices be positive semi-definite. It is shown that REML has a greater probability of giving non-degenerate estimates of the variance matrices and has smaller bias in small samples. The need for deciding the number of random effects is investigated under two univariate measures of efficiency, and the estimators are compared with respect to these measures.


Analysis of Quadratic Risk for Estimators of Parameters in Several Parallel Regression Lines with Multivariate Student-$t$ Errors

Shahjahan Khan, University of Southern Queensland, Toowoomba, Australia
A.K.Md. Ehsanes Saleh, Carleton University, Ottawa, Canada

The problem of estimation of the slope as well as the intercept parameters of a set of $p$ regression lines with errors following multivariate Student-$t$ distribution is considered when it is a priori suspected that the regression lines are homogeneous, that is, the slope of each of the $p$ regression lines is equal to $\beta $ (unknown). Five different estimators, namely the {\it unrestricted, restricted, pre-test, shrinkage and positive-rule shrinkage}, are defined. The properties of the estimators are investigated based on the bias and risk under quadratic loss criteria. The dominance picture of the estimators is discussed under different conditions.

Shahjahan Khan, Department of Mathematics and Computing, University of Southern Queensland, Toowoomba, Australia


Some Results on the Estimation of Parameters for Normally Distributed Random Matrices

Charles R.O. Lawoko, Queensland University of Technology, Brisbane, Australia

Consider the $p \times n$ random matrix $X$ which is normally distributed with mean $M$, and let the covariance matrix between any two columns of $X$ (say $x_i$ and $x_j$) be $\gamma _{ij}(\Sigma + \Sigma _{\varepsilon })$. Some results on the maximum likelihood estimation of some parameters of this model will be discussed under various conditions. Extensions to the situation when the covariance matrix is $\Gamma \oplus (B_{\|i-j\|} \Sigma _{\varepsilon } + C_{\|i-j\|} \Sigma _{\varepsilon })$, with some restrictions on $B$ and $C$, will also be discussed. This model arises from a problem on separating ``noise'' from ``signal'' for remotely sensed data, and related problems. Other aspects of the estimation problem related to this particular problem will also be discussed. Some of the results reported follow from joint work with Dr D.Q. Wang, School of Mathematics and Statistics, University of Birmingham, UK.

Charles R.O. Lawoko, Faculty of Business, Queensland University of Technology, PO Box 2434, Brisbane QLD 4001 Australia c.lawoko@qut.edu.au


Missing Values in Factor Analysis

Christopher Turville, Robert W. Mellor, University of Western Sydney, Macarthur, Australia

Because factor analysis is based on large data sets, it seems inevitable that for most analyses some data will be missing. The paper examines three techniques for handling missing data in factor analysis, all readily available in statistical software: ``complete cases only'' uses only those cases or observations that have a measurement for every variable; ``imputing means'' replaces each missing value by the mean for that particular variable; and the third estimates variances and covariances from all available pairs of data. Using a real data set with differing proportions of the data randomly removed, the three techniques were compared on their ability to accurately predict factor loadings for all factors, their susceptibility to changing the order of importance of variables within factors, and their ability to represent accurately the variance explained by each factor. Not surprisingly, the most effective technique depends on the number of variables, the number of cases or observations, the proportion of missing values, and the average intercorrelations (as defined by Timm [{\it Psychometrika} 35 (1970):417-437]).
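
A minimal Python sketch of the three techniques, applied to the covariance matrix from which a factor analysis would start, is given below; the simulated data frame with missing entries and the use of pandas defaults are assumptions for illustration only.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(7)
    X = pd.DataFrame(rng.standard_normal((100, 4)), columns=list("abcd"))
    X = X.mask(rng.random(X.shape) < 0.1)        # knock out roughly 10% of values at random

    def cov_complete_cases(X):
        """Use only the cases (rows) that have a measurement for every variable."""
        return X.dropna().cov()

    def cov_mean_imputed(X):
        """Replace each missing value by the mean of that variable, then use all rows."""
        return X.fillna(X.mean()).cov()

    def cov_pairwise(X):
        """Estimate each variance and covariance from all available pairs of data
        (pandas excludes missing values pairwise by default)."""
        return X.cov(min_periods=2)

    # any of these matrices could then be passed on to a factor extraction routine
    print(cov_complete_cases(X).round(2))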

Robert Mellor, Dept of Mathematical Sciences, University of Western Sydney, Macarthur, PO Box 555, Campbelltown NSW 2560, Australia r.mellor@uws.edu.au


ASC/IMS Contributed: \ Topics in Markov Chain Monte Carlo


Parallelizing Markov Chain Monte Carlo Sampling Techniques for Distributions with Isolated Modes

Sujay Datta, Janis P. Hardwick, Quentin F. Stout, University of Michigan, Ann Arbor, USA

Monte Carlo sampling-based methods using stationary Markov chains have recently become popular in Bayesian statistical inference. Many of the traditional methods (such as {\it Gibbs sampling}) suffer from problems of slow convergence and large dependencies between successive states when the target-distribution has isolated modes. Even multiple independent runs of them may not guarantee a sample with a fair representation of each mode. More recent developments, such as {\it simulated tempering} and the {\it tempered transitions} method, are a considerable improvement in this respect. However, the overall picture is that these methods are still on the slower side when executed in a serial manner. With the advent of high-performance {\it parallel computing architectures}, several parallel simulated annealing techniques have been proposed for optimization purposes. Here we develop analogous parallel algorithms for {\it simulated tempering} and the {\it tempered transitions} method, looking for speed-up in convergence. The approaches taken are {\it simultaneous periodically interacting searches}, {\it multiple dependent trials} and {\it massive parallelization}.

Sujay Datta, Dept. of Statistics, Univ. of Michigan, 1440 Mason Hall, 419 S. State St., Ann Arbor, MI 48109-1027, USA sdatta@stat.lsa.umich.edu


A Bayesian Approach to the Genetic Mapping of Quantitative Trait Loci in a Half-Sib Design

A. W. George, K. Mengersen, Queensland University of Technology, Brisbane, Australia
G. Davis, CSIRO, Brisbane, Australia

It is difficult to deny the enormous impact the use of genetic markers has had on the mapping of quantitative traits. Through the use of genetically marked chromosomes, it is possible to locate the loci affecting quantitative traits. In this paper, we consider a Bayesian approach to detecting a single QTL and estimating its size given 3 genetic markers for a half-sib design. This design is very common in animal breeding but results in substantial missing information. The methodology requires both parameter estimation and model selection. Due to the complexity of the posterior distribution and the model selection demands, Markov Chain Monte Carlo methods are used and we will discuss implementation issues. Results obtained from a simulated data set illustrate the technique.

Andrew George, School of Mathematics, Gardens Point Campus, Queensland University of Technology, GPO Box 2434, Brisbane 4001, Queensland, Australia george@einstein.fsc.qut.edu.au


Empirical Bayes Analysis of Partially Accelerated Life Tests Using Gibbs Sampling

Mohamed T. MADI, UAE University, Al-Ain, UAE

We consider a life testing setting in which several groups of items are put, at different instances, on the partially accelerated life test introduced by DeGroot and Goel [{\it Naval Res. Logistics Quarterly} 26 (1979):223-235]. The combined failure time data are then used to derive empirical Bayes estimators for the failure rate of the exponential life length under normal conditions. The estimation, which is implemented using the Gibbs sampler Monte Carlo-based approach, illustrates once again the ease with which these types of estimation problems, often requiring sophisticated numerical or analytical expertise, can be handled using the sampling-based approach.

Mohamed T. MADI, Department of Statistics, P.O. Box 17555, Al-Ain, UAE MADI@ADMINPO.UAEU.AC.AE


Gibbs Sampling and some Applied Problems

A. N. Pettitt, Queensland University of Technology, Brisbane, Australia

In terms of offering advances for the applied statistician, methodology based upon Bayesian techniques using modern computational methods such as Markov chain Monte Carlo has something substantial to offer. The necessity of writing computer code to implement algorithms for a large class of models is removed by using the BUGS (Spiegelhalter {\it et al.}, 1995) software. This makes these analyses accessible to a large class of researchers and students. Experiences with using BUGS for a range of problems are described, as well as its use in teaching.

Prof Tony Pettitt, School of Mathematics, QUT, GPO Box 2434, Brisbane 4001, Queensland a.pettitt@fsc.qut.edu.au


Finite Sample Performance of Additive Robust Bayesian Nonparametric Regression

Michael Smith, Simon Sheather, Robert Kohn, AGSM, UNSW, Australia

The finite sample performance of a number of nonparametric regression estimators is investigated in a variety of settings involving outliers. A Bayesian approach is shown to have good overall comparative performance. An additive real data example is examined in order to demonstrate the methodology's potential in fitting multivariate regression data sets. The Bayesian analysis is carried out using a Markov chain Monte Carlo approach.

Michael Smith, AGSM, UNSW, Sydney, 2052, Australia mikes@agsm.unsw.edu.au


Bayesian Object Recognition using Resolution-Varying Templates

Håvard Rue, The Norwegian University of Science and Technology, Trondheim, Norway

Bayesian object recognition is the problem of how to estimate the number of objects and their locations in a non-ideal environment. The difficulty varies with the degree of variation in size and shape of the objects and the number of types of objects present. The deformable template approach of U. Grenander uses as the object model a stochastically deformed template [{\it General Pattern Theory}, Oxford University Press, 1993]. To better handle the case when different types of objects are present, we allow the deformed template to develop also {\em through\/} the different object types by introducing a stochastic resolution of the templates. This allows larger jumps in the Markov chain Monte Carlo (MCMC) algorithm and an easier construction of split-an-object and fuse-objects jumps, and it allows the object type to change while doing various steps in the MCMC algorithm. The idea is illustrated on some examples.

Håvard Rue, Dept. of Math. Sci., The Norwegian Univ. of Sci. and Tech., N-7034 Trondheim, Norway Havard.Rue@imf.unit.no
http://www.imf.unit.no/~hrue.



A Comparison of Computable Bounds for Markov Chain Monte Carlo Rates of Convergence

David J Scott, University of Auckland, Auckland, New Zealand
Kerrie L Mengersen, Queensland University of Technology, Brisbane, Australia
Gareth O Roberts, University of Cambridge, Cambridge, England
Richard L Tweedie, Colorado State University, Fort Collins, USA

Computable bounds for the rate of convergence to stationarity of Markov chains are of interest in the implementation of Markov chain Monte Carlo (MCMC) methods, since good bounds could provide guidance on the length of the burn-in period required to ensure the Markov chain is sufficiently close to stationarity for sampling to be assumed to be from the stationary distribution. In this paper actual numerical bounds are obtained for rates of convergence for a number of examples, using the results of different authors. In the main, the bounds obtained are not of use in practice, being far too large. The exception is when the Markov chain under consideration is stochastically monotone.

David J Scott, Division of Science and Technology, Tamaki Campus, The University of Auckland, PB 92019, Auckland, New Zealand d.scott@auckland.ac.nz


ASC/IMS Contributed: Poster Session


Limit Properties of Order Statistics from Almost Lack of Memory Distributions

B. Dimitrov, GMI, Flint, USA
M.E. Ghitany, Kuwait University, Safat, Kuwait
Z. Khalil, Concordia University, Montréal, Canada

The Almost Lack of Memory class of random variables is specified by the periodicity of its hazard function, the probability of surviving the first period, and the conditional distribution of the time to failure within the first period given that failure occurs there. In this study we establish explicit forms of the distribution of order statistics in a random sample from this class of distributions. For finite sample size, we find the limit distributions of the order statistics when the probability of surviving the first period approaches 0 or 1, as well as the limiting behaviour of the spacing distributions.

Prof. Z. Khalil, Department of Statistics and O.R., Faculty of Science, Kuwait University, P.O.Box 5969, Safat 13060, Kuwait zohel@kuc01.kuniv.edu.kw


Control Charts for Non-normal Spatial Data

Michelle L. Gatton, Queensland University of Technology, Queensland, Australia

Control charts are a commonly used quality assurance tool in industry. A difficulty with sampling from a process arises when there is spatial variability within a sampling unit. As a focus for this presentation we consider sampling the thickness of an animal hide. In this paper we investigate ways to deal with this problem by using the minimum thickness from sampling regions on the hide. The distribution of the minimum thickness is non-normal and depends on the distribution of thickness. We use a new family of distributions developed by MacGillivray and Cannon (1994, under revision), called the $g$-and-$k$ distribution, which may be used to approximate a wide class of distributions, with the advantage of effectively modelling skewness and kurtosis through independent parameters. New control limits which more accurately reflect the distribution of the data can be derived by fitting the $g$-and-$k$ distribution to the data. We also observe the effect on the control limits as the shape of the distribution deviates from normal. This work is a joint project with Michele A Haynes and Kerrie L Mengersen, Queensland University of Technology, Queensland, Australia.
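
A Python sketch of the $g$-and-$k$ quantile function in its commonly quoted form is given below (with the conventional constant $c = 0.8$; the parameter values shown are hypothetical, not fitted to the hide data); control limits can then be read off as extreme quantiles of a fitted distribution.

    import numpy as np
    from scipy.stats import norm

    def gk_quantile(u, A, B, g, k, c=0.8):
        """Quantile function of the g-and-k distribution: skewness and kurtosis
        are governed by the separate parameters g and k."""
        z = norm.ppf(u)
        skew = 1 + c * (1 - np.exp(-g * z)) / (1 + np.exp(-g * z))
        return A + B * skew * (1 + z**2) ** k * z

    # illustrative control limits for a fitted minimum-thickness distribution
    # (parameter values here are hypothetical)
    lcl = gk_quantile(0.00135, A=2.1, B=0.15, g=-0.4, k=0.1)
    ucl = gk_quantile(0.99865, A=2.1, B=0.15, g=-0.4, k=0.1)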

Michelle L. Gatton, School of Mathematics, Queensland University of Technology, GPO Box 2434 Brisbane Q 4001, Australia m.gatton@qut.edu.au


Residuals for the Linear Model with General Covariance Structure

John Haslett, Trinity College, Dublin, Ireland

The general linear model $\underline {Y}=X\underline {\beta }+ \underline {\epsilon }$, where Cov$(\underline {\epsilon })=V$, is the basis for very many statistical procedures. It is therefore surprising that it is not well known that deviations from the expected value may be studied equally simply via (i) the marginal expected value $\underline {\mu } = E[\underline {Y}] = X \underline {\beta }$ and (ii) the conditional expected value $\tilde {\underline {\mu }}$ whose elements are $E[Y_{i} | rest]$. These yield deviations $\underline {\epsilon }$ and $\tilde {\underline {\epsilon }} = V^{-1}\underline {\epsilon }$ respectively. When the parameters $\underline {\beta }$ are to be estimated from the data this duality extends to residuals; these may be multivariate, corresponding to a `leave-k-out' analysis. The classical definition ``residual = data - fitted value'' is thus capable of two {\it equally simple but complementary} definitions of ``fitted value''. The theory will be illustrated by reference to residuals from a time series on 'global warming'. See related work in the contributed paper session and the Dynamic Statistical Graphics workshop.
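
A minimal Python sketch of the two kinds of deviation for a generalised least squares fit with known $V$ follows; the AR(1)-type covariance and the simulated data are illustrative assumptions, and the conditional deviations are taken, as in the abstract, proportional to $V^{-1}$ times the marginal ones.

    import numpy as np

    def gls_residuals(y, X, V):
        """Marginal and conditional residuals for y = X beta + eps, Cov(eps) = V."""
        Vinv = np.linalg.inv(V)
        beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)   # GLS estimate
        e_marginal = y - X @ beta                # data minus marginal fitted value
        e_conditional = Vinv @ e_marginal        # deviation from E[Y_i | rest], up to scaling
        return beta, e_marginal, e_conditional

    # toy AR(1)-type covariance as an example V
    n, rho = 50, 0.6
    V = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    rng = np.random.default_rng(3)
    X = np.column_stack([np.ones(n), np.arange(n)])
    y = X @ np.array([1.0, 0.05]) + np.linalg.cholesky(V) @ rng.standard_normal(n)
    beta, e, e_tilde = gls_residuals(y, X, V)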

Professor J. Haslett, Statistics Dept, Trinity College, Dublin, Ireland jhaslett@tcd.ie


Lower Reliability Bound for Consecutive k-out-of-r-from-n:F System

Y. Higashiyama, Ehime University, Matsuyama, Japan
M. Kraetzl, DSTO, Salisbury, Australia
L. Caccetta, Curtin University of Technology, Perth, Australia

A consecutive $k$-out-of-$r$-from-$n$:$F$ system has $n$ ordered components and fails if and only if there are $r$ consecutive components of which at least $k$ have failed. This paper studies the reliability of a general system with not necessarily equal components. Higashiyama et al. proposed two recursive algorithms to evaluate the exact reliabilities of linear and circular consecutive $2$-out-of-$r$-from-$n$:$F$ systems [{\it IEICE Trans. Fundamentals} E78-{\it A} (1995):680-684]. For $k \geq 3$ an explicit solution is difficult to find, and we give a lower bound for the reliability of a consecutive $k$-out-of-$r$-from-$n$:$F$ system. New recursive equations are proposed, which have time complexity $O(nr^{k})$ for the linear system and $O(nr^{2k})$ for the circular system.

Yoichi Higashiyama, Faculty of Engineering, Ehime University, 3 Bunkyo-cho, Matsuyama, 790 Japan mountain@ccs42.dpc.ehime-u.ac.jp


Food Habits of Young and Middle Aged Women Outside the Capital Cities of Australia

Gita D. MISHRA, Women's Health Australia, University of Newcastle
Rhonda REYNOLDS, Faculty of Health, University of Western Sydney.
Annette J. DOBSON, Department of Statistics, University of Newcastle.

The pilot studies for the Australian Longitudinal Study on Women's Health provided an opportunity to compare food habits in two groups of women, aged 18-22 and 45-49 years, living in urban, rural and remote areas of New South Wales. The survey form included 19 food frequency questions. Analysis of variance and, for some food items, log linear models were used to investigate the associations between food habits, socio-economic characteristics and other health related behaviours. Data from this survey showed that women in urban areas consumed less meat than women in rural and remote areas. In comparison to young women, middle-aged women consumed more fruit and vegetables, reduced fat milk, fish, biscuits and cakes, bread, soft cheese, dried beans, and fat on meat. For both young and middle-aged women, smokers consumed less fresh fruit and vegetables and fewer breakfast cereals than non-smokers, suggesting that health promotion material aimed at changing dietary behaviours may need additional targeting for smokers.

Gita D. Mishra, Women's Health Australia, Mathematics Building, University of Newcastle, University Drive, Callaghan 2308, N.S.W., Australia whgm@cc.newcastle.edu.au


Applications of Official Statistics--The Hong Kong Experience

Teresa K.Y. Ng, City University of Hong Kong, Kowloon, HK

Economic and social statistics compiled and published by the Hong Kong Government are vital to business activities and Government projects. They have been increasingly used by various organisations in areas of planning, administration and decision-making. This paper presents two case studies on what official statistical data are published and how these data are being applied by end-users in Hong Kong. The first case covers how real estate companies foresee the future residential property market in the light of trade statistics, employment statistics, financial statistics, etc. The second case is on the applications of port statistics and shipping statistics to container terminal development in Hong Kong.

Teresa K.Y. Ng, Dept Applied Stats & Operatnl Research, City University of Hong Kong, 88 Tat Chee Ave, Kowloon Hong Kong ARHEIHEI@CITYU.EDU.HK


Murray Mouth Modelling

Vladimir Nikulin, University of Newcastle, NSW, Australia

The ecological stability of the River Murray Estuary is subject to many factors, among which river flow is the most influential. Under natural conditions river flow is seasonal. In order to smooth this process, weirs and barrages were constructed in 1940; these have been successful in providing water supplies for irrigation, but they transformed the River Murray Estuary into a highly regulated system. Nevertheless, wind can change the situation dramatically during summer droughts. In 1981 the mouth of the River Murray was closed for several weeks by a sand barrier. Its closure prompted the State Government to engage consultants and researchers to investigate the causes underlying the closure and to develop an optimal water management strategy: flow had to be secured during summer, while evaporation due to high water levels during winter needed to be controlled. In this paper computational procedures based on real data are established to develop a modelling strategy for the River Murray Estuary. It was found that the intensity of the water circulation may be effectively approximated by a function of the wind (a random factor) and the flow of water through the barrages (the control). This indicator function leads to a simulation model and an optimisation problem. Solutions of the optimisation problem were found and can be used for the management of water consumption.

Vladimir Nikulin, Dept of Statistics, University of Newcastle, NSW 2307 stvn@alinga.newcastle.edu.au


Overparametrization of Generalized F Distribution: A Simulation Study

Yingwei Peng, Keith B. G. Dear, The University of Newcastle, Newcastle, Australia

The generalized $F$ distribution is an extremely flexible distribution. However, there are problems in its application. An important problem is overparametrization among the shape and scale parameters, which leads to unreliable and uninterpretable estimates of these parameters with large variances. In this paper, we illustrate the problem with real data and compare the generalized $F$ distribution with other more parsimonious distributions by simulation. Results show that the extended generalized gamma distribution, which has one fewer parameter than the generalized $F$ distribution, can fit data almost as well as the generalized $F$ distribution.

Yingwei Peng, Department of Statistics, The University of Newcastle, Newcastle, NSW 2308, Australia peng@frey.newcastle.edu.au


Constrained Optimization via Stochastic Approximation with a Simultaneous Perturbation Gradient Approximation

Payman Sadegh, Technical University of Denmark, Denmark

The paper deals with a projection algorithm for stochastic approximation using a simultaneous perturbation gradient approximation. It addresses optimization under inequality constraints where no direct gradient of the loss function is available and the inequality constraints are given as explicit functions of the optimization parameters. It is shown that under application of the projection algorithm the parameter iterate converges almost surely to a Kuhn-Tucker point. The procedure is illustrated by a numerical example.
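
A minimal Python sketch of one form of such an algorithm is given below; the gain sequences, the Bernoulli $\pm 1$ perturbations and the box-constraint projection via clipping are illustrative choices, not necessarily those of the paper.

    import numpy as np

    rng = np.random.default_rng(4)

    def spsa_constrained(loss, theta0, lower, upper, n_iter=500,
                         a=0.1, c=0.1, alpha=0.602, gamma=0.101):
        """Stochastic approximation with a simultaneous perturbation gradient
        approximation; after each step the iterate is projected onto the box
        [lower, upper] (a simple explicit projection)."""
        theta = np.asarray(theta0, dtype=float)
        for k in range(1, n_iter + 1):
            ak = a / k**alpha
            ck = c / k**gamma
            delta = rng.choice([-1.0, 1.0], size=theta.size)   # Bernoulli +-1 perturbation
            g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck * delta)
            theta = np.clip(theta - ak * g_hat, lower, upper)   # projection step
        return theta

    # noisy quadratic loss as an illustration; unconstrained minimum lies outside the box
    loss = lambda th: np.sum((th - 2.0) ** 2) + 0.01 * rng.standard_normal()
    print(spsa_constrained(loss, theta0=[0.0, 0.0], lower=0.0, upper=1.5))  # near [1.5, 1.5]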

Payman Sadegh, Department of Mathematical Modelling, Building 321, DTU, DK-2800 Lyngby, Denmark ps@imm.dtu.dk


Application of Circular Distribution on Road Accidents in Chiang Mai City

Paitoon Tunkasiri, Chalida Niparugs, Chiang Mai University, Chiang Mai, Thailand

The times of accidents on two important roads in Chiang Mai City are compared. Samples of 31 and 23 accidents on Road I and Road II, respectively, were recorded in one month, and circular distribution techniques were employed. The sample from Road I showed a mean angle of $45.76^{\circ }$ (3.05 am), while the sample from Road II showed $117.38^{\circ }$ (8.20 am). The parametric Rayleigh test rejected the hypothesis that the sampled population is uniformly distributed around the circle for both Road I and Road II. The hypothesis of equal mean angles was then also rejected using the $F$-test statistic of Stephens. Nonparametric techniques were used in parallel: the Watson one-sample $U^{2}$-test rejected the hypothesis of uniformity for both roads, and the two-sample Watson $U^{2}$-test rejected the hypothesis that the samples came from two populations having the same direction. This means that the times of accidents on the two roads are significantly different.
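
The basic circular computations behind such an analysis can be sketched in Python as follows; the accident times shown are hypothetical, and the Rayleigh statistic $z = n\bar R^{2}$ is given without its significance table.

    import numpy as np

    def time_to_angle(hours):
        """Map time of day (in hours, 0-24) to an angle in radians on the circle."""
        return 2 * np.pi * np.asarray(hours) / 24.0

    def circular_summary(angles):
        """Mean direction, mean resultant length and the Rayleigh statistic z = n*R^2."""
        C, S = np.cos(angles).sum(), np.sin(angles).sum()
        n = len(angles)
        R_bar = np.hypot(C, S) / n
        mean_dir = np.arctan2(S, C) % (2 * np.pi)
        z = n * R_bar**2          # large z -> reject uniformity (Rayleigh test)
        return mean_dir, R_bar, z

    # hypothetical accident times (hours after midnight) on one road
    times = [2.5, 3.1, 3.4, 2.8, 4.0, 3.2, 2.9]
    mean_dir, R_bar, z = circular_summary(time_to_angle(times))
    print(np.degrees(mean_dir), R_bar, z)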

Dr Paitoon Tunkasiri, Dept of Statistics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand scsti009@Chiangmai.ac.th

Thursday 11 July: 14:00-15:50

ASC Invited: Applications of Statistical Analysis


Competing Risks: Applied to Analysis of a Clinical Trial in Advanced Breast Cancer

M. Lunn, St Hugh's College, Oxford, England

Addition or substitution therapy for advanced tamoxifen resistant breast cancer was investigated in a clinical trial. One major interest was disease progression in new or old sites. It was thought that addition therapy may be advantageous due to control of some old sites. The question of disease progression at new or old sites was tackled using competing risk theory. General results suggested that the original hypothesis was incorrect. Recent work extending methods using conditional and marginal probabilities for competing risks was applied with interesting results concerning interaction of the therapy with progesterone receptor status.

Mary Lunn, Mathematical Institute, St Hugh's College, Oxford OX2 6LE, UK mlunn@vax.oxford.ac.uk


Outliers in Hierarchically Structured Data

Toby Lewis, University of East Anglia, Norwich, UK

What {\em is} an outlier in a hierarchical or multilevel data set? Consider for example a set of 3-level data where examination scores for students (level 1) in classes (level 2) in schools (level 3) are to be analysed for relationships with various explanatory factors (e.g. sex of student, class size, type of school). We may have an outlying school, an outlying student in an outlying or again in a non-outlying school, and so on. In respect of {\em what} is a response outlying? Does its status depend on component outliers at a lower level? How can outliers be identified and how do they affect the analysis? We illustrate these issues by an analysis of data from a survey of high-school student adjustment (Scott, W.A. and Scott, R. [{\it Australian Journal of Psychology} 41 (1989):269-284]; Canberra: Social Science Data Archives, The Australian National University, 1995).

Toby Lewis, Centre for Statistics, University of East Anglia, Norwich NR4 7TJ UK T.Lewis@uea.ac.uk


Modelling Scores in the 1995 Winfield Cup

A. J. Lee, University of Auckland, New Zealand

Data from the 1995 ARL season are analysed to model the distribution of scores in Rugby League. The method used is to model the numbers of tries, goals and field goals scored by each team as a multivariate negative binomial distribution, and then calculate the characteristic function of the bivariate distribution of the two scores in a typical game. This distribution is then fitted to the data using empirical characteristic functions. The results were then used to assess the profitability of a lottery promotion based on League scores.

A. J. Lee, Department of Statistics, University of Auckland, Private Bag 92019, Auckland, New Zealand lee@stat.auckland.ac.nz


ASC-E Invited: All-day Workshop on ENVIRONMENTAL IMPACT ASSESSMENT


Official Environmental Information Systems

Jeannette Heycox, Aust Bureau of Statistics, ACT, Australia

``If you can't measure it, you can't manage it.'' This is as true of management for sound environmental outcomes as for anything else. Australian government is developing a range of environmental information systems necessary to assist decision making. This talk outlines the major government activities in data collection, analysis and dissemination with particular reference to those of the Australian Bureau of Statistics. It discusses the broad issues of compiling environmental statistics by a national statistical organisation, and such techniques as State of Environment reporting, environmental accounts and indicators. It raises such issues as the identification of users and their information needs, the level of geographical resolution required and the need to harmonise the development of information at the local, regional, national and global levels. Other issues touched on include the need to integrate environmental information with traditional development information, the need to establish standards and methods for handling such information and the need to share information about available data sources.

Jeanette Heycox, Environment & Energy Statistics, ABS, PO Box 10, Belconnen ACT 2616


Expenditure by Industry in the UK on Pollution Abatement

Alan H. Brown, Department of the Environment, London, UK

A number of initiatives in the UK have been carried out to reduce or eliminate environmental pollution by industry. Although studies are conducted to estimate the cost of implementing specific environmental regulations very little information is available about the overall costs to industry of pollution control. The UK has recently carried out a pilot survey to see whether reliable information on pollution control expenditure can be obtained from industry and to make preliminary estimates of capital expenditure on ``end-of-pipe'' and integrated processes, and current expenditure. The results of the study will be used to assess the value of the environmental expenditure information to policy makers, and to assess its use in the production of sustainable development indicators and the development of environmental ``green'' national accounts. The results of the pilot study suggest that it is possible to obtain reliable expenditure information from a regular enquiry.

Alan H. Brown, Dept of the Environment, Room A117 Romney House, 43 Marsham Street, London SW1P 3PY UK jmartin.epsim@gtnet.gov.uk


Spatial and Temporal Representation of Topography and Weather

M.F. Hutchinson, ANU, Canberra, Australia

Topography and climate are dominant controls on earth surface processes and their productivity. Topography directly moderates surface water flow and, in conjunction with climate, influences soil erosion and soil properties. For many purposes, the shape of a representation of terrain is more important than absolute elevation accuracy. A technique for interpolating topographic data to a regular grid from various data sources, including spot heights, contours and streamlines, is described. A key feature of the method is its attention to the drainage properties of the interpolated grid. Recent developments are directed toward developing additional flexibility and accuracy by incorporating locally adaptive interpolation strategies. Monthly mean climate is a primary determinant of plant growth and has been used to successfully describe the spatial distributions of both plant and animal native species. Climate is also a key parameter in defining agricultural productivity. Using the strong dependence of climate on elevation, thin plate spline techniques have been used to interpolate monthly climate variables over entire continents from standard meteorological networks. Further work is aimed at describing, in statistical terms, fine scale spatio-temporal distributions of monthly and daily weather. Elucidating the dependence of these models on large scale patterns and trends associated with climate change is essential for developing strategies for managing climate variability and climate change.

M.F. Hutchinson, Centre for Resource & Environmental Studies, ANU, Canberra ACT 0200, Australia


ASC Invited: Robust and Nonparametric Regression


Robust Estimation of Smooth Regression and Spread Functions and their Derivatives

A.H. Welsh, The Australian National University, Canberra, Australia

We consider the application of kernel weighted local polynomial regression methods to estimate regression and spread functions and their derivatives. In particular, we consider both an extension of the regression quantile methodology introduced by [{\it Koenker and Bassett} (1978)] and an approach based on $M$-estimation for heteroscedastic regression models. The present work is partly motivated by the paper of [{\it Ruppert and Wand} (1994)] who show that by analysing local polynomial fitting directly as a weighted regression method rather than as an approximate kernel smooth, asymptotic results for estimating the regression function can be obtained for complex problems including vector covariates, general polynomials, derivative estimation and boundary problems. We extend their results to allow for robust fitting, for modelling general heteroscedasticity and for derivative estimation in the multivariate case. Our results confirm that local polynomial fitting procedures produce robust estimators of the regression and spread functions and their derivatives. Moreover, we show that we can reduce the bias of the estimators by increasing the order of the polynomials being fitted. The excellent edge-effect behaviour of local polynomial methods extends to derivative estimation and the multivariate case. We apply the methodology to two data sets to illustrate its practical utility.

A. H. Welsh, The Australian National University, ACT 0200, Australia Alan.Welsh@anu.edu.au


Nonparametric Regression using Bayesian Variable Selection

Robert KOHN, Mike SMITH, Australian Graduate School of Management, Sydney, Australia

This paper estimates an additive model semiparametrically, while automatically selecting the significant independent variables and the appropriate power transformation of the dependent variable. The nonlinear variables are modeled as regression splines, with significant knots selected from a large number of candidate knots. The estimation is made robust by modeling the errors as a mixture of normals. A Bayesian approach is used to select the significant knots, the power transformation and to identify outliers using the Gibbs sampler to carry out the computation. Empirical evidence is given that the sampler works well on both simulated and real examples and that in the univariate case it compares favorably with a kernel weighted local linear smoother. The variable selection algorithm in the paper is substantially faster than previous Bayesian variable selection algorithms.

Robert Kohn, Australian Graduate School of Management, Univ NSW Sydney 2052 NSW, Australia R.Kohn@unsw.edu.au
http://www.agsm.unsw.edu.au.



Diagnostics to Detect Differences in Robust Fits of Linear Models

Joseph W. McKEAN, Western Michigan University, Kalamazoo Michigan

How much do influential data points affect the resulting estimates? To which fitted values do they make a difference? In this talk, diagnostics are proposed which help answer these questions by measuring the difference in fits between a highly efficient estimate and a highly robust estimate. The diagnostics are intended to expose the critical few observations that make a difference in the analysis. Further analyses may depend on how much weight these points should have, after closer inspection by the investigator.

Joseph W. McKean, Department of Mathematics & Statistics, Western Michigan University, Kalamazoo Michigan 49008, USA joe@stat.wmich.edu


IMS Invited: Information in Nonparametric Problems


Information Calculus for Non I.I.D. Data

Peter J. Bickel, University of California, Berkeley, USA

In chapter 3 of {\it Multivariate Analysis: Future Directions} (1994) Ed. C.R. Rao, we sketched how ideas of Levit could be combined with the calculus of efficient influence functions developed in Bickel, Klaassen, Ritov, Wellner, ``Efficient and adaptive estimation in semiparametric models'' (1993) to enable one to calculate information bounds and check efficiency of estimates in non i.i.d. situations. We develop this framework to obtain some insight into classical optimality results involving the usual estimates in the Cox regression model and rederive and extend some results of Kutoyants on inference in diffusion processes. We also pose some new problems on optimal inference in stochastic processes.

Peter J. Bickel, University of California, Dept of Stats, 367 Evans Hall \#3860, Berkeley, CA 94720-3860, USA bickel@stat.berkeley.edu


Some Estimation Problems for SPDE's

I. Ibragimov,

We consider the following estimation problem. We observe the solution $u(t,x)$ to the stochastic parabolic equation $du(t,x) = Lu(t,x)\,dt + au(t,x) + bu(t,x) + e\,dw(t,x)$, where $L$ is a known partial differential operator and $w$ is a cylindrical Wiener process. The problem is to estimate the values $F(a), G(b)$ of known functions $F, G$ at unknown points $a, b$. We consider the asymptotic set-up in which the (known) small parameter $e$ goes to zero. We find the optimal rate of convergence of estimators of $a, b$. In the case when $F, G$ are Frechet differentiable with Hilbert-Schmidt derivative we construct asymptotically efficient estimators of $F(a), G(b)$.


Direct Estimation of Low Dimensional Components in Additive Models

Enno Mammen, Universität Heidelberg, Germany

Additive regression models have turned out to be a useful statistical tool in analyses of high dimensional data sets. Recently, an estimator of additive components has been introduced by Linton and Nielsen (1994) which is based on marginal integration. The explicit definition of this estimator makes fast computation possible and allows an asymptotic distribution theory. In this talk a modification of this procedure is introduced. We propose to introduce a weight function and to use local linear fits instead of kernel smoothing. These modifications have the following advantages: (i) we demonstrate that with an appropriate choice of the weight function, the additive components can be efficiently estimated: an additive component can be estimated with the same asymptotic bias and variance as if the other components were known; (ii) application of local linear fits reduces the design related bias. This talk reports on joint work with W. Härdle and J. Fan.

Enno Mammen, Institut für Angewandte Mathematik, Ruprecht-Karls-Universität Heidelberg, Im Neuenheimer Feld 294, 69120 Heidelberg, Germany mammen@statlab.uni-heidelberg.de


IMS Contributed: Nonparametric Smoothing II


Granulometric Smoothing

Guenther Walther, Stanford University, Stanford, USA

A method for `smoothing' a multivariate data set is introduced that is based on a simple geometric idea. This method is applied to the problem of estimating level sets of a density and minimum volume sets with given probability content, with the goal of constructing certain multivariate bootstrap confidence regions and highest posterior density regions in a Bayesian context. Building on existing techniques, the resulting estimator combines excellent theoretical and computational properties for a very flexible class of sets: it converges with the minimax rates (up to log factors) in most cases where these rates are known, and can at the same time be computed, visualized, stored and manipulated by simple algorithms and tools.

Guenther Walther, Dept. of Statistics, Stanford Univ., Stanford, CA 94305, USA walther@playfair.stanford.edu


Smoothing Spline Models With Correlated Random Errors

Yuedong Wang, University of Michigan, Ann Arbor, USA

Spline smoothing provides a powerful tool for estimating a function. Its performance depends on the choice of smoothing parameters. Many methods such as CV, GCV and GML have been developed for selecting smoothing parameters under the assumption of independent observations. They fail badly when data are correlated. In this talk, for structured random errors, we propose to estimate the smoothing parameters and the correlation parameters simultaneously by the GML method. We establish the connection between a smoothing spline and a mixed-effects model. This connection allows us to fit a spline model with correlated errors using the existing SAS procedure {\tt mixed}. We illustrate our methods with applications to time series and spatial data.

Yuedong Wang, Department of Biostatistics, University of Michigan, Ann Arbor, MI 48109, USA yuedong@umich.edu
http://www.sph.umich.edu/~yuedong.



Bandwidth Selection in Recursive Kernel Regression

Mikael Thuvesholmen, Lund University, Sweden

A bandwidth selector based on crossvalidation is proposed for a recursive kernel regression estimator. In the nonrecursive case, the crossvalidation method needs $O(n^2)$ operations and storage of size $O(n)$. Here, a step-by-step crossvalidation (SSCV) is proposed which needs $O(n)$ operations for each update. A method using binned approximations is under investigation; this would decrease both operations and storage to $O(N)$, where $N$ is a fixed number of gridpoints. One of the main themes is the use of known results for the Nadaraya-Watson estimator and the translation of these to the recursive estimator proposed by Devroye and Wagner. This recursive estimator is used because of its simplicity and its somewhat better convergence properties compared with the estimator proposed by Ahmad and Lin. However, the analysis performed here can be made for the Ahmad and Lin estimator as well.
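
A Python sketch of a recursive kernel regression estimator on a fixed evaluation grid is given below; whether the bandwidth enters the weights exactly as in the Devroye-Wagner or the Ahmad-Lin form is not asserted here, and the bandwidth sequence $h_n \propto n^{-1/5}$ is an illustrative choice.

    import numpy as np

    class RecursiveKernelRegression:
        """Recursive kernel regression on a fixed evaluation grid: each new pair
        (X_n, Y_n) updates running numerator and denominator sums, so no past
        data need be stored."""
        def __init__(self, grid, h0=0.5):
            self.grid = np.asarray(grid, dtype=float)
            self.h0 = h0
            self.n = 0
            self.num = np.zeros_like(self.grid)
            self.den = np.zeros_like(self.grid)

        def update(self, x_new, y_new):
            self.n += 1
            h = self.h0 * self.n ** (-0.2)                          # h_n ~ n^(-1/5)
            w = np.exp(-0.5 * ((self.grid - x_new) / h) ** 2) / h   # Gaussian kernel weight
            self.num += y_new * w
            self.den += w

        def estimate(self):
            return self.num / np.where(self.den > 0, self.den, np.nan)

    rng = np.random.default_rng(5)
    est = RecursiveKernelRegression(grid=np.linspace(0, 1, 51))
    for _ in range(300):
        x = rng.random()
        est.update(x, np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal())
    m_hat = est.estimate()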

Mikael Thuvesholmen, Dept.\ of Mathematical Statistics, Lund University, Box 118, S-221 00 Lund, Sweden mikael@maths.lth.se


Change-points Estimation by Local Polynomial Regression Smoothers

Jyh-Jen H. Shiau, Pai-Hung Yeh, S.-H. Lin, National Chiao Tung University, Taiwan

Consider the problem of estimating an unknown function that is smooth except for some change-points, where discontinuities occur on either the function or its low-order derivatives. Under a fixed design semi-parametric regression model, we propose estimators of the change-point and its corresponding jump size based on maximizing the difference of two one-sided local polynomial regression estimators. We show that the change-point estimator and the jump size estimator are asymptotically normal. We also propose some regression function estimators. One function estimator is proven to be free of the boundary effects problem when conditioned on the change-point estimator. A simulation is conducted to study the performance of the method in finite sample situations. The results look quite promising. The method is also applied to the famous Nile data.

Jyh-Jen Horng Shiau, Institute of Statistics, National Chiao Tung University, Hsinchu, Taiwan 300, R.O.C. jyhjen@stat.nctu.edu.tw


Kernel Smoothing

Ivana Horová, Dept of Applied Maths, Faculty of Science, Czech Republic

The present contribution concerns nonparametric curve estimation procedures. Attention is mainly paid to kernel estimates. As far as the univariate case is concerned the problems of the choice of kernels and of boundary kernels are dealt with. The univariate kernel estimates can be directly generalized to the multivariate ones. In particular, kernel estimates by means of so-called product kernels for a special rectangular design are investigated. But the direct kernel estimates suffer from the curse of dimensionality. In order to overcome this problem the approach based on additive models has been suggested by many authors. In the present contribution these techniques for fitting additive models by means of kernel smoothing technique are described. The computer implementation of these theoretical results and application to climatological and biological data sets are also presented.

Ivana Horová, Janáckovo nám. 2a, 662 95 Brno, Czech Republic horova@math.muni.cz


Orthogonal Series Regression Estimation: Projection vs. Shrinkage

Bernd Droge, Humboldt University, Berlin, Germany

We consider the problem of orthogonal series regression estimation when no prior information on the actual form of the regression curve is available. In a decision-theoretic framework we study the finite sample properties of two classes of nonlinear estimators, based either on a projection or on a shrinkage approach. The considered estimators may be compared with the hard and soft threshold rules in the wavelet terminology, see e.g. Donoho and Johnstone [{\it Biometrika} 81 (1994):425-455]. The whole analysis is carried out under the simplifying assumption of normally distributed observations with a common known variance. Employing the minimax regret principle shows the superiority of the optimal data-dependent shrunk estimator over its projection-type analogue. The latter one behaves similarly to the minimizer of Mallows' $C_p$-criterion. In a simulated data example we illustrate the behaviour of the proposed methods. Possible modifications for practical purposes are also discussed.

Bernd Droge, Institute of Mathematics, Humboldt University, Unter den Linden 6, D-10099 Berlin, Germany droge@mathematik.hu-berlin.de


A Comparison of Uniform and Quadratic Kernels

J. Dong, Michigan Technological University, MI USA

This talk is about estimating cell probabilities of contingency tables using uniform kernel estimators. Contrary to general belief, our simulation results show that the uniform kernel performs better than the quadratic kernel for moderate tables (the number of cells is between 20 and 500). By examining the effects of the uniform kernel on bias and variance separately, we find that, in general, the variance is two or three times larger than the squared bias. If the variance is reduced by 10\% and the squared bias is increased by 10\%, the average sum of squared errors is still reduced. We believe that this is why the uniform kernel outperforms the quadratic kernel; it is a well-known fact that the kernel that minimizes the variance is the uniform kernel. A boundary kernel estimator based on a uniform kernel is proposed. By comparing the new estimator with other boundary kernels, we find the new boundary kernel is also superior.
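
A toy simulation in the spirit of this comparison, assuming an ordered one-dimensional table: cell probabilities are smoothed with a uniform and with a quadratic (Epanechnikov) kernel and the average sum of squared errors is compared. The table size, bandwidth and true probabilities are invented for illustration.

```python
import numpy as np

def kernel_smooth_cells(counts, h, kernel):
    k = len(counts)
    p_hat = counts / counts.sum()
    idx = np.arange(k)
    smoothed = np.empty(k)
    for j in idx:
        u = (idx - j) / h
        if kernel == "uniform":
            w = (np.abs(u) <= 1).astype(float)
        else:                                   # quadratic (Epanechnikov) kernel
            w = np.maximum(1.0 - u ** 2, 0.0)
        smoothed[j] = np.sum(w * p_hat) / np.sum(w)
    return smoothed / smoothed.sum()            # renormalise to a probability vector

rng = np.random.default_rng(3)
k, n, h = 60, 300, 3.0
true_p = np.exp(-0.5 * ((np.arange(k) - k / 2) / 10) ** 2)
true_p /= true_p.sum()

sse = {"uniform": 0.0, "quadratic": 0.0}
for _ in range(500):
    counts = rng.multinomial(n, true_p).astype(float)
    for name in sse:
        sse[name] += np.sum((kernel_smooth_cells(counts, h, name) - true_p) ** 2)
print({name: v / 500 for name, v in sse.items()})
```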

Jianping Dong, Dept. of Math. Sciences, Michigan Tech. Univ. Houghton, MI 49931, USA jdong@mtu.edu


ASC Contributed: Time Series II


Pre-Testing for Unit Root and Bootstrapping Results in Swedish GNP

Thimothy Oke, Uppsala University, Sweden

It is often crucial to be able to identify economic time series as difference stationary (DS) or trend stationary (TS) because of their economic implications. Statistical tests of significance are routinely used to decide whether a series follows a DS or a TS process. Here we investigate some properties of parameter estimates when a preliminary test of significance for a unit root is performed. We extend this idea to a resampling scheme (bootstrapping), in which we analyse the Swedish GNP from 1861 to 1993. Our analysis reveals that a model based on wrong assumptions might well yield parameter estimates with low bias and low risk.

Thimothy Oke, Uppsala University, Department of Statistics, P.O.Box 513, S-751 20 Uppsala, Sweden


On the Use of Predictive Least Square Criterion in Economic Time Series

C.S. Wong, W.K. Li, University of Hong Kong, Hong Kong

In the linear time series literature, several criteria have been proposed to solve the order determination problem. One of these, proposed by Rissanen, is the predictive least squares (PLS) principle. In two different contexts, Hemerly and Davis, and Hannan, McDougall and Poskitt show the strong consistency of the PLS criterion in selecting the order of an autoregression. Their results are based on the assumption that the conditional variance is homogeneous over time. In contrast to linear time series modelling, there is a lack of guidance on order determination for economic time series; even the properties of some standard criteria such as AIC and BIC are unknown. We propose to use the PLS criterion to solve the order determination problem. The consistency property of the PLS criterion can still be obtained after relaxing the constant conditional variance assumption. Applications to some popular econometric models such as ARCH and GARCH models will also be discussed.
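
A minimal sketch of the PLS idea for autoregressive order selection: the criterion accumulates honest one-step-ahead squared prediction errors, with the AR($p$) fit updated as each observation arrives. The starting point, the maximum order and the simulated series are illustrative assumptions.

```python
import numpy as np

def pls_order(y, max_p, t0=30):
    """Predictive least squares: for each candidate order, accumulate
    one-step-ahead squared prediction errors from models refitted on the
    data observed so far."""
    scores = {}
    for p in range(1, max_p + 1):
        err = 0.0
        for t in range(t0, len(y)):
            Y = y[p:t]
            X = np.column_stack([np.ones(len(Y))] + [y[p - j:t - j] for j in range(1, p + 1)])
            beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
            x_next = np.concatenate(([1.0], y[t - np.arange(1, p + 1)]))
            err += (y[t] - x_next @ beta) ** 2
        scores[p] = err
    return min(scores, key=scores.get), scores

rng = np.random.default_rng(4)
y = np.zeros(400)
for t in range(2, 400):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()
print("selected order:", pls_order(y, max_p=5)[0])
```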

Chun Shan Wong, Department of Statistics, University of Hong Kong, Pokfulam Road, Hong Kong cswonga@hkusua.hku.hk


On Consistency of Estimators of Parameters in Censored Autoregressive Processes

Gopal M. Nair, Moses M. Sithole, Mangalam Vasudaven, Curtin University of Technology, Perth, Australia

We propose an estimator of the autoregression parameter of the first-order stationary autoregressive process in which the random errors are independent and identically (but not necessarily normally) distributed and the observed responses are subject to random censoring. Consistency of the new estimator is investigated. The new estimator is also compared with an existing estimator by means of Monte Carlo simulation experiments and found to perform better in most cases.

Moses M. Sithole, School of Mathematics and Statistics, Curtin University of Technology, GPO Box U 1987, Perth WA 6001, Australia sitholem@cs.curtin.edu.au


Estimation of the Parameters of the Bilinear Time Series Model ${BL}(p,0,1,1)$

P. W. A. Dayananda, Griffith University, Brisbane, Australia
L. Billard, University of Georgia, Athens, USA

The problem of estimation of the parameters in the model $y_{t} = \varepsilon_{t} + \sum_{i=1}^{p} a_{i} y_{t-i} + b\varepsilon_{t-1} y_{t-1}$ is considered, where $\{\varepsilon_{t}\}$ is Gaussian white noise with zero mean and variance $\sigma^{2}$. The autocovariances are presented in closed form and the invertibility of the model is examined. It is shown that the parameters satisfy a set of linear relations, and their estimation and properties are discussed. The particular cases $p = 1, 2$ will be considered in detail.
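
For concreteness, a short simulation of the ${BL}(p,0,1,1)$ recursion under assumed parameter values; the sample autocovariances computed at the end are the quantities described in closed form in the paper.

```python
import numpy as np

def simulate_bl(n, a, b, sigma=1.0, burn=200, seed=0):
    """Simulate y_t = eps_t + sum_i a_i * y_{t-i} + b * eps_{t-1} * y_{t-1}."""
    rng = np.random.default_rng(seed)
    a = np.asarray(a, dtype=float)
    p = len(a)
    eps = rng.normal(0.0, sigma, n + burn)
    y = np.zeros(n + burn)
    for t in range(p, n + burn):
        y[t] = eps[t] + a @ y[t - np.arange(1, p + 1)] + b * eps[t - 1] * y[t - 1]
    return y[burn:]

# illustrative parameter values only
y = simulate_bl(2000, a=[0.4, -0.2], b=0.3)
print("variance:", round(float(np.var(y)), 3))
print("autocovariances (lags 1-3):",
      [round(float(np.cov(y[:-k], y[k:])[0, 1]), 3) for k in (1, 2, 3)])
```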

P. W. A. Dayananda, Griffith University, Brisbane Q. 4111, Australia


Trends for Composite Time Series

Ian M. Westbrooke, Statistics New Zealand, Christchurch NZ

Analysts often wish to assess the trend underlying a seasonal time series. Seasonal adjustment packages generally estimate a trend as they decompose the original series into seasonal, trend and irregular factors. However, often a time series is the sum of a number of component series. For example, gross domestic product is made up of a number of production groups within the economy. In some cases seasonal adjustment of the components may give a superior result to adjusting at the top level. I will discuss the options for estimating the trend for composite series, with particular reference to $X12$, the US Bureau of the Census' new version of the popular $X11$ seasonal adjustment program.

Ian M. Westbrooke, Statistics New Zealand, Private Bag 4741 Christchurch NZ imwestbr@stats.govt.nz


On a Family of Moving-Average Trend Filters for the Ends of Series

Alistair G. Gray, Peter J. Thomson, Victoria University of Wellington, New Zealand

Many seasonal adjustment procedures decompose time series into trend, seasonal, irregular and other components using non-seasonal moving-average trend filters. This paper is concerned with the extension of the central moving-average trend filter used in the body of the series to the ends where there are missing observations. For any given central moving-average trend filter, a family of end filters is constructed using a minimum revisions criterion and a local dynamic model operating within the span of the central filter. These end filters are equivalent to evaluating the central filter with unknown observations replaced by constrained optimal linear predictors. Two prediction methods are considered: best linear unbiased prediction and best linear biased prediction where the bias is time invariant. These end filters are compared to the Musgrave end filters used by X-11 and to the case where the central filter is evaluated with unknown observations predicted by global ARIMA models.
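
The sketch below is not the minimum-revisions construction of the paper; it only illustrates the generic idea that a central symmetric filter can be extended to the ends by replacing unavailable observations with predictions from a local model fitted within the filter span (here a crude straight line, with a simple 13-term average standing in for the central filter).

```python
import numpy as np

def trend_with_end_filters(y, weights):
    """Apply a symmetric central moving-average filter; near the ends, extend
    the series with fitted values from a straight line estimated over the
    filter span (a crude stand-in for constrained optimal linear predictors)."""
    y = np.asarray(y, dtype=float)
    m = (len(weights) - 1) // 2
    t = np.arange(len(y))
    head = np.polyval(np.polyfit(t[:2 * m + 1], y[:2 * m + 1], 1),
                      t[0] - np.arange(m, 0, -1))
    tail = np.polyval(np.polyfit(t[-2 * m - 1:], y[-2 * m - 1:], 1),
                      t[-1] + np.arange(1, m + 1))
    y_ext = np.concatenate([head, y, tail])
    return np.convolve(y_ext, weights, mode="valid")

w = np.ones(13) / 13          # stand-in central filter
rng = np.random.default_rng(5)
series = np.cumsum(rng.normal(0.1, 1.0, 120))
trend = trend_with_end_filters(series, w)
print(len(series), len(trend))   # same length: the end filters fill both ends
```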

Alistair Gray, Institute of Statistics and Operations Research, Victoria University of Wellington, PO Box 600, Wellington, New Zealand alistair@isor.vuw.ac.nz


Non-Stationary Bilinear Time Series Models

M. Shelton Peiris, The University of Sydney, Australia

The theory of non-stationary bilinear time series models is developed using time-dependent coefficients. Some regularity conditions are used to establish the stability of the underlying process, and the conditions for invertibility of the model are studied. Prediction results are derived and some examples are given. A real world situation is analysed to show the importance of this theory under the theme of quality improvement through statistical methodology.

M. Shelton Peiris, School of Maths & Stats, The University of Sydney, NSW 2006, Australia


ASC/IMS Contributed: Topics in Probability and Stochastic Processes II


Variants on Variance: Students' Understanding of the Law of Large Numbers

Sue Finch, University of Melbourne, Victoria Australia

People's propensity to rely on information from small samples and their insensitivity to the size of samples have been described by some psychologists as arising from a failure to understand the Law of Large Numbers (LLN). In this study, undergraduate and postgraduate psychology students were asked to rate and then explain their rating of an item ostensibly describing the LLN. Students' explanations were examined in terms of references to different distributions, sample characteristics, distributional features and different measures of variability. A block modelling procedure demonstrated that these features clustered together in a consistent way, characterising different understandings. Ratings of the item as false corresponded to characteristically different explanations than ratings of the item as true. It is suggested that an insensitivity to frequency information or to the inferential purpose of collecting sample data may underlie some of the difficulties students have in explaining the LLN.

Sue Finch, Dept of Psychology, University of Melbourne, Parkville, 3052 sue_finch.psychology@muwaye.unimelb.edu.au


Weak Convergence for Weighted Sums of I.I.D. Implies Strong Convergence for Branching Random Walks

Harry Cohn, University of Melbourne, Australia

Let $\xi_1, \ldots, \xi_n, \ldots$ be a sequence of non-negative, independent, identically distributed random variables, $S_n=\xi_1+\cdots+\xi_n$, and assume that $L(x)=\int_0^x(1-F(u))\,du$ is slowly varying, where $F$ is the distribution function of $\xi_1$. Then, by a result of Feller, there exist constants $\{a_n\}$ such that $S_n/a_n$ converges in probability to $1$. Write $T_n=b_1^{(n)}\xi_1+\cdots+b_n^{(n)}\xi_n$ with $b_1^{(n)}+\cdots+b_n^{(n)}=n$. It turns out that $\lim_{n\rightarrow\infty}\max_{i=1,\ldots,n}b_i^{(n)}/n=0$ suffices for $T_n/a_n$ to converge in probability to $1$. This result is applied to a martingale derived from a branching random walk to yield almost sure convergence of ${1\over c_n(\theta)}\sum_{|u|=n}\exp(-\theta z_u)$, where $\{z_u:|u|=n\}$ are the points of the $n$th generation of the process, $m(\theta)=E[\sum_{|u|=1}\exp(-\theta z_u)]$ and $m=m(0)>1$ are assumed to be finite, and $c_n(\theta)=m(\theta)^n/L_\theta(m^n)$, where $L_\theta$ is the function $L$ corresponding to the limit distribution of ${1\over c_n(\theta)}\sum_{|u|=n}\exp(-\theta z_u)$.

Harry Cohn, Department of Statistics, Melbourne University, Parkville, Victoria 3052, Australia harry@stats.mu.oz.au


Stochastic Calculus with Respect to Fractional Brownian Motion

Wen Dai, Menzies School of Health Research, Darwin, Australia

This talk presents a method of defining stochastic integrals with respect to fractional Brownian motion, which is not a semimartingale. The Itô formula is established for fractional Brownian motion. We then propose and study a stochastic model - the fractional Black-Scholes model - to describe the movement of stock prices in mathematical finance. Stochastic differential equations driven by Brownian motion are traditionally used to model the dynamics of stock prices. It is well known that Brownian motion is a typical short range dependent process. However, in recent years it has become increasingly obvious that long range dependent phenomena are widespread in financial data. It is, therefore, of practical and theoretical importance to take long range dependence into account when studying the fluctuating behaviour of financial markets. The fractional Black-Scholes model is a stochastic differential equation driven by fractional Brownian motion, which is a typical long range dependent process. Since fractional Brownian motion is a generalisation of Brownian motion, the fractional Black-Scholes model includes as a special case the standard Black-Scholes model, which has a Brownian motion as the integrator process in its dynamics and is widely used in mathematical finance. The advantage of the fractional Black-Scholes model is that it accounts for long range dependence in financial markets.


Relationships Between Transform and Algorithmic Methods in the Equilibrium Analysis of M/G/1 Type Queues

Guven Mercankosk, Gopal Nair, Wesley J. Soet, Curtin University of Technology, Perth, Australia

The equilibrium behaviour of $M/G/1$ type queues has been extensively analysed by transform and algorithmic methods. Techniques from both methods derive the equilibrium boundary probabilities, which are required to calculate the remaining equilibrium probabilities recursively. Gail, Hantler and Taylor [to appear in {\it Adv. Appl. Prob.}], in implementing transform methods, consider a specific subspace of bounded solutions to the system of Wiener-Hopf equations associated with the embedded Markov chain. The spectral basis of the shift operator on this subspace is used to construct linear equations in the equilibrium boundary probabilities. The first passage time distribution matrix, central to the algorithmic methods of Neuts [{\it Structured Stochastic Matrices of $M/G/1$ Type and their Applications}, Marcel Dekker, 1981], is the matrix representation of the above shift operator. Relationships between the two methods are explored with specific application to Bailey's Bulk Queue.

Wesley J. Soet, School of Maths, Curtin University of Technology, GPO Box U1987, Perth WA, Australia soetw@cs.curtin.edu.au


Crump-Mode-Jagers Branching Processes and Queueing Systems

Valentin A. Topchij, Institute of Information Technologies and Applied Mathematics, Omsk, Russia
Vladimir V. Vatutin, Steklov Mathematical Institute, Moscow, Russia

We consider a single-server queueing system in which customers arrive in batches. Any kind of dependence between the service time of a particular customer and the sizes and arrival times of the batches of customers entering the system during the service of the customer is allowed. Customers are served according to a modification of the `last-come-first-served' discipline. It is assumed that the mean number of the customers which arrive in the system during the service of a particular customer is equal to one. Using a correspondence of a new type between the queueing system and a Crump-Mode-Jagers branching process, we study the probability of the event that there exists a time in the first busy period, at which the total remaining service time of all the customers being at that moment in the queue exceeds a high level $t$.

V. A. Topchii, Institute of Information Technologies and Applied Mathematics, Andrianova 28, 644077 Omsk, Russia topchij@iitam.omsk.su


Central Limit Theorem for Weights of Evidence

Elena V. Kulinskaya, La Trobe University, Melbourne, Australia
Michael B. Dollinger, Pacific Lutheran University, Tacoma, USA

A guarded weight of evidence for an alternative hypothesis generalizes the test critical function by yielding a real number between 0 and 1 instead of the usual 1-0 value provided by membership or non-membership in a critical region, while preserving a specified significance level. See Blyth and Staudte [{\it Prob. & Statist. Letters} 23 (1995): 45-52], where it is shown that optimal guarded weights of evidence are functions of the likelihood ratio. We prove the following analogue of the C.L.T. for such optimal weights of evidence based on the sample mean used in a location problem for distributions with a monotone likelihood ratio. As the sample size tends to infinity: (1) for a fixed alternative, the weights of evidence tend to the usual Gaussian z-test critical function; and (2) for local alternatives, the weights of evidence tend to the corresponding optimal weight of evidence for the normal distribution.

Elena Kulinskaya, School of Statistics, La Trobe University, Bundoora VIC 3083, Australia STAEK@LURE.LATROBE.EDU.AU


On a Universal Strong Law of Large Numbers for Conditional Expectations

J.R. Leslie, A.S. Kozek, Macquarie University, Australia
E.F. Schuster, University of Texas at El Paso, Texas, USA

A number of generalizations of the Kolmogorov Strong Law of Large Numbers (SLLN) are known, including convex combinations of r.v.'s with random coefficients. In the case of pairs of i.i.d. r.v.'s $(X_1,Y_1),\ldots,(X_n,Y_n)$, with $\mu$ being the probability distribution of the $X$'s, the averages of the $Y$'s for which the accompanying $X$'s are in a vicinity of a given point $x$ may converge with probability 1 (w.p.1) and for $\mu$-a.e. $x$ to the conditional expectation $E\left( Y|X=x \right)$. We consider the Nadaraya-Watson estimator of $E\left( Y|X=x \right)$, where the vicinities of $x$ are determined by window widths $h_n$. Its convergence w.p.1 and for $\mu$-a.e. $x$ under the condition $E|Y| < \infty$ is called a {\em Strong Law of Large Numbers for Conditional Expectations} (SLLNCE). If the convergence holds true for all probability distributions of $X$ it is called {\em universal}. We investigate the minimal assumptions for the SLLNCE and for the universal convergence, and we improve the best known results in this direction. An example will be presented that suggests that the universal SLLNCE may not hold.

Julian R. Leslie, Macquarie University, Department of Statistics, SEFS, NSW 2109, Australia; jleslie@efs.mq.edu.au
http://zen.efs.mq.edu.au/~akozek/.


Thursday 11 July: 16:00-17:50

ASC/RSS Invited: Ordinary Meeting of the Royal Statistical Society


Spatially Varying Two-Parameter Gamma Densities for Frequency Domain Analyses of fMRI Time Series

Nicholas Lange, National Institutes of Health, MD, USA
Scott L. Zeger, Johns Hopkins University, MD, USA

A nonlinear parametric model for brain activation detection by functional magnetic resonance imaging (fMRI) is proposed. The effects of a designed temporal stimulus on the fMRI signal at each brain location in a $36 \times 60$ spatial grid are estimated from discrete Fourier transforms of the observed time series at each location. The frequency domain regression model accommodates unobservable and spatially varying hemodynamic response functions through their estimated convolutions with the global stimulus. This approach generalizes an existing method for human brain mapping in two ways: by allowing hemodynamic responses to vary spatially and by modeling these responses with a flexible, two-parameter family of gamma densities. An fMRI experiment to detect focal activation during primary visual stimulation demonstrates the usefulness of the method.
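
A schematic of the model's main ingredients, fitted here in the time domain for brevity rather than via the discrete Fourier transform used in the paper: a two-parameter gamma density acts as the hemodynamic response, is convolved with a boxcar stimulus, and the gamma parameters and amplitude are estimated by least squares. All design values and parameter ranges are invented for illustration.

```python
import numpy as np
from scipy.stats import gamma

def gamma_hrf(t, shape, scale):
    """Two-parameter gamma density used as a hemodynamic response function."""
    return gamma.pdf(t, a=shape, scale=scale)

def convolved_response(stimulus, shape, scale, dt=1.0):
    t = np.arange(0, 30, dt)
    h = gamma_hrf(t, shape, scale)
    return np.convolve(stimulus, h)[: len(stimulus)] * dt

# boxcar visual stimulus: 20 s on / 20 s off, sampled every second (illustrative)
n = 200
stim = (np.arange(n) % 40 < 20).astype(float)

rng = np.random.default_rng(6)
signal = 2.0 * convolved_response(stim, shape=6.0, scale=0.9)
y = signal + rng.normal(0, 0.5, n)

# grid search over the two gamma parameters; amplitude by least squares
best = None
for shape in np.linspace(3, 9, 13):
    for scale in np.linspace(0.5, 1.5, 11):
        x = convolved_response(stim, shape, scale)
        beta = (x @ y) / (x @ x)
        rss = np.sum((y - beta * x) ** 2)
        if best is None or rss < best[0]:
            best = (rss, shape, scale, beta)
print("estimated (shape, scale, amplitude):", best[1:])
```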

Nicholas Lange, National Institutes of Health, Federal Building, Room 7C04, 7550 Wisconsin Avenue MSC 9135, Bethesda, MD 20892-9135, USA Nicholas.Lange@nih.gov
http://helix.nih.gov/pub/nick/lange.zeger.ps.



ASC-E Invited: All-day Workshop on ENVIRONMENTAL IMPACT ASSESSMENT


Statistical Endpoint Estimation in Ecotoxicology Studies

A. John Bailer, James T. Oris, Miami University, Oxford, Ohio, USA

Common numerical criteria used for setting limits on toxin exposures for the protection of aquatic life and human health include no-observed-effect concentrations (NOECs), lowest-observed-effect concentrations (LOECs) and effective concentrations (ECs). The NOEC and LOEC are design-sensitive indices, and are open to strong criticism. The EC indices are often estimated through the inversion of a parametric regression model fit. One criticism leveled against the EC estimation routines has been that one model cannot be appropriate for the variety of different biological responses that are studied in ecotoxicology. These responses include survival, fecundity, and growth. Generalized linear models provide an overall framework for the analysis of such responses. The proposed EC estimator in this framework is the concentration associated with a specified level of change in the response relative to the control response, often an inhibitory concentration. The construction of this estimator is presented along with illustrative examples.
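
A hedged sketch of this kind of GLM-inversion estimator: a Poisson regression of fecundity counts on concentration is fitted and inverted to obtain the concentration producing a specified relative reduction in the mean response (an IC25 here). The data, link function and dose metric are illustrative assumptions, not those of the paper.

```python
import numpy as np
import statsmodels.api as sm

# hypothetical fecundity counts (young per female) by exposure concentration
conc = np.repeat([0.0, 3.125, 6.25, 12.5, 25.0, 50.0], 10)        # ug/L, illustrative
rng = np.random.default_rng(7)
y = rng.poisson(25 * np.exp(-0.03 * conc))

# Poisson GLM with log link: log(mean) = b0 + b1 * concentration
X = sm.add_constant(conc)
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
b0, b1 = fit.params

def inhibitory_conc(p, b1):
    # concentration at which the mean drops to (1 - p) of the control mean,
    # obtained by solving exp(b1 * c) = 1 - p
    return np.log(1.0 - p) / b1

print("IC25 estimate:", inhibitory_conc(0.25, b1))
```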

A. John Bailer, Dept of Maths & Stats, Miami University, Oxford, Ohio 45056, USA ajbailer@muohio.edu


Exposure, Retention, Elimination and Effect

Geoffrey Berry, University of Sydney, Australia

The health effects of inhaled particles and fibres are related to the intensity and duration of exposure and occur many years after the exposure. For example, the incidence of mesothelioma after exposure to asbestos is proportional to the intensity of exposure (fibres per ml of air) and to the length of exposure, and increases with time since exposure to a power of 3.5. The disease process is related to the dose in the lungs; this depends not only on the exposure level and duration, but also on the respirability of the fibres, and the elimination of this dose over time or its converse, the biopersistence. Differences in disease incidence between different types of fibre are related to the respirability and also to the biopersistence. For example, crocidolite asbestos, which produces a high incidence of mesothelioma following heavy exposure, is much more biopersistent than chrysotile asbestos, which produces few mesotheliomas. Man-made mineral fibres such as glass wool are used as substitutes for asbestos and concern has been expressed on their safety. These fibres are much less biopersistent than asbestos and there is no evidence of mesotheliomas in man after exposure to these fibres. Some of the problems arising in this area will be discussed from the modelling viewpoint.

Professor G. Berry, Department of Public Health and Community Medicine, Edward Ford Building (A27), University of Sydney, NSW 2006, Australia geoffb@pub.health.su.oz.au


Statistical Treatment of Data for Prediction of Ecological Risks

J. Simmonds, L.J. Logan, R.D. Cardwell, Parametrix, Inc., Kirkland, WA, USA
T. Winton, Sinclair K. Merz, J. Hansen, Sydney Water Corporation, Sydney, Australia
Z. Tadic, EnSoft, Sydney, Australia
T. Miskiewicz, M. Donald, AWT EnSight, West Ryde, Australia

This paper describes the approach used to evaluate risks to aquatic life from chemicals discharged by Sydney's coastal sewage treatment plants (STPs). The similarity of the results from predictive models using measured chemical concentrations in the STP effluents, whole effluent toxicity testing, and from field biosurveys indicated that a simple statistical treatment of data provided an adequate assessment of the risks to aquatic life. The study investigated potential risks from chemicals discharged directly into the ocean from coastal sewage treatment plants using predictive models approved by the NSW Environment Protection Authority (EPA). This paper discusses the methods used to predict potential risks for aquatic life that live in coastal waters. (Potential risks to people who swim in and surf in coastal waters or who catch and eat fish from coastal waters, and to marine wildlife (birds and mammals) that live and feed in coastal waters were also assessed but are not discussed.) Aquatic life risk evaluations used a combination of site-specific data, mathematical models, and data obtained from the scientific literature. Chemical concentration data in effluent, sediment and biota were obtained as part of a comprehensive data collection program. The evaluation was conducted in two steps. A screening level assessment identified chemicals of potential concern using intentionally conservative and simplifying assumptions. A detailed assessment was conducted on chemicals identified as being of potential concern to provide more realistic estimates of risk. In the screening level assessment, the upper 95 percent confidence limit of the median (or mean) concentration was used to evaluate chronic (long-term) exposure. The upper 95th percentile concentration of the population was used to evaluate acute (short-term) exposures. Values equal to one-half of the detection limit were used for chemical concentrations below the detection limit. The estimated environmental concentrations were compared to acute and chronic water quality criteria. For the detailed risk assessment, site-specific dilution data were used to model the expected environmental concentrations more realistically. The distributions of surface water concentrations at various distances from the discharge points were estimated using Monte Carlo simulation techniques incorporating the distributions of the effluent concentrations and of effluent dilutions. The detailed aquatic life risk predictions, the whole effluent toxicity tests, and the biosurveys all resulted in predicted regions of increased risk that were within a factor of three of each other at each STP. The similarity of these results indicates that the model based on the whole effluent testing adequately predicted risks to aquatic life.
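
A schematic version of the Monte Carlo step described above, with invented lognormal distributions for effluent concentration and dilution and invented water quality criteria; it only illustrates how percentiles of simulated surface-water concentrations are compared with acute and chronic criteria.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 100_000

# assumed distributions (illustrative only)
effluent = rng.lognormal(mean=np.log(20.0), sigma=0.6, size=n)   # ug/L in effluent
dilution = rng.lognormal(mean=np.log(150.0), sigma=0.4, size=n)  # dilution factor at site

surface = effluent / dilution                                    # ug/L at the site

chronic_criterion = 0.30   # ug/L, illustrative
acute_criterion = 1.00     # ug/L, illustrative

median_exposure = np.median(surface)                  # chronic (long-term) exposure
p95_exposure = np.percentile(surface, 95)             # acute (short-term) exposure

print(f"median = {median_exposure:.3f}, 95th percentile = {p95_exposure:.3f}")
print("chronic quotient:", median_exposure / chronic_criterion)
print("acute quotient:  ", p95_exposure / acute_criterion)
```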

J. Simmonds, Sydney Water Corporation, Sydney NSW Australia


ASC Invited: Application of Continuous Time Stochastic Processes


Diffusions in Markov Chain Monte Carlo

Richard L. Tweedie, Colorado State University, Fort Collins, USA

We consider a continuous time method of approximating a given distribution $\pi$ using the Langevin diffusion $dL_t = dW_t + \frac{1}{2}\nabla \log \pi(L_t)\,dt$. We find conditions under which this diffusion converges exponentially quickly to $\pi$ or does not: in one dimension, these are essentially that for distributions with exponential tails of the form $\pi(x) \propto \exp(-\gamma |x|^{\beta})$, $0 < \beta < \infty$, exponential convergence occurs if and only if $\beta \geq 1$. We then consider conditions under which discrete approximations to the diffusion converge. We first show that even when the diffusion itself converges, naive discretisations need not do so. Perhaps surprisingly, even a Metropolised version need not converge exponentially fast even if the diffusion does. We briefly discuss a truncated form of the algorithm which, in practice, should avoid the difficulties of the other forms.
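
For orientation, a minimal sketch of the discretised Langevin algorithm and its Metropolised version (MALA) for a one-dimensional standard normal target; the step size and target are illustrative only, and, as the abstract emphasises, neither discretisation automatically inherits the convergence of the diffusion.

```python
import numpy as np

def grad_log_pi(x):
    return -x            # illustrative target: standard normal

def log_pi(x):
    return -0.5 * x ** 2

def langevin(n, step, adjust=True, x0=0.0, seed=9):
    """Discretised Langevin chain; with adjust=True this is MALA."""
    rng = np.random.default_rng(seed)
    x, out = x0, np.empty(n)
    for i in range(n):
        mean_fwd = x + 0.5 * step * grad_log_pi(x)
        prop = mean_fwd + np.sqrt(step) * rng.normal()
        if adjust:
            mean_bwd = prop + 0.5 * step * grad_log_pi(prop)
            log_alpha = (log_pi(prop) - log_pi(x)
                         - (x - mean_bwd) ** 2 / (2 * step)
                         + (prop - mean_fwd) ** 2 / (2 * step))
            if np.log(rng.uniform()) < log_alpha:
                x = prop
        else:                      # unadjusted Langevin: always move
            x = prop
        out[i] = x
    return out

samples = langevin(20_000, step=0.5)
print(samples.mean(), samples.var())   # should be roughly 0 and 1
```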

R.L. Tweedie, Department of Statistics, Colorado State University, Fort Collins, CO 80523-0001 USA tweedie@stat.colostate.edu
http://www.stat.colostate.edu/~tweedie/documents/tweediecurrentpapers.html.


Equivalence of Linear Gaussian Systems

B. Goldys,

Let $X_1$ and $X_2$ be two arbitrary continuous time Gauss-Markov processes taking values in ${\bf R}^n$ and ${\bf R}^m$ respectively. We give necessary and sufficient conditions for the absolute continuity of the laws of the observation processes $Y_1=C_1X_1$ and $Y_2=C_2X_2$, where $C_1$ and $C_2$ are arbitrary matrices of appropriate dimensions. The main tool to obtain such conditions is the representation of the processes $Y_1$ and $Y_2$ as stochastic integrals combined with some results from the theory of Wiener-Hopf operators.


Continuous-Time Threshold ARMA Processes and Applications

Peter J. Brockwell, Royal Melbourne Institute of Technology, Melbourne, Australia

Non-linear continuous time ARMA processes, in particular continuous-time analogues of SETARMA and STAR processes with zero delay (Tong, {\it Non-linear Time Series}, 1990), constitute a useful class of time series models defined in terms of stochastic differential equations. Possible discontinuities in the parameters prevent the use of the standard construction of strong solutions of the defining stochastic differential equations. Unique weak solutions can however be constructed for the threshold AR(1) equations (Stramer, Brockwell and Tweedie [{\it J. Appl. Prob.} 33 (1996)]), AR(2) equations (Brockwell and Williams [{\it Adv. Appl. Prob.} 33 (1997)]), and a suitably restricted class of ARMA equations. Conditional moments associated with these ARMA processes are expressible as expected values of functionals of standard Brownian motion. Numerical calculations however require the development of approximating sequences of processes. A simple sequence of approximating processes (Brockwell and Hyndman [{\it Int. J. Forecasting} 8 (1992): 157-173]) can be shown to converge in law to the corresponding threshold ARMA process. The problem of model-fitting for threshold ARMA processes and parameter estimation based on maximization of the ``Gaussian likelihood'' is discussed. Some examples are considered which illustrate the advantages of threshold models over linear models for the same data, and comparisons are made with some other non-linear models.

Peter J. Brockwell, Dept of Stats/OR, RMIT, GPO Box 2476V, Melbourne 3001, Australia pjbrock@rmit.edu.au


IMS Invited: Time Series and Chaos


New Machine Learning Techniques for Nonlinear Time Series Applied to Financial Engineering

Andreas S. Weigend, University of Colorado at Boulder, USA

After briefly setting the stage for nonlinear modeling of noisy time series (examples range from paging in computers, and forecasting the daily energy demand of France, to finance), this talk focuses on two key problems: regime switching and overfitting. We introduce a connectionist architecture called ``gated experts'' for time series prediction, and show that (1) the gating net discovers different regimes underlying the process, (2) the widths associated with each expert characterize the sub-processes, and (3) there is significantly less overfitting compared to single nets, since the experts learn to match their adaptive variances to the (local) noise levels. This can be viewed as matching the local complexity of the model to the local complexity of the data. We compare the performance of the gated experts neural network to standard architectures on several case studies. For financial data, we show on the example of daily sentiment in the Deutschmark/Dollar foreign exchange market, how introducing memory into the gate allows us to capture the dynamics of the switching process, yielding clean and interpretable regimes.

Andreas Weigend, University of Colorado at Boulder
http://22.cs.colorado.edu/~andreas/Home.html.



On the Statistical Inference of a Machine-Generated Autoregressive Model

Howell Tong,

We have obtained the asymptotic bias and the limiting distribution for the Yule-Walker estimator of the autoregressive parameter under considerably weaker assumptions than independence of the noise sequence. Among other things, these findings suggest robustness of the classical results and throw some light on the use of simulations based on pseudo-random numbers in verifying these results.


Mean, Median and Chaos: Bootstrap Hypothesis Tests for Lyapunov Exponents

Rodney C.L. Wolff, Queensland University of Technology, Brisbane, Australia
Qiwei Yao, University of Kent at Canterbury, UK

Chaos is characterised by the tendency of trajectories with nearby initial conditions to diverge exponentially fast, at least in the short term. A global measure of the average rate of exponential divergence inherent in a dynamical system is given by its Lyapunov exponent(s); loosely speaking, chaos is often present if there is a positive exponent. In current practice, when such quantities are estimated from an observed time series, no consideration is made of precision: a positive numerical estimate of the dominant Lyapunov exponent may not be significantly different from zero in a strict statistical sense. We show how one may use percentiles (Efron [{\it Statistica Sinica} 1 (1991): 93-125]) in a bootstrap hypothesis test of the positivity of the Lyapunov exponent. Various extensions of the methodology will be suggested. A gentle introduction to some statistical aspects of chaotic dynamical systems will be given.
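
The sketch below is not the authors' procedure; it only illustrates the overall logic under strong simplifying assumptions: crude local one-step divergence rates are treated as the contributions to a dominant Lyapunov exponent estimate, these contributions are resampled (ignoring their dependence), and a bootstrap percentile limit is used to judge whether the estimate is significantly positive.

```python
import numpy as np

def local_log_rates(x, min_sep=5):
    """Crude local expansion rates: for each point, find its nearest neighbour
    (excluding near-in-time points) and record the log of the one-step growth
    of the separation of the pair."""
    n = len(x) - 1
    rates = []
    for i in range(n):
        d = np.abs(x[:n] - x[i])
        d[max(0, i - min_sep):i + min_sep + 1] = np.inf   # exclude temporal neighbours
        j = int(np.argmin(d))
        num = np.abs(x[i + 1] - x[j + 1])
        if np.isfinite(d[j]) and d[j] > 0 and num > 0:
            rates.append(np.log(num / d[j]))
    return np.array(rates)

def bootstrap_lyapunov(x, n_boot=2000, seed=10):
    rng = np.random.default_rng(seed)
    rates = local_log_rates(x)
    lam_hat = rates.mean()
    boot = np.array([rng.choice(rates, size=rates.size, replace=True).mean()
                     for _ in range(n_boot)])
    return lam_hat, np.percentile(boot, 5)    # "positive" if the 5th percentile > 0

x = np.empty(2000)
x[0] = 0.4
for t in range(len(x) - 1):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])        # logistic map in a chaotic regime
lam, lo = bootstrap_lyapunov(x)
print(f"estimate = {lam:.3f}, bootstrap 5th percentile = {lo:.3f}")
```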

Dr Rodney C.L. Wolff, School of Mathematics, Queensland University of Technology, G.P.O. Box 2434, Brisbane 4001. Australia r.wolff@fsc.qut.edu.au


ASC/IMS Contributed: Nonparametric Smoothing III


Exploratory Methods for Detecting Change Points in Time Series Using Kernel Density Estimation Techniques

Neville Davies, Christian Beardah, Nottingham Trent University, Nottingham, UK

In this paper we consider the problem of determining change points in time series when the generating mechanism for the data is not, in general, linear. Automated kernel smoothing of dependent data using time series cross validation has been proposed by Hart [{\em J. Roy. Statist. Soc. Ser. B}, 56 (1994): 529-542], but the issue of utilising the kernel density approach to detect changes in level, parameter values and variability does not seem to have been addressed in the literature. In that respect our methods are preliminary and exploratory. We estimate a simple autoregressive-type structure of the form $Y_t = f(Y_{t-1}) + \epsilon_t$ for raw data, and make few assumptions about the generating mechanism for the noise. Kernel density techniques depend crucially upon the choice of the so-called smoothing parameter, which is analogous to the bin-width for histograms. We compare the effect of two methods for automatic selection of the smoothing parameter, the simplistic normal scale rule and the more advanced ``solve the equation'' method (Wand and Jones [{\em Kernel Smoothing}. London: Chapman and Hall (1995): 74]). Our methods are implemented in MATLAB, and we use some well-known time series as examples.

Professor Neville Davies, Department of Maths, Stats and OR, Nottingham Trent University, Burton Street, Nottingham NG1 4BU, UK nd@maths.ntu.ac.uk


Conditional Density Estimation

David M. Bashtannyk, Rob J. Hyndman, Monash University, Clayton, Australia

We consider a kernel estimator of conditional density and derive its asymptotic bias, variance and mean square error. We minimize the integrated mean square error to find optimal bandwidths and show that the density estimator has a convergence rate of order $n^{-2/3}$. Finally we derive some simple bandwidth selection procedures and apply them to a data set of Melbourne's daily maximum temperatures from 1981 - 1990.
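
A minimal version of the kernel conditional density estimator with Gaussian kernels and fixed bandwidths $a$ (in $x$) and $b$ (in $y$); the bandwidth values and the heteroscedastic toy data are arbitrary and do not implement the bandwidth selection rules derived in the paper.

```python
import numpy as np

def cond_density(x0, y_grid, x, y, a, b):
    """Kernel conditional density estimate of f(y | x0): a kernel density in y
    weighted by Gaussian kernel weights in x (bandwidths a and b)."""
    wx = np.exp(-0.5 * ((x - x0) / a) ** 2)
    wx /= wx.sum()
    Ky = np.exp(-0.5 * ((y_grid[:, None] - y[None, :]) / b) ** 2) / (b * np.sqrt(2 * np.pi))
    return Ky @ wx

rng = np.random.default_rng(11)
x = rng.uniform(0, 1, 2000)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2 + 0.3 * x)   # heteroscedastic toy data
y_grid = np.linspace(-2.5, 2.5, 251)
f = cond_density(0.25, y_grid, x, y, a=0.05, b=0.10)
print("integrates to about", round(float(f.sum() * (y_grid[1] - y_grid[0])), 3))
```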

David M. Bashtannyk, Department of Mathematics, Monash University, Clayton, Victoria 3168, Australia davidb@gizmo.maths.monash.edu.au
http://www.maths.monash.edu.au/~hyndman/papers.html.



Convolution and Interpolation: Competitors with Local Polynomial Smoothing

Peter G. Hall, Berwin A. Turlach, Australian National University, Canberra, Australia

Local polynomial smoothing enjoys a variety of very attractive features. It is often viewed as superior to convolution and interpolation methods, which offer greater numerical stability but inferior theoretical performance. In this paper we show that modifications to convolution and interpolation techniques produce effective competitors with local polynomial smoothing, enjoying similar bias, variance and mean squared error properties but without the downside of numerical instability. The methods suggested here may be employed as the basis for empirical wavelet transforms of ungridded data.

Berwin A. Turlach, Statistics/CMA, ANU, Canberra ACT 0200, Australia berwin@alphasun.anu.edu.au


Diagnosing Discontinuity in Nonparametric Regression

Adrian Bowman, University of Glasgow, Glasgow, Scotland
Alun Pope, University of Newcastle, Newcastle, Australia

We present a diagnostic for flagging discontinuities in one-dimensional nonparametric regression. We consider a situation in which the data are noisy observations of a function defined on an interval, and the form of the function is unspecified, except that it may have a finite (but unknown) number of discontinuities, and between discontinuities it is smooth. We assume that our goal is nonparametric (eg kernel-smoothing) estimation of the underlying function. It is known that if there really are discontinuities then it is important to allow for them: smoothing through discontinuities gives very poor mean integrated squared error, because of very poor pointwise mean square error in the neighbourhood of discontinuities. We propose a diagnostic test with an associated graphical representation, which we hope will enable users of nonparametric regression to identify the presence of discontinuities.

Alun Pope, Department of Statistics, University of Newcastle, NSW 2308, Australia pope@frey.newcastle.edu.au


Nonparametric Smoothing Subject to Monotonicity, Convexity or Concavity

David Dole, The University of Western Australia, Australia

In many of the applied sciences, it is common that the forms of empirical relationships are almost completely unknown prior to study. Nonparametric smoothing methods have considerable potential to ease the burden of model specification that a researcher would otherwise face in this situation. Occasionally the researcher will know the signs of the first or second derivatives, or both. This paper develops a smoothing method that can incorporate this kind of information. It is shown that nonnegative cubic regression splines offer a simple and effective approximation to monotonic, convex or concave smoothing splines. Monte Carlo results show that this method has a lower approximation error than either unconstrained smoothing splines or the suitably constrained fitted values of unconstrained smoothing splines. This method should have wide application, especially in applied econometrics.

David Dole, Agricultural and Resource Economics, The University of Western Australia, Nedlands, WA 6907, Australia ddole@uniwa.uwa.edu.au


ASC Contributed: Topics in Regression and Generalised Linear Models III


Bayesian Variable Selection in Generalised Linear Models

Roderick D. Ball, The Horticulture and Food Research Institute of New Zealand Ltd

We describe a Markov chain Monte Carlo sampling based approach to model selection which extends the Gibbs sampling based model selection method (SSVS) of George and McCulloch [{\it JASA} 88, 423, 881-889, 1993] to the class of generalised linear models with unknown dispersion factor. Convergence of SSVS can be slow because the regression parameters at one step of the algorithm will often have low posterior probability in the new model when parameters are correlated. For linear models Smith and Kohn (preprint, 1995) solve this problem using a sampler based on analytically evaluating the marginal distribution of the model selection parameters. For generalised linear models, however, the integrals are intractable. We give an alternative approach based on approximate distributions and Metropolis rejection. A putative new model is chosen using an approximate distribution and tested by Metropolis rejection after new regression parameters are sampled. An unknown dispersion parameter can be modelled explicitly.

Roderick D. Ball, HortResearch, P.B. 92169, Auckland, New Zealand rball@hort.cri.nz


Additive Extensions to Generalized Estimating Equation Methods

C. J. Wild, University of Auckland, New Zealand
Thomas W. Yee, Massey University, New Zealand

This presentation will be based on Wild and Yee [{\it J. Roy. Statist. Soc. Ser. B} 56 (1996)]. The methods underlying vector generalized additive models (Yee and Wild [{\it J. Roy. Statist. Soc. Ser. B} 56 (1996)]) are extended to provide additive extensions to the generalized estimating equations approaches to multivariate regression problems of Liang and Zeger [{\it Biometrika} 73 (1986): 13-22] and the subsequent literature. The methods are illustrated by two examples using correlated binary data concerning the presence/absence of several plant species at the same geographical site, and chest pain in male workers from a workforce study. With minor modification, the extensions are shown to apply to longitudinal data.

T. Yee, Department of Statistics, Massey University at Albany, Private Bag 102904, Northshore Mail Center, New Zealand t.w.yee@massey.ac.nz


Minimum Disparity Estimation in Linear Regression Models: Robustness, Distribution and Efficiency

Ro J. Pak, Taejon University, Taejon, Korea

Basu and Lindsay [{\it Ann. I. Stat. Math.}, {\bf 46}, 683-705] developed the minimum disparity estimation, a large subclass of density based minimum distance estimation, of which minimum Hellinger distance estimation by Beran [{\it Ann. Stat}., {\bf 5}, 445-453] is a part. This talk considers the minimum disparity estimation in linear regression models. The estimators are defined as statistical quantities which minimize the blended weight Hellinger distance between a weighted kernel density estimator of the errors and a smoothed model density of the errors. It is shown that if the weights of the density estimator are appropriately chosen, the estimators of the regression parameters are robust, asymptotically normally distributed, and efficient.

Ro J. Pak, Department of statistics, Taejon University, Dong-gu, Youngun-dong, Taejon, Korea, 300-716 davidp@chollian.dacom.co.kr


Testing for a Relationship Between Mean and Variance

Colleen Hunt, Patty Solomon, University of Adelaide, Australia

Variance components in nonlinear models and generalised linear models are currently receiving attention in the literature. Such models are typically complex to handle, and simple methods to determine when complex models are necessary would be useful. A score test is proposed for testing the presence of an arbitrary relationship between the group means and variances in longitudinal data. The true likelihood for our model is intractable and the test is based on a Laplace expansion of the true likelihood. Simulations over a wide range of parameter values will illustrate how the score test behaves. Finally, the test will be applied to data on CD4 cell counts from the San Francisco Men's Health Study, and to data on blood pressure.

Colleen Hunt, Department of Statistics, University of Adelaide, S.A. 5005, Australia chunt@stats.adelaide.edu.au


A Generalized Estimating Equations Approach to Estimating Dispersion in a Multiplicative Errors Model

J.M. Kelly, A.N. Pettitt, The Queensland University of Technology, Queensland, Australia

For positive data, the gamma distribution provides a convenient modelling tool. When a simple multiplicative errors model for regression $Y_i=\exp(x_i^T\beta)\varepsilon_i$, where $\varepsilon_i$ is gamma with $E(\varepsilon_i)=1$, is considered, a log transformation of the response followed by least squares provides estimates for the regression parameters (assumed orthogonal to the constant term) which are almost as efficient as the maximum likelihood estimates given by the standard GLM analysis (Firth [{\it J. Roy. Statist. Soc. Ser. B} 50 (1988): 266-268]). When the dispersion parameter varies systematically from case to case, that is $E(\varepsilon_i)=1$, ${\rm var}(\varepsilon_i)=\phi_i$, then maximum likelihood depends upon functions of $(y_i,\log y_i)$, whereas quadratic estimating equation techniques involve $(y_i,y_i^2)$, gaining simplicity and robustness at the expense of some inefficiency. In this talk we consider estimating equations based on $(y_i,y_i^\nu)$, incorporating both the above cases with $\nu=2$ and $\nu\rightarrow 0$, as well as $\nu=1/3$, the Wilson-Hilferty normalisation transformation. The robustness of the above techniques with respect to misspecification of the gamma errors is investigated in terms of efficiency, as well as the interpretation of the resulting estimates.

J.M. Kelly, The School of Mathematics, Queensland University of Technology, GPO Box 2434, Brisbane Q 4001, Australia j.kelly@fsc.qut.edu.au


Covariate Transformation Diagnostics for Generalized Linear Models

Andy H. Lee, John S. Yick, Northern Territory University, Darwin, Australia

Transformations of the covariates are commonly applied in regression analysis. When a parametric transformation family is used, the maximum likelihood estimate of the transformation parameter is usually sensitive to perturbations of the data. Diagnostics are derived to assess the influence of cases on the transformation parameter. A logistic regression example is presented to illustrate the usefulness of the proposed diagnostics.

Andy H. Lee, Faculty of Science, Northern Territory University, Darwin, NT 0909 Australia A_Lee@bligh.ntu.edu.au


ASC Contributed: Topics in Design and Sampling


Fuzzy Sets and their Application in the Field of Acceptance Sampling

Vadim A. Lapidus, PRIORITY Centre, Russia

This paper treats specific methods for acceptance sampling based on the ideas of fuzzy sets and randomization tests. These methods belong to the flexible methods of statistical quality control. They make it possible to construct sampling procedures with a family of operating characteristics having prescribed properties. Their application improves the relationship between a manufacturer and a consumer, considerably enhancing the lexicon of quality requirements while keeping the application of the decision-making rules unambiguous. For this purpose the following two types of fuzzy sets are employed: normative (specified) and random. The former assists in defining the quality requirements, the latter aids in processing the inspection results. Conformance or nonconformance decisions are based on the inclusion (or exclusion) of a random fuzzy set in the normative one.

Vadim A. Lapidus, PRIORITY Centre, 603603, 213a, Moskovskoye shosse, Nizhny Novgorod, Russia cmc@prior.kis.nnov.su


Estimation of Parameters Using Ranked Set Sampling

Dinesh S. Bhoj, Rutgers University, Camden, USA

Minimum variance linear unbiased estimators of the parameters of some distributions are obtained by using ranked set sampling. These estimators are compared with those obtained by the ordered least squares method. The comparison shows that the relative precisions of our estimators are higher than those of the ordered least squares estimators. It is also shown that the relative precision of our estimator for the population mean is higher than that of the usual estimator based on a ranked set sample.
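
A small simulation illustrating why ranked set sampling gains precision for the mean, assuming perfect ranking; the distribution, set size and number of cycles are arbitrary, and the comparison here is against simple random sampling rather than the ordered least squares estimators discussed above.

```python
import numpy as np

def rss_sample(rng, draw, set_size, cycles):
    """One balanced ranked set sample: in each cycle, draw `set_size` sets of
    `set_size` units, rank each set, and keep the i-th order statistic from
    the i-th set (perfect ranking assumed)."""
    out = []
    for _ in range(cycles):
        for i in range(set_size):
            out.append(np.sort(draw(rng, set_size))[i])
    return np.array(out)

def simulate(n_rep=5000, set_size=4, cycles=5, seed=12):
    rng = np.random.default_rng(seed)
    draw = lambda rng, n: rng.normal(10.0, 2.0, n)    # illustrative population
    n = set_size * cycles
    srs_means = [draw(rng, n).mean() for _ in range(n_rep)]
    rss_means = [rss_sample(rng, draw, set_size, cycles).mean() for _ in range(n_rep)]
    return np.var(srs_means) / np.var(rss_means)      # relative precision of the RSS mean

print("relative precision (SRS var / RSS var):", round(simulate(), 2))
```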

Dinesh S. Bhoj, Rutgers University, Camden, NJ 08102, USA dbhoj@crab.rutgers.edu


Problems in the Fitting of and the Design for Semivariogram Models

Werner G. Mueller, University of Economics and Business Administration, Vienna, Austria

Data from semivariogram clouds are usually highly correlated. In practice, these correlations are always neglected when a semivariogram model is fitted to such data. It will be demonstrated that there can be considerable differences in the fitted models depending on whether or not iteratively reweighted GLS is used for estimation. This also has an impact on the question of design, i.e. where to locate observation sites in order to obtain the most accurate estimates of the semivariogram parameters. A standard approach is to spread out observations uniformly. It will be shown that techniques adopted from optimum design theory, which take into account the correlation structure of the data, lead to substantial improvements in the precision of the estimators.
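
A minimal illustration, under assumed data and an exponential model, of how weighting (here crudely by the number of pairs per distance bin, still ignoring the correlations) can change the fitted semivariogram relative to ordinary least squares; it does not implement the iteratively reweighted GLS procedure discussed in the talk.

```python
import numpy as np
from scipy.optimize import curve_fit

def empirical_semivariogram(coords, z, bins):
    """Classical (Matheron) semivariogram estimate on distance bins."""
    d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    g = 0.5 * (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)
    d, g = d[iu], g[iu]
    h, gamma, npairs = [], [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        m = (d >= lo) & (d < hi)
        if m.sum() > 0:
            h.append(d[m].mean()); gamma.append(g[m].mean()); npairs.append(m.sum())
    return np.array(h), np.array(gamma), np.array(npairs)

def exp_model(h, nugget, sill, a):
    return nugget + sill * (1.0 - np.exp(-h / a))

rng = np.random.default_rng(13)
coords = rng.uniform(0, 10, (150, 2))
z = np.sin(coords[:, 0]) + rng.normal(0, 0.3, 150)      # toy spatial data

h, gamma, npairs = empirical_semivariogram(coords, z, bins=np.linspace(0, 5, 11))
p_ols, _ = curve_fit(exp_model, h, gamma, p0=[0.1, 0.5, 1.0])
p_wls, _ = curve_fit(exp_model, h, gamma, p0=[0.1, 0.5, 1.0], sigma=1 / np.sqrt(npairs))
print("OLS fit:", np.round(p_ols, 3), " weighted fit:", np.round(p_wls, 3))
```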

Werner G. Mueller, Department of Statistics, University of Economics, Augasse 2-6, A-1090 Vienna, Austria werner.mueller@wu-wien.ac.at


Level Changes and Trend Resistance on Replacement in Asymmetric Orthogonal Array

P.C. Wang, M.H. Chen, National Central University, China

When level changes and trend-resistant effects are considered in factorial experiments, the order of the experimental runs is essential. Under such considerations, it is helpful to use the technique of orthogonal arrays to obtain experimental runs and their orders simultaneously. To use this technique, one needs to know the level changes and trend resistance in the columns of the arrays to arrive at an appropriate design. Most research on this topic has focused on the case of symmetric factorials or on the trend-resistant run orders of asymmetric factorials. Recently Wang ({\it Statistica Sinica}, 1996, in press) derived level changes and trend resistance in mixed 2- and 4-level arrays. We extend this to a more general case.

P. C. Wang, Institute of Statistics, National Central University, ChungLi, Taiwan 32054 pcwang@sparc20.ncu.edu.tw


Analysis of a Geochem. Survey of the North Sea Petroleum Province

Thomas K. Wignall, University of Guam, USA

The survey data were the result of a 1967 gas-bubble survey of the North Sea by Dr. Bradley of the Pan American Petroleum Research Center in Tulsa, Oklahoma. Dr. Bradley observed gas bubbles which appeared to be from the sea floor. He divided the North Sea into one-square-mile quadrats and recorded the gas-bubble count in each, dividing the North Sea into three areas: (3) North, (2) Center, and (1) South of Middlesborough on the north-east coast of the British Isles. The results imply significant differences between the regions. The Bonferroni-$t$ results are: (i) South v. Center, $t_{1,2} = 1.697$, $p$-value = 0.028358 (insignificant difference); (ii) $t_{1,3} = 4.962$, $p$-value = 0.00023 (highly significant); and (iii) $t_{2,3}$, $p$-value = 0.00890 (significant). The results may be verified from my new Bonferroni-$t$ tables with 3 tests and 20 degrees of freedom. The next step was to test the theory that the higher the count, the more likely it would be that a drill-test would yield a discovery. Analysis (Wignall) revealed multiple anomalies, which were subsequently drilled and resulted in oil and gas discoveries.

Thomas K. Wignall, University of Guam, USA


Latin Squares as Response Surface Designs

Deborah J. Street, University of Technology, Sydney, Australia
Derek Goh, University of New South Wales, Sydney, Australia

Hunter [in {\it Design and Data Analysis} (1987): 163-170] has shown that for Latin squares of order 3 the traditional definition of isomorphism is inappropriate if the squares are used as response surface designs. We extend his work to squares of order 4 and find that there is no one square that can be recommended for all polynomial models.

Deborah J. Street, School of Mathematical Sciences, University of Technology, Sydney, Broadway, NSW 2007 Australia deborah@maths.uts.edu.au


Using Process Variogram to Find Optimal Sampling Point when Sampling not Instantaneous

Ian S. Gomm, Neil S. Barnett, Victoria University of Technology, Victoria, Australia

In the study of continuous processes the most appropriate time at which to sample is a question that needs answering. This paper addresses the case of systematic sampling of certain deterministic flows. The stochastic behaviour of the continuous stream is characterised by its variogram. Linear and exponential variograms are considered.

Ian Gomm, Department of Computer and Mathematical Sciences, Victoria University of Technology, P.O.Box 14428 MCMC, Melbourne, 8001 Australia isg@matilda.vut.edu.au
ftp://matilda.vut.edu.au/pub/papers/isg/sisc_paper.ps.


Friday 12 July: 08:30-10:20

ASC-E Invited: Sampling and Remediation of Contaminated Environments


Contaminated Sites - Statistical Bases for Investigation and Remediation

William R. Ryall, Longmac Environmental Pty Ltd, NSW, Australia

The remediation of contaminated sites in Australia is relatively new, and to date has mostly involved excavation of the contaminants and their burial in landfills. With few exceptions, the industry has been slow to adopt scientific method in the investigation and remediation of contaminated sites, which has taken place only since the early 1990's. There is an emerging understanding of the role statistics must play in ensuring contaminated sites are investigated adequately and are remediated satisfactorily. The high costs of investigation, including chemical analyses of samples, and of remediation demand the most economical use of resources, and the only defensible way to achieve these economies is through the use of statistically based procedures. Such procedures are now being employed routinely for the design of optimum sampling parameters, to establish clean-up criteria and to defend attainment of these criteria. Data from contamination investigations are typically correlated and censored, and the common use of parametric statistical methods gives rise to errors in interpretation. The methods of geostatistics have not been applied routinely in the clean-up of contaminated sites, but useful information has been gained in the estimation of volumes and concentrations of contamination. The use of statistically based data to defend attainment of clean-up criteria is again an area of emerging importance.

William R. Ryall, Longmac Environmental Pty Limited, PO Box 940, Crows Nest NSW 2065, Australia longmac@ozemail.com.au


Detection and Estimation of an Intervention's Impact on Resident Biota

A. H. El-Shaarawi, National Water Research Institute, Burlington, Ontario, Canada
J. E. Zapotosky, IIT Research Institute, Chicago, IL, USA
R. J. Snider, Michigan State University, USA

An extensive {\em in situ} monitoring program to examine possible effects on biota of the electromagnetic (EM) fields produced by the US Navy's Extremely Low Frequency (ELF) Communications System has been completed. The ELF system consists of two transmitting facilities, one of which is located in northern hardwood forests on the Upper Peninsula of Michigan, that synchronously broadcast messages using frequency modulated signals centred at 76Hz. The Michigan transmitter became fully operational in 1989. Biological and ecological variables were measured at about the same time at treatment and control sites, before and after the transmitters became fully operational. This paper considers the problem of modelling the spatial and temporal changes in total collembola counts in soil samples collected at ten fixed plots within the test and control sites for the Michigan facility. Since 1986 and for each site, the number of collembola was determined at 12 or 13 equally spaced time intervals during the yearly sampling season. Quasi-likelihood based methods are used to model and make inferences about the effects of the intervention in the presence of overdispersion and autodependence.

A. H. El-Shaarawi, National Water Research Institute, Burlington, Ontario, Canada


Sampling and Characterizing of Degraded Soils at Different Scales

A. Stein, Wageningen Agricultural University, The Netherlands

To sample and fully characterize degraded soils, modern spatial statistical procedures are applied. As concerns the available data, a distinction can be made between observations which are considered to be realizations from a random field, such as concentrations of a contaminant, and observations which have a random location (or centre point), such as pores and cracks with contaminating fluids. Both procedures have their own merits and can be applied for complementary spatial analyses. In the first part of this presentation elements of geostatistics will be applied, focusing on spatial sampling. Based on an initial random scheme, optimization in the presence of prior information will be shown. Next, attention will be given to spatio-temporal sampling for processes which develop in space and time [Stein, A., ``Analyzing variability in space and time using geostatistical procedures,'' {\it Statistica Neerlandica}, submitted]. Finally, attention will be given to the analysis of spatial point patterns of methylene-blue coloured soils. Soils of different texture under different forms of land use will be compared, at several depths [Stein, A., Baddeley, A.J., Droogers, P. and Booltink, H., ``Point processes and random sets for analyzing patterns of methylene-blue coloured soil,'' in preparation]. In interactive Geographical Information Systems an increasing number of facilities is available to combine, analyze and display spatial information [Stein, A., Staritsky, I.G., Bouma, J. and van Groenigen, J.W. (1995), ``Interactive GIS for Environmental Risk Assessment,'' {\it Int. J.}].

A. Stein, Wageningen Agricultural University, PO Box 37, 6700 AA Wageningen, The Netherlands


ASC Invited: Markov Chain Monte Carlo


Model Comparison in Longitudinal Generalized Linear Models with Random Effects

Siddhartha Chib, Edward Greenberg, Washington University, St Louis, USA
Rainer Winkelmann, University of Christchurch, Christchurch, NZ

We consider the question of model comparison for hierarchical generalized regression models with multiple cluster-specific random effects. Bayes factors are computed by Markov chain Monte Carlo, adapting a method due to Chib (1995). A fast and efficient procedure for sampling the random effects using the Metropolis-Hastings algorithm is proposed. The methods are applied after a simple reparameterization that is related to the idea of hierarchical centering. Procedures for computing the maximum likelihood estimate by simulation are also developed. The techniques are applied to two large longitudinal sets of count data with a Poisson link function.
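
In outline, the computation rests on the marginal likelihood identity of Chib (1995): for any point $\theta^{*}$, typically one of high posterior density,
$$\log m(y) \;=\; \log f(y \mid \theta^{*}) + \log \pi(\theta^{*}) - \log \hat{\pi}(\theta^{*} \mid y),$$
where the posterior ordinate $\hat{\pi}(\theta^{*}\mid y)$ is estimated from the MCMC output; the Bayes factor for two models is then the ratio of their estimated marginal likelihoods $m_1(y)/m_2(y)$.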

Siddhartha Chib, John M. Olin School of Business, Washington University, Campus Box 1133, 1 Brookings Drive, St Louis, MO 63130 chib@simon.wustl.edu


A Minimal Conditioning Approach to Autoregressive Modelling with Level Shifts

Christopher K. Carter, The Hong Kong University of Science & Technology, Hong Kong
Richard Gerlach, Robert Kohn, Australian Graduate School of Management, Sydney, Australia

A Bayesian approach is presented for analysing time series using an autoregressive model with both outliers and level shifts. All aspects of the model are estimated simultaneously using an efficient Markov chain Monte Carlo sampling scheme. Our sampler differs from the Gibbs sampler by generating from several reduced conditional distributions. We show, using both simulated and real data, that our sampler converges much faster than the Gibbs sampler for models with unknown autoregressive parameters and level shifts. We show how our methods can be extended to a number of related models.

Christopher K. Carter, Department of Information and Systems Management, The Hong Kong University of Science & Technology, Clear Water Bay, Kowloon, Hong Kong imchrisc@usthk.ust.hk


Do Six Metur Meta?

K. Mengersen, Queensland University of Technology, Brisbane, Australia

Meta-analysis, a new name for an old practice of combining information from different sources, is an important tool in a wide range of sciences. Moreover, it falls naturally into a Bayesian framework which, with the computational liberation that MCMC provides, enables the particular problem to be modelled as closely as possible or practicable. From this platform, we will explore the role of mixture distributions, which provide a convenient parametric framework in which to extend the basic meta-analysis model in any way. Two particular applications will be pursued. The first explores the statistical role in the legal question of causation through the extrapolation of population-based overall risk (assessed through combination of population-based studies) to the individual situation. The second aims to estimate genetic distances between breeds of cattle using microsatellite data, in order to estimate evolutionary times and ultimately choose breeds with the widest range of genetic variability. Our use of MCMC demands a discussion of the algorithm and its performance. In this presentation we will focus on convergence, commenting on the desirability of a geometric rate of convergence, comparing theoretical computable bounds on this rate, and discussing various empirical estimates.

Kerrie Mengersen, School of Mathematics, Gardens Point Campus, Queensland University of Technology, GPO Box 2434, Brisbane 4001, Queensland, Australia k.mengersen@qut.edu.au


IMS Invited: Resampling - I


Uses and Implementation of Double Bootstraps

David V. Hinkley, Valerie J. Ventura, University of California, Santa Barbara, USA
Anthony C. Davison, University of Oxford, UK

A double bootstrap calculation is a bootstrap calculation applied to a bootstrap calculation. The two main uses are in improving accuracy of a bootstrap procedure, and in bootstrap diagnostics. The excessive simulations required by naive double bootstraps are avoided either by theoretical approximation (usually saddlepoint approximation) or by Monte Carlo recycling. For some diagnostic methods it is also helpful to smooth nonparametric bootstrap samples before resampling from them. Various aspects of these methods will be described and illustrated.
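
One standard use, adjusting a bootstrap bias estimate with a second level of resampling, can be sketched naively as follows (illustration only, with hypothetical helper names, and without the recycling or saddlepoint devices mentioned above):

    import numpy as np

    def double_bootstrap_bias_adjusted(x, stat, B1=200, B2=50, seed=None):
        # Naive nested double bootstrap: the second level estimates the bias of
        # the first-level bootstrap bias estimate (no recycling shortcuts).
        rng = np.random.default_rng(seed)
        x = np.asarray(x)
        n = len(x)
        t0 = stat(x)
        t_star = np.empty(B1)
        b_star = np.empty(B1)
        for i in range(B1):
            xs = rng.choice(x, size=n, replace=True)            # first-level resample
            t_star[i] = stat(xs)
            tss = [stat(rng.choice(xs, size=n, replace=True))   # second-level resamples
                   for _ in range(B2)]
            b_star[i] = np.mean(tss) - t_star[i]
        b1 = t_star.mean() - t0                # first-level bias estimate
        b_adj = 2.0 * b1 - b_star.mean()       # double-bootstrap adjusted bias
        return t0 - b_adj                      # bias-adjusted estimate

    # Example: the plug-in variance of a small sample is biased downwards.
    x = np.random.default_rng(1).normal(size=15)
    print(double_bootstrap_bias_adjusted(x, lambda v: v.var(), seed=2))

The cost is B1 times B2 evaluations of the statistic, which is exactly the burden the approximation and recycling devices above are designed to avoid.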

Professor David V. Hinkley, Department of Statistics and Applied Probability, University of California, Santa Barbara, CA 93106, USA hinkley@pstat.ucsb.edu


Asymptotic Approximations in Resampling

John Robinson, University of Sydney, Australia

We will review approximations for resampling methods based on permutation tests and the bootstrap. Those based on permutation tests are available for a restricted set of problems but they give exact results, whereas the bootstrap is widely applicable but is an approximation. Monte Carlo methods provide an approximation in each case and asymptotic approximations of either the Edgeworth or saddlepoint varieties can be used in some cases either to give analytic approximations to replace the use of computer intensive methods or, in the case of bootstrap or Monte Carlo approximations, to give results on accuracy. Further results on efficiency or power of the resampling methods can be obtained using this approach. Edgeworth approximations have been used to show that, for example, the studentised bootstrap is second order accurate and saddlepoint methods can be used to show that $p$-values in these cases have better than first order relative accuracy in regions of moderate deviations.

John Robinson, School of Mathematics and Statistics, University of Sydney, NSW 2006, Australia robinson_j@maths.su.oz.au


What Can Make Resampling Work?

Willem R. van Zwet, University of Leiden, The Netherlands and University of North Carolina at Chapel Hill, USA

Resampling is an exceptionally flexible methodology that comes into its own only if full use is made of its flexibility. However, the choice of a good bootstrap method in a particular case requires rather precise information about the structure of the problem at hand. This knowledge may not be available, and if it is, there are often alternative methods which may work just as well. It is therefore difficult to provide hard and fast rules for the use of the resampling methodology. However, certain generally helpful pointers can be given.

Prof W. R. van Zwet, Dept of Mathematics & Computer Science, Leiden University, PO Box 9512, 2300 RA Leiden, The Netherlands vanzwet@wi.leidenuniv.nl


ASC Contributed: Topics in Statistical Inference III


Some Problems in Group Decision Making

Prem K. Goel, Ohio State University, USA
Chandra M. Gulati, University of Wollongong, NSW, Australia

Consider a group of $k$ decision makers $(k > 1)$. It is assumed that each of the $k$ members can collect information about the state of the world through possibly different characteristics of the same phenomenon. Team member $j$ $(j=1, 2, \ldots, k)$ only gets to look at the $j$-th component of the observation vector $X$. The joint distribution of the observation vector is assumed known. It is assumed that the team's cost of taking an observation vector at each stage of the sampling process is $C$ $(C > 0)$. At stage $n$ $(n=1, 2, \ldots)$, after an observation $x$ is taken, the team may choose either to stop sampling and accept a reward based on the value of the last observation, or else reject it and continue sampling. The group's reward is compared with the reward of individual decision makers.

Prem K. Goel, Ohio State University, USA


Guarded Weights of Evidence and Acceptability Profiles Based on Signs: I. Permutation Arguments

Michael B. Dollinger, La Trobe University, Melbourne, Australia and Pacific Lutheran University, Tacoma, USA
Elena V. Kulinskaya, Robert G. Staudte, La Trobe University, Melbourne, Australia

A test for a hypothesized parameter may be generalized by replacing the indicator function of the test critical region with a function {\it (weight of evidence for the alternative)} having values in [0,1] and estimating the value 1 when the alternative is true and 0 otherwise. It is a {\it guarded} weight of evidence if a bound is placed on the Type I risk. Inversion of a family of guarded weights of evidence yields a {\it profile of acceptability} for parameter values which is more informative than the traditional confidence interval. The optimal (minimizing Type II risk) guarded weights of evidence for a simple alternative depend on the likelihood ratio (Blyth and Staudte [{\it Prob. & Statist. Letters} 23 (1995): 45-52]). Acceptability profiles based on the likelihood ratio of the sign statistic are found here for the centre of a symmetric distribution using permutation arguments and are compared with traditional sign statistic confidence intervals.

Robert G. Staudte, School of Statistics, La Trobe University, Bundoora VIC 3083, Australia STARGS@LURE.LATROBE.EDU.AU


Approximation Methods to Reliability Function Based on Expert Opinions

Yasuhide Shinohara, Tadashi Dohi, Shunji Osaki, Hiroshima University, Japan

Consider lifetime data which obey an exponential distribution function with an unknown parameter. Ordinarily, in order to carry out reliability analyses in such a case, we would assume a probability law for the parameter and express the reliability function, utilizing expert opinions and informed judgements. If the representative expert opinions are summarised by their mean and variance and the parameter obeys a Weibull distribution or a truncated normal distribution, we can easily obtain an expression for the reliability function by taking the Laplace transform of the parameter distribution. However, if the parameter follows a log-normal distribution, it is not easy to obtain an analytical expression for the reliability function. In this paper, we propose five methods to approximate the reliability function with a log-normally distributed parameter and compare them in terms of precision. The approximation methods are classified into three types: those based on the Poisson-lognormal distribution, the Taylor series expansion and the inverse-Gaussian distribution. Finally, we show the usefulness of the proposed methods in numerical examples and examine the influence of the expert opinions on the reliability evaluation.
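
In outline, with lifetime $T$ exponential given $\Lambda = \lambda$ and $\Lambda$ having density $g$ summarising the expert opinions, the predictive reliability function is the Laplace transform of $g$,
$$R(t) \;=\; \Pr(T > t) \;=\; \int_0^{\infty} e^{-\lambda t}\, g(\lambda)\, d\lambda ,$$
which is available in closed form for some choices of $g$ but not for the log-normal, hence the need for the approximations compared in the paper.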

Yasuhide Shinohara, Department of Industrial and Systems Engineering, Hiroshima University, 4-1 Kagamiyama 1 Chome, Higashi-Hiroshima 739, Japan shino@gal.sys.hiroshima-u.ac.jp


Why Conventional Hypothesis Testing is Generally Preferred Over a Bayesian Approach

Milo A. Schield, Augsburg College, Minneapolis, USA

There are two well-known approaches to hypothesis testing. The conventional test of significance focuses on the conditional probability that the test's result will be statistically significant given that the null hypothesis is true. The Bayesian approach focuses on the conditional probability that the null hypothesis is false, given that the test result is statistically significant. In both cases, we want to use the sample statistic as evidence for rejecting the null hypothesis. This long-standing difference in approach has often focused on what must be assumed: alpha for tests of significance versus the prior probability for the Bayesians. This paper argues that this difference is incidental in explaining the popularity of the conventional approach. It argues that as the alternative hypothesis becomes more unlikely to be true, it becomes much, much easier -- comparatively speaking -- to reject the null by using conventional hypothesis testing than by using the Bayesian approach.
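
To make the contrast concrete (a sketch in the usual notation, with prior probability $\pi_0 = \Pr(H_0)$, size $\alpha$ and power $1-\beta$), the quantity targeted by the Bayesian approach is
$$\Pr(H_0 \mid \hbox{significant}) \;=\; \frac{\alpha\,\pi_0}{\alpha\,\pi_0 + (1-\beta)(1-\pi_0)},$$
which stays close to one when the prior probability of the alternative, $1-\pi_0$, is very small, whereas the conventional test requires only the fixed threshold $\alpha$ regardless of $\pi_0$.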

Milo Schield, 2211 Riverside Ave S; Augsburg College; Mpls., MN. 55454 USA schield@augsburg.edu


Asymptotically Optimal Transformations for Grouped Data

Klaus Felsenstein, Technical University, Vienna
Klaus Pötzelberger, University of Economics and Business Administration, Vienna

We study the loss of information (measured in terms of the Kullback-Leibler distance) caused by observing grouped data, with emphasis on the asymptotic case, i.e. when the number of groups becomes large. In the case of a univariate observation, we compute the optimal rate of convergence and characterize asymptotically optimal partitions (into intervals). In the multivariate case we derive the asymptotically optimal regular sequences of partitions. Furthermore, we compute the asymptotically optimal transformation of the data, when a sequence of partitions is given. Examples demonstrate the efficiency of the suggested discretizing strategy even for few intervals.

Klaus Felsenstein, Wiedner Hauptstr 8-10, Vienna A-1040, Austria


Inverse Bayes Formula Without Positivity Assumption

Kai W. Ng, University of Hong Kong, China

Bayes formula has a history of more than 200 years. Among other things, it provides Bayesians with a formula for finding the posterior distribution given the prior distribution and the likelihood. By an inverse Bayes formula we mean an analytic formula for finding the prior distribution given the posterior distribution and the likelihood. In terms of pdfs, it is a formula for finding $f(x)$ and $f(y)$, and hence $f(x,y)$, given only $f(x|y)$ and $f(y|x)$. Ng (1995) and Ng (1996) presented analytic solutions under the positivity assumption. In this talk, we shall present solutions without the positivity assumption. The solutions involve conditions for the uniqueness and existence of $f(x,y)$ and analytic formulae to calculate $f(x)$ and $f(y)$ given $f(x|y)$ and $f(y|x)$.

Kai W. Ng, Department of Statistics, The University of Hong Kong, Pokfulam Road, Hong Kong HRNTNKW@hkucc.hku.hk


Tight Upper Confidence Limits from Discrete Data

Christopher J Lloyd, University of Hong Kong, Hong Kong
Paul V. Kabaila, La Trobe University, Melbourne, Australia

We consider the problem of finding an upper $1 - \alpha $ confidence limit for a parameter of interest $\theta $ in the presence of a nuisance parameter vector $\phi $ when the data is discrete. Approximate upper limits $T$ typically have coverage probabilities below, sometimes far below, $1 - \alpha $ for certain values of $(\theta , \phi )$. We remedy this defect by shifting the possible values $t$ of $T$ so that they are as small as possible subject to both the minimum coverage probability being greater than or equal to $1 - \alpha $ and to the shifted values being in the same order as the unshifted $t$'s. The resulting upper limits are called {\it tight}. Under very weak and easily checked regularity conditions, we develop a formula for the {\it tight} upper limits.

Chris Lloyd, Department of Statistics, University of Hong Kong, Hong Kong lloyd@hkustasa.hku.hk


ASC/IMS Contributed: MML Methods and Goodness of Fit


Assessing Goodness-of-Fit for Mass-Frequency Particle Size Distributions and Testing Homogeneity of Replicates

Thaung Lwin, CSIRO, Melbourne, Australia.

Two main problems usually arise at the initial stage of analyzing mass-frequency particle size data. First, there is the problem of establishing homogeneity of replicate samples. Second, it is necessary to assess the goodness-of-fit of a selected model for use as a suitable working model. Satisfactory summarization of data or comparison of sets of data hinges upon the resolution of these two problems. In this paper a test of homogeneity of independent replicate particle size distributions as well as a test of goodness-of-fit of a specified theoretical model are discussed. The asymptotic distributions of these test statistics are chi-squared distributions with appropriate degrees of freedom as for the case of number-frequency distribution data. The forms of test statistics, however, are quite different from the corresponding analogues of the number-frequency case.

Dr. T. Lwin, Division of Mathematics and Statistics, CSIRO, Private Bag 10, Clayton, Rosebank MDC, Vic. 3169, Australia lwin@dmsmelb.mel.dms.csiro.au


Nonparametric Versus Parametric Goodness of Fit

Hannelore Liero, University of Potsdam, Germany

Let $X_1,\ldots, X_n$ be a sample of i.i.d. r.v.'s with density $f$. We test whether $f$ lies in some parametric family of density functions, i.e. we consider the problem of testing the hypothesis $H : f\in {\cal F}=\{f(\cdot ,\vartheta )\mid \vartheta \in \Theta \subseteq R^k\}$ against $K: f\notin {\cal F}$. As a test statistic we propose the deviation of the well-known kernel estimate from its expectation with respect to a density from ${\cal F}$ in the $L_2$- and the sup-norm on a grid. Since the parameter $\vartheta $ is unknown it is replaced by the maximum likelihood estimator. The main result is the comparison of the asymptotic behavior of the power of both tests under Pitman and "sharp peak" type alternatives. It turns out that under Pitman alternatives the $L_2$-test is never worse than the $L_\infty $-test, but there exist "sharp peak" alternatives such that the $L_\infty $-test is better. As a by-product one obtains results on the influence of the parameter estimation and the bandwidth choice on the convergence of the power.

H. Liero, Institute of Mathematics University of Potsdam, D-14415 Potsdam, Germany liero@rz.uni-potsdam.de


Estimation and Testing of Regression Disturbances Based on Minimum Message Length

Md. Mizanur Rahman Laskar, Maxwell L. King, Monash University, Melbourne, Australia

This paper derives six different forms of message length functions for the linear regression model using two different prior densities and the idea of parameter orthogonality. Parameter estimates are then obtained by finding those parameter values which minimize the message length. The asymptotic properties of the minimum message length (MML) estimators are studied and we show that these estimators are asymptotically normal. A Monte Carlo experiment was conducted to investigate the small sample properties of the MML estimators and MML based tests in the context of first-order moving average regression disturbances. The results show that the combination of parameter orthogonality and message length based inference can produce good small sample properties.

Md. Mizanur Rahman Laskar, Department of Econometrics, Monash University, Clayton, Vic. 3168, Australia Mizan.Laskar@Monash.edu.au


Fitting Finite Gaussian Mixture Models Using Minimum Message Length Estimation

Rohan Baxter, Jonathan Oliver, Monash University, Australia
David Hand, Open University, UK

This paper examines the mixture modelling problem using Gaussian component distributions. Determining the number of components which best describe some data is considered a hard problem. We use Wallace's Minimum Message Length (MML) estimators for point estimation. We use prior distributions over the parameters of the mixture which express relative ignorance. We give the Message Length formula for selecting the number of components in a mixture. General properties of MML estimators are described in Wallace and Freeman [{\it J. Roy. Statist. Soc. Ser. B} 49 (1987): 240-265]. MML allows us to estimate the number of mixture components, $k$, within the same framework as the other parameters. We give an empirical comparison of alternative criteria, such as AIC, MDL/BIC and ICOMP, described in Bozdogan [{\it Proc. of the First US/Japan Conf. on Frontiers of Statistical Modelling} (1994): 69-113]. We conclude that the MML criterion appears to be the best (of the criteria considered here) for selecting the number of components in a mixture for the experiments performed.
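
For readers who wish to experiment, the sketch below selects the number of Gaussian components on toy data using the MDL/BIC criterion (one of the alternatives compared in the paper) as a readily available stand-in; the MML message lengths themselves require the Wallace-Freeman formulas and are not reproduced here.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # toy data: a mixture of two well-separated bivariate Gaussian clusters
    X = np.vstack([rng.normal(0.0, 1.0, size=(200, 2)),
                   rng.normal(4.0, 1.0, size=(150, 2))])

    # BIC (the MDL/BIC criterion) used as a stand-in selection score; smaller is better
    scores = {k: GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X).bic(X)
              for k in range(1, 7)}
    print(scores)
    print("chosen number of components:", min(scores, key=scores.get))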

Rohan A. Baxter, Dept. of Computer Science, Monash University, Clayton 3168, Australia rohan@cs.monash.edu.au
http://www.cs.monash.edu.au/rohan/mixture.



MML Mixture Modelling of Multi-State, Poisson, von Mises Circular and Gaussian Distributions

Chris S. Wallace, David L. Dowe, Monash University, Melbourne, Australia

Minimum Message Length (MML) is an invariant Bayesian point estimation technique which is also consistent and efficient. We provide a brief overview of MML inductive inference [{\it Comp. J.} 11 (1968): 185-194; {\it J. Roy. Statist. Soc. Ser. B} 49 (1987): 240-252], and how it has both an information-theoretic and a Bayesian interpretation. We then outline how MML is used for statistical parameter estimation, and how the MML mixture modelling program, Snob [{\it Comp. J.} 11 (1968): 185-194; {\it Proc. 7 Aust. Joint Conf. Art. Intell.} (1994): 37-44], uses the message lengths from various parameter estimates to enable it to combine parameter estimation with selection of the number of components. The message length is (to within a constant) the negative logarithm of the posterior probability of the theory. So, the MML theory can also be regarded as the theory with the highest posterior probability. Snob currently assumes that variables are uncorrelated, and permits multi-variate data from Gaussian, discrete multi-state, Poisson and von Mises circular distributions.

David L. Dowe, Department of Computer Science, Monash University, Clayton, Victoria 3168, Australia dld@cs.monash.edu.au


Resolving the Neyman-Scott Problem by Minimum Message Length

David L. Dowe, Chris S. Wallace, Monash University, Melbourne, Australia

The Neyman-Scott problem concerns $M$ Gaussian distributions with unknown means and identical but unknown standard deviation. Two observations are sampled from each distribution. As $M$ tends to infinity, we see that the Maximum Likelihood (ML) estimate of $\sigma $ is inconsistent, under-estimating $\sigma $ by a factor of $\sqrt{2}$. One way around this problem is to use the marginalised ML estimate for $\sigma $. An alternative is to use Minimum Message Length (MML) [{\it J. Roy. Statist. Soc. Ser. B} 49 (1987): 240-252; {\it Comp. J.} 11 (1968): 185-194] to estimate $\sigma $ and the various distribution means. The MML estimate maximises the posterior probability contained in an uncertainty region of volume proportional to the reciprocal of the square root of the expected Fisher information. MML is a general, universally applicable, invariant Bayesian method, and general theorems of Wallace and Freeman [(1987), above] and Barron and Cover [{\it IEEE Trans. Inf. Th.} 37 (1991): 1034-1054] show MML to be consistent and efficient. We further seek a related problem for which ML remains inconsistent but for which we cannot marginalise.
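
The factor of $\sqrt{2}$ can be seen directly (a standard calculation, not specific to this paper): with pairs $x_{i1}, x_{i2} \sim N(\mu_i, \sigma^2)$ and $\bar{x}_i = (x_{i1}+x_{i2})/2$,
$$\hat{\sigma}^2_{ML} \;=\; \frac{1}{2M}\sum_{i=1}^{M}\sum_{j=1}^{2}(x_{ij}-\bar{x}_i)^2 \;=\; \frac{1}{2M}\sum_{i=1}^{M}\frac{(x_{i1}-x_{i2})^2}{2}, \qquad E\,\hat{\sigma}^2_{ML} \;=\; \frac{\sigma^2}{2},$$
so $\hat{\sigma}_{ML} \to \sigma/\sqrt{2}$ as $M \to \infty$.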

David L. Dowe, Department of Computer Science, Monash University, Clayton, Victoria 3168, Australia dld@cs.monash.edu.au


Minimum Message Length Mixture Modelling of Spherical von Mises-Fisher Distributions

Jonathan J. Oliver, David L. Dowe, Monash University, Melbourne, Australia

The spherical von Mises-Fisher distribution is said to be one of the most important distributions concerning directional data [{\it Biometrika} 65 (1978): 369-377]. Minimum Message Length (MML) has previously been applied to parameter estimation for the von Mises circular distribution [submitted, {\it Aust. J. Stat.}; {\it Proc. Max. Entropy Conf.} (1995)] and the spherical von Mises-Fisher distribution [submitted, {\it ISIS-96 Conf.}] and in both cases outperformed classical estimators on simulations. In addition, MML has proved successful for mixture modelling of Gaussian, discrete multi-state, Poisson and von Mises circular distributions [{\it Proc. 7th Aust. Conf. Art. Intell.} (1994): 37-44]. In this work, we apply MML to the problem of forming mixtures of spherical von Mises-Fisher distributions. We believe that mixtures of this type will have applications in areas such as biology, geography, geology, geophysics, medicine, meteorology and oceanography [N.I. Fisher, {\it Stat. Analysis Circular Data}, C.U.P. (1993)].

Jonathan J. Oliver, Dept of Computer Sci., Monash University, Clayton, Victoria 3168, Australia jono@cs.monash.edu.au
http://www.cs.monash.edu.au/~jono/mixvonmises3d.ps.



ASC/IMS Contributed: Time Series III


Relative Forecasting Ability of the Bilinear, SETAR and ASTAR Nonlinear Time Series Models

David John King, Danny Henri Coomans, James Cook University, Townsville, Australia

Despite the fact that there are numerous articles investigating the modelling ability of nonlinear time series models, there is much less on their ability to forecast. The relative forecasting ability of the Box-Jenkins, bilinear, SETAR and ASTAR models for three real data sets has been investigated. Both the blowfly data set and the IBM share price data set exhibit significant SETAR-type nonlinearity for both the estimation period and the full sample period, whereas the Southern Oscillation Index data set exhibits only a small degree of SETAR-type nonlinearity. The results show a clear benefit in using nonlinear time series models to forecast the blowfly data and the IBM share price data, particularly for short forecasting horizons. For the SOI data, forecasts obtained from the linear Box-Jenkins models were quite competitive with the nonlinear models. Generally, the SETAR model was preferred for forecasting nonlinear time series.

Danny Coomans, Dept Maths & Stats, James Cook University, Townsville Q4811, Australia Danny.Coomans@jcu.edu.au


Fractional ARIMA-GARCH Time Series Models

S-Q. Ling, W.K. Li, The University of Hong Kong, Hong Kong

This paper considers a class of time series models, called the FARIMA($p$, $d$, $q$)-GARCH($r$, $s$) model, which combines the popular GARCH and fractional ARMA models. The fractional differencing parameter $d$ can be greater than 1/2, thus incorporating the important unit root case. Some sufficient conditions for stationarity, ergodicity and existence of higher order moments are derived. An algorithm for approximate maximum likelihood (ML) estimation is presented. The asymptotic properties of the ML estimators, which include consistency and asymptotic normality, are discussed. The large-sample distributions of the residual autocorrelations and the squared-residual autocorrelations are obtained and two portmanteau test statistics are established for checking model adequacy. As an illustration, the FARIMA($p$, $d$, $q$)-GARCH($r$, $s$) model is applied to the daily returns of the Hong Kong Hang Seng Index (1983-1984) and the results provide evidence that the long-memory phenomenon may exist simultaneously in the stock return and the volatility of the stock return.
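
In the usual notation the model can be sketched as follows (the paper's exact parameterisation and lag conventions may differ):
$$\phi(B)(1-B)^{d}\,y_t = \psi(B)\,\varepsilon_t, \qquad \varepsilon_t = \eta_t\sqrt{h_t}, \qquad h_t = \alpha_0 + \sum_{i=1}^{r}\alpha_i\varepsilon_{t-i}^{2} + \sum_{j=1}^{s}\beta_j h_{t-j},$$
where $\phi(B)$ and $\psi(B)$ are the AR and MA polynomials of orders $p$ and $q$, $B$ is the backshift operator, and $\eta_t$ is an i.i.d. sequence with zero mean and unit variance.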

S-Q. Ling, Statistics Department, The University of Hong Kong, Hong Kong hrntlwk@hkucc.hku.hk


On the Classification of a First Order Smooth Threshold Autoregression (STAR) Model

M.G. Nair, D. Nur, N.D. Yatawara, Curtin University of Technology, Perth, Australia

We consider the Smoothed Threshold Autoregression (STAR) model of order $1$ with delay parameter $d=1$, $X_{t}=aX_{t-1}+bX_{t-1}F\left(\frac{X_{t-1}-r}{z}\right) + \varepsilon _{t},$ where $r$ is a threshold parameter, $r \in {\cal R}^{+}$, $\varepsilon _{t}$ is a sequence of independent and identically distributed random variables and $F$ a distribution function. The process $\{X_{t}\}$ is a first-order Markov chain. Necessary and sufficient conditions for ergodicity and rules for classification of the Markov chain into recurrent and transient states are presented.

M.G. Nair, School of Maths & Stats, Curtin University of Technology, Perth WA


On Sufficient Conditions for Ergodicity of a Threshold ARCH Model

Nihal Yatawara, Curtin University of Technology, Perth, Australia

In this paper we consider the class of smooth threshold autoregressive heteroscedastic (STARCH) models, defined by $Z_t = \phi Z_{t-1} + \varepsilon _{t}$, where the random error $\varepsilon _{t}$ has a conditional variance of the form $\sigma ^2_t = a Z^2_{t-1} + b Z^2_{t-1} F\left(\frac{Z^2_{t-1} - r}{2}\right).$ We shall present sufficient conditions for ergodicity of this model.

Nihal Yatawara, School of Mathematics & Statistics, Curtin University of Technology, Perth, Australia


A Stochastic Parameter Model for Unit Trust Prices

Hermi Boraine, University of Pretoria, Pretoria, South Africa

Unit trusts are currently growing in popularity as a medium to long term investment instrument. The different funds vary in aspects such as portfolio structure and the risk involved for the investor, but their performance is affected in similar ways by economic factors such as a change in interest rates and the inflation rate. It is therefore realistic to assume that the growth of the different funds can be described by the same type of model, but that the parameters may vary across the different funds. In this paper a non-linear model with stochastic parameters and autoregressive moving average (ARMA) error terms is proposed to describe the trend in unit trust prices. It is assumed that the parameters of the different funds are a random sample from a common multivariate distribution. This distribution provides a mechanism which can, for instance, be used to compare the different funds.

H.Boraine, Department of Statistics, University of Pretoria, Pretoria, 0002, South Africa hboraine@ebs.up.ac.za


Combining Ordinal Forecasts with an Application in a Financial Market

Pui Lam Leung, The Chinese University of Hong Kong, Hong Kong

The literature on combining forecasts has almost exclusively focused on combining point forecasts. The issues and methods of combining ordinal forecasts have not yet been fully explored, even though ordinal forecasting has many practical applications in business and social research. In this talk, we consider the case of forecasting the movement of the stock market which has three possible states (bullish, bearish and sluggish). Given the sample of states predicted by different forecasters, several statistical methods can be applied to determine the optimal weight assigned to each forecaster in combining the ordinal forecasts. The performance of these methods is examined using Hong Kong stock market forecasting data, and their accuracies are found to be better than the consensus method and individual forecasts.

Dr. Leung Pui Lam, Department of Statistics, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong b094763@cucsc.cuhk.hk


Modelling A Financial Time Series

Ian W. Wright, Curtin University of Technology, WA, Australia

It is well known that interventions such as strikes, interest rate rises, etc. can have a sudden impact on the properties of a financial time series. Other events such as the share becoming ex-dividend are non-innovative. Recent modelling of securities prices indicated the presence of a small but recurring proportion of larger disturbances that could represent significant events. The magnitudes of such disturbances militate against the Normality of security returns while their apparent interdependence compromises the random walk model. Consequently a comprehensive model needs to give serious attention to the large disturbance aspect. Various models for analysing a security's price series are considered in relation to the large disturbance aspect including a semi-Markov sequence embodying large and small disturbances. The methods are illustrated using examples and their performances are compared.

Ian W. Wright, School of Maths & Stats, Curtin University of Technology, Perth WA, Australia


ASC/IMS Contributed: Inference and Computational Statistics


Exact Distributions of Certain Test Statistics for Random Walk Hypothesis

Leslie Chandrakantha, University of Newcastle, NSW, Australia

We obtain the exact distributions of test statistics for testing the null hypothesis that a variable follows a random walk. The test statistic used for this purpose is the ratio of estimated variances of two variables which are the differences of the same variable observed at two different time frequencies. Several useful approximations to this distribution are also given and the power functions are examined.
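
For orientation, a generic variance-ratio statistic of this kind (not necessarily the authors' exact construction) can be computed as follows; it is close to 1 under a random walk:

    import numpy as np

    def variance_ratio(y, q=4):
        # Ratio of the variance of q-period differences to q times the variance
        # of 1-period differences; approximately 1 when y is a random walk.
        d1 = np.diff(y)
        dq = y[q:] - y[:-q]
        return dq.var(ddof=1) / (q * d1.var(ddof=1))

    rng = np.random.default_rng(0)
    rw = np.cumsum(rng.normal(size=500))   # a pure random walk
    ar = np.zeros(500)                     # a mean-reverting alternative
    for t in range(1, 500):
        ar[t] = 0.3 * ar[t - 1] + rng.normal()
    print(variance_ratio(rw), variance_ratio(ar))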

Leslie Chandrakantha, Department of Statistics, Callaghan, NSW 2308, Australia stmwlc@cc.newcastle.edu.au


A Hypothesis Testing Approach Towards Identifying Active Contrasts

Sarel J. Steel, University of Stellenbosch, Stellenbosch, South Africa
Johannes H. Venter, Potchefstroom University, Potchefstroom, South Africa

The problem of identifying the active contrasts in an unreplicated fractional factorial design is approached from a hypothesis testing point of view. The null hypothesis that all contrasts are inactive is first tested and if rejected the active contrasts causing rejection are identified and estimated. The associated error variance is also estimated. The method is illustrated with standard examples from the literature, compared with the confidence interval approach of Dong [{\it Statistica Sinica} 3(1993): 209-217] and Lenth [{\it Technometrics} 31(1989): 469-473] and found to be a worthy competitor.

Sarel J. Steel, Department of Statistics, University of Stellenbosch, Private Bag X1, Matieland 7602, South Africa sjst@maties.sun.ac.za


Mean Location and Sample Mean Location on Manifolds: Asymptotics, Tests, Confidence Regions

Zinoviy Landsman, Harrie Hendriks, The University of Haifa, Israel

In a previous investigation we studied some asymptotic properties of the empirical mean location on submanifolds of Euclidean space. The empirical mean location generalizes least squares statistics to smooth compact submanifolds of Euclidean space. In this paper these properties are put into use. Tests for hypotheses about mean location are constructed and confidence regions for mean location are indicated. We study the asymptotic distribution of the testing statistic. The problem of a comparison of the mean locations for two samples is analyzed. The results are illustrated with examples of spheres, Stiefel manifolds including the orthogonal group, and special orthogonal groups. The results are also illustrated by our experience with simulations.

Zinoviy Landsman, Department of Statistics, The University of Haifa, Haifa, 31905 Israel landsman@stat.haifa.ac.il


Cook's Distance for Models Other Than Ordinary Least Squares

Ingrid BAADE, Queensland University of Technology, Brisbane, Australia

For the ordinary least squares model, Cook's distance gives a measure of the influence of an observation or a group of observations on the parameter estimates. However, sometimes an observation may not appear influential on its own when it is being masked by another observation. Lawrance [{\it J. Roy. Statist. Soc. Ser. B} 57 (1995): 181-189] suggested a conditional Cook's distance, to measure the influence of observations conditional on the prior removal of other observations. I have extended Cook's distance to models for which the variance is treated as known but not equal to a multiple of the identity matrix. Examples are a time series model with AR(1) errors and a random effects or mixed model. I hope that by extending to a non-diagonal variance matrix it will be possible to extend notions of Cook's distance to survival analysis models. The question of influential observations arose for some survival analysis data that my supervisor and I looked at in our paper [Baade and Pettitt, {\it Biometrics} 51 (1995): 1502-1513].

Ingrid Baade, School of Mathematics, QUT, GPO Box 2434, Brisbane QLD 4001, Australia i.baade@qut.edu.au


On Simulation from Discrete Conditional Distributions with Application to Generalisations of the Changepoint Problem

R.A. Göran Broström, Department of Statistics, University of Umeå, Umeå, Sweden

Given observations on independent Bernoulli variables, the problem of resampling from the conditional distribution, given the sum, is considered. The success probabilities are supposed to be known, but not necessarily equal. The traditional method of acceptance sampling can be impossible in practice if the sum to condition upon is far from its expected value. By utilising the sufficiency principle, it is possible to get an equivalent situation, where the expected value is equal to the observed, increasing the simulation efficiency considerably. Applications include the changepoint problem and the problem of testing the proportional hazards assumption in survival analysis.
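
The sufficiency-based device can be sketched as follows (an illustrative implementation, not necessarily the author's exact scheme): multiplying every odds $p_i/(1-p_i)$ by a common factor leaves the conditional distribution given the sum unchanged, so the factor can be chosen to make the expected sum equal the observed sum before acceptance sampling.

    import numpy as np
    from scipy.optimize import brentq

    def tilt(p, s):
        # Multiply all odds by a common factor c so that the expected sum equals s;
        # the conditional law given the sum is invariant to this tilting.
        odds = p / (1.0 - p)
        f = lambda c: (c * odds / (1.0 + c * odds)).sum() - s
        c = brentq(f, 1e-8, 1e8)
        return c * odds / (1.0 + c * odds)

    def rcond_bernoulli(p, s, size=1, rng=None):
        # Acceptance sampling from independent Bernoulli(p_i) given sum = s,
        # after tilting so that acceptance is no longer a rare event.
        rng = np.random.default_rng(rng)
        q = tilt(np.asarray(p, float), s)
        out = []
        while len(out) < size:
            x = rng.random(len(q)) < q
            if x.sum() == s:
                out.append(x.astype(int))
        return np.array(out)

    p = np.linspace(0.05, 0.3, 20)   # unequal success probabilities
    print(rcond_bernoulli(p, s=10, size=3, rng=1))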

Göran Broström, Dept. of Statistics, Umeå University, Umeå, Sweden gb@matstat.umu.se


Parameter Redundancy

Edward A. Catchpole, Australian Defence Force Academy, Australia
Byron J.T. Morgan, Stephen N. Freeman, University of Kent, Canterbury, England

It is important to be able to check mathematical models to see if all of their parameters can be estimated from data. This is needed both when the models are deterministic, as for example in models describing the flow of material between compartments using first order differential equations, and when the models are stochastic, for instance in describing the annual survival of animals through structured multinomial distributions. Necessary and sufficient conditions are derived for the parameter redundancy of a wide class of nonlinear models. Parameter redundancy can be checked via the rank of a derivative matrix, using symbolic algebra packages. The likelihood surfaces resulting from parameter redundant stochastic models have completely flat regions. However unique maximum likelihood estimates can exist for a useful subset of the parameters. The set of estimable parameters is determined by solving a set of first-order partial differential equations. Illustrative examples are provided from the two areas mentioned above, and avenues for further work are described.

E.A. Catchpole, School of Mathematics and Statistics, University College UNSW, Australian Defence Force Academy, Canberra, ACT 2600, Australia e-catchpole@adfa.oz.au

Friday 12 July: 10:30-12:20

ASC-E Invited: Biological Indicators of Aquatic Pollution


Marine Population and Biodiversity: Quantifying Taxonomic Distinctness and Species {\em \bf Redundancy}

K. Robert Clarke, Plymouth Marine Laboratory, Plymouth UK

Rio {\em Agenda 21} has reinvigorated interest in quantifying biological diversity and its response to pollution. Classic diversity indices, based on the species abundance distribution, can be relatively insensitive to low-level contamination or disturbance, in cases where multivariate statistical methods and associated permutation tests provide convincing evidence of assemblage change. Biodiversity, however, encompasses more than species diversity, and a natural definition of {\em taxonomic distinctness} within a sample is here advocated as one possible biodiversity component, and its effectiveness demonstrated for an oil-rig impact study. A related concept is that of species {\em redundancy}, in the limited sense of interchangeability of species in the (multivariate) description of overall community change through time and as a result of anthropogenic impact/recovery. A possible approach is through successive {\em peeling} of minimal subsets of species, each retaining the ability to replicate the full community structural pattern and possessing demonstrable regularity of taxonomic composition. The procedure is exemplified for benthic data from the Amoco-Cadiz tanker wreck.

K. Robert Clarke, Plymouth Marine Laboratory, Prospect Place, West Hoe, Plymouth PL1 3DH, UK b.clarke@pml.ac.uk


Long-term Biological Monitoring in the Laurentian Great Lakes

Ora E. Johannsson, E. Scott Millard, Great Lakes Laboratory for Fisheries & Aquatic Sciences, Canada

In 1981 a monitoring strategy of high temporal but low spatial resolution was adopted on Lake Ontario in order to incorporate biological variables and lower trophic level production estimates into our lake assessments. Previous work indicated that the lake consisted of three regions offshore. Later analysis of phytoplankton community structure confirmed that biological processes were similar over large areas and that inter-annual variability was more pronounced than regional variability. The Bioindex Program measures physical and nutrient conditions and the biomass, community structure and production of phytoplankton, zooplankton and benthos. These data allow investigation of interactions between trophic levels and development of functional relationships for better assessment of ecosystem changes. We also calculate production for the whole lake by combining biological knowledge with synoptic surveys: this information is essential for proper fisheries management. The measurements and methods of extrapolation differ with each trophic level.

Ora E. Johannsson, Great Lakes Lab for Fisheries & Aquatic Sciences, Canada Centre for Inland Waters, PO Box 5050, Burlington, Ont., Canada L7R4A6, 905-336-6347 johannsson@burdfo.bur.dfo.ca


Modelling Water Quality Changes in Lake Ontario

A. H. El-Shaarawi, National Water Research Institute, Burlington, Ontario, Canada

During the past three decades, water quality data have been routinely collected at a number of sampling locations in Lake Ontario. Several times each year during the sampling season, a ship visits a number of sampling locations in the lake where physical, chemical and biological measurements are made on water samples. The objectives of the data collection are to detect, estimate and predict water quality changes in the lake. In this paper we discuss various statistical methods for the analysis of data of this type and illustrate their applications using the available Lake Ontario data.

A. H. El-Shaarawi, National Water Research Institute, Burlington, Ontario, Canada L7R 1A6


ASC Invited: Huge Data Sets


Statistical and Computational Issues in Analysing Very Large, Complex Sets of Data

Murray A. Cameron, David X. Chan, Petra M. Kuhnert, Glenn Stone, CSIRO, Sydney, Australia

The analysis of very large data sets (which may involve, say, $10^5$ or more samples and $10$'s or $100$'s of variables) requires new problems to be overcome and changes to be made to standard approaches to data analysis. In addition, to derive the most information from large data sets, the statistician must undertake all aspects of ``greater statistics'' as defined by Chambers [{\it Computers and Statistics} 3 (1993): 182-184]. In this paper we discuss some of the statistical and computational issues, including: data cleaning, graphical techniques for exploration, the complexity of models required for very large samples, algorithm choice, automated techniques and the presentation of the results. We illustrate these issues with examples from medicine, government and geophysics.

Murray Cameron, CSIRO Division of Mathematics & Statistics, Locked Bag 17, North Ryde 2113, Australia Murray.Cameron@dms.csiro.au


The Analysis of Call-Detail Data

Colin Mallows, Daryl Pregibon, AT&T Research, New Jersey, USA

We describe some of the problems that have arisen in the study of data on long-distance calls. We consider the relevance of standard statistical theory and methodology to this problem.

Colin Mallows, AT&T Research, Murray Hill, New Jersey 07974 clm@research.att.com


Massive Data Sets

Jon R. Kettenring, Bellcore, Morristown, NJ, USA

Numerous critical applications have arisen in recent years for which the data are so complex and extensive as to render most traditional statistical approaches problematic or worthless. These application areas are far-reaching. They include marketing, fraud detection, information retrieval, traffic analysis, software engineering, and geography. A common theme to many of these applications is non-homogeneity of the data. Effective strategies are needed for breaking the problems down into manageable homogeneous parts. Another bugaboo is often the very high dimensionality of the problem and the need to reduce it to a workable number. Time can be a factor too: in some cases analyses are needed almost as quickly as the data arrive. The goal of this talk is to highlight some of these applications and statistical challenges; it will draw heavily on the results of a workshop on Massive Data Sets held in 1995 under the auspices of the Committee on Applied and Theoretical Statistics of the National Research Council in the United States.

Jon R. Kettenring, Bellcore 445 South Street Morristown, NJ 07960 USA jon@bellcore.com


IMS Invited: Resampling - II


The Blockwise Bootstrap in Action

Hans R. Künsch, ETH Zürich, Switzerland

Ten years have passed since I submitted my paper with the proposal of resampling blocks of consecutive data in order to deal with dependence. In this talk I will look back and present examples (real and simulated data) to illustrate how useful this procedure is, what modifications and extensions were made, and what still needs improvement.

Hans R. Künsch, Seminar für Statistik, ETH Zentrum, CH-8092 Zürich kuensch@stat.math.ethz.ch


Notions of Limiting P-value Based On Data Depth and Bootstrap

Regina Y. Liu, Kesar Singh, Rutgers University, New Jersey, USA

We introduce some new notions of limiting $P$-values for hypothesis testing. The limiting $P$-value (LP) here not only provides the usual appealing interpretation of a $P$-value as the strength in support of the null hypothesis coming from the observed evidence, it also has the following advantages: First, it allows us to resample directly from the empirical distribution (in the bootstrap implementations) rather than from the estimated population distribution satisfying the null constraints; Second, it serves as a test statistic and as a $P$-value simultaneously, and thus enables us to obtain test results directly without constructing an explicit test statistic and then establishing or approximating its sampling distribution. These are the two steps generally required in a standard testing procedure. Using bootstrap and the concept of data depth we have provided LP's for a broad class of testing problems where the parameters of interest can be either finite or infinite dimensional. Some simulation results will be presented to show the generality and the computational feasibility of our approach.

Regina Y. Liu, Department of Statistics, Rutgers University, Hill Center, Piscataway, NJ 08855, USA RLIU@STAT.RUTGERS.EDU


Subsampling

Joseph P. Romano, Stanford University, CA., USA

In this talk, I will discuss a very general approach to an asymptotic theory of confidence regions via subsampling. The approach is simple and consequently leads to perhaps the most general first-order correct method to date, and may even be offered as a remedy for bootstrap inconsistencies. The approach applies to the usual i.i.d. setup as well as to inference for dependent data, such as time series (including some nonstationary models), spatial data observed over a lattice, or even irregularly spaced data as a realization of a marked point process.

Joseph P. Romano, Stanford University romano@playfair.stanford.edu


ASC Contributed: Topics in Environmental Statistics


Estimation of Uncertainties in the Modeling of Drinking Water Quality

Thierry Fahmy, Eric Parent, Dominique Gatel, ENGREF, Paris, France

Bayesian methods have been developed here to analyze three main types of uncertainties: the model uncertainty, the parameter uncertainty and the sampling errors. To illustrate these techniques in a real case study, a model has been developed to quantify the various uncertainties when predicting the global proportion of water samples containing bacterial pollution indicators monitored weekly by sanitary authorities. The data used to fit and validate the model correspond to water samples gathered in the suburb of Paris. The model uncertainty has been evaluated in the reference class of generalized linear multivariate autoregressive models. The model parameters are determined using the Metropolis-Hastings algorithm (belonging to the Markov chain Monte Carlo family). The interdependency between each type of uncertainty is also studied. Such an approach, successful when dealing with water quality control, may also be powerful for rare events modeling in hydrology or ecology.
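
For orientation, a generic random-walk Metropolis-Hastings update has the following shape (an illustrative sketch only; the paper's model, proposal and data are not reproduced here):

    import numpy as np

    def metropolis_hastings(log_post, theta0, n_iter=5000, step=0.1, rng=None):
        # Random-walk Metropolis-Hastings: log_post is the log posterior density
        # up to an additive constant; a symmetric Gaussian proposal is used.
        rng = np.random.default_rng(rng)
        theta = np.asarray(theta0, float)
        lp = log_post(theta)
        chain = np.empty((n_iter, theta.size))
        for i in range(n_iter):
            prop = theta + step * rng.normal(size=theta.size)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:    # accept with MH probability
                theta, lp = prop, lp_prop
            chain[i] = theta
        return chain

    # toy target: a standard bivariate normal posterior
    chain = metropolis_hastings(lambda t: -0.5 * np.dot(t, t), np.zeros(2), rng=0)
    print(chain.mean(axis=0))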

Thierry Fahmy, ENGREF, 19 avenue du Maine, 75015 Paris, France fahmy@ulb.ac.be


Evaluation of Temporal Variability and Trend Assessment for Water Quality and Anthropogenic Impacts in the Upper Volga River Basin

Katia M. KISSELMAN, Roman E. SOKOLOV, Serge G. TOUCHINSKI, Moscow State University, Moscow, Russia

Thirty-year time series of irregular observations (with time intervals ranging from several days to a month) at 5 monitoring stations in the Upper Volga section were used for statistical analysis. These observations come mainly from a standard program of hydrochemical analysis, from which we selected mineralization of water, and chloride and sulfate as tracers of the combined effect of anthropogenic impacts. In order to determine the seasonal effect in water quality fluctuations, we smoothed the time series with a robust technique that can be applied to irregular realizations. This approach shows that there is a regular fluctuation pattern in all studied parameters. The trend pattern in the smoothed series was estimated by simple regression, using polynomial approximations of different degrees; the most appropriate polynomial degrees were 3 or 4. The structure of fluctuations around the trends was investigated using spectral analysis, and it was shown that certain characteristic natural processes produce fluctuations of these types. In addition to water quality changes, representative parameters identifying the anthropogenic factors of water pollution were determined. Changes in water quality can therefore be predicted from the dynamics of human activities in the river basin.

Katia M. KISSELMAN, Department of Environmental Management Faculty of Geography, Moscow State University, Moscow 119899, Russia stush@env.geogr.msu.su


Analysis of Trend in Water Quality in New South Wales Rivers

Grant Robinson, Russell Preece, NSW Dept. of Land and Water Conservation, Australia

The Department of Land and Water Conservation's Key Sites Program is a water quality monitoring study designed to assess trends in selected water quality variables at 89 sites across the State. The water quality variables of interest are salinity as measured by electrical conductivity, water clarity by turbidity and eutrophication potential by total phosphorus concentration. Temporal trends are determined over a minimum period of five years. Trend in water quality was assessed using both raw and flow adjusted time series data. The latter was used to account for effects of streamflow. Loess and the seasonal Kendall test were employed to investigate temporal trend at individual locations. Biplots with GH factorisation (equivalent to principal component analysis) were utilised to investigate spatial trends in water quality. This analysis has highlighted regions where water quality is either improving or deteriorating over time, and demonstrated the higher quality of coastal compared to inland streams.

Grant Robinson, Water Quality Services Unit, NSW Department of Land and Water Conservation, PO Box 3720, Parramatta NSW 2124, Australia grobinson@dlwc.nsw.gov.au


Analysis of Beach Water Quality Data in Hong Kong

Iris Yeung, City University of Hong Kong, Hong Kong

In Hong Kong beach water quality data are taken from the selected beaches between one to three times a month, which are not equally spaced. This paper describes the results of various time series models (discrete time, continuous time, linear, non-linear) fitted to these data.

Iris Yeung, Dept Applied Stats & Operational Rsch, City University of Hong Kong, 83 Tat Chee Ave, Kowloon Hong Kong ARIRIS@CITYU.EDU.HK


Describing Ecosystems

Robert Gittins, Canonical Solutions, Hornsby, Australia

Ecology provides the statistician with many challenges. Field observations are vector-valued; joint distributions tend to be idiosyncratic, nonlinear, multimodal; observations are spatially and temporally correlated. In such settings the anecdotal descriptions of classical ecology are manifestly less useful today than formerly. Ecologists nevertheless continue to rely heavily on the written narrative. This explains in part why convincing descriptive accounts of natural communities and ecosystems are few and far between. What are we to make of this curious neglect of the high-speed computer and the graphics workstation on the part of community ecologists today? We set out to show that the description of natural communities is indeed perfectly tractable, being an interactive, algebraic exercise directed towards arriving at sharp, information-rich graphical images of communities and ecosystems. We illustrate these ideas in the context of a particular, worked grassland example.

Robert Gittins, Canonical Solutions, 21 Northcote Road, Hornsby, NSW 2077, Australia gittinsr@magna.com.au


ASC/IMS Contributed: Topics in Statistics


A.C. Aitken and his Research Students: a Brief Survey

Robin K. MILNE, University of Western Australia, Perth
Elmer G. REES, University of Edinburgh, UK

A list is provided of the research students of A.C. Aitken, the titles of their theses and the dates of award of their degrees. This presentation focuses especially on those students who dealt with statistical topics. A brief survey of the theses is given and some biographical details about the students. The intention is to give a broad view of the influence Aitken has had, particularly on statistics, through his research students and to try to elicit further information from others who may have known these students.

Dr Robin K. Milne, Department of Mathematics, University of Western Australia, Nedlands, WA 6907, Australia milne@maths.uwa.edu.au


Efficient Statistical Modelling for Studying Breast Feeding Pattern in Iran

S.M.T. Ayatollahi, Shiraz University of Medical Sciences, Iran

During the critical period of infancy, breast feeding practices play an important role in determining the growth of an infant. The present study investigates the issue by observing a representative sample of 226 infants who were conceived and born in Shiraz (Iran), and monitored from birth to six months of age. Height, weight, arm circumference and head circumference of the infants were measured at monthly intervals by a team of trained auxologists. Family cultural and socio-economic backgrounds were also recorded and maternal nutritional status examined on each occasion. A multilevel modelling approach was applied, which allows the regression coefficients to be random. A unified structured model is proposed which takes into account the intrinsic hierarchical structure of breast feeding data and estimates the individual as well as interaction effects of factors affecting the breast feeding pattern. This method works with $z$-scores calculated for growth measurements for age using our amalgamated method for estimating age-related centiles. The method effectively removes most of the fixed age and sex effects and is equivalent to centering the data, which eases computational difficulties. Analysis showed that premature infants reach growth similar to that of full term infants by their expected date of delivery. Growth velocity was enormously higher among breast fed infants. The benefits of exclusive breast feeding have been well documented, and those of partial breast feeding are examined here. During breast feeding, maternal size and obesity tended to remain balanced as they were at conception. Other static and age-related variables were also examined. The paper concludes that 1) longitudinal data on the complex inter-related pattern of breast feeding and its related factors throw much light on important public health problems; 2) the cost-effectiveness of the project justifies conducting its next stage beyond 6 months; 3) the proposed model proves to be an efficient, flexible and parsimonious approach to longitudinal breast feeding data, which is likely to be applicable to the urban population in Iran.

S.M.T. Ayatollahi, Dept. of Biostatistics, Shiraz University of Medical Sciences, PO Box 71345-1874, Shiraz, Islamic Republic of Iran


Density Deconvolution Using Spectral Mixture Models

H. Malcolm Hudson, Craig Walsh, Macquarie University, Sydney, Australia

The aim of this paper is to describe a recent application of mixture models in density deconvolution. We shall describe some background, two general methods for estimating mixing probabilities, and a comparison of these methods in determining the component densities from digitization (or a histogram) of observations from a mixture distribution.
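For readers unfamiliar with the setting, the sketch below shows one standard way (ordinary EM for the mixing proportions with known component densities, not necessarily either of the two methods compared in the paper) to estimate mixing probabilities from binned observations of a mixture; the data and component choices are illustrative only.

```python
# EM for mixing proportions from binned (histogram) data with known components.
# Generic illustration only; not necessarily either of the methods in the paper.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = np.where(rng.random(5000) < 0.3,
             rng.normal(0.0, 1.0, 5000), rng.normal(3.0, 1.5, 5000))
counts, edges = np.histogram(x, bins=60, range=(-4, 8))
midpoints = 0.5 * (edges[:-1] + edges[1:])

# Known component densities evaluated at the bin midpoints (two normals here).
components = np.vstack([norm.pdf(midpoints, 0.0, 1.0),
                        norm.pdf(midpoints, 3.0, 1.5)])   # shape (k, n_bins)

p = np.full(components.shape[0], 1.0 / components.shape[0])  # initial mixing probs
for _ in range(200):
    weighted = p[:, None] * components                 # p_j f_j(x_b)
    resp = weighted / weighted.sum(axis=0)             # E-step: responsibilities
    p = (resp * counts).sum(axis=1) / counts.sum()     # M-step: update proportions
print("estimated mixing probabilities:", p)            # should be close to (0.3, 0.7)
```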

H. Malcolm Hudson, Department of Statistics, Macquarie University, North Ryde NSW, Australia


Maximizing Risk-Adjusted Return in Financial Time Series

Jaewoo Kang, Mark Choey, Andreas S. Weigend, University of Colorado, USA

We present a method for the nonlinear optimization of the Sharpe ratio, a measure of risk-adjusted performance. Through optimization of the Sharpe ratio we develop a position-sizing strategy for commodity futures using neural networks, discuss several applications, and show derivations. Rather than explicitly performing time series prediction (i.e. predicting the next day's price), we globally optimize the Sharpe ratio. We train the network with three different objective functions (the Sharpe ratio, profit maximization, and cross-entropy) and show that training on the Sharpe ratio provides superior out-of-sample performance.
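As a toy illustration of optimizing the Sharpe ratio directly rather than a prediction error, the sketch below fits a simple linear position-sizing rule (a stand-in for the authors' neural network) by maximizing the in-sample Sharpe ratio of the resulting strategy returns; the data and features are simulated.

```python
# Toy direct optimization of the Sharpe ratio for a position-sizing rule.
# A linear rule stands in for the authors' neural network; data are simulated.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T = 500
features = rng.normal(size=(T, 3))                       # lagged indicators (simulated)
asset_returns = 0.05 * features[:, 0] + rng.normal(scale=1.0, size=T)

def neg_sharpe(w):
    positions = np.tanh(features @ w)                    # bounded positions in [-1, 1]
    strat = positions * asset_returns                    # strategy returns
    return -strat.mean() / (strat.std() + 1e-9)          # negative Sharpe ratio

res = minimize(neg_sharpe, x0=np.zeros(3), method="Nelder-Mead")
print("weights:", res.x, "in-sample Sharpe:", -res.fun)
```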

Andreas Weigend, University of Colorado, Computer Science Department, Box 430, Boulder, CO 80309, USA


A Robustness Study of the Multi-layer Perceptron Classifier

Robert A. Dunne, Victoria University of Technology, Melbourne, Australia

The multi-layer perceptron (MLP) is a powerful (in terms of the class of functions that can be approximated) distribution-free regression method. However, this very power means that the method is susceptible to over-fitting and to ``modeling noise'' in the data. In order to rectify this tendency, a number of ``regularization'' methods have been tried. In the context of classification problems, all of the regularization techniques are methods for obtaining a smoother separating boundary between classes, and all involve some trade-off between error minimization and smoothness via the selection of a smoothing parameter. In this paper, the problem of over-fitting is approached via a consideration of the robustness of MLPs. We do this via an analytic study of the {\em influence curve}. In addition, a finite sample {\em sensitivity curve} is explored in a number of small experiments which allow us to see several significant aspects of the behavior of MLPs.
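A finite-sample sensitivity analysis of this kind can be illustrated numerically: add a single contaminating point at varying locations, refit the classifier, and record how far the fitted probabilities move. The sketch below uses scikit-learn's MLP purely as an illustration and is not the analytic influence-curve study of the paper.

```python
# Empirical sensitivity of an MLP classifier to one added (contaminating) point.
# Illustration only, using scikit-learn; not the analytic influence-curve results.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.repeat([0, 1], 100)
grid = np.column_stack([np.linspace(-4, 4, 50), np.zeros(50)])   # evaluation points

def fit_probs(Xtr, ytr):
    clf = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000, random_state=0)
    clf.fit(Xtr, ytr)
    return clf.predict_proba(grid)[:, 1]

base = fit_probs(X, y)
for x0 in [-3.0, 0.0, 3.0]:                     # location of the contaminating point
    Xc = np.vstack([X, [[x0, 0.0]]])            # one extra point, labelled 0
    yc = np.append(y, 0)
    shift = np.abs(fit_probs(Xc, yc) - base).max()
    print(f"added point at x={x0:+.1f}: max change in fitted probability = {shift:.3f}")
```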

R. Dunne, Victoria University of Technology, Footscray Campus, Department of Computer and Mathematical Sciences, P.O. 14428, MCMC Melbourne 8001, Australia dunne@matilda.vut.edu.au


M-estimates for Regression with Changing Scale

C. S. Withers, The New Zealand Institute for Industrial Research and Development, Lower Hutt, NZ

We offer a semi-parametric method of modelling observations subject to trends in both location and scale. Our model is {\it observation = location signal + scale signal $\times$ noise}, where the location and scale signals are given real smooth functions (not necessarily linear) of an unknown parameter $\theta $ in $R^m$, and the noise is stationary with unknown marginal distribution $F(x)$. Rather than assume $F(x)$ has a known or parametric form, we define the (possibly weighted) M-estimate of $\theta $ with respect to a given smooth function $\rho (x): R \to R$. When the scale is not changing this reduces to the ordinary nonlinear regression M-estimate and requires that $F(x)$ is suitably centred with respect to $\rho $. However, when the scale is changing as well, $F(x)$ must also be suitably scaled with respect to $\rho $. Specialising to the case where the marginal distribution of the noise {\it is} known, we obtain for the first time the asymptotic normality of the maximum likelihood estimate.
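To fix ideas, one common computational formulation (a sketch under simple assumptions, not necessarily the exact definition of the M-estimate used in the paper) minimizes $\sum_t \{\rho((y_t-\mu_t(\theta))/\sigma_t(\theta)) + \log \sigma_t(\theta)\}$ over $\theta$; with a Huber $\rho$ and simple parametric location and scale signals this looks as follows.

```python
# Generic M-estimation with trending location and scale (one common formulation;
# not necessarily the exact definition used in the paper).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 400)
theta_true = np.array([1.0, 2.0, -1.0, 1.5])
mu_true = theta_true[0] + theta_true[1] * t              # location signal
sigma_true = np.exp(theta_true[2] + theta_true[3] * t)   # scale signal (positive)
y = mu_true + sigma_true * rng.standard_t(3, size=t.size)  # heavy-tailed noise

def rho(z, c=1.345):                                     # Huber rho function
    return np.where(np.abs(z) <= c, 0.5 * z**2, c * np.abs(z) - 0.5 * c**2)

def objective(theta):
    mu = theta[0] + theta[1] * t
    sigma = np.exp(theta[2] + theta[3] * t)
    z = (y - mu) / sigma
    return np.sum(rho(z) + np.log(sigma))

res = minimize(objective, x0=np.zeros(4), method="Nelder-Mead")
print("estimated theta:", np.round(res.x, 3))
```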

C. S. Withers, IRL, Box 31-310, Lower Hutt, New Zealand c.withers@irl.cri.nz


Visualising Global Behaviour of Chaotic Image Sequences on 2D Conjugate Map

Zhi Jie Zheng, Victoria University of Technology, Melbourne, Australia

2D map techniques (Poincaré maps) play a key role in representing the global behaviour of complex dynamic systems. Image sequences have discrete time, space and state, and this discreteness creates severe difficulties in applying 2D map techniques to visualise the global behaviour of chaotic image sequences. In this paper, four statistical measures are proposed for constructing 2D maps that represent the global behaviour of binary image sequences. From a given image sequence, four statistical measure sequences can be generated; these can be used to construct four Poincaré maps (for serial measures) and two additional maps, conjugate maps (for parallel measures). In this construction, a 2D conjugate map built from two statistical measures of the variations is confined to a triangular region. Three chaotic image sequences from 1D cellular automata are selected and their conjugate maps illustrated.
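To make the construction concrete, the sketch below computes one simple per-frame measure for a rule-110 elementary cellular automaton and plots successive values against each other as a Poincaré-type map; the measure chosen (the density of ones per frame) is illustrative and need not be one of the four measures proposed in the paper.

```python
# Poincaré-type map from a per-frame statistical measure of a binary image sequence.
# The measure used here (density of ones per frame) is illustrative only.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
width, steps, rule = 200, 400, 110
table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)

row = rng.integers(0, 2, width, dtype=np.uint8)
measures = []
for _ in range(steps):
    measures.append(row.mean())                       # per-frame measure m_t
    left, right = np.roll(row, 1), np.roll(row, -1)
    row = table[4 * left + 2 * row + right]           # one CA update (periodic boundary)

m = np.array(measures)
plt.scatter(m[:-1], m[1:], s=4)                       # Poincaré map: (m_t, m_{t+1})
plt.xlabel("$m_t$"); plt.ylabel("$m_{t+1}$")
plt.show()
```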

Dr. Zhi Jie Zheng, Department of Computer and Mathematical Sciences, Victoria University of Technology, PO Box 14428, MCMC Melbourne, Vic. 8001, Australia zheng@matilda.vut.edu.au


ASC/IMS Contributed: Applications of Statistics II


Modelling Monthly Rainfall in a Tropical Environment

L. Guenni, M.C. Key, Universidad Simón Bolívar, Caracas, Venezuela
A. Hernandez, Instituto Universitaro de Tecnologia, La Victoria, Venezuela

Stochastic models of rainfall, when calibrated to specific locations, are a very useful tool for producing long sequences of rainfall data for several applications. Two different models, the truncated normal model and the compound Poisson model, were used to simulate monthly rainfall at 80 locations in Guarico State, Venezuela. A parsimonious estimation procedure was implemented to account for the strong seasonality in the rainfall model parameters. Parameter estimation was carried out by maximum likelihood, modelling the parameters with periodic functions. The number of significant coefficients of the periodic functions for each parameter was selected by calculating the log-likelihood ratio and using an appropriate information criterion. This procedure effectively reduces the number of parameters to be estimated, and it is demonstrated that both models provide a reliable representation of the rainfall variability in the study region.
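As a sketch of this kind of seasonal parameterization (illustrative only, with toy data, and a simplified version of the truncated normal model in which rainfall is the positive part of a latent seasonal normal variable), harmonic terms in the monthly mean can be fitted by maximum likelihood as follows.

```python
# Simplified truncated-normal monthly rainfall model with harmonic seasonality,
# fitted by maximum likelihood. A sketch with toy data, not the paper's models.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

months = np.tile(np.arange(1, 13), 30)                  # 30 years of monthly data
rain = np.maximum(0.0, 80 + 60 * np.cos(2 * np.pi * months / 12)
                  + np.random.default_rng(5).normal(0, 50, months.size))

def negloglik(params):
    a0, a1, b1, log_sigma = params
    mu = (a0 + a1 * np.cos(2 * np.pi * months / 12)
          + b1 * np.sin(2 * np.pi * months / 12))       # harmonic seasonal mean
    sigma = np.exp(log_sigma)
    dry = rain == 0
    ll_dry = norm.logcdf(-mu[dry] / sigma)              # P(latent variable <= 0)
    ll_wet = norm.logpdf(rain[~dry], loc=mu[~dry], scale=sigma)
    return -(ll_dry.sum() + ll_wet.sum())

res = minimize(negloglik, x0=np.array([50.0, 0.0, 0.0, np.log(50.0)]),
               method="Nelder-Mead")
print("a0, a1, b1:", np.round(res.x[:3], 1), "sigma:", round(np.exp(res.x[3]), 1))
```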

Lelys Guenni, Universidad Simón Bolívar, Departamento de Matemáticas Puras y Aplicadas y Centro de estadística y Software Matemático. APDO. 89.000. Caracas 1080-A, Venezuela lbravo@cesma.usb.ve


The Grouping Problem in Forensic Glass Analysis: A Divisive Approach

Christopher M. Triggs, James M. Curran, University of Auckland, New Zealand
John S. Buckleton, Kevan A.J. Walsh, ESR: Forensic, Auckland, New Zealand

If a window is broken, some fragments of glass from the window may be transferred to a person's clothing, and these fragments can be used in evidence. The refractive index (RI) of glass varies widely, owing to random fluctuations in chemical composition, manufacture and handling, and can serve as a ``fingerprint'' of a window pane, bottle or any other glass source. If fragments of glass are recovered from a person's clothing, their refractive indices can be compared with the RIs of samples of glass from the broken source at the crime scene. When examining a sample of glass fragments recovered from a suspect in a forensic case, the question arises whether the fragments may have come from several different sources, and it is necessary to include this information in the statistical analysis. A divisive method for dealing with this grouping problem is proposed and compared, via Monte Carlo simulation, with the agglomerative methods currently in use.
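One simple divisive strategy, shown only to illustrate the general idea and not the authors' algorithm, is to sort the refractive indices and split recursively at the largest gap whenever that gap is too large relative to measurement variability; the RI values and threshold below are hypothetical.

```python
# Illustrative divisive grouping of glass refractive indices: split recursively at
# the largest gap when it exceeds a threshold. Not the authors' algorithm.
import numpy as np

def divisive_groups(ri, threshold):
    """Return a list of groups (sorted arrays) of refractive indices."""
    ri = np.sort(np.asarray(ri))
    if ri.size < 2:
        return [ri]
    gaps = np.diff(ri)
    i = int(np.argmax(gaps))
    if gaps[i] <= threshold:
        return [ri]                              # no split: fragments form one group
    return (divisive_groups(ri[: i + 1], threshold)
            + divisive_groups(ri[i + 1:], threshold))

fragments = [1.51824, 1.51826, 1.51831, 1.52014, 1.52019]   # hypothetical RI values
for g, group in enumerate(divisive_groups(fragments, threshold=1e-4), start=1):
    print(f"group {g}: {group}")
```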

James M. Curran, Dept of Stats, University of Auckland, Private Bag 92019, Auckland, NZ curran@stats.auckland.ac.nz


Estimating the Number of Genetic Components in an Effective Factors Model: a Gibbs Sampling Approach

Roderick D. Ball, The Horticulture and Food Research Institute of New Zealand Ltd

A method using the Gibbs sampler and a continuous approximation to the binomial distribution is developed for estimating the number $k$ of effective factors in a cross, and the effect $d$ of each factor, when the parents do not necessarily represent the extreme phenotypes. The problem involves two highly correlated parameters of interest, $k$ and $d$, and a large number of nuisance parameters. Rapid convergence of the sampler is obtained by integrating $d$ out of the conditional distribution used for sampling $k$. Unlike the classical estimator, which breaks down except for very large samples, the Gibbs sampling estimates are well behaved in small samples. Our method gives good estimates over a range of sample sizes for $k \ge 5$. For small integral $k$ ($1\le k\le 4$), the progeny distribution can be resolved into distinct classes and $k$ estimated directly. Some recommendations for experimental design are given.

Roderick D. Ball, HortResearch, P.B. 92169, Auckland, New Zealand rball@hort.cri.nz


Statistical Modules in Road Traffic Noise Data Analysis: A Case Study in Calcutta, India

Prasun Das, Debashis Chakrabarty, Subhas Chandra Santra, Indian Statistical Institute, Calcutta, India

This is the first comprehensive study of road traffic noise undertaken in Calcutta, with the purposes of estimating noise indicators, predicting noise indicators from traffic flow information, and developing a human adverse response rating (HARR). The study and its statistical analysis were carried out in four modules. Data on noise level and category-wise traffic flow were collected at twenty-four sites, for a twenty-four-hour period at each site, at thirty-second intervals. Standard noise indicators, e.g. $L_{eq(24)}$ and $L_{dn}$, were calculated. Different exceedance levels of noise were also estimated empirically. Regression-type prediction models for the noise indicators were developed using regressors such as the volumes of light, medium and heavy (HV) vehicles, \%HV and ln(\%HV). All three models performed equally well. To develop the HARR, clustering was carried out on the basis of traffic flow density and $L_{eq(24)}$ using a centroid clustering technique, and cluster-wise impact assessment sampling was conducted. Further work is in progress.
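For reference, the equivalent continuous sound level $L_{eq}$ used above is the energy (logarithmic) average of the short-interval readings; a minimal computation, not the project's processing code, is shown below with hypothetical readings.

```python
# Equivalent continuous sound level L_eq from short-interval readings in decibels.
# A minimal reference computation, not the project's processing code.
import numpy as np

def leq(levels_db):
    """Energy-average (logarithmic mean) of sound levels in decibels."""
    levels_db = np.asarray(levels_db, dtype=float)
    return 10.0 * np.log10(np.mean(10.0 ** (levels_db / 10.0)))

readings = [68.0, 72.5, 80.1, 75.3, 69.8]   # hypothetical 30-second readings, dB(A)
print(f"L_eq = {leq(readings):.1f} dB(A)")
```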


Sampling Petitions for Validity of Signatures

Michael J. Doherty, Statistics New Zealand, Wellington, New Zealand

Like a number of other jurisdictions, New Zealand now has an Act setting out procedures which citizens can use to obtain a referendum on a particular question. A petition requesting the referendum is presented to the House of Representatives. There are administrative requirements (the petition must be on an approved form, the question wording approved, the signatures collected within a time limit, etc.), but if these are met, and the petition has been signed by at least 10\% of registered electors, the referendum must be held. To determine whether there are enough valid signatures, the Act allows a sample of signatures to be checked instead of a full count. The possibility of multiple signatures complicates this determination. I will discuss this problem, which does not seem to be well covered in the statistical literature.
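In the simplest setting, ignoring the duplicate-signature complication discussed in the paper, checking a random sample of signatures gives a straightforward estimate and confidence bound for the number of valid signatures; the sketch below is only this simple baseline, with hypothetical numbers.

```python
# Baseline estimate of valid signatures from a random sample of a petition,
# ignoring the duplicate-signature complication discussed in the paper.
import math

N = 250_000        # signatures submitted (hypothetical)
n = 10_000         # signatures checked
valid = 9_130      # found valid in the sample (hypothetical)

p_hat = valid / n
se = math.sqrt(p_hat * (1 - p_hat) / n) * math.sqrt(1 - n / N)  # finite-population correction
lower_95 = N * (p_hat - 1.645 * se)                             # one-sided 95% lower bound

print(f"estimated valid signatures:        {N * p_hat:,.0f}")
print(f"approx. 95% lower confidence bound: {lower_95:,.0f}")
```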

M. Doherty, Statistics New Zealand, P.O. Box 2922, Wellington, New Zealand mdoherty@stats.govt.nz


Estimation of Origin-Destination Matrix in Transportation Using Traffic Counts and Random-Coefficients Logit Models

Hing-Po Lo, Wendy Shui-Ping Lam, University of Melbourne, Australia

An Origin-Destination (OD) matrix is a table giving the number of trips between any two districts in a study area. It plays a fundamental role in transportation planning and traffic control. Estimation of an OD matrix using the transport demand model approach is expensive and time-consuming. A recent approach that uses traffic counts eliminates the expensive interview surveys and greatly reduces the cost and time of operation. Statistical models that explicitly consider the randomness in link choice proportions and observed traffic counts are developed in this paper. In addition, to incorporate the heterogeneity across individuals in their link choice decisions, random-coefficients logit models are used to study the effects of traffic conditions and socio-economic factors. Both parametric and non-parametric approaches to heterogeneity are used in the estimation of the model parameters.
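The core estimation step can be caricatured as recovering OD flows from observed link counts given link-choice proportions; the sketch below uses non-negative least squares for this linear step with a tiny made-up network, and is not the random-coefficients logit machinery of the paper.

```python
# Recovering OD flows from observed link counts given link-choice proportions.
# A caricature using non-negative least squares; not the paper's logit machinery.
import numpy as np
from scipy.optimize import nnls

# P[l, od] = proportion of trips for OD pair `od` that use link `l` (assumed known here).
P = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
true_od = np.array([120.0, 80.0, 60.0])
counts = P @ true_od + np.random.default_rng(6).normal(0, 5, 4)   # noisy link counts

od_hat, _ = nnls(P, counts)
print("estimated OD flows:", np.round(od_hat, 1))
```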

Hing-Po Lo, Department of Statistics, The University of Melbourne, Parkville, Victoria, 3052, Australia hingpo@stats.mu.oz.au


The Impact of Rainfall on Tourism in Rain Forest Regions

Denny H Meyer, Keith Dewar, Massey University, Auckland, New Zealand

In this study we look at the effect of inclement weather on the tourism industries of two rain forest regions, namely Westland in New Zealand and the Mountain Parks of Tasmania. These two regions differ in the importance of rainfall in their marketing strategies. The influences of past, present and future rainfall on tourism are all considered. The effect on both short-haul and long-haul tourists is investigated, taking into consideration the "home" rainfall of the short-haul tourists. The analysis is performed using structural state-space models. The resulting models will permit a comprehensive comparison of visitor behaviour in the two regions.
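A minimal structural state-space model with rainfall as an explanatory variable can be written with the statsmodels unobserved-components machinery; the data below are simulated and the specification is only a sketch of the general approach, not the models fitted in the paper.

```python
# Minimal structural (unobserved-components) model of monthly visitor numbers with
# rainfall as an explanatory variable. Simulated data; not the paper's models.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 120
rain = rng.gamma(2.0, 50.0, n)                               # monthly rainfall (mm)
visitors = 5000 + 10 * np.arange(n) - 4.0 * rain + rng.normal(0, 200, n)

model = sm.tsa.UnobservedComponents(
    visitors,
    level="local linear trend",      # stochastic level and slope
    seasonal=12,                     # monthly seasonality
    exog=rain,                       # rainfall effect on visitor numbers
)
result = model.fit(disp=False)
print(result.summary())
```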

Dr. Denny Meyer, Department of Statistics, Albany Campus, Massey University, Private Bag 102 904, NSMSC, Auckland, New Zealand D.H.Meyer@massey.ac.nz


ASC/IMS Contributed: Time Series IV


A General Methodology for Bayesian Analysis of Multivariate ARMA and ARFIMA Processes

Nalini S. Ravishanker, University of Connecticut, Storrs, USA
Bonnie K. Ray, New Jersey Institute of Technology, Newark, USA

We present a general framework for Bayesian inference of multivariate ARMA and ARFIMA time series. This framework allows the incorporation of prior information or parametric restrictions via prior densities, and facilitates interesting posterior analysis through point estimates, density estimates and scatter plots. We derive the joint posterior density for the parameters corresponding to the exact likelihood function in a form that is computationally feasible. A modified Gibbs sampling algorithm is used to generate samples from the complete conditional distribution associated with each parameter. We illustrate our approach using two sets of series: monthly employment rates obtained using rotational sampling methods and daily sea surface temperatures measured at three locations along the central California coast. The use of rotational sampling methods requires parameter estimation subject to nonlinear constraints. The sea surface temperature series are strongly interdependent due to similarities in local atmospheric conditions at the different locations and have been previously found to exhibit long memory when studied individually.

Bonnie K. Ray, Dept. of Mathematics, New Jersey Institute of Technology, Newark, NJ 07102, USA borayx@chaos.njit.edu


``Homogeneity of Variance Test" for the Comparison of Two or More Spectra

Elizabeth A. Maharaj, Nihal Singh, Brett A. Inder, Monash University, Melbourne, Australia

Let $Z_j(t)$, $j=1,2,\ldots,k$, be $k$ independent stationary processes with spectral density functions $S_{Z_j}(\omega)$. In many real-world situations there is a need to compare two or more spectra. Tests to compare two spectra already exist in the literature. In this paper we propose a simple test, based on Bartlett's modification of the likelihood criterion, for comparing two or more spectra. Simulation studies show that for $k = 2$ and $3$ the test performs reasonably well. The test is applied to two sets of real data.
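As a crude numerical analogue only (this is not the test statistic proposed in the paper), Bartlett's classical homogeneity-of-variance statistic can be applied to the periodogram ordinates of the $k$ series computed over a common set of frequencies.

```python
# Crude numerical analogue of comparing k spectra: apply Bartlett's classical
# homogeneity-of-variance test to the periodogram ordinates of each series.
# Illustration only; this is not the test statistic proposed in the paper.
import numpy as np
from scipy.signal import periodogram
from scipy.stats import bartlett

rng = np.random.default_rng(8)
series = [rng.normal(0, 1, 512),                                    # white noise
          rng.normal(0, 1, 512),                                    # white noise
          np.convolve(rng.normal(0, 1, 600),                        # smoothed noise
                      np.ones(5) / 5, mode="valid")[:512]]

ordinates = [periodogram(z)[1][1:] for z in series]                 # drop zero frequency
stat, pvalue = bartlett(*ordinates)
print(f"Bartlett statistic = {stat:.2f}, p-value = {pvalue:.4f}")
```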

Ms Ann Maharaj, Dept of Econometrics, Monash University (Caulfield Campus), P.O.Box 197, Caulfield East 3145, Australia Ann.Maharaj@BusEco.Monash.edu.au


Smoothing Non-Gaussian Time Series with Autoregressive Structure

Gary K. Grunwald, University of Melbourne, Victoria, Australia
Rob J. Hyndman, Monash University, Victoria, Australia

We consider nonparametric smoothing for time series which are clearly non-Gaussian (for example, counts, proportions, or nonnegative values) and which are subject to an autoregressive random component. Such a decomposition into a correlated random component and a smooth function can give a useful and interpretable model for a series (for example, for daily weather). The problem can be formulated in a general way to include most common non-Gaussian autoregressive models. The amount of smoothing can be chosen by penalized likelihood methods, and we give simulations and parametric bootstrap methods for studying and empirically estimating the penalty function. We illustrate the methods, the generality of their application, and several data analytic methods with real data examples.
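For a flavour of the penalized-likelihood smoothing step in the simplest case (Poisson counts with no autoregressive component, so only part of the model described above), one can maximize the Poisson log-likelihood minus a roughness penalty on the smooth component; the sketch below uses simulated counts and a fixed smoothing parameter.

```python
# Penalized-likelihood smoothing of a Poisson count series: maximize the Poisson
# log-likelihood of exp(f_t) minus a second-difference roughness penalty on f.
# Simplest case only (no autoregressive component), unlike the full model above.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
t = np.arange(200)
counts = rng.poisson(np.exp(1.0 + np.sin(2 * np.pi * t / 100)))

def objective(f, lam):
    loglik = np.sum(counts * f - np.exp(f))         # Poisson log-likelihood (up to const.)
    rough = np.sum(np.diff(f, 2) ** 2)              # second-difference roughness penalty
    return -loglik + lam * rough

lam = 50.0                                          # smoothing parameter, here fixed
res = minimize(objective, x0=np.log(counts + 1.0), args=(lam,), method="L-BFGS-B")
smooth = np.exp(res.x)                              # fitted smooth mean function
print("fitted mean ranges from", round(smooth.min(), 2), "to", round(smooth.max(), 2))
```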

Gary Grunwald, Statistics Department, University of Melbourne, Parkville 3052, Australia garykg@stats.mu.oz.au
http://www.stats.mu.oz.au:8001/~garykg/smoothing.ps.



Testing Change Points in a Non-Ergodic Model

Marc Raimondo, Australian National University, Canberra, Australia

A functional limit theorem with a particular function class and topology is derived for non-ergodic type time series. This limit theorem allows us to study the asymptotic law of the associated Likelihood Ratio Test (LRT) statistic for testing the presence of a change in the covariance parameter in the Explosive Gaussian Auto-Regressive (AR) model. We show that the level of the LRT cannot be approximated without introducing appropriate normalization. The limit law of a particular Weighted Likelihood Ratio Test (WLRT) is examined through a simulation study and is compared to the well-known Kolmogorov distribution obtained in the stationary case; we conclude that, for practical applications, when the root is very close to unity one can use the same thresholds as in the stationary case. This procedure is applied to the study of three real time series known to be non-stationary.

Marc Raimondo, Centre for Mathematics and its Applications, Australian National University, Canberra ACT 0200, Australia raimondo@alphasun.anu.edu.au


Testing for a Weak Stationarity of Time Series Data

Gan Ohama, Takashi Yanagawa, Kyushu University, Fukuoka, Japan

For testing weak stationarity, Okabe and Nakano [{\it Hokkaido Math. J.} 20 (1991): 45-90] set up a criterion to decide whether any given $d$-dimensional data set can be regarded as a realization of a stationary time series that has the sample autocovariance function as its autocovariance function. We generalize this criterion and propose a method of testing stationarity. The usefulness of the method is demonstrated by simulation and also by application to practical data.

Takashi Yanagawa, Graduate School of Math., Kyushu University, Fukuoka 812, Japan yanagawa@math.kyushu-u.ac.jp


Gaussian Density Products and Gaussian Approximations with Applications to Non-linear Filtering and Estimation

Dawei Huang, Queensland University of Technology, Australia

In this talk, we introduce a method for non-linear filtering based on the following result. Suppose that there are two Gaussian densities, both functions of a vector variable $x$. The product of these densities can be rewritten as another product of two Gaussian densities, of which one is a function of $x$ while the other is independent of $x$. Using linearization, we can derive the extended Kalman filters from this result directly. However, it may not be accurate and robust. As an alternative, we introduce Gaussian approximations to solve this problem. Gaussian approximations aim at approximating a non-negative function using Gaussian densities. As a result, it may not be necessary to use linearization for non-linear models. This method can be used for FM demodulation which arises in communications. It can also be used for recursive estimation of the frequency in a sinusoidal model.
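For reference, in the multivariate Gaussian case the quoted result takes the following standard form (a well-known identity, stated here in generic notation rather than the notation of the talk):

```latex
% Product of two Gaussian densities in x, rewritten as a Gaussian in x times a
% factor free of x (standard identity; generic notation, not the talk's notation).
\[
  N(x;\, a, A)\, N(x;\, b, B) \;=\; N(x;\, c, C)\, N(a;\, b, A + B),
\]
\[
  \text{where } C = \big(A^{-1} + B^{-1}\big)^{-1}, \qquad
  c = C\big(A^{-1} a + B^{-1} b\big).
\]
% The second factor, N(a; b, A+B), does not involve x; the measurement update of
% the (extended) Kalman filter is one application of this factorization.
```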

Dawei Huang, Centre in Statistical Science and Industrial Mathematics, Queensland University of Technology, GPO Box 2434, Brisbane, Q4001, Australia huang@fsc.qut.edu.au

Friday 12 July: 14:00-15:50

IMS Invited: Plenary address


Why Should Statisticians Care About Wavelets?

Bernard W. Silverman, University of Bristol, UK

Wavelets are a topic of great current interest, both in statistics and in many other fields in mathematics and engineering. The reaction to wavelets in the statistical community has, in common with many other new developments in the past, revealed all the usual regrettable tensions between `theoreticians' and `practitioners'. The lecture will include a crash course in wavelets for non-specialists and then go on to discuss a range of recent and largely unpublished research, both theoretical and practical, by the speaker and collaborators, in order to illustrate the richness of this field, and its potential for future development. Topics likely to be covered include the advantages of wavelets for data with correlated noise; the analysis of nonstationary time series; multiple wavelets and some of their statistical aspects; and wavelet methods for fitting of deformable templates. Applications will be drawn from various medical and biological fields.
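As a minimal taste of the wavelet machinery for non-specialists, the sketch below performs generic soft-threshold wavelet shrinkage of a noisy signal using the PyWavelets package; it is a textbook-style illustration, not any of the methods discussed in the lecture.

```python
# Minimal wavelet shrinkage (soft thresholding) of a noisy signal using PyWavelets.
# A generic textbook-style illustration, not any of the methods in the lecture.
import numpy as np
import pywt

rng = np.random.default_rng(10)
t = np.linspace(0, 1, 1024)
signal = np.piecewise(t, [t < 0.3, (t >= 0.3) & (t < 0.7), t >= 0.7], [0.0, 1.0, -0.5])
noisy = signal + 0.2 * rng.normal(size=t.size)

coeffs = pywt.wavedec(noisy, "db4", level=6)                # discrete wavelet transform
sigma = np.median(np.abs(coeffs[-1])) / 0.6745              # noise scale from finest level
thresh = sigma * np.sqrt(2 * np.log(noisy.size))            # universal threshold
shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(shrunk, "db4")[: noisy.size]        # inverse transform

print("RMS error before:", round(float(np.sqrt(np.mean((noisy - signal) ** 2))), 3))
print("RMS error after: ", round(float(np.sqrt(np.mean((denoised - signal) ** 2))), 3))
```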

Bernard W. Silverman, Department of Mathematics, University of Bristol, University Walk, Bristol BS8 1TW, United Kingdom b.w.silverman@bristol.ac.uk
http://www.stats.bris.ac.uk.



Go back to table of contents for this issue of The IMS Bulletin