
Tags: Colloquium Series

The Statistics Department hosts weekly colloquia on a variety of statistical subjects, bringing in speakers from around the world.

Structured covariance matrices characterized by a small number of parameters have been widely used and play an important role in parameter estimation and statistical inference. To assess the adequacy of a specified covariance structure, one often adopts the classical likelihood-ratio test when the dimension of the data (p) is smaller than the sample size (n). However, this assessment becomes quite challenging when p is bigger than n, since the…
In this talk, we introduce a robust testing procedure — the Lq-likelihood ratio test (LqLR).  We derive the asymptotic distribution of our test statistic and demonstrate its robustness properties both analytically and numerically. We further investigate the properties of its influence function and breakdown point.  We also propose a method for selecting the tuning parameter q, and demonstrate that, with the q selected using our…
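The LqLR procedure itself is specific to the talk, but the classical likelihood-ratio test it generalizes is standard. The sketch below (a minimal illustration assuming Gaussian data and a fully specified hypothesized covariance `Sigma0`) computes the usual LRT statistic for H0: Σ = Σ0; note that when p > n the sample covariance is singular, its log-determinant is −∞, and the statistic breaks down, which is exactly the difficulty the abstract points to.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n, p = 200, 5                # classical regime: n > p
Sigma0 = np.eye(p)           # hypothesized covariance structure

X = rng.multivariate_normal(np.zeros(p), Sigma0, size=n)
S = np.cov(X, rowvar=False)  # sample covariance

# Gaussian LRT statistic for H0: Sigma = Sigma0:
#   T = n * (trace(S Sigma0^{-1}) - log det(S Sigma0^{-1}) - p)
A = S @ np.linalg.inv(Sigma0)
T = n * (np.trace(A) - np.log(np.linalg.det(A)) - p)

df = p * (p + 1) // 2        # free parameters in a p x p covariance
p_value = chi2.sf(T, df)     # asymptotic chi-square reference
```

Since x − log x − 1 ≥ 0 for every eigenvalue of A, the statistic T is always nonnegative, and large values indicate departure from the hypothesized structure.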
Experimental costs are rising, and it is important to use minimal resources to make statistical inference with maximal precision. Optimal design theory and ideas are increasingly applied to address design issues in a growing number of disciplines, including biomedicine, biochemistry, education, agronomy, manufacturing, toxicology and food science, to name a few. I first present a brief overview of optimal design…
We study the behavior of the restricted maximum likelihood (REML) estimator under a misspecified linear mixed model (LMM) that has received much attention in recent genome-wide association studies. The asymptotic analysis establishes consistency of the REML estimator of the variance of the errors in the LMM, and convergence in probability of the REML estimator of the variance of the random effects in the LMM to a certain limit, which is equal to the…
Massively large data sets are routine and ubiquitous given modern computer capabilities. What is not so routine is how to analyse these data. One approach is to aggregate the data sets according to some scientific criteria. The resultant data are perforce symbolic data, i.e., lists, intervals, histograms, and so on. Applications abound, especially in the medical and social sciences. Other data sets (small or large in size) are naturally symbolic…
Definitive Screening Designs (DSDs), discovered in 2011, are a new alternative to standard two-level screening designs. This family of designs has many desirable features. They require few runs while providing orthogonal main effects and avoiding any confounding of main effects by two-factor interactions. In addition, they allow for estimating any quadratic effect of the continuous factors. The two-factor interactions are correlated but…
We collect the coauthor and citation data for all research papers published in four of the top journals in statistics between 2003 and 2012, analyze the data from several different perspectives (e.g., patterns, trends, community structures) and present an array of interesting findings. (1) Both the average numbers of papers per author published in these journals and the fraction of self citations have been decreasing, but the proportion of…
Dimensional Analysis (DA) is a fundamental method in the engineering and physical sciences for analytically reducing the number of experimental variables prior to experimentation. The principal use of dimensional analysis is to deduce, from a study of the dimensions of the variables, limitations on the form of any possible relationship between those variables. The method is of great generality. In this talk, an overview/introduction of…
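The variable-reduction step of dimensional analysis can be mechanized: the dimensionless groups (Buckingham pi groups) are a basis for the null space of the dimension matrix. A minimal sketch for the textbook pendulum example (period t, length l, gravity g, mass m; the variable choice here is an illustration, not from the talk):

```python
import numpy as np
from scipy.linalg import null_space

# Dimension matrix: columns = variables (t, l, g, m),
# rows = exponents of base dimensions (M, L, T).
# e.g. g has dimensions L T^-2, so its column is (0, 1, -2).
D = np.array([
    [0, 0, 0, 1],    # M
    [0, 1, 1, 0],    # L
    [1, 0, -2, 0],   # T
])

pi_basis = null_space(D)          # each column = one dimensionless group
n_groups = pi_basis.shape[1]      # variables minus rank of D

# Scale the single null vector so the exponent of g is 1:
v = pi_basis[:, 0] / pi_basis[2, 0]
# v is proportional to (2, -1, 1, 0), i.e. the group t^2 g / l.
```

Four variables and three independent dimensions leave one dimensionless group, t²g/l, so any relationship among the four variables must reduce to t²g/l = constant.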
We propose sequential methods for obtaining approximate confidence limits and optimal sample sizes for the risk ratio (RR) of two independent binomial variates and a measure of reduction (MOR). The procedure is developed based on a modified maximum likelihood estimator (MLE) for the ratio. First-order asymptotic expansions are obtained for large-sample properties of the proposed procedure and we investigate its finite sample behavior through…
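The modified MLE and sequential rules are the talk's contribution, but the baseline they refine is the standard MLE of the risk ratio with a delta-method (log-scale Wald) confidence interval. A minimal sketch with hypothetical counts:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical data: x1 events in n1 trials vs. x2 events in n2 trials
x1, n1 = 30, 100
x2, n2 = 45, 120

p1_hat, p2_hat = x1 / n1, x2 / n2
rr_hat = p1_hat / p2_hat                 # MLE of the risk ratio

# Delta-method standard error of log(RR)
se_log = np.sqrt((1 - p1_hat) / x1 + (1 - p2_hat) / x2)
z = norm.ppf(0.975)                      # 95% two-sided
lo = np.exp(np.log(rr_hat) - z * se_log)
hi = np.exp(np.log(rr_hat) + z * se_log)
```

Working on the log scale keeps the interval inside (0, ∞) and makes the normal approximation more accurate; sequential procedures like the one in the talk aim to hit a prescribed interval width with the smallest expected sample size.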
Data processing and source identification using lower-dimensional hidden structure play an essential role in many fields of application, including image processing, neural networks, genome studies, signal processing and other areas where large datasets are often encountered. Representing a higher-dimensional random vector by a lower-dimensional one provides a statistical framework for the identification and separation of the sources.…
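The talk's specific model is not given here, but the simplest instance of recovering a low-dimensional representation is principal component analysis via the SVD. A small simulated sketch (the dimensions and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 500, 20, 3        # n samples, p observed dims, k hidden sources

# Simulate observations with k-dimensional hidden structure plus noise
sources = rng.normal(size=(n, k))
mixing = rng.normal(size=(k, p))
X = sources @ mixing + 0.1 * rng.normal(size=(n, p))

Xc = X - X.mean(axis=0)                  # center the data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:k].T                   # k-dimensional representation
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
```

When the hidden structure is genuinely k-dimensional, the top k singular directions capture nearly all the variance; methods such as independent component analysis go further and try to separate the individual sources, not just the subspace.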
