Photo: The recreation area in front of the D4 building, above the fountain.

Abstracts Research Seminar Winter Term 2011/12

Peter M. Bentler: Reinventing "Guttman Scaling" as a Statistical Model: Absolute Simplex Theory

Guttman scaling is a methodology for transforming 0-1 responses to a set of survey, questionnaire, or interview questions into a unidimensional composite score. Based on an analysis of the item response patterns of a set of respondents, it was popular from 1950 to 1980. With the growth of item response theory (IRT), which uses parameterized logistic models to explain the probability of item endorsement from item and person parameters, Guttman scaling fell out of favor. This is unfortunate, because this type of scaling has many unrecognized unique properties (Bentler, 1971, 2009, 2011a, b; Bentler & Yuan, 2011). One illustrative property is that a total score based on a weighted sum will rank-order individuals identically regardless of the choice of arbitrary positive item weights. In this talk I describe my absolute simplex theory (AST) for binary data with two alternative parameterizations. The theory includes new approaches to extended mathematical model structures and properties; new statistical methods for estimating parameters and testing the model's goodness of fit; the attribute CDF; a continuous unidimensional scale; and errors of measurement in the CDF.
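
As a quick illustration of the weighting property, the following sketch (a toy example of my own, not material from the talk) builds a perfect Guttman response pattern in NumPy and checks that arbitrary positive item weights always reproduce the same rank order of respondents.

    import numpy as np

    # Hypothetical perfect Guttman scale: each respondent endorses the first k items.
    # Rows = respondents, columns = items ordered from "easiest" to "hardest".
    X = np.array([
        [0, 0, 0, 0],
        [1, 0, 0, 0],
        [1, 1, 0, 0],
        [1, 1, 1, 0],
        [1, 1, 1, 1],
    ])

    rng = np.random.default_rng(0)
    reference_order = np.argsort(X.sum(axis=1))        # ranking by the unweighted total score

    for _ in range(5):
        w = rng.uniform(0.1, 10.0, size=X.shape[1])    # arbitrary positive item weights
        weighted_order = np.argsort(X @ w)             # ranking by the weighted total score
        assert np.array_equal(weighted_order, reference_order)

    print("Every positive weighting produced the same rank order of respondents.")

The assertion holds because, in a perfect cumulative pattern, a respondent who endorses more items endorses a superset of the items endorsed by anyone with a lower score, so any positive weights preserve the ordering of the weighted sums.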


Hedibert Lopes: Cholesky Stochastic Volatility

(Authors: Hedibert Lopes, Robert McCulloch and Ruey Tsay)

Multivariate volatility has many important applications in finance, including asset allocation and risk management. Estimating multivariate volatility, however, is not straightforward, because of two major difficulties. The first is the curse of dimensionality: for p assets there are p(p+1)/2 volatility and cross-correlation series, and the commonly used volatility models often have many parameters, making them impractical for real applications. The second is that the conditional covariance matrix must be positive definite at all time points, which is not easy to maintain when the dimension is high. In this paper, we develop a new approach to modeling multivariate volatility, which we name Cholesky Stochastic Volatility (CSV). Our approach is Bayesian, and we carefully derive prior distributions with an appealing practical flavor that allows us to search for simplifying structure without placing hard restrictions on our model space. We illustrate our approach with a number of real and synthetic examples, including a real application based on the S&P 100 components.
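
The sketch below only illustrates the basic appeal of a Cholesky-type parameterization, namely that a covariance matrix built from unconstrained parameters through a unit lower-triangular factor and a positive diagonal is positive definite by construction; the function and toy values are my own and do not reproduce the authors' Bayesian CSV model. For p = 4 it also shows the p(p+1)/2 = 10 free covariance elements split into 4 log-variances and 6 lower-triangular entries.

    import numpy as np

    def build_covariance(log_vars, lower_entries, p):
        """Reconstruct a p x p covariance matrix from unconstrained parameters:
        p log-variances and p(p-1)/2 lower-triangular entries."""
        L = np.eye(p)                                  # unit lower-triangular factor
        L[np.tril_indices(p, k=-1)] = lower_entries
        D = np.diag(np.exp(log_vars))                  # positive "volatility" diagonal
        return L @ D @ L.T                             # symmetric positive definite by construction

    rng = np.random.default_rng(1)
    p = 4
    for t in range(3):                                 # e.g. three time points
        log_vars = rng.normal(size=p)                  # in CSV these would follow SV processes
        lower = rng.normal(size=p * (p - 1) // 2)
        Sigma_t = build_covariance(log_vars, lower, p)
        assert np.all(np.linalg.eigvalsh(Sigma_t) > 0)
        print(f"t={t}: smallest eigenvalue = {np.linalg.eigvalsh(Sigma_t).min():.4f}")

Because positive definiteness is automatic, the modeling effort can concentrate on how the log-variances and lower-triangular entries evolve over time.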


Anthony Brabazon: Natural Computing and Finance

Natural computing can be broadly defined as the development of computational algorithms using metaphorical inspiration from systems and phenomena that occur in the natural world. To date, the majority of these algorithms stem from biological inspiration, based on the observation that populations of organisms are continually adapting in multi-faceted, uncertain, dynamic environments. These characteristics fit well with the nature of financial markets and, prima facie, make these algorithms interesting for financial modelling applications. This seminar will outline a series of biologically inspired algorithms which have utility for optimisation and model-induction purposes and which therefore have wide application in financial modelling.
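
As a generic illustration of the population-based search idea (not an algorithm taken from the seminar), the sketch below evolves a small population toward the minimum of a toy objective through repeated selection and mutation.

    import numpy as np

    def objective(x):
        return np.sum(x ** 2)                          # toy fitness: smaller is better

    rng = np.random.default_rng(42)
    dim, pop_size, generations = 5, 30, 200
    population = rng.normal(scale=3.0, size=(pop_size, dim))

    for _ in range(generations):
        fitness = np.array([objective(ind) for ind in population])
        parents = population[np.argsort(fitness)[: pop_size // 2]]       # keep the fittest half
        offspring = parents + rng.normal(scale=0.3, size=parents.shape)  # mutate the parents
        population = np.vstack([parents, offspring])                     # form the next generation

    best = min(population, key=objective)
    print("best fitness found:", objective(best))

In financial applications the toy objective would be replaced by, for example, a portfolio risk measure or a model-fit criterion, while the select-and-mutate loop stays essentially unchanged.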


Leonhard Held: Introducing Bayes Factors

Statistical inference is traditionally taught exclusively from a frequentist perspective at both the undergraduate and the graduate level. If Bayesian approaches are discussed at all, then often only Bayesian parameter estimation is described, perhaps showing the formal equivalence of a Bayesian reference analysis and the frequentist approach. However, the Bayesian approach to hypothesis testing and model selection is intrinsically different from the classical approach and offers key insights into the nature of statistical evidence. In this talk I will give an elementary introduction to Bayesian model selection with Bayes factors. I will then summarize important results on the relationship between P-values and Bayes factors. A universal finding is that the evidence against a simple null hypothesis is not nearly as strong as the P-value might suggest. I will also describe more recent work on Bayesian model selection in generalized additive models using hyper-g priors.
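
One well-known calibration of this point is the lower bound -e * p * ln(p) on the Bayes factor for a simple null hypothesis (valid for p < 1/e); the short sketch below evaluates it for a few conventional P-values. The bound itself is standard, though it may not be the exact result discussed in the talk.

    import math

    def min_bayes_factor(p):
        """Lower bound -e * p * ln(p) on the Bayes factor for H0, valid for p < 1/e;
        smaller values mean stronger evidence against H0."""
        return -math.e * p * math.log(p) if p < 1 / math.e else 1.0

    for p in (0.05, 0.01, 0.001):
        bf = min_bayes_factor(p)
        print(f"p = {p:<6} -> minimum Bayes factor {bf:.3f} "
              f"(at most about {1 / bf:.1f}:1 against H0)")

For p = 0.05 the bound is roughly 0.41, i.e. at most about 2.5:1 evidence against the null, far weaker than "1 in 20" language suggests.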


Gilles Celeux: Different Points of View for Selecting a Latent Structure Model

Latent structure models are an efficient tool for dealing with heterogeneity and for model-based cluster analysis. These two points of view can lead to different methods of statistical inference (parameter estimation and model selection). After a survey highlighting their differences, the consequences of these two points of view for model selection will be analyzed and illustrated in both the maximum likelihood and the Bayesian context.
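
To give a flavour of how the two viewpoints can disagree in practice, the hypothetical sketch below compares BIC (density-estimation view) with one common formulation of the ICL criterion (clustering view: BIC plus twice the classification entropy) for Gaussian mixtures on simulated data. The talk itself may use different criteria and examples.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Two overlapping Gaussian clusters in two dimensions.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, size=(200, 2)),
                   rng.normal(2.5, 1.0, size=(200, 2))])

    for k in range(1, 5):
        gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
        tau = gm.predict_proba(X)                                      # posterior class probabilities
        entropy = -np.sum(tau * np.log(np.clip(tau, 1e-12, None)))     # classification entropy
        bic = gm.bic(X)                    # density-estimation view (lower is better)
        icl = bic + 2 * entropy            # clustering view: penalizes fuzzy assignments
        print(f"k={k}: BIC={bic:.1f}  ICL={icl:.1f}")

With strongly overlapping components, the entropy penalty typically makes the clustering criterion prefer fewer, better-separated clusters than the density-estimation criterion.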


Christoph Freudenthaler: Matrix and Tensor Factorization from a Machine Learning Perspective

Matrix decomposition, e.g. PCA or, more generally, SVD, is a well-known tool from linear algebra used for purposes such as inferring and interpreting unobserved heterogeneity or decorrelating/compressing the columns of a (design) matrix. Recently, research in recommender systems has applied an adapted version of matrix (and later tensor) decomposition to impute missing values of the decomposed matrix, e.g. to predict missing user-movie ratings for a set of known users U and known items I. Taking the perspective of a machine learner, I will focus on matrix and tensor decomposition as a predictive model and (1) discuss existing decomposition approaches, (2) generalize them to a specific class of regression models, (3) relate this class of models to standard prediction models such as polynomial regression, and (4) illustrate the increased predictive accuracy of this class on the rating prediction task of recommender systems.
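
A minimal version of this predictive use of matrix decomposition, on hypothetical toy data rather than the models discussed in the talk, is sketched below: a latent-factor model is fitted by stochastic gradient descent on the observed entries only and then used to predict the missing ones.

    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_items, n_factors = 50, 40, 5

    # Simulate a low-rank "true" rating matrix and observe only 20% of its entries.
    R = rng.normal(size=(n_users, n_factors)) @ rng.normal(size=(n_items, n_factors)).T
    mask = rng.random(R.shape) < 0.2
    obs = np.argwhere(mask)                            # observed (user, item) index pairs

    P = 0.1 * rng.normal(size=(n_users, n_factors))    # user latent factors
    Q = 0.1 * rng.normal(size=(n_items, n_factors))    # item latent factors
    lr, reg = 0.02, 0.05

    for epoch in range(50):
        for u, i in obs[rng.permutation(len(obs))]:
            err = R[u, i] - P[u] @ Q[i]                # error on one observed rating
            pu = P[u].copy()
            P[u] += lr * (err * Q[i] - reg * P[u])     # SGD step with L2 regularization
            Q[i] += lr * (err * pu - reg * Q[i])

    rmse = np.sqrt(np.mean((R[~mask] - (P @ Q.T)[~mask]) ** 2))
    print("RMSE on the unobserved entries:", round(float(rmse), 3))

The key difference from classical SVD is that the loss is evaluated only over the observed cells, so the factorization becomes a predictive regression model for the missing ones.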


Gary Koop: Hierarchical Shrinkage in Time-Varying Parameter Models

(Authors: Miguel A. G. Belmonte, Gary Koop, Dimitris Korobilis)

In this paper, we forecast EU-area inflation with many predictors using time-varying parameter models. Time-varying parameter models are parameter-rich, and the time span of our data is relatively short; both facts motivate a desire for shrinkage. In constant-coefficient regression models, the Bayesian Lasso is gaining popularity as an effective tool for achieving such shrinkage. In this paper, we develop econometric methods for using the Bayesian Lasso with time-varying parameter models. Our approach allows the coefficient on each predictor to be (i) time-varying, (ii) constant over time, or (iii) shrunk to zero, and the econometric methodology decides automatically which category each coefficient belongs in. Our empirical results indicate the benefits of such an approach.
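
The building block behind the constant-coefficient Bayesian Lasso is the standard scale-mixture-of-normals representation of the Laplace (double-exponential) prior; the sketch below checks that identity by simulation. It illustrates the prior only and is not the authors' time-varying extension.

    import numpy as np

    # Scale-mixture representation: tau^2 ~ Exp(rate = lambda^2 / 2) and
    # beta | tau^2 ~ N(0, tau^2) together imply beta ~ Laplace(0, 1/lambda).
    rng = np.random.default_rng(0)
    lam, n = 2.0, 1_000_000

    tau2 = rng.exponential(scale=2.0 / lam**2, size=n)   # exponential mixing distribution
    beta = rng.normal(0.0, np.sqrt(tau2))                # normal draw given the latent scale

    direct = rng.laplace(0.0, 1.0 / lam, size=n)         # Laplace(0, 1/lambda) drawn directly

    # If the identity holds, the two samples should have matching quantiles.
    qs = [0.05, 0.25, 0.5, 0.75, 0.95]
    print("mixture:", np.round(np.quantile(beta, qs), 3))
    print("direct :", np.round(np.quantile(direct, qs), 3))

It is this conditionally normal form that makes Lasso-type shrinkage tractable inside Gibbs samplers, which is what extensions to time-varying parameter settings build on.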