Photo: The recreation area in front of the D4 building, above the fountain.

Abstracts Research Seminar Winter Term 2015/16


Marius Hofert: Improved Algorithms for Computing Worst Value-at-Risk: Numerical Challenges and the Adaptive Rearrangement Algorithm

Numerical challenges inherent in algorithms for computing worst Value-at-Risk in homogeneous portfolios are identified and solutions as well as words of warning concerning their implementation are provided. Furthermore, both conceptual and computational improvements to the Rearrangement Algorithm for approximating worst Value-at-Risk for portfolios with arbitrary marginal loss distributions are given. In particular, a novel Adaptive Rearrangement Algorithm is introduced and investigated. These algorithms are implemented using the R package qrmtools.
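For orientation, here is a minimal sketch of the classical (non-adaptive) Rearrangement Algorithm that the talk builds on; the marginal distribution, grid size and tolerance are illustrative choices, and reference implementations (e.g. RA() and ARA()) are provided by the qrmtools package mentioned above.

```python
# Minimal sketch of the Rearrangement Algorithm for approximating worst VaR,
# assuming d losses with a common quantile function qf (an illustrative setup).
import numpy as np

def rearrangement_algorithm(qf, alpha=0.99, d=3, N=2**10, tol=1e-4, max_iter=100):
    """Approximate worst VaR_alpha for d losses with common quantile function qf."""
    # Discretize the upper tail [alpha, 1) of each marginal on an N-point grid.
    p = alpha + (1 - alpha) * (np.arange(N) + 0.5) / N
    X = np.column_stack([qf(p) for _ in range(d)])
    rng = np.random.default_rng(0)
    for j in range(d):                              # randomize columns to break ties
        X[:, j] = rng.permutation(X[:, j])
    worst = -np.inf
    for _ in range(max_iter):
        for j in range(d):
            others = X.sum(axis=1) - X[:, j]
            # Make column j oppositely ordered to the sum of the other columns.
            X[np.argsort(others), j] = np.sort(X[:, j])[::-1]
        new_worst = X.sum(axis=1).min()             # minimal row sum after rearranging
        if abs(new_worst - worst) <= tol * abs(new_worst):
            break
        worst = new_worst
    return worst

# Example: d = 3 Pareto(2) margins at the 99% confidence level.
print(rearrangement_algorithm(lambda p: (1 - p)**(-1/2) - 1))
```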


Claudia Klüppelberg: Modelling, estimation and model assessment of extreme space-time data

Max-stable processes can be viewed as the natural infinite-dimensional generalisation of multivariate extreme value distributions. We focus on the Brown-Resnick space-time process, a prominent max-stable model. We extend existing spatially isotropic models to anisotropic versions and use pairwise likelihood to estimate the model parameters. For regular grid observations we prove strong consistency and asymptotic normality of the estimators for fixed and increasing spatial domain, when the number of observations in time tends to infinity. We also present a statistical test for spatial isotropy versus anisotropy, which is based on asymptotic confidence intervals of the pairwise likelihood estimators. We fit the spatially anisotropic Brown-Resnick model and apply the proposed test to precipitation measurements in Florida. In addition, we present some recent diagnostic tools for model assessment. 

This is joint work with Sven Buhl.
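The estimation step rests on a pairwise (composite) likelihood, which can be written generically as follows; the bivariate Brown-Resnick density f_theta, the weights and the distance cutoff are model-specific details omitted here.

```latex
% Generic pairwise log-likelihood for observations x_t at sites s_1,...,s_D
% and times t = 1,...,T; f_theta is the bivariate density of the model and
% r is an optional distance cutoff (a tuning choice).
\[
  \mathrm{PL}(\theta)
    = \sum_{t=1}^{T} \sum_{1 \le i < j \le D} w_{ij}\,
      \log f_\theta\bigl(x_t(s_i), x_t(s_j)\bigr),
  \qquad
  w_{ij} = \mathbf{1}\{\lVert s_i - s_j \rVert \le r\}.
\]
```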


Andreas Hamel: From Multi-Utility Representations to Stochastic Orders and Central Regions - A Set Optimization Perspective

Many important order relations in Economics, Finance and Statistics can be represented by or are defined via families of scalar functions. This includes some non-complete preferences in Economics, in particular Bewley preferences, and stochastic orders. It will be demonstrated that each such order generates (i) a specific closure (hull) operator and (ii) a specific complete lattice of sets. In turn, the latter will be used as the image space for optimization problems with set-valued objective functions. A solution concept for such problems will be discussed and applied to particular cases. A current project aims at extending the approach to risk evaluation of multivariate positions via statistical depth functions.
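In symbols, the representations referred to here take the following standard form: an order is generated by a family of scalar functions, and the usual (first-order) stochastic order arises from the family of increasing functions applied in expectation.

```latex
% Multi-utility representation of a (possibly non-complete) preference, and
% the usual stochastic order as an example of the same pattern.
\[
  x \preceq y
  \;\Longleftrightarrow\;
  u(x) \le u(y) \ \text{ for all } u \in \mathcal{U},
  \qquad
  X \le_{\mathrm{st}} Y
  \;\Longleftrightarrow\;
  \mathbb{E}[f(X)] \le \mathbb{E}[f(Y)] \ \text{ for all increasing } f.
\]
```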


Nicolas Chopin: Sequential quasi-Monte Carlo and extensions

(joint work with Mathieu Gerber) In this talk, I will discuss SQMC (Sequential quasi-Monte Carlo), a class of algorithms obtained by introducing QMC point sets in particle filtering. Like particle filters, SQMC makes it possible to compute the likelihood and the sequence of filtering distributions of a given state-space (hidden Markov) model, but it converges faster than standard Monte Carlo particle filters. I will also discuss how to perform smoothing and how to apply SQMC within PMCMC (Particle Markov chain Monte Carlo), and present some numerical illustrations.

Relevant links:
arxiv.org/abs/1402.4039
arxiv.org/abs/1506.06117
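As background, here is a minimal sketch of the plain Monte Carlo bootstrap particle filter that SQMC improves upon, on a toy linear-Gaussian model; the model, parameter values and particle number are illustrative assumptions.

```python
# Bootstrap particle filter for x_t = rho*x_{t-1} + sx*eps_t, y_t = x_t + sy*eta_t.
# SQMC replaces the random points used here by quasi-Monte Carlo point sets.
import numpy as np

rng = np.random.default_rng(1)
T, N, rho, sx, sy = 100, 500, 0.9, 1.0, 0.5

# Simulate a synthetic data set from the model.
x_true = np.empty(T)
x_true[0] = rng.standard_normal() * sx / np.sqrt(1 - rho**2)
for t in range(1, T):
    x_true[t] = rho * x_true[t-1] + sx * rng.standard_normal()
y = x_true + sy * rng.standard_normal(T)

def log_lik_bootstrap_pf(y, N):
    """Particle-filter estimate of the log-likelihood of the observations."""
    x = rng.standard_normal(N) * sx / np.sqrt(1 - rho**2)   # x_0 ~ stationary law
    ll = 0.0
    for t in range(len(y)):
        if t > 0:
            x = rho * x + sx * rng.standard_normal(N)       # propagate particles
        logw = -0.5 * ((y[t] - x) / sy)**2 - np.log(sy * np.sqrt(2 * np.pi))
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())                          # likelihood increment
        x = rng.choice(x, size=N, p=w / w.sum())            # multinomial resampling
    return ll

print(log_lik_bootstrap_pf(y, N))
```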


Christian Brownlees: Realized Networks

In this work we introduce a LASSO-based regularization procedure for large-dimensional realized covariance estimators of log-prices. The procedure consists of shrinking the off-diagonal entries of the inverse realized covariance matrix towards zero. This technique produces covariance estimators that are positive definite and have a sparse inverse. We name the regularized estimator a realized network, since estimating a sparse inverse covariance matrix is equivalent to detecting the partial correlation network structure of the log-prices. We focus in particular on applying this technique to the Multivariate Realized Kernel and the Two-Scales Realized Covariance estimators based on refresh-time sampling. These are consistent covariance estimators that allow for market microstructure effects and asynchronous trading. The paper studies the large-sample properties of the regularized estimators and establishes conditions for consistent estimation of the integrated covariance and for consistent selection of the partial correlation network. As a by-product of the theory, we also establish novel concentration inequalities for the Multivariate Realized Kernel estimator. The methodology is illustrated with an application to a panel of US blue chips throughout 2009. Results show that the realized network estimator outperforms its unregularized counterpart in an out-of-sample global minimum variance portfolio prediction exercise.
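The shrinkage step can be illustrated with the graphical lasso applied to an ordinary sample covariance matrix; this is only a stand-in for the Multivariate Realized Kernel and Two-Scales estimators analysed in the paper, and the penalty level alpha is an arbitrary choice.

```python
# Shrink off-diagonal entries of the inverse covariance toward zero and read
# off the partial-correlation network; simulated returns stand in for
# high-frequency realized covariance estimators.
import numpy as np
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(0)
d, n = 10, 250
returns = rng.standard_normal((n, d)) @ np.diag(np.linspace(0.5, 1.5, d))
emp_cov = np.cov(returns, rowvar=False)

# Larger alpha penalizes off-diagonal precision entries more -> sparser inverse.
cov_reg, prec_reg = graphical_lasso(emp_cov, alpha=0.1)

# Nonzero off-diagonal entries of the precision matrix are the edges of the
# estimated partial-correlation ("realized") network.
edges = np.argwhere(np.triu(np.abs(prec_reg) > 1e-8, k=1))
print("edges in the estimated network:", len(edges))
```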


Martyn Plummer: Cuts in Bayesian Graphical Models

Large Bayesian models that combine data from different sources are sometimes difficult to manage, and may exhibit numerical problems such as lack of convergence of MCMC. There is a developing interest in "modularization" as an alternative to full probability modelling, i.e. dividing large models into smaller "modules" and controlling the degree of communication between them. Cuts are an extreme form of modularization. Informally, a cut works as a valve in the model graph, preventing information from flowing back from the data to certain parameters. Cuts have been used in many applications, but are particularly common in pharmacokinetic/pharmacodynamic (PK/PD) models. They have been popularized by the OpenBUGS software, which provides a cut function and a modified MCMC algorithm. Unfortunately, cuts do not work. I have shown that the OpenBUGS cut algorithm does not converge to a well-defined distribution, in the sense that the limiting distribution of the Markov chain depends on which sampling methods are used. This leaves us in a situation with a popular idea that is widely used but has no underlying theory and no valid implementation. I will speculate on where to go next.
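To make the "valve" picture concrete, here is a minimal sketch of the cut distribution on a toy two-module conjugate-normal model (entirely illustrative, not the PK/PD settings of the talk); in this toy case the second step can be sampled exactly, whereas the convergence problems discussed in the talk arise when it must itself be handled by MCMC.

```python
# Toy cut: module 1 has data z and parameter phi, module 2 has data y and
# parameter theta; the cut blocks feedback from y to phi.
import numpy as np

rng = np.random.default_rng(2)
phi_true, theta_true = 1.0, -0.5
z = phi_true + rng.standard_normal(50)               # data informing module 1
y = phi_true + theta_true + rng.standard_normal(50)  # data informing module 2

n_draws, tau2 = 10_000, 100.0                        # vague N(0, tau2) priors

# Module 1: z_i ~ N(phi, 1). Draw phi from p(phi | z) only.
v_phi = 1.0 / (len(z) + 1.0 / tau2)
phi = v_phi * z.sum() + np.sqrt(v_phi) * rng.standard_normal(n_draws)

# Module 2: y_j ~ N(phi + theta, 1). For each phi draw, draw theta from p(theta | y, phi).
v_theta = 1.0 / (len(y) + 1.0 / tau2)
theta = v_theta * (y.sum() - len(y) * phi) + np.sqrt(v_theta) * rng.standard_normal(n_draws)

# (phi, theta) targets the cut distribution p(phi | z) p(theta | y, phi),
# which is generally different from the full posterior p(phi, theta | z, y).
print(phi.mean(), theta.mean())
```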


Yee Whye Teh: Bayesian Nonparametrics in Mixture and Admixture Modelling

Mixture and admixture models are ubiquitous across many disciplines where data exhibit clustering structure. Examples include document topic modelling, genetic admixture modelling and subgroup analysis. One of the difficulties in applying such methods is model selection, where one needs to determine the appropriate number of clusters in the data. In this talk I will give an overview of the Bayesian nonparametric approach to mitigating the model selection difficulty. The idea is to allow for an unbounded number of clusters to potentially explain the data, and to use Bayesian inference to avoid overfitting. The approach builds upon the Dirichlet process and the hierarchical Dirichlet process, and I will also describe more recent work on modelling time-varying clustering structure.
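The "unbounded number of clusters" idea can be seen in a few lines via the Chinese restaurant process, the predictive rule induced by the Dirichlet process; the concentration parameter alpha below is an illustrative choice.

```python
# Sample cluster labels from a Chinese restaurant process: the number of
# clusters is not fixed in advance and grows slowly with the sample size.
import numpy as np

def crp_assignments(n, alpha, rng):
    """Sample cluster labels for n items from a CRP with concentration alpha."""
    counts, labels = [], []
    for i in range(n):
        # Join existing cluster k with prob counts[k]/(i+alpha),
        # or open a new cluster with prob alpha/(i+alpha).
        probs = np.array(counts + [alpha], dtype=float) / (i + alpha)
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)          # open a new cluster
        else:
            counts[k] += 1
        labels.append(k)
    return np.array(labels)

rng = np.random.default_rng(3)
labels = crp_assignments(1000, alpha=2.0, rng=rng)
print("number of clusters used:", labels.max() + 1)   # of order alpha*log(n)
```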


Christian Robert: Approximate Bayesian computation for model choice via random forests

Introduced in the late 1990s, the ABC method can be considered from several perspectives, ranging from a purely practical motivation towards handling complex likelihoods to non-parametric justifications. We propose here a different analysis of ABC techniques and in particular of ABC model selection. Our exploration focuses on the idea that generic machine learning tools like random forests (Breiman, 2001) can help in conducting model selection among the highly complex models covered by ABC algorithms. Both theoretical and algorithmic output indicate that posterior probabilities are poorly estimated by ABC. I will describe how our search for an alternative first led us to abandon the use of posterior probabilities of the models under comparison as evidence tools. As a first substitute, we proposed to select the most likely model via a random forest procedure and to compute posterior predictive performances of the corresponding ABC selection method. It is only recently that we realised that random forest methods can also be adapted to the further estimation of the posterior probability of the selected model. I will also discuss our recommendation towards a sparse implementation of the random forest tree construction, using severe subsampling and reduced reference tables. The performance of the resulting ABC-random forest methodology, in terms of power in model choice and gain in computation time, is illustrated on several population genetics datasets.

[This is joint work with Jean-Marie Cornuet, Arnaud Estoup, Jean-Michel Marin and Pierre Pudlo. The current version is available as arxiv.org/pdf/1406.6288v3]
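A minimal sketch of the model-choice step follows, with two toy candidate models and ad hoc summary statistics standing in for the population-genetics settings of the paper; an R implementation of the full methodology is available (the abcrf package).

```python
# ABC model choice via a random forest: simulate a reference table of summary
# statistics under each candidate model, train a forest to predict the model
# index, and apply it to the observed summaries.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
n_sim, n_obs = 5000, 100

def simulate(model):
    """Simulate a dataset under model 0 (Gaussian) or 1 (Laplace, same variance)
    and reduce it to summary statistics."""
    theta = rng.normal(0, 1)                            # prior on the location
    if model == 0:
        x = rng.normal(theta, 1, n_obs)
    else:
        x = rng.laplace(theta, 1 / np.sqrt(2), n_obs)
    return [x.mean(), x.std(), np.mean(np.abs(x - x.mean()) ** 3)]

models = rng.integers(0, 2, n_sim)                      # uniform prior on the models
table = np.array([simulate(m) for m in models])
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(table, models)

x_obs = rng.laplace(0.3, 1 / np.sqrt(2), n_obs)         # "observed" data
s_obs = np.array([[x_obs.mean(), x_obs.std(), np.mean(np.abs(x_obs - x_obs.mean()) ** 3)]])
# predict_proba returns vote proportions; the talk stresses these should not
# be read directly as posterior probabilities of the models.
print("selected model:", rf.predict(s_obs)[0], "votes:", rf.predict_proba(s_obs)[0])
```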


Ivan Mizera: Borrowing Strength from Experience: Empirical Bayes Methods and Convex Optimization

We consider classical compound decision models, mixtures of normal and Poisson distributions, from the nonparametric perspective, that is, with a general mixing distribution. In this context, the otherwise rudimentary predictions for unobserved random effects can be remarkably improved by using experience: casting the problem in the classical empirical Bayes framework and elucidating either the unknown prior distribution or directly the optimal prediction rule from estimates of the marginal distribution of the data. Prominent examples include the prediction of individual success proportions in popular sports, or estimating Poisson rates in actuarial science. We discuss two methods that owe their feasibility to modern convex optimization: the first introduces a nonparametric maximum likelihood estimator of the mixture density subject to a monotonicity constraint on the resulting Bayes rule; the second implements, as an alternative to earlier-proposed EM-algorithm strategies, a new approach to the Kiefer-Wolfowitz nonparametric maximum likelihood estimator for mixtures, with a resulting reduction in computational effort of several orders of magnitude for typical problems. The procedures are compared with several existing alternatives in simulations, which focus in particular on situations with sparse mixing distributions.
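A minimal sketch of the second method in its grid-approximated form: the Kiefer-Wolfowitz NPMLE for a Gaussian location mixture posed as a finite-dimensional convex program. The use of cvxpy, the grid and the toy data are illustrative assumptions, not the authors' implementation.

```python
# Kiefer-Wolfowitz NPMLE on a grid: maximize the mixture log-likelihood over
# prior weights w on candidate support points, a concave program.
import numpy as np
import cvxpy as cp
from scipy.stats import norm

rng = np.random.default_rng(5)
# Sparse mixing distribution: most means are 0, a few are 3; unit noise.
mu = np.where(rng.random(300) < 0.9, 0.0, 3.0)
x = mu + rng.standard_normal(300)

grid = np.linspace(x.min(), x.max(), 200)          # candidate support points
A = norm.pdf(x[:, None] - grid[None, :])           # A[i, j] = phi(x_i - u_j)

w = cp.Variable(len(grid), nonneg=True)            # prior weights on the grid
problem = cp.Problem(cp.Maximize(cp.sum(cp.log(A @ w))), [cp.sum(w) == 1])
problem.solve()

w_hat = np.maximum(w.value, 0)
# Posterior-mean prediction rule implied by the estimated prior.
post_mean = (A * (w_hat * grid)).sum(axis=1) / (A @ w_hat)
top = np.argsort(x)[-3:]
print("estimated mass near 3:", w_hat[grid > 1.5].sum())
print("largest observations:", np.round(x[top], 2), "-> predictions:", np.round(post_mean[top], 2))
```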


Omiros Papaspiliopoulos: Building MCMC

The talk provides an overview of techniques for building Markov chain Monte Carlo algorithms, connecting some of the classic works in the area to very recent methods used for sampling numerically intractable distributions, methods based on transformations, and methods based on diffusions. The work in the talk is based on a book jointly written with Gareth O. Roberts.
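As a reminder of the most classical of these building blocks, a random-walk Metropolis sampler for a target density known only up to a normalizing constant can be written in a few lines; the target and step size below are illustrative.

```python
# Random-walk Metropolis: the normalizing constant of the target cancels in
# the acceptance ratio, so only an unnormalized log-density is needed.
import numpy as np

def rw_metropolis(log_target, x0, n_iter=10_000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x, lp = x0, log_target(x0)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + step * rng.standard_normal()
        lp_prop = log_target(prop)
        # Accept with probability min(1, pi(prop)/pi(x)).
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Example: a bimodal target, pi(x) proportional to exp(-(x^2 - 4)^2 / 4).
chain = rw_metropolis(lambda x: -(x**2 - 4.0)**2 / 4.0, x0=0.0)
print(chain.mean(), chain.std())
```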