
Abstracts Research Seminar Summer Term 2020

Bettina Grün: Shrinkage Priors for Sparse Latent Class Analysis

Model-based clustering aims at finding latent groups in multivariate data based on mixture models. In this context, the latent class model is commonly used for multivariate categorical data. Important issues to address are the selection of the number of clusters as well as the identification of a suitable set of clustering variables. In a maximum likelihood estimation context, Fop et al. (2017) propose a stepwise procedure based on the BIC for both model and variable selection in the latent class model. They explicitly distinguish between relevant, irrelevant and redundant variables and indicate that their approach leads to a parsimonious solution in which the latent groups correspond to a known classification of the data.
In the Bayesian framework we propose a model specification based on shrinkage priors for the latent class model to obtain sparse solutions. Regarding variable selection in particular, we emphasize the distinction between the different roles of the variables made in the literature so far and its implications for the choice of prior. We outline estimation based on Markov chain Monte Carlo methods and post-processing strategies to identify the clustering solution. Advantages of the Bayesian approach are that the computationally demanding stepwise procedure is avoided, that regularization is naturally imposed to eliminate degenerate solutions, and that the exploratory nature of clustering is supported by enabling an adaptive way to reveal competing clustering structures, from coarser to more fine-grained solutions.
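
As a rough illustration of the variable-selection issue (not of the proposed shrinkage-prior specification itself), the following sketch simulates categorical data in which only a few variables carry class information; all dimensions, class sizes and probabilities are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_rel, n_irr = 500, 4, 6           # observations, relevant and irrelevant binary variables
K = 3                                  # number of latent classes
weights = np.array([0.5, 0.3, 0.2])    # class sizes
# class-specific success probabilities for the relevant variables only
theta = np.array([[0.9, 0.8, 0.2, 0.1],
                  [0.2, 0.3, 0.9, 0.8],
                  [0.5, 0.9, 0.5, 0.9]])

z = rng.choice(K, size=n, p=weights)             # latent class labels
X_rel = rng.binomial(1, theta[z])                # informative variables
X_irr = rng.binomial(1, 0.5, size=(n, n_irr))    # uninformative variables
X = np.hstack([X_rel, X_irr])

# a shrinkage prior should (ideally) pull the class-specific probabilities of the
# irrelevant columns towards a common value; here we just show the empirical contrast
for j in range(X.shape[1]):
    means = [X[z == k, j].mean() for k in range(K)]
    print(f"variable {j:2d}  class-specific means {np.round(means, 2)}")
```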

Paul Eisenberg: A roller coaster: energy markets, suboptimal control and pensions

In this talk I will introduce three threads of my active research.
In the first part of the presentation, we will look at energy futures markets.
Electricity companies are forced to trade their energy on different time horizons and often well in advance. To be profitable, it is crucial to anticipate energy supply and demand as well as possible. Recently, traders have shown an increasing desire to understand the price and price risk of energy contracts with delivery in the future (electricity futures). However, bearing in mind that electricity is a flow commodity which is delivered over time intervals and that delivery periods often overlap, modelling electricity futures is a non-trivial task. Prices for intersecting delivery periods depend on each other for obvious reasons, which has to be taken into account. We discuss different price models and arrive at a relatively simple, no-arbitrage-consistent (NA) and interpretable model.
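
To make the overlap constraint concrete, here is a minimal numerical sketch of the standard no-arbitrage relation that the price of a contract delivering over a union of periods is the duration-weighted average of the prices of its sub-period contracts; all prices below are purely illustrative.

```python
# Price of a futures contract delivering over [t0, t2] must equal the
# duration-weighted average of the contracts delivering over [t0, t1] and [t1, t2].
# All numbers below are illustrative only.
len_q1, len_q2 = 90.0, 91.0          # days in the two quarterly delivery periods
price_q1, price_q2 = 42.0, 38.0      # EUR/MWh, hypothetical quarterly futures prices

price_halfyear = (len_q1 * price_q1 + len_q2 * price_q2) / (len_q1 + len_q2)
print(f"consistent half-year price: {price_halfyear:.2f} EUR/MWh")
```
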
As a second topic, we discuss a method to find explicit error bounds for non-optimal controls in financial control problems. When an optimal control cannot be obtained, one often relies on numerics or on an ad-hoc choice of a control. In both cases one may ask how "good" the resulting 'approximate' or 'ad-hoc' control really is. Applying universal density bounds, we measure the error of a given control relative to the unknown optimal control.
In the last part of the talk we will look at the pension industry.
The increase in longevity, ultra-low interest rates and the guarantees associated with pension benefits have put significant strain on the pension industry. Consequently, insurers need to be in a financially sound position while offering satisfactory benefits to participants. In this paper, we propose a pension design that goes beyond the idea of annuity pools and unit-linked insurance products. The purpose is to replace traditional guarantees with low-volatility benefits, achieved mainly by collective smoothing algorithms and adequate asset management.
A possible pension design allowing insurance companies to offer pension products without classical guarantees will be presented and discussed. This ongoing project is based on joint work with "msg life Austria".
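
As one stylized example of what a collective smoothing rule can look like (not the design developed in the project), credited returns could follow an exponentially smoothed version of realised portfolio returns, with a collective buffer absorbing the difference; all parameters below are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
years = 30
raw_returns = rng.normal(0.04, 0.10, size=years)   # hypothetical annual portfolio returns

alpha = 0.2            # smoothing speed: how fast credited returns follow realised returns
credited = np.empty(years)
buffer = 0.0           # collective buffer (as a fraction of assets)
level = 0.03           # starting credited return
for t, r in enumerate(raw_returns):
    level = (1 - alpha) * level + alpha * r        # exponential smoothing of returns
    credited[t] = level
    buffer += r - level                            # buffer absorbs the difference

print(f"std of raw returns:      {raw_returns.std():.3f}")
print(f"std of credited returns: {credited.std():.3f}")
print(f"final buffer position:   {buffer:+.3f}")
```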

Sebastian Lerch: Deep learning models for distributional regression

Distributional regression approaches aim to model the conditional distribution of a variable of interest given a vector of explanatory variables, thereby going beyond classical regression models for the conditional mean only. In this talk, I will discuss various aspects of designing, estimating and evaluating distributional regression models utilizing recent advances from machine learning. In particular, modern deep learning methods offer various advantages over standard parametric distributional regression models. For example, distributional regression models based on neural networks make it possible to flexibly model nonlinear relations between arbitrary predictor variables and the forecast distribution parameters, which are learned automatically in a data-driven way rather than requiring prespecified link functions. Further, I will illustrate how neural networks can be leveraged to obtain flexible nonparametric models of the response distribution, and how prior knowledge about underlying processes can be incorporated into model building and estimation. The methodological developments will be illustrated in applications to probabilistic weather forecasting, where systematic errors of physical models of the atmosphere are corrected via distributional regression approaches.
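
A minimal sketch of the basic idea, assuming a Gaussian forecast distribution whose mean and log-standard deviation are produced by a small neural network fitted by minimising the negative log-likelihood; the toy data, architecture and optimiser are placeholders and much simpler than the models discussed in the talk.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n, d, h = 400, 2, 8                          # sample size, predictors, hidden units
X = rng.normal(size=(n, d))
# heteroscedastic toy data: nonlinear mean, predictor-dependent spread
y = np.sin(X[:, 0]) + rng.normal(scale=0.2 + 0.3 * np.abs(X[:, 1]))

def unpack(w):
    i = 0
    W1 = w[i:i + d * h].reshape(d, h); i += d * h
    b1 = w[i:i + h]; i += h
    W2 = w[i:i + h * 2].reshape(h, 2); i += h * 2
    b2 = w[i:i + 2]
    return W1, b1, W2, b2

def forward(w):
    W1, b1, W2, b2 = unpack(w)
    out = np.tanh(X @ W1 + b1) @ W2 + b2     # columns: mean and log-std of the forecast
    return out[:, 0], np.clip(out[:, 1], -5.0, 5.0)

def neg_log_lik(w):                          # Gaussian negative log-likelihood
    mu, log_sigma = forward(w)
    return np.sum(log_sigma + 0.5 * ((y - mu) / np.exp(log_sigma)) ** 2)

w0 = 0.1 * rng.normal(size=d * h + h + h * 2 + 2)
fit = minimize(neg_log_lik, w0, method="L-BFGS-B")
mu, log_sigma = forward(fit.x)
print("fitted negative log-likelihood:", round(fit.fun, 1))
print("average forecast std:", round(np.exp(log_sigma).mean(), 3))
```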

Tobias Fissler: Mind the Efficiency Gap

Parameter estimation via M- and Z-estimation is commonly considered to be equally powerful for semiparametric models of one-dimensional functionals. This is due to the fact that, under sufficient regularity conditions, there is a one-to-one relation between strictly consistent loss functions and oriented strict identification functions via integration and differentiation. When dealing with multivariate functionals such as several moments, quantiles at different levels or the pair (Value at Risk, Expected Shortfall), this one-to-one relation fails due to integrability conditions: not every identification function possesses an antiderivative. We discuss several implications of this gap between the classes of M- and Z-estimators, elaborating on two-step procedures and equivariance properties. Most importantly, we show that for certain functionals, such as the double quantile, the most efficient Z-estimator outperforms the most efficient M-estimator. This efficiency gap means that the semiparametric efficiency bound cannot be attained by the M-estimator in these cases.
We illustrate this gap for the examples of semiparametric models for two quantiles and for the pair (VaR, ES), and support the findings through a simulation study.
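
For readers less familiar with the terminology, the one-to-one relation and the integrability obstruction referred to above can be sketched as follows (standard notation from this literature; all regularity conditions omitted):

```latex
% In dimension one, strictly consistent losses L and oriented strict identification
% functions V for a functional T are linked by differentiation and integration:
\[
  V(x,y) = \partial_x L(x,y), \qquad
  \mathbb{E}\bigl[V\bigl(T(F),Y\bigr)\bigr] = 0 \quad \text{for } Y \sim F .
\]
% Z-estimation may in addition reweight the estimating equation, using h(x)\,V(x,y)
% for a suitable (matrix-valued) function h. In dimension one such a reweighted
% function can, under regularity and positivity conditions, be integrated back into
% a loss; for a k-dimensional functional, h(x)V(x,y) is a gradient field in x only
% if the integrability (symmetry) condition
\[
  \partial_{x_i}\bigl[h(x)V(x,y)\bigr]_j \;=\; \partial_{x_j}\bigl[h(x)V(x,y)\bigr]_i ,
  \qquad i \neq j ,
\]
% holds, which in general it does not; this is the source of the gap between the
% classes of M- and Z-estimators.
```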

The talk is based on joint ongoing work with Timo Dimitriadis and Johanna Ziegel.

Johannes Heiny: Recent advances in large sample correlation matrices and their applications

Many fields of modern science are faced with high-dimensional data sets. In this talk, we investigate the entrywise and spectral properties of large sample correlation matrices.
First, we study point process convergence for sequences of iid random walks and derive asymptotic theory for the extremes of these random walks. We show convergence of the maximum of these random walks to the Gumbel distribution under a finite-variance assumption. As a consequence, we derive the joint convergence of the off-diagonal entries of sample covariance and correlation matrices of a high-dimensional sample whose dimension increases with the sample size. This generalizes known results on the asymptotic Gumbel property of the largest entry. As an application, we obtain new tests for the population covariance and correlation.
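
A small simulation sketch of the object studied in this first part, the largest off-diagonal entry of a high-dimensional sample correlation matrix; it only visualises the empirical distribution of the maxima and does not reproduce the exact normalisation of the asymptotic Gumbel result.

```python
import numpy as np

rng = np.random.default_rng(11)
n, p, reps = 200, 100, 300           # sample size, dimension, Monte Carlo replications

max_entries = np.empty(reps)
for b in range(reps):
    X = rng.normal(size=(n, p))                  # iid data; other distributions could be used
    R = np.corrcoef(X, rowvar=False)             # p x p sample correlation matrix
    off = np.abs(R[np.triu_indices(p, k=1)])     # off-diagonal entries
    max_entries[b] = off.max()

# the maxima concentrate and, suitably normalised, are asymptotically Gumbel
print("mean of max |r_ij|:", round(max_entries.mean(), 3))
print("upper 5% quantile: ", round(np.quantile(max_entries, 0.95), 3))
```
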
In the second part of this talk, we consider a p-dimensional population with iid coordinates in the domain of attraction of a stable distribution with index $\alpha\in (0,2)$. Since the variance is infinite, the sample covariance matrix based on a sample of size n from the population is not well behaved, and it is of interest to use the sample correlation matrix R instead. We find the limiting distributions of the eigenvalues of R when both the dimension p and the sample size n grow to infinity such that $p/n\to \gamma$. The moments of the limiting distributions $H_{\alpha,\gamma}$ are fully identified as the sum of two contributions: the first from the classical Marchenko-Pastur law and the second due to the heavy tails. Moreover, the family $\{H_{\alpha,\gamma}\}$ has continuous extensions at the boundaries $\alpha=2$ and $\alpha=0$, leading to the Marchenko-Pastur law and a modified Poisson distribution, respectively. A simulation study of these limiting distributions is also provided for comparison with the Marchenko-Pastur law.
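
A simulation sketch in the spirit of this second part, comparing the eigenvalues of the sample correlation matrix R for heavy-tailed coordinates with the Marchenko-Pastur density; the tail index, aspect ratio and Pareto-type coordinates below are arbitrary illustrative choices.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
alpha, gamma = 1.5, 0.5                 # tail index in (0,2), aspect ratio p/n
n = 2000
p = int(gamma * n)

# iid coordinates with Pareto-type tails (domain of attraction of an alpha-stable law)
X = rng.pareto(alpha, size=(n, p)) * rng.choice([-1.0, 1.0], size=(n, p))
R = np.corrcoef(X, rowvar=False)        # sample correlation matrix
eig = np.linalg.eigvalsh(R)

# Marchenko-Pastur density with ratio gamma (unit-variance case)
a, b = (1 - np.sqrt(gamma)) ** 2, (1 + np.sqrt(gamma)) ** 2
x = np.linspace(a, b, 400)
mp = np.sqrt((b - x) * (x - a)) / (2 * np.pi * gamma * x)

plt.hist(eig, bins=60, density=True, alpha=0.5, label="eigenvalues of R")
plt.plot(x, mp, label=f"Marchenko-Pastur, gamma={gamma}")
plt.legend()
plt.show()
```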


Alexander J. McNeil: Modelling volatility with v-transforms

An approach to the modelling of financial return series using a class of uniformity-preserving transforms for uniform random variables is proposed. V-transforms describe the relationship between quantiles of the return distribution and quantiles of the distribution of a predictable volatility proxy variable constructed as a function of the return. V-transforms can be represented as copulas and permit the construction and estimation of models that combine arbitrary marginal distributions with linear or non-linear time series models for the dynamics of the volatility proxy. The idea is illustrated using a transformed Gaussian ARMA process for volatility, yielding the class of VT-ARMA copula models. These can replicate many of the stylized facts of financial return series and facilitate the calculation of marginal and conditional characteristics of the model including quantile measures of risk. Estimation of models is carried out by adapting the exact maximum likelihood approach to the estimation of ARMA processes. 
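
A quick numerical sanity check of what is arguably the simplest uniformity-preserving transform of this kind, V(u) = |2u - 1|; for a symmetric return distribution it coincides with the probability integral transform of the volatility proxy |X - median|. This is only a textbook special case, not the full model class of the talk; the t(5) returns are a made-up example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = stats.t.rvs(df=5, size=100_000, random_state=rng)   # hypothetical returns
u = stats.t.cdf(x, df=5)                                 # PIT of the return, U = F(X)
v = np.abs(2 * u - 1)                                    # simplest (symmetric) v-transform

# For a symmetric distribution, v equals the PIT of the volatility proxy |X - median(X)|,
# so the transform is uniformity-preserving: test v against U(0,1).
print(stats.kstest(v, "uniform"))

proxy = np.abs(x - stats.t.median(df=5))
print("max |v - PIT(proxy)| =", np.max(np.abs(v - (2 * stats.t.cdf(proxy, df=5) - 1))))
```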

Tobias Kley: Integrated copula spectral densities and their applications

Copula spectral densities are defined in terms of the copulas associated with the pairs $(X_{t+k}, X_t)$ of a process $(X_t)_{t \in \mathbb{Z}}$. They can thereby capture a wide range of dynamic features, such as changes in the conditional skewness or dependence of extremes, that traditional spectra cannot account for. A consistent estimator for copula spectra was suggested by Kley et al. [Bernoulli 22 (2016) 1707–831], who prove a functional central limit theorem (fCLT) according to which the estimator, considered as a stochastic process indexed in the quantile levels, converges weakly to a Gaussian limit. As in the traditional case, no fCLT exists for this estimator when it is considered as a stochastic process indexed in the frequencies. In this talk, we consider estimation of integrated copula spectra and show that our estimator converges weakly as a stochastic process indexed in the quantile levels and frequencies. Interestingly, and in contrast to the estimator considered by Kley et al., estimation of the unknown marginal distribution has an effect on the asymptotic covariance. We apply subsampling to obtain confidence intervals for the integrated copula spectra. Further, our results allow copula spectra to be used to test a wide range of hypotheses. As an example, we suggest a test of the hypothesis that the underlying process is pairwise time-reversible.
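
A naive illustration of the raw ingredient behind copula spectra, namely a cross-periodogram of indicator ("clipped") versions of the normalised ranks at two quantile levels, followed by a crude integration over frequencies; smoothing, centring terms and all asymptotic subtleties are omitted, and the toy process is a made-up GARCH-type recursion.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 1024
# a toy series with volatility clustering (so that extremes are serially dependent)
eps = rng.normal(size=T)
x = np.empty(T); s2 = 1.0
for t in range(T):
    s2 = 0.1 + 0.85 * s2 + 0.1 * (x[t - 1] ** 2 if t > 0 else 0.0)
    x[t] = np.sqrt(s2) * eps[t]

tau1, tau2 = 0.05, 0.95                          # quantile levels
F = (np.argsort(np.argsort(x)) + 1) / (T + 1)    # empirical PIT (normalised ranks)
a = (F <= tau1).astype(float)                    # clipped series at the lower level
b = (F <= tau2).astype(float)                    # clipped series at the upper level

# raw rank-based cross-periodogram between the two clipped series
fa, fb = np.fft.rfft(a - a.mean()), np.fft.rfft(b - b.mean())
cross = fa * np.conj(fb) / (2 * np.pi * T)
freqs = np.fft.rfftfreq(T) * 2 * np.pi

# a crude 'integrated' quantity: Riemann sum over frequencies up to pi/2
cut = freqs <= np.pi / 2
print("integrated cross-periodogram up to pi/2:", np.sum(cross[cut]) * (freqs[1] - freqs[0]))
```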

(This is joint work with H. Dette, Y. Goto, M. Hallin, R. Van Hecke and S. Volgushev.)

Yannick Hoga: The Uncertainty in Extreme Risk Forecasts from Covariate-Augmented Volatility Models

For a GARCH-type volatility model with covariates, we derive asymptotically valid forecast intervals for risk measures such as the Value-at-Risk and Expected Shortfall. To forecast these, we use estimators from extreme value theory. In the volatility model, we allow for the inclusion of exogenous variables, e.g., volatility indices or high-frequency volatility measures. Our framework for the volatility model captures leverage effects, thus allowing for sufficient flexibility in applications. In simulations, we find the coverage of the forecast intervals to be adequate. Finally, we investigate whether using covariate information from volatility indices or high-frequency data improves risk measure forecasts for real data. While volatility indices appear to be helpful in this regard within our framework, intra-day data are not.
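
A stylized sketch of such a forecasting pipeline, assuming a simulated GARCH-type recursion with one made-up exogenous covariate and a Hill/Weissman-type extreme quantile of the standardized residuals; the parameter values and the simple Pareto-tail approximation for ES are illustrative placeholders, not the specification used in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
T = 2000
omega, alpha, beta, pi_ = 0.02, 0.08, 0.88, 0.05   # GARCH-X-type parameters (illustrative)
vix_like = np.abs(rng.normal(1.0, 0.2, size=T))    # hypothetical exogenous volatility covariate

eps = rng.standard_t(df=6, size=T)                 # heavy-tailed innovations
r = np.empty(T); s2 = np.empty(T); s2[0] = 0.5
for t in range(T):
    if t > 0:
        s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1] + pi_ * vix_like[t - 1]
    r[t] = np.sqrt(s2[t]) * eps[t]

# EVT step: Hill / Weissman estimator for an extreme quantile of the standardized losses
z = -r / np.sqrt(s2)                               # standardized losses
k = 100                                            # number of upper order statistics
z_sorted = np.sort(z)[::-1]
hill = np.mean(np.log(z_sorted[:k])) - np.log(z_sorted[k])      # tail index estimate
p = 0.001
var_z = z_sorted[k] * (k / (T * p)) ** hill        # Weissman extreme quantile of Z
es_z = var_z / (1 - hill) if hill < 1 else np.nan  # crude ES for a Pareto-type tail

# one-step-ahead volatility forecast and risk measure forecasts for the returns
s2_next = omega + alpha * r[-1] ** 2 + beta * s2[-1] + pi_ * vix_like[-1]
print(f"VaR(99.9%) forecast: {np.sqrt(s2_next) * var_z:.2f}")
print(f"ES(99.9%)  forecast: {np.sqrt(s2_next) * es_z:.2f}")
```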

Hannes Leeb: Prediction when fitting simple models to high-dimensional data

We study linear subset regression in the context of a high-dimensional linear model. Consider y = a + b'z + e with univariate response y and a d-vector of random regressors z, and a submodel where y is regressed on a set of p explanatory variables that are given by x = M'z, for some d x p matrix M. Here, 'high-dimensional' means that the number d of available explanatory variables in the overall model is much larger than the number p of variables in the submodel. In this paper, we present Pinsker-type results for prediction of y given x. In particular, we show that the mean squared prediction error of the best linear predictor of y given x is close to the mean squared prediction error of the corresponding Bayes predictor $E[y|x]$, provided only that p/log(d) is small. We also show that the mean squared prediction error of the (feasible) least-squares predictor computed from n independent observations of (y,x) is close to that of the Bayes predictor, provided only that both p/log(d) and p/n are small. Our results hold uniformly in the regression parameters and over large collections of distributions for the design variables z.
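
A small simulation in the spirit of this setting, comparing the feasible least-squares predictor based on x with the best linear predictor of y given x; the choices of d, p and n are arbitrary, and with the Gaussian regressors used here the best linear predictor coincides with the Bayes predictor E[y|x].

```python
import numpy as np

rng = np.random.default_rng(6)
d, p, n = 500, 10, 200                       # overall dimension, submodel size, sample size
a, sigma = 1.0, 1.0
b = rng.normal(size=d) / np.sqrt(d)          # coefficients of the overall linear model

def draw(m):
    z = rng.normal(size=(m, d))              # Gaussian design (for simplicity)
    y = a + z @ b + sigma * rng.normal(size=m)
    return y, z[:, :p]                       # x = M'z with M selecting the first p coordinates

# feasible least-squares predictor fitted on n observations of (y, x)
y_tr, x_tr = draw(n)
coef = np.linalg.lstsq(np.column_stack([np.ones(n), x_tr]), y_tr, rcond=None)[0]

# out-of-sample comparison with the best linear predictor a + b[:p]'x
y_te, x_te = draw(10_000)
pred_blp = a + x_te @ b[:p]
pred_ls = np.column_stack([np.ones(len(y_te)), x_te]) @ coef
print("MSPE of best linear predictor:  ", round(np.mean((y_te - pred_blp) ** 2), 3))
print("MSPE of least-squares predictor:", round(np.mean((y_te - pred_ls) ** 2), 3))
```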

Ville Satopää: Bias, Noise, and Information in Elicitation and Aggregation of Crowd Forecasts

A four-year series of subjective-probability forecasting tournaments sponsored by the U.S. intelligence community revealed a host of replicable drivers of predictive accuracy, including experimental interventions such as training, teaming, and tracking of talent. Drawing on these data, we propose a Bayesian model (BIN: Bias, Information, Noise) for disentangling the underlying processes that enable certain forecasters and forecasting methods to outperform: either by tamping down bias and noise in judgment or by ramping up the efficient extraction of valid information from the environment. The BIN model reveals the dominant driver of performance enhancement to be noise reduction, though some interventions also reduce bias and improve information extraction. Even “debiasing training” designed to attenuate bias improved accuracy largely by tamping down noise. Organizations may often discover that the most efficient way to boost forecasting accuracy is to target noise.
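
A stylized simulation of the bias/information/noise idea (not the actual BIN model specification), in which judged log-odds are a biased, noisy and only partially informed version of the true signal; it merely puts the three channels on a common accuracy (Brier score) scale, with invented numbers.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 200_000
signal = rng.normal(size=n)                       # latent log-odds of the event
outcome = rng.binomial(1, 1 / (1 + np.exp(-signal)))

def brier(info, bias, noise_sd):
    """Mean Brier score of a forecaster whose judged log-odds are a noisy, biased,
    partially informed version of the true signal (a stylized decomposition)."""
    judged = info * signal + bias + rng.normal(scale=noise_sd, size=n)
    prob = 1 / (1 + np.exp(-judged))
    return np.mean((prob - outcome) ** 2)

baseline = brier(info=0.6, bias=0.8, noise_sd=1.2)
print("baseline        :", round(baseline, 4))
print("bias removed    :", round(brier(0.6, 0.0, 1.2), 4))
print("noise halved    :", round(brier(0.6, 0.8, 0.6), 4))
print("more information:", round(brier(0.9, 0.8, 1.2), 4))
```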