
Abstracts Research Seminar Winter Term 2021/22


Thorsten Schmidt:
Arbitrage Principles in Insurance

In this work we study the valuation of insurance contracts from a fundamental viewpoint. We start from the observation that insurance contracts are inherently linked to financial markets, be it via interest rates or, as in hybrid products, equity-linked life insurance and variable annuities, directly to stocks or indices. By defining portfolio strategies on an insurance portfolio and combining them with financial trading strategies, we arrive at the notion of insurance-finance arbitrage (IFA). A fundamental theorem provides two sufficient conditions, one for the presence and one for the absence of IFA. The first relies on a conditional law of large numbers and risk-neutral valuation. As a key result we obtain a simple valuation rule, called the QP-rule, which is market-consistent and excludes IFA. Utilizing the theory of enlargements of filtrations, we construct a tractable framework for general valuation results under weak assumptions. The generality of the approach allows us to incorporate many important aspects, such as mortality risk or the dependence between mortality and stock markets, which is of utmost importance in the recent COVID-19 crisis. For practical applications, we provide an affine formulation which leads to explicit valuation formulas for a large class of hybrid products.
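As a toy illustration of the QP-rule's structure (not a formula from the talk), consider a pure endowment paying the stock price at maturity if the policyholder survives: the financial risk is priced under a risk-neutral measure Q, while the survival probability is taken under the statistical measure P. All parameters below, and the independence of mortality and the stock, are illustrative assumptions.

```python
import math
import random

random.seed(1)

# Toy pure endowment paying S_T if the policyholder survives to T.
# Financial leg: geometric Brownian motion under the risk-neutral measure Q.
# Insurance leg: survival probability under the real-world measure P.
S0, r, sigma, T = 100.0, 0.02, 0.2, 10.0
p_survival = 0.85          # P-probability of surviving to T (illustrative)
n_paths = 100_000

payoffs = []
for _ in range(n_paths):
    z = random.gauss(0.0, 1.0)
    S_T = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
    payoffs.append(S_T)

# QP-valuation under independence of mortality and the stock: discount the
# Q-expectation of the financial payoff and weight it by the P-survival
# probability.
value = math.exp(-r * T) * (sum(payoffs) / n_paths) * p_survival
print(round(value, 2))
```

Analytically, the discounted Q-expectation of S_T equals S0, so the value is approximately S0 times the P-survival probability, here about 85. The framework in the talk goes well beyond this sketch, in particular allowing dependence between the mortality and financial risks.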

Paul McNicholas:
Using Subset Log-Likelihoods to Trim Outliers in Gaussian Mixture Models

Mixtures of Gaussian distributions have been a popular choice in model-based clustering. Outliers can affect parameter estimation and, as such, must be accounted for. Predicting the proportion of outliers correctly is paramount, as it minimizes misclassification error. It is proved that, for a finite Gaussian mixture model, the log-likelihoods of the subset models are distributed according to a mixture of beta distributions. An algorithm is then proposed that predicts the proportion of outliers by measuring the adherence of a set of subset log-likelihoods to a beta-mixture reference distribution. The algorithm removes the least likely points, which are deemed outliers, until the model assumptions are met.

Joint work with Katharine M. Clark.
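A heavily simplified sketch of the trimming loop, assuming a single one-dimensional Gaussian component instead of a full mixture and a fixed number of removals in place of the beta-mixture adherence test described above; all data and names are illustrative.

```python
import math
import random
import statistics

def loglik(xs):
    """Gaussian log-likelihood of xs under its own maximum-likelihood fit."""
    mu = statistics.fmean(xs)
    var = statistics.pvariance(xs, mu)
    return sum(-0.5 * math.log(2 * math.pi * var)
               - (x - mu) ** 2 / (2 * var) for x in xs)

def trim_outliers(xs, n_remove):
    """Iteratively drop the point whose removal raises the subset
    log-likelihood the most, i.e. the least likely point."""
    xs = list(xs)
    removed = []
    for _ in range(n_remove):
        # subset log-likelihoods: leave each point out in turn
        subset_lls = [loglik(xs[:i] + xs[i + 1:]) for i in range(len(xs))]
        worst = max(range(len(xs)), key=lambda i: subset_lls[i])
        removed.append(xs.pop(worst))
    return xs, removed

random.seed(0)
data = [random.gauss(0, 1) for _ in range(50)] + [8.0, -9.0]  # two planted outliers
kept, removed = trim_outliers(data, n_remove=2)
print(sorted(removed))
```

In the actual method the stopping point is not fixed in advance: points are removed until the subset log-likelihoods adhere to the beta-mixture reference distribution, which is what predicts the proportion of outliers.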

Achim Zeileis:
Strategies and Software for Robust Color Palettes in Data Visualizations

Color is an integral element in many data visualizations such as maps, heat maps, bar plots, scatter plots, or time series displays. Well-chosen colors can make graphics more appealing and, more importantly, help to clearly communicate the underlying information. Conversely, poorly chosen colors can obscure information or confuse readers.

To avoid problems and misinterpretations, we introduce general strategies for selecting robust color palettes that are intuitive for many audiences, including readers with color vision deficiencies. The construction of sequential, diverging, or qualitative palettes is based on appropriate light-dark "luminance" contrasts while suitably controlling the "hue" and the colorfulness ("chroma").

The strategies are also easy to put into practice using computations based on the so-called Hue-Chroma-Luminance (HCL) color model, e.g., as provided in our "colorspace" software package (https://colorspace.R-Forge.R-project.org/). To aid selection and application of these palettes, the package provides scales for use with ggplot2; shiny (and tcltk) apps for interactive exploration (see also https://hclwizard.org/); visualizations of palette properties; accompanying manipulation utilities (such as desaturation and lightening/darkening); and emulation of color vision deficiencies.
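The basic sequential recipe (fixed hue, luminance increasing while chroma decreases) can be sketched in plain Python. This is a minimal illustration using the standard CIE LCh(uv) and sRGB/D65 conversions, not the colorspace package's implementation, and the default hue/chroma/luminance values below are illustrative.

```python
import math

# D65 reference white (CIE XYZ, Y = 1)
XN, YN, ZN = 0.95047, 1.0, 1.08883
UN = 4 * XN / (XN + 15 * YN + 3 * ZN)
VN = 9 * YN / (XN + 15 * YN + 3 * ZN)

def hcl_to_hex(h, c, l):
    """Convert an HCL (CIE LCh(uv)) color to an sRGB hex string,
    clipping out-of-gamut channels."""
    if l <= 0:
        return "#000000"
    # LCh -> Luv
    u = c * math.cos(math.radians(h))
    v = c * math.sin(math.radians(h))
    # Luv -> XYZ
    y = YN * (((l + 16) / 116) ** 3 if l > 8 else l / 903.2963)
    up = u / (13 * l) + UN
    vp = v / (13 * l) + VN
    x = y * 9 * up / (4 * vp)
    z = y * (12 - 3 * up - 20 * vp) / (4 * vp)
    # XYZ -> linear sRGB, then clip and gamma-correct
    srgb = []
    for ch in (3.2404542 * x - 1.5371385 * y - 0.4985314 * z,
               -0.9692660 * x + 1.8760108 * y + 0.0415560 * z,
               0.0556434 * x - 0.2040259 * y + 1.0572252 * z):
        ch = max(0.0, min(1.0, ch))
        ch = 12.92 * ch if ch <= 0.0031308 else 1.055 * ch ** (1 / 2.4) - 0.055
        srgb.append(round(ch * 255))
    return "#{:02X}{:02X}{:02X}".format(*srgb)

def sequential_palette(n, h=260, c=(80, 10), l=(30, 90)):
    """Fixed hue; chroma and luminance interpolated linearly from
    dark/colorful to light/pale, the basic sequential recipe."""
    cols = []
    for i in range(n):
        t = i / (n - 1) if n > 1 else 0.0
        cols.append(hcl_to_hex(h, c[0] + t * (c[1] - c[0]),
                               l[0] + t * (l[1] - l[0])))
    return cols

print(sequential_palette(5))
```

Diverging palettes follow the same luminance logic with two hues meeting at a light, low-chroma midpoint, and qualitative palettes hold luminance and chroma fixed while varying hue.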

Ville Satopää:
Herding in Probabilistic Forecasts

Decision makers often ask experts to forecast a future state. Experts, however, can be biased. In the economics and psychology literature, one extensively studied behavioral bias is called herding. Under strong levels of herding, disclosure of public information may lower forecasting accuracy. This result, however, has been derived only for point forecasts. In this work, we consider experts' probabilistic forecasts under herding, find a closed-form expression for the first two moments of a unique equilibrium forecast, and show that experts report overly similar locations and inflate the variances of their forecasts due to herding. Furthermore, we show that the negative externality of public information no longer holds. In addition to reacting to new information as expected, probabilistic forecasts contain more information about the experts' full beliefs and interpersonal structure. This facilitates model estimation. To this end, we consider a one-shot setting with one forecast per expert and show that our model is identifiable up to an infinite number of solutions based on point forecasts, but up to two solutions based on probabilistic forecasts. We then provide a Bayesian estimation procedure for these two solutions and apply it to economic forecasting data collected by the European Central Bank and the Federal Reserve Bank of Philadelphia. We find that, on average, the experts invest around 19% of their efforts into making similar forecasts. The level of herding shows an increasing trend from 1999 to 2007, drops sharply during the financial crisis of 2007-2009, and then rises again until 2019.
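A toy simulation (not the equilibrium model of the talk) of the reported-location effect: if each expert shifts the reported location toward the consensus, the reported locations become overly similar. The weighting scheme and all parameters are illustrative assumptions, and the variance-inflation effect on the predictive distributions is not modeled here.

```python
import random
import statistics

random.seed(42)

def reported_forecasts(truth_mean, n_experts, herding_w, noise_sd=1.0):
    """Each expert privately estimates the state, then shifts the reported
    location toward the consensus by weight herding_w (0 = truthful
    reporting, 1 = pure herding). Purely illustrative."""
    private = [truth_mean + random.gauss(0, noise_sd) for _ in range(n_experts)]
    consensus = statistics.fmean(private)
    return [(1 - herding_w) * x + herding_w * consensus for x in private]

truthful = reported_forecasts(0.0, 200, herding_w=0.0)
herded = reported_forecasts(0.0, 200, herding_w=0.5)
# Herded locations cluster more tightly than truthful ones
print(statistics.stdev(herded) < statistics.stdev(truthful))  # True
```

The compressed spread of the reported locations is exactly what makes the herding level recoverable from data, as in the 19% effort estimate quoted above.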

Stefano M. Iacus:
Subjective Well-Being and Social Media

In this talk we briefly review some literature on the measurement of well-being through official statistics and surveys, and then focus on how to extract a version of subjective well-being from social media posts.
We present the basics of the sentiment-analysis approach used to construct a new social-media-based indicator of subjective well-being.
We focus mainly on an application to Italy and Japan, for which we construct the SWB-I and SWB-J indexes using Twitter data from 2013 until mid-2020.
The countries are interesting because of their similarities and differences, which are captured to some extent by these indicators.
We then discuss how these subjective well-being indexes relate to traditional measures of well-being, and how they declined during the COVID-19 pandemic.
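A toy sketch of the aggregation step, assuming a naive word-list sentiment scorer; the actual SWB-I and SWB-J indexes are built from trained classifiers on large Twitter corpora, so every word list and post below is an invented placeholder.

```python
import statistics

# Tiny lexicon-based sentiment scorer (placeholder word lists)
POSITIVE = {"happy", "great", "love", "good"}
NEGATIVE = {"sad", "bad", "angry", "worried"}

def score(post):
    """Classify a post as +1 (positive), -1 (negative), or 0 (neutral/tied)."""
    words = post.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0 if pos == neg else (1 if pos > neg else -1)

def daily_index(posts_by_day):
    """Average sentiment score per day: share of positive minus
    share of negative posts."""
    return {day: statistics.fmean(score(p) for p in posts)
            for day, posts in posts_by_day.items()}

posts = {
    "2020-03-01": ["feeling happy today", "such a great day", "bad traffic"],
    "2020-03-02": ["so worried", "sad news", "love this"],
}
print(daily_index(posts))
```

A time series of such daily (or weekly) aggregates, suitably smoothed and bias-corrected, is the kind of object that can then be compared with traditional well-being measures.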

An extensive discussion is presented in the newly published book on this topic. The indicator SWB-I has been introduced in the Journal of Official Statistics along with a method to control for social media bias. Its Japanese counterpart, SWB-J, is available here. Finally, those interested in the COVID-19 impact on these subjective well-being indicators may want to read this preprint.

Alexander J. McNeil:
Time Series Models With Infinite-Order Partial Copula Dependence

Stationary and ergodic time series can be constructed using an s-vine decomposition based on sets of bivariate copula functions. The extension of such processes to infinite copula sequences is considered and shown to yield a rich class of models that generalizes Gaussian ARMA and ARFIMA processes to allow both non-Gaussian marginal behaviour and a non-Gaussian description of the serial partial dependence structure. Extensions of classical causal and invertible representations of linear processes to general s-vine processes are proposed and investigated. A practical and parsimonious method for parameterizing s-vine processes using the Kendall partial autocorrelation function is developed. The potential of the resulting models to give improved statistical fits in many applications is indicated with examples using macroeconomic data.

Joint work with Martin Bladt.
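In the Gaussian special case, the Kendall partial autocorrelation parameterization is concrete: the Kendall's tau of a bivariate Gaussian copula maps to its correlation via rho = sin(pi*tau/2), and the Durbin-Levinson recursion turns the resulting partial autocorrelations into AR coefficients. A minimal sketch under these Gaussian assumptions (the general s-vine case replaces the pair copulas with non-Gaussian ones):

```python
import math
import random

def pacf_to_ar(pacf):
    """Durbin-Levinson recursion: map a finite sequence of partial
    autocorrelations to the coefficients of the corresponding AR model."""
    phi = []
    for k, alpha in enumerate(pacf, start=1):
        phi = [phi[j] - alpha * phi[k - 2 - j] for j in range(k - 1)] + [alpha]
    return phi

def simulate(phi, n, burn=200, seed=1):
    """Simulate the Gaussian AR process, i.e. a path of the Gaussian
    s-vine process with these partial copulas."""
    rng = random.Random(seed)
    hist = [0.0] * len(phi)        # hist[0] is the most recent value
    out = []
    for _ in range(n + burn):
        val = sum(c * h for c, h in zip(phi, hist)) + rng.gauss(0, 1)
        hist = [val] + hist[:-1]
        out.append(val)
    return out[burn:]              # discard burn-in

# Illustrative Kendall partial autocorrelations at lags 1 and 2
taus = [0.4, 0.1]
pacf = [math.sin(math.pi * t / 2) for t in taus]   # Gaussian-copula tau -> rho
phi = pacf_to_ar(pacf)
path = simulate(phi, 500)
```

Truncating the (possibly infinite) partial autocorrelation sequence at a finite lag, as done here, is what makes the parameterization practical and parsimonious.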

Christian Hennig:
Testing in Models That Are Not True

The starting point of my presentation is the apparently popular idea that in order to do hypothesis testing (and more generally frequentist model-based inference) we need to believe that the model is true, and the model assumptions need to be fulfilled. I will argue that this is a misconception. Models are, by their very nature, not "true" in reality. Mathematical results secure favourable characteristics of inference in an artificial model world in which the model assumptions are fulfilled. For using a model in reality we need to ask what happens if the model is violated in a "realistic" way. One key approach is to model a situation in which certain model assumptions of, e.g., the model-based test that we want to apply, are violated, in order to find out what happens then. This, somewhat inconveniently, depends strongly on what we assume, how the model assumptions are violated, whether we make an effort to check them, how we do that, and what alternative actions we take if we find them wanting. I will discuss what we know and what we can't know regarding the appropriateness of the models that we "assume", and how to interpret them appropriately, including new results on conditions for model assumption checking to work well, and on untestable assumptions. 

Joint work with Iqbal Shamsudheen.
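The approach of deliberately violating an assumption to see what the test then does can be illustrated by simulation: a nominal 5% one-sample t-test applied to normal versus strongly skewed data. The sample size, distributions, and critical value (the two-sided 5% point of t with 9 degrees of freedom) are illustrative choices, not from the talk.

```python
import math
import random
import statistics

def one_sample_t(xs, mu0):
    """One-sample t statistic for H0: mean = mu0."""
    n = len(xs)
    return (statistics.fmean(xs) - mu0) / (statistics.stdev(xs) / math.sqrt(n))

def rejection_rate(draw, mu0, n=10, reps=20000, crit=2.262, seed=0):
    """Fraction of samples with |t| > crit, i.e. the actual type-I error
    of a nominal 5% two-sided test (crit = t quantile for df = 9)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        xs = [draw(rng) for _ in range(n)]
        if abs(one_sample_t(xs, mu0)) > crit:
            hits += 1
    return hits / reps

# Assumption fulfilled: normal data with true mean 0
normal_rate = rejection_rate(lambda r: r.gauss(0, 1), mu0=0.0)
# Assumption violated: strongly skewed (exponential) data with true mean 1
skewed_rate = rejection_rate(lambda r: r.expovariate(1.0), mu0=1.0)
print(normal_rate, skewed_rate)
```

Under normality the rejection rate sits at the nominal 5%, while under strong skewness with this small sample it is noticeably inflated. This is the kind of evidence one needs before trusting a model-based test in a given application, and it depends on exactly how the assumption is violated.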