
Abstracts Research Seminar Summer Term 2021


Max H. Farrell:
Deep Neural Networks for Estimation and Inference

We study deep neural networks and their use in semiparametric inference. We establish novel rates of convergence for deep feedforward neural nets. Our new rates are sufficiently fast (in some cases minimax optimal) to allow us to establish valid second-step inference after first-step estimation with deep learning, a result also new to the literature. Our estimation rates and semiparametric inference results handle the current standard architecture: fully connected feedforward neural networks (multi-layer perceptrons), with the now-common rectified linear unit activation function and a depth explicitly diverging with the sample size. We discuss other architectures as well, including fixed-width, very deep networks. We establish nonasymptotic bounds for these deep nets for a general class of nonparametric regression-type loss functions, which includes as special cases least squares, logistic regression, and other generalized linear models. We then apply our theory to develop semiparametric inference, focusing on causal parameters for concreteness, such as treatment effects, expected welfare, and decomposition effects. Inference in many other semiparametric contexts can be readily obtained. We demonstrate the effectiveness of deep learning with a Monte Carlo analysis and an empirical application to direct mail marketing.

Joint work with Tengyuan Liang and Sanjog Misra.
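
A minimal sketch of the generic two-step recipe the abstract describes, using standard tools rather than the authors' own implementation: fully connected ReLU networks (here scikit-learn's MLPRegressor/MLPClassifier on simulated data, an illustrative assumption) estimate the nuisance functions in a first step, and a doubly robust (AIPW) average treatment effect with an influence-function standard error is computed in a second step. Refinements such as sample splitting are omitted.

```python
# Hypothetical illustration: first-step nuisance estimation with ReLU feedforward
# nets, second-step doubly robust (AIPW) inference for an average treatment effect.
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(0)
n, d = 2000, 5
X = rng.normal(size=(n, d))
p_true = 1.0 / (1.0 + np.exp(-X[:, 0]))                  # true propensity score
T = rng.binomial(1, p_true)                              # treatment indicator
Y = T * (1.0 + X[:, 1]) + X[:, 2] + rng.normal(size=n)   # outcome; true ATE = 1

# First step: fully connected ReLU networks for the outcome regressions and the
# propensity score (architecture choices here are arbitrary).
mu1 = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X[T == 1], Y[T == 1])
mu0 = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X[T == 0], Y[T == 0])
ps = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, T)

m1, m0 = mu1.predict(X), mu0.predict(X)
e = np.clip(ps.predict_proba(X)[:, 1], 0.01, 0.99)       # trimmed propensity estimates

# Second step: AIPW / influence-function estimate of the ATE with a plug-in
# standard error, enabling the usual normal-based confidence interval.
psi = m1 - m0 + T * (Y - m1) / e - (1 - T) * (Y - m0) / (1 - e)
ate, se = psi.mean(), psi.std(ddof=1) / np.sqrt(n)
print(f"ATE estimate: {ate:.2f} +/- {1.96 * se:.2f}")
```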

Peter Hoff:
An Empirical Bayes Framework for Improving Frequentist Multigroup Inference

Mixed effects models are used routinely in the biological and social sciences to share information across groups and to account for data dependence. The statistical properties of procedures derived from these models are often quite good on average across groups, but may be poor for any specific group. For example, commonly used confidence interval procedures may maintain a target coverage rate on average across groups, but have near-zero coverage for a group that differs substantially from the others. In this talk we discuss new prediction interval, confidence interval and p-value procedures that maintain group-specific frequentist guarantees, while still sharing information across groups to improve precision and power.
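
A minimal simulation sketch of the problem the talk addresses, not of the new procedures it proposes: intervals from a simple normal-normal shrinkage (empirical Bayes) fit cover well on average across groups, yet can have near-zero coverage for a group that differs substantially from the others. All numbers below are illustrative assumptions.

```python
# Hypothetical simulation: per-group coverage of naive empirical Bayes intervals.
import numpy as np

rng = np.random.default_rng(1)
J, sigma = 50, 1.0                        # 50 groups, known sampling sd
theta = rng.normal(0.0, 0.5, size=J)      # group effects
theta[0] = 3.0                            # one group far from the rest
reps, hits = 2000, np.zeros(J)

for _ in range(reps):
    y = theta + rng.normal(0.0, sigma, size=J)         # one summary per group
    tau2 = max(np.var(y, ddof=1) - sigma**2, 1e-6)     # moment estimate of between-group variance
    w = tau2 / (tau2 + sigma**2)                       # shrinkage weight
    center = w * y + (1 - w) * y.mean()                # shrink toward the grand mean
    half = 1.96 * sigma * np.sqrt(w)                   # plug-in posterior interval half-width
    hits += np.abs(center - theta) <= half

coverage = hits / reps
print(f"average coverage: {coverage.mean():.2f}   outlying group: {coverage[0]:.2f}")
```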

Luitgard A. M. Veraart: 
When Does Portfolio Compression Reduce Systemic Risk?

We analyse the consequences of portfolio compression for systemic risk. Portfolio compression is a post-trade netting mechanism that reduces gross positions while keeping net positions unchanged; it is part of the financial legislation in the US (Dodd-Frank Act) and in Europe (European Market Infrastructure Regulation).
We derive necessary structural conditions for portfolio compression to be harmful and discuss policy implications. In particular, we show that the potential danger of portfolio compression comes from defaults of firms that conduct portfolio compression. If no defaults occur among those firms that engage in compression, then portfolio compression always reduces systemic risk.
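
A toy numerical illustration of the mechanism described above, with made-up exposures rather than figures from the paper: compressing a cycle of obligations lowers gross positions while leaving every bank's net position unchanged.

```python
# Hypothetical exposures: portfolio compression of a cycle of obligations.
import numpy as np

# liabilities[i, j] = nominal amount bank i owes bank j (banks A, B, C)
L = np.array([[0., 10.,  0.],
              [0.,  0., 10.],
              [6.,  0.,  0.]])

def gross_and_net(liab):
    gross = liab.sum()                           # total notional outstanding
    net = liab.sum(axis=1) - liab.sum(axis=0)    # net liability of each bank
    return gross, net

# Compress the cycle A -> B -> C -> A: remove its smallest nominal from every leg.
cycle = [(0, 1), (1, 2), (2, 0)]
reduction = min(L[i, j] for i, j in cycle)
L_comp = L.copy()
for i, j in cycle:
    L_comp[i, j] -= reduction

print("before:", gross_and_net(L))        # gross 26, net (4, 0, -4)
print("after: ", gross_and_net(L_comp))   # gross  8, net (4, 0, -4)
```

Running it shows the gross notional dropping from 26 to 8 while the net vector (4, 0, -4) is unchanged.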

Bradley Efron and Balasubramanian Narasimhan:
Easy to Use Programs for Bootstrap Confidence Intervals

The "standard intervals" for a parameter θ of interest, θ̂ ± 1.96 σ̂ (for approximate 95% coverage), are a mainstay of applied statistical practice and can be computed in an almost automatic fashion in a wide range of situations. They are immensely useful but sometimes not very accurate. Bootstrap confidence intervals require much more computation, but improve coverage accuracy by an order of magnitude. Modern computational capabilities make bootstrap intervals practical on a routine basis. This talk concerns bcaboot, a new package of R programs that aims to produce bootstrap confidence intervals automatically, without requiring special calculations from the statistician.
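
A minimal sketch in Python of the comparison the abstract makes, using a generic nonparametric bootstrap rather than the bcaboot R package itself: the standard interval θ̂ ± 1.96 σ̂ next to a bootstrap percentile interval for a skewed statistic, on made-up data.

```python
# Hypothetical data: standard interval vs. bootstrap percentile interval for log(mean).
import numpy as np

rng = np.random.default_rng(2021)
x = rng.exponential(scale=1.0, size=40)      # skewed sample
theta_hat = np.log(x.mean())                 # statistic of interest

# Standard interval via the delta method: se(log xbar) ~= se(xbar) / xbar.
se_hat = x.std(ddof=1) / np.sqrt(len(x)) / x.mean()
standard = (theta_hat - 1.96 * se_hat, theta_hat + 1.96 * se_hat)

# Nonparametric bootstrap percentile interval.
B = 4000
boot = np.array([np.log(rng.choice(x, size=len(x), replace=True).mean())
                 for _ in range(B)])
percentile = tuple(np.quantile(boot, [0.025, 0.975]))

print("standard:  ", np.round(standard, 3))
print("percentile:", np.round(percentile, 3))
```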

Ronnie Sircar:
Cryptocurrencies, Mining & Mean Field Games

We present a mean field game model to study the question of how centralization of reward and computational power occurs in Bitcoin-like cryptocurrencies. Miners compete against each other for mining rewards by increasing their computational power. This leads to a novel mean field game of jump intensity control, which we solve explicitly for miners maximizing exponential utility and handle numerically in the case of miners with power utilities. We show that the heterogeneity of their initial wealth distribution leads to greater imbalance of the reward distribution, or a “rich get richer” effect. This concentration phenomenon is aggravated by a higher Bitcoin mining reward and reduced by competition. Additionally, an advantaged miner with cost advantages, such as access to cheaper electricity, contributes a significant amount of computational power in equilibrium, unaffected by competition from less efficient miners. Hence, cost efficiency can also result in the type of centralization seen among miners of cryptocurrencies.

Joint work with Zongxi Li and A. Max Reppen.

Ulrike Schneider:
On the Geometry of Uniqueness and Model Selection of LASSO, SLOPE and Related Estimators

This talk follows a recent trend in the statistics literature, where geometric properties are exploited to derive results for statistical procedures in the context of high-dimensional regression models. We consider estimation methods such as the LASSO and SLOPE, which are defined as solutions to a penalized optimization problem. We provide a geometric condition for uniqueness of the estimator; in contrast to previously known conditions in the literature, our approach provides a criterion that is both necessary and sufficient. Moreover, the geometric considerations also give insights into which models are accessible for the corresponding estimation method. This can be determined by investigating which faces of a certain polytope (depending on the estimator) are intersected by the row span of the regressor matrix. We illustrate this approach for the SLOPE estimator using the sign permutahedron.

Joint work with Patrick Tardivel (Université Bourgogne).
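
For readers unfamiliar with SLOPE, a small self-contained sketch of the estimator the talk studies, on a made-up two-coefficient example with a brute-force grid search standing in for a proper solver: SLOPE minimizes a least-squares term plus the sorted-L1 penalty, and the support of the minimizer is the selected model.

```python
# Hypothetical toy example: the SLOPE estimator via its sorted-L1 penalty.
import numpy as np

def slope_objective(b, X, y, lam):
    # 0.5 * ||y - Xb||^2 + sum_i lam_i * |b|_(i), with lam and |b| sorted decreasingly
    penalty = np.sum(lam * np.sort(np.abs(b))[::-1])
    return 0.5 * np.sum((y - X @ b) ** 2) + penalty

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 2))
y = X @ np.array([1.5, 0.0]) + 0.3 * rng.normal(size=20)
lam = np.array([4.0, 2.0])                    # lam_1 >= lam_2 >= 0

# Brute-force grid search (adequate for two coefficients only).
grid = np.linspace(-3, 3, 241)
B = np.array([(b1, b2) for b1 in grid for b2 in grid])
values = np.array([slope_objective(b, X, y, lam) for b in B])
b_hat = B[values.argmin()]
model = np.flatnonzero(np.abs(b_hat) > 1e-8)  # support = selected model
print("SLOPE estimate:", b_hat, " selected model:", model)
```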

Matthias Scherer:
A Comprehensive Model for Cyber Risk Based on Marked Point Processes and Its Application to Insurance

After scrutinizing technical, legal, financial, and actuarial aspects of cyber risk, we propose a new approach for modelling cyber risk using marked point processes. Key covariables required to model the frequency and severity of cyber claims are identified. The presented framework explicitly takes into account incidents from untargeted and targeted attacks as well as accidents and failures. The resulting model reflects the dynamic nature of cyber risk while capturing accumulation risk in a realistic way. The model is studied with respect to its statistical properties and applied to the pricing of cyber insurance and to risk measurement. The results are illustrated in a simulation study.

Joint work with Gabriela Zeller.
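
A minimal frequency/severity sketch built on a plain marked Poisson process, a deliberately simpler assumption than the framework of the talk (no covariables, no targeted/untargeted distinction, no accumulation structure); it only illustrates how simulated incident counts and severity marks feed into a pure premium and a quantile-based risk measure.

```python
# Hypothetical parameters: aggregate cyber loss from a marked Poisson process.
import numpy as np

rng = np.random.default_rng(7)
horizon, lam = 1.0, 25.0                  # one year, 25 expected incidents per year

def aggregate_loss():
    n = rng.poisson(lam * horizon)                            # incident count (frequency)
    severities = rng.lognormal(mean=10.0, sigma=1.5, size=n)  # marks (claim sizes)
    return severities.sum()

losses = np.array([aggregate_loss() for _ in range(10_000)])
pure_premium = losses.mean()
var_99 = np.quantile(losses, 0.99)        # a simple quantile-based risk measure
print(f"pure premium: {pure_premium:,.0f}   99% VaR: {var_99:,.0f}")
```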

Arne Bathke:
Synthesizing Information from Multivariate Data: Inference Methods for Global and Local Questions

When there are several endpoints (response variables) and different predictors, one typically wants to find out which predictors are relevant, and for which endpoints. We present two rather general approaches to inference for multivariate data, accommodating binary, ordinal, and metric endpoints, and different nominal factors. One of these approaches uses rank-based statistics and an F-approximation of the sampling distribution; the other uses asymptotically valid resampling techniques (bootstrap). We also try to address the question of how well the proposed methods actually accomplish their goals.
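
A minimal sketch of the resampling idea in generic form, assuming a simple two-group design and a Wald-type statistic rather than the specific procedures of the talk: the data are centred within groups to impose the null hypothesis, and the statistic is bootstrapped to obtain a global p-value across all endpoints.

```python
# Hypothetical data: bootstrap p-value for a global two-group effect across endpoints.
import numpy as np

def wald_stat(a, b):
    d = a.mean(axis=0) - b.mean(axis=0)
    S = np.cov(a, rowvar=False) / len(a) + np.cov(b, rowvar=False) / len(b)
    return d @ np.linalg.solve(S, d)

rng = np.random.default_rng(4)
g1 = rng.normal(size=(30, 3)) + np.array([0.5, 0.0, 0.0])   # group 1, three endpoints
g2 = rng.normal(size=(25, 3))                               # group 2

t_obs = wald_stat(g1, g2)
c1, c2 = g1 - g1.mean(axis=0), g2 - g2.mean(axis=0)         # centre to impose the null
B = 2000
t_boot = np.array([wald_stat(c1[rng.integers(0, len(c1), len(c1))],
                             c2[rng.integers(0, len(c2), len(c2))])
                   for _ in range(B)])
print("bootstrap p-value:", (t_boot >= t_obs).mean())
```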

Siegfried Hörmann:
Preprocessing Functional Data by a Factor Model Approach

We consider functional data which are measured on a discrete set of observation points. Often such data are measured with noise, and then the target is to recover the underlying signal. Commonly this is done with some smoothing approach, e.g. kernel smoothing or spline fitting. While such methods act function by function, we argue that it is more accurate to take into account the entire sample for the data preprocessing. To this end we propose to fit factor models to the raw data. We show that the common component of the factor model corresponds to the signal which we are interested in, whereas the idiosyncratic component is the noise. Under mild technical assumptions we demonstrate that our estimation scheme is uniformly consistent. From a theoretical standpoint our approach is elegant, because it is not based on smoothness assumptions and generally permits a realistic framework. The practical implementation is easy because we can resort to existing tools for factor models. Our empirical investigations provide convincing results.

This talk is based on joint work with Fatima Jammoul (TU Graz).
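
A minimal sketch of the underlying idea, assuming a plain PCA/SVD factor fit rather than the estimation scheme of the talk: noisy, discretely observed curves are stacked into a data matrix, a low-rank factor model is fitted to the raw data, and the common (low-rank) component serves as the estimate of the signal while the residual plays the role of the idiosyncratic noise.

```python
# Hypothetical curves: recovering the signal via the common component of a factor model.
import numpy as np

rng = np.random.default_rng(5)
n_curves, n_points = 200, 101
t = np.linspace(0.0, 1.0, n_points)

# Smooth signals built from two "factors", observed on a grid with measurement noise.
scores = rng.normal(size=(n_curves, 2))
loadings = np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
signal = scores @ loadings
Y = signal + 0.5 * rng.normal(size=signal.shape)             # raw, noisy observations

# Factor model fitted to the raw data: keep the leading principal components.
Ybar = Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Y - Ybar, full_matrices=False)
r = 2                                                        # number of factors
common = Ybar + (U[:, :r] * s[:r]) @ Vt[:r]                  # common component = signal estimate

rmse = lambda A: np.sqrt(np.mean((A - signal) ** 2))
print(f"RMSE of raw data:         {rmse(Y):.3f}")
print(f"RMSE of common component: {rmse(common):.3f}")
```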