The recreation zone in front of the D4 building, above the fountain.

Abstracts

Miloš Kopa:
Stochastic Dominance in Portfolio Optimization

Stochastic dominance is a statistical tool for comparing random variables with one another. In financial applications, these random variables usually represent the random returns of the considered assets or portfolios. The paper focuses on portfolio selection problems with stochastic dominance constraints for various orders of the stochastic dominance relation. Firstly, tractable necessary and sufficient conditions for particular probability distributions are discussed. Secondly, these conditions are employed in static and dynamic portfolio selection problems. Finally, extensions to the case of vector comparisons are presented. The theoretical results are accompanied by empirical examples.
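
To make the first step concrete, here is a minimal sketch, assuming equiprobable return scenarios and synthetic data: for discrete distributions, second-order stochastic dominance (SSD) of the portfolio over a benchmark reduces to finitely many linear shortfall constraints, one per benchmark realization, so mean-return maximization under an SSD constraint becomes a linear program. This is the standard finite-scenario reduction, not necessarily the conditions treated in the talk.

```python
# Portfolio selection under a second-order stochastic dominance constraint,
# via the finite-scenario LP reformulation (synthetic data throughout).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
T, n = 40, 4                              # scenarios, assets
R = rng.normal(0.005, 0.02, size=(T, n))  # equiprobable return scenarios
y = R.mean(axis=1)                        # benchmark: equally weighted portfolio

eta = np.unique(y)                        # SSD test points: benchmark outcomes
d = np.array([np.maximum(e - y, 0).mean() for e in eta])  # benchmark shortfalls
K = len(eta)

# variables x = (w, s) with s[t, k] acting as the shortfall (eta_k - R_t w)_+
c = np.concatenate([-R.mean(axis=0), np.zeros(T * K)])    # maximize mean return

A_ub, b_ub = [], []
for k, e in enumerate(eta):
    for t in range(T):                    # eta_k - R_t w <= s[t, k]
        row = np.zeros(n + T * K)
        row[:n] = -R[t]
        row[n + k * T + t] = -1.0
        A_ub.append(row)
        b_ub.append(-e)
    row = np.zeros(n + T * K)             # E[(eta_k - R w)_+] <= E[(eta_k - y)_+]
    row[n + k * T:n + (k + 1) * T] = 1.0 / T
    A_ub.append(row)
    b_ub.append(d[k])

A_eq = np.zeros((1, n + T * K))
A_eq[0, :n] = 1.0                         # fully invested, long-only
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (n + T * K))
print("SSD-constrained weights:", np.round(res.x[:n], 3))
```

The equally weighted benchmark itself is always feasible, so the program searches for a higher-mean portfolio among those that dominate it in second order.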

 

Marc Hellmuth:
Explicit Modular Decomposition

The modular decomposition (MD) of an undirected graph G is a natural construction to capture key features of G in terms of a rooted labeled tree (T,t) whose inner vertices are labeled "series" (1), "parallel" (0), or "prime".

If a graph G does not contain prime modules, then all structural information of G can be directly derived from its MD tree (T,t). As a consequence, many hard problems become polynomial-time solvable on graphs without prime modules, since the MD tree serves as a guide for algorithms to find efficient exact solutions (e.g., for optimal colorings, maximum clique, or isomorphism testing).
However, the class of graphs without prime modules (aka cographs) is rather restricted. We introduce here the novel concept of explicit modular decomposition, which aims at replacing "prime" vertices in the MD tree by suitable substructures to obtain 0/1-labeled networks (N,t). Understanding which graphs can be explained by which type of network not only provides novel graph classes but is also crucial for understanding which hard problems can be solved efficiently on which graph classes. We will mainly focus on graphs that can be explained by networks (N,t) whose biconnected components are simple cycles. These graphs, called GaTEx, can be recognized in linear time and are characterized by a set of 25 forbidden induced subgraphs. In particular, GaTEx graphs are closely related to many other well-known graph classes such as P4-sparse and P4-reducible graphs, weakly chordal graphs, perfect graphs with a perfect order, and comparability and permutation graphs. As a consequence, one can prove that many hard problems become linear-time solvable on GaTEx graphs as well.
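
As a point of reference for the explicit decomposition, the following minimal sketch (assuming the networkx package) implements the classical top-level rule behind (T,t): the connected components of G become the children of a "parallel" (0) vertex, the components of the complement become the children of a "series" (1) vertex, and if neither graph is disconnected the module is "prime". Cographs are exactly the graphs for which the recursion never stops at "prime".

```python
# Recursive series/parallel decomposition; stops (without refining) at primes.
import networkx as nx

def md_tree(G):
    """Return a nested (label, children) tree; leaves are vertices of G."""
    if G.number_of_nodes() == 1:
        return next(iter(G.nodes))
    if not nx.is_connected(G):                        # parallel (0) vertex
        return ('0', [md_tree(G.subgraph(c).copy())
                      for c in nx.connected_components(G)])
    H = nx.complement(G)
    if not nx.is_connected(H):                        # series (1) vertex
        return ('1', [md_tree(G.subgraph(c).copy())
                      for c in nx.connected_components(H)])
    return ('prime', sorted(G.nodes))                 # prime module: stop here

print(md_tree(nx.path_graph(4)))      # P4 is prime: ('prime', [0, 1, 2, 3])
print(md_tree(nx.complete_graph(3)))  # series vertex over three leaves
```

The explicit modular decomposition replaces exactly the point where this recursion gives up, substituting a suitable labeled network for the "prime" vertex.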
 

References:
Hellmuth and Scholz, Resolving Prime Modules: The Structure of Pseudo-cographs and Galled-Tree Explainable Graphs, Discrete Applied Mathematics, 343, 25-43, 2024.

Hellmuth and Scholz, From Modular Decomposition Trees to Level-1 Networks: Pseudo-Cographs, Polar-Cats and Prime Polar-Cats, Discrete Applied Mathematics, 321, 179-219, 2022.

Anton Rask Lundborg:
Perturbation-based Analysis of Compositional Data

Existing statistical methods for compositional data analysis are inadequate for many modern applications for two reasons. First, modern compositional datasets, for example in microbiome research, display traits such as high dimensionality and sparsity that are poorly modelled by traditional approaches. Second, assessing -- in an unbiased way -- how summary statistics of a composition (e.g., racial diversity) affect a response variable is not straightforward. In this work, we propose a framework based on hypothetical data perturbations that addresses both issues. Unlike existing methods for compositional data, we do not transform the data; instead, we use perturbations to define interpretable statistical functionals on the compositions themselves, which we call average perturbation effects. These average perturbation effects, which can be employed in many applications, naturally account for the confounding that biases frequently used marginal dependence analyses. We show how average perturbation effects can be estimated efficiently by deriving a perturbation-dependent reparametrization and applying semiparametric estimation techniques. We analyze the proposed estimators empirically on simulated data and demonstrate advantages over existing techniques on US census and microbiome data. For all proposed estimators, we provide confidence intervals with uniform asymptotic coverage guarantees.
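
A naive plug-in version of the idea, with illustrative names and synthetic data (the paper's estimators are semiparametric and come with uniform coverage guarantees; this sketch has neither): perturb every composition multiplicatively, renormalise back to the simplex, and average the resulting change in a fitted outcome model.

```python
# A crude plug-in contrast mimicking an "average perturbation effect".
import numpy as np

rng = np.random.default_rng(1)

def perturb(X, g):
    """Multiplicatively perturb compositions X (rows on the simplex) by g
    and renormalise, i.e. the Aitchison perturbation."""
    Z = X * g
    return Z / Z.sum(axis=1, keepdims=True)

n, p = 500, 5                                     # samples, parts
A = rng.dirichlet(np.ones(p), size=n)             # synthetic compositions
beta = np.array([2.0, -1.0, 0.5, 0.0, 0.0])
y = A @ beta + rng.normal(0, 0.1, n)              # synthetic response

beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)  # simple outcome model

g = np.array([1.5, 1.0, 1.0, 1.0, 1.0])           # shift mass towards part 0
effect = np.mean(perturb(A, g) @ beta_hat - A @ beta_hat)
print(f"plug-in average perturbation effect: {effect:.4f}")
```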

Helga Wagner:
Flexible Bayesian Treatment Effects Models for Panel Outcomes

Identification and estimation of treatment effects from observational data is particularly challenging: firstly, only the outcome under the treatment actually received is observed, and secondly, confounding due to unobserved endogeneity of treatment selection might be present. In Bayesian approaches to modelling treatment effects, these issues are addressed by specifying a joint model for treatment selection and the potential outcomes in terms of observed covariates.
In this talk I will focus on treatment effects for longitudinally observed outcomes and present a flexible model that additionally takes into account longitudinal dependence in the potential outcome sequences as well as time-varying covariate effects. To avoid overspecification, prior distributions are used that allow shrinkage to time-constant or even zero covariate effects where appropriate. The model is used to analyse the effects of a long maternity leave on the earnings of Austrian mothers, exploiting a change in the parental leave policy in Austria that extended maternity benefits from 18 months to 30 months after the birth of the child.
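
A minimal simulation sketch (illustrative numbers, not the model of the talk) of why the joint model is needed: a latent factor drives both treatment selection and the potential outcomes, so the naive treated-versus-untreated contrast is biased.

```python
# Endogenous treatment selection: naive contrast vs. true treatment effect.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
u = rng.normal(size=n)                     # unobserved confounder
x = rng.normal(size=n)                     # observed covariate

d = (0.8 * u + 0.5 * x + rng.normal(size=n) > 0).astype(int)  # selection
y0 = 1.0 + 0.3 * x + 0.6 * u + rng.normal(size=n)  # potential outcome, control
y1 = y0 + 0.5                              # true treatment effect: 0.5
y = np.where(d == 1, y1, y0)               # only one outcome is ever observed

naive = y[d == 1].mean() - y[d == 0].mean()
print(f"naive contrast: {naive:.3f} vs. true effect 0.500")  # biased upwards
```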

Antonio Canale:
Advances in Structured Bayesian Factor Models

In this seminar I will discuss novel approaches to matrix factorization that exploit Bayesian shrinkage priors and external information to induce flexible sparsity patterns. Such sparsity has appealing practical consequences beyond the obvious parsimony and regularization. For instance, it can facilitate the emergence of a structured block pattern in the factor loadings, confining the influence of specific factors to subsets of the observed variables. This promotes interpretability by grouping related variables into coherent blocks or identifying variables independent of the others. Conversely, a block structure within the latent factors allows for the clustering of subjects, enabling the exploration of heterogeneity among subjects associated with distinct groups. The proposed approaches are based on hierarchical prior specifications that allow for both global and local shrinkage. These methods extend and unify various recent contributions within the realm of Bayesian factor models. Through empirical applications, we showcase the efficacy and practical utility of these methodologies across diverse contexts.
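
A minimal sketch (synthetic data, illustrative block sizes) of the loading pattern such priors are designed to recover: each factor loads on one block of variables only, which surfaces as a block structure in the induced correlation matrix.

```python
# Simulating a factor model with block-structured sparse loadings.
import numpy as np

rng = np.random.default_rng(3)
n, p, k = 300, 12, 3
Lambda = np.zeros((p, k))
Lambda[0:4, 0] = rng.normal(1.0, 0.2, 4)    # factor 1 loads on variables 1-4
Lambda[4:8, 1] = rng.normal(1.0, 0.2, 4)    # factor 2 on variables 5-8
Lambda[8:12, 2] = rng.normal(1.0, 0.2, 4)   # factor 3 on variables 9-12

eta = rng.normal(size=(n, k))               # latent factors
Y = eta @ Lambda.T + rng.normal(0, 0.3, (n, p))

# the block pattern is visible in the correlations of the first two blocks
print(np.round(np.corrcoef(Y, rowvar=False)[:8, :8], 1))
```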

 

Tatyana Krivobokova:
Iterative Regularisation in Ill-Posed Generalised Linear Models

We study the problem of regularized maximum-likelihood optimization in ill-posed generalized linear models whose covariates include both a subset that is relevant and a subset that is irrelevant for the response. It is assumed that the source of ill-posedness is a joint low dimensionality of the response and a subset of the relevant covariates, in the sense of a latent factor generalized linear model (GLM). In particular, we propose a novel iteratively reweighted partial least squares (IRPLS) algorithm and show that it outperforms any other projection- or penalization-based regularisation algorithm. Under regularity assumptions on the latent factor GLM, we show that, with high probability, the convergence rate of the IRPLS estimator is the same as that of the maximum likelihood estimator in our latent factor GLM, which is an oracle achieving an optimal parametric rate. Our findings are confirmed by numerical studies. This is joint work with Gianluca Finocchio.
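
One plausible reading of the algorithm, sketched below on synthetic data (the weighting scheme and the number of components are assumptions, not the paper's exact procedure): at each iteratively reweighted least squares (IRLS) step for a logistic GLM, the weighted least-squares solve is replaced by a truncated weighted partial-least-squares fit of the working response.

```python
# IRLS for logistic regression with a weighted PLS1 solve in the inner loop.
import numpy as np

def wpls_coef(X, z, w, m):
    """Coefficients of an m-component PLS1 fit of z on X with weights w."""
    Xr, zr = X.copy(), z.copy()
    U, P, Q = [], [], []
    for _ in range(m):
        u = Xr.T @ (w * zr)                   # weighted covariance direction
        u /= np.linalg.norm(u)
        t = Xr @ u                            # latent score
        denom = (w * t * t).sum()
        p_load = Xr.T @ (w * t) / denom
        q = (w * t * zr).sum() / denom
        Xr = Xr - np.outer(t, p_load)         # deflate X and z
        zr = zr - q * t
        U.append(u); P.append(p_load); Q.append(q)
    U, P, Q = np.array(U).T, np.array(P).T, np.array(Q)
    return U @ np.linalg.solve(P.T @ U, Q)    # coefficients in X coordinates

def irpls(X, y, m, iters=25):
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        lin = np.clip(X @ beta, -8, 8)        # guard against overflow
        mu = 1.0 / (1.0 + np.exp(-lin))
        w = mu * (1.0 - mu)                   # IRLS weights
        z = lin + (y - mu) / w                # working response
        beta = wpls_coef(X, z, w, m)
    return beta

rng = np.random.default_rng(4)
n, p, m = 500, 30, 2
F = rng.normal(size=(n, m))                   # latent factors
X = F @ rng.normal(size=(m, p)) + 0.1 * rng.normal(size=(n, p))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-2.0 * F[:, 0])))
beta_hat = irpls(X, y, m)
print("correlation of fit with latent signal:",
      np.round(np.corrcoef(X @ beta_hat, F[:, 0])[0, 1], 2))
```

Truncating at m components is what regularizes here: the estimate never leaves the low-dimensional subspace spanned by the weighted PLS directions, mirroring the latent factor structure.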

Umut Cetin:
Insider Trading With Legal Risk in Continuous Time

We will consider the Kyle model in continuous time, where the insider may be subject to legal penalties. The insider internalizes this legal risk by trading less aggressively. In the particular case of a normally distributed asset value, the trades are split evenly over time for sufficiently large penalties, with trade size proportional to the return on the private signal. Although the noise traders lose less when legal fines increase, the insider's total penalty in equilibrium is non-monotone in the fine, since the insider trades little when the legal risk surpasses the value of the private signal. Thus, a budget-constrained regulator runs an investigation only if its benefits are sufficiently high. Moreover, the optimal penalty policy reduces to choosing one of two extremal penalty levels, corresponding to high- and low-liquidity regimes. The optimal choice is determined by the amount of noise trading and the relative importance of price informativeness.
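
A stylized discrete-time illustration of the large-penalty regime (the constants below are ad hoc, and the paper's equilibrium is derived in continuous time): the insider splits a total order proportional to the private signal's return into equal slices, while the market maker moves prices linearly in aggregate order flow.

```python
# Even order-splitting with linear price impact (stylized, not the equilibrium).
import numpy as np

rng = np.random.default_rng(8)
N, lam, c = 100, 0.5, 0.8              # periods, price impact, trading scale
v = 1.0                                # privately known asset value
p = np.zeros(N + 1)                    # price path, p[0] = 0

x = c * (v - p[0]) / N                 # equal slices, proportional to signal
for t in range(N):
    u = rng.normal(0, 0.1)             # noise-trader order
    p[t + 1] = p[t] + lam * (x + u)    # linear price impact

profit = sum(x * (v - p[t + 1]) for t in range(N))
print(f"final price {p[-1]:.2f}, insider profit {profit:.3f}")
```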

Michal Pešta:
Semi-continuous Time Series for Sparse Data With Volatility Clustering

We consider time series that contain a non-negligible portion of possibly dependent zeros while the remaining observations are positive. They are regarded as GARCH processes consisting of non-negative values. The aim is to estimate the omnibus model parameters while taking the semi-continuous distribution into account. The hurdle distribution, together with the dependent zeros, causes classical GARCH estimation techniques to fail. Two different likelihood-based approaches are derived, namely a maximum likelihood estimator and a new quasi-likelihood estimator. Both estimators are proved to be strongly consistent and asymptotically normal. Predictions with bootstrap add-ons are proposed. The empirical properties are illustrated in a simulation study, which demonstrates the computational efficiency of the methods employed. The developed techniques are presented through an actuarial problem concerning sparse insurance claims.
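
A minimal sketch of such a process (illustrative parameters, not the paper's exact specification): a hurdle indicator decides between a zero and a positive observation whose conditional scale follows a GARCH-type recursion, producing sparse data with volatility clustering.

```python
# Zero-inflated non-negative series with a GARCH-type conditional scale.
import numpy as np

rng = np.random.default_rng(5)
T = 1000
omega, alpha, beta, pi0 = 0.1, 0.3, 0.6, 0.4    # pi0: probability of a zero

y = np.zeros(T)
sigma2 = np.full(T, omega / (1 - alpha - beta)) # start at stationary level
for t in range(1, T):
    sigma2[t] = omega + alpha * y[t - 1] ** 2 + beta * sigma2[t - 1]
    if rng.random() >= pi0:                     # hurdle: positive w.p. 1 - pi0
        y[t] = np.sqrt(sigma2[t]) * abs(rng.normal())

print(f"share of zeros: {(y == 0).mean():.2f}, "
      f"autocorrelation of y^2: {np.corrcoef(y[1:]**2, y[:-1]**2)[0, 1]:.2f}")
```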

Andreas Groll:
Automated Effects Selection via Regularization in Cox Frailty Models

In all sorts of regression problems it has become increasingly important to deal with high-dimensional data and a large number of potentially influential covariates. A possible solution is to apply estimation methods that aim at detecting the relevant effect structure via regularization. In this talk, the effect structure in the Cox frailty model, the most widely used model that accounts for heterogeneity in time-to-event data, is investigated. Since in time-to-event modelling one has to account for possible variation of the effect strength over time, the selection of the relevant features has to distinguish between several cases: covariates can have time-varying effects, time-constant effects, or no effect at all. Two different regularization approaches, namely penalization and boosting, are discussed that automatically distinguish between these types of effects to obtain a sparse representation including the relevant effects in a proper form. This idea is applied to a real-world data set, illustrating that the complexity of the influence structure can be strongly reduced by such a regularization approach.
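
To illustrate the penalization idea only, here is a minimal sketch assuming the lifelines package and a plain penalized Cox model; the frailty terms and the time-varying versus time-constant distinction discussed in the talk are beyond this sketch.

```python
# LASSO-type penalized Cox regression on synthetic survival data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)
n, p = 300, 10
X = rng.normal(size=(n, p))
hazard = np.exp(0.8 * X[:, 0] - 0.5 * X[:, 1])   # only two relevant covariates
time = rng.exponential(1 / hazard)
event = rng.random(n) < 0.8                      # roughly 20% censoring

df = pd.DataFrame(X, columns=[f"x{j}" for j in range(p)])
df["time"], df["event"] = time, event

cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)   # L1 penalty selects effects
cph.fit(df, duration_col="time", event_col="event")
print(cph.params_.round(2))                      # irrelevant effects near zero
```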

Claudia Ceci:
A BSDE-Based Optimal Reinsurance in a Model With Jump Clusters

We investigate the optimal reinsurance problem in a risk model with jump clustering features for general reinsurance contracts and premiums. This modelling framework is inspired by the dynamic contagion process initially proposed in [3], which combines Hawkes and Cox processes with shot-noise intensity models. Specifically, these processes describe self-exciting and externally excited jumps in the claim arrival intensity, respectively. The insurer aims to maximize the expected exponential utility of terminal wealth. We discuss the problem under both full and partial information. In the complete-information framework, we compare two different methodologies: the classical stochastic control approach based on the Hamilton-Jacobi-Bellman (HJB) equation and a backward stochastic differential equation (BSDE) approach. Unlike the classical HJB approach, the BSDE method enables us to solve the problem without imposing any regularity requirements on the associated value function. Furthermore, we investigate the problem when the insurance company has restricted information about the claim arrival intensity. Using nonlinear filtering techniques, we reduce the partially observable problem to an equivalent problem under full information, which is solved by applying a BSDE approach. Note that, due to the infinite dimensionality of the filter process, the HJB method cannot be applied. Finally, we discuss optimal strategies and provide explicit results in some relevant cases. The talk is based on the papers [1] and [2].
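
A minimal Euler-scheme sketch (illustrative constants) of the dynamic contagion intensity of [3]: the claim-arrival intensity decays towards a baseline and jumps both at claim arrivals (the self-exciting, Hawkes part) and at the arrival times of an independent Poisson process (the externally excited, shot-noise part).

```python
# Euler discretization of a dynamic contagion claim-arrival intensity.
import numpy as np

rng = np.random.default_rng(7)
dt, T = 0.01, 50.0
a, delta = 0.5, 1.0                        # baseline level and decay rate
self_jump, ext_jump, ext_rate = 0.4, 0.6, 0.3

steps = int(T / dt)
lam = np.empty(steps)
lam[0] = a
n_claims = 0
for k in range(1, steps):
    lam[k] = lam[k - 1] + delta * (a - lam[k - 1]) * dt   # mean reversion
    if rng.random() < lam[k] * dt:         # claim arrival: self-excitation
        lam[k] += self_jump
        n_claims += 1
    if rng.random() < ext_rate * dt:       # external shock: shot noise
        lam[k] += ext_jump

print(f"{n_claims} claims on [0, {T}]; mean intensity {lam.mean():.2f}")
```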

References:
[1] M. Brachetta, G. Callegaro, C. Ceci, and C. Sgarra. Optimal reinsurance via BSDEs in a partially observable model with jump clusters. Finance and Stochastics, 28, 453-495, 2024. https://doi.org/10.1007/s00780-023-00523-z

[2] C. Ceci and A. Cretarola. A BSDE-based stochastic control for optimal reinsurance in a dynamic contagion model. Submitted 2024. http://arxiv.org/abs/2404.11482

[3] A. Dassios and H. Zhao. A dynamic contagion process. Advances in Applied Probability, 43, 814-846, 2011. https://doi.org/10.1239/aap/1316792671