[Photo: The recreation zone in front of the D4 building, above the fountain.]

Abstracts Research Seminar Summer Term 2017


Julie Josse: Regularized log-bilinear models

Log-bilinear models, also known as row-column (RC) association models, are well suited to describing contingency tables. The expectation of the counts (on the log scale) is modeled with main row and column effects and an interaction term that has a low-rank representation. The parameters of RC models can also be interpreted as describing latent variables in a low-dimensional Euclidean space. However, fitting these low-rank log-linear models is non-trivial, and standard estimation methods suffer, especially when the rank is large and the table is sparse. In this talk, I will present a penalized version of the Poisson likelihood to tackle overfitting issues. I will illustrate the method on the specific case of symmetric models for a cross-citation matrix, in which each element corresponds to the number of citations given by one journal to papers from another journal. The analysis of such data often aims at ranking the journals, and we will thus offer an alternative to the Bradley–Terry model.
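
As a point of reference, a generic rank-K RC association model with a penalized Poisson likelihood can be written as follows. This is a sketch in generic notation; in particular, the nuclear-norm penalty shown here is a common choice for low-rank regularization and may differ from the penalty proposed in the talk.

```latex
% Rank-K log-bilinear (RC) model for a contingency table (y_{ij})
\log \mathbb{E}[y_{ij}] = \alpha_i + \beta_j + \Theta_{ij},
\qquad \operatorname{rank}(\Theta) \le K .

% Penalized Poisson log-likelihood, with \eta_{ij} = \alpha_i + \beta_j + \Theta_{ij}
% and \|\Theta\|_* the nuclear norm of the interaction matrix:
\max_{\alpha,\,\beta,\,\Theta} \; \sum_{i,j} \bigl( y_{ij}\,\eta_{ij} - e^{\eta_{ij}} \bigr)
\;-\; \lambda\,\|\Theta\|_* .
```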

Nicola Loperfido: Multivariate Skewness for Finite Mixtures

Finite mixtures pose several inferential problems when the analytical form of their components’ densities is not fully known. In particular, it may be difficult to use them for data clustering. The problem might be addressed using measures of multivariate skewness that depend on the third standardized cumulant of a random vector. These are by far the most popular measures of multivariate skewness and admit appealing tensor interpretations. Their applications include, but are not limited to, normality testing, projection pursuit, point estimation, factor analysis, and independent component analysis. Other measures of multivariate skewness will also be addressed, together with open research questions. The theory will be illustrated with well-known data sets.
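
The canonical example of such a measure is Mardia's skewness, which is built from the third standardized cumulant. A minimal sketch of its sample version in Python (my own illustration, not code from the talk):

```python
import numpy as np

def mardia_skewness(X):
    """Sample version of Mardia's multivariate skewness b_{1,d}:
    the average of cubed standardized inner products over all pairs."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)                              # center the data
    S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))
    G = Xc @ S_inv @ Xc.T                                # standardized inner products
    return (G ** 3).sum() / n**2

# Example: near zero for symmetric data, clearly positive for skewed data
rng = np.random.default_rng(1)
print(mardia_skewness(rng.normal(size=(500, 3))))       # ~ 0
print(mardia_skewness(rng.exponential(size=(500, 3))))  # > 0
```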

Thorsten Schmidt: Unbiased estimation of risk

The estimation of risk measures has recently gained a lot of attention, partly because of the backtesting issues of expected shortfall related to elicitability. In this work we shed a new and fundamental light on optimal estimation procedures in terms of bias. We show that once the parameters of a model need to be estimated, additional care is required when estimating risks. The typical plug-in approach, for example, introduces a bias that leads to a systematic underestimation of risk. In this regard, we introduce a novel notion of unbiasedness for the estimation of risk which is motivated by economic principles. In general, the proposed concept does not coincide with the well-known statistical notion of unbiasedness. We show that an appropriate bias correction is available for many well-known estimators. In particular, we consider value-at-risk and expected shortfall (tail value-at-risk). In the special case of normal distributions, closed-form solutions for unbiased estimators can be obtained. We present a number of motivating examples which show that unbiased estimators outperform in many circumstances. Unbiasedness has a direct impact on backtesting and therefore adds a further viewpoint to established statistical properties.

Joint work with Marcin Pitera.
Subjects: Risk Management (q-fin.RM); Statistical Finance (q-fin.ST)
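
To illustrate the Gaussian closed-form case, the following sketch contrasts the plug-in value-at-risk estimator with a bias-corrected one based on the Student-t predictive distribution, for which the breach probability of the capital-adjusted position equals alpha exactly under i.i.d. normality. This is an illustration of the idea under my reading of the paper's notion of unbiasedness; sign conventions may differ from the paper.

```python
import numpy as np
from scipy import stats

def var_plugin(x, alpha=0.05):
    """Plug-in Gaussian value-at-risk: normal quantile with estimated parameters."""
    return -(x.mean() + x.std(ddof=1) * stats.norm.ppf(alpha))

def var_unbiased(x, alpha=0.05):
    """Bias-corrected Gaussian VaR via the Student-t predictive distribution:
    P(X_{n+1} + VaR_hat < 0) = alpha holds exactly for i.i.d. normal data."""
    n = len(x)
    return -(x.mean() + x.std(ddof=1) * np.sqrt(1 + 1/n) * stats.t.ppf(alpha, df=n - 1))

# Monte Carlo check of the breach frequency at level alpha = 5%
rng = np.random.default_rng(0)
alpha, n, reps = 0.05, 50, 20000
breach_plugin = breach_unbiased = 0
for _ in range(reps):
    x, x_new = rng.normal(size=n), rng.normal()
    breach_plugin += x_new + var_plugin(x, alpha) < 0
    breach_unbiased += x_new + var_unbiased(x, alpha) < 0
print(breach_plugin / reps)    # > 0.05: plug-in systematically underestimates risk
print(breach_unbiased / reps)  # ~ 0.05 by construction
```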

Firdevs Ulus: Utility Indifference Pricing under Incomplete Preferences

For incomplete preference relations that are represented by multiple priors and/or multiple (possibly multivariate) utility functions, we define utility buy (sell) prices as set-valued functions of the claim. We show that the set-valued utility buy (sell) prices recover the 'standard' utility indifference prices when a single univariate utility function represents a complete preference relation. As another special case, we consider utility prices under complete preferences that are represented by a single multivariate utility function and show that the set-valued definitions generalize the scalar-valued utility buy (sell) prices considered in the literature. Set-valued buy and sell prices satisfy monotonicity and convexity properties, as expected. These prices can be computed (approximately) by solving convex vector optimization problems.
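
For orientation, the classical scalar utility indifference buy price that the set-valued notion reduces to is defined as follows (standard textbook formulation, not the notation of the talk): the buy price p^b of a claim C solves

```latex
\sup_{\theta}\; \mathbb{E}\!\left[ U\!\left( x - p^{b} + \theta \cdot \Delta S + C \right) \right]
\;=\;
\sup_{\theta}\; \mathbb{E}\!\left[ U\!\left( x + \theta \cdot \Delta S \right) \right],
```

where x is the initial wealth, theta ranges over admissible trading strategies, and theta . Delta S denotes the resulting trading gains.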

Gernot Müller: Modelling electricity prices using processes with time-varying parameters    

First, we consider estimation methods for the electricity price model developed in Benth et al. (2014). This model disentangles the spot price into three components: a trend and seasonality function, a CARMA process driven by an alpha-stable Lévy process, and an additional Lévy process for the long-term fluctuations. We discuss and compare a stepwise maximum likelihood method and a Bayesian estimation procedure. However, due to changing rules and regulations, changing market conditions, and a shift in electricity production towards a higher proportion of renewable energy, electricity prices exhibit changing behaviour over time. We therefore seek to modify the model of Benth et al. (2014) by employing processes which locally behave like alpha-stable processes but allow for time-varying parameters. The processes under consideration do not have stationary increments, so we consider additive (i.e. independent-increment) processes instead of Lévy processes. The data motivating the analysis are taken from the database of the European Energy Exchange (EEX).

The talk is based on joint work with Boris Buchmann (Australian National University) and Armin Seibert (Universität Augsburg).  
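
To give a feel for the driving noise, here is a toy Euler-scheme simulation of a CAR(1) (Ornstein-Uhlenbeck-type) process driven by symmetric alpha-stable increments, a deliberately simplified stand-in for the CARMA component of the model; all parameter values are illustrative only, not calibrated to EEX data.

```python
import numpy as np
from scipy.stats import levy_stable

# Illustrative parameters (not calibrated)
alpha, kappa, dt, n = 1.7, 5.0, 1/365, 2000
rng = np.random.default_rng(42)

# Symmetric alpha-stable increments: scale dt**(1/alpha) matches self-similarity
dL = levy_stable.rvs(alpha, 0.0, scale=dt**(1/alpha), size=n, random_state=rng)

# Euler scheme for dY_t = -kappa * Y_t dt + dL_t  (CAR(1) component)
Y = np.zeros(n + 1)
for i in range(n):
    Y[i + 1] = Y[i] - kappa * Y[i] * dt + dL[i]

# A time-varying variant would let alpha = alpha(t) change slowly over time,
# yielding an additive process with independent but non-stationary increments.
```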

Nikolaus Hautsch: Volatility, Information Feedback and Market Microstructure Noise: A Tale of Two Regimes

We extend the classical “martingale-plus-noise” model for high-frequency prices with an error-correction mechanism originating from prevailing mispricing. The speed of price reversal is a natural measure of informational efficiency. The strength of the price reversal relative to the signal-to-noise ratio determines the signs of the return serial correlation and of the bias in standard realized variance estimates. We derive the model’s properties and estimate it locally based on mid-quote returns of the NASDAQ 100 constituents. There is evidence of mildly persistent local regimes of positive and negative serial correlation, arising from lagged feedback effects and sluggish price adjustments. The model performs decidedly better than existing stylized microstructure models. Finally, we document intraday periodicities in the speed of price reversion and in noise-to-signal ratios.

Joint work with Torben G. Andersen and Gökhan Cebiroglu.
Keywords: Volatility estimation; market microstructure noise; price reversal; momentum trading; contrarian trading
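
A stylized discrete-time version of such an error-correction extension reads as follows (my paraphrase for intuition, not the authors' exact specification): with latent efficient price m and observed mid-quote p,

```latex
m_i = m_{i-1} + w_i , \qquad w_i \ \text{i.i.d.}\,(0, \sigma_w^2)
\quad \text{(efficient-price martingale)},

p_i = p_{i-1} + \kappa \,( m_{i-1} - p_{i-1} ) + \varepsilon_i
\quad \text{(reversal of mispricing at speed } \kappa \text{)} .
```

Depending on the size of kappa relative to the noise-to-signal ratio, observed returns are positively or negatively serially correlated, and standard realized variance estimates are biased accordingly.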

Johanna Nešlehová and Christian Genest: Modeling clusters of extremes  

During the spring of 2011, the water level of Lake Champlain, which straddles the Canada–US border, was at a record high. This caused severe flooding and extensive damage along the Richelieu River (Québec, Canada). Hydrologists have heretofore been unable to estimate the return period of this unprecedented event, caused primarily by episodes of heavy rainfall occurring over several consecutive days. This talk will present a new approach for modeling clusters of extreme precipitation, based on the popular Peaks-Over-Threshold method combined with a polar-coordinate decomposition of the clustered values. Distributional assumptions are made for both the radial and the angular components which, combined with non-informative priors, lead to a sensible Bayesian estimate of the return period of the events that triggered the 2011 Richelieu River flood.
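
Schematically, the approach can be sketched as follows: group consecutive exceedances over a high threshold into clusters, decompose each cluster into a radial part (total excess) and an angular part (shares on the simplex), and model the radial part parametrically. The toy code below illustrates the decomposition, with a frequentist generalized Pareto fit as a stand-in for the Bayesian fit described in the talk; all distributional choices here are illustrative.

```python
import numpy as np
from scipy.stats import genpareto

def clusters_over_threshold(x, u, gap=1):
    """Group consecutive exceedances of threshold u into clusters; runs
    separated by more than `gap` non-exceedances start a new cluster."""
    clusters, current, below = [], [], 0
    for v in x:
        if v > u:
            current.append(v - u)       # excesses over the threshold
            below = 0
        elif current:
            below += 1
            if below > gap:
                clusters.append(current)
                current = []
    if current:
        clusters.append(current)
    return clusters

# Polar decomposition of each cluster: radius = total excess, angles = shares
rng = np.random.default_rng(3)
rain = rng.gamma(2.0, 4.0, size=5000)            # synthetic daily rainfall
cl = clusters_over_threshold(rain, u=np.quantile(rain, 0.98))
radii = np.array([sum(c) for c in cl])
angles = [np.array(c) / sum(c) for c in cl]      # points on the simplex

# Generalized Pareto fit to the radial component (frequentist stand-in)
xi, loc, sigma = genpareto.fit(radii, floc=0.0)
```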

Tobias Fissler: Testing the maximal rank of the volatility process for continuous diffusions observed with noise

In this talk, we present a test for the maximal rank of the volatility process in continuous diffusion models observed with noise. Such models are typically applied in mathematical finance, where latent price processes are corrupted by microstructure noise at ultra-high frequencies. Using high-frequency observations, we construct a test statistic for the maximal rank of the time-varying stochastic volatility process. Our methodology is based on a combination of a matrix perturbation approach and pre-averaging. We will show the asymptotic mixed normality of the test statistic and obtain a consistent testing procedure.

The talk is based on a joint paper with Mark Podolskij (University of Aarhus).
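
The pre-averaging step at the core of such procedures can be sketched as follows; this is a generic illustration with the common weight function g(x) = min(x, 1 - x), and it reports raw local eigenvalues rather than the actual test statistic of the paper.

```python
import numpy as np

def preaverage(Y, k):
    """Pre-average noisy increments with weight g(x) = min(x, 1 - x) to
    dampen microstructure noise. Y: (n+1, d) array of observed log-prices."""
    dY = np.diff(Y, axis=0)                       # noisy increments
    w = np.arange(1, k) / k
    g = np.minimum(w, 1 - w)
    m = dY.shape[0] - k + 2
    return np.array([g @ dY[i:i + k - 1] for i in range(m)])

def local_eigenvalues(Y, k, i, m):
    """Eigenvalues of a local covariance matrix of pre-averaged returns near
    index i; the number of 'large' eigenvalues is informative about the
    maximal rank of the spot volatility matrix."""
    Z = preaverage(Y, k)[i:i + m]
    C = Z.T @ Z / m
    return np.sort(np.linalg.eigvalsh(C))[::-1]

# Demo: two Brownian components observed with additive noise
rng = np.random.default_rng(5)
B = rng.normal(scale=0.01, size=(5000, 2)).cumsum(axis=0)
Y = B + rng.normal(scale=0.005, size=(5000, 2))   # microstructure noise
print(local_eigenvalues(Y, k=50, i=0, m=500))
```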

Alexander Steinicke: Backward Stochastic Differential Equations and Applications    

Stochastic differential equations (SDEs) are useful for modeling a tremendous range of phenomena in which random effects evolve over time. Following the usual procedure, we start with an initial condition at time zero and obtain at time T a random variable X(T), the solution of our SDE. Matters are different if one looks backward in time: if we start with a given random value at time T, can we find a deterministic value X(0) by following the dynamics of a stochastic differential equation backward in time? This type of problem is called a backward stochastic differential equation (BSDE) and was introduced in 1971 by Bismut in the context of stochastic control. From then on, BSDEs became more and more important for various applications, and their systematic study began in the early 1990s. In this talk I will introduce standard BSDEs and outline how they appear, e.g., in the pricing of contingent claims, stochastic control beyond Markovianity, Feynman–Kac representations for PDEs, and utility maximization. Moreover, I will present the treatment of BSDEs in simple cases and give an overview of my current field of interest within BSDE theory.
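
In standard textbook notation, a BSDE asks for an adapted pair (Y, Z) such that

```latex
Y_t \;=\; \xi \;+\; \int_t^T f(s, Y_s, Z_s)\,\mathrm{d}s
\;-\; \int_t^T Z_s\,\mathrm{d}W_s ,
\qquad 0 \le t \le T ,
```

where xi is the prescribed terminal condition, f is the driver (generator), and W is a Brownian motion; in a Brownian filtration, Y_0 is then indeed deterministic.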

Piotr Fryzlewicz: Recent advances in multiple change-point detection

The talk will summarize some recent results in multiple change-point detection. In the first part of the talk, we introduce the concept of 'tail-greediness' and discuss a new tail-greedy, bottom-up transform for one-dimensional data, which results in a nonlinear but conditionally orthonormal multiscale decomposition of the data with respect to an adaptively chosen Unbalanced Haar wavelet basis. The resulting agglomerative change-point detection method avoids the disadvantages of classical divisive binary segmentation and offers very good practical performance. In the second part, we discuss a new, generic methodology for nonparametric function estimation, in which we first estimate the number and locations of any features that may be present in the function. The method is general in character owing to the use of a new multiple generalised change-point detection device, termed Narrowest-Over-Threshold (NOT). The key ingredient of NOT is its focus on the smallest local sections of the data on which the existence of a feature is suspected. The NOT estimators are easy to implement and fast to compute, and the approach can readily be extended by the user and tailored to their own needs.
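
The core NOT idea, namely that among all intervals on which a contrast statistic exceeds a threshold one should trust the narrowest, can be sketched for a piecewise-constant mean as follows. This is a simplified single-step illustration using the standard CUSUM contrast, not the authors' implementation; the threshold and the number of intervals are illustrative.

```python
import numpy as np

def cusum(x):
    """Max absolute CUSUM contrast on x and the location of its argmax."""
    n = len(x)
    s = np.cumsum(x)
    b = np.arange(1, n)
    # Standard CUSUM statistic for a single mean change after position b
    stat = np.abs(np.sqrt((n - b) / (n * b)) * s[b - 1]
                  - np.sqrt(b / (n * (n - b))) * (s[-1] - s[b - 1]))
    k = int(np.argmax(stat))
    return stat[k], k + 1

def not_single_step(x, threshold, n_intervals=2000, rng=None):
    """Narrowest-Over-Threshold, one step: draw random intervals, keep those
    whose CUSUM exceeds the threshold, and return the change-point located
    on the narrowest such interval (None if no interval qualifies)."""
    rng = rng or np.random.default_rng(0)
    n = len(x)
    best = None
    for _ in range(n_intervals):
        s, e = sorted(rng.integers(0, n + 1, size=2))
        if e - s < 2:
            continue
        stat, k = cusum(x[s:e])
        if stat > threshold and (best is None or e - s < best[0]):
            best = (e - s, s + k)
    return None if best is None else best[1]

# Example: one jump at t = 300 in noisy data
rng = np.random.default_rng(7)
x = np.concatenate([np.zeros(300), 2 * np.ones(200)]) + rng.normal(size=500)
print(not_single_step(x, threshold=5.0, rng=rng))   # close to 300
```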