
Abstracts Research Seminar Summer Term 2015


Jörn Saß: Continuous-time regime switching models, portfolio optimization and filter-based volatility

A continuous-time regime switching model, in which the observation process is a diffusion whose drift and volatility coefficients jump according to a continuous-time Markov chain, can explain some of the stylized facts of asset returns, even in this simple linear and non-autoregressive form. But because the volatility also switches, the underlying Markov chain could in theory be observed in continuous time, so no filtering would be needed. Explicit theoretical results obtained in finance for this model may therefore not provide a good approximation to the discretely observed model, in which we do have to filter. On the other hand, a continuous-time hidden Markov model (HMM), where only the drift jumps and the volatility is constant, allows for explicit calculations but lacks these good econometric properties. We first discuss estimation, model choice and portfolio optimization in both models. To combine useful aspects of both, we then consider an HMM in which the volatility depends on the filter for the underlying Markov chain. We analyze its relation to Markov switching models and, using examples from portfolio optimization, illustrate that we can still obtain quite explicit results and that these provide a good approximation to the discretely observed model.
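
A minimal sketch, assuming invented two-state parameters, of the discretised Wonham-type filtering step that discrete observations of such a model call for; this is not the authors' estimation or portfolio procedure, and all values below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-state model: the drift jumps with the chain, the volatility is constant.
mu = np.array([0.05, -0.03])          # state-dependent drift
sigma = 0.2                           # constant volatility
Q = np.array([[-2.0, 2.0],
              [3.0, -3.0]])           # generator of the Markov chain
dt, n = 1.0 / 250, 2500

# Simulate the hidden chain and the observed return increments.
states = np.zeros(n, dtype=int)
for k in range(1, n):
    if rng.random() < -Q[states[k - 1], states[k - 1]] * dt:
        states[k] = 1 - states[k - 1]
    else:
        states[k] = states[k - 1]
dR = mu[states] * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)

# Discretised Wonham-type filter: pi[i] approximates P(chain in state i | returns so far).
pi = np.array([0.5, 0.5])
for k in range(n):
    pi = pi + pi @ Q * dt                                      # prediction (chain dynamics)
    lik = np.exp(-(dR[k] - mu * dt) ** 2 / (2 * sigma ** 2 * dt))
    pi = pi * lik                                              # correction (return likelihood)
    pi = pi / pi.sum()

print("final filter probabilities:", pi, "true final state:", states[-1])
```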


Elisa Ossola: Time-Varying Risk Premium in Large Cross-Sectional Equity Datasets

We develop an econometric methodology to infer the path of risk premia from a large unbalanced panel of individual stock returns. We estimate the time-varying risk premia implied by conditional linear asset pricing models where the conditioning includes both instruments common to all assets and asset-specific instruments. The estimator uses simple weighted two-pass cross-sectional regressions, and we show its consistency and asymptotic normality under increasing cross-sectional and time series dimensions. We address consistent estimation of the asymptotic variance by hard thresholding, and testing for asset pricing restrictions induced by the no-arbitrage assumption. We derive the restrictions given by a continuum of assets in a multi-period economy under an approximate factor structure robust to asset repackaging. The empirical analysis on returns for about ten thousand US stocks from July 1964 to December 2009 shows that risk premia are large and volatile in crisis periods. They exhibit large positive and negative strays from time-invariant estimates, follow the macroeconomic cycles, and do not match risk premia estimates on standard sets of portfolios. The asset pricing restrictions are rejected for a conditional four-factor model capturing market, size, value and momentum effects.

(co-authored by Patrick Gagliardini and Olivier Scaillet)
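
A minimal sketch of a plain (unweighted) two-pass cross-sectional regression on a synthetic balanced panel; the paper's weighted estimator, unbalanced-panel handling and conditioning instruments are not reproduced here, and all sizes and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic balanced panel: T periods, N assets, K factors (sizes are illustrative).
T, N, K = 240, 500, 3
factors = 0.02 * rng.standard_normal((T, K))
true_beta = rng.standard_normal((N, K))
true_lambda = np.array([0.004, 0.002, 0.003])            # per-period risk premia
returns = (true_beta @ true_lambda
           + factors @ true_beta.T
           + 0.05 * rng.standard_normal((T, N)))

# First pass: time-series regression of each asset's returns on the factors.
X = np.column_stack([np.ones(T), factors])
beta_hat = np.linalg.lstsq(X, returns, rcond=None)[0][1:].T        # N x K

# Second pass: period-by-period cross-sectional regression of returns on the
# estimated betas; risk premia are the time averages of the cross-sectional slopes.
Z = np.column_stack([np.ones(N), beta_hat])
lambda_t = np.linalg.lstsq(Z, returns.T, rcond=None)[0][1:]        # K x T
print("estimated risk premia:", lambda_t.mean(axis=1))
print("true risk premia:     ", true_lambda)
```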


Mark Jensen: Mutual Fund Performance When Investors Learn About Skill

We contribute to the mutual fund performance literature by allowing investors' beliefs about the cross-sectional distribution of mutual fund skill to be flexibly uncertain. Instead of assuming that skill across the universe of mutual funds is normally distributed with unknown mean and variance, the investors in this paper are completely agnostic about the entire distribution of skill, not just its population mean and variance. As mutual fund returns are observed, investors update their posterior beliefs about the unknown population distribution of mutual fund skill and about the performance of individual funds.
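
A minimal sketch, assuming a Dirichlet process prior as one example of a flexible nonparametric prior over the skill distribution; this only illustrates drawing such a distribution by truncated stick-breaking and is not the paper's model or posterior updating scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

# Truncated stick-breaking draw from a Dirichlet process prior DP(alpha, G0),
# with a normal base measure G0 standing in for prior beliefs about fund skill.
# The concentration alpha and the base measure are purely illustrative.
alpha, n_atoms = 2.0, 200
sticks = rng.beta(1.0, alpha, size=n_atoms)
weights = sticks * np.concatenate(([1.0], np.cumprod(1.0 - sticks)[:-1]))
atoms = rng.normal(loc=0.0, scale=0.02, size=n_atoms)     # skill locations

# One random "skill distribution": the discrete measure sum_k weights[k] * delta_{atoms[k]}.
print("mass captured by the truncation: %.4f" % weights.sum())
print("mean skill under this draw:      %.4f" % np.sum(weights * atoms))
```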


Rémi Piatek: A Parsimonious Multinomial Probit Model for the Study of Joint Decisions

This paper proposes a new parameterization of the multinomial probit model that makes it possible to capture alternative-specific effects in a setup where multiple decisions are made jointly. This approach is attractive from an economic point of view, as it facilitates the interpretation of the model by generating a meaningful latent structure. From a statistical perspective, the specification of alternative-specific effects leads to a parsimonious version of the traditional multinomial probit that is more tractable in practice. For inference, we design a Markov chain Monte Carlo sampler that is computationally efficient. A Monte Carlo experiment investigates the performance of the methodology, and we apply the approach to the estimation of the joint decision on education and occupation using real data.
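
A minimal simulation sketch of the latent-utility structure behind a multinomial probit; the paper's parsimonious parameterization of alternative-specific effects and its MCMC sampler are not reproduced, and all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Latent-utility view of a multinomial probit: alternative j has utility
# U_ij = x_i' beta_j + e_ij with correlated Gaussian errors, and the observed
# choice is the alternative with the highest utility.  Values are illustrative.
n, J, p = 1000, 4, 2
X = rng.standard_normal((n, p))
beta = rng.standard_normal((p, J))
Sigma = np.eye(J) + 0.3                  # equicorrelated alternative-specific errors
L = np.linalg.cholesky(Sigma)
U = X @ beta + rng.standard_normal((n, J)) @ L.T
choice = U.argmax(axis=1)
print("choice shares:", np.bincount(choice, minlength=J) / n)
```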


François Caron: Sparse random graphs with exchangeable point processes

Statistical network modeling has focused on representing the graph as a discrete structure, namely the adjacency matrix, and considering the exchangeability of this array. In such cases, the Aldous-Hoover representation theorem (Aldous, 1981; Hoover, 1979) applies and informs us that the graph is necessarily either dense or empty. We instead consider representing the graph as a measure on the positive quadrant. For the associated definition of exchangeability in this continuous space, we rely on the Kallenberg representation theorem (Kallenberg, 2005). We show that for certain choices of the specified graph construction, our network process is sparse with power-law degree distribution. In particular, we build on the framework of completely random measures (CRMs) and use the theory associated with such processes to derive important network properties, such as an urn representation for network simulation. The CRM framework also provides for interpretability of the network model in terms of node-specific sociability parameters, with properties such as sparsity and power-law behavior simply tuned by three hyperparameters. Our theoretical results are explored empirically and compared to common network models.

Joint work with Emily Fox (U. Washington).
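
A minimal sketch, assuming a finite gamma sample as a stand-in for the atoms of a completely random measure and an edge probability of the form 1 - exp(-2 w_i w_j); hyperparameter values are illustrative and this is not the authors' exact construction or urn scheme.

```python
import numpy as np

rng = np.random.default_rng(4)

# Finite stand-in for a CRM: node sociabilities w_i drawn from a gamma distribution
# with most mass near zero; nodes i and j are linked with probability
# 1 - exp(-2 * w_i * w_j), so most pairs are never connected (sparsity).
n = 2000
w = rng.gamma(shape=0.3, scale=0.1, size=n)
P = 1.0 - np.exp(-2.0 * np.outer(w, w))
A = (rng.random((n, n)) < P).astype(int)
A = np.triu(A, 1)
A = A + A.T                                   # undirected adjacency, no self-loops
degrees = A.sum(axis=1)
print("edges:", int(A.sum()) // 2, "out of", n * (n - 1) // 2, "possible pairs;",
      "isolated nodes:", int(np.sum(degrees == 0)))
```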


François Bachoc: Covariance function estimation in Gaussian process regression

Gaussian process regression consists in predicting a continuous realization of a Gaussian process, given a finite number of observations of it.

When the covariance function of the Gaussian process is known, or when the statistician selects and fixes a given covariance function, this prediction is given explicitly by Gaussian conditioning. Most classically, the covariance function is therefore estimated in a first step and kept fixed at its estimate in a second step, in which prediction is carried out (the "plug-in approach"). In this presentation, we address parametric estimation and consider the Maximum Likelihood and Cross Validation estimators of the covariance parameters. We analyze these two estimators in two cases.

1) Well-specified case where the true covariance function belongs to the parametric set of covariance functions used for estimation. We consider an increasing-domain asymptotic framework, based on a randomly-perturbed regular grid of observation points. We show that both estimators are consistent and asymptotically Gaussian with a square-root-of-n rate of convergence. It is observed that the Maximum Likelihood estimator has a smaller asymptotic variance.

2) Misspecified case where the true covariance function does not belong to the parametric set of covariance functions used for estimation. A finite-sample analysis shows that, for designs of observation points that are not too regular, Cross Validation is more robust than Maximum Likelihood. Furthermore, an increasing-domain asymptotic result supports this conclusion. More precisely, for randomly located observation points, the Cross Validation estimator converges to the covariance parameter minimizing the integrated square prediction error.
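
A minimal sketch comparing Maximum Likelihood and leave-one-out Cross Validation estimates of a single length-scale parameter in a 1-D exponential covariance model; the variance parameter is held fixed at its true value, and the well-specified and misspecified asymptotics of the talk are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)

# 1-D illustration: exponential covariance k(s, t) = v * exp(-|s - t| / ell);
# the "true" parameters below are purely illustrative.
def cov(x, v, ell):
    return v * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

x = np.sort(rng.uniform(0, 50, 200))
K_true = cov(x, 1.0, 3.0) + 1e-8 * np.eye(len(x))
y = np.linalg.cholesky(K_true) @ rng.standard_normal(len(x))

def neg_log_lik(v, ell):
    K = cov(x, v, ell) + 1e-8 * np.eye(len(x))
    _, logdet = np.linalg.slogdet(K)
    return 0.5 * (logdet + y @ np.linalg.solve(K, y))

def loo_cv(v, ell):
    # Leave-one-out residuals from the inverse covariance: e_i = (K^{-1} y)_i / (K^{-1})_{ii}
    K = cov(x, v, ell) + 1e-8 * np.eye(len(x))
    Kinv = np.linalg.inv(K)
    e = (Kinv @ y) / np.diag(Kinv)
    return np.sum(e ** 2)

ells = np.linspace(0.5, 10.0, 40)
ml_ell = ells[np.argmin([neg_log_lik(1.0, l) for l in ells])]
cv_ell = ells[np.argmin([loo_cv(1.0, l) for l in ells])]
print("ML length scale: %.2f, CV length scale: %.2f, true: 3.00" % (ml_ell, cv_ell))
```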


Ralf Wunderlich: Expert opinions and dynamic portfolio optimization under partial information

We consider a continuous-time financial market with partial information on the drift, and we solve and compare utility maximization problems that include expert opinions on the unobservable drift. Stock returns are driven by a Brownian motion, and the drift depends on a factor process which is either an Ornstein-Uhlenbeck process or a continuous-time Markov chain. The drift is thus hidden and has to be estimated from observable quantities. If the investor observes only stock prices, the best estimates are given by the Kalman and Wonham filters, respectively.

However, to improve the estimate, an investor may also rely on expert opinions that provide a noisy estimate of the current state of the drift. This reduces the variance of the filter and thus improves expected utility. The procedure can be seen as a continuous-time version of the classical Black-Litterman approach. The expert opinions are modeled by a marked point process with jump-size distribution depending on the current state of the hidden factor process. We consider models in which the expert opinions arrive at fixed and known information dates, as well as models with continuous-time experts.

For the associated portfolio problem with logarithmic utility we give explicit solutions. In the case of power utility we apply dynamic programming techniques and solve the corresponding dynamic programming equation numerically. Diffusion approximations for discrete-time experts allow us to simplify the problem and to derive more explicit solutions. We illustrate our findings with numerical results.
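
A minimal Euler-discretised sketch, assuming an Ornstein-Uhlenbeck drift, Gaussian expert signals at fixed dates and invented parameter values; the marked-point-process expert model and the portfolio optimization itself are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative parameters: Ornstein-Uhlenbeck drift, constant return volatility,
# one noisy expert signal about the drift per "year".
kappa, theta, nu = 3.0, 0.05, 0.10     # OU mean reversion speed, level, volatility
sigma = 0.20                           # return volatility
dt, n = 1.0 / 250, 2500
expert_dates = set(range(250, n, 250))
expert_sd = 0.05

mu = theta                             # true (hidden) drift
m, g = theta, 0.05 ** 2                # conditional mean and variance of the filter
for k in range(1, n):
    # true drift and observed return increment
    mu = mu + kappa * (theta - mu) * dt + nu * np.sqrt(dt) * rng.standard_normal()
    dR = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    # Kalman-Bucy filter, Euler-discretised: prediction plus return-based correction
    g = g + (nu ** 2 - 2.0 * kappa * g - g ** 2 / sigma ** 2) * dt
    m = m + kappa * (theta - m) * dt + (g / sigma ** 2) * (dR - m * dt)
    # expert opinion: conjugate Gaussian update, which shrinks the filter variance
    if k in expert_dates:
        z = mu + expert_sd * rng.standard_normal()
        precision = 1.0 / g + 1.0 / expert_sd ** 2
        m = (m / g + z / expert_sd ** 2) / precision
        g = 1.0 / precision

print("true drift %.4f, filter mean %.4f, filter variance %.6f" % (mu, m, g))
```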


Evelyn Buckwar: Stochastic numerics and issues in the stability analysis of numerical methods

Stochastic differential equations (SDEs) have become a standard modelling tool in many areas of science, ranging from finance to neuroscience. Many numerical methods have been developed over the last decades and analysed for their strong or weak convergence behaviour. In this talk we provide an overview of current directions in stochastic numerics and report on recent progress in the analysis of stability properties of numerical methods for SDEs, in particular for systems of equations. We are interested in developing classes of test equations that give insight into the stability behaviour of the methods, and in developing approaches to analyse the resulting systems of equations.
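
A minimal sketch of a mean-square stability check for the Euler-Maruyama method on the scalar linear test equation; the parameter values are illustrative and the systems of test equations discussed in the talk are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

# Scalar linear test equation dX = lam*X dt + mu*X dW.  The equation itself is
# mean-square stable iff 2*lam + mu**2 < 0; Euler-Maruyama with step size h is
# mean-square stable iff (1 + lam*h)**2 + mu**2 * h < 1.  Values are illustrative.
lam, mu = -3.0, 1.0
n_paths, n_steps = 20000, 200
for h in (0.1, 0.5, 0.9):
    predicted_stable = (1.0 + lam * h) ** 2 + mu ** 2 * h < 1.0
    X = np.ones(n_paths)
    for _ in range(n_steps):
        X = X + lam * X * h + mu * X * np.sqrt(h) * rng.standard_normal(n_paths)
    print(f"h={h}: predicted stable={predicted_stable}, E[X_n^2]~{np.mean(X ** 2):.3e}")
```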


Yoosoon Chang: Distributional Time Series

We develop a new framework and methodology for the time series analysis of cross-sectional distributions with stochastic trends. Individual time series of cross-sectional distributions often have nonstationary persistent components that may be characterized effectively as functional unit roots. This paper shows how to model and analyze the presence of common trends in multiple time series of such cross-sectional distributions. As an illustration, we use the CEX income and expenditure data to investigate the dynamic interactions between the household income and consumption distributions, and we find many interesting long-run and short-run interactions between them.

(coauthored with Changsik Kim and Joon Park)
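
A minimal sketch, not the paper's methodology, illustrating how a persistent common component shows up when each period's cross-sectional distribution is summarised by a quantile curve; all sizes and values are invented.

```python
import numpy as np

rng = np.random.default_rng(8)

# Each period t has a cross-section of n_obs observations; its distribution is
# summarised by a quantile curve on a fixed grid.  The location follows a random
# walk, so the quantile curves share a persistent, unit-root-like common component.
T, n_obs = 200, 2000
grid = np.linspace(0.05, 0.95, 19)
location = np.cumsum(0.1 * rng.standard_normal(T))            # stochastic trend
curves = np.array([np.quantile(location[t] + rng.standard_normal(n_obs), grid)
                   for t in range(T)])                        # T x 19 matrix

# Leading functional principal component of the demeaned curves and the
# persistence of its score series.
centred = curves - curves.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
score = centred @ Vt[0]
rho = np.sum(score[1:] * score[:-1]) / np.sum(score[:-1] ** 2)
print("AR(1) coefficient of the leading functional score: %.3f" % rho)
```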


Quantitative Risk Management Workshop and Book Launch

Alexander J. McNeil: Backtesting Trading Book Models Using Estimates of VaR, Expected Shortfall and Realised p-Values

With the suggested change to the use of expected shortfall for market risk measurement in the trading book of a bank, there has been renewed interest in the problem of backtesting risk measure estimates. At the same time it has been suggested that there is a fundamental problem with backtesting expected shortfall due to its "non-elicitability" (Gneiting). Elicitable risk measures are functionals of distributions that minimise expected scores calculated using so-called consistent scoring functions; examples are Value-at-Risk and the expectile risk measure. We examine different ways in which scoring functions may be used in practical backtesting. While scoring functions offer new tools for comparing the effectiveness of banks' risk models, we concur with the conclusion of a recent paper by Acerbi and Szekely (2014) that the lack of elicitability does not undermine the potential use of expected shortfall as a measure for the trading book. We also consider the use of realised p-values in backtesting.
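
A minimal sketch of comparative backtesting with a consistent scoring function for VaR (the pinball/quantile score); the data, the two competing "models" and the confidence level are illustrative, and realised p-values are not covered here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Pinball (quantile) score, a consistent scoring function for the alpha-quantile / VaR:
# S(v, x) = (x - v) * (alpha - 1{x < v}); a lower average score means better VaR forecasts.
def quantile_score(var_forecast, loss, alpha):
    return (loss - var_forecast) * (alpha - (loss < var_forecast))

alpha, T = 0.99, 10000
losses = rng.standard_t(df=5, size=T)                 # realised losses (illustrative)

# Two static VaR "models": the correct t(5) quantile and a thin-tailed normal quantile.
var_t = np.full(T, stats.t.ppf(alpha, df=5))
var_norm = np.full(T, stats.norm.ppf(alpha))

print("avg score, t model:      %.4f" % quantile_score(var_t, losses, alpha).mean())
print("avg score, normal model: %.4f" % quantile_score(var_norm, losses, alpha).mean())
```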


Paul Embrechts: How to Model Operational Risk

Under the international regulatory frameworks of Solvency 2 (for insurance) and Basel 3 (for banking), Operational Risk is defined as "The risk of loss resulting from inadequate or failed processes, people and systems or from external events. This definition includes legal risk, but excludes strategic and reputational risk." In this talk I will discuss the main issues underlying the quantitative modelling of Operational Risk.
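
A minimal sketch of one standard quantitative approach, the loss distribution approach (a compound Poisson frequency model with lognormal severities); this is not necessarily what the talk covers, and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(10)

# Compound-loss simulation for one operational risk cell: a Poisson number of
# lognormal severities per year; a capital figure is read off as a high quantile
# of the simulated annual aggregate loss.  All parameter values are illustrative.
n_years = 20000
counts = rng.poisson(lam=25, size=n_years)
aggregate = np.array([rng.lognormal(mean=10.0, sigma=2.0, size=k).sum() for k in counts])
print("99.9%% quantile of the annual aggregate loss: %.3e" % np.quantile(aggregate, 0.999))
```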