Abstracts

Benny Hartwig: Robust Inference in Time-Varying Structural VAR Models: The DC-Cholesky Multivariate Stochastic Volatility Model


The ordering of variables is often considered to be negligible for the estimates of the reduced-form covariance matrix of the Cholesky multivariate stochastic volatility model. This paper shows that this procedure imposes systematically different dynamic restrictions on the covariance matrix across alternative orderings when the ratio of reduced-form volatilities is time-varying. Consequently, conclusions drawn from this model also hinge on the supposedly unimportant choice of ordering. This paper illustrates these effects for a small-scale macroeconometric model and proposes the dynamic correlation Cholesky multivariate stochastic volatility model as a robust alternative.
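
A minimal numerical sketch of the ordering issue (NumPy only; the impact coefficient and volatility paths are hypothetical): when the ratio of reduced-form volatilities moves over time, a covariance path generated with a constant impact matrix under one ordering cannot be matched by a constant impact matrix under the reversed ordering.

    import numpy as np

    # Cholesky MSV with 2 variables: Sigma_t = A_inv @ H_t @ A_inv.T, where A is
    # unit lower triangular and time-invariant, and H_t is diagonal with the
    # stochastic volatilities. Illustrative values: h1/h2 is time-varying.
    a21 = 0.5
    h1 = [1.0, 4.0, 9.0]   # variance of the first orthogonalized shock
    h2 = [1.0, 1.0, 1.0]   # variance of the second orthogonalized shock

    A_inv = np.linalg.inv(np.array([[1.0, 0.0], [a21, 1.0]]))
    P = np.array([[0.0, 1.0], [1.0, 0.0]])   # permutation reversing the ordering

    for t in range(3):
        Sigma = A_inv @ np.diag([h1[t], h2[t]]) @ A_inv.T   # ordering (1,2)
        L = np.linalg.cholesky(P @ Sigma @ P)               # same Sigma, ordering (2,1)
        print(f"t={t}: implied impact coefficient under (2,1): {L[1, 0] / L[0, 0]:+.3f}")
    # The implied coefficient drifts with h1/h2, so holding it constant under the
    # reversed ordering imposes a different dynamic restriction on Sigma_t.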
 

Sühan Altay: Optimal Convergence Trading with Unobservable Pricing Errors

We study a dynamic portfolio optimization problem related to convergence trading, an investment strategy that exploits temporary mispricing by simultaneously buying relatively underpriced assets and selling short relatively overpriced ones, with the expectation that their prices converge in the future. We build on the model of Liu and Timmermann (2013) and extend it by incorporating unobservable Markov-modulated pricing errors into the price dynamics of two co-integrated assets. We characterize the optimal portfolio strategies in both full and partial information settings, under unrestricted as well as beta-neutral strategies. Using the innovations approach, we provide the filtering equation that is essential for solving the optimization problem under partial information. Finally, to illustrate the model's capabilities, we provide an example with a two-state Markov chain.
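
As an illustration of the setting (a self-contained sketch with hypothetical parameters, not the authors' calibration), the following simulates a pricing error that mean-reverts at a speed driven by an unobservable two-state Markov chain; under partial information this regime must be filtered from the observed spread.

    import numpy as np

    rng = np.random.default_rng(0)

    # Spread Z_t between two co-integrated assets, with Markov-modulated
    # mean-reversion speed kappa[state]; the state is hidden from the trader.
    dt, n = 1.0 / 252.0, 2520
    kappa = [2.0, 10.0]     # slow- and fast-convergence regimes
    switch = [0.5, 1.0]     # intensity of leaving state 0 resp. state 1
    sigma = 0.1

    state, z, path, regimes = 0, 0.0, [], []
    for _ in range(n):
        if rng.random() < switch[state] * dt:   # first-order chain transition
            state = 1 - state
        z += -kappa[state] * z * dt + sigma * np.sqrt(dt) * rng.normal()
        path.append(z)
        regimes.append(state)

    # A convergence trade shorts the spread when it is wide and buys it back as
    # it converges; the filtering equation estimates P(state | observed path).
    for s in (0, 1):
        obs = [z for z, r in zip(path, regimes) if r == s]
        print(f"spread std in regime {s}: {np.std(obs):.4f}")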

Diyora Salimova: Deep neural networks in numerical approximation of high-dimensional PDEs

In recent years deep artificial neural networks (DNNs) have been used very successfully in numerical simulations for numerous computational problems, including object and face recognition, natural language processing, fraud detection, computational advertisement, and the numerical approximation of partial differential equations. Such simulations indicate that DNNs seem to admit the fundamental flexibility to overcome the curse of dimensionality, in the sense that the number of real parameters used to describe the DNN grows at most polynomially in both the reciprocal of the prescribed approximation accuracy and the dimension of the function which the DNN aims to approximate. In this talk I present our recent result which rigorously proves that DNNs do overcome the curse of dimensionality in the numerical approximation of Kolmogorov PDEs with constant diffusion and nonlinear drift coefficients.
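
A minimal sketch of the idea behind such approximations (assuming PyTorch; the drift, terminal condition, and network sizes are illustrative choices, not those of the result presented): by the Feynman-Kac representation, u(T, x) = E[phi(X_T^x)] for a Kolmogorov PDE with constant diffusion, so a DNN can be fitted by regression on simulated endpoints.

    import torch

    torch.manual_seed(0)
    d, T, n_steps, batch = 10, 1.0, 20, 256   # illustrative problem sizes
    dt = T / n_steps

    def drift(x):                             # a nonlinear drift coefficient
        return torch.tanh(x)

    def phi(x):                               # terminal condition of the PDE
        return (x ** 2).sum(dim=1, keepdim=True)

    net = torch.nn.Sequential(                # parameter count polynomial in d
        torch.nn.Linear(d, 50), torch.nn.ReLU(),
        torch.nn.Linear(50, 50), torch.nn.ReLU(),
        torch.nn.Linear(50, 1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for _ in range(2000):
        x0 = 2.0 * torch.rand(batch, d) - 1.0      # initial points in [-1, 1]^d
        x = x0.clone()
        for _ in range(n_steps):                   # Euler scheme, unit diffusion
            x = x + drift(x) * dt + dt ** 0.5 * torch.randn_like(x)
        loss = ((net(x0) - phi(x)) ** 2).mean()    # L2 regression on MC samples
        opt.zero_grad(); loss.backward(); opt.step()

    # net now approximates x -> u(T, x) = E[phi(X_T^x)] on the sampled domain.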

Sara Svaluto-Ferro: Infinite dimensional polynomial jump-diffusions

We introduce polynomial jump-diffusions taking values in an arbitrary Banach space via their infinitesimal generator. We obtain two representations of the (conditional) moments in terms of solutions of systems of ODEs. These representations generalize the well-known moment formulas for finite-dimensional polynomial jump-diffusions. We illustrate the practical relevance of these formulas by several applications. In particular, we consider (potentially rough) forward variance polynomial models and illustrate how to use the moment formulas to compute prices of VIX options.
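
In the finite-dimensional case the moment formula amounts to a linear ODE system on a monomial basis. The sketch below (a CIR-type polynomial diffusion with illustrative parameters; SciPy supplies the matrix exponential) verifies the formula against the known conditional mean.

    import numpy as np
    from scipy.linalg import expm

    # CIR-type polynomial diffusion: dX = kappa*(theta - X) dt + sigma*sqrt(X) dW.
    # Its generator maps x^k to
    #   (k*kappa*theta + 0.5*sigma**2*k*(k-1)) * x**(k-1) - k*kappa * x**k,
    # so conditional moments solve m'(t) = G m(t) on the basis (1, x, ..., x^n).
    kappa, theta, sigma, n = 1.5, 0.04, 0.2, 4

    G = np.zeros((n + 1, n + 1))
    for k in range(1, n + 1):
        G[k, k] = -k * kappa
        G[k, k - 1] = k * kappa * theta + 0.5 * sigma**2 * k * (k - 1)

    T, x0 = 2.0, 0.03
    H = np.array([x0**j for j in range(n + 1)])   # monomials at the initial point
    moments = expm(T * G) @ H                     # E[X_T^k | X_0 = x0], k = 0..n

    # Sanity check against the closed-form CIR mean:
    print(moments[1], theta + (x0 - theta) * np.exp(-kappa * T))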

Jean-Bernard Salomond (UPEC): Some properties of the Gaussian scale mixture prior for sparse models

Over the past twenty years we have seen an explosion of new methods to handle sparse high-dimensional models. In the Bayesian setting, the first general theoretical results for high-dimensional sparse Gaussian models were obtained for spike-and-slab priors. However, other approaches to modelling sparsity are now used, mainly to tackle computational issues. Among them, the so-called "one-group" models have grown more and more popular. Most of the more popular one-group priors can be written as scale mixtures of Gaussians, with shrinkage properties induced by choosing a mixing distribution with a lot of mass near 0. The horseshoe prior, for instance, proposed in Carvalho et al. (2010), has been widely studied from both a theoretical and a practical point of view. Recently, van der Pas et al. (2016) proposed some general conditions on the Gaussian scale mixture prior under which the posterior contracts at the minimax rate. After reviewing this result, we will see how priors that meet these conditions can be used for model selection, and we will give upper bounds on the risks. In the last part of the presentation, we will see how Gaussian scale mixtures can be adapted to more complex models.
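
As a small self-contained illustration (the global shrinkage level is a hypothetical value), the draws below exhibit the two features of the horseshoe that such contraction results exploit: substantial mass near zero, which shrinks noise, and heavy tails, which leave large signals essentially untouched.

    import numpy as np

    rng = np.random.default_rng(0)

    # Horseshoe prior as a Gaussian scale mixture (Carvalho et al., 2010):
    #   lambda_i ~ Half-Cauchy(0, 1),  beta_i | lambda_i ~ N(0, tau^2 * lambda_i^2).
    tau = 0.1                                    # global shrinkage level
    lam = np.abs(rng.standard_cauchy(100_000))   # local half-Cauchy scales
    beta = rng.normal(0.0, tau * lam)            # draws from the one-group prior

    print("P(|beta| < 0.01) =", np.mean(np.abs(beta) < 0.01))  # spike near zero
    print("P(|beta| > 5)    =", np.mean(np.abs(beta) > 5))     # heavy tails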