
Abstracts Research Seminar Summer Term 2016

Çağın Ararat: Measuring systemic risk via model uncertainty

In the event of a financial crisis, it becomes important to measure and allocate the risk of a network of financial institutions. Such risk, which takes into account the interconnectedness of the financial institutions, is called "systemic risk." In this talk, we will focus on a recent multivariate approach for measuring systemic risk in which the state of the financial network is modeled as a random vector of individual equities/losses. The systemic risk measure is then defined as the set of all capital allocation vectors that make the "impact of the system on society" acceptable. We present a dual representation theorem for the systemic risk measure and provide economic interpretations of the dual variables. We also show that the systemic risk measure can be seen as a multivariate shortfall risk measure under model uncertainty. As examples, we will consider the classical Eisenberg-Noe network model, a flow network model, and a financial system with an exponential aggregation mechanism.
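A minimal sketch of this set-valued construction, in generic notation that need not match the speaker's: write X = (X_1, ..., X_n) for the random vector of individual equities/losses, Λ for an aggregation function quantifying the impact of the system on society, and A for an acceptance set. The systemic risk measure is then the set

\[
  R(X) \;=\; \bigl\{\, k \in \mathbb{R}^n \;:\; \Lambda(X + k) \in A \,\bigr\},
\]

i.e., the collection of all capital allocation vectors k whose injection into the network makes the aggregated outcome acceptable.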

Judith Rousseau: Some results on the behaviour of the posterior distribution of static and dynamic mixture models - parametric and nonparametric cases

Consider a mixture model with k components. If the true distribution has k0 < k components, the parameters are not identifiable, in the sense that the true distribution can be approximated either by emptying the extra components (forcing some of the weights to converge to 0) or by merging them with real ones (giving these components parameter values equal to those of existing components). In this case the maximum likelihood estimator does not converge to a single value as the number of observations goes to infinity but to a set of possible values, and becomes harder to interpret. By contrast, Bayesian posterior distributions based on well-chosen prior distributions behave well asymptotically, and this behaviour is now fully understood. The case of hidden Markov models is much more delicate. In this talk we will first explain why hidden Markov models lead to a much more complex analysis, and we will give sufficient conditions on the prior under which the extra components of the model are either emptied or merged. These results concern fully parametric models. In recent years, results have been obtained on the identifiability of mixture models - possibly dynamic - when the emission distributions are not specified. In particular, in the case of independent and identically distributed hidden states living on a finite state space, the parameters (emission distributions and weights of the mixture) are identifiable when each individual is associated with three independent observations. In the case of dependent hidden states, the parameters are identifiable as soon as the transition matrix is invertible. This opens new possibilities for more robust estimation. We will discuss some aspects of semi-parametric Bayesian estimation, including Bernstein-von Mises theorems for the weights (or transition matrices) when the emission distributions are not parametrically specified. This is joint work with Zoe van Havre and Kerrie Mengersen for the parametric part and with Elisabeth Gassiat and Elodie Vernet for the semi-parametric part.
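A toy illustration of this non-identifiability, in generic notation: fit a two-component mixture f(x) = w g(x; θ_1) + (1 - w) g(x; θ_2) when the true density is a single component g(·; θ_0). The truth is recovered either by emptying the extra component or by merging the two components,

\[
  g(x;\theta_0) \;=\; 1\cdot g(x;\theta_0) + 0\cdot g(x;\theta_2)
              \;=\; w\, g(x;\theta_0) + (1-w)\, g(x;\theta_0), \qquad w \in [0,1],
\]

so an entire set of parameter values, rather than a single point, is compatible with the true distribution.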

Gunther Leobacher: QMC methods in quantitative finance, tradition and perspectives 

Co-authors: J. Dick (UNSW Sydney), C. Irrgeher (JKU Linz), F. Pillichshammer (JKU Linz), M. Szölgyenyi (WU Wien)  

Quasi-Monte Carlo (QMC) methods were developed in the second half of the 20th century with the primary goal of computing integrals over d-dimensional domains for modest dimensions 5 < d < 15. Notable contributions have been made by Hlawka, Korobov, Sobol, Niederreiter and others. Towards the end of the century it was observed that those methods can be put to good use for integration problems in much higher dimensions, with d in the hundreds or thousands. This was in particular true for many problems from financial mathematics, e.g., the evaluation of financial derivatives, but also, more generally, for the numerical treatment of stochastic differential equations. The success of QMC in these areas could not be explained by the theory available at the time, and researchers have since been striving to provide explanations. In our talk we will review the basics of quasi-Monte Carlo, e.g., the notion of low-discrepancy sequences and Koksma-Hlawka-type inequalities. Subsequently we shall have a look at some by now classical explanations for the effectiveness of QMC for financial problems, in particular the concepts of effective dimension and weighted Koksma-Hlawka-type inequalities. Then we will present some of our own contributions to these topics, concentrating on the recently developed concepts of Hermite spaces and fast, efficient orthogonal transforms. We shall briefly consider tractability of integration in certain Hermite spaces. Numerical examples will serve to illustrate the theoretical findings.
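For reference, the classical Koksma-Hlawka inequality mentioned above bounds the QMC integration error by the product of a measure of the roughness of the integrand and a measure of the quality of the point set:

\[
  \left| \int_{[0,1]^d} f(x)\,dx \;-\; \frac{1}{N}\sum_{i=1}^{N} f(x_i) \right|
  \;\le\; V_{\mathrm{HK}}(f)\, D_N^{*}(x_1,\ldots,x_N),
\]

where V_HK(f) is the variation of f in the sense of Hardy and Krause and D_N^* is the star discrepancy of the points x_1, ..., x_N; low-discrepancy sequences are constructions for which D_N^* is of order (log N)^d / N.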

[1] C. Irrgeher, P. Kritzer, G. Leobacher, and F. Pillichshammer. Integration in Hermite spaces of analytic functions. Journal of Complexity, 31(3):380-404, 2015. doi: 10.1016/j.jco.2014.08.004

[2] C. Irrgeher and G. Leobacher. High-dimensional integration on Rd, weighted Hermite spaces, and orthogonal transforms. Journal of Complexity, 31(2):174-205, 2015. doi: 10.1016/j.jco.2014.09.002

[3] G. Leobacher and M. Szölgyenyi. A numerical method for SDEs with discontinuous drift. BIT Numerical Mathematics, pages 1-12, 2015. doi: 10.1007/s10543-015-0549-x

[4] G. Leobacher. Fast orthogonal transforms and generation of Brownian paths. Journal of Complexity, 28:278-302, 2012. doi: 10.1016/j.jco.2011.11.003

Andreas Löhne: On the Dual of the Solvency Cone

(Co-author: Birgit Rudloff)

A solvency cone is a polyhedral convex cone which is used in Mathematical Finance to model proportional transaction costs. It consists of those portfolios which can be traded into nonnegative positions. In this note, we provide a characterization of its dual cone in terms of extreme directions and discuss some consequences, among them: (i) an algorithm to construct extreme directions of the dual cone when a corresponding "contribution scheme" is given; (ii) estimates for the number of extreme directions; (iii) an explicit representation of the dual cone for special cases. The validation of the algorithm is based on the following easy-to-state but difficult-to-solve result on bipartite graphs: Running over all spanning trees of a bipartite graph, the number of left degree sequences equals the number of right degree sequences.
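A hedged two-asset illustration of these objects (notation mine, not the authors'): if π^{12} units of asset 1 buy one unit of asset 2 and π^{21} units of asset 2 buy one unit of asset 1, the solvency cone is K = cone{ e_1, e_2, π^{12} e_1 - e_2, π^{21} e_2 - e_1 }, the set of portfolios that can be traded into nonnegative positions. Its dual cone is

\[
  K^{+} \;=\; \bigl\{\, w \in \mathbb{R}^2 \;:\; w\cdot k \ge 0 \ \text{for all } k \in K \,\bigr\}
        \;=\; \bigl\{\, w \ge 0 \;:\; w_2 \le \pi^{12} w_1,\ w_1 \le \pi^{21} w_2 \,\bigr\},
\]

the cone of consistent price systems compatible with the bid-ask spread; in higher dimensions its extreme directions are the objects characterized in the talk.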

Ruggero Bellio: Fixed-effects estimation of 2PL models

Fixed-effects estimation was used in the early days of 2PL models, in the form of Joint Maximum Likelihood (JML) estimation. Despite some theoretical advantages of the fixed-effects model specification, JML was soon abandoned, as its application is hampered by numerical and inferential difficulties. Marginal Maximum Likelihood (MML), based on an arbitrary normality assumption for the ability of the examinees, then became the de facto standard method. This talk illustrates a proposal to overcome the limitations of both JML and MML, resulting in an orthodox fixed-effects approach for 2PL models. The main method for the joint estimation of item and person parameters is bias-reducing estimation, which carries out model-based shrinkage of all the model parameters. Built upon the bias-reducing method, inference on item parameters is performed by means of the modified profile likelihood, with satisfactory results. The modified profile likelihood is also employed to perform model selection through a suitable lasso-type approach. On the computational side, all the methodology has been implemented in R. The TMB package for automatic differentiation has been used to code the modified profile likelihood in C++, with remarkable numerical performance even in high-dimensional settings.
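As a rough illustration of the model behind these methods (a sketch in R with generic notation, not the authors' implementation): in the 2PL model the probability that person p answers item i correctly is logit^{-1}(a_i (θ_p - b_i)), and the joint (fixed-effects) log-likelihood treats both person and item parameters as unknowns.

# Sketch of the 2PL joint (fixed-effects) log-likelihood.
# y: persons x items 0/1 response matrix; theta: person abilities;
# a, b: item discrimination and difficulty parameters.
loglik_2pl <- function(theta, a, b, y) {
  # eta[p, i] = a_i * (theta_p - b_i)
  eta <- outer(theta, b, "-") * matrix(a, nrow = length(theta),
                                       ncol = length(a), byrow = TRUE)
  p <- plogis(eta)                          # P(Y_{pi} = 1)
  sum(dbinom(y, size = 1, prob = p, log = TRUE))
}

# Hypothetical usage on simulated data:
set.seed(1)
theta <- rnorm(100); a <- runif(10, 0.5, 2); b <- rnorm(10)
eta <- outer(theta, b, "-") * matrix(a, 100, 10, byrow = TRUE)
y <- matrix(rbinom(1000, 1, plogis(eta)), 100, 10)
loglik_2pl(theta, a, b, y)

Maximizing this objective directly is JML; the bias-reducing and modified profile likelihood approaches described in the talk work with adjusted versions of it rather than maximizing it as is.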

This is a joint work with Ioannis Kosmidis (UCL, London) and Nicola Sartori (University of Padova, Italy).

Cristiano Varin: Composite likelihood estimation for spatial clustered binary data

Composite likelihood is an inference function constructed by compounding component likelihoods based on low-dimensional marginal or conditional distributions. Since the components are multiplied as if they were independent, the composite likelihood inherits the properties of likelihood inference under a misspecified model. The virtue of composite likelihood inference is "combining the advantages of likelihood with computational feasibility" (Reid, 2013). Given their wide applicability, composite likelihoods are attracting growing interest as surrogates for intractable likelihoods in frequentist and Bayesian inference. Despite this promise, the application of composite likelihood is still limited by theoretical and computational issues which have received only partial or initial answers. Open theoretical questions concern the characterization of general model conditions assuring the validity of composite likelihood inference, the optimal selection of component likelihoods, and the precise evaluation of estimation uncertainty. Computational issues concern how to design composite likelihood methods that balance statistical and computational efficiency. In this talk, after a critical review of composite likelihood theory built on the review paper by Varin, Firth and Reid (2011), I shall focus on an ongoing project with Manuela Cattelan (University of Padova) on modelling spatial dependence in clustered binary data arising in disease prevalence studies (Diggle and Giorgi, 2016).
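For concreteness, a generic pairwise composite likelihood of the kind alluded to above (standard notation, not specific to the project mentioned): component likelihoods for pairs of observations are multiplied as if the pairs were independent, possibly with nonnegative weights w_{ij},

\[
  c\ell(\theta; y) \;=\; \sum_{i<j} w_{ij} \,\log f(y_i, y_j; \theta),
\]

and the maximum composite likelihood estimator is, under regularity conditions, asymptotically normal with variance given by the inverse of the Godambe (sandwich) information

\[
  G(\theta) \;=\; H(\theta)\, J(\theta)^{-1}\, H(\theta), \qquad
  H(\theta) = \mathrm{E}\{-\nabla^2 c\ell(\theta; Y)\}, \quad
  J(\theta) = \mathrm{var}\{\nabla c\ell(\theta; Y)\}.
\]

Estimating the variability matrix J is one of the points where the precise evaluation of estimation uncertainty mentioned above becomes delicate.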

Marica Manisera and Paola Zuccolotto: Analyzing human perceptions from survey data with Nonlinear CUB models  

The analysis of human perceptions is often carried out by means of questionnaires in which respondents are asked to express ratings about the objects being evaluated. The goal of the statistical tools proposed for this kind of data is to explicitly characterize the respondents' perceptions about a latent trait while taking into account the ordinal categorical scale of measurement of the variables involved. This talk deals with a statistical model for rating data, obtained from a particular assumption about the unconscious mechanism driving individuals' responses on a rating scale. The basic idea derives from the founding paradigm of CUB models (Piccolo, 2003; D'Elia and Piccolo, 2005; Iannario and Piccolo, 2012). The described model is called Nonlinear CUB (Manisera and Zuccolotto, 2014). The main innovation of Nonlinear CUB models is that, in their framework, it is possible to define the new concept of transition probability, i.e. the probability of moving up one rating point at a given step of the decision process. Transition probabilities and the related transition plots are able to describe the respondents' state of mind towards the response scale used to express judgments. In particular, NLCUB can address the (possible) unequal spacing of the rating categories that occurs when respondents, in their unconscious search for the "right" response category, find it easier to move, for example, from rating 1 to 2 than from rating 4 to 5. This corresponds to the concept of "nonlinearity" introduced by NLCUB models, defined as the non-constancy of the transition probabilities. Case studies on real data are presented in order to show how the model works in practice. R functions are available for the computations.
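As a point of reference, a sketch in R of the basic CUB probability mass function that Nonlinear CUB generalizes (illustrative code with generic notation, not the authors' released routines): a rating R in {1, ..., m} is modeled as a mixture of a shifted binomial component ("feeling", parameter xi) and a discrete uniform component ("uncertainty", mixture weight 1 - pi).

# CUB probability mass function P(R = r), r = 1, ..., m.
dcub <- function(r, m, pi, xi) {
  shifted_binom <- dbinom(r - 1, size = m - 1, prob = 1 - xi)  # shifted binomial component
  pi * shifted_binom + (1 - pi) / m                            # mixture with discrete uniform
}

# Hypothetical example: rating probabilities on a 7-point scale with pi = 0.8, xi = 0.3.
dcub(1:7, m = 7, pi = 0.8, xi = 0.3)

In Nonlinear CUB the feeling component is generalized so that the transition probabilities defined above need not be constant across the scale; see reference [1] below for the exact formulation.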


[1] Manisera M., Zuccolotto P. (2014). Modeling rating data with Nonlinear CUB models. Computational Statistics & Data Analysis, 78, 100-118. www.sciencedirect.com/science/article/pii/S0167947314000978
Preprint version: www.researchgate.net/publication/261644061_Modeling_rating_data_with_Nonlinear_CUB_models.
R routines are freely downloadable at www.researchgate.net/publication/277717439, as explained in the paper.
[2] Manisera M., Zuccolotto P. (2014). Nonlinear CUB models: the R code. Statistical Software - Statistica & Applicazioni, XII(2), 205-223.
[3] Manisera M., Zuccolotto P. (2015). Identifiability of a model for discrete frequency distributions with a multidimensional parameter space. Journal of Multivariate Analysis, 140, 302-316. www.sciencedirect.com/science/article/pii/S0047259X15001268
[4] Manisera M., Zuccolotto P. (2016). Estimation of Nonlinear CUB models via numerical optimization and EM algorithm. Communications in Statistics - Simulation and Computation, forthcoming.
[5] Manisera M., Zuccolotto P. (2015). Visualizing multiple results from Nonlinear CUB models with R grid viewports. Electronic Journal of Applied Statistical Analysis, 8(3), 360-373 (freely available on the web).
[6] Manisera M., Zuccolotto P. (2016). Treatment of ‘don’t know’ responses in a mixture model for rating data. Metron, 74, 99-115. link.springer.com/article/10.1007/s40300-015-0075-2


Brendan Murphy: Model-based clustering for multivariate categorical data

Latent class analysis is the most common model used to perform model-based clustering for multivariate categorical responses. The selection of the variables most relevant for clustering is an important task which can considerably affect the quality of the clustering. We outline two approaches to model-based clustering and variable selection for multivariate categorical data. The first method uses a Bayesian approach in which clustering and variable selection are carried out simultaneously via MCMC based on a collapsed Gibbs sampler; post-hoc procedures for parameter and uncertainty estimation are outlined. The second method considers variable selection based on stepwise model selection, using a model that avoids the local independence assumption employed in competing approaches. The methods are illustrated on simulated and real data and are shown to give improved clustering performance compared to competing methods.
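A standard way to write the latent class model referred to above (generic notation): with G classes having weights τ_g and M categorical variables assumed conditionally independent within a class (the local independence assumption),

\[
  p(y_1,\ldots,y_M) \;=\; \sum_{g=1}^{G} \tau_g \prod_{m=1}^{M} p(y_m \mid z = g).
\]

The second approach outlined in the talk uses a model that relaxes precisely this local independence assumption.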

Luc Bauwens: Autoregressive Moving Average Infinite Hidden Markov-Switching Models

Markov-switching models are usually specified under the assumption that all the parameters change when a regime switch occurs. Relaxing this hypothesis and being able to detect which parameters evolve over time are relevant for interpreting the changes in the dynamics of the series and for specifying models parsimoniously, and may be helpful in forecasting. We propose the class of sticky infinite hidden Markov-switching autoregressive moving average models, in which we disentangle the break dynamics of the mean and the variance parameters. In this class, the number of regimes is possibly infinite and is determined when estimating the model, thus avoiding the need to fix this number via a model choice criterion. We develop a new Markov chain Monte Carlo estimation method that solves the path dependence issue due to the moving average component. Empirical results on macroeconomic series illustrate that the proposed class of models dominates the model with fixed parameters in terms of point and density forecasts.
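One illustrative way to write such a specification (a sketch, not necessarily the authors' exact parameterization): with separate hidden regime chains s^{μ}_t and s^{σ}_t for the mean and the variance,

\[
  y_t \;=\; \mu_{s^{\mu}_t} + \sum_{i=1}^{p} \phi_i \bigl(y_{t-i} - \mu_{s^{\mu}_{t-i}}\bigr)
        + \sum_{j=1}^{q} \theta_j\, \varepsilon_{t-j} + \varepsilon_t,
  \qquad \varepsilon_t \sim \mathcal{N}\bigl(0, \sigma^2_{s^{\sigma}_t}\bigr).
\]

Because of the moving-average terms, the likelihood at time t depends on the entire past regime path, which is the path dependence issue addressed by the proposed sampler.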

Joint work with Jean-François Carpantier and Arnaud Dufays.  

Keywords: ARMA, Bayesian inference, Dirichlet process, Forecasting.

Peter Bühlmann: Hierarchical high-dimensional statistical inference  

In the presence of highly correlated variables, which is rather common in high-dimensional data, it seems indispensable to go beyond inferring individual regression coefficients in a generalized linear model. Hierarchical inference is a powerful framework that enables significance statements for groups of correlated variables, or sometimes even for individual variables. Besides methodology and theory, we obtain interesting results for genome-wide association studies: unlike correlation-based marginal approaches, the resulting findings have a straightforward "causal-type" interpretation under additional assumptions.
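Schematically, and in generic notation, hierarchical inference for a high-dimensional (generalized) linear model with coefficients β_1, ..., β_p proceeds by testing group hypotheses along a cluster tree of the covariates,

\[
  H_{0,G}:\ \beta_j = 0 \quad \text{for all } j \in G,
\]

starting from large groups and moving to smaller subgroups only where the larger group is rejected; strongly correlated variables then end up with a significance statement for a small group rather than for individual, practically indistinguishable coefficients.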

Johanna Nešlehová and Christian Genest: Estimating extremal dependence using B-splines  

B-spline smoothing techniques are commonly used in functional data analysis. In this talk, I will explain how this tool can be adapted to derive intrinsic estimators of the Pickands dependence function characterizing the dependence in the maximum attractor of a bivariate continuous distribution. The approach is rooted in a rank-based transformation of the data due to Cormier et al. (2014). As shown therein, a plot of the transformed pairs of points provides a useful tool for detecting extreme-value dependence or extremal tail behavior. When such dependence is present, a constrained B-spline can be fitted through a suitable subset of the points to obtain an estimator of the Pickands dependence function associated with the extreme-value attractor. This estimator is intrinsic, i.e., it satisfies all the conditions required to qualify as a Pickands dependence function. The excellent finite-sample performance of this estimator was documented through simulations by Cormier et al. (ibid.).
As a follow-up to this work, I will state minimal conditions under which this estimator is consistent, and I will give its limiting distribution. This result is valid whatever the order of the B-splines. I will also demonstrate through theory and simulations that while B-splines of order 3 are sufficient to estimate the Pickands dependence function with accuracy, B-splines of order 4 are essential to grasp the features of the spectral distribution associated with the maximal attractor. This approach leads to an estimator that generally outperforms the maximum empirical likelihood estimator studied by Einmahl & Segers (2009).
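For reference, the conditions an intrinsic estimator must satisfy are those that characterize a Pickands dependence function: a bivariate extreme-value copula can be written as

\[
  C(u,v) \;=\; \exp\!\left\{ \log(uv)\, A\!\left(\frac{\log v}{\log (uv)}\right) \right\},
  \qquad u, v \in (0,1),
\]

where A : [0,1] → [1/2, 1] is convex and satisfies max(t, 1-t) ≤ A(t) ≤ 1 for all t; A ≡ 1 corresponds to independence and A(t) = max(t, 1-t) to perfect positive dependence. The constrained B-spline fit is designed so that these shape restrictions hold by construction.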
This talk is based on joint work with A. Bücher, C. Genest, and D. Sznajder.