
Abstracts Research Seminar Winter Term 2017/18

Efstathia Bura: Near-equivalence in Forecasting Accuracy of Linear Dimension Reduction Methods in Large Panels of Macro-variables

We compare the forecast accuracy of widely used linear estimators, such as ordinary least squares (OLS), dynamic factor models (DFMs), RIDGE regression, partial least squares (PLS) and sliced inverse regression (SIR), a sufficient dimension reduction (SDR) technique, using a large panel of potentially related macroeconomic variables. We find that (a) principal component regression (PCR), RIDGE regression, PLS and SIR exhibit overall near-equivalent forecasting accuracy, but (b) SIR appears to have superior targeting power, requiring only one or two linear combinations of the predictors, whereas PCR and PLS need substantially more than two components to achieve their minimum mean square forecast error. This empirical near-equivalence in forecast accuracy motivated the theoretical contributions presented in this talk. We show that the most widely used linear dimension reduction methods solve closely related maximization problems and have closely related solutions that can be decomposed into “signal” and “scaling” components. We organize them under a common scheme that sheds light on their commonalities and differences as well as their functionality. The competitive advantage of SIR, or SDR in general, when dealing with large panels of macro variables is the dramatic decrease in the complexity of the forecasting problem, as it delivers the most parsimonious forecast model.
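
To make the comparison concrete, the following is a minimal sketch of how one-component forecasts from PCR, ridge regression, PLS and SIR can be fitted. It is not the authors' code: the data are simulated, and the number of slices, the ridge penalty and the single-index signal are hypothetical choices.

    # Illustrative sketch: one-component forecasts from PCR, ridge, PLS and SIR
    # on simulated data. All tuning choices are hypothetical.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression, Ridge
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    n, p = 200, 60                          # observations, predictors
    X = rng.standard_normal((n, p))
    beta = np.zeros(p); beta[:5] = 1.0      # single-index "signal"
    y = X @ beta + rng.standard_normal(n)

    def sir_directions(X, y, n_slices=10, n_dirs=1):
        """SIR: leading eigenvectors of Cov(E[X | y]) relative to Cov(X)."""
        Xc = X - X.mean(axis=0)
        Sigma = np.cov(Xc, rowvar=False)
        slices = np.array_split(np.argsort(y), n_slices)
        M = sum(len(s) / len(y) * np.outer(Xc[s].mean(axis=0), Xc[s].mean(axis=0))
                for s in slices)
        vals, vecs = np.linalg.eig(np.linalg.solve(Sigma, M))
        order = np.argsort(-vals.real)
        return vecs[:, order[:n_dirs]].real

    pcr   = LinearRegression().fit(PCA(n_components=1).fit_transform(X), y)
    pls   = PLSRegression(n_components=1).fit(X, y)
    ridge = Ridge(alpha=10.0).fit(X, y)
    sir   = LinearRegression().fit(X @ sir_directions(X, y), y)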

Josef Teichmann: Machine Learning in Finance

We present several applications of machine learning techniques in finance and show some details of a calibration project. We also present several theoretical insights into why machine learning can make a difference in calibration, risk management and filtering.
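
As a toy illustration of the calibration idea (not the project discussed in the talk), the sketch below trains a small neural network to invert a Black-Scholes pricing map, learning the volatility parameter from a vector of option prices on simulated data; the strike grid, network size and other settings are hypothetical.

    # Toy "learn the calibration map" sketch: recover volatility from a vector of
    # Black-Scholes call prices. All settings are illustrative.
    import numpy as np
    from scipy.stats import norm
    from sklearn.neural_network import MLPRegressor

    def bs_call(S, K, T, r, sigma):
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    rng = np.random.default_rng(1)
    strikes = np.linspace(80, 120, 9)            # hypothetical strike grid
    sigmas = rng.uniform(0.1, 0.5, size=5000)    # training parameters
    prices = np.array([bs_call(100.0, strikes, 1.0, 0.01, s) for s in sigmas])

    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    model.fit(prices, sigmas)                    # price vector -> volatility

    sigma_hat = model.predict(bs_call(100.0, strikes, 1.0, 0.01, 0.3)[None, :])[0]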

Natesh Pillai: Bayesian Factor Models in High Dimensions

Sparse Bayesian factor models are routinely implemented for parsimonious dependence modeling and dimensionality reduction in high-dimensional applications. We provide a theoretical understanding of such Bayesian procedures in terms of posterior convergence rates for inferring high-dimensional covariance matrices where the dimension can be larger than the sample size. We will also discuss other high-dimensional shrinkage priors in the context of factor models.
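
For concreteness, the following small sketch sets up the kind of model the abstract refers to: a sparse loading matrix, covariance Sigma = Lambda Lambda' + sigma^2 I, and dimension exceeding the sample size. The dimensions, sparsity level and noise level are hypothetical, and only the naive sample covariance is computed for comparison, not the Bayesian posterior.

    # Sketch of the high-dimensional sparse factor model setting (p > n).
    # Dimensions, sparsity and noise level are hypothetical.
    import numpy as np

    rng = np.random.default_rng(2)
    n, p, k = 100, 500, 5                          # sample size < dimension, k factors
    Lam = rng.standard_normal((p, k)) * (rng.random((p, k)) < 0.1)   # sparse loadings
    Sigma = Lam @ Lam.T + 0.5 * np.eye(p)          # true covariance

    # data: x_i = Lam f_i + eps_i
    F = rng.standard_normal((n, k))
    X = F @ Lam.T + np.sqrt(0.5) * rng.standard_normal((n, p))

    S = np.cov(X, rowvar=False)                    # sample covariance: poor when p > n
    err = np.linalg.norm(S - Sigma, ord=2)         # operator-norm error that a sparse
                                                   # Bayesian factor model aims to improve on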

Johanna F. Ziegel: Elicitability and backtesting: Perspectives for banking regulation

Conditional forecasts of risk measures play an important role in the internal risk management of financial institutions as well as in regulatory capital calculations. In order to assess the forecasting performance of a risk measurement procedure, risk measure forecasts are compared to the realized financial losses over a period of time, and a statistical test of correctness of the procedure is conducted. This process is known as backtesting. Such traditional backtests are concerned with assessing some optimality property of a set of risk measure estimates, but they are not suited to comparing different risk estimation procedures. We investigate the proposal of comparative backtests, which are better suited for method comparisons on the basis of forecasting accuracy but necessitate an elicitable risk measure. We argue that supplementing traditional backtests with comparative backtests will enhance the existing trading book regulatory framework for banks by providing the correct incentive for accuracy of risk measure forecasts. In addition, the comparative backtesting framework could be used by banks internally as well as by researchers to guide the selection of forecasting methods. The discussion focuses on two risk measures, Value-at-Risk and expected shortfall, and is supported by a simulation study and data analysis.

Joint work with Natalia Nolde.
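
As a simplified illustration of the comparative idea, confined to Value-at-Risk (which is elicitable via the pinball/quantile loss), two VaR forecast sequences can be compared through their average scores and a Diebold-Mariano-type test on the score differences. The data, the two "methods" and the test details below are hypothetical placeholders, not the paper's setup.

    # Comparative backtest sketch for VaR at level alpha: compare average pinball
    # scores of two forecast methods via a Diebold-Mariano-type t-test.
    import numpy as np
    from scipy import stats

    alpha = 0.99
    rng = np.random.default_rng(3)
    losses = rng.standard_t(df=5, size=1000)                 # realized losses

    # two competing VaR forecast sequences (placeholders for real methods)
    var_a = np.full_like(losses, np.quantile(losses, alpha))
    var_b = var_a + 0.2 * rng.standard_normal(losses.size)   # noisy competitor

    def pinball(var_forecast, loss, alpha):
        """Strictly consistent scoring function for the alpha-quantile (VaR)."""
        ind = (loss <= var_forecast).astype(float)
        return (ind - alpha) * (var_forecast - loss)

    d = pinball(var_a, losses, alpha) - pinball(var_b, losses, alpha)
    t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(d.size))
    p_value = 2 * stats.t.sf(abs(t_stat), df=d.size - 1)     # H0: equal predictive accuracy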

Vladimir Veliov: Regularity and approximations of generalized equations; applications in optimal control

We begin by recalling the notions of strong metric regularity and strong (Hölder) sub-regularity of set-valued mappings x ⇉ F(x) between subsets of Banach spaces, and some applications to optimal control theory, where the mapping F is associated with the first-order optimality system. These applications require a standard coercivity condition. We then focus on optimal control problems that are affine in the control, where coercivity fails. The analysis of stability of the solutions of such problems requires an enhancement of the existing general metric regularity theory. We do this by introducing “strong bi-metric regularity” (SbiMR) and proving a version of the important Ljusternik-Graves theorem for SbiMR mappings. We then return to affine optimal control problems and present applications to numerical methods. We focus on two issues: (i) a Newton-type method and the pertaining convergence analysis; (ii) a discretization scheme of higher-order accuracy than the Euler scheme. In the case of affine problems, the investigation of each of these issues is technically rather different from that in the coercive case, especially for the high-order discretization.

The talk is based on joint works with J. Preininger and T. Scarinci.
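
For readers unfamiliar with the terminology, the standard notion (in the sense of Dontchev and Rockafellar; the bi-metric variant introduced in the talk refines it) can be recalled roughly as follows.

    A set-valued mapping $F : X \rightrightarrows Y$ is strongly metrically regular at
    $\bar{x}$ for $\bar{y} \in F(\bar{x})$ if the inverse $F^{-1}$ has a Lipschitz
    single-valued localization around $(\bar{y}, \bar{x})$: there exist neighborhoods
    $V$ of $\bar{y}$ and $U$ of $\bar{x}$ such that $y \mapsto F^{-1}(y) \cap U$ is
    single-valued and Lipschitz continuous on $V$. Equivalently, $F$ is metrically
    regular at $\bar{x}$ for $\bar{y}$, i.e.
    \[
      d\bigl(x, F^{-1}(y)\bigr) \le \kappa \, d\bigl(y, F(x)\bigr)
      \quad \text{for all } (x, y) \text{ near } (\bar{x}, \bar{y}),
    \]
    and, in addition, $F^{-1}$ admits a single-valued localization around $(\bar{y}, \bar{x})$.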

Bernd Bischl: Model-Based Optimization for Expensive Black-Box Problems and Hyperparameter Optimization

Sequential model-based optimization algorithms represent the state of the art for expensive black-box optimization problems and are becoming increasingly popular for hyperparameter optimization of machine learning algorithms, especially on larger data sets. The talk will cover the main components of such algorithms, e.g., surrogate regression models like Gaussian processes or random forests, the initialization phase and point acquisition. In a second part, I will cover some recent extensions with regard to parallel point acquisition, multi-criteria optimization and multi-fidelity systems for subsampled data. Most of the covered applications will use support vector machines as examples for hyperparameter optimization. The talk will finish with a brief overview of open questions and challenges.
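
The basic loop behind these components can be sketched in a few lines. The example below uses a simulated 1-D objective, a Gaussian process surrogate from scikit-learn and expected improvement as the acquisition criterion; all of these are illustrative choices, not the speaker's implementation.

    # Minimal sequential model-based optimization loop: GP surrogate + expected
    # improvement on a toy 1-D objective. Objective, budget and kernel are hypothetical.
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def objective(x):                        # expensive black box (toy stand-in)
        return np.sin(3 * x) + 0.1 * x**2

    rng = np.random.default_rng(4)
    X = rng.uniform(-3, 3, size=(5, 1))      # initial design
    y = objective(X).ravel()
    grid = np.linspace(-3, 3, 400).reshape(-1, 1)

    for _ in range(20):                      # sequential point acquisitions
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
        mu, sd = gp.predict(grid, return_std=True)
        best = y.min()
        z = (best - mu) / np.maximum(sd, 1e-9)
        ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)    # expected improvement
        x_next = grid[np.argmax(ei)].reshape(1, 1)
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next).ravel())

    x_best = X[np.argmin(y)]                 # incumbent after the budget is spent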

Stefan Weber: Pricing of Cyber Insurance Contracts in a Network Model

We develop a novel approach for pricing cyber insurance contracts. The considered cyber threats, such as viruses and worms, diffuse in a structured data network. The spread of the cyber infection is modeled by an interacting Markov chain. Conditional on the underlying infection, the occurrence and size of claims are described by a marked point process. We introduce and analyze a new polynomial approximation of claims together with a mean-field approach that allows us to compute aggregate expected losses and prices of cyber insurance. Numerical case studies demonstrate the impact of the network topology and indicate that higher-order approximations are indispensable for the analysis of non-linear claims.

This is joint work with Matthias Fahrenwaldt and Kerstin Weske.
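
A stripped-down Monte Carlo version of such a model, an SIS-type infection on a small random graph with compound claims at infected nodes, might look as follows. The network, rates and claim-size distribution are hypothetical, and no mean-field or polynomial approximation is attempted.

    # Monte Carlo sketch: infection spreading on a network (discrete-time SIS-type
    # interacting Markov chain) with claims at infected nodes. Parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(5)
    n_nodes, T, n_sim = 50, 30, 2000
    A = (rng.random((n_nodes, n_nodes)) < 0.05).astype(float)    # random network
    A = np.triu(A, 1); A = A + A.T                               # symmetric, no self-loops

    beta_inf, gamma_rec = 0.1, 0.2        # per-edge infection / recovery probabilities
    claim_rate, claim_mean = 0.3, 10.0    # claim frequency and mean size per infected node

    total_losses = np.zeros(n_sim)
    for s in range(n_sim):
        infected = np.zeros(n_nodes, dtype=bool)
        infected[rng.integers(n_nodes)] = True                   # single initial infection
        for _ in range(T):
            pressure = A @ infected                              # number of infected neighbours
            p_inf = 1 - (1 - beta_inf) ** pressure
            new_inf = (~infected) & (rng.random(n_nodes) < p_inf)
            recover = infected & (rng.random(n_nodes) < gamma_rec)
            infected = (infected | new_inf) & ~recover
            n_claims = rng.poisson(claim_rate * infected.sum())  # marked point process:
            total_losses[s] += rng.exponential(claim_mean, n_claims).sum()  # claim sizes

    expected_loss = total_losses.mean()   # basis for an actuarial premium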

Eric Finn Schaanning: Measuring systemic risk: The Indirect Contagion Index

The rapid liquidation of a portfolio can generate substantial mark-to-market losses for market participants who have overlapping portfolios with the distressed institution. In a model of deleveraging, we introduce the notion of liquidity-weighted overlaps to quantify these indirect exposures across portfolios. We apply our methodology to analyse indirect contagion in the European Banking Authority’s stress test from 2016. Key questions that we study are: Which asset classes are the most important channels for price-mediated contagion? How can we quantify the degree of interconnectedness for systemically important institutions? Given institutional portfolio holdings, are the stress scenarios that we consider the “right” ones?
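
One plausible way to compute such a quantity, sketched below, is to weight the common holdings of two institutions by the illiquidity (inverse market depth) of each asset class. The exact definition used in the paper may differ; the holdings, depths and normalization here are hypothetical.

    # Sketch of liquidity-weighted portfolio overlaps. Numbers and the exact
    # normalization are illustrative only, not the paper's definition.
    import numpy as np

    rng = np.random.default_rng(6)
    n_banks, n_assets = 10, 6
    holdings = rng.uniform(0, 100, size=(n_banks, n_assets))    # exposures by asset class
    depth = rng.uniform(50, 500, size=n_assets)                 # market depth per asset class

    # overlap[i, j] = sum_k holdings[i, k] * holdings[j, k] / depth[k]
    overlap = (holdings / depth) @ holdings.T
    np.fill_diagonal(overlap, 0.0)                              # ignore self-overlap

    # a crude interconnectedness score per institution: total weighted overlap with others
    interconnectedness = overlap.sum(axis=1)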

Daniel Rösch: Systematic Effects among LGDs and their Implications on Downturn Estimation

Banks are obliged to provide downturn estimates for losses given default (LGDs) in their internal ratings-based approach. While there seems to be a consensus that downturn conditions reflect times in which LGDs are systematically higher, it is unclear which factors may best capture these conditions. As LGDs depend on recovery payments which are collected during varying economic conditions of the resolution process, it is challenging to identify economic variables which capture the systematic impact on LGDs. The aim of this paper is to reveal the nature of systematic effects among LGDs using a Bayesian finite mixture model. Our results show that the systematic patterns of LGDs in the US and Europe strongly deviate from the economic cycle. This calls into question the use of economic variables for downturn modeling and leads to the development of a new method for generating downturn estimates. In comparison to other approaches, our proposal appears to be sufficiently conservative during downturn conditions while avoiding over-conservatism.
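
As a rough analogue of the modeling idea, the sketch below fits a Bayesian mixture to simulated period-level LGDs and lets the data pick out latent regimes that need not line up with the economic cycle. It uses scikit-learn's variational BayesianGaussianMixture rather than the paper's Bayesian finite mixture model, and all numbers are made up.

    # Sketch: fit a Bayesian mixture to (simulated) period-level LGDs to uncover
    # latent regimes. The data-generating process and settings are hypothetical.
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(7)
    n_periods = 120
    regime = rng.random(n_periods) < 0.25                       # latent "downturn" indicator
    lgd = np.where(regime,
                   rng.beta(6, 4, n_periods),                   # systematically higher LGDs
                   rng.beta(3, 7, n_periods))                   # ordinary conditions

    bgm = BayesianGaussianMixture(n_components=3, random_state=0, max_iter=500)
    bgm.fit(lgd.reshape(-1, 1))

    weights = bgm.weights_                      # small weights ~ pruned components
    regime_hat = bgm.predict(lgd.reshape(-1, 1))                # inferred regime per period
    downturn_lgd = lgd[regime_hat == regime_hat[np.argmax(lgd)]].mean()  # crude downturn estimate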