
Abstracts Research Seminar Summer Term 2018

Jin Ma: Optimal Dividend and Investment Problems under Sparre Andersen Model

This talk concerns an open problem in Actuarial Science: the optimal dividend and investment problem under the Sparre Andersen model, that is, when the claim frequency is a “renewal” process. The main feature of the problem is that the underlying reserve dynamics, even in its simplest form, is no longer Markovian. By using the backward Markovization technique we recast the problem in a Markovian framework with an added random “clock”, from which we validate the dynamic programming principle (DPP). We will then show that the corresponding (dynamic) value function is the unique constrained viscosity solution of the (non-local) HJB equation. We shall further discuss the possible optimal strategy or ε-optimal strategy by addressing the regularity of the value function.
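
For orientation, here is a minimal sketch, in notation of my own rather than the speaker's, of the Sparre Andersen surplus process with dividends and of the extra state introduced by backward Markovization (investment in a risky asset is omitted):

    \[
      X_t \;=\; x + c\,t - \sum_{i=1}^{N_t} U_i - L_t,
      \qquad
      W_t \;=\; t - T_{N_t},
    \]

Here $N_t$ is a renewal claim-arrival process with arrival times $T_1, T_2, \dots$, the $U_i$ are claim sizes and $L_t$ is the cumulative dividend paid up to time $t$. Because the inter-arrival times are not exponential, $X_t$ alone is not Markovian; appending the “clock” $W_t$, the time elapsed since the last claim, makes the pair $(X_t, W_t)$ Markovian.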

This talk is based on joint work with Lihua Bai and Xiaojing Xing.

Andrzej Ruszczyński: Risk-Averse Control of Partially Observable Markov Systems

We consider risk measurement in controlled partially observable Markov systems in discrete time. In such systems, part of the state vector is not observed, but affects the transition kernel and the costs. We introduce new concepts of risk filters and study their properties. We also introduce the concept of conditional stochastic time consistency. We derive the structure of risk filters enjoying this property and prove that they can be represented by a collection of law-invariant risk measures on the space of functions of the observable part of the state. We also derive the corresponding dynamic programming equations. Then we illustrate the results on a clinical trial problem and a machine deterioration problem. In the final part of the talk, we shall discuss risk filtering and risk-averse control of partially observable Markov jump processes in continuous time.
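
As a purely schematic illustration (the precise equations are part of the talk), dynamic programming equations for such risk-averse partially observable problems typically take a form like

    \[
      v_t(\xi_t) \;=\; \min_{a \in A}\;
      \sigma_t\Big( c_t(\cdot, a) \;+\; v_{t+1}\big(\Phi_t(\xi_t, a, \cdot)\big) \;\Big|\; \xi_t \Big),
    \]

where $\xi_t$ is a filter (belief) on the unobservable part of the state, $\Phi_t$ its update after the next observation, and $\sigma_t$ a conditional risk mapping built from a law-invariant risk measure acting on functions of the observable part of the state; the notation is mine and only meant to fix ideas.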

Cosimo-Andrea Munari: Existence, uniqueness and stability of optimal portfolios of eligible assets

In a capital adequacy framework, risk measures are used to determine the minimal amount of capital that a financial institution has to raise and invest in a portfolio of pre-specified eligible assets in order to pass a given capital adequacy test. From a capital efficiency perspective, it is important to identify the set of portfolios of eligible assets that allow the institution to pass the test by raising the least amount of capital. We study the existence and uniqueness of such optimal portfolios as well as their sensitivity to changes in the underlying capital position. This naturally leads to investigating the continuity properties of the set-valued map associating to each capital position the corresponding set of optimal portfolios. We pay special attention to lower semicontinuity, which is the key continuity property from a financial perspective. This "stability" property is always satisfied if the test is based on a polyhedral risk measure, but it generally fails once we depart from polyhedrality, even when the reference risk measure is convex. However, lower semicontinuity can often be achieved if one is willing to focus on portfolios that are close to being optimal. Besides capital adequacy, our results have a variety of natural applications to pricing, hedging, and capital allocation problems.
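
In the standard notation of this literature (a sketch of the setup, not necessarily the authors' exact formulation), the objects involved are

    \[
      \rho(X) \;=\; \inf\{\, \pi(Z) \;:\; Z \in \mathcal{M},\ X + Z \in \mathcal{A} \,\},
      \qquad
      \mathcal{E}(X) \;=\; \{\, Z \in \mathcal{M} \;:\; X + Z \in \mathcal{A},\ \pi(Z) = \rho(X) \,\},
    \]

where $X$ is the capital position, $\mathcal{M}$ the space of portfolios of eligible assets, $\pi$ the pricing functional and $\mathcal{A}$ the acceptance set defining the capital adequacy test; the stability question above concerns lower semicontinuity of the set-valued map $X \mapsto \mathcal{E}(X)$.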

This is joint work with Michel Baes and Pablo Koch-Medina.

Marica Manisera and Paola Zuccolotto: Basketball data science

The research seminar will deal with statistical analysis of basketball data, with special attention to the most recent advances showing how selected statistical methods and data mining algorithms can be applied in basketball to solve practical problems. After a brief description of the state of the art of basketball analytics, we will introduce different data sources that can be fruitfully used to perform basketball analytics. Then, in order to show basketball data science in action, we will discuss four case studies, focused on: (i) the proposal of new positions in basketball; (ii) the analysis of the scoring probability when shooting under high-pressure conditions; (iii) performance variability and teamwork assessment; (iv) sensor data analysis.

The authors are scientific coordinators of the international project BDsports (Big Data Analytics in Sports, bodai.unibs.it/bdsports), whose main aims include scientific research, education, dissemination and practical implementation of sports analytics.

Eric Eisenstat: Efficient Estimation of Structural VARMAs with Stochastic Volatility

This paper develops Markov chain Monte Carlo algorithms for structural vector autoregressive moving average (VARMA) models with fixed coefficients and time-varying error covariances, modeled as a multivariate stochastic volatility process. A particular benefit of allowing for time variation in the covariances in this setting is that it induces uniqueness in terms of fundamental and various non-fundamental VARMA representations. Hence, it resolves an important issue in applying multivariate time series models to structural macroeconomic problems. Although computation in this setting is more challenging, the conditionally Gaussian nature of the model renders efficient sampling algorithms feasible. The algorithm presented in this paper uses two innovative approaches to achieve sampling efficiency: (i) the time-varying covariances are sampled jointly using particle Gibbs with ancestor sampling, and (ii) the moving average coefficients are sampled jointly using an extension of the Whittle likelihood approximation. We provide Monte Carlo evidence that the algorithm performs well in practice. We further employ the algorithm to assess the extent to which commonly used SVAR models satisfy their underlying fundamentalness assumption and the effect that this assumption has on structural inference.
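
For concreteness, a schematic reduced form of the model class discussed (in my notation, suppressing the structural identification details) is

    \[
      y_t \;=\; \sum_{i=1}^{p} A_i\, y_{t-i} \;+\; \varepsilon_t \;+\; \sum_{j=1}^{q} M_j\, \varepsilon_{t-j},
      \qquad
      \varepsilon_t \mid \Sigma_t \sim N(0, \Sigma_t),
    \]

where the coefficient matrices $A_i$ and $M_j$ are constant over time while the error covariance $\Sigma_t$ follows a multivariate stochastic volatility process, for instance with random-walk log-volatilities.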

Nicole Bäuerle: Optimal Control of Partially Observable Piecewise Deterministic Markov Processes

In this talk we consider a control problem for a Partially Observable Piecewise Deterministic Markov Process of the following type: After the jump of the process the controller receives a noisy signal about the state and the aim is to control the process continuously in time in such a way that the expected discounted cost of the system is minimized. We solve this optimization problem by reducing it to a discrete-time Markov Decision Process. This includes the derivation of a filter for the unobservable state. Imposing sufficient continuity and compactness assumptions we are able to prove the existence of optimal policies and show that the value function satisfies a fixed point equation. A generic application is given to illustrate the results.
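
Ignoring the continuous-time control between jumps and the precise form of the filter, the fixed point equation mentioned here is of the familiar Bellman type on filter states (schematic only, in my notation):

    \[
      v(\mu) \;=\; \inf_{a}\ \Big\{\, c(\mu, a) \;+\; \beta \int v\big(\Phi(\mu, a, y)\big)\, Q(dy \mid \mu, a) \Big\},
    \]

where $\mu$ is the conditional distribution of the unobservable state, $y$ the noisy post-jump signal, $\Phi$ the Bayesian filter update, $Q$ the signal kernel and $\beta \in (0,1)$ a discount factor.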

The talk is based on a joint paper with Dirk Lange.

Ioannis Kosmidis: Location-adjusted Wald statistics

Inference on a scalar parameter of interest is commonly constructed using a Wald statistic, on the grounds of the validity of the standard normal approximation to its finite-sample distribution and computational convenience. Prominent examples are the individual Wald tests for regression parameters that are reported by default in regression output in the majority of statistical computing environments. The normal approximation can, though, be inadequate, especially when the sample size is small or moderate relative to the number of parameters. In this talk, the Wald statistic is viewed as an estimate of a transformation of the model parameters and is appropriately adjusted so that its null expectation is asymptotically closer to zero. The bias adjustment depends on the expected information matrix, the first-order term in the bias expansion of the maximum likelihood estimator, and the derivatives of the transformation, all of which are either readily available or easily obtainable in standard software for a wealth of well-used models. The finite-sample performance of the location-adjusted Wald statistic is examined analytically in simple models and via simulation in a series of more realistic modelling frameworks, including generalized linear models, meta-regression and beta regression. The location-adjusted Wald statistic is found to deliver significant improvements in inferential performance over the standard Wald statistic, without sacrificing any of its computational simplicity.
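
In symbols, and only as a sketch of the construction in my own notation: the usual Wald statistic for the $j$th parameter,

    \[
      t \;=\; \frac{\hat\theta_j - \theta_{j0}}{\hat\kappa_j(\hat\theta)},
    \]

is viewed as an estimate of the transformation $g(\theta) = (\theta_j - \theta_{j0})/\kappa_j(\theta)$, and the location-adjusted statistic subtracts an estimate of its first-order bias,

    \[
      t^{*} \;=\; t \;-\; \Big\{ \nabla g(\hat\theta)^{\top} b(\hat\theta)
      \;+\; \tfrac{1}{2}\,\mathrm{tr}\big[\, i(\hat\theta)^{-1} \nabla^{2} g(\hat\theta) \big] \Big\},
    \]

where $b(\theta)$ is the first-order term in the bias expansion of the maximum likelihood estimator and $i(\theta)$ the expected information matrix, matching the ingredients listed above.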

Kemal Dinçer Dingeç: Evaluating CDF and PDF of the Sum of Lognormals by Monte Carlo Simulation

Evaluating the cumulative distribution function (CDF) and probability density function (PDF) of the sum of lognormal random variates by Monte Carlo simulation is a topic discussed in several recent papers. Our experiments show that, in particular for variances smaller than one, conditional Monte Carlo (CMC) in a well-chosen main direction already leads to a quite simple algorithm with large variance reduction.
For the general case, the implementation of the CMC algorithm requires numeric root finding, which can be implemented robustly using upper and lower bounds for the root. Adding importance sampling (IS) to the CMC algorithm can lead to large additional variance reduction. For the special case of independent and identically distributed (IID) lognormal random variates the root is obtained in closed form. Importantly, for this case the optimal importance sampling density is very close to the product of its conditional densities, so the product of the approximate one-dimensional conditional densities is used as the multivariate IS density.
Applying different approximation methods for the one-dimensional conditional densities, it is possible to obtain a significant additional variance reduction over the pure CMC algorithm by means of importance sampling. When the density of the lognormal sum is also required, it is important that an approximating function with a continuous first derivative is available.
In this talk, the variance reduction factors obtained with the different approximation methods and the necessary setup times for the random variate generation algorithm are compared. The influence of the selected main direction is also analyzed.
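
As a point of reference, here is a minimal sketch in R of the conditioning idea for the CDF of a lognormal sum. It conditions on all summands but one, which is simpler than, and not equivalent to, the main-direction CMC discussed in the talk; the function name and parameter values are purely illustrative.

    ## Minimal conditional Monte Carlo (CMC) sketch for P(S <= x),
    ## where S is the sum of n i.i.d. lognormal(mu, sigma^2) variates.
    ## Conditioning on all summands but the first replaces the indicator
    ## 1{S <= x} by a normal CDF, which removes much of its variance.
    cmc_lognormal_cdf <- function(x, n = 10, mu = 0, sigma = 0.5, reps = 1e5) {
      z_rest <- matrix(rnorm((n - 1) * reps, mean = mu, sd = sigma), nrow = reps)
      s_rest <- rowSums(exp(z_rest))   # sum of the remaining n - 1 lognormals
      gap    <- x - s_rest             # budget left for the first summand
      est    <- numeric(reps)
      pos    <- gap > 0
      est[pos] <- pnorm((log(gap[pos]) - mu) / sigma)
      c(estimate = mean(est), std_error = sd(est) / sqrt(reps))
    }

    ## Example: CDF of the sum of 10 lognormals evaluated at x = 15
    cmc_lognormal_cdf(x = 15)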

Wayne Oldford: Exploratory visualization of higher dimensional data

Visualization is an important asset to data analysis, both in communicating results and in explicating the analysis narrative which led to them. However, it is sometimes at its most powerful when used prior to commitment to any analysis narrative, simply to explore the data with minimal prejudice. This is exploratory visualization, and its goal is to reveal structure in the data, especially unanticipated structure. Insights gained from exploratory visualization can inform and possibly significantly affect any subsequent analysis narrative.
The size of modern data, in dimensionality and in numbers of observations, poses a formidable challenge for exploratory visualization. First, dimensionality is limited to at most three physical dimensions both by the human visual system and by modern display technology. Second, the number of observations that can be individually displayed on any device is constrained by the magnitude and resolution of its display screen. The challenge is to develop methods and tools that enable exploratory visualization of modern data in the face of such constraints.
Some methods and software which we have designed to address this challenge will be presented in this talk (based on joint work with Adrian Waddell, Adam Rahman, Marius Hofert, or Catherine Hurley). Most of the talk will focus on the problem of exploring higher dimensional spaces, largely through defining, following, and presenting “interesting” low dimensional trajectories through high dimensional space. Both spatial and temporal strategies will be used to allow visual traversal of the trajectories. Software which facilitates exploration via these trajectories will be demonstrated (based mainly on the interactive and extendible exploratory visualization system called ‘loon’, and on ‘zenplots’, each of which is available as an ‘R’ package from CRAN). If time permits, our methodology (and software) for reducing the number of observations (without compromising too much either the empirical distribution or important geometric features of the high dimensional point cloud) will also be presented.
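
As a small illustration of viewing a higher dimensional point cloud through a sequence of 2D panels, the sketch below uses the ‘zenplots’ package mentioned above; I am assuming its main function zenplot() accepts a numeric matrix, so the exact call should be checked against the package documentation.

    ## Zigzag layout of consecutive 2D scatterplots along a path through the
    ## variables of a (modest) 4-dimensional dataset.
    ## install.packages("zenplots")   # from CRAN, if not already installed
    library(zenplots)

    x <- as.matrix(iris[, 1:4])   # a simple 4-dimensional point cloud
    zenplot(x)                    # consecutive variable pairs, zigzag layout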

Peter Filzmoser: Robust and sparse estimation methods for linear and logistic regression in high dimensions

The elastic net estimator has been introduced for different models, such as linear and logistic regression. We propose a robust version of this estimator based on trimming. It is shown how outlier-free data subsets can be identified and how appropriate tuning parameters for the elastic net penalties can be selected. A final reweighting step is proposed which improves the statistical efficiency of the estimators. Simulations and data examples underline the good performance of the newly proposed method, which is available in the R package enetLTS on CRAN.
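
To make the trimming idea concrete, here is a rough R sketch of a C-step-style loop around an elastic net fit for the linear case. It is only an illustration of the principle, not the enetLTS algorithm itself, and it uses glmnet as a stand-in fitting routine with fixed, arbitrary tuning parameters.

    ## Illustrative trimmed elastic net for linear regression:
    ## alternate between fitting on a "clean" subset and re-selecting the
    ## h observations with the smallest absolute residuals.
    library(glmnet)

    trimmed_enet <- function(x, y, h = floor(0.75 * nrow(x)),
                             alpha = 0.5, lambda = 0.1, n_steps = 20) {
      idx <- sample(nrow(x), h)                  # crude random initial subset
      for (k in seq_len(n_steps)) {
        fit     <- glmnet(x[idx, ], y[idx], alpha = alpha, lambda = lambda)
        resid   <- as.numeric(y - predict(fit, newx = x, s = lambda))
        idx_new <- order(abs(resid))[seq_len(h)] # keep the h best-fitting points
        if (setequal(idx_new, idx)) break        # subset stabilised: stop
        idx <- idx_new
      }
      fit
    }

A real robust procedure would add multiple starts, data-driven selection of alpha and lambda on the clean subset, and the reweighting step mentioned in the abstract.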

John M. Maheu: Nonparametric Dynamic Conditional Beta

This paper derives a dynamic conditional beta representation using a Bayesian semiparametric multivariate GARCH model. The conditional joint distribution of excess stock returns and market excess returns is modeled as a countably infinite mixture of normals. This allows for deviations from the elliptic family of distributions. Empirically, we find that the time-varying beta of a stock depends nonlinearly on the contemporaneous value of excess market returns. In highly volatile markets, beta is almost constant, while in stable markets, the beta coefficient can depend asymmetrically on the market excess return.
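
For reference, the usual dynamic conditional beta is, schematically,

    \[
      \beta_t \;=\; \frac{\mathrm{Cov}_{t-1}(r_t,\, r_{m,t})}{\mathrm{Var}_{t-1}(r_{m,t})},
    \]

with $r_t$ the stock's excess return and $r_{m,t}$ the market excess return. Outside the elliptic family the conditional expectation $E_{t-1}[r_t \mid r_{m,t}]$ need not be linear in $r_{m,t}$, which is how the beta implied by the mixture model can depend on the contemporaneous market return.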