# Seat of Learning

### New job, new place, new blog

After a short but enjoyable time in the Netherlands, I am moving to rainier pastures. I will henceforth be based in Cambridge (the UK one, not the Massachusetts one), working as a postdoc at the MRC Biostatistics Unit. So what better time to start a new blog?

### New Challenges

With the new job will come new challenges: meeting new collaborators, learning about existing research at the BSU (which, I am assured, houses many researchers with much stronger statistical credentials than I can claim), and adapting to a new work environment. I will document my progress here, as far as confidentiality and professionalism allow, as I (hopefully) get to grips with my new position. I will also post the occasional personal update about life in Cambridge, and probably complain about the weather a lot. Buckle up, we should be in for a fun ride!

**Mandatory disclaimer**: This blog will represent my personal views, and not those of the BSU or the MRC.

### New Digs

But first things first, somewhere to live. Having lived in the UK before, I’m no stranger to renting (or letting, for you British people) a flat. Yet the combination of high housing prices in and around Cambridge, and demanding letting agencies means that flat hunting has been less than pleasant. Let us hope that it will soon be over. Meanwhile, I’m eternally grateful for the invention of AirBnB, which has allowed me to find cheap and lovely lodgings.

### New Fun

I’ve only been in Cambridge for a week, but I’m already falling in love with it. Sure, there is the occasional pretentious student, there are the posh upper-middle-class shops, and the rough outskirts. But there are also lovely locals, great little shops, and hundreds of new restaurants and cafes for me to try.

### 4 Comments


Dear Dr. Dondelinger,

Hello to you

My name is Sam Totur; I am a master’s student in Physics.

I saw one of your articles, titled “ODE parameter inference using adaptive gradient matching with Gaussian processes”.

It was amazing to me, and I want to work in this area. I read it, but I had some difficulties grasping its concepts. The parts that confused me are:

1- On page 217, equation (10), you state \(p(\dot{X}, X, \Phi \mid \theta, \gamma) = p(\dot{x}_k \mid x_k, \Phi)\, p(\dot{x}_k \mid X, \theta, \gamma)\).


How did you derive the second line?

2- According to my understanding, we are looking for parameter estimation, i.e. the likelihood probability \(P(\theta \mid Y)\), right? If this is correct, why do you look for the joint probability distribution in equation (21)? In other words, what is the purpose of finding (21)?

3- In the first part of your article you give a reference, Calderhead et al. (2008). I read this article and even some of its references, like Rasmussen and Williams (2006), but I had difficulties understanding it. Do you have other material on this?

4- May I have your MATLAB m-files, so I can see your code and algorithm step by step?

Thank you; I eagerly look forward to your reply.

Best regards

Dear Sam,

I’ve already replied to your comment via email, but I reproduce my reply here in the hope that it will be useful for others.

This is not so much a derivation and more of a definition. This equation originally comes from Calderhead et al., who take \(P(\dot{X},X,\Phi|…) = P(\dot{X}|X,\Phi,…)P(X)P(\Phi)\), where they assume that the latter two terms are constant (in their model) and can be dropped, leaving only \(P(\dot{X}|X,\Phi,…) = \prod_k P(\dot{x}_k|X,\Phi,…)\). They use a product-of-experts approach to combine equations (7) and (9), so \(P(\dot{x}_k|X,…)\) is replaced by \(P(\dot{x}_k|x_k,\Phi)P(\dot{x}_k|X,\theta,\gamma)\).
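Spelling the product-of-experts combination out in display form (using the same symbols as above):

\[
P(\dot{X}, X, \Phi \mid \theta, \gamma) \;\propto\; \prod_k P(\dot{x}_k \mid x_k, \Phi)\, P(\dot{x}_k \mid X, \theta, \gamma),
\]

where the first factor comes from the GP model (equation (7)) and the second from the ODE-based gradients (equation (9)).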

We can’t obtain an expression for \(P(\theta|Y)\) (which is not a likelihood, but a posterior) directly, because that would require analytically integrating out the other parameters. Instead, we use MCMC to integrate these out by sampling from the joint distribution and then looking at the empirical distribution of our samples of theta. Note that \(P(\theta|Y) = P(\theta,Y)/P(Y)\), and we assume that \(P(Y)\) is constant, so this is equivalent to sampling from the posterior.
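To make the marginalisation idea concrete, here is a minimal sketch on a toy (non-ODE) model of my own invention, standing in for the GP/ODE terms: a simple Gaussian hierarchy \(\theta \sim N(0,1)\), \(x \mid \theta \sim N(\theta,1)\), \(y \mid x \sim N(x,1)\). We run random-walk Metropolis-Hastings on the joint \(P(\theta, x \mid y)\) and then simply keep the \(\theta\)-samples, which gives us draws from the marginal posterior \(P(\theta \mid y)\) without any analytic integration.

```python
import numpy as np

# Toy model (illustrative only, not the model from the paper):
# theta ~ N(0,1), latent x | theta ~ N(theta,1), data y | x ~ N(x,1).
# We sample the JOINT P(theta, x | y) and read off the marginal
# posterior of theta from the theta-samples alone.

def log_joint(theta, x, y):
    # log P(theta) + log P(x | theta) + log P(y | x), up to a constant
    return -0.5 * theta**2 - 0.5 * (x - theta)**2 - 0.5 * (y - x)**2

def sample_joint(y, n_iter=50000, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    theta, x = 0.0, 0.0
    thetas = np.empty(n_iter)
    for i in range(n_iter):
        # Propose a joint move and accept/reject with Metropolis-Hastings
        theta_prop = theta + step * rng.normal()
        x_prop = x + step * rng.normal()
        if np.log(rng.uniform()) < log_joint(theta_prop, x_prop, y) - log_joint(theta, x, y):
            theta, x = theta_prop, x_prop
        thetas[i] = theta  # keep only theta: this is the marginalisation step
    return thetas

y_obs = 1.5
thetas = sample_joint(y_obs)
# For this Gaussian toy model the exact posterior is theta | y ~ N(y/3, 2/3),
# so the post-burn-in sample mean should land close to 0.5.
print(thetas[10000:].mean())
```

The same logic carries over to the paper's setting: the joint there includes \(X\), \(\Phi\) and \(\gamma\) as well, but discarding everything except the \(\theta\)-samples still yields the marginal posterior over \(\theta\).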

Sorry, I don’t think there has been a more comprehensive review of it yet. ODE parameter inference using GPs is a relatively new area of research.

Edited to add: I have now come across a good review paper: MacDonald and Husmeier (2015).

It’s R code actually; I have attached the scripts. Unfortunately they are not very well documented, but I hope they will be helpful.

Edited to add: I hope to make a more robust version of this code available via my website at some point, but if anyone else would like the research code in the meantime, please contact me.

Dear Dr. Dondelinger, I’m an Associate Editor of a scientific journal, and I’d like to invite you to review a paper (following a suggestion by Dirk Husmeier, who indicated you as an expert). Could I get an address through which to explain/send you the invitation?

Happy to have a look at an abstract, Fabio. My email (which I should really put on my website somewhere) is f.dondelinger@lancaster.ac.uk.