I am a research assistant at ETH Zurich and interested in applying your method from “ODE parameter inference using adaptive gradient matching with Gaussian processes”. I wanted to ask whether you would be willing to share your code for that project as well?

Thanks a lot for your help in advance and all the best,

Stefan Bauer

Without any prior knowledge, you have the option of dimensionality reduction via some criterion (e.g. explained variance in the case of PCA), or clustering of variables (which is essentially another form of dimensionality reduction, but can be more interpretable). You might also filter out a number of your genes by variance, if you think low- or high-variance genes are less important.
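As a minimal sketch of the variance-filtering idea (with a hypothetical toy expression matrix, not real data), one could keep only the genes whose variance across samples exceeds a chosen threshold:

```python
# Variance-based gene filtering on a toy expression matrix.
# Rows = genes, columns = samples; the threshold is arbitrary here
# and would need to be chosen for the data set at hand.
from statistics import pvariance

expression = {
    "geneA": [0.1, 0.1, 0.1, 0.1],   # almost no variation
    "geneB": [1.0, 3.0, 2.0, 5.0],   # varies strongly across samples
    "geneC": [2.0, 2.1, 1.9, 2.0],   # low variation
}

threshold = 0.5
# Keep genes whose population variance across samples exceeds the threshold
kept = {gene: vals for gene, vals in expression.items()
        if pvariance(vals) > threshold}
print(sorted(kept))  # only the high-variance gene survives
```

The same pattern works with a PCA-based criterion instead, ranking components by explained variance rather than ranking genes by raw variance.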

I’ve already replied to your comment via email, but I reproduce my reply here in the hope that it will be useful for others.

On page 217, equation (10), you write: \(p(\dot{X}, X, \Phi \mid \theta, \gamma) \propto \prod_k p(\dot{x}_k \mid x_k, \Phi)\, p(\dot{x}_k \mid X, \theta, \gamma)\).

How have you derived the second line?

This is not so much a derivation and more of a definition. This equation is originally from Calderhead et al., and they take \(P(\dot{X},X,\Phi|…) = P(\dot{X}|X,\Phi,…)P(X)P(\Phi)\), where they assume that the latter two terms are constant (in their model) and can be dropped, leaving only \(P(\dot{X}|X,\Phi,…) = \prod_k P(\dot{x}_k|X,\Phi,…)\). They use a product-of-experts approach to combine equations (7) and (9), so \(P(\dot{x}_k|X,…)\) is replaced by \(P(\dot{x}_k|x_k,\Phi)P(\dot{x}_k|X,\theta,\gamma)\).
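Written out as a chain of steps (as I read Calderhead et al., using the same ellipsis shorthand for the remaining conditioning variables):

\[
\begin{aligned}
P(\dot{X}, X, \Phi \mid …) &= P(\dot{X} \mid X, \Phi, …)\, P(X)\, P(\Phi) \\
&\propto \prod_k P(\dot{x}_k \mid X, \Phi, …) \\
&\propto \prod_k P(\dot{x}_k \mid x_k, \Phi)\, P(\dot{x}_k \mid X, \theta, \gamma),
\end{aligned}
\]

where the second line drops \(P(X)\) and \(P(\Phi)\) as constants, and the third applies the product-of-experts substitution.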

2- According to my understanding, we are looking for parameter estimation, i.e. the probability \(P(\theta \mid Y)\), right? If this is correct, why are you looking at the joint probability distribution in equation (21)? In other words, what is the purpose of finding (21)?

We can’t obtain an expression for \(P(\theta|Y)\) (which is not a likelihood, but a posterior) directly, because that would require analytically integrating out the other parameters. Instead, we use MCMC to integrate these out by sampling from the joint distribution and then looking at the empirical distribution of our samples of \(\theta\). Note that \(P(\theta|Y) = P(\theta,Y)/P(Y)\), and we assume that \(P(Y)\) is constant, so this is equivalent to sampling from the posterior.

3- In the first part of your article you give a reference, Calderhead 2008. I read this article and even some of its references, like Rasmussen, C. E. and Williams, C. K. I. (2006), but I had difficulties understanding it. Do you have any other material on it?

Sorry, I don’t think there has been a more comprehensive review of it yet. ODE parameter inference using GPs is a relatively new area of research.

**Edited to add**: I have now come across a good review paper: MacDonald and Husmeier (2015).

4- May I have your MATLAB m-files, so I can follow your code and algorithm step by step?

It’s R code actually; I have attached the scripts. Unfortunately they are not very well documented, but I hope they will be helpful.

**Edited to add**: I hope to make a more robust version of this code available via my website at some point, but if anyone else would like the research code in the meantime, please contact me.

Hello,

My name is Sam Totur, and I am a master’s student in Physics.

I saw one of your articles, titled “ODE parameter inference using adaptive gradient matching with Gaussian processes”.

It was amazing to me and I want to work in this area. I have read it, but I had some difficulties grasping its concepts. The parts that confused me are:

1- On page 217, equation (10), you write: \(p(\dot{X}, X, \Phi \mid \theta, \gamma) \propto \prod_k p(\dot{x}_k \mid x_k, \Phi)\, p(\dot{x}_k \mid X, \theta, \gamma)\).

How have you derived the second line?

2- According to my understanding, we are looking for parameter estimation, i.e. the probability \(P(\theta \mid Y)\), right? If this is correct, why are you looking at the joint probability distribution in equation (21)? In other words, what is the purpose of finding (21)?

3- In the first part of your article you give a reference, Calderhead 2008. I read this article and even some of its references, like Rasmussen, C. E. and Williams, C. K. I. (2006), but I had difficulties understanding it. Do you have any other material on it?

4- May I have your MATLAB m-files, so I can follow your code and algorithm step by step?

Thank you; I eagerly look forward to your reply.

Best regards
