Paper: Bayesian Model Selection and Extrasolar Planet Detection
Volume: 371, Statistical Challenges in Modern Astronomy IV
Page: 189
Authors: Ford, E.B.; Gregory, P.C.
Abstract: The discovery of nearly 200 extrasolar planets during the last decade has revitalized scientific interest in the physics of planet formation and ushered in a new era for astronomy. Astronomers searching for the small signals induced by planets inevitably face significant statistical challenges. For example, radial velocity (RV) planet searches (which have discovered most of the known planets) are increasingly finding planets with small velocity amplitudes, with long orbital periods, or in multiple-planet systems. Bayesian inference has the potential to improve the interpretation of existing observations, the planning of future observations, and ultimately inferences concerning the overall population of planets. The main obstacle to applying Bayesian inference to extrasolar planet searches is the need to develop computationally efficient algorithms for calculating integrals over high-dimensional parameter spaces. In recent years, the refinement of Markov chain Monte Carlo (MCMC) algorithms has made it practical to accurately characterize orbital parameters and their uncertainties from RV observations of single-planet and weakly interacting multiple-planet systems.
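
As a rough illustration of the kind of MCMC parameter estimation described above (a minimal sketch, not the authors' code or data), the following Python example fits a circular single-planet RV model to synthetic observations with a random-walk Metropolis-Hastings sampler. The model form, priors, proposal step sizes, and data values are all assumptions made for this example.

```python
# Sketch: Metropolis-Hastings sampling of a circular single-planet RV model,
#   v(t) = K * sin(2*pi*t/P + phi) + gamma,
# fit to synthetic data (illustrative values only, not HD 88133).
import numpy as np

rng = np.random.default_rng(0)

# --- synthetic RV data ---
t = np.sort(rng.uniform(0.0, 100.0, 40))          # observation epochs [days]
true = dict(P=3.4, K=35.0, phi=1.0, gamma=5.0)    # "true" parameters for the simulation
sigma = 4.0                                        # per-point RV uncertainty [m/s]
v_obs = (true["K"] * np.sin(2 * np.pi * t / true["P"] + true["phi"])
         + true["gamma"] + rng.normal(0.0, sigma, t.size))

def rv_model(theta, t):
    P, K, phi, gamma = theta
    return K * np.sin(2 * np.pi * t / P + phi) + gamma

def log_posterior(theta):
    P, K, phi, gamma = theta
    # crude flat priors within broad bounds (an assumption for this sketch)
    if not (0.5 < P < 1000.0 and 0.0 < K < 500.0
            and 0.0 <= phi < 2 * np.pi and -200.0 < gamma < 200.0):
        return -np.inf
    resid = v_obs - rv_model(theta, t)
    return -0.5 * np.sum((resid / sigma) ** 2)     # Gaussian likelihood, flat prior

# --- random-walk Metropolis-Hastings ---
# In practice the period would be initialized from a periodogram search;
# here we simply start near the true value.
theta = np.array([3.41, 30.0, 0.9, 0.0])
step = np.array([1e-3, 1.0, 0.05, 0.5])            # hand-tuned proposal scales
logp = log_posterior(theta)
chain = []
for _ in range(20000):
    prop = theta + step * rng.normal(size=4)
    logp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < logp_prop - logp:   # accept/reject
        theta, logp = prop, logp_prop
    chain.append(theta.copy())
chain = np.array(chain)[5000:]                     # discard burn-in

print("posterior medians:", np.median(chain, axis=0))
print("posterior std dev:", np.std(chain, axis=0))
```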

Unfortunately, MCMC is not sufficient for Bayesian model selection, i.e., comparing the marginal posterior probability of models with different numbers of parameters, as is necessary to determine how strongly the observational data favor a model with n+1 planets over a model with just n planets. Many of the obvious estimators for the marginal posterior probability suffer from poor convergence properties. We compare several estimators of the marginal likelihood and feature those that display desirable convergence properties based on the analysis of a sample data set for HD 88133 b (Fischer et al. 2005). We find that methods based on importance sampling are most efficient, provided that a good analytic approximation of the posterior probability distribution is available. We present a simple algorithm for using a sample from the posterior to construct a mixture distribution that approximates the posterior and can be used for importance sampling and Bayesian model selection. We conclude with some suggestions for the development and refinement of computationally efficient and robust estimators of marginal posterior probabilities.
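
The importance-sampling idea summarized above can be illustrated roughly as follows (a hypothetical sketch under stated assumptions, not the authors' published algorithm): posterior samples supply the centers of an equal-weight Gaussian mixture g, draws are taken from g, and the marginal likelihood is estimated as the average of p(d|θ)p(θ)/g(θ). The component count, covariance shrinkage factor, and function names below are illustrative assumptions; log_like and log_prior are user-supplied callables.

```python
# Sketch: importance-sampling estimate of the marginal likelihood
#   Z = ∫ p(d|θ) p(θ) dθ  ≈  (1/N) Σ_i p(d|θ_i) p(θ_i) / g(θ_i),  θ_i ~ g,
# where g is a mixture of multivariate normals centered on posterior samples.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)

def build_mixture(posterior_samples, n_components=50, shrink=0.5):
    """Place Gaussian kernels on randomly chosen posterior samples.

    The shared covariance is a shrunken sample covariance; the component
    count and shrinkage factor are ad hoc choices for this sketch.
    """
    k = min(n_components, len(posterior_samples))
    idx = rng.choice(len(posterior_samples), size=k, replace=False)
    centers = posterior_samples[idx]
    cov = shrink * np.cov(posterior_samples, rowvar=False)
    return centers, cov

def mixture_logpdf(x, centers, cov):
    # log g(x) for an equal-weight mixture of normals
    logs = np.array([multivariate_normal.logpdf(x, mean=c, cov=cov) for c in centers])
    return np.logaddexp.reduce(logs) - np.log(len(centers))

def sample_mixture(n, centers, cov):
    comp = rng.integers(len(centers), size=n)
    return np.array([rng.multivariate_normal(centers[k], cov) for k in comp])

def log_marginal_likelihood(log_like, log_prior, posterior_samples, n_draws=5000):
    centers, cov = build_mixture(posterior_samples)
    draws = sample_mixture(n_draws, centers, cov)
    log_w = np.array([log_like(x) + log_prior(x) - mixture_logpdf(x, centers, cov)
                      for x in draws])
    # log of the mean importance weight, computed stably
    return np.logaddexp.reduce(log_w) - np.log(n_draws)

# Usage (given a posterior sample array `samples` and callables log_like, log_prior):
#   logZ = log_marginal_likelihood(log_like, log_prior, samples)
# Differencing logZ for an n-planet and an (n+1)-planet model gives the log Bayes factor.
```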
