
Bayesian Biostatistics.

By:
Contributor(s):
Series: Statistics in Practice Series
Publisher: Somerset : John Wiley & Sons, Incorporated, 2012
Copyright date: ©2012
Edition: 1st ed.
Description: 1 online resource (539 pages)
Content type:
  • text
Media type:
  • computer
Carrier type:
  • online resource
ISBN:
  • 9781119942405
Subject(s):
Genre/Form:
Additional physical formats: Print version: Bayesian Biostatistics
DDC classification:
  • 570.1/5195
LOC classification:
  • QH323.5 .L45 2012eb
Online resources:
Contents:
Cover -- Title Page -- Copyright -- Contents -- Preface -- Notation, terminology and some guidance for reading the book -- Part I Basic Concepts in Bayesian Methods -- Chapter 1 Modes of statistical inference -- 1.1 The frequentist approach: A critical reflection -- 1.1.1 The classical statistical approach -- 1.1.2 The P-value as a measure of evidence -- 1.1.3 The confidence interval as a measure of evidence -- 1.1.4 An historical note on the two frequentist paradigms -- 1.2 Statistical inference based on the likelihood function -- 1.2.1 The likelihood function -- 1.2.2 The likelihood principles -- 1.3 The Bayesian approach: Some basic ideas -- 1.3.1 Introduction -- 1.3.2 Bayes theorem - discrete version for simple events -- 1.4 Outlook -- Exercises -- Chapter 2 Bayes theorem: Computing the posterior distribution -- 2.1 Introduction -- 2.2 Bayes theorem - the binary version -- 2.3 Probability in a Bayesian context -- 2.4 Bayes theorem - the categorical version -- 2.5 Bayes theorem - the continuous version -- 2.6 The binomial case -- 2.7 The Gaussian case -- 2.8 The Poisson case -- 2.9 The prior and posterior distribution of h(θ) -- 2.10 Bayesian versus likelihood approach -- 2.11 Bayesian versus frequentist approach -- 2.12 The different modes of the Bayesian approach -- 2.13 An historical note on the Bayesian approach -- 2.14 Closing remarks -- Exercises -- Chapter 3 Introduction to Bayesian inference -- 3.1 Introduction -- 3.2 Summarizing the posterior by probabilities -- 3.3 Posterior summary measures -- 3.3.1 Characterizing the location and variability of the posterior distribution -- 3.3.2 Posterior interval estimation -- 3.4 Predictive distributions -- 3.4.1 The frequentist approach to prediction -- 3.4.2 The Bayesian approach to prediction -- 3.4.3 Applications -- 3.5 Exchangeability -- 3.6 A normal approximation to the posterior.
3.6.1 A Bayesian analysis based on a normal approximation to the likelihood -- 3.6.2 Asymptotic properties of the posterior distribution -- 3.7 Numerical techniques to determine the posterior -- 3.7.1 Numerical integration -- 3.7.2 Sampling from the posterior -- 3.7.3 Choice of posterior summary measures -- 3.8 Bayesian hypothesis testing -- 3.8.1 Inference based on credible intervals -- 3.8.2 The Bayes factor -- 3.8.3 Bayesian versus frequentist hypothesis testing -- 3.9 Closing remarks -- Exercises -- Chapter 4 More than one parameter -- 4.1 Introduction -- 4.2 Joint versus marginal posterior inference -- 4.3 The normal distribution with μ and σ² unknown -- 4.3.1 No prior knowledge on μ and σ² is available -- 4.3.2 An historical study is available -- 4.3.3 Expert knowledge is available -- 4.4 Multivariate distributions -- 4.4.1 The multivariate normal and related distributions -- 4.4.2 The multinomial distribution -- 4.5 Frequentist properties of Bayesian inference -- 4.6 Sampling from the posterior distribution: The Method of Composition -- 4.7 Bayesian linear regression models -- 4.7.1 The frequentist approach to linear regression -- 4.7.2 A noninformative Bayesian linear regression model -- 4.7.3 Posterior summary measures for the linear regression model -- 4.7.4 Sampling from the posterior distribution -- 4.7.5 An informative Bayesian linear regression model -- 4.8 Bayesian generalized linear models -- 4.9 More complex regression models -- 4.10 Closing remarks -- Exercises -- Chapter 5 Choosing the prior distribution -- 5.1 Introduction -- 5.2 The sequential use of Bayes theorem -- 5.3 Conjugate prior distributions -- 5.3.1 Univariate data distributions -- 5.3.2 Normal distribution - mean and variance unknown -- 5.3.3 Multivariate data distributions -- 5.3.4 Conditional conjugate and semiconjugate distributions -- 5.3.5 Hyperpriors.
5.4 Noninformative prior distributions -- 5.4.1 Introduction -- 5.4.2 Expressing ignorance -- 5.4.3 General principles to choose noninformative priors -- 5.4.4 Improper prior distributions -- 5.4.5 Weak/vague priors -- 5.5 Informative prior distributions -- 5.5.1 Introduction -- 5.5.2 Data-based prior distributions -- 5.5.3 Elicitation of prior knowledge -- 5.5.4 Archetypal prior distributions -- 5.6 Prior distributions for regression models -- 5.6.1 Normal linear regression -- 5.6.2 Generalized linear models -- 5.6.3 Specification of priors in Bayesian software -- 5.7 Modeling priors -- 5.8 Other regression models -- 5.9 Closing remarks -- Exercises -- Chapter 6 Markov chain Monte Carlo sampling -- 6.1 Introduction -- 6.2 The Gibbs sampler -- 6.2.1 The bivariate Gibbs sampler -- 6.2.2 The general Gibbs sampler -- 6.2.3 Remarks -- 6.2.4 Review of Gibbs sampling approaches -- 6.2.5 The Slice sampler -- 6.3 The Metropolis(-Hastings) algorithm -- 6.3.1 The Metropolis algorithm -- 6.3.2 The Metropolis-Hastings algorithm -- 6.3.3 Remarks -- 6.3.4 Review of Metropolis(-Hastings) approaches -- 6.4 Justification of the MCMC approaches -- 6.4.1 Properties of the MH algorithm -- 6.4.2 Properties of the Gibbs sampler -- 6.5 Choice of the sampler -- 6.6 The Reversible Jump MCMC algorithm -- 6.7 Closing remarks -- Exercises -- Chapter 7 Assessing and improving convergence of the Markov chain -- 7.1 Introduction -- 7.2 Assessing convergence of a Markov chain -- 7.2.1 Definition of convergence for a Markov chain -- 7.2.2 Checking convergence of the Markov chain -- 7.2.3 Graphical approaches to assess convergence -- 7.2.4 Formal diagnostic tests -- 7.2.5 Computing the Monte Carlo standard error -- 7.2.6 Practical experience with the formal diagnostic procedures -- 7.3 Accelerating convergence -- 7.3.1 Introduction -- 7.3.2 Acceleration techniques.
7.4 Practical guidelines for assessing and accelerating convergence -- 7.5 Data augmentation -- 7.6 Closing remarks -- Exercises -- Chapter 8 Software -- 8.1 WinBUGS and related software -- 8.1.1 A first analysis -- 8.1.2 Information on samplers -- 8.1.3 Assessing and accelerating convergence -- 8.1.4 Vector and matrix manipulations -- 8.1.5 Working in batch mode -- 8.1.6 Troubleshooting -- 8.1.7 Directed acyclic graphs -- 8.1.8 Add-on modules: GeoBUGS and PKBUGS -- 8.1.9 Related software -- 8.2 Bayesian analysis using SAS -- 8.2.1 Analysis using procedure GENMOD -- 8.2.2 Analysis using procedure MCMC -- 8.2.3 Other Bayesian programs -- 8.3 Additional Bayesian software and comparisons -- 8.3.1 Additional Bayesian software -- 8.3.2 Comparison of Bayesian software -- 8.4 Closing remarks -- Exercises -- Part II Bayesian Tools for Statistical Modeling -- Chapter 9 Hierarchical models -- 9.1 Introduction -- 9.2 The Poisson-gamma hierarchical model -- 9.2.1 Introduction -- 9.2.2 Model specification -- 9.2.3 Posterior distributions -- 9.2.4 Estimating the parameters -- 9.2.5 Posterior predictive distributions -- 9.3 Full versus empirical Bayesian approach -- 9.4 Gaussian hierarchical models -- 9.4.1 Introduction -- 9.4.2 The Gaussian hierarchical model -- 9.4.3 Estimating the parameters -- 9.4.4 Posterior predictive distributions -- 9.4.5 Comparison of FB and EB approach -- 9.5 Mixed models -- 9.5.1 Introduction -- 9.5.2 The linear mixed model -- 9.5.3 The generalized linear mixed model -- 9.5.4 Nonlinear mixed models -- 9.5.5 Some further extensions -- 9.5.6 Estimation of the random effects and posterior predictive distributions -- 9.5.7 Choice of the level-2 variance prior -- 9.6 Propriety of the posterior -- 9.7 Assessing and accelerating convergence -- 9.8 Comparison of Bayesian and frequentist hierarchical models.
9.8.1 Estimating the level-2 variance -- 9.8.2 ML and REML estimates compared with Bayesian estimates -- 9.9 Closing remarks -- Exercises -- Chapter 10 Model building and assessment -- 10.1 Introduction -- 10.2 Measures for model selection -- 10.2.1 The Bayes factor -- 10.2.2 Information theoretic measures for model selection -- 10.2.3 Model selection based on predictive loss functions -- 10.3 Model checking -- 10.3.1 Introduction -- 10.3.2 Model-checking procedures -- 10.3.3 Sensitivity analysis -- 10.3.4 Posterior predictive checks -- 10.3.5 Model expansion -- 10.4 Closing remarks -- Exercises -- Chapter 11 Variable selection -- 11.1 Introduction -- 11.2 Classical variable selection -- 11.2.1 Variable selection techniques -- 11.2.2 Frequentist regularization -- 11.3 Bayesian variable selection: Concepts and questions -- 11.4 Introduction to Bayesian variable selection -- 11.4.1 Variable selection for K small -- 11.4.2 Variable selection for K large -- 11.5 Variable selection based on Zellner's g-prior -- 11.6 Variable selection based on Reversible Jump Markov chain Monte Carlo -- 11.7 Spike and slab priors -- 11.7.1 Stochastic Search Variable Selection -- 11.7.2 Gibbs Variable Selection -- 11.7.3 Dependent variable selection using SSVS -- 11.8 Bayesian regularization -- 11.8.1 Bayesian LASSO regression -- 11.8.2 Elastic Net and further extensions of the Bayesian LASSO -- 11.9 The many regressors case -- 11.10 Bayesian model selection -- 11.11 Bayesian model averaging -- 11.12 Closing remarks -- Exercises -- Part III Bayesian Methods in Practical Applications -- Chapter 12 Bioassay -- 12.1 Bioassay essentials -- 12.1.1 Cell assays -- 12.1.2 Animal assays -- 12.2 A generic in vitro example -- 12.3 Ames/Salmonella mutagenic assay -- 12.4 Mouse lymphoma assay (L5178Y TK+/-) -- 12.5 Closing remarks -- Chapter 13 Measurement error.
13.1 Continuous measurement error.
Summary: Emmanuel Lesaffre, Professor of Statistics, Biostatistical Centre, Catholic University of Leuven, Leuven, Belgium. Dr Lesaffre has worked on and studied various areas of biostatistics for 25 years. He has taught a variety of courses to students from many disciplines, from medicine and pharmacy to statistics and engineering, and has taught Bayesian statistics for the last 5 years. He has published over 200 papers in major statistical and medical journals, co-edited the book Disease Mapping and Risk Assessment for Public Health, and served as an Associate Editor for Biometrics. He is currently Co-Editor of the journal "Statistical Modelling: An International Journal", Special Editor of two volumes on Statistics in Dentistry in Statistical Methods in Medical Research, and a member of the editorial boards of numerous journals. Andrew Lawson, Professor of Statistics, Department of Epidemiology & Biostatistics, University of South Carolina, USA. Dr Lawson has considerable and wide-ranging experience in the development of statistical methods for spatial and environmental epidemiology. He has solid experience in teaching Bayesian statistics to students studying biostatistics and has also written two books and numerous journal articles in the biostatistics area. Dr Lawson has also guest edited two special issues of "Statistics in Medicine" focusing on Disease Mapping. He is a member of the editorial boards of the journals Statistics in Medicine and .
Holdings
Item type | Current library    | Call number | Status    | Date due | Barcode       | Item holds
----------|--------------------|-------------|-----------|----------|---------------|-----------
Ebrary    | Ebrary Afghanistan |             | Available |          | EBKAF00065271 |
Ebrary    | Ebrary Algeria     |             | Available |          |               |
Ebrary    | Ebrary Cyprus      |             | Available |          |               |
Ebrary    | Ebrary Egypt       |             | Available |          |               |
Ebrary    | Ebrary Libya       |             | Available |          |               |
Ebrary    | Ebrary Morocco     |             | Available |          |               |
Ebrary    | Ebrary Nepal       |             | Available |          | EBKNP00065271 |
Ebrary    | Ebrary Sudan       |             | Available |          |               |
Ebrary    | Ebrary Tunisia     |             | Available |          |               |
Total holds: 0


Description based on publisher supplied metadata and other sources.

Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2019. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries.
