
Introduction to Robust Estimation and Hypothesis Testing.

By: Rand Wilcox
Series: Statistical Modeling and Decision Science Ser.
Publisher: St. Louis : Elsevier Science & Technology, 2012
Copyright date: ©2013
Edition: 3rd ed.
Description: 1 online resource (713 pages)
Content type:
  • text
Media type:
  • computer
Carrier type:
  • online resource
ISBN:
  • 9780123870155
Additional physical formats: Print version: Introduction to Robust Estimation and Hypothesis Testing
DDC classification:
  • 519.544
LOC classification:
  • QA276.8.W55 2012
Online resources:
Contents:
Front Cover -- Introduction to Robust Estimation and Hypothesis Testing -- Copyright -- Table of Contents -- Preface -- 1 Introduction -- 1.1 Problems with Assuming Normality -- 1.2 Transformations -- 1.3 The Influence Curve -- 1.4 The Central Limit Theorem -- 1.5 Is the ANOVA F Robust? -- 1.6 Regression -- 1.7 More Remarks -- 1.8 Using the Computer: R -- 1.9 Some Data Management Issues -- 1.9.1 Eliminating Missing Values -- 2 A Foundation for Robust Methods -- 2.1 Basic Tools for Judging Robustness -- 2.1.1 Qualitative Robustness -- 2.1.2 Infinitesimal Robustness -- 2.1.3 Quantitative Robustness -- 2.2 Some Measures of Location and Their Influence Function -- 2.2.1 Quantiles -- 2.2.2 The Winsorized Mean -- 2.2.3 The Trimmed Mean -- 2.2.4 M-Measures of Location -- 2.2.5 R-Measures of Location -- 2.3 Measures of Scale -- 2.4 Scale Equivariant M-Measures of Location -- 2.5 Winsorized Expected Values -- 3 Estimating Measures of Location and Scale -- 3.1 A Bootstrap Estimate of a Standard Error -- 3.1.1 R Function bootse -- 3.2 Density Estimators -- 3.2.1 Normal Kernel -- 3.2.2 Rosenblatt's Shifted Histogram -- 3.2.3 The Expected Frequency Curve -- 3.2.4 An Adaptive Kernel Estimator -- 3.2.5 R Functions skerd, kerden, kdplot, rdplot, akerd, and splot -- 3.3 The Sample Trimmed Mean -- 3.3.1 R Functions mean, tmean, and lloc -- 3.3.2 Estimating the Standard Error of the Trimmed Mean -- 3.3.3 Estimating the Standard Error of the Sample Winsorized Mean -- 3.3.4 R Functions winmean, winvar, trimse, and winse -- 3.3.5 Estimating the Standard Error of the Sample Median, M -- 3.3.6 R Function msmedse -- 3.4 The Finite Sample Breakdown Point -- 3.5 Estimating Quantiles -- 3.5.1 Estimating the Standard Error of the Sample Quantile -- 3.5.2 R Function qse -- 3.5.3 The Maritz-Jarrett Estimate of the Standard Error of x̂q -- 3.5.4 R Function mjse.
3.5.5 The Harrell-Davis Estimator -- 3.5.6 R Function hd -- 3.5.7 A Bootstrap Estimate of the Standard Error of θ̂q -- 3.5.8 R Function hdseb -- 3.6 An M-Estimator of Location -- 3.6.1 R Function mad -- 3.6.2 Computing an M-estimator of Location -- 3.6.3 R Function mest -- 3.6.4 Estimating the Standard Error of the M-estimator -- 3.6.5 R Function mestse -- 3.6.6 A Bootstrap Estimate of the Standard Error of μ̂m -- 3.6.7 R Function mestseb -- 3.7 One-Step M-estimator -- 3.7.1 R Function onestep -- 3.8 W-estimators -- 3.8.1 Tau Measure of Location -- 3.8.2 R Function tauloc -- 3.8.3 Zuo's Weighted Estimator -- 3.9 The Hodges-Lehmann Estimator -- 3.10 Skipped Estimators -- 3.10.1 R Functions mom and bmean -- 3.11 Some Comparisons of the Location Estimators -- 3.12 More Measures of Scale -- 3.12.1 The Biweight Midvariance -- 3.12.2 R Function bivar -- 3.12.3 The Percentage Bend Midvariance and tau Measure of Variation -- 3.12.4 R Functions pbvar, tauvar -- 3.12.5 The Interquartile Range -- 3.12.6 R Function idealf -- 3.13 Some Outlier Detection Methods -- 3.13.1 Rules Based on Means and Variances -- 3.13.2 A Method Based on the Interquartile Range -- 3.13.3 Carling's Modification -- 3.13.4 A MAD-Median Rule -- 3.13.5 R Functions outbox, out, and boxplot -- 3.13.6 Skewness and the Boxplot Rule -- 3.13.7 R Function adjboxout -- 3.14 Exercises -- 4 Confidence Intervals in the One-Sample Case -- 4.1 Problems when Working with Means -- 4.2 The g-and-h Distribution -- 4.2.1 R Functions ghdist and rmul -- 4.3 Inferences About the Trimmed and Winsorized Means -- 4.3.1 R Functions trimci and winci -- 4.4 Basic Bootstrap Methods -- 4.4.1 The Percentile Bootstrap Method -- 4.4.2 R Function onesampb -- 4.4.3 Bootstrap-t Method -- 4.4.4 Bootstrap Methods when Using a Trimmed Mean -- 4.4.5 Singh's Modification -- 4.4.6 R Functions trimpb and trimcibt.
4.5 Inferences About M-Estimators -- 4.5.1 R Functions mestci and momci -- 4.6 Confidence Intervals for Quantiles -- 4.6.1 Beware of Tied Values when Using the Median -- 4.6.2 Alternative Method for the Median -- 4.6.3 R Functions qmjci, hdci, sint, sintv2, qci, and qint -- 4.7 Empirical Likelihood -- 4.7.1 Bartlett Corrected Empirical Likelihood -- 4.8 Concluding Remarks -- 4.9 Exercises -- 5 Comparing Two Groups -- 5.1 The Shift Function -- 5.1.1 The Kolmogorov-Smirnov Test -- 5.1.2 R Functions ks, kssig, kswsig, and kstiesig -- 5.1.3 The S Band and W Band for the Shift Function -- 5.1.4 R Functions sband and wband -- 5.1.5 Confidence Band for the Deciles Only -- 5.1.6 R Function shifthd -- 5.1.7 R Functions g2plot and splotg2 -- 5.2 Student's t-test -- 5.3 Comparing Medians and Other Trimmed Means -- 5.3.1 R Function yuen -- 5.3.2 A Bootstrap-t Method for Comparing Trimmed Means -- 5.3.3 R Functions yuenbt and yhbt -- 5.3.4 Measuring Effect Size: Robust Analogs of Cohen's d -- 5.3.5 R Functions akp.effect, yuenv2, and ees.ci -- 5.3.6 Comments on Measuring Effect Size -- 5.4 Inferences Based on a Percentile Bootstrap Method -- 5.4.1 Comparing M-Estimators -- 5.4.2 Comparing Trimmed Means and Medians -- 5.4.3 R Functions trimpb2, pb2gen, m2ci, and medpb2 -- 5.5 Comparing Measures of Scale -- 5.5.1 Comparing Variances -- 5.5.2 R Function comvar2 -- 5.5.3 Comparing Biweight Midvariances -- 5.5.4 R Function b2ci -- 5.6 Permutation Tests -- 5.6.1 R Function permg -- 5.7 Inferences About a Probabilistic Measure of Effect Size -- 5.7.1 R Function mee -- 5.7.2 The Cliff and Brunner-Munzel Methods: Handling Tied Values -- 5.7.3 R Functions cid, cidv2, bmp, and wmwloc -- 5.8 Comparing Two Independent Binomials -- 5.8.1 Storer-Kim Method -- 5.8.2 Beal's Method -- 5.8.3 KMS Method -- 5.8.4 R Functions twobinom, twobici, bi2KMS, bi2KMSv2, and bi2CR.
5.8.5 Comparing Discrete Distributions: R Functions binband and disc2com -- 5.9 Comparing Dependent Groups -- 5.9.1 A Shift Function for Dependent Groups -- 5.9.2 R Function lband -- 5.9.3 Comparing Deciles -- 5.9.4 R Function shiftdhd -- 5.9.5 Comparing Trimmed Means -- 5.9.6 R Functions yuend and yuendv2 -- 5.9.7 A Bootstrap-t Method for Marginal Trimmed Means -- 5.9.8 R Function ydbt -- 5.9.9 Inferences about the Distribution of Difference Scores -- 5.9.10 R Functions loc2dif and l2drmci -- 5.9.11 Percentile Bootstrap: Comparing Medians, M-Estimators and Other Measures of Location and Scale -- 5.9.12 R Function bootdpci -- 5.9.13 Handling Missing Values -- Method M1 -- Method M2 -- Method M3 -- Comments on Choosing a Method -- 5.9.14 R Functions rm2miss and rmmismcp -- 5.9.15 Comparing Variances -- 5.9.16 The Sign Test and Inferences about the Binomial Distribution -- 5.9.17 R Functions binomci and acbinomci -- 5.10 Exercises -- 6 Some Multivariate Methods -- 6.1 Generalized Variance -- 6.2 Depth -- 6.2.1 Mahalanobis Depth -- 6.2.2 Halfspace Depth -- 6.2.3 Computing Halfspace Depth -- 6.2.4 R Functions depth2, depth, fdepth, fdepthv2, and unidepth -- 6.2.5 Projection Depth -- 6.2.6 R functions pdis and pdisMC -- 6.2.7 Other Measures of Depth -- 6.2.8 R Function zdepth -- 6.3 Some Affine Equivariant Estimators -- 6.3.1 Minimum Volume Ellipsoid Estimator -- 6.3.2 The Minimum Covariance Determinant Estimator -- 6.3.3 S-Estimators and Constrained M-Estimators -- 6.3.4 R Function tbs -- 6.3.5 Donoho-Gasko Generalization of a Trimmed Mean -- 6.3.6 R Functions dmean and dcov -- 6.3.7 The Stahel-Donoho W-Estimator -- 6.3.8 R Function sdwe -- 6.3.9 Median Ball Algorithm -- 6.3.10 R Function rmba -- 6.3.11 OGK Estimator -- 6.3.12 R Function ogk -- 6.3.13 An M-Estimator -- 6.3.14 R Function MARest -- 6.4 Multivariate Outlier Detection Methods.
6.4.1 A Relplot -- 6.4.2 R Function relplot -- 6.4.3 The MVE Method -- 6.4.4 The MCD Method -- 6.4.5 R Functions covmve and covmcd -- 6.4.6 R function out -- 6.4.7 The MGV Method -- 6.4.8 R Function outmgv -- 6.4.9 A Projection Method -- 6.4.10 R functions outpro and out3d -- 6.4.11 Outlier Identification in High Dimensions -- 6.4.12 R Function outproad and outmgvad -- 6.4.13 Approaches Based on Geometric Quantiles -- 6.4.14 Comments on Choosing a Method -- 6.5 A Skipped Estimator of Location and Scatter -- 6.5.1 R Functions smean, wmcd, wmve, mgvmean, L1medcen, spat, mgvcov, skip, skipcov, and dcov -- 6.6 Robust Generalized Variance -- 6.6.1 R Function gvarg -- 6.7 Inference in the One-Sample Case -- 6.7.1 Inferences Based on the OP Measure of Location -- 6.7.2 Extension of Hotelling's T² to Trimmed Means -- 6.7.3 R Functions smeancrv2 and hotel1.tr -- 6.7.4 Inferences Based on the MGV Estimator -- 6.7.5 R Function smgvcr -- 6.8 Two-Sample Case -- 6.8.1 R Functions smean2, smean2v2, matsplit, and mat2grp -- Data Management -- 6.8.2 Comparing Robust Generalized Variances -- 6.8.3 R function gvar2g -- 6.9 Multivariate Density Estimators -- 6.10 A Two-Sample, Projection-Type Extension of the Wilcoxon-Mann-Whitney Test -- 6.10.1 R functions mulwmw and mulwmwv2 -- 6.11 A Relative Depth Analog of the Wilcoxon-Mann-Whitney Test -- 6.11.1 R function mwmw -- 6.12 Comparisons Based on Depth -- 6.12.1 R Functions lsqs3 and depthg2 -- 6.13 Comparing Dependent Groups Based on All Pairwise Differences -- 6.13.1 R Function dfried -- 6.14 Robust Principal Components Analysis -- 6.14.1 R Functions prcomp and regpca -- 6.14.2 Maronna's Method -- 6.14.3 The SPCA Method -- 6.14.4 Method HRVB -- 6.14.5 Method OP -- 6.14.6 Method PPCA -- 6.14.7 R Functions outpca, robpca, robpcaS, SPCA, Ppca, and Ppca.summary -- 6.14.8 Comments on Choosing the Number of Components.
6.15 Cluster Analysis.
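The contents above list several of the book's R functions for outlier detection (Section 3.13, e.g., out and outbox). As a rough conceptual illustration of the MAD-median rule named in Section 3.13.4, here is a short Python sketch; it is not code from the book, and the cutoff 2.24 (approximately the square root of the 0.975 quantile of a chi-squared distribution with one degree of freedom) and the 0.6745 rescaling are standard conventions assumed here:

```python
import statistics

def mad_median_outliers(x, crit=2.24):
    """Flag values whose distance from the median, in MADN units,
    exceeds crit (2.24 ~ sqrt of the 0.975 chi-squared(1) quantile)."""
    m = statistics.median(x)
    mad = statistics.median(abs(v - m) for v in x)
    madn = mad / 0.6745  # rescaled so MADN estimates sigma under normality
    return [v for v in x if abs(v - m) / madn > crit]

print(mad_median_outliers([2.1, 2.4, 1.9, 2.2, 2.0, 9.5]))  # [9.5]
```

Because both the median and the MAD have a 0.5 breakdown point, this rule is not masked by the outliers it is trying to detect, unlike rules based on the mean and standard deviation (Section 3.13.1).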
Summary: This revised book provides a thorough explanation of the foundation of robust methods, incorporating the latest updates on R and S-Plus, robust ANOVA (analysis of variance), and regression. It guides advanced students and other professionals through the basic strategies used for developing practical solutions to problems, and provides a brief background on the foundations of modern methods, placing the new methods in historical context. Author Rand Wilcox includes chapter exercises and many real-world examples that illustrate how various methods perform in different situations. Introduction to Robust Estimation and Hypothesis Testing, Third Edition, focuses on the practical applications of modern, robust methods, which can greatly enhance our chances of detecting true differences among groups and true associations among variables.
  • Covers the latest developments in robust regression
  • Covers the latest improvements in ANOVA
  • Includes the newest rank-based methods
  • Describes and illustrates easy-to-use software
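The robust measures of location emphasized in the summary can be illustrated with a minimal sketch of the trimmed mean (Section 3.3 of the contents). The book's own R functions (e.g., tmean, trimci) are not reproduced here; this Python version is only a conceptual approximation of the standard definition:

```python
def trimmed_mean(x, prop=0.2):
    """Average the data after removing the g smallest and g largest
    values, where g = floor(prop * n); prop=0.2 gives the 20% trimmed mean."""
    xs = sorted(x)
    g = int(prop * len(xs))
    kept = xs[g:len(xs) - g]
    return sum(kept) / len(kept)

# One wild value barely moves the trimmed mean, unlike the ordinary mean.
print(trimmed_mean([1, 2, 3, 4, 5, 6, 7, 8, 9, 100]))  # 5.5
```

Trimming a fixed proportion from each tail is what gives the estimator its resistance to outliers while retaining reasonable efficiency under normality, the trade-off the book examines in detail.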
Holdings
Item type Current library Call number Status Date due Barcode Item holds
Ebrary Ebrary Afghanistan Available EBKAF00060250
Ebrary Ebrary Algeria Available
Ebrary Ebrary Cyprus Available
Ebrary Ebrary Egypt Available
Ebrary Ebrary Libya Available
Ebrary Ebrary Morocco Available
Ebrary Ebrary Nepal Available EBKNP00060250
Ebrary Ebrary Sudan Available
Ebrary Ebrary Tunisia Available
Total holds: 0

Description based on publisher supplied metadata and other sources.

Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2019. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries.
