
Measuring Computer Performance : A Practitioner's Guide.

By: Lilja, David J.
Publisher: Cambridge : Cambridge University Press, 2000
Copyright date: ©2000
Description: 1 online resource (279 pages)
Content type:
  • text
Media type:
  • computer
Carrier type:
  • online resource
ISBN:
  • 9780511151347
Subject(s):
Genre/Form:
Additional physical formats: Print version: Measuring Computer Performance : A Practitioner's Guide
DDC classification:
  • 004.2/4
LOC classification:
  • QA76.9.E94 L54 2000eb
Online resources:
Contents:
Cover -- Half-title -- Title -- Copyright -- Dedication -- Contents -- Preface -- Goals -- Philosophy -- Organization -- Suggestions for using this text -- Acknowledgements -- 1 Introduction -- 1.1 Measuring performance -- 1.2 Common goals of performance analysis -- 1.3 Solution techniques -- 1.4 Summary -- 1.5 Exercises -- 2 Metrics of performance -- 2.1 What is a performance metric? -- 2.2 Characteristics of a good performance metric -- 2.3 Processor and system performance metrics -- 2.3.1 The clock rate -- 2.3.2 MIPS -- 2.3.3 MFLOPS -- 2.3.4 SPEC -- 2.3.5 QUIPS -- 2.3.6 Execution time -- 2.4 Other types of performance metrics -- 2.5 Speedup and relative change -- 2.6 Means versus ends metrics -- 2.7 Summary -- 2.8 For further reading -- 2.9 Exercises -- 3 Average performance and variability -- 3.1 Why mean values? -- 3.2 Indices of central tendency -- 3.2.1 The sample mean -- 3.2.2 The sample median -- 3.2.3 The sample mode -- 3.2.4 Selecting among the mean, median, and mode -- 3.3 Other types of means -- 3.3.1 Characteristics of a good mean -- 3.3.2 The arithmetic mean -- 3.3.3 The harmonic mean -- 3.3.4 The geometric mean -- 3.3.5 Weighted means -- 3.4 Quantifying variability -- 3.5 Summary -- 3.6 For further reading -- 3.7 Exercises -- 4 Errors in experimental measurements -- 4.1 Accuracy, precision, and resolution -- 4.2 Sources of errors -- 4.3 A model of errors -- 4.4 Quantifying errors -- 4.4.1 Confidence intervals for the mean -- 4.4.1.1 Case 1: When the number of measurements is large… -- 4.4.1.2 Case 2: When the number of measurements is small (n < 30) -- 4.4.2 Determining the number of measurements needed -- 4.4.3 Confidence intervals for proportions -- 4.4.3.1 Determining the number of measurements needed -- 4.4.4 Normalizing data for confidence intervals -- 4.5 Summary -- 4.6 For further reading -- 4.7 Exercises.
5 Comparing alternatives -- 5.1 Comparing two alternatives -- 5.1.1 Before-and-after comparisons -- 5.1.2 Noncorresponding measurements -- 5.1.3 Comparing proportions -- 5.2 Comparing more than two alternatives -- 5.2.1 Analysis of variance (ANOVA) -- 5.2.2 Contrasts -- 5.3 Summary -- 5.4 For further reading -- 5.5 Exercises -- 6 Measurement tools and techniques -- 6.1 Events and measurement strategies -- 6.1.1 Events-type classification -- 6.1.2 Measurement strategies -- 6.2 Interval timers -- 6.2.1 Timer overhead -- 6.2.2 Quantization errors -- 6.2.3 Statistical measures of short intervals -- 6.3 Program profiling -- 6.3.1 PC sampling -- 6.3.2 Basic-block counting -- 6.4 Event tracing -- 6.4.1 Trace generation -- 6.4.2 Trace compression -- 6.4.2.1 Online trace consumption -- 6.4.2.2 Compression of data -- 6.4.2.3 Abstract execution -- 6.4.2.4 Trace sampling -- 6.5 Indirect and ad hoc measurements -- 6.6 Perturbations due to measuring -- 6.7 Summary -- 6.8 For further reading -- 6.9 Exercises -- 7 Benchmark programs -- 7.1 Types of benchmark programs -- 7.1.1 The single-instruction-execution time -- 7.1.2 Instruction-execution mixes -- 7.1.3 Synthetic benchmark programs -- 7.1.4 Microbenchmarks -- 7.1.5 Program kernels -- 7.1.6 Application benchmark programs -- 7.2 Benchmark strategies -- 7.2.1 Fixed-computation benchmarks -- 7.2.1.1 Amdahl's law -- 7.2.2 Fixed-time benchmarks -- 7.2.2.1 Scaling Amdahl's law -- 7.2.3 Variable-computation and variable-time benchmarks -- 7.3 Example benchmark programs -- 7.3.1 Scientific and engineering -- 7.3.1.1 Livermore loops -- 7.3.1.2 NAS kernels -- 7.3.1.3 LINPACK -- 7.3.1.4 PERFECT club -- 7.3.1.5 SPEC CPU -- 7.3.1.6 Whetstone and Dhrystone -- 7.3.2 Transaction processing -- 7.3.3 Servers and networks -- 7.3.3.1 SFS/LADDIS -- 7.3.3.2 SPECweb -- 7.3.4 Miscellaneous benchmarks -- 7.4 Summary.
7.5 For further reading -- 7.6 Exercises -- 8 Linear regression models -- 8.1 Least squares minimization -- 8.2 Confidence intervals for regression parameters -- 8.3 Correlation -- 8.3.1 The coefficient of determination -- 8.3.2 The correlation coefficient -- 8.3.3 Correlation and causation -- 8.4 Multiple linear regression -- 8.5 Verifying linearity -- 8.6 Nonlinear models -- 8.7 Summary -- 8.8 For further reading -- 8.9 Exercises -- 9 The design of experiments -- 9.1 Types of experiments -- 9.2 Terminology -- 9.3 Two-factor experiments -- 9.3.1 Interaction of factors -- 9.3.2 ANOVA for two-factor experiments -- 9.3.3 The need for replications -- 9.4 Generalized m-factor experiments -- 9.5 n2^m experiments -- 9.5.1 Two factors -- 9.5.2 More than two factors -- 9.6 Summary -- 9.7 For further reading -- 9.8 Exercises -- 10 Simulation and random-number generation -- 10.1 Simulation-efficiency considerations -- 10.2 Types of simulations -- 10.2.1 Emulation -- 10.2.2 Static (Monte Carlo) simulation -- 10.2.3 Discrete-event simulation -- 10.2.3.1 The event scheduler -- 10.2.3.2 The global time variable -- 10.2.3.3 Event processing -- 10.2.3.4 Event generation -- 10.2.3.5 Recording and summarization of data -- 10.2.3.6 The simulation algorithm -- 10.3 Random-number generation -- 10.3.1 Uniformly distributed sequences -- 10.3.1.1 Choosing the constants -- 10.3.1.2 Cautions and suggestions -- 10.3.2 Nonuniformly distributed random numbers -- 10.3.2.1 Inverse transformation -- 10.3.2.2 The alias method -- 10.3.2.3 Decomposition -- 10.3.2.4 Special characterization -- 10.3.2.5 The accept-reject method -- 10.4 Verification and validation of simulations -- 10.4.1 Validation -- 10.4.1.1 Comparisons with real systems -- 10.4.1.2 Analytical results -- 10.4.1.3 Engineering judgement -- 10.4.2 Verification -- 10.4.3 Deterministic verification.
10.4.4 Stochastic verification -- 10.4.4.1 Goodness-of-fit testing -- 10.4.4.2 Tests of independence -- 10.5 Summary -- 10.6 For further reading -- 10.7 Exercises -- 11 Queueing analysis -- 11.1 Queueing-network models -- 11.2 Basic assumptions and notation -- 11.3 Operational analysis -- 11.3.1 The utilization law -- 11.3.2 Little's law -- 11.4 Stochastic analysis -- 11.4.1 Kendall's notation -- 11.4.2 The single-queue, single-server (M/M/1) system -- 11.4.3 The single-queue, multiple-server (M/M/c) system -- 11.5 Summary -- 11.6 For further reading -- 11.7 Exercises -- Appendix A Glossary -- Appendix B Some useful probability distributions -- B.1 The Bernoulli distribution -- B.2 The binomial distribution -- B.3 The geometric distribution -- B.4 The discrete uniform distribution -- B.5 The continuous uniform distribution -- B.6 The Poisson distribution -- B.7 The exponential distribution -- B.8 The Gaussian (normal) distribution -- B.9 The Erlang distribution -- B.10 The Pareto distribution -- B.11 For further reading -- Appendix C Selected statistical tables -- C.1 Critical values of Student's t distribution -- C.2 Critical values of the F distribution -- C.3 Critical values of the chi-squared distribution -- C.4 For further reading -- Index.
Summary: Sets out the fundamental techniques used in analyzing and understanding the performance of computer systems.
Holdings
Item type   Current library       Call number   Status      Date due   Barcode       Item holds
Ebrary      Ebrary Afghanistan                  Available              EBKAF000337
Ebrary      Ebrary Algeria                      Available
Ebrary      Ebrary Cyprus                       Available
Ebrary      Ebrary Egypt                        Available
Ebrary      Ebrary Libya                        Available
Ebrary      Ebrary Morocco                      Available
Ebrary      Ebrary Nepal                        Available              EBKNP000337
Ebrary      Ebrary Sudan                        Available
Ebrary      Ebrary Tunisia                      Available
Total holds: 0

Description based on publisher supplied metadata and other sources.

Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2019. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries.
