Paradigms of Combinatorial Optimization: Problems and New Approaches (2nd Edition).

Series: ISTE
Publisher: Somerset : John Wiley & Sons, Incorporated, 2014
Copyright date: ©2014
Edition: 2nd ed.
Description: 1 online resource (815 pages)
Content type:
  • text
Media type:
  • computer
Carrier type:
  • online resource
ISBN:
  • 9781119015161
Additional physical formats: Print version: Paradigms of Combinatorial Optimization : Problems and New Approaches (2nd Edition)
DDC classification:
  • 519.64
LOC classification:
  • QA402.5 -- .P373 2014eb
Contents:
Cover -- Title Page -- Copyright -- Contents -- Preface -- PART I: Paradigmatic Problems -- Chapter 1: Optimal Satisfiability -- 1.1. Introduction -- 1.2. Preliminaries -- 1.2.1. Constraint satisfaction problems: decision and optimization versions -- 1.2.2. Constraint types -- 1.3. Complexity of decision problems -- 1.4. Complexity and approximation of optimization problems -- 1.4.1. Maximization problems -- 1.4.2. Minimization problems -- 1.5. Particular instances of constraint satisfaction problems -- 1.5.1. Planar instances -- 1.5.2. Dense instances -- 1.5.3. Instances with a bounded number of occurrences -- 1.6. Satisfiability problems under global constraints -- 1.7. Conclusion -- 1.8. Bibliography -- Chapter 2: Scheduling Problems -- 2.1. Introduction -- 2.2. New techniques for approximation -- 2.2.1. Linear programming and scheduling -- 2.2.1.1. Single machine problems -- 2.2.1.2. Problems with m machines -- 2.2.2. An approximation scheme for P||Cmax -- 2.3. Constraints and scheduling -- 2.3.1. The monomachine constraint -- 2.3.2. The cumulative constraint -- 2.3.3. Energetic reasoning -- 2.4. Non-regular criteria -- 2.4.1. PERT with convex costs -- 2.4.1.1. The equality graph and its blocks -- 2.4.1.2. Generic algorithm -- 2.4.1.3. Complexity of the generic algorithm -- 2.4.2. Minimizing the early-tardy cost on one machine -- 2.4.2.1. Special cases -- 2.4.2.2. The lower bound -- 2.4.2.3. The branch-and-bound algorithm -- 2.4.2.4. Lower bounds in a node of the search tree -- 2.4.2.5. Upper bound -- 2.4.2.6. Branching rule -- 2.4.2.7. Dominance rules -- 2.4.2.8. Experimental results -- 2.5. Bibliography -- Chapter 3: Location Problems -- 3.1. Introduction -- 3.1.1. Weber's problem -- 3.1.2. A classification -- 3.2. Continuous problems -- 3.2.1. Complete covering -- 3.2.2. Maximal covering -- 3.2.2.1. Fixed radius -- 3.2.2.2. Variable radius.
3.2.3. Empty covering -- 3.2.4. Bicriteria models -- 3.2.5. Covering with multiple resources -- 3.3. Discrete problems -- 3.3.1. p-Center -- 3.3.2. p-Dispersion -- 3.3.3. p-Median -- 3.3.3.1. Fixed charge -- 3.3.4. Hub -- 3.3.5. p-Maxisum -- 3.4. On-line problems -- 3.5. The quadratic assignment problem -- 3.5.1. Definitions and formulations of the problem -- 3.5.2. Complexity -- 3.5.3. Relaxations and lower bounds -- 3.5.3.1. Linear relaxations -- 3.5.3.2. Semi-definite relaxations -- 3.5.3.3. Convex quadratic relaxations -- 3.6. Conclusion -- 3.7. Bibliography -- Chapter 4: MiniMax Algorithms and Games -- 4.1. Introduction -- 4.2. Games of no chance: the simple cases -- 4.3. The case of complex no chance games -- 4.3.1. Approximative evaluation -- 4.3.2. Horizon effect -- 4.3.3. α-β pruning -- 4.4. Quiescence search -- 4.4.1. Other refinements of the MiniMax algorithm -- 4.5. Case of games using chance -- 4.6. Conclusion -- 4.7. Bibliography -- Chapter 5: Two-dimensional Bin Packing Problems -- 5.1. Introduction -- 5.2. Models -- 5.2.1. ILP models for level packing -- 5.3. Upper bounds -- 5.3.1. Strip packing -- 5.3.2. Bin packing: two-phase heuristics -- 5.3.3. Bin packing: one-phase level heuristics -- 5.3.4. Bin packing: one-phase non-level heuristics -- 5.3.5. Metaheuristics -- 5.3.6. Approximation algorithms -- 5.4. Lower bounds -- 5.4.1. Lower bounds for level packing -- 5.5. Exact algorithms -- 5.6 Acknowledgements -- 5.7. Bibliography -- Chapter 6: The Maximum Cut Problem -- 6.1. Introduction -- 6.2. Complexity and polynomial cases -- 6.3. Applications -- 6.3.1. Spin glass models -- 6.3.2. Unconstrained 0-1 quadratic programming -- 6.3.3. The via minimization problem -- 6.4. The cut polytope -- 6.4.1. Valid inequalities and separation -- 6.4.2. Branch-and-cut algorithms -- 6.4.3. The cut polyhedron.
6.5. Semi-definite programming (SDP) and the maximum cut problem -- 6.5.1. Semi-definite formulation of the MAX-CUT problem -- 6.5.2. Quality of the semi-definite formulation -- 6.5.3. Existing works in the literature -- 6.6. The cut cone and applications -- 6.6.1. The cut cone -- 6.6.2. Relationship to the cut polytope -- 6.6.3. The semi-metric cone -- 6.6.4. Applications to the multiflow problem -- 6.7. Approximation methods -- 6.7.1. Methods with performance guarantees -- 6.7.2. Methods with no guarantees -- 6.8. Related problems -- 6.8.1. Unconstrained 0-1 quadratic programming -- 6.8.2. The maximum even (odd) cut problem -- 6.8.3. The equipartition problem -- 6.8.4. Other problems -- 6.9. Conclusion -- 6.10. Bibliography -- Chapter 7: The Traveling Salesman Problem and its Variations -- 7.1. Introduction -- 7.2. Elementary properties and various subproblems -- 7.2.1. Elementary properties -- 7.2.2. Various subproblems -- 7.3. Exact solving algorithms -- 7.3.1. A dynamic programming algorithm -- 7.3.2. A branch-and-bound algorithm -- 7.4. Approximation algorithm for max TSP -- 7.4.1. An algorithm based on 2-matching -- 7.4.2. Algorithm mixing 2-matching and matching -- 7.5. Approximation algorithm for min TSP -- 7.5.1. Algorithm based on the spanning tree and matching -- 7.5.2. Local search algorithm -- 7.6. Constructive algorithms -- 7.6.1. Nearest neighbor algorithm -- 7.6.1.1. The general case -- 7.6.1.2. The metric case -- 7.6.2. Nearest insertion algorithm -- 7.7. Conclusion -- 7.8. Bibliography -- Chapter 8: 0-1 Knapsack Problems -- 8.1. General solution principle -- 8.2. Relaxation -- 8.3. Heuristic -- 8.4. Variable fixing -- 8.5. Dynamic programming -- 8.5.1. General principle -- 8.5.2. Managing feasible combinations of objects -- 8.6. Solution search by hybridization of branch-and-bound and dynamic programming.
8.6.1. Principle of hybridization -- 8.6.2. Illustration of hybridization -- 8.7. Conclusion -- 8.8. Bibliography -- Chapter 9: Integer Quadratic Knapsack Problems -- 9.1. Introduction -- 9.1.1. Problem formulation -- 9.1.2. Significance of the problem -- 9.1.2.1. A task assignment problem -- 9.1.2.2. Portfolio management -- 9.2. Solution methods -- 9.2.1. The convex separable problem -- 9.2.1.1. Solving the continuous relaxation -- 9.2.1.2. Calculating a "good" quality bound -- 9.2.2. The non-convex separable problem -- 9.2.3. The convex non-separable problem -- 9.2.4. The non-convex non-separable problem -- 9.2.4.1. Special case: the 0-1 variable quadratic knapsack problem -- 9.3. Numerical experiments -- 9.3.1. The convex and separable integer quadratic knapsack problem -- 9.3.2. The convex and separable integer quadratic multi-knapsack problem -- 9.4. Conclusion -- 9.5. Bibliography -- Chapter 10: Graph Coloring Problems -- 10.1. Basic notions of colorings -- 10.2. Complexity of coloring -- 10.3. Sequential methods of coloring -- 10.4. An exact coloring algorithm -- 10.5. Tabu search -- 10.6. Perfect graphs -- 10.7. Chromatic scheduling -- 10.8. Interval coloring -- 10.9. T-colorings -- 10.10. List colorings -- 10.11. Coloring with cardinality constraints -- 10.12. Other extensions -- 10.13. Edge coloring -- 10.13.1. f-Coloring of edge -- 10.13.2. [g, f]-Colorings of edges -- 10.13.3. A model of hypergraph coloring -- 10.14. Conclusion -- 10.15. Bibliography -- PART II: New Approaches -- Chapter 11: Polynomial Approximation -- 11.1. What is polynomial approximation? -- 11.1.1. Efficiently solving a difficult problem -- 11.1.2. Approximation measures -- 11.2. Some first examples of analysis: constant approximation ratios -- 11.2.1. An example of classical approximation: the metric traveling salesman.
11.2.2. Examples of the differential ratio case -- 11.2.2.1. Bin packing -- 11.2.2.2. Minimum graph coloring -- 11.2.2.3. Polynomially bounded traveling salesman -- 11.3. Approximation schemes -- 11.3.1. Non-complete schemes -- 11.3.1.1. A scheduling problem with p processors for the classical ratio -- 11.3.1.2. The Euclidean traveling salesman problem -- 11.3.1.2.1. Phase 1 -- 11.3.1.2.2. Phase 2 -- 11.3.1.2.3. Phase 3 -- 11.3.1.3. An example in the case of the differential ratio: return to BIN PACKING -- 11.3.2. Complete approximation schemes - example of the Boolean knapsack -- 11.3.2.1. A great classic: the positive case for the classical ratio -- 11.3.2.2. A generalization: using the differential ratio -- 11.4. Analyses depending on the instance -- 11.4.1. Set covering and classical ratios -- 11.4.2. Set covering and differential ratios -- 11.4.3. The maximum stable set problem -- 11.5. Conclusion: methods and issues of approximation -- 11.5.1. The types of algorithms: a few great classics -- 11.5.2. Approximation classes: structuring the class NPO -- 11.5.3. Reductions in approximation -- 11.5.4. Issues -- 11.6. Bibliography -- Chapter 12: Approximation Preserving Reductions -- 12.1. Introduction -- 12.2. Strict and continuous reductions -- 12.2.1. Strict reductions -- 12.2.2. Continuous reduction -- 12.3. AP-reduction and completeness in the classes NPO and APX -- 12.3.1. Completeness in NPO -- 12.3.2. Completeness in APX -- 12.3.3. Using completeness to derive negative results -- 12.4. L-reduction and completeness in the classes Max-SNP and APX -- 12.4.1. The L-reduction and the class Max-SNP -- 12.4.2. Examples of L-reductions -- 12.4.2.1. MAX 2SAT and MIN 2SAT -- 12.4.2.2. MAX 3SAT and MAX 2SAT -- 12.4.2.3. MAX 2SAT and MAX ≠ 3SAT -- 12.4.3. Completeness in Max-SNP and APX -- 12.5. Affine reduction -- 12.6. A few words on GAP-reduction.
12.7. History and comment.
Summary: Combinatorial optimization is a multidisciplinary scientific area lying at the interface of three major scientific domains: mathematics, theoretical computer science and management. The three volumes of the Combinatorial Optimization series aim to cover a wide range of topics in this area, dealing with fundamental notions and approaches as well as with several classical applications of combinatorial optimization. Concepts of Combinatorial Optimization is divided into three parts: (1) on the complexity of combinatorial optimization problems, presenting basics of worst-case and randomized complexity; (2) classical solution methods, presenting the two best-known methods for solving hard combinatorial optimization problems, namely Branch-and-Bound and Dynamic Programming; and (3) elements of mathematical programming, presenting fundamentals of mathematical-programming-based methods, which have been at the heart of Operations Research since the origins of the field.
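
To give a concrete flavor of one of the classical methods named in the summary (dynamic programming, treated in Chapter 8 of the contents), here is a minimal illustrative sketch of the standard 0-1 knapsack recurrence. The function name, variable names and the small example instance are assumptions made for illustration only and are not taken from the book.

    # Illustrative sketch (not from the book): textbook dynamic programming
    # for the 0-1 knapsack problem (cf. Chapter 8, "Dynamic programming").
    def knapsack_01(values, weights, capacity):
        """Return the maximum total value of a subset of items whose total
        weight does not exceed `capacity`."""
        dp = [0] * (capacity + 1)  # dp[c] = best value achievable with capacity c
        for value, weight in zip(values, weights):
            # Traverse capacities downwards so each item is used at most once.
            for c in range(capacity, weight - 1, -1):
                dp[c] = max(dp[c], dp[c - weight] + value)
        return dp[capacity]

    # Hypothetical instance: three items, knapsack capacity 50.
    print(knapsack_01(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220
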
Holdings
Item type Current library Call number Status Date due Barcode Item holds
Ebrary Ebrary Afghanistan Available EBKAF00097303
Ebrary Ebrary Algeria Available
Ebrary Ebrary Cyprus Available
Ebrary Ebrary Egypt Available
Ebrary Ebrary Libya Available
Ebrary Ebrary Morocco Available
Ebrary Ebrary Nepal Available EBKNP00097303
Ebrary Ebrary Sudan Available
Ebrary Ebrary Tunisia Available
Total holds: 0

Description based on publisher supplied metadata and other sources.

Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2019. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries.
