This book grew out of a special session on constraint-handling techniques for evolutionary algorithms held at the 2007 Congress on Evolutionary Computation (CEC). It presents recent research on constraint handling in evolutionary optimization.
Intended for researchers and practitioners alike, this book covers carefully selected yet broad topics in optimization, machine learning, and metaheuristics. Written by world-leading academic researchers with extensive experience in industrial applications, this self-contained book is the first of its kind to provide comprehensive background knowledge, practical guidelines, and state-of-the-art techniques. New algorithms are carefully explained, elaborated with pseudocode or flowcharts, and accompanied by full working source code that is made freely available. This is followed by a presentation of a variety of data-driven single- and multi-objective optimization algorithms that seamlessly integrate modern machine learning, such as deep learning and transfer learning, with evolutionary and swarm optimization algorithms. Applications of data-driven optimization ranging from aerodynamic design and optimization of industrial processes to deep neural architecture search are included.
The use of evolutionary computation techniques has grown considerably over the past several years. Over this time, the use and applications of these techniques have been further enhanced, resulting in a set of computational intelligence (also known as modern heuristics) tools that are particularly adept at solving complex optimization problems. Moreover, they are characteristically more robust than traditional methods based on formal logic or mathematical programming for many real-world OR/MS problems. Hence, evolutionary computation techniques handle complex optimization problems better than traditional optimization techniques, although they can also be applied to simpler problems where conventional techniques work well. Clearly there is a need for a volume that both reviews state-of-the-art evolutionary computation techniques and surveys the most recent developments in their use for solving complex OR/MS problems. This volume on Evolutionary Optimization seeks to fill this need. Evolutionary Optimization is a volume of invited papers written by leading researchers in the field. All papers were peer reviewed by at least two recognized reviewers. The book covers the foundations as well as the practical side of evolutionary optimization.
From the contents:

Neural networks – theory and applications: NN (neural network) classifiers on continuous data domains – quantum associative memory – a new class of neuron-like discrete filters for image processing – modular NNs for improving generalisation properties – presynaptic inhibition modelling for image-processing applications – NN recognition system for a curvature primal sketch – NN-based nonlinear temporal-spatial noise rejection system – relaxation rate for improving the Hopfield network – Oja's NN and the influence of the learning gain on its dynamics

Genetic algorithms – theory and applications: transposition: a biologically inspired mechanism to use with GAs (genetic algorithms) – GA for decision tree induction – optimising decision classifications using GAs – scheduling tasks with intertask communication onto multiprocessors by GAs – design of robust networks with GAs – effect of degenerate coding on GAs – multiple traffic signal control using a GA – evolving musical harmonisation – niched-penalty approach for constraint handling in GAs – GA with dynamic population size – GA with dynamic niche clustering for multimodal function optimisation

Soft computing and uncertainty: self-adaptation of evolutionarily constructed decision trees by information spreading – evolutionary programming of near-optimal NNs
This text examines how multiobjective evolutionary algorithms and related techniques can be used to solve problems, particularly in the disciplines of science and engineering. Contributions by leading researchers show how the concept of multiobjective optimization can be used to reformulate and resolve problems in areas such as constrained optimization, co-evolution, classification, inverse modeling, and design.
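To make the reformulation idea concrete, here is a minimal, hypothetical Python sketch (not taken from the book) of how a constrained problem can be recast as a bi-objective one: the original cost and the total constraint violation become two objectives compared under Pareto dominance. The specific objective and constraints are illustrative assumptions.

```python
import numpy as np

def biobjective_reformulation(x):
    """Treat a constrained problem as two objectives to minimize:
    the original cost and the total constraint violation."""
    # Hypothetical constrained problem: minimize f(x) subject to g_i(x) <= 0
    f = np.sum((x - 1.0) ** 2)                    # original objective
    g = [x[0] + x[1] - 1.0,                       # g1(x) <= 0
         0.5 - x[0]]                              # g2(x) <= 0
    violation = sum(max(0.0, gi) for gi in g)     # second objective
    return f, violation

def dominates(p, q):
    """Pareto dominance: p is no worse in every objective and better in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

# A feasible candidate dominates an infeasible one with higher cost and violation.
print(dominates(biobjective_reformulation(np.array([0.5, 0.5])),
                biobjective_reformulation(np.array([2.0, 2.0]))))
```

Under this reformulation, a multiobjective evolutionary algorithm can drive the population toward feasibility and optimality simultaneously, without hand-tuned penalty weights.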
The set LNCS 2723 and LNCS 2724 constitutes the refereed proceedings of the Genetic and Evolutionary Computation Conference, GECCO 2003, held in Chicago, IL, USA in July 2003. The 193 revised full papers and 93 poster papers presented were carefully reviewed and selected from a total of 417 submissions. The papers are organized in topical sections on a-life, adaptive behavior, agents, and ant colony optimization; artificial immune systems; coevolution; DNA, molecular, and quantum computing; evolvable hardware; evolutionary robotics; evolution strategies and evolutionary programming; evolutionary scheduling and routing; genetic algorithms; genetic programming; learning classifier systems; real-world applications; and search-based software engineering.
This book constitutes the refereed proceedings of the 6th Mexican International Conference on Artificial Intelligence, MICAI 2007, held in Aguascalientes, Mexico, in November 2007. The 116 revised full papers presented were carefully reviewed and selected from numerous submissions for inclusion in the book. The papers are organized in sections on topics that include computational intelligence, neural networks, knowledge representation and reasoning, agents and multiagent systems.
This well-received book, now in its second edition, continues to provide a number of optimization algorithms which are commonly used in computer-aided engineering design. The book begins with simple single-variable optimization techniques, and then goes on to give unconstrained and constrained optimization techniques in a step-by-step format so that they can be coded in any user-specific computer language. In addition to classical optimization methods, the book also discusses Genetic Algorithms and Simulated Annealing, which are widely used in engineering design problems because of their ability to find global optimum solutions. The second edition adds several new optimization topics such as design and manufacturing, data fitting and regression, inverse problems, scheduling and routing, data mining, intelligent system design, Lagrangian duality theory, and quadratic programming and its extension to sequential quadratic programming. It also extensively revises the linear programming algorithms section in the Appendix, and includes a larger number of exercise problems. The book is suitable for senior undergraduate/postgraduate students of mechanical, production and chemical engineering. Students in other engineering branches that offer optimization courses, as well as designers and decision-makers, will also find the book useful. Key Features: Algorithms are presented in a step-by-step format to facilitate coding in a computer language. Sample computer programs in FORTRAN are appended for better comprehension. Worked-out examples are illustrated for easy understanding. The same example problems are solved with most algorithms for a comparative evaluation of the algorithms.
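As an illustration of the simple single-variable techniques such texts typically begin with, the following sketch implements golden-section search in Python. It is a generic textbook method shown here for orientation, not code from the book's FORTRAN appendix.

```python
import math

def golden_section_search(f, a, b, tol=1e-6):
    """Minimize a unimodal single-variable function f on the interval [a, b]."""
    inv_phi = (math.sqrt(5) - 1) / 2          # 1/phi, about 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):                       # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                                 # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Example: the minimum of (x - 2)^2 on [0, 5] is at x = 2
x_min = golden_section_search(lambda x: (x - 2) ** 2, 0.0, 5.0)
```

Each iteration shrinks the bracketing interval by a constant factor, which is why such methods are easy to express in a step-by-step, language-agnostic form.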
Problems demanding globally optimal solutions are ubiquitous, yet many are intractable when they involve constrained functions having many local optima and interacting, mixed-type variables. The differential evolution (DE) algorithm is a practical approach to global numerical optimization which is easy to understand, simple to implement, reliable, and fast. Packed with illustrations, computer code, new insights, and practical advice, this volume explores DE in both principle and practice. It is a valuable resource for professionals needing a proven optimizer and for students wanting an evolutionary perspective on global numerical optimization.
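To show how compact DE is in practice, the sketch below implements the classic DE/rand/1/bin scheme in Python. The parameter values and the example objective are illustrative assumptions, not the book's reference code.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, generations=200, seed=0):
    """Minimize f over box bounds with the classic DE/rand/1/bin scheme."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T    # bounds: list of (low, high) per dimension
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct vectors, all different from the target i
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # differential mutation
            cross = rng.random(dim) < CR                # binomial crossover mask
            cross[rng.integers(dim)] = True             # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fitness[i]:                   # greedy one-to-one selection
                pop[i], fitness[i] = trial, f_trial
    best = np.argmin(fitness)
    return pop[best], fitness[best]

# Example: minimize the sphere function in 5 dimensions
x_best, f_best = differential_evolution(lambda x: float(np.sum(x**2)), [(-5, 5)] * 5)
```

The entire optimizer fits in a few dozen lines and has only three control parameters (population size, F, and CR), which is much of the appeal described above.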
This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. It avoids heavy formalism, opening the subject to a broader audience than texts overloaded with notation and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. The second part focuses on solution space variants such as multimodal, constrained, and multi-objective solution spaces. The third part briefly introduces theoretical tools for GAs, their intersections and hybridizations with machine learning, and highlights selected promising applications.
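To ground the basic concepts mentioned above, here is a minimal binary genetic algorithm in Python using tournament selection, one-point crossover, and bit-flip mutation. It is a generic illustration under assumed parameter values, not an algorithm taken from the book.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=50, generations=100,
                      crossover_rate=0.9, mutation_rate=0.01, seed=0):
    """Minimal binary GA: tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]
        for ind, s in zip(pop, scores):
            if s > best_fit:
                best, best_fit = ind[:], s          # remember the best individual seen

        def select():
            """Binary tournament: the fitter of two random individuals wins."""
            i, j = rng.randrange(pop_size), rng.randrange(pop_size)
            return pop[i] if scores[i] > scores[j] else pop[j]

        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            if rng.random() < crossover_rate:       # one-point crossover
                cut = rng.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):                      # bit-flip mutation
                children.append([1 - g if rng.random() < mutation_rate else g for g in c])
        pop = children[:pop_size]
    return best, best_fit

# Example: maximize the number of ones in the bit string ("OneMax")
solution, value = genetic_algorithm(fitness=sum)
```

The selection, crossover, and mutation operators shown here are the evolutionary operators the book's first part introduces; the later parts then adapt this basic loop to constrained, multimodal, and multi-objective solution spaces.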