General-Purpose Optimization Through Information Maximization
Details
This book examines the mismatch between discrete programs, which lie at the center of modern applied mathematics, and the continuous-space phenomena they simulate. The author considers whether we can imagine continuous spaces of programs, and asks what the structure of such spaces would be and how they would be constituted. He proposes a functional analysis of program spaces viewed through the lens of iterative optimization.
The author begins with the observation that optimization methods such as Genetic Algorithms, Evolution Strategies, and Particle Swarm Optimization can be analyzed as Estimation of Distribution Algorithms (EDAs), in that they can be formulated as conditional probability distributions. The probabilities themselves are mathematical objects that can be compared and operated on, and thus many methods in Evolutionary Computation can be placed in a shared vector space and analyzed using techniques of functional analysis. The core ideas of this book expand from that concept, eventually incorporating all iterative stochastic search methods, including gradient-based methods. Inspired by work on Randomized Search Heuristics, the author covers all iterative optimization methods, not just evolutionary ones. The No Free Lunch Theorem is viewed as a useful introduction to the broader field of analysis that comes from developing a shared mathematical space for optimization algorithms. The author brings in intuitions from several branches of mathematics, such as topology, probability theory, and stochastic processes, and provides substantial background material to make the work as self-contained as possible.
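To make the EDA framing concrete, the following is a minimal, hypothetical sketch (not code from the book; the names toy_eda and fit_distribution are illustrative): a toy Gaussian estimation-of-distribution optimizer for one-dimensional minimization, written so that each generation is sampled from a probability distribution conditioned on the best points selected so far.

```python
# Illustrative sketch only: a toy 1-D Gaussian EDA in which the "optimizer"
# is the rule that maps selected points to the distribution generating the
# next generation of candidates.
import numpy as np

def fit_distribution(elite):
    """Map the selected (elite) points to the parameters of the next
    sampling distribution -- the conditional-distribution view of an EDA."""
    return float(np.mean(elite)), float(np.std(elite) + 1e-6)

def toy_eda(f, generations=30, pop_size=20, elite_frac=0.5, seed=0):
    rng = np.random.default_rng(seed)
    mean, std = 0.0, 2.0                     # uninformed initial distribution
    best = None
    for _ in range(generations):
        pop = rng.normal(mean, std, size=pop_size)   # sample x ~ P(. | current model)
        pop = sorted(pop, key=f)                     # rank candidates by objective value
        elite = pop[: int(pop_size * elite_frac)]    # keep the best fraction
        mean, std = fit_distribution(elite)          # condition the model on the elite
        best = pop[0] if best is None or f(pop[0]) < f(best) else best
    return best

print(toy_eda(lambda x: (x - 3.0) ** 2))     # should print a value close to 3
```

In this framing, the object of interest is the mapping from the search history to the sampling distribution, rather than any particular run; it is such mappings that the book proposes to place in a shared mathematical space and compare.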
The book will be valuable for researchers in the areas of global optimization, machine learning, evolutionary theory, and control theory.
Optimization is a fundamental problem that recurs across scientific disciplines and is pervasive in informatics research, from statistical machine learning to probabilistic models to reinforcement learning. In the final main chapter, the author shows that the basic mathematical objects developed to account for stochastic optimization have applications far beyond optimization, treating them as stimulus-response systems, with the key intuition coming from the Optimization Game.
About the Author
Alan J. Lockett received his PhD in 2012 at the University of Texas at Austin under the supervision of Risto Miikkulainen, where his research topics included estimation of temporal probabilistic models, evolutionary computation theory, and learning neural network controllers for robotics. After a postdoc at IDSIA (Lugano) with Jürgen Schmidhuber, he now works for CS Disco in Houston.
Contents
Introduction.- Review of Optimization Methods.- Functional Analysis of Optimization.- A Unified View of Population-Based Optimizers.- Continuity of Optimizers.- The Optimization Process.- Performance Analysis.- Performance Experiments.- No Free Lunch Does Not Prevent General Optimization.- The Geometry of Optimization and the Optimization Game.- The Evolutionary Annealing Method.- Evolutionary Annealing in Euclidean Space.- Neuroannealing.- Discussion and Future Work.- Conclusion.- Appendix A: Performance Experiment Results.- Appendix B: Automated Currency Exchange Trading.
Further Information
- General Information
- GTIN 09783662620069
- Language English
- Edition 1st edition 2020
- Dimensions H241mm x W160mm x D37mm
- Year 2020
- EAN 9783662620069
- Format Hardcover
- ISBN 3662620065
- Publication date 17.08.2020
- Title General-Purpose Optimization Through Information Maximization
- Author Alan J. Lockett
- Subtitle Natural Computing Series
- Weight 1027g
- Publisher Springer Berlin Heidelberg
- Number of pages 580
- Reading motivation Understanding
- Genre Computer Science