Abstraction in Reinforcement Learning


Details

Reinforcement learning is the problem faced by an agent that must learn behavior through trial-and-error interactions with a dynamic environment. Usually, the problem to be solved contains subtasks that repeat in different regions of the state space. Without any guidance, an agent has to learn the solutions of all subtask instances independently, which in turn degrades the performance of the learning process. In this work, we propose two novel approaches for building connections between different regions of the search space. The first approach efficiently discovers abstractions in the form of conditionally terminating sequences and represents these abstractions compactly as a single tree structure; this structure is then used to determine the actions to be executed by the agent. In the second approach, a similarity function between states is defined based on the number of common action sequences; by using this similarity function, updates on the action-value function of a state are reflected to all similar states, which allows experience acquired during learning to be applied to a broader context. The effectiveness of both approaches is demonstrated empirically over various domains.
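The second approach described above can be illustrated with a minimal sketch. This is not the book's algorithm; the similarity function here (Jaccard overlap of observed action sequences) and all names (`sequence_similarity`, `similarity_q_update`, the threshold of 0.5) are hypothetical stand-ins for the general idea of reflecting one state's temporal-difference update to similar states:

```python
# Illustrative sketch only: Q-learning where a TD update to one state's
# action-value is also applied, weighted by a similarity score, to
# states deemed similar. The similarity definition is an assumption.
from collections import defaultdict

def sequence_similarity(seqs_a, seqs_b):
    """Hypothetical similarity: Jaccard overlap of the sets of action
    sequences observed from two states."""
    if not seqs_a and not seqs_b:
        return 0.0
    return len(seqs_a & seqs_b) / len(seqs_a | seqs_b)

def similarity_q_update(Q, history, s, a, r, s_next, states,
                        alpha=0.1, gamma=0.95, threshold=0.5):
    """One Q-learning step whose TD error is reflected, scaled by
    similarity, to every sufficiently similar state."""
    td = r + gamma * max(Q[s_next].values(), default=0.0) - Q[s][a]
    for s2 in states:
        sim = 1.0 if s2 == s else sequence_similarity(history[s], history[s2])
        if sim >= threshold:
            Q[s2][a] += alpha * sim * td

# Tiny usage example: s2 shares an action sequence with s1, s3 does not.
Q = defaultdict(lambda: defaultdict(float))
history = {
    "s1": {("up", "right"), ("left",)},
    "s2": {("up", "right")},
    "s3": {("down",)},
}
similarity_q_update(Q, history, "s1", "up", 1.0, "goal", ["s1", "s2", "s3"])
```

After this single update, `Q["s1"]["up"]` receives the full step, `Q["s2"]["up"]` receives a similarity-weighted share, and `Q["s3"]["up"]` stays at zero.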

About the Author

Sertan Girgin received his Ph.D. degree in Computer Engineering from Middle East Technical University, Turkey in 2007 and holds a double major in Mathematics. His research interests include Reinforcement Learning, Distributed AI and Multi-Agent Systems, Biologically Inspired Robotics, and Evolutionary Algorithms.



Further Information

  • General Information
    • GTIN 09783639136524
    • Language English
    • Year 2009
    • EAN 9783639136524
    • Format Paperback (Kartonierter Einband)
    • ISBN 978-3-639-13652-4
    • Title Abstraction in Reinforcement Learning
    • Author Sertan Girgin
    • Subtitle Using Option Discovery and State Similarity
    • Publisher VDM Verlag
    • Pages 104
    • Genre Computer Science
