abstract
(eng) |
A decision-making (DM) agent models its environment and quantifies its DM preferences. An adaptive agent models them locally, near the realisation of the behaviour of the closed DM loop. Due to this, a simple tool set often suffices for solving complex dynamic DM tasks. The inspected Bayesian agent relies on a unified learning and optimisation framework, which works well when tailored by a range of case-specific options. Many of them can be made off-line. These options concern the sets of involved variables, the knowledge and preference elicitation, structure estimation, etc. Still, some metaparameters need an on-line choice. This concerns, for instance, a weight balancing exploration with exploitation, a weight reflecting the agent's willingness to cooperate, a discounting factor, etc. Such options influence, often vitally, the DM quality, and their adaptive tuning is needed. Specific ways exist, for instance, a data-dependent choice of a forgetting factor serving to track parameter changes. A general methodology is, however, missing. The paper opens a pathway to it. The solution uses a hierarchical feedback exploiting a generic, DM-related, observable mismodelling indicator. The paper presents and justifies the theoretical concept, and outlines and illustrates its use. |