Summary. This teaser post sketches our current ideas for dealing with more complex environments. It will ultimately be replaced by one or more longer posts describing these in more detail. Reach out if you would like to collaborate on these issues.
Multi-dimensional aspirations
For real-world tasks that are specified in terms of more than a single evaluation metric, e.g., how many apples to buy and how much money to spend at most, we can generalize Algorithm 2 from aspiration intervals to convex aspiration sets as follows:
Assume there are $d>1$ evaluation metrics $u_i$, combined into a vector-valued evaluation metric $u=(u_1,\dots,u_d)$.
Preparation: Pick $d+1$ linear combinations $f_j$ in the space spanned by these metrics so that their convex hull is full-dimensional and contains the origin, and consider the $d+1$ policies $\pi_j$, each of which maximizes the expected value of the corresponding function $f_j$. Let $V_j(s)$ and $Q_j(s,a)$ be the expected values of $u$ when using $\pi_j$ in state $s$ or after using action $a$ in state $s$, respectively (see Fig. 1). Let the admissibility simplices $\mathcal{V}(s)$ and $\mathcal{Q}(s,a)$ be the simplices spanned by the vertices $V_j(s)$ and $Q_j(s,a)$, respectively (red and violet triangles in Fig. 1). They replace the feasibility intervals used in Algorithm 2.
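For intuition, here is a minimal numerical sketch in Python/NumPy of this preparation step for $d=2$; the directions and value estimates are made-up illustrations, not outputs of the actual algorithm:

```python
import numpy as np

d = 2  # number of evaluation metrics, e.g. (apples bought, money spent)

# d+1 linear combinations f_j whose convex hull is full-dimensional and
# contains the origin: here, unit vectors at the angles of a regular triangle.
angles = 2 * np.pi * np.arange(d + 1) / (d + 1)
f = np.column_stack([np.cos(angles), np.sin(angles)])   # shape (d+1, d)

# Each policy pi_j maximizes the expected value of f_j . u; V_j(s) is the
# expected u under pi_j from state s. Made-up values for illustration:
V_vertices = np.array([[10.0, 4.0],    # V_0(s)
                       [ 2.0, 9.0],    # V_1(s)
                       [ 1.0, 1.0]])   # V_2(s)

# The admissibility simplex V(s) is the convex hull (here: a triangle)
# of these d+1 vertices; it replaces the feasibility interval.
```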
Policy: Given a convex state-aspiration set $\mathcal{E}(s)\subseteq\mathcal{V}(s)$ (central green polyhedron in Fig. 1), compute its midpoint (centre of mass) $m$ and consider the $d+1$ segments $\ell_j$ from $m$ to the corners $V_j(s)$ of $\mathcal{V}(s)$ (dashed black lines in Fig. 1). For each of these segments $\ell_j$, let $A_j$ be the (nonempty!) set of actions $a$ for which $\ell_j$ intersects $\mathcal{Q}(s,a)$. For each $a\in A_j$, compute the action-aspiration $\mathcal{E}(s,a)\subseteq\mathcal{Q}(s,a)$ by shifting a copy $C_j$ of $\mathcal{E}(s)$ along $\ell_j$ towards $V_j(s)$ until the intersection of $C_j$ and $\ell_j$ is contained in the intersection of $\mathcal{Q}(s,a)$ and $\ell_j$ (half-transparent green polyhedra in Fig. 1), and then intersecting $C_j$ with $\mathcal{Q}(s,a)$ to give $\mathcal{E}(s,a)$ (yellow polyhedra in Fig. 1). Then pick one candidate action from each $A_j$ and randomize between these $d+1$ actions in proportions chosen so that the corresponding convex combination of the sets $\mathcal{E}(s,a)$ is included in $\mathcal{E}(s)$ (a sketch of this step follows below). Note that this is always possible because $m$ is in the convex hull of the sets $C_j$ and the shapes of the sets $\mathcal{E}(s,a)$ "fit" into $\mathcal{E}(s)$ by construction.
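The randomization proportions reduce to a small linear system once each chosen action's aspiration midpoint is written as $m_j = m + t_j\,(V_j(s) - m)$ with a shift parameter $t_j \in [0,1]$. A minimal sketch under this assumption (the function name, interface, and numbers are ours, purely for illustration):

```python
import numpy as np

def mixing_probabilities(m, V_vertices, t):
    """Probabilities p_j >= 0 with sum_j p_j = 1 and sum_j p_j m_j = m,
    where m_j = m + t[j] * (V_vertices[j] - m) is the midpoint of the
    j-th action-aspiration, shifted along segment l_j."""
    n = len(V_vertices)                    # n = d + 1 candidate actions
    # d equations sum_j p_j t_j (V_j - m) = 0, plus one normalization row.
    A = np.vstack([(t[:, None] * (V_vertices - m)).T, np.ones(n)])
    b = np.append(np.zeros(len(m)), 1.0)
    p = np.linalg.solve(A, b)              # square (d+1) x (d+1) system
    # m lies in the convex hull of the m_j by construction, so p should
    # already be nonnegative; clipping only guards against round-off.
    p = np.clip(p, 0.0, None)
    return p / p.sum()

# Hypothetical numbers for d = 2:
m = np.array([4.0, 4.0])
V_vertices = np.array([[10.0, 4.0], [2.0, 9.0], [1.0, 1.0]])
t = np.array([0.5, 0.4, 0.6])
print(mixing_probabilities(m, V_vertices, t))  # ~[0.31, 0.33, 0.36]
```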
Aspiration propagation: After observing the successor state $s'$, the action-aspiration $\mathcal{E}(s,a)$ is rescaled linearly from $\mathcal{Q}(s,a)$ to $\mathcal{V}(s')$ to give the next state-aspiration $\mathcal{E}(s')$; see Fig. 2.
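One natural way to make this linear rescaling concrete is to keep barycentric coordinates fixed: express the points of $\mathcal{E}(s,a)$ in barycentric coordinates with respect to $\mathcal{Q}(s,a)$ and reinterpret the same coordinates in $\mathcal{V}(s')$. A sketch (the function names are ours, not the project's code):

```python
import numpy as np

def barycentric(P, X):
    """Barycentric coordinates of the points X (rows) with respect to
    the simplex whose d+1 vertices are the rows of P (shape (d+1, d))."""
    A = np.vstack([P.T, np.ones(len(P))])        # (d+1, d+1)
    B = np.vstack([X.T, np.ones(X.shape[0])])    # (d+1, n)
    return np.linalg.solve(A, B).T               # (n, d+1), rows sum to 1

def propagate_aspiration(E_sa, Q_sa, V_s_next):
    """Map the vertices of the action-aspiration E(s,a) from the simplex
    Q(s,a) to the simplex V(s') by keeping their barycentric coordinates;
    the image is the next state-aspiration E(s')."""
    lam = barycentric(Q_sa, E_sa)   # positions inside Q(s,a)
    return lam @ V_s_next           # the same positions inside V(s')
```

Applied to the vertices of a polyhedral $\mathcal{E}(s,a)$, this yields the vertices of $\mathcal{E}(s')$.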
(We also consider other variants of this general idea.)
Hierarchical decision making
A common way of planning complex tasks is to decompose them into a hierarchy of two or more levels of subtasks. Similarly to existing approaches in hierarchical reinforcement learning, we imagine that an AI system can make such hierarchical decisions as depicted in the diagram below (shown for only two hierarchical levels, but obviously generalizable to more levels); a minimal code sketch of the same control flow follows.
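In code, such a two-level decision loop might look roughly as follows; all interfaces (`env`, `high_level`, `low_level`, and their methods) are hypothetical placeholders rather than the project's actual design:

```python
# An entirely hypothetical sketch of a two-level decision loop: the high
# level picks a subtask together with a sub-aspiration for it, the low
# level executes the subtask with primitive actions, and the high level
# then propagates its own aspiration before choosing the next subtask.
def run_two_level_episode(env, high_level, low_level, state, aspiration):
    while not env.is_terminal(state):
        subtask, sub_aspiration = high_level.decide(state, aspiration)
        while not subtask.is_done(state):
            action, sub_aspiration = low_level.step(state, subtask, sub_aspiration)
            state = env.transition(state, action)
        aspiration = high_level.propagate(aspiration, subtask, state)
    return state
```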