Okasha's paper addresses emerging discussions in biology about organisms-as-agents in particular, part of what has been called the "Return of the Organism" turn in philosophy of biology.
In the paper, he writes: "Various concepts have been offered as ways of fleshing out this idea of organismic autonomy, including goal-directedness, functional organization, emergence, self-maintenance, and individuality. Agency is another possible candidate for the job."
This seems like a reasonable stance as far as I can tell, since organisms seem to have a structural integrity that makes their delineated Cartesian boundaries well-defined.
For collectives, a similar discussion may surface additional upsides and downsides to agency concepts that do not apply at the organism level.
As an addendum, it seems to me that you may not necessarily need a 'long-term planner' (or 'time-unbounded agent') in the environment. A similar outcome may also be attainable if the environment contains a tiling of time-bounded agents who can all trade with each other in ways such that the overall trade network implements long-term power-seeking.
First of all, these are all very rough attempts at demarcating research tastes.
It seems possible to aim at solving P1 without thinking much about P4, if (a) you advocate a ~Butlerian pause, or (b) you are working on aligned paternalism as the target behavior (where AI(s) are responsible for keeping humans happy, and humans have no residual agency or autonomy remaining).
Also, many people who approach the problem from a P4 perspective focus on the human-AI interface, where most of the relevant technical problems lie; but this can draw their attention away from issues of mesa-optimizers or emergent agency, despite the massive importance of those issues to their project in the long run.