[Metadata: crossposted from https://tsvibt.blogspot.com/2022/08/control.html. First completed 3 July 2022.]

Previous: Structure, creativity, and novelty

I don't know how to define control or even point at it except as a word-cloud, so it's probably wanting to be refactored. The point of talking about control is to lay part of the groundwork for understanding what determines what directions a mind ends up pushing the world in. Control is something like what's happening when values or drives are making themselves felt as values or drives. ("Influence" = "in-flow" might be a better term than "control".)

Definitions of control
Control is when an element makes another element do something. This relies on elements "doing stuff".
Control is when an element {counterfactually, evidentially, causally, logically...} determines {the behavior, the outcome of the behavior} of an assembly of elements.
Control is when an element modifies the state of an element. This relies on elements having a state. Alternatively, control is when an element replaces an element with a similar element.
Control is when an element selects something according to a criterion.
These definitions aren't satisfying in part because they rely on the pre-theoretic ideas of "makes", "determines", "modifies", "selects". Those ideas could be defined precisely in terms of causality, but doing that would narrow their scope and elide some of the sense of "control". To say, pre-theoretically, "My desire for ice cream is controlling where I'm walking.", is sometimes to say "The explanation for why I'm walking along such-and-such a path, is that I'm selecting actions based on whether they'll get me ice cream, and that such-and-such a path leads to ice cream.", and explanation in general doesn't have to be about causality. Control is whatever lies behind the explanations given in answer to questions like "What's controlling X?" and "How does Y control Z?" and "How can I control W?".
Another way the above definitions are unsatisfactory is that they aren't specific enough; some of them would say that if I receive a message and then update my beliefs according to an epistemic rule, that message controls me. That might be right, but it's a little counterintuitive to me.
There's a tension between describing the dynamics of a mind--how the parts interact over time--vs. describing the outcomes of a mind, which is more easily grasped with gemini modeling of "desires". (I.e. by having your own copy of the "desire" and your own machinery for playing out the same meaning of the "desire" analogously to the original "desire" in the original mind.) I'm focusing on dynamical concepts because they seem more agnostic as discussed above, but it might be promising to instead start with presumptively unified agency and then distort / modify / differentiate / deform / vary the [agency used to gemini model a desire] to allow for modeling less-presumptively-coherent control. (For discussion of the general form of this "whole->wholes" approach, distinct from the "parts->wholes" approach, see Non-directed conceptual founding.) Another definition of control in that vein, a variation on a formula from Sam Eisenstat:
Control is an R-stable relationship between an R-stable element and R-unstable prior/posterior elements (which therefore play overlapping roles). "R-stable" means stable under ontological Revolutions. That is, we have C(X,Y) and C(X,Z), where X and C are somehow the same before and after an ontological revolution, and Y and Z aren't the same.
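A minimal formalization sketch of that formula (my notation; the prose above underdetermines the details):

```latex
% A sketch, assuming r is the (partial) translation map induced by an
% ontological revolution. C and X are R-stable; Y and Z are not the same:
\[
  r(C) = C, \qquad r(X) = X, \qquad r(Y) \neq Z,
\]
% while the control relationship persists across the revolution:
\[
  C(X, Y) \ \text{(before)}, \qquad C(X, Z) \ \text{(after)}.
\]
```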
Control vs. values
I'm talking about control rather than "values" because I don't want to assume:
that there are terminal values,
that there's a clear distinction between terminal values and non-terminal values,
that there are values stable across time and mental life (e.g. self-modification, ontological revolutions),
that there's a fixed world over which values could be defined,
that there's a clear distinction/unentanglement between values and other elements,
that there aren't fundamental conflicts between values within a mind,
that if a mind pushes the world in a direction, that direction must be "represented" in the mind's values or in any of the mind's elements,
that the relevant questions are about stable features of the mind (such as terminal values after reaching reflective stability) rather than about transient features,
that there is, or isn't, or possibly is, or possibly isn't, a "wrapper-mind" with fixed goals or "loci of control" (elements of the mind that determine effects of the mind to an extent disproportionate to the size of the elements, e.g. a limbic system),
that the mind already incorporates convergent instrumental drives and tools, such as being non-Dutch-book-able,
that control is about external outcomes, as opposed to being about internal / intermediate outcomes or something else (e.g. behavior rather than "outcomes").
Expanding on this point: note that the definitions of control given above mostly avoid talking about outcomes. That's because I want to also talk about the control that's exerted by [an agent A minus its utility function]. You could (for some sorts of agents, maybe) slot in a totally different utility function, and the resulting agent A' would have a totally different outcome. But A and A' have something in common: the decision-making machinery is organized in analogous ways, although it will go down many non-overlapping lines of thought in A and A' because of the different priorities held by A and A'. The sense in which the shared decision-making machinery controls the thoughts and actions of A and A' should be included in the concept of control. In particular, this decision-making machinery includes some way of interfacing with the novelty required for the agent to become highly capable, and that task may be very non-trivial.
Word-cloud related to control
Want. Cognate with "vacuum", as in "having an emptiness, lacking something". This suggests homeostatic pressure and satisficing.
Try, attempt. "Try" from Old French "trier" ("to choose, test, verify"). "Attempt" = "ad-tent" ("towards-test") (analogous to "attend"; cognate with "tentative", "tense", "-tend", "-tain"). Suggests experimenting to see what works, trial and error.
Desire. Latin "de-sidus" ("from the stars"), cognate with "sidereal" ("of the stars"). Suggests transcendence, universality, wide scope; hope, things out of reach.
Care. From Proto-Germanic *karō ("care, sorrow, cry"), from Proto-Indo-European *ǵeh₂r- ("voice, exclamation"); distantly cognate with "garrulous" ("talkative"). Suggests depth, relations to other agents; negative reinforcement, turning homeostatic pressure into strategic preservation by projecting negative reinforcement with imagination.
Control. "Contra-rotulus" ("against a little wheel"; "a register used to verify accounts"). Suggests tracking, registration, feedback cycles.
Strategy. From στρατός (stratós, "army"; from Proto-Indo-European *ster- ("to spread, stretch out, extend"), whence also "strew", "structure", "-struct") + ἄγω (ágō, "I lead, I conduct"; cognate with "act", "agent"). So, something like "what is done by what conducts an extension". Suggests organization, orchestration, integration; initiation, agitation, without-which-not.
Direct. "Dis-rego" ("apart-straighten", cognate "right"), I think as in telling something where to go. Suggests making things handier by putting them into more specific legible contexts.
Select. "Se-lect", "se" ("away", as in "seduce" ("lead away"), "seclude" ("shut away"), "secede" ("go apart")) and "lect" from PIE *leǵ- ("gather", cognate with "logos" and "-lect" like "dialect"). Suggests taking something from one context and then putting it into another context by naming it and gathering it with other things.
(Some of the other etymons of the following words are also interesting.)

Choose, constrain; sway, pursue, force, pressure, pull, push; effect, cause, make, determine, modify; power, influence, reign, rule, manage, regulate, lead, obey, prescribe, hegemony, preside, principal, authority, govern, cybernetic, order, command; steer, pilot, compass, rudder, reins, helm, drive; organize, orchestrate, design, manufacture; manipulate, craft, use, tool; supervise, guide, instruct, wield, ambition; wish, will, aim, target, value, utility function, objective function, criterion.

Aspects of control
Control transmission or non-transmission. If X controls Z by controlling how Y controls Z, that's transmission (through a line of control). Examples: a general giving orders to a commander giving orders to privates; a hunger calling on a route finder to call on spatial memory of where restaurants are; a mathematician engineering a concept to compactly describe something, so that future thoughts using that concept will proceed smoothly; a programmer rewriting a function so that it has different functional behavior when applied in some context. Non-example: an optimizer speeding up a piece of code. The optimized code, when applied, still does all the same stuff as the unoptimized code; the code optimizer hasn't controlled the application of the optimized code. (This isn't entirely right: you could use faster code in new ways because it's faster, and being faster overall is some effect. But those are weak effects of a specific kind, and don't show up in the "internal topology" of the computation. In general, function extensionality implies a kind of control non-transmission, as do APIs, Markov blankets, and any kind of screening off.)
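A toy contrast in code (names and setup invented, purely illustrative): rewriting a function's extensional behavior transmits control to every later application of it, while an optimization that preserves input-output behavior doesn't.

```python
# Toy illustration (invented example): transmission vs. non-transmission.
from functools import lru_cache

def route_v1(start, goal):
    """Original routing function."""
    return f"shortest path {start}->{goal}"

# Transmission: a rewrite that changes functional behavior. The rewriter
# now (partly) controls every later application of the function.
def route_v2(start, goal):
    return f"cheapest toll-free path {start}->{goal}"

# Non-transmission: an "optimizer" that only speeds the function up.
# Extensionally route_fast == route_v1, so downstream applications are
# screened off from the optimizer's influence (modulo speed itself).
@lru_cache(maxsize=None)
def route_fast(start, goal):
    return route_v1(start, goal)

assert route_fast("home", "work") == route_v1("home", "work")
```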
Non-fungibility, non-conservation. Unless shown otherwise, there's no conservation or fungibility of control. For example, if two people simultaneously each throw a rock at a window, both cause the window to break. An agent's decision-making machinery and its outcome-target both determine the agent's effect on the world, but not interchangeably (the outcome-target determines the direction and the decision-making determines the magnitude). The parts of a machine all have to work for the machine to work.
World-robustness. Control that is exerted in many possible worlds.
Control distance / depth. Through how many elements is control serially transmitted? Through how many "levels" or "domains" or "realms" is control serially transmitted? Through how much time and space? Is new understanding about a domain "far" from a controlling element recruited to have "near" effects?
Control breadth. Across how many different domains (weighted by control distance) does one element exert control?
Co-control. What's happening with an element that's being controlled.
Co-control context-independence. I.e., being generally useful, predictable, manipulable, programmable, applicable; possibilizing.
Control stability. Is the control still exerted after an ontological revolution? E.g. you keep your house warm by putting in your fireplace materials that are listed in your alchemical handbook as "high in phlogiston", then you learn about oxidization, and then you still put those materials in your fireplace (now thinking of them as "high in rapidly oxidizable stuff").
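A toy rendering of that example (labels invented): the exerted control, which materials go in the fire, is unchanged even though the ontology describing them is replaced.

```python
# Toy sketch (invented labels): the same control persists across an
# ontological revolution that replaces how the materials are described.
pre_revolution  = {"oak log": "high in phlogiston", "stone": "low in phlogiston"}
post_revolution = {"oak log": "rapidly oxidizable", "stone": "not oxidizable"}

def into_the_fire(ontology, burn_labels):
    return {m for m, label in ontology.items() if label in burn_labels}

# The selection (the control over what gets burned) is R-stable:
assert (into_the_fire(pre_revolution, {"high in phlogiston"})
        == into_the_fire(post_revolution, {"rapidly oxidizable"}))
```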
Control amplitude. The force of the control. Optimization power is an example. A distinct example is if you turn your thermostat to 90F and turn your window AC unit on: the AC unit is going to lose and the room is going to get hot, but the more powerful the AC unit, the harder the furnace has to work. The AC unit has very little optimization power (over the temperature) in this context, since it can only barely change the actual temperature, but it has nonnegligible control amplitude (over the temperature), since it can force the furnace to work noticeably harder.
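A toy simulation of that setup (all constants invented): the AC barely moves the equilibrium temperature, but it forces a large change in furnace output.

```python
# Toy simulation (invented numbers): a strong furnace holding a 90F setpoint
# against a window AC. The AC barely changes the equilibrium temperature
# (little optimization power over temperature) but greatly changes how hard
# the furnace works (nonnegligible control amplitude).
def equilibrium(ac_btu, setpoint=90.0, outdoor=70.0,
                furnace_gain=5000.0, furnace_max=20000.0, leak=100.0):
    temp = outdoor
    for _ in range(10000):  # relax to steady state
        furnace = min(furnace_max, max(0.0, (setpoint - temp) * furnace_gain))
        net = furnace - ac_btu - leak * (temp - outdoor)
        temp += net * 1e-4
    return temp, furnace

for ac in (0.0, 10000.0):
    temp, furnace = equilibrium(ac)
    print(f"AC={ac:6.0f}  temp={temp:5.1f}F  furnace={furnace:7.0f} BTU/h")
# Temperature drops only ~2F, but furnace output rises roughly sixfold.
```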
Explicitness. Some control is explicit: supergoals, terminal values, back-chaining agency. In contrast, some control routes through not-yet-realized creativity: reinforcement learning. (This is an important concept for comparing novelty with control: implicit control gives up control to the external control exercised by the novelty manifested by the creativity it calls on. This roughly corresponds to saying that inner optimizers happen.)
Internal / external. All elements control the inside of themselves, e.g. the idea of the group D3 is a structure of control in that it's constituted in part by controlling the combination of two distinct reflections to be a non-trivial rotation. Some elements don't control anything else (e.g. a mental picture of a rock doesn't control anything else without itself being controlled to control), while others do.
Ambition. How far would this control push the world if unchecked, unconstrained, unopposed, unblocked?
Yearning vs. pursuing. Yearning is waiting passively and seizing opportunities when they present themselves on their own; following lines of control that are already known, handy, interfaced with, incorporated, integrated. Pursuing is seeking and creating new lines of control; calling on creativity; routing around a given stubborn failure by recursing on trying new things, by seeking knowledge, by expanding into new domains, by instigating ontological revolutions, by exploring possible epistemic stances. (The line between yearning and pursuing is blurred when there are lines of control, already integrated, that include seeking and creating new lines of control.)
Locality. I haven't analyzed this concept. There's in-degree of control / sensitivity of the controlled thing to the controller; there's out-degree weighted by in-degree of the target; there's integration (combining information from different domains, making whole-world hypotheses, making comparisons); there's orchestration / organization / coordination / planning / combination / assembly / arrangement / recruitment; there are bottlenecks through which control flows; and in contrast, there's participating in an assembly that's controlling something.
Criterial delegation. A type of transmission. Controlling an element E by setting a criterion which E will apply when E controls other elements. (Requires that the delegate has and applies "criteria", e.g. agents with goals or search processes with success criteria.)
Goal delegation. A type of criterial delegation where the criterion is a goal. Controlling an outcome by setting the outcome as a target of another element's control. (Requires that the delegate can have "targets", e.g. agents with outcomes as goals; implies the controlled element has some control breadth (so that "goal" has meaning beyond "search criterion").)
Not all criterial delegation is goal delegation: setting the expected-utility threshold applied by a quantilizer is criterial delegation because it's changing the criterion applied by the quantilizer, but it's not changing the direction of the outcome selected by the criterion. Other examples: setting a success criterion for a domain-narrow search, setting a homeostatic target for a simple feedback system. (Neither of those systems have goals, so they can't be goal delegates.)
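A sketch of the distinction in code (toy interfaces, mine, not from the definitions above): criterial delegation sets the criterion a delegated search applies; goal delegation is the special case where that criterion is an outcome-target.

```python
# Toy interfaces (invented) for criterial vs. goal delegation.
def search(options, criterion):
    """A delegate that applies whatever criterion it's handed."""
    return [o for o in options if criterion(o)]

def utility(o):
    return o  # stand-in expected utility

options = range(100)

# Criterial delegation that is NOT goal delegation: setting a threshold
# (like a quantilizer's expected-utility cutoff) changes the criterion the
# delegate applies, without setting any outcome as its target.
above_cutoff = search(options, criterion=lambda o: utility(o) >= 90)

# Goal delegation: the criterion just is a goal, i.e. an outcome set as
# the delegate's target.
target = 42
achieves_target = search(options, criterion=lambda o: o == target)
```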
Superiority. E₁ is superior to E₂ when E₁'s control is higher than E₂'s in amplitude, breadth, depth, creativity, externality, pursuantness. (Note that locality, ambitiousness, and stability aren't listed.)
Domination. E₁ controlling E₂ to make / keep it the case that E₁ is superior to E₂. Done e.g. by directly cutting off E₂'s access to domains, by punishing or threatening to punish E₂ for increasing its control, by weakening E₂, and generally keeping E₂ within bounds so that E₁ can't be overpowered by E₂ (as doing so becomes convergently instrumental for E₂, though it may not be the type of element that picks up on convergently instrumental things). Satisficers are more easily dominable than optimizers. The point is to make E₂ more predictable, understandable, and reliable (because it's not pursuing other things), and less of a threat.
Cybernetic control. A specific flavor of control that's empirically common: E₁ criterially delegates to E₂, and E₁ is (mostly) superior to E₂.
"Cybernetic" = steering, cognate with "govern" and possibly "whirl" via PIE *kʷerb- ("to turn").
Examples: setting the target of a homeostat / control system, setting the success criterion of a search, setting subgoals of subagents, giving orders, subsidizing and regulating an industry.
Non-examples: getting clear on how D3 works is control, but it's not cybernetic control; the idea of D3 might later be involved in controlling other elements, but not with criteria set by the element that orchestrated getting clear on D3. Designing a car is mainly non-cybernetic control because the car doesn't control anything. But making a detailed design for a car has a significant admixture of cybernetic control, whenever the designer makes decisions with the manufacturing process in mind, because parts of the design will control parts of the process of manufacturing the car, e.g. the decision about axle thickness provides a target-point for the lathe (or whatever). Making a request of a free person isn't cybernetic control because they can refuse your request and because you aren't superior to them (these two things are related of course). (I haven't fully disentangled superiority and domination from an element actually exerting its capacity to threaten / extort another to accept delegation or other control, which seems to require conflict and communication.)
Note that not all cybernetic control is goal delegation because there's criterial delegation that's not goal delegation.
E₁ is only mostly superior to E₂; otherwise there'd be no point in delegating to E₂. Centrally, E₁ is superior to E₂ except that E₂'s control has higher amplitude than E₁'s for some narrow set of kinds of control.
Cybernetic control is common because if E₁ is superior to E₂, that makes it easier and more effective for E₁ to criterially delegate to E₂ (and for this reason sometimes E₁ will dominate E₂).
Since E₁ is superior to E₂, often E₂ needs constant feedback from E₁, i.e. new information and new settings of criteria. E.g. constantly adjusting one's posture, or issuing new orders, or opening / closing a throttle. Thus cybernetic control correlates with active/ongoing oversight and directive feedback.
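A minimal sketch of that pattern (invented setup): E₁ exerts cybernetic control by repeatedly resetting the criterion a simple delegate E₂ applies.

```python
# Minimal sketch (invented setup): cybernetic control as ongoing criterial
# delegation. E2 is a simple homeostat that only chases its current target;
# E1, being superior, supplies constant feedback by resetting that target.
class Homeostat:  # E2, the delegate
    def __init__(self, state=0.0):
        self.state = state
        self.target = state
    def step(self):
        self.state += 0.5 * (self.target - self.state)

e2 = Homeostat()
for new_target in [10.0, 10.0, -5.0, 0.0]:  # E1's stream of new settings
    e2.target = new_target                  # the feedback channel
    e2.step()
    print(f"target={e2.target:5.1f}  state={e2.state:6.2f}")
```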
Ambitiousness makes E₂ less amenable to being cybernetically controlled, because it implies that escaping domination by E₁ is more likely to be a convergent instrumental goal of E₂.
Control stability seems like a plus for cybernetic control because it implies a kind of reliability, though it also implies breadth which is harder to dominate.