Coherent Aggregated Volition is one of Ben Goertzel's responses to Eliezer Yudkowsky's Coherent Extrapolated Volition, the other being Coherent Blended Volition. CAV would be a combination of the goals and beliefs of humanity at the present time.
The author considers the "extrapolation" aspect of CEV to distort the concept of volition and to be highly uncertain. He argues that if the person whose volition is being extrapolated has some inconsistent aspects (which is typically human), then there could be a great variety of possible extrapolations. The problem would then be which version of this extrapolated human to choose, or how to aggregate them, which would be very difficult to achieve.
Coherent Aggregated Volition is presented as simpler than his interpretation of CEV, and intended to be easier to formalize and prototype in the foreseeable future (with the help of platforms such as OpenCog). CAV is not, however, intended to answer the question of provably Friendly AI, although Goertzel claims CEV may not answer that question either.
The author starts by arguing that we must treat goals and beliefs together, as a single concept, which he calls gobs (with gobses as the plural). Each agent can thus have several gobses, logically consistent or not. As a way of measuring how distant these gobses are from each other, the term gobs metric is used - different people or AGIs could agree, to varying degrees, on several metrics, but it seems probable that individuals' metrics would differ less than their gobses do.
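Goertzel does not formalize the gobs metric. Purely as an illustrative sketch, a gobs can be modeled as a set of goal/belief statements, with Jaccard distance serving as one possible metric - both the encoding and the choice of metric are assumptions of this example, not part of the proposal:

```python
# Illustrative sketch only: a "gobs" as a set of goal/belief statements,
# with Jaccard distance as one conceivable gobs metric. The encoding and
# the metric are assumptions invented for this example.

def gobs_distance(gobs_a: frozenset, gobs_b: frozenset) -> float:
    """Jaccard distance between two gobses: 0.0 = identical, 1.0 = disjoint."""
    if not gobs_a and not gobs_b:
        return 0.0
    overlap = len(gobs_a & gobs_b)
    union = len(gobs_a | gobs_b)
    return 1.0 - overlap / union

# Two hypothetical agents sharing some goals and beliefs:
alice = frozenset({"reduce suffering", "the earth is round", "value honesty"})
bob = frozenset({"reduce suffering", "value honesty", "maximize freedom"})

distance = gobs_distance(alice, bob)  # 0.5: two shared statements out of four
```

Different agents could of course weight or encode statements differently, which is exactly the point of the observation that agreement on a metric is likely easier than agreement on the gobses themselves.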
Then, given a population of intelligent agents with different gobses, we could try to find a single gobs that maximizes logical consistency, compactness, similarity to the different gobses in the population, and the amount of evidence supporting its beliefs. This "multi-extremal optimization algorithm" is what he calls Coherent Aggregated Volition. The term expresses the attempt to achieve both coherence and an aggregation of the population's volitions.

CAV has some free parameters, like the averaging method, the measure of compactness, the evaluation of consistency and so on, but these are seen as features rather than limitations and do not taint the simplicity of the idea. At the same time, it is possible to refine some of the criteria stated above without changing the nature of the method.

In some experiments with iteratively repairing inconsistent beliefs within a probabilistic reasoning system, the author claims that we can reach a set of beliefs very different from the one we started with - which may be a problem for CEV. That is, the iterative refinement of agents' goals and beliefs might not always be a good way to turn inconsistent values into similar consistent ones.

Goertzel also feels that CEV bypasses an essential aspect of being human, by not allowing humans to resolve their inconsistencies themselves. CAV tries to summarize this process, respecting and not replacing it, even if this leads to more "bad" aspects of humanity being retained.

Although CEV is seen as possibly giving a feasible solution, Goertzel states there's no guarantee of this, and that Yudkowsky's method can generate solutions very far from the population's gobses.
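The multi-extremal optimization described above is not formalized in the article either. As a toy caricature only, it can be sketched as a weighted score over candidate gobses, maximized over a small candidate pool - the scoring terms, weights, and candidates are all invented for this illustration:

```python
# Toy caricature of the CAV "multi-extremal optimization": score each
# candidate gobs (a set of statements) on similarity to the population,
# compactness, and evidential support, then pick the best candidate.
# The weights, scoring terms, and candidate pool are assumptions of this
# sketch, not Goertzel's specification.

def similarity(candidate, population):
    """Mean Jaccard similarity between the candidate and each agent's gobs."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 1.0
    return sum(jaccard(candidate, g) for g in population) / len(population)

def cav_score(candidate, population, evidence,
              w_sim=1.0, w_compact=0.1, w_ev=0.5):
    compactness = 1.0 / (1 + len(candidate))                # smaller gobses score higher
    support = sum(evidence.get(s, 0.0) for s in candidate)  # evidence per statement
    return (w_sim * similarity(candidate, population)
            + w_compact * compactness
            + w_ev * support)

def coherent_aggregated_volition(candidates, population, evidence):
    """Return the candidate gobs with the best aggregate score."""
    return max(candidates, key=lambda c: cav_score(c, population, evidence))

# Hypothetical example: two agents, two candidate aggregate gobses.
population = [frozenset({"reduce suffering", "value honesty"}),
              frozenset({"reduce suffering", "maximize freedom"})]
candidates = [frozenset({"reduce suffering"}),
              frozenset({"value honesty", "maximize freedom"})]
evidence = {"reduce suffering": 0.9, "value honesty": 0.2, "maximize freedom": 0.2}

best = coherent_aggregated_volition(candidates, population, evidence)
# Here the widely shared, well-evidenced goal wins.
```

A real instantiation would also need a consistency check over the candidate's statements, which is omitted here; that check is itself one of the free parameters mentioned above.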