7.x billion general intelligences do not think "as a whole" on much. I doubt it matters what even significant chunks of us think, at least not up to the point where pitchforks and torches come out.
For the relatively small group of researchers and theorists consumed by curiosity about the nature of knowledge, experience, and learning, AGI is a fairly understandable goal.
The more important questions may be how, why, and whether entities with emergent aggregate intelligence (organizations such as corporations, nation-states, political parties, international coalitions, activist movements, schools of thought, etc.) would have AGI as a goal. As individuals, we often tolerate, and even willingly engage in (sometimes with zeal), all sorts of pursuits that we're "not on board with" because of the gravity of these aggregates, even when we realize that the aggregate's goals are incompatible with our individual goals.
Aggregate intelligences don't have to be as well-defined as my initial examples. Consider the goal-setting behavior that might emerge from entire industries, networks of interest/policy groups with overlapping interests/participants, shareholders of sector/industry/index funds, networks of overlapping corporate stakeholders, and so on.