Wednesday, December 29, 2010

Will Decreasing Scarcity Allow Us to Approach an Optimal (Meta-)Society?

When chatting with a friend about various government systems during a long car drive the other day (returning from New York where we were hit by 2 feet of snow, to relatively dry and sunny DC), it occurred to me that one could perhaps prove something about the OPTIMAL government system, if one were willing to make some (not necessarily realistic) assumptions about resource abundance.

This led to an interesting train of thought -- that maybe, as technology reduces scarcity, society will gradually approach optimality in certain senses...

The crux of my train of thought was:

  • Marcus Hutter proved that the AIXI algorithm is an optimal approach to intelligence, given the (unrealistic) assumption of massive computational resources.
  • Similarly, I think one could prove something about the optimal approach to society and government, given the (unrealistic) assumptions of massive natural resources and a massive number of people.

I won't take time to try to prove this formally just now, but in this blog post I'll sketch out the basic idea.... I'll describe what I call the meta-society, explain the sense in which I think it's optimal, and finally explain why I think it might come to be more and more closely approximated as the future unfolds...

A Provably Optimal Intelligence

As a preliminary, first I'll review some of Hutter's relevant ideas on AI.

In Marcus Hutter's excellent (though quite technical) book Universal Artificial Intelligence, he presents a theory of "how to build an optimally intelligent AI, given unrealistically massive computational resources."

Hutter's algorithm isn't terribly novel -- I discussed something similar in my 1993 book The Structure of Intelligence (as a side point to the main ideas of that book), and doubtless Ray Solomonoff had something similar in mind when he came up with Solomonoff induction back in the 1960s. The basic idea is: Given any computable goal, and infinite computing power, you can work toward the goal very intelligently by (my wording, not a quote) ....


at each time step, searching the space of all programs to find those programs P that (based on your historical knowledge of the world and the goal) would, if you used P to control your behavior, give you the highest probability of achieving the goal. Then, take the shortest of all such optimal programs P and actually use it to determine your next action.


But what Hutter did, uniquely, was to prove that a formal version of this algorithm (which he calls AIXI) is, in a mathematical sense, maximally intelligent.

If you have only massive (rather than infinite) computational resources, then a variant (AIXItl) exists, the basic idea of which is: instead of searching the space of all programs, only look at those programs with length less than L and runtime less than T.

It's a nice approach if you have the resources to pay for it. It's sort of a meta-AI-design rather than an AI design. It just says: If you have enough resources, you can brute-force search the space of all possible ways of conducting yourself, and choose the simplest of the best ones and then use it to conduct yourself. Then you can repeat the search after each action that you take.
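To make the flavor of this concrete, here's a minimal toy sketch (in Python) of the AIXItl-style loop, under drastic simplifying assumptions of my own: the "programs" are just small lookup tables mapping states to actions, the environment dynamics are fully known, and the "probability of achieving the goal" collapses to a 0/1 check over a short horizon. None of this is Hutter's actual formalism; it's only meant to show the shape of "brute-force search over bounded programs, then act on the shortest of the best."

  import itertools

  # Toy stand-ins, all hypothetical, just to make the sketch concrete:
  #  - the "environment" is a known two-state world; the goal is to end up in state 1
  #  - a "program" is a lookup table mapping each state it covers to an action (0 or 1)
  #  - program length = number of table entries it specifies

  STATES = [0, 1]
  ACTIONS = [0, 1]
  HORIZON = 5       # the "T" bound: how many steps we simulate each program for
  MAX_LENGTH = 2    # the "L" bound: the largest lookup table we bother to search

  def step(state, action):
      """Toy known dynamics: action 1 moves you to state 1, action 0 leaves you where you are."""
      return 1 if action == 1 else state

  def score(program, start_state):
      """Crude stand-in for 'probability of achieving the goal': 1.0 if the goal state is reached."""
      state = start_state
      for _ in range(HORIZON):
          action = program.get(state, 0)    # states the program doesn't cover default to action 0
          state = step(state, action)
      return 1.0 if state == 1 else 0.0

  def aixitl_like_choice(start_state):
      """Brute-force search over all bounded programs; act according to the
      shortest program among those with the best score."""
      best_score, best_program = -1.0, None
      for length in range(1, MAX_LENGTH + 1):              # shortest programs first
          for keys in itertools.combinations(STATES, length):
              for values in itertools.product(ACTIONS, repeat=length):
                  program = dict(zip(keys, values))
                  s = score(program, start_state)
                  if s > best_score:                       # strict '>' keeps the shortest of the best
                      best_score, best_program = s, program
      return best_program.get(start_state, 0)

  print(aixitl_like_choice(start_state=0))   # -> 1 (the action that reaches the goal)

In Hutter's actual construction the candidates are arbitrary programs of length less than L and runtime less than T, and the environment is not assumed known -- which is exactly why the search is unaffordable in practice.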

One might argue that all this bears no resemblance to anything that any actual real-world mind would do. We don't have infinite or even massive resources, so we have to actually follow some specific intelligent plans and algorithms; we can't just follow a meta-plan of searching the space of all possible plans at each time-step and then probabilistically assessing the quality of each possibility.

On the other hand, one could look at Hutter's Universal AI as a kind of ideal which real-world minds may approach more and more closely, as they get more and more resources to apply to their intelligence.

That is: If your resources are scarce, you need to rely on specialized techniques. But the more resources you have, the more you can rely on search through all the possibilities, reducing the chance that your biases cause you to miss the best solution.

(I'm not sure this is the best way to think about AIXI ... it's certainly not the only way ... but it's a suggestive way...)

Of course there are limitations to Hutter's work and the underlying way of conceptualizing intelligence. The model of minds as systems for achieving specific goals has its limitations, which I've explained how to circumvent in prior publications. But for now we're using AIXI only as a broad source of inspiration anyway, so there's no need to enter into such details....

19-Year-Old Ben Goertzel's Design for a Better Society

Now, to veer off in a somewhat different direction....

Back when I was 19 and a math grad student at NYU, I wrote (in longhand; this was before computers were commonly used for word processing) a brief manifesto presenting a design for a better society. Among other names (many of which I can't remember), I called this design the Meta-society. I think the title of the manifesto was "The Play of Power and the Power of Play."

(At that time in my life, I was heavily influenced by various strains of Marxism and anarchism, and deeply interested in social theory and social change. These were, after all, major themes of my childhood environment -- my dad being a sociology professor, and my mom the executive of a social work program. I loved the Marxist idea of the mind and society improving themselves together, in a carefully coupled way -- so that perhaps the state and the self could wither away at the same time, yielding a condition of wonderful individual and social purity. Of course, I realized that existing Communist systems fell very far short of this ideal, and eventually I got pessimistic about there ever being a great society composed of and operated by humans in their current form. Rather than improving society, I decided, it made more sense to focus my time on improving humanity ... leading me to a greater focus on transhumanism, AI and related ideas.)

The basic idea for my meta-society was a simple one, and probably not that original: Just divide society into a large number of fairly small groups, and let each small group do whatever the hell it wanted on some plot of land. If one of these "city-states" got too small due to emigration, it could lose its land and have it ceded to some other new group.

If some group of people gets together and wants to form its own city-state, then it gets put in a queue for free land, to be granted when land becomes available. To avoid unfairness or corruption in the allocation of land to city-states, a computer algorithm could be used to mediate the process.
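Just to make that last point concrete, here's a minimal sketch (in Python, with names and thresholds that are entirely my own inventions, not part of the old manifesto) of what such a mediating algorithm might look like: a plain first-come, first-served queue, with plots reclaimed from city-states that shrink below a population threshold and handed to the next group in line.

  from collections import deque

  # Simplifying assumptions for the sketch: land comes in interchangeable "plots",
  # each group needs exactly one plot, and a plot is reclaimed when a city-state's
  # population falls below MIN_POPULATION.

  MIN_POPULATION = 50

  class LandRegistry:
      def __init__(self, num_free_plots):
          self.free_plots = num_free_plots
          self.queue = deque()   # strict first-come, first-served: no discretion, no favoritism
          self.holdings = {}     # city-state name -> current population

      def apply(self, name, population):
          """A prospective city-state joins the queue for land."""
          self.queue.append((name, population))
          self.allocate()

      def allocate(self):
          """Hand out any free plots, strictly in queue order."""
          while self.free_plots > 0 and self.queue:
              name, population = self.queue.popleft()
              self.holdings[name] = population
              self.free_plots -= 1

      def census(self, name, population):
          """Update a city-state's population; reclaim its plot if it has shrunk too far."""
          self.holdings[name] = population
          if population < MIN_POPULATION:
              del self.holdings[name]
              self.free_plots += 1
              self.allocate()    # the reclaimed plot goes to the next group in line

  registry = LandRegistry(num_free_plots=1)
  registry.apply("New Walden", 300)    # gets the only free plot
  registry.apply("Sea Circus", 120)    # has to wait in the queue
  registry.census("New Walden", 20)    # shrinks below the threshold, so its plot is reclaimed
  print(registry.holdings)             # {'Sea Circus': 120}

The point of the exercise is only that the allocation rule can be mechanical and auditable, so that no official gets to decide who "deserves" land.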

There would have to be some basic ground-rules, such as: no imprisoning people in your city-state, no invading or robbing other city-states, etc. Supporting a police force to enforce the ground-rules would require a central government and some low level of taxation, which could sometimes be collected in the form of goods rather than money (the central gov't could then convert the goods into money). Environmental protection poses some difficulties in this sort of system, and would have to be centrally policed as well.

This meta-society system my 19-year-old self conceived (and I don't claim any great originality for it, though I don't know of anything precisely the same in the literature) has something in common with Libertarian philosophy, but it's not exactly the same, because at the top there's a government that enforces a sort of "equal rights for city-state formation" for all.

One concern I always had with the meta-society was: What do you do with orphans or others who get cast out of their city-states? One possibility is for the central government to operate some city-states composed of random people who have nowhere else to go (or nowhere else they want to go).

Another concern is what to do about city-states that oppress and psychologically brainwash their inhabitants. But I didn't really see any solution to that. One person's education is another person's brainwashing, after all. From a modern American view it's tempting to say that all city-states should allow their citizens free access to media so they can find out about other perspectives, but ultimately I decided this would be too much of an imposition on the freedom of the city-states. Letting citizens leave their city-state if they wish ultimately provides a way for any world citizen to find out what's what, although there are various strange cases to consider, such as a city-state that allows its citizens no information about the outside world, and also removes the citizenship of any citizen who goes outside its borders!

I thought the meta-society was a cool idea, and worked out a lot of details -- but ultimately I had no idea how to get it implemented, and not much desire to spend my life proselytizing for an eccentric political philosophy or government system, so I set the idea aside and focused my time on math, physics, AI and such.

Being a major SF fan, I did think that such a meta-society of city-states might be more easily achievable in the future, once space colonies were commonplace. If it were cheap to put up a small space colony for a few hundred or thousand or ten thousand people, then this could lead to a flowering of city-states of exactly the sort I was envisioning...

When I became aware of Patri Friedman's Seasteading movement, I immediately sensed a very similar line of thinking. Their mission is "To further the establishment and growth of permanent, autonomous ocean communities, enabling innovation with new political and social systems." Patri wants to make a meta-society and meta-economy on the high seas. And why not?



Design for an Optimal Society?

The new thought I had while driving the other day is: Maybe you could put my old idealistic meta-society-design together with the AIXI idea somehow, and come up with a design for a "society optimal under assumption of massive resources."

Suppose one assumes there's

  • a lot of great land (or sea + seasteading tech, or space + space colonization tech, whatever), so that fighting over land is irrelevant
  • a lot of people
  • a lot of natural resources, so that one city-state polluting another one's natural resources isn't an issue

Then it seems one could argue that my meta-society is near-optimal, under these conditions.

The basic proof would be: Suppose there were some social order X better than the meta-society. Then people could realize that X is better, and could simply design their city-states in such a way as to produce X.

For instance, if US-style capitalist democracy is better than the meta-society, and people realize it, then people can just construct their city-states to operate in the manner of US-style capitalist democracy (this would require close cooperation of multiple city-states, but that's quite feasible within the meta-society framework).

So, one could argue, any other social order can only be SLIGHTLY better than the meta-society... because if there's something significantly better, then after a little while the meta-society can come to emulate it closely.

So, under assumptions of sufficiently generous resources, the meta-society is about as good as anything.
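For what it's worth, here is one way the claim might be set down symbolically (the notation is mine, and nothing here is proved): let X range over achievable social orders, let U(X) measure how well order X serves its members under the abundance assumptions above, let M be the meta-society, and let ε(t) be the overhead of recognizing a better order and reorganizing the city-states to emulate it within time t. Then the heuristic argument amounts to the conjecture that

  U(M) \;\ge\; \sup_{X} U(X) \;-\; \epsilon(t), \qquad \text{with } \epsilon(t) \to 0 \text{ as } t \to \infty .

Informally: whatever the best order is, the meta-society trails it only by the (shrinking) cost of discovering and copying it.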

Now there are certainly plenty of loopholes to be closed in turning this heuristic argument into a formal proof. But I hope the basic idea is clear.

As with AIXI, one can certainly question the relevance of this sort of design, since resource scarcity is a major fact of modern life. But recall that I originally started thinking about meta-societies outside the "unrealistically abundant resources" context.

Finally, you'll note that for simplicity, I have phrased the above discussion in terms of "people." But of course, the same sort of thinking applies to any kind of intelligent agent. The main assumption in this case is that the agents involved either have roughly equal power and intelligence, or else that any super-powerful agents involved are willing to obey the central government.

Can We Approach the Meta-Society as Technology Advances?


More and more resources are becoming available to humanity as technology advances. Seasteading and space colonization and so forth decrease the scarcity of available "land" for human habitation. Mind uploading would do so more dramatically. Molecular nanotech (let alone femtotech and so forth) may dramatically reduce material scarcity, at least on the scale interesting to humans.

So, it seems the conditions for the meta-society may be more and more closely met, as the next decades and centuries unfold.

Of course, the meta-society will remain an idealization, never precisely achievable in practice. But it may be that we can approach it more and more closely as technology improves.

Marxism had the notion of society gradually becoming more and more pure, progressively approaching Perfect Communism. What I'm suggesting here is similar in form but different in content: society gradually becoming more and more like the meta-society, as scarcity of various sorts becomes less and less of an issue.

As I write about this now, it also occurs to me that this is a particularly American vision. America, in a sense, is a sort of meta-society -- the central government is relatively weak (compared to other First World countries) and there are many different subcultures, some operating with various sorts of autonomy (though also a lot of interconnectedness). In this sense, it seems I'm implicitly suggesting that America is a better model for the future than other existing nations. How very American of me!

If superhuman AI comes about (as I think it will), then the above arguments make sense only if the superhuman AI chooses to respect the meta-society social structure. The possibility even exists that a benevolent superhuman AI could itself serve as the central government of a meta-society.

And so it goes....