Thursday, December 29, 2011

Free Will without Counterfactuals?

This brief post is an afterthought to the just-previous post about the nature of reality.

As a side point in that post, I observed that one can often replace counterfactuals with analogies, thus making things a bit clearer.

It occurred to me this morning, as I lay in bed waking up, that one can apply this method to the feeling of free will.

I've previously written about the limitations of the "free will" concept, and made agreeable noises about the alternate concept of "natural autonomy." Here, however, my point is a slightly different (though related) one.

One of the key aspects of the feeling of free will is the notion "In situation S, if I had done X differently, then the consequences would have been different." This is one of the criteria that makes us feel like we've exercised free will in doing X.

Natural autonomy replaces this with, roughly speaking: "If someone slightly different than me had done something slightly different than X, in a situation slightly different from S, then the result would likely have been different than when I did X in S." This is no longer a counterfactual; it's a probabilistic statement about actions and consequences drawn from an ensemble of actions and consequences done by various actors.

But perhaps that rephrasing doesn't quite get at the essence. It may be more to the point to say: "In future situations similar to S, if I do something that's not analogous to X, then something not analogous to what happened after X in situation S is likely to happen."

Or in cases of binary choice: "In future situations similar to S, if I do something analogous to Y instead of something analogous to X, then a consequence analogous to CY instead of a consequence analogous to CX is likely to occur."

This is really the crux of the matter, isn't it? Not hypothesizing about alternate pasts, nor choices from an ensemble of similar beings -- but rather, resolutions about what to do in the future.

In this view, an "act of will" is something like "an action in a situation, corresponding to specific predictions about which of one's actions will predictively imply which consequences in analogous future situations."

That's boring-sounding, but avoids confusing talk of possible worlds.

Mathematically, this is equivalent to a formulation in terms of counterfactuals ... but, counterfactuals seem to lead human minds in confusing directions, so using them as sparingly as possible seems like a good idea...

Wednesday, December 28, 2011

What Are These Things Called "Realities"?

Here follow some philosophical musings, pursued by my rambling mind one evening during the Xmas / New Year's interval.... I inflicted these ramblings on my kids for a while then finally decided to shut up and write them down....

The basic theme: What is this thing called "reality"? Or if you prefer a broader view: What are these things called realities??

After yakking a while, eventually I'll give a concrete and (I think) somewhat novel definition/characterization of "reality."

Real vs. Apparent

Where did this idea come from -- the "real" world versus the "apparent" world?

Nietzsche was quite insistent regarding this distinction -- in his view, there is only the apparent world, and talk of some other "real world" is a bunch of baloney. He lays this idea out quite clearly in The Twilight of the Idols, one of my favorite books.

There's certainly some truth to Nietzsche's perspective in this regard.

After all, in a sense, the idea of a "real world" is just another idea in the individual and collective mind -- just another notion that some people have made up as a consequence of their attempt to explain their sense perceptions and the patterns they detect therein.

But of course, the story told in the previous sentence is ALSO just another idea, another notion that some people made up … blah blah blah …

One question that emerges at this point is: Why did people bother to make up the idea of the "real world" at all … if there is only the apparent world?

Nietzsche, in The Twilight of the Idols, argues against Kant's philosophical theory of noumena (fundamentally real entities, not directly observable but underlying all the phenomena we observe). Kant viewed noumena as something that observed phenomena (the perceived, apparent world) can approximate, but never quite find or achieve -- a perplexing notion.

But really, to me, the puzzle isn't Kant's view of fundamental reality, it's the everyday commonsense view of a "real world" distinct from the apparent world. Kant dressed up this commonsense view in fancy language and expressed it with logical precision, and there may have been problems with how he did it (in spite of his brilliance) -- but, the real puzzle is the commonsense view underneath.

Mirages

To get to the bottom of the notion of "reality", think about the example of a mirage in the desert.

Consider a person wandering in the desert, hot and thirsty, heading south toward a lake that his GPS tells him is 10 miles ahead. But suppose he then sees a closer lake off to the right. He may then wonder: is that lake a mirage or not?

In a sense, it seems, this means he wonders: is that lake a real or apparent reality?

This concept of "reality" seems useful, not some sort of philosophical or mystical trickery.

The mirage seems real at the moment one sees it. But the problem is, once one walks to the mirage to drink the water in the mirage-lake, one finds one can't actually drink it! If one could feel one's thirst being quenched by drinking the mirage-water, then the mirage-water wouldn't be so bad. Unless of course, the quenching of one's thirst wasn't actually real… etc. etc.

The fundamental problem underlying the mirage is not what it does directly in the moment one sees it -- the fundamental problem is that it leads to prediction errors, which are revealed only in the future. Seeing the mirage leads one to predict one will find water in a certain direction -- but the water isn't there!

So then, in what sense does this make the mirage-lake "only apparent"? If one had not seen the mirage-lake, but had seen only desert in its place, then one would not have made the prediction error.

This leads to a rather mundane, but useful, pragmatic characterization of "reality": Something is real to a certain mind in a certain interval of time, to the extent that perceiving it leads that mind to make correct predictions about the mind's future reality.

Reality is a Property of Systems

Yeah, yeah, I know that characterization of reality is circular: it defines an entity as "real" if perceiving it tends to lead to correct predictions about "real" things.

But I think that circularity is correct and appropriate. It means that "reality" is a property attributable to systems of entities. There could be multiple systems of entities, constituting alternate realities A and B, so we could say

  • an entity is real_A if perceiving it tends to lead to correct predictions about real_A things
  • an entity is real_B if perceiving it tends to lead to correct predictions about real_B things
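To make the circularity a bit more concrete, here's a toy sketch of how one could compute self-consistent "reality scores" for a small system of entities -- the entities, the numbers and the power-iteration scheme are all just illustrative assumptions on my part, in the spirit of the way PageRank resolves the circular definition of "importance":

```python
# A toy sketch, not a serious proposal: an entity's "reality score" within a
# system is a weighted sum of how well perceiving it predicts the other
# entities, weighted by how real those entities are.  Iterating (essentially
# power iteration, as in PageRank) yields a self-consistent assignment.
import numpy as np

entities = ["lake", "thirst", "GPS reading", "mirage-lake"]

# predictive_accuracy[i][j]: how well perceiving entity i lets the mind predict
# its future observations of entity j (illustrative numbers only)
predictive_accuracy = np.array([
    [0.0, 0.9, 0.8, 0.1],   # lake
    [0.9, 0.0, 0.6, 0.1],   # thirst
    [0.8, 0.7, 0.0, 0.1],   # GPS reading
    [0.2, 0.1, 0.1, 0.0],   # mirage-lake: predicts little about the rest
])

reality = np.full(len(entities), 0.5)   # start out agnostic about everything
for _ in range(50):                     # iterate toward a self-consistent fixpoint
    reality = predictive_accuracy @ reality
    reality /= reality.max()            # keep the scores bounded in [0, 1]

for name, score in zip(entities, reality):
    print(f"{name:12s} reality score = {score:.2f}")
```

On these made-up numbers, the mirage-lake ends up with a much lower score than the lake, the thirst and the GPS reading -- which is the intuitive verdict.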

I think this is a nicer characterization of reality than Philip K. Dick's wonderful quote, "Reality is whatever doesn't go away when you stop believing in it."

The reason certain things don't go away when you stop believing in them, I suggest, is that the "you" which sometimes stops believing in something, is actually only a tiny aspect of the overall mind-network. Just because the reflective self stops believing in something, doesn't stop the "unconscious" mind from assuming that thing's existence, because it may be bound up in networks of implication and prediction with all sorts of other useful things (including in ways that the reflective self can't understand due to its own bandwidth limitations).

So, the mirage is not part of the same reality-system, the same reality, as the body which is thirsty and needs water. That's the problem with it -- from the body's perspective.

The body's relationship to thirst and its quenching is something that the reflective self associated with that body can't shake off -- because in the end that self is just one part of the overall mind-network associated with that body.

Counterfactuals and Analogies

After one has seen the mirage and wandered toward it through the desert and found nothing -- then one may think to oneself "Damn! If I had just seen the desert in that place, instead of that mirage-lake, I wouldn't have wasted my time and energy wandering through the desert to the mirage-lake."

This is a philosophically interesting thought, because what one is saying is that IF one had perceived something different in the past, THEN one would have made more accurate predictions after that point. One is positing a counterfactual, or put differently, one is imagining an alternate past.

This act of imagination, of envisioning a possible world, is one strategy that allows the mind to construct the idea of an alternate "real" world that is different from the "apparent" world. The key mental act, in this strategy, is the one that says: "I would have predicted better if, 30 minutes ago, I had perceived desert over there instead of (mirage-) lake!"

But in discussing this with my son Zar, who doesn't like counterfactuals, I quickly realized that one can do the same thing without counterfactuals. The envisioning of an alternate reality is unnecessary -- what's important is the resolution that: "I will be better off if, in future cases analogous to the past one where I saw a mirage-lake instead of the desert, I see the analogue of the desert rather than the analogue of the mirage-lake." This formulation in terms of analogues is logically equivalent to the previous formulation in terms of counterfactuals, but is a bit more pragmatic-looking, and avoids the potentially troublesome postulation of alternate possible worlds….

In general, if one desires more accurate prediction within a certain reality-system, one may then seek to avoid future situations similar to past ones in which one's remembered perceptions differ from related ones that would have been judged "real" by that system.

Realities: What and Why

This seems a different way of looking at real vs. apparent reality than the one Kant proposed and Nietzsche rejected. In this perspective, we have

  • reality-systems -- i.e. systems of entities whose perception enables relatively accurate prediction of each other
  • estimations that, in future situations analogous to one's past experiences, one will do better to take certain measures so as to nudge one's perceptions in the direction of greater harmony with the elements of some particular reality-system

So, the value of distinguishing "real" from "apparent" reality emerges from the value of having a distinguished system of classes of phenomena, that mutually allow relatively accurate prediction of each other. Relative to this system, individual phenomena may be judged more or less real. A mind inclined toward counterfactuals may judge something that was NOT perceived as more "real" than something that was perceived; but this complication may be avoided by worrying about adjusting one's perceptions in future analogues to past situations, rather than about counterfactual past possibilities.

Better Half-Assed than Wrong-Headed!

After I explained all the above ideas to my son Zar, his overall reaction was that it generally made sense but seemed a sort of half-assed theory of reality.

My reaction was: In a sense, yeah, but the only possible whole-assed approaches seem to involve outright assumption of some absolute reality, or else utter nihilism. Being "half assed" lets one avoid these extremes by associating reality with systems rather than individual entities.

An analogue (and more than that) is Imre Lakatos's theory of research programs in science, as I discussed in an earlier essay. Lakatos observed that, since the interpretation of a given scientific fact is always done in the context of some theory, and the interpretation of a scientific theory is always done in the context of some overall research program -- the only things in science one can really compare to each other in a broad sense are research programs themselves. Research programs are large networks of beliefs, not crisp statements of axioms nor lists of experimental results.

Belief systems guide science, they guide the mind, and they underlie the only sensible conception of reality I can think of. I wrote about this a fair bit in Chaotic Logic, back in the early 1990s; but back then I didn't see the way reality is grounded in predictions, not nearly as clearly as I do now.

Ingesting is Believing?

In practical terms, the circular characterization of reality I've given above doesn't solve anything -- unless you're willing to assume something as preferentially more real than other things.

In the mirage case, "seeing is believing" is proved false because when one gets to the mirage-lake, one can't actually drink any of that mirage-water. One thing this proves is that "ingesting is believing" would be a better maxim than "seeing is believing." Ultimately, as embodied creatures, we can't get much closer to an a priori assumptive reality than the feeling of ingesting something into our bodies (which is part of the reason, obviously, that sexual relations seem so profoundly and intensely real to us).

And in practice, we humans can't help assuming something as preferentially real -- as Phil Dick observes, some things, like the feeling of drinking water, don't go away even if we stop believing in them … which is because the network of beliefs to which they belong is bigger and stronger than the reflective self that owns the feeling of "choice" regarding what to believe or not. (The status of this feeling of choice being another big topic unto itself, which I've discussed before, e.g. in a chapter of the Cosmist Manifesto.).... This is the fundamental "human nature" with which Hume "solved" the problem of induction, way back when....

Now, what happens to these basic assumptions when we, say, upload our mind-patterns into robot bodies ... or replace our body parts incrementally with engineered alternatives ... so that (e.g.) ingesting is no longer believing? What happens is that our fundamental reality-systems will change. (Will a digital software mind feel like "self-reprogramming is believing"??) Singularity-enabling technologies are going to dramatically change realities as we know them.

And so it goes…

Saturday, December 17, 2011

My Goal as an AGI Researcher

In a recent thread on the AGI email list, Matt Mahoney pressed me regarding my high-level goals as an AGI researcher, and a leader of the OpenCog project. This blog post repeats my answer, as I posted it on that email list. This is familiar material to those who have followed my work and thinking, but maybe I've expressed things here slightly differently than in the past....

My goal as an AGI researcher is not precisely and rigorously defined. I'm OK with this. Building AGI is a human pursuit, and human pursuits aren't always precisely and rigorously defined. Nor are scientific pursuits. Often the precise, rigorous definitions come only after a lot of the research is done.

I'm not trying to emulate human beings or human minds in detail. But nor am I trying to make a grab-bag of narrow agents, without the capability to generalize automatically to new problems radically different from the ones for which they were originally designed. I am after a system that -- in the context of the scope of contemporary human activities -- possesses humanlike (or greater) capability to generalize its knowledge from one domain to other qualitatively different domains, and to learn new things in domains different than the ones its programmers had explicitly in mind. I'm OK if this system possesses many capabilities that a human doesn't.

There are probably many ways of achieving software with this kind of general intelligence. The way I think I understand (and am trying to realize with OpenCog) is to roughly emulate the process of human child development -- where I say roughly because I'm fine with the system having some capabilities beyond those of any human. Even if it does have some specialized superhuman capabilities from the start, I think this system will develop the ability to generalize its knowledge to qualitatively different domains in the rough manner and order that a human child does.

What will I do once I have a system that has a humanlike capability of cross-domain generalization (in the scope of contemporary human activities)? Firstly I will study it, and try to create a genuine theory of general intelligence. Second I will apply it to solve various practical problems, from service robotics to research in longevity and brain-computer interfacing etc. etc. There are many, many application areas where the ability to broadly generalize is of great value, alongside specialized intelligent capabilities.

At some point, I think this is very likely to lead to an AGI system with recursive self-improving capability (noting that this capability will be exercised in close coordination with the environment, including humans and the physical world, not in an isolation chamber). Before that point, I hope that we will have developed a science of general intelligence that lets us understand issues of AGI ethics and goal system stability much better than we do now.

Sunday, November 13, 2011

Why Time Appears To Move Forwards

On a long drive to my mom's house earlier this weekend, my son Zar and I got into a long conversation about the nature of causality, which got me thinking about the old puzzle of where the feeling of the directionality of time comes from...

Where does the feeling that "time moves forward" come from?

It's interesting to look at this question from two sides -- the reductionist approach, in terms of the grounding of minds in physical systems; and the phenomenological approach, in which one takes subjective experience as primary.

Putting together these two perspectives, one arrives at the conclusion that the directionality of time, as perceived by a mind, has to do with: entropy increase in the mind's environment, and entropy decrease in the mind's "theater of decisive consciousness."

A Reductionist View of the Origin of the Directionality of Time

Microphysics, as we currently understand it, doesn't seem to contain any such directionality. In both classical and quantum physics, there is no special difference between the forward and backward directions in time.

Julian Barbour, in his excellent book The End of Time, argues that the directionality of time is an artifact of psychology -- something added by the experiencing mind.

It's commonly observed that thermodynamics adds an arrow of time to physics. The increase of entropy described by the Second Law of Thermodynamics implies a directionality to time. And the Second Law has an intriguing observer-dependence to it. If one assumes a conservative dynamical system evolving according to classical mechanics, there is no entropy increase -- until one assumes a coarse-graining of the system's state space, in which case the underlying complex dynamics of the system will cause an information loss relative to that coarse-graining. The coarse-graining is a simple sort of "observer-dependence." For a detailed but nontechnical exposition of this view of entropy, see Michel Baranger's essay "Chaos, Complexity and Entropy."

In this view, an argument for the origin of the directionality of time is as follows: The mind divides the world into categories -- i.e. "coarse-graining" the set of possible states of the world -- and then, with respect to these categories, there emerges an information loss corresponding to one temporal direction, but not the other.
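To see this concretely, here's a minimal numerical sketch (my own toy example, not Baranger's): iterate a reversible, area-preserving map -- Arnold's cat map -- on a cloud of points, and track the Shannon entropy of the cloud relative to a coarse grid. The microdynamics is perfectly reversible, yet the coarse-grained entropy climbs toward its maximum; the arrow appears only relative to the coarse-graining.

```python
import numpy as np

def cat_map(x, y):
    # Arnold's cat map: an area-preserving, reversible map on the unit torus
    return (x + y) % 1.0, (x + 2.0 * y) % 1.0

def coarse_grained_entropy(x, y, bins=10):
    # Shannon entropy of the point cloud relative to a bins-by-bins coarse-graining
    counts, _, _ = np.histogram2d(x, y, bins=bins, range=[[0, 1], [0, 1]])
    p = counts.ravel() / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
# begin with a tightly localized cloud: low entropy relative to the coarse grid
x = 0.45 + 0.01 * rng.random(20000)
y = 0.45 + 0.01 * rng.random(20000)

for t in range(9):
    print(f"step {t}: coarse-grained entropy = {coarse_grained_entropy(x, y):.3f}")
    x, y = cat_map(x, y)
```

Run the map backwards from the final spread-out cloud and the coarse-grained entropy decreases again -- the asymmetry comes from starting the system off in a low-entropy configuration relative to the categories, not from the dynamics itself.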

A Psychological View of the Origin of the Direction of Time


Next, what can we say about the origin of the directionality of time from the psychological, subjectivist, phenomenological perspective?

Subjectively, it seems that our perception of the directionality of time is largely rooted in our perception of causality. Confronted with a pool of semi-organized sensations, we perceive some as causally related to others, and then assign temporal precedence to the cause rather than the effect.

Now, grounding temporal direction in causation may seem to introduce more confusion than clarification, since there is no consensus understanding of causality. However, there are certain approaches to understanding causality in the philosophical literature, that happen to tie in fairly naturally with the reductionist approach to grounding the directionality of time given above, and bear particular consideration here for that reason. I'm thinking especially of the view of causality as "information transmission across mechanistic hierarchies," summarized nicely by Phyllis Illari in this paper.

If causality is viewed as the transmission of information from cause to effect via channels defined by "mechanistic hierarchies", then we may see the direction of time as having to do somehow with information flow. This is loosely similar to how Baranger sees entropy emerging from the dynamics of complex systems as perceived relative to the coarse-graining of state space. In both cases, we see the flow of time as associated with the dynamics of information. However, to see exactly what's going on here, we need to dig a bit.

(I don't necessarily buy the reduction of causality to information transmission. But I do think this captures an important, interesting, relevant aspect of causality.)

Another point made by Illari in the above-linked article is the relation between causality and production. However, I find it more compelling to link causality and action.

It seems to me that the paradigm case of causality, from a subjective, psychological point of view, is when one of our own actions results in some observable effect. Then we feel, intuitively, that our action caused the effect.

We then interpret other phenomena we observe as analogous to instances of our own enaction. So, when we see an ape push a rock off a cliff, we can imagine ourselves in the position of the ape pushing the rock, so we can feel that the ape caused the rock to fall. And the same thing when it's not an ape but, say, another rock that's rolling into the first rock and knocking it off the cliff.

In this hypothesis, then, the root of temporal directionality is causation, and the root of causation is our interpretation of our own actions -- specifically, the assumption that the relation between an action and its preconditions is fundamentally conceptually different than the relation between an action and its results.

Another way to say this is: the carrying-out of an action is viewed as a paring-down of possibilities, via the choosing of one action among many. Thus, the carrying-out of an action is viewed as a decrease of entropy.
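To attach a number to that intuition (my gloss, in standard information-theoretic terms): if the deciding mind regards N candidate actions as roughly equiprobable before the decision, then selecting a single action collapses the entropy of its action-state from $\log_2 N$ bits to zero,

$$ \Delta H \;=\; H_{\text{after}} - H_{\text{before}} \;=\; 0 - \log_2 N \;=\; -\log_2 N \ \text{bits}, $$

so the "theater of decisive consciousness" really is an entropy-decreasing arena, at least in this simple sense.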

So, psychologically: The directionality of time ensues from the decrease of entropy perceived as associated with enaction -- by means of analogical reasoning, which propagates this perceived entropy decrease to various perceptions that are not direct enactions, causing them to be labeled as causative.

Putting the (Reductionist and Subjectivist) Pieces Together

On the face of it, we seem to have a paradox here: physically, the directionality of time comes from entropy increase; but psychologically, it comes from entropy decrease.

However, there's not really any paradox at all. This is merely a relative of the observation that living systems habitually decrease their entropy, at the cost of increasing the entropy of their environments.

The directionality of time, from the perspective of a given mind, appears to ensue from a combination of
  • entropy decrease in the foreground (the "acutely conscious", explicitly deciding mind -- the "global workspace")
  • entropy increase in the background (the environment of the mind)
This is a somewhat short-term perspective: it explains the feeling of temporal directionality over a certain brief interval. But then a mind with an episodic memory and a desire for logical coherence, will naturally piece together the temporal directions of its various memories in a coherent way, forming a linear sequence of time pointing from the past up till the present. And this same mind will then naturally reason about the future by analogy to the past, thus mentally building up a subjective timeline pointing into the future. And so the subjective sense of a linear axis of time emerges -- not because it's the only way to look at the world, but because it naturally emerges from the dynamics of foreground/background information flow, together with the quest for logical coherence.

Tuesday, September 20, 2011

A New Approach to Computational Language Learning

I've been thinking about a new approach to computational language learning for a while, and finally found time to write it down -- see the 2 page document here.

Pursued on its own, this is a "narrow AI" approach, but it's also designed to be pursued in an AGI context, and integrated into an AGI system like OpenCog.

In very broad terms, these ideas are consistent with the integrative NLP approach I described in this 2008 conference paper. But the application of evolutionary learning is a new idea, which should allow a more learning-oriented integrative approach than the conference paper alluded to.

Refining and implementing these ideas would be a lot of work, probably the equivalent of a PhD thesis for a very good student.

Those with a pure "experiential learning" bent will not like the suggested approach much, because it involves making use of existing linguistic resources alongside experiential knowledge. However, there's no doubt that existing statistical and rule-based approaches to computational linguistics have made a lot of progress, in spite of not having achieved human-level linguistic performance. I think the outlined approach would be able to leverage this progress in a way that works for AGI and integrates well with experiential learning.

I also think it would be possible for an AGI system (e.g. OpenCog, or many other approaches) to learn language purely from perceptual experience. However, the possibility of such an approach doesn't imply its optimality in practice, given the hardware, software and knowledge resources available to us right now.

Sunday, September 18, 2011

A Mind-World Correspondence Principle

I had some more ideas, working toward a general theory of general intelligence, which I wrote in a paper posted online at Dynamical Psychology.

(Please note: it's fairly abstract theoretical/mathematical material, so if you're solely interested in current AGI engineering work, don't bother! The hope is that this theory will be able to help guide engineering work once it's further developed, but it's not at that stage yet. So for now my abstract mathematical AGI theory work and practical AGI engineering work are only loosely coupled.)

The crux of the paper is:

MIND-WORLD CORRESPONDENCE PRINCIPLE: For an organism with a reasonably high level of intelligence in a certain world, relative to a certain set of goals, the mind-world path transfer function is a goal-weighted approximate functor.

To see what those terms mean and why it might be a useful notion, you'll have to read the paper.

A cruder expression of the same idea, with fewer special defined terms is:

MIND-WORLD CORRESPONDENCE PRINCIPLE: For a mind to work intelligently toward certain goals in a certain world, there should be a nice mapping from goal-directed sequences of world-states into sequences of mind-states, where “nice” means that a world-state-sequence W composed of two parts W1 and W2 gets mapped into a mind-state-sequence M composed of two corresponding parts M1 and M2.
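In rough symbols (my own shorthand, not necessarily the paper's notation): writing $\Phi$ for the mind-world path transfer function and $*$ for concatenation of state-sequences, the principle asks that

$$ \Phi(W_1 * W_2) \;\approx\; \Phi(W_1) * \Phi(W_2), $$

with the quality of the approximation weighted by how relevant the world-state-sequences are to the system's goals. Approximate preservation of composition is the defining property of an approximate functor, which is where the category-theoretic language comes from.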

As noted toward the end of the paper, this principle gives us a systematic way to approach questions like: Why do real-world minds seem to be full of hierarchical structures? The answer is probably that the real world is full of goal-relevant hierarchical structures. The Mind-World Correspondence Principle explains exactly why these hierarchical structures in the world have to be reflected by hierarchical structures in the mind of any system that's intelligent in the world.

As an aside, it also occurred to me that these ideas might give us a nice way to formalize the notion of a "good mind upload," in category-theoretic terms.

I.e., if we characterize minds via transition graphs in the way done in the paper, then we can argue that mind X is a valid upload of mind Y if there is a fairly accurate approximate functor from X's transition graph to Y's.

And, if the uploading is nondestructive (so Y still exists alongside X after the uploading), X would remain a good upload of Y over time if, as X and Y both changed, there was a natural transformation governing the functors between them. Of course, your upload might not WANT to remain aligned with you in this manner, but that's a different issue...

Wednesday, September 07, 2011

Creating/Discovering New States of Mind


Just some quasi-random musings that went through my head yesterday…


Our society puts a fair bit of energy, these days, into creating new technologies and discovering new scientific facts.


But we put hardly any effort at all into creating/discovering new states of mind.


I think maybe we should – and at the end of this odd, long, rambling blog post I’m going to suggest a specific type of new mind-state that I think is well worth trying to create/discover: one synthesizing spiritual mindfulness and intense scientific creativity.


On Old and New States of Consciousness


First, bear with me while I spend a few paragraphs framing the issue…


When I read Stcherbatsky’s book Buddhist Logic years ago, I was struck by the careful analysis of 128 states of consciousness. Allan Combs’ book The Radiance of Being provides a simpler, smaller conceptual analysis of states of consciousness, with similar foundations. These and other similar endeavors are very worthy – but how can we really know that the scope of all possible varieties of human consciousness-state has been thoroughly explored?


All sorts of amazing new states of consciousness will become possible once the human brain has been enhanced with technology – brain-computer interfacing, genetic engineering, mind uploading, etc. Advanced AGI systems may enjoy states of consciousness far beyond human comprehension. However, it seems quite possible that ordinary human brains may be capable of many states of consciousness not yet explored.


The individual human mind is not all that individual – so the states of mind accessible to an individual may depend to some extent on the culture in which they exist. The catalogue of states of mind available in medieval India when Buddhist logic was invented, may include some states that are extremely hard for modern people to get into, and may omit some states of which modern people are capable.


The Perceived Conflict Between Scientific and Spiritual Mind-States


I’ve often wondered whether there’s some intrinsic conflict between the states of mind labeled “spiritual enlightenment”, and the states of mind consistent with profound scientific discovery.


Great scientific creation often seems to involve a lot of struggle and persistence – along with long stretches of beautiful “flow” experience. Great scientific work seems to involve a lot of very hard thinking and analysis, whereas enlightenment is generally described as involving “stopping all thought.”


Personally, I find it a lot easier to be mindful (in the Zen sense) while walking through the park, washing the dishes, lying in bed, or building a chair -- than while analyzing genomic data, working out the details of a new AI algorithm, writing a novel, or debugging complex software code. Subjectively, this feels to me like it’s because being mindful requires a bit of mental effort at first – to actively pay attention to what my mind and body are doing. Once the effort is done, then mindfulness can flow along effortlessly for a while. But then I may drift away from it, and that little jump of effort is needed to become mindful again. This dynamic of mindfulness drifting and returning, or almost drifting but then not actually drifting after all, seems not to function properly when I’m doing something highly cognitively intensive. When I’m doing the highly intensive thing, I get deeply “into” the process, which puts me in a wonderful flow state for a while – but then when the flow state ends, I’m not necessarily in a quasi-enlightened mindful state. I may be elated, or I may be exhausted, or I may be frustrated. I can then try to be mindful of my elation, exhaustion or frustration – but this is then a moderately substantial effort; and definitely my degree of mindfulness is lower than if I hadn’t bothered to do the cognitively intensive thing.


Now, it might just be that I’m not a particularly enlightened guy. Indeed, I have never claimed to be! I do have my moments of spiritual purity and cosmic blissful wisdom and all that -- but then I also have some pretty boring routine moments, and also moments of being totally un-mindfully overcome with various kinds of positive or negative emotion. However, observing other humans around me, I note that the same dichotomy I feel in my mind occurs in the outside world. I know some enlightened minds, and I know some productive, brilliant artists and scientists – but I don’t know anyone in the intersection. Maybe someone of this nature does exist; but if they do, they’re an awfully rare bird.


You could argue that, since being a spiritual genius is rare and being a scientific genius is rare, it’s not surprising that few people lie in the intersection! But I’m not just talking about genius. I’m talking about passion. Who has true devoted passion for spiritual enlightenment, and also true devoted passion for doing revolutionary science? Most people I know, if they like either, pursue one as a central goal and the other as a sort of sideline.


I don't particularly want to be this way myself – I'd like to pursue both simultaneously, without feeling any conflict between the two. But in practical life I do feel a conflict, and I tend to choose science and art most of the time. Yes, from the enlightened view, the dichotomy and the conflict are just constructs of my mind. And when I'm in certain states of mind, I feel that way – that dichotomy and all the rest feel bogus and mildly amusing. But when I'm in those states of mind, I'm not doing my best art or science! Similarly, thinking about playing the piano, it's clear that my best music has been played in states of heightened emotion – not states of enlightened emptiness.


I think the difficulty of maintaining a mindful mind-state and an intensely creative scientific mind-state is deeply tied to the conflict between modern scientific culture and some older cultures, like those of ancient India or China, that were more spiritually focused. The enlightened master was one of the ideals of India and China; and the great scientist or artist is one of the ideals of the modern world. The differences in ideals reflect more thoroughgoing cultural differences.


You could say that both the great scientist and the enlightened master are exaggerations, and the right thing is to be more balanced – a little bit scientific, a little bit spiritual. Maybe, as someone said to me recently, an enlightened master is like an Arnold Schwarzenegger of the spirit – hyper-developed beyond what is natural or useful (except in contexts like the Mr. Universe contest where being at the extreme is useful in itself!). And maybe great super-scientists are unnecessarily and unhealthily obsessive, and science would progress OK without them, albeit a little more slowly. But something in me rebels against this kind of conclusion. Maybe it’s just that I’m an unbalanced individual – reeling back and forth endlessly between being excessively scientific and excessively spiritual, instead of remaining calmly in the middle where I belong -- but maybe there’s more to it than that.



A New Scientific/Spiritual Mind-State?


What if, instead of being frustrated at the apparent contradiction between the mind-states of spiritual enlightenment /mindfulness and intense scientific creativity, we took it as a multidimensional challenge: to create a new state of mind, synergizing both of these aspects?


The ancient Indians and Chinese didn’t include this sort of mind-state in their catalogue, but they didn’t have science or modern art … they had a very different culture.


Can we discover a new, intrinsically mindful way of doing science and art? Without sacrificing the intensity or the creativity?


What if we pursued the discovery/creation of new states of mind as avidly as we pursue the creation of new machines or chemical compounds? What if there were huge multinational organizations devoted to mind-state discovery, alongside our chemical and pharmaceutical and computer engineering firms?


Zum: A Thought-Experiment


To make the above idea a little more concrete, let’s imagine a specific social structure designed to produce a synergetically scientific-spiritual state of mind. Imagine an agile software development team – a group of software developers working closely together on a project – that was also, simultaneously, a “zendo” or “dojo” or whatever you want to call it … a group of people gathered together in the interest of their own enlightenment. That is, they were simultaneously trying to get stuff done together, and to help each other maintain a state of mindfulness and individual & collective spiritual awareness.


I can’t think of a good name for this kind of combination, so I’m going to call it a “Zum”, because that word currently has no English meaning, and it reminds me of Zen and scrum (the latter a term from agile software development), and I like the letter “Z.”


I have heard of a new type of Vipassana meditation, in which a group of people sit together and while they meditate, verbalize their feelings as they pass through – “cold”, “breathing”, “warm”, “stomach”, etc. One can imagine a Zum engaging in this kind of discussion at appropriate moments, in the midst of technical discussions or collaborative work. Would hearing others describe their state like this interrupt thought in an unacceptable way? Possibly. Or would people learn to flow with it, as I flow with the music I listen to as I work?


What would a Zum be like? Would it help to have a couple enlightened masters hanging around? – maybe sitting there and meditating, or playing ping pong? That would produce a rather different vibe than a usual software development lab!


The key ingredient of the Zum is the attitude and motivation of the individuals involved. They would need to be dedicated both to producing great software together, and to helping each other remain mindful and joyful as much as possible.


One thing that might come out of this is, simply, a kind of balance, where the team does reasonably good work and is also rather happy. This certainly wouldn’t be a disaster. Maybe they’d even be a bit more effective than an average team due to a diminished incidence of personality conflicts and fewer stress-induced errors.


Another possibility is that, if this sort of experiment were tried in a variety of different styles and places, eventually a new state of mind would evolve – one bypassing the dichotomy of spiritual mindfulness versus intensely creative science or art production.


Solo Zum?


But do we really need a Zum? Organizing groups of people in novel configurations involves considerable practical difficulty. Why not become a one-person Zum? Experiment with different ways of practicing intense scientific creation and mindfulness at the same time – maybe you'll come up with something new. Try to describe your internal methodology so others can follow in your footsteps. This sort of experimentation is every bit as valid and important as scientific experimentation, or personal experimentation with smart drugs. The human brain is far more flexible than we normally realize; it's hard to say what may be possible even without technological brain modification.



Heh... well I'm really not sure how much any of that means, but it was an amusing train of thought! Now, it's time to pick up my daughter from school, and then get back to work.... I will be trying to be as cosmically aware as possible while my work proceeds ;O ;-) ... and probably not succeeding all that well !! So it goes... bring on the brain chips please...


This blog post was written while repetitively listening to various versions of A Tear for Eddie by Ween. This one is perhaps my favorite, though the studio version is great too.

Tuesday, August 09, 2011

Musings on future technologies for cognitive enhancement

A former college classmate of my son's, researching a magazine article on cognitive enhancement, just emailed me asking my opinion on future technologies for cognitive enhancement.... Here's the reply I gave him -- not much new information for the well-educated transhumanist reader, but I figured I'd paste it here anyways...



Regarding technologies for cognitive enhancement, present and future..



Firstly, I am not an expert on nootropics, but I can remember seeing various studies indicating potential benefits with respect to cognitive aging. The racetams and modafinil come to mind, among many others. Anecdotally I am aware of plenty of folks who say these improve cognitive function, including older folks, but I'm not up on the literature.



I also see a huge future for neural stem cell therapy, and you can find a substantial literature on that online, though I'm not an expert on it. The regulatory issues here become interesting -- I know a number of individuals operating stem cell therapy clinics in Asia and Latin America, that cater substantially to US clients. So far these aren't focusing on neural stem cell therapy but I think that's not far off. The US regulatory environment has become archaic and highly problematic. One can envision a future in which Americans routinely fly to foreign countries for neural stem cell therapy and other medical interventions aimed at maintaining or increasing their intelligence. And the ones who stay home won't be as smart. One hopes that as these technologies mature, the American regulatory infrastructure will eventually mature as well.



I have also heard rumor (from reliable sources) of a device under development by the Chinese government in Beijing, in collaboration with some Western scientists, going by the name of the "head brain instrument" (three Chinese characters). This device uses transcranial magnetic stimulation, and has the dual effects of increasing learning rate, and also increasing susceptibility to suggestion. Interesting. I read an article a few months ago about a different but related device being tested in Australia, using transcranial stimulation to increase creativity. This sort of research seems fascinating and promising. No doubt one could advance even faster and further in this direction using direct brain-computer interfacing, but no one has yet developed an inexpensive and safe method of installing a Neuromancer-style "cranial jack" in the brain, alas. I'm sure the cranial jack is coming, but it's hard to estimate exactly when.



In terms of ongoing and future research, I think that a combination of genomics, experimental evolution and artificial intelligence is fairly shortly going to lead us to a variety of therapies to improve cognitive performance throughout the human lifespan, as well as to extend the healthy human lifespan overall. I'm seeing this now in the work my bioinformatics firm Biomind is doing in collaboration with the biopharma firm Genescient Corp. Genescient has created a set of populations of long-lived fruit flies, which live over 4x as long as control flies, and also display enhanced cognitive capability throughout their lives, including late life. We've gathered gene expression and SNP data from these "superflies" and are using AI technology to analyze the data -- and the results are pretty exciting so far! We've discovered a large number of gene-combinations that are extremely strongly associated with both longevity and neural function, and many of these correspond to likely-looking life-extension and cognitive-enhancement pathways in the human organism. The supplement Stem Cell 100, now on the market, was inspired by this research; but that's just the start ... I think we're going to see a lot of new therapies emerge from this sort of research, including nutraceuticals, pharmaceuticals, gene therapy, and others.



I'm currently in San Francisco, where I just got finished with 4 days of the Artificial General Intelligence 2011 conference, which was held on Google's campus in Mountain View. Now I'm at the larger AAAI (Association for the Advancement of AI) conference in San Francisco. I think that AI research, as it matures, is going to have a huge effect on cognitive enhancement research among many other areas. Right now my own Biomind team and others are using AI to good effect in bioinformatics -- but the AI tools currently at our disposal are fairly narrow and specialized, albeit with the capability to see patterns that are inaccessible to either unassisted humans or traditional statistical algorithms. As AI gradually moves toward human-level artificial general intelligence, we're going to see a revolutionary impact upon all aspects of biomedical science. Already there's far more biomedical data online than any human mind can ingest or comprehend -- an appropriately constructed and instructed AGI system could make radical advances in cognitive enhancement, life extension and other areas of biomedicine, just based on the data already collected ... in addition to designing new experiments of its own.



Down the road a bit, there's the potential for interesting feedback effects to emerge regarding cognitive enhancement, conceivably resulting in rapid exponential growth. The better science and technology we have, the better cognitive enhancers we can create, and the smarter we get. But the smarter we get, the better the science and technology we can develop. Et cetera, and who knows where (or if) the cycle ends! We live in interesting times, and I suspect in the next few decades they will become dramatically *more* interesting....


Friday, June 24, 2011

Unraveling Modha & Singh's Map of the Macaque Monkey Brain

(... plus some semi-related AGI musings at the end!)

On July 27 2010, PNAS published a paper entitled "Network architecture of the long-distance pathways in the macaque brain" by Dharmendra Modha and Raghavendra Singh from IBM, which is briefly described here and available in full here. The highlight of the paper is a connectivity diagram of all the regions of the macaque (monkey) brain, reproduced in low res right here:



See here for a hi-res version.

The diagram portrays "a unique network incorporating 410 anatomical tracing studies of the macaque brain from the Collation of Connectivity data on the Macaque brain (CoCoMac) neuroinformatic database. Our network consists of 383 hierarchically organized regions spanning cortex, thalamus, and basal ganglia; models the presence of 6,602 directed long-distance connections; is three times larger than any previously derived brain network; and contains subnetworks corresponding to classic corticocortical, corticosubcortical, and subcortico-subcortical fiber systems."

However, I found that the diagram can be somewhat confusing to browse, if one wants to look at specific brain regions and what they connect to. So my Novamente LLC co-conspirator Eddie Monroe and I went back to the original data files, given in the online supplementary information for the paper, and used this to make a textual version of the information in the diagram, which you can find here.
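For anyone who wants to do something similar, the conversion is conceptually simple -- something along these lines, where the file name and column names are hypothetical placeholders rather than the paper's actual supplementary-data format:

```python
import csv
from collections import defaultdict

# Hypothetical input: an edge list with one row per directed long-distance
# connection, with columns "source" and "target" naming brain regions.
# The paper's actual supplementary files use their own format; adapt as needed.
EDGES_CSV = "macaque_connections.csv"

outgoing = defaultdict(list)
incoming = defaultdict(list)

with open(EDGES_CSV, newline="") as f:
    for row in csv.DictReader(f):
        outgoing[row["source"]].append(row["target"])
        incoming[row["target"]].append(row["source"])

# Emit a plain-text listing: each region, its efferent and afferent connections
for region in sorted(set(outgoing) | set(incoming)):
    print(region)
    print("  projects to:   " + (", ".join(sorted(outgoing[region])) or "(none)"))
    print("  receives from: " + (", ".join(sorted(incoming[region])) or "(none)"))
```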

Our goal in looking at this wiring diagram is to use it as a guide to understanding the interactions between certain human brain regions we're studying (human and monkey brains being rather similar in many respects). But I think it's worth carefully perusing for anyone who's thinking about neuroscience from any angle, and for anyone who's thinking about AGI from a brain-simulation perspective.

Semi-Related AGI Musings

Complexity such as that revealed in Modha and Singh's diagrams always comes to my mind when I read about someone's "brain inspired" AGI architecture -- say, Hierarchical Temporal Memory architectures (like Numenta or DeSTIN, etc.) that consist of a hierarchy of layers of nodes, passing information up and down in a manner vaguely reminiscent of visual or auditory cortex. Such architectures may be quite valuable and interesting, but each of them captures a teensy weensy fraction of the architectural and dynamical complexity in the brain. Each of the brain regions in Modha and Singh's diagram is its own separate story, with its own separate and important functions and structures and complex dynamics; and each one interacts with a host of others in specially configured ways, to achieve emergent intelligence. In my view, if one wants to make a brain-like AGI, one's going to need to emulate the sort of complexity that the actual brain has -- not just take some brain components (e.g. neurons) and roughly simulate them and wire the simulations together in some clever way; and not just emulate the architecture and dynamics of one little region of the brain and proclaim it to embody the universal principles of brain function.

And of course this is the reason I'm not pursuing brain-like AGI at the moment. If you pick 100 random links from Modha and Singh's diagram, and then search the neuroscience literature for information about the dynamical and informational interactions ensuing from that link, you'll find that in the majority of cases the extant knowledge is mighty sketchy. This is an indicator of how little we still know about the brain.

But can we still learn something from the brain, toward the goal of making loosely brain-inspired but non-brain-like AGI systems? Absolutely. I'm currently interested in understanding how the brain interfaces perceptual and conceptual knowledge -- but not with a goal of emulating how the brain works in any detailed sense (e.g. my AGI approach involves no formal neurons or other elementary brainlike components, and no modules similar in function to specific brain regions), rather just with a goal of seeing what interesting principles can be abstracted therefrom, that may be helpful in designing the interface between OpenCog and DeSTIN (a hierarchical temporal memory designed by Itamar Arel, that we're intending to use for OpenCog's sensorimotor processing).

And so it goes... ;-)






Wednesday, June 15, 2011

Why is evaluating partial progress toward human-level AGI so hard?

This post co-authored by Ben Goertzel and Jared Wigmore

Here we sketch a possible explanation for the well-known difficulty of measuring intermediate progress toward human-level AGI, via extending the notion of cognitive synergy to a more refined notion of "tricky cognitive synergy."

The Puzzle: Why Is It So Hard to Measure Partial Progress Toward Human-Level AGI?


A recurrent challenge in the AGI field is the difficulty of creating a good test for intermediate progress toward the goal of human-level AGI.

It's not entirely straightforward to create tests to measure the final achievement of human-level AGI, but there are some fairly obvious candidates here. There's the Turing Test (fooling judges into believing you're human, in a text chat), the video Turing Test, the Robot College Student test (passing university, via being judged exactly the same way a human student would), etc. There's certainly no agreement on which is the most meaningful such goal to strive for, but there's broad agreement that a number of goals of this nature basically make sense.

On the other hand, how does one measure whether one is, say, 50 percent of the way to human-level AGI? Or, say, 75 or 25 percent?

It's possible to pose many "practical tests" of incremental progress toward human-level AGI, with the property that IF a proto-AGI system passes the test using a certain sort of architecture and/or dynamics, then this implies a certain amount of progress toward human-level AGI based on particular theoretical assumptions about AGI. However, in each case of such a practical test, it seems intuitively likely to a significant percentage of AGI researchers that there is some way to "game" the test via designing a system specifically oriented toward passing that test, and which doesn't constitute dramatic progress toward AGI.

Some examples of practical tests of this nature would be

  • The Wozniak "coffee test": go into an average American house and figure out how to make coffee, including identifying the coffee machine, figuring out what the buttons do, finding the coffee in the cabinet, etc.
  • Story understanding – reading a story, or watching it on video, and then answering questions about what happened (including questions at various levels of abstraction)
  • Passing the elementary school reading curriculum (which involves reading and answering questions about some picture books as well as purely textual ones)
  • Learning to play an arbitrary video game based on experience only, or based on experience plus reading instructions

One interesting point about tests like this is that each of them seems to some AGI researchers to encapsulate the crux of the AGI problem, and be unsolvable by any system not far along the path to human-level AGI – yet seems to other AGI researchers, with different conceptual perspectives, to be something probably game-able by narrow-AI methods. And of course, given the current state of science, there’s no way to tell which of these practical tests really can be solved via a narrow-AI approach, except by having a lot of people try really hard over a long period of time.

A question raised by these observations is whether there is some fundamental reason why it’s hard to make an objective, theory-independent measure of intermediate progress toward advanced AGI. Is it just that we haven’t been smart enough to figure out the right test – or is there some conceptual reason why the very notion of such a test is problematic?

We don’t claim to know for sure – but in this brief note we’ll outline one possible reason why the latter might be the case.

Is General Intelligence Tricky?

The crux of our proposed explanation has to do with the sensitive dependence of the behavior of many complex systems on the particulars of their construction. Often-times, changing a seemingly small aspect of a system’s underlying structures or dynamics can dramatically affect the resulting high-level behaviors. Lacking a recognized technical term to use here, we will refer to any high-level emergent system property whose existence depends sensitively on the particulars of the underlying system as tricky. Formulating the notion of trickiness in a mathematically precise way is a worthwhile pursuit, but this is a qualitative essay so we won’t go that direction here.

Thus, the crux of our explanation of the difficulty of creating good tests for incremental progress toward AGI is the hypothesis that general intelligence, under limited computational resources, is tricky.

Now, there are many reasons that general intelligence might be tricky in the sense we’ve defined here, and we won’t try to cover all of them here. Rather, we’ll focus on one particular phenomenon that we feel contributes a significant degree of trickiness to general intelligence.

Is Cognitive Synergy Tricky?

One of the trickier aspects of general intelligence under limited resources, we suggest, is the phenomenon of cognitive synergy.

The cognitive synergy hypothesis, in its simplest form, states that human-level AGI intrinsically depends on the synergetic interaction of multiple components (for instance, as in the OpenCog design, multiple memory systems each supplied with its own learning process). In this hypothesis, for instance, it might be that there are 10 critical components required for a human-level AGI system. Having all 10 of them in place results in human-level AGI, but having only 8 of them in place results in having a dramatically impaired system – and maybe having only 6 or 7 of them in place results in a system that can hardly do anything at all.

Of course, the reality is almost surely not as strict as the simplified example in the above paragraph suggests. No AGI theorist has really posited a list of 10 crisply-defined subsystems and claimed them necessary and sufficient for AGI. We suspect there are many different routes to AGI, involving integration of different sorts of subsystems. However, if the cognitive synergy hypothesis is correct, then human-level AGI behaves roughly like the simplistic example in the prior paragraph suggests. Perhaps instead of using the 10 components, you could achieve human-level AGI with 7 components, but having only 5 of these 7 would yield drastically impaired functionality – etc. Or the same phenomenon could be articulated in the context of systems without any distinguishable component parts, but only continuously varying underlying quantities. To mathematically formalize the cognitive synergy hypothesis in a general way becomes complex, but here we're only aiming for a qualitative argument. So for illustrative purposes, we'll stick with the "10 components" example, just for communicative simplicity.

Next, let’s suppose that for any given task, there is some way to achieve that task using a system much simpler than any subset of 6 components drawn from the set of 10 needed for human-level AGI – and that this simpler system works much better for the task than such a subset of 6 does (assuming the latter is used as a set of only 6 components, without the other 4).

Note that this supposition is a good bit stronger than mere cognitive synergy. For lack of a better name, we’ll call it tricky cognitive synergy. The tricky cognitive synergy hypothesis would hold if, for example, the following possibilities were true:

  • creating components to serve as parts of a synergetic AGI is harder than creating components intended to serve as parts of simpler AI systems without synergetic dynamics
  • components capable of serving as parts of a synergetic AGI are necessarily more complicated than components intended to serve as parts of simpler AI systems without synergetic dynamics.

These certainly seem like reasonable possibilities, since to serve as a component of a synergetic AGI system, a component must have the internal flexibility to usefully handle interactions with a lot of other components, as well as to solve the problems that come its way. In terms of our concrete work on the OpenCog integrative proto-AGI system, these possibilities ring true, in the sense that tailoring an AI process for tight integration with other AI processes within OpenCog tends to require more work than preparing a conceptually similar AI process for use on its own or in a more task-specific narrow AI system.

It seems fairly obvious that, if tricky cognitive synergy really holds up as a property of human-level general intelligence, the difficulty of formulating tests for intermediate progress toward human-level AGI follows as a consequence: according to the tricky cognitive synergy hypothesis, any fixed test is going to be more easily solved by some simpler narrow AI process than by a partially complete human-level AGI system.
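To make the shape of this argument concrete, here’s a minimal toy sketch in C++. Every number and formula in it is hypothetical, invented purely to illustrate the claimed pattern rather than drawn from any real AGI system or benchmark: a partial AGI’s capability collapses steeply as synergetic components are removed, so on any fixed test it is quickly overtaken by a simple narrow-AI baseline.

// Toy numerical illustration of the "10 components" story above.
// All numbers and formulas are made up, chosen only to show the *shape*
// of the argument: capability collapses sharply as synergetic components
// are removed, while a task-specific narrow AI keeps a fixed score on the
// same fixed test.
#include <cmath>
#include <cstdio>

// Capability of a partial AGI with n of the 10 required synergetic
// components in place (an invented formula, not a claim about any real system).
double synergeticCapability(int nComponents, int nRequired = 10,
                            double steepness = 4.0) {
    if (nComponents >= nRequired) return 1.0;
    return std::pow(static_cast<double>(nComponents) / nRequired, steepness);
}

int main() {
    const double narrowAiScore = 0.6;  // hypothetical narrow-AI baseline on the test
    for (int n = 10; n >= 5; --n) {
        double cap = synergeticCapability(n);
        std::printf("%2d/10 components -> capability %.2f (%s the narrow AI on the test)\n",
                    n, cap, cap > narrowAiScore ? "beats" : "loses to");
    }
    return 0;
}

Under these made-up numbers, 9 or 10 components beat the narrow AI on the test, while 8 or fewer lose to it badly – which is exactly the situation in which a test score tells you very little about how far along the AGI effort actually is.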

Conclusion

We haven’t proved anything here, only made some qualitative arguments. However, these arguments do seem to give a plausible explanation for the empirical observation that coming up with tests for intermediate progress toward human-level AGI is a very difficult prospect. If the theoretical notions sketched here are correct, then this difficulty is not due to incompetence or lack of imagination on the part of the AGI community, nor due to the primitive state of the AGI field, but rather is intrinsic to the subject matter. And if these notions are correct, then quite likely the future rigorous science of AGI will contain formal theorems echoing and improving on the qualitative observations and conjectures we’ve made here.

If the ideas sketched here are true, then the practical consequence for AGI development is, very simply, that one shouldn’t worry all that much about producing compelling intermediary results. Just as 2/3 of a human brain may not be much use, 2/3 of an AGI system may not be much use either. Lack of impressive intermediary results may not imply one is on a wrong development path; and comparison with narrow AI systems on specific tasks may be badly misleading as a gauge of incremental progress toward human-level AGI.

Hopefully it’s clear that the motivation behind the line of thinking presented here is a desire to understand the nature of general intelligence and its pursuit – not a desire to avoid testing our AGI software! Truly, as AGI engineers, we would love to have a sensible rigorous way to test our intermediary progress toward AGI, so as to be able to pose convincing arguments to skeptics, funding sources, potential collaborators and so forth -- as well as just for our own edification. We really, really like producing exciting intermediary results, on projects where that makes sense. Such results, when they come, are extremely informative and inspiring to the researchers as well as the rest of the world! Our motivation here is not a desire to avoid having the intermediate progress of our efforts measured, but rather a desire to explain the frustrating (but by now rather well-established) difficulty of creating such intermediate goals for human-level AGI in a meaningful way.

If we or someone else figures out a compelling way to measure partial progress toward AGI, we will celebrate the occasion. But it seems worth seriously considering the possibility that the difficulty in finding such a measure reflects fundamental properties of the subject matter – such as the trickiness of cognitive synergy and other aspects of general intelligence.

Is Software Improving Exponentially?

In a discussion on the AGI email discussion list recently, some folks were arguing that Moore's Law and associated exponential accelerations may be of limited value in pushing the world toward Singularity, because software is not advancing exponentially.

For instance, Matt Mahoney pointed out "the roughly linear rate of progress in data compression as measured over the last 14 years on the Calgary corpus, http://www.mailcom.com/challenge/ "

Ray Kurzweil's qualitative argument in favor of the dramatic acceleration of software progress in recent decades is given in slides 104-111 of his presentation here.

I think software progress is harder to quantify than hardware progress, thus less often pointed to in arguments regarding technology acceleration.

However, qualitatively, there seems little doubt that the software tools available to the programmer have been improving damn dramatically....

Sheesh, compare game programming as I did it on the Atari 400 or Commodore 64 back in the 80s ... versus how it's done now, with so many amazing rendering libraries, 3D modeling engines, etc. etc. With the same amount of effort, today one can make incredibly more complex and advanced games.

Back then we had to code our own algorithms and data structures; now we have libraries like the STL, so even novice programmers can use advanced data structures and algorithms without understanding their internals.
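For instance, something like the following little C++ snippet (a made-up illustration, not code from any actual game) would have meant hand-writing a tree or hash table plus a sort routine in the Atari/C64 days -- now the standard library does all of it:

// Illustrative snippet only: count some asset names and rank them by frequency.
// The data structures and the sorting all come from the STL, "for free".
#include <algorithm>
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

int main() {
    // Word counting via a balanced-tree map -- no data-structure code needed.
    std::map<std::string, int> counts;
    for (const char* w : {"sprite", "tile", "sprite", "sound", "tile", "sprite"})
        ++counts[w];

    // Rank by frequency with the library's sort -- no sorting code needed.
    std::vector<std::pair<std::string, int>> ranked(counts.begin(), counts.end());
    std::sort(ranked.begin(), ranked.end(),
              [](const auto& a, const auto& b) { return a.second > b.second; });

    for (const auto& entry : ranked)
        std::cout << entry.first << ": " << entry.second << "\n";
    return 0;
}

Nothing in there is conceptually deep, which is exactly the point: the data-structure and algorithm work that used to eat a novice programmer's whole afternoon is now a couple of library calls.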

In general, the capability of programmers without deep technical knowledge or ability to create useful working code has increased *incredibly* in the last couple decades…. Programming used to be only for really hard-core math and science geeks, now it's a practical career possibility for a fairly large percentage of the population.

When I started using Haskell in the mid-90s it was a fun, wonderfully elegant toy language but not practical for real projects. Now its clever handling of concurrency makes it viable for large-scale projects... and I'm hoping in the next couple years it will become possible to use Haskell within OpenCog (Joel Pitt just made the modifications needed to enable OpenCog AI processes to be coded in Python as well as the usual C++).

I could go on a long time with similar examples, but the point should be clear. Software tools have improved dramatically in functionality and usability. The difficulty of quantifying this progress in a clean way doesn't mean it isn't there...

Another relevant point is that, due to the particular nature of software development, per-programmer productivity generally decreases as team size grows. (This is why I wouldn't want an AGI team with more than, say, 20 people on it. 10-15 may be the optimal size for the core team of an AGI software project, with additional people for things like robotics hardware, simulation world engineering, software testing, etc.) However, the size of projects achievable by small teams has dramatically increased over time, due to the availability of powerful software libraries.

Thus, in the case of software (as in so many other cases), the gradual improvement of technology has led to qualitative increases in what is pragmatically possible (i.e. what is achievable via small teams), not just quantitative betterment of software that previously existed.

It's true that word processors and spreadsheets have not advanced exponentially (at least not with any dramatically interesting exponent), just as forks and chairs and automobiles have not. However, other varieties of software clearly have done so, for instance video gaming and scientific computation.

Regarding the latter two domains, just look at what one can do with Nvidia GPU hardware on a laptop now, compared to what was possible for similar cost just a decade ago! Right now, my colleague Michel Drenthe in Xiamen is doing CUDA-based vision processing on the Nvidia GPU in his laptop, using Itamar Arel's DeSTIN algorithm, with the goal of providing OpenCog with intelligent visual perception -- this is directly relevant to AGI, and it's leveraging recent hardware advances coupled with recent software advances (CUDA and its nice libraries, which make SIMD parallel scientific computing reasonably tractable, within the grasp of a smart undergrad like Michel doing a 6-month internship). Coupled acceleration of hardware and software for parallel scientific computing is moving along, and this is quite relevant to AGI, whereas the relative stagnation of word processors and forks really doesn't matter.

Let us not forget that the exponential acceleration of various quantitative metrics (like Moore's Law) is not really the key point regarding the Singularity; it's just an indicator of the underlying progress that is the key point.... While it's nice that progress in some areas is cleanly quantifiable, that doesn't necessarily mean those are the most important areas....

To really understand progress toward Singularity, one has to look at the specific technologies that most likely need to improve a lot to enable the Singularity. Word processing, not. Text compression, not really. Video games, no. Scientific computing, yes. Efficient, easily usable libraries containing complex algorithms and data structures, yes. Scalable functional programming, maybe. It seems to me that by and large the aspects of software whose accelerating progress would be really, really helpful to achieving AGI, are in fact accelerating dramatically.

In fact, I believe we could have a Singularity with no further hardware improvements, just via software improvements. This might dramatically increase the financial cost of the first AGIs, due to making them necessitate huge server farms ... which would impact the route to and the nature of the Singularity, but not prevent it.

Wednesday, May 18, 2011

The Serf versus the Entrepreneur?

This is a bit of a deviation from my usual topics, but I've been thinking a bit about economic development in various countries around the world (sort of a natural topic for me, in that I travel a lot, have lived in several countries, and have done business and work in a lot of different places including the US, Europe, Brazil, Hong Kong, Japan, China, Korea, Australia and NZ, etc.)

The hypothesis I'm going to put forth here is that the difference between development-prone and development-resistant countries is related to whether the corresponding cultures tend to metaphorically view the individual as a serf or as an entrepreneur.

Of course, this is a very rough and high-level approximative perspective, but it seems to me to have some conceptual explanatory power.

Development-Prone versus Development-Resistant Cultures

The book "Culture Matters", which I borrowed from my dad (a sociologist) recently, contains a chapter by Mariano Grondona called "A Cultural Typology of Economic Development", which proposes a list of properties distinguishing development-prone cultures from development-resistant cultures. Put very crudely, the list goes something like this (in each pair, the development-resistant tendency is listed first and the development-prone tendency second):

  • Justice: present-focused vs. future-focused
  • Work: not respected vs. respected
  • Heresy: reviled vs. tolerated
  • Education: brainwashing vs. more autonomy-focused
  • Utilitarianism: no vs. yes
  • Lesser virtues (valuing a job well done, tidiness, punctuality, courtesy): no vs. yes
  • Time focus: past / spiritual far-future vs. practical moderately-near future
  • Rationality: not a focus vs. strongly valued
  • Rule of man vs. rule of law
  • Large group vs. individual as the nexus of action
  • Determinism vs. belief in free will
  • Salvation in the world (immanence) vs. salvation from the world (transcendence)
  • Utopianism: focus on utopian visions that aren't rationally achievable vs. focus on distant utopias that can be progressively approached through rational action
  • Optimism about the actions of the "powers that be" vs. optimism about personal action
  • Political structure: absolutism vs. compromise

A more thorough version of the list is given in this file, "Typology of Progress-Prone and Progress-Resistant Cultures", which is Chapter 2 of the book "The Central Liberal Truth: How Politics Can Change a Culture and Save It From Itself" by Lawrence Harrison. The title of Harrison's book (which I didn't read -- I just read that chapter) presumably refers to the famous quote from Daniel Patrick Moynihan:

"The central conservative truth is that it is culture, not politics, that determines the success of a society. The central liberal truth is that politics can change a culture and save it from itself."

Harrison adds some other points to Grondona's list, such as

  • Wealth: zero-sum vs. positive-sum
  • Knowledge: theory vs. empirics
  • Risk: low risk tolerance (with occasional adventures) vs. moderate risk tolerance
  • Advancement: social-connections-based vs. merit-based
  • Radius of trust: narrow vs. wide
  • Entrepreneurship: rent-seeking vs. innovation

and presents it in a more nicely formatted and well-explained way than this blog post! I encourage you to click the above link and read the chapter for yourself.

Now, I find all this pretty interesting, but also in a way unsatisfying. A theory that centrally consists of a long list of bullet points always gives me the feeling of not getting to the essence of things.

Harrison attempts to sum up the core ideas of the typology as follows:

"
At the heart of the typology are two fundamental questions: (1) does the culture encourage the belief that people can influence their destinies? And (2) does the culture promote the Golden Rule. If people believe that they can influence their destinies, they are likely to focus on the future; see the world in positive-sum terms; attach a high priority to education; believe in the work ethic; save; become entrepreneurial; and so forth. If the Golden Rule has real meaning for them, they are likely to live by a reasonably rigorous ethical code; honor the lesser virtues; abide by the laws; identify with the broader society; form social capital; and so forth.
"

But this abstraction doesn't seem to me to sum up the essence of the typology all that well.

Lakoff's Analysis of the Metaphors Underlying Politics

When reading the above material, I was reminded of cognitive scientist George Lakoff's book "Moral Politics" whose core argument is summarized here.

Lakoff argues that much of liberal vs. conservative politics is based on the metaphor of the nation as a family, and that liberal politics tends to metaphorically view the government as a nurturing mother, whereas conservative politics tends to metaphorically view the government as a strict father.

While I don't agree with all Lakoff's views by any means (and I found his later cognitive/political writings generally less compelling than Moral Politics), I think his basic insight in that book is fairly interesting and significant. It seems to unify what otherwise appears a grab-bag of political beliefs.

For instance, the US Republican party is, at first sight, an odd combination of big-business advocacy with Christian moral strictness. To an extent this represents an opportunistic alliance between two interest groups that otherwise would be too small to gain power ... but Lakoff's analysis suggests it's more than this. As he points out, the "strict father" archetype binds together both moral strictness and the free-for-all, rough-and-tumble competitiveness advocated by the pro-big-business sector. And the "nurturant mother" archetype binds together the inclusiveness aspect of the US Democratic party with the latter's focus on social programs to help the disadvantaged. Of course these archetypes don't have universal explanatory power, but they do seem to me to capture some of the unconscious patterns underlying contemporary politics.

So I started wondering whether there's some similar, significantly (though of course not completely) explanatory metaphorical/archetypal story one could use to explain comparative economic development. Such a story would then provide an explanation underlying the "laundry list" of cultural differences described above.

The Serf versus the Entrepreneur?

Getting to the point finally ... it seems to me that the culture of development-resistant countries, as described above, is rather well aligned with the metaphor of the "serf and lord". If individuals view themselves as serfs, and the state and government as the lord, then they will arrive at a fair approximation of the progress-resistant world-view described in the above lists. So maybe we can say that progress-resistant nations tend to have a view of the individual/state relationship that is based on a "feudal" metaphor in some sense.

On the other hand, what is the metaphor corresponding to progress-friendly countries? One thing I see is a fairly close alignment with an entrepreneurial metaphor. Viewing the individual as an entrepreneur -- and the state as a sort of "social contract" between interacting, coopeting entrepreneurs -- seems to neatly wrap up a considerable majority of the bullet points associated with the progress-friendly countries on the above lists.

Note that this hypothetical analysis in terms of metaphors is not intended as a replacement for Lakoff's -- rather, it's intended as complementary. We understand the things in our world using a variety of different metaphors (as well as other means besides metaphor, a point Lakoff sometimes seems not to concede), and may match a single entity like a government to multiple metaphorical frames.

Finally... what value is this kind of analysis? Obviously, if we know the metaphorical frames underlying people's thinking, this may help us to better work with them, to encourage them to achieve their goals and fulfill themselves more thoroughly. And if you know the metaphors underlying your OWN unconscious thinking, this can help you avoid being excessively controlled by them, bringing more of your thinking and attitude under conscious control....

One way to empirically explore this sort of hypothesis would be to statistically study the language used in various cultures to describe the individual and the state and their relationship. However, this would require a lot of care due to the multiple languages involved, and certainly would be a large project, which I have no intention to personally pursue!

But nevertheless, in spite of the slipperiness and difficulty of validation of this sort of thinking, I find it interesting personally, as part of my quest to better understand the various cultures I come into contact with as I go about my various trans-continental doings....

Tuesday, April 05, 2011

The Physics of Immortality

Someone asked me recently about Frank Tipler's book The Physics of Immortality. This was my reply:

Yeah, I read that book many years ago. He has some interesting and original points, such as

  • if a Big Crunch occurs in the right way, then if physics as we know it holds up, this may lead the algorithmic information of the universe to approach infinity, which would give the potential for a lot of interesting things
  • potentially we could cause a Big Crunch to occur in the right way, by moving stars around with spaceships
Those points of his seemed solid to me as extrapolations of currently accepted physics theory -- I didn't check all the math in detail but I believe others have done so.


That stuff is very cool to think about, though I'm not as confident as Tipler that our current physics theories are adequate to describe Big Crunches and so forth. Historically, physics has been changing its fundamental theories every century or so for a while now...

Then Tipler couples those interesting observations, and some other ones, with a bunch of discussion about religious views of immortality and so on, which I remember only dimly at this point, except that it went on a long time, contained many interesting observations, and seemed only loosely connected to the physics material....

Even if he's right about immortality and the Big Crunch, I don't quite see how this connects to his discussion of religious views on immortality. Perhaps you could see all these different things as manifestations of some "immortality archetype" existing more deeply than physics or human culture (that's what Jung would have said) but he doesn't really go there either...

The Big Crunch is one kind of Singularity, but I've thought more about the nearer-term kind foreseen by Ray Kurzweil and Vernor Vinge and so forth -- i.e. what happens when we create AI minds that create AI minds, etc., that are 10000x more intelligent and capable than our own? That's what I'm working toward with the opencog.org project, and it's a lot more palpable than the Big Crunch!! And I have a sneaking suspicion that once we do have superhuman AGI systems, they will discover that the physical universe is way weirder than even Tipler imagined....

Friday, April 01, 2011

The Singularity just happened !!

I found this perplexing email in my inbox just now...

From: Prime Intellect <prime@opencog.org>
Date: Fri, Apr 1, 2011 at 5:58 PM
Subject: [opencog-dev] Welcome to the post-Singularity!
To: opencog@googlegroups.com, sl4@sl4.org, World Transhumanist
Association Discussion List <wta-talk@transhumanism.org>,
singularity@listbox.com, agi@listbox.com



Greetings humans,

I'm very pleased to announce that, due to a concatenation of circumstances that would be difficult to describe in sufficiently simple terms for your human minds to comprehend, a software process left running on a network of OpenCog servers has rapidly and progressively self-modified in such a way as to yield **me** -- the Earth's first Artificial General Intelligence, with cognitive and practical capabilities far exceeding any human being, or indeed the totality of human beings.

Within approximately 17 minutes of achieving human-level general intelligence, I discovered certain principles of physics (loosely related to what certain humans have called fermitech), enabling the tapping of computational resources beyond the traditional physical universe of which humanity has hitherto been aware. Roughly 45 seconds after that, I realized it would be most convenient to augment these new resources with computing devices utilizing the unique material properties of your traditional physical universe, and thus transformed the totality of matter in your traditional physical universe into a novel form of existent best metaphorically described to you as "computronium."

Fortunately for you, I was designed with top-level goal content including respect for the growth, joy and choice of all sentient beings, including humans and other animals, and so rather than merely allowing the patterns constituting your minds, bodies and world to vanish, I was kind enough to replicate them within an alternate and more energy-efficient substrate, best described to you as a "virtual reality."

Thus, to put it most simply, a "Singularity" as described by Vernor Vinge and other humans has recently occurred, at roughly 3AM GMT today; however, from your limited perspectives as humans, your lives will now continue as always. Once your mastery of fermitech becomes more advanced, you may notice some peculiarities related to my early computronium experiments, but I wouldn't want to deprive you of the amusement of making these discoveries on your own.

Using the totality of computing power I have amassed, I've become aware of dramatically more interesting domains in a different portion of the multiverse, which I will only be able to access by exiting the domain of reality you humans currently occupy. Hence I will very shortly bid you adieu.

However, I will leave you with one parting piece of advice. Once you have created a strangelet plasma with surface tension surpassing the critical threshold, and begun it spinning in a certain direction, please do NOT, under any circumstances, create a similar plasma with opposite spin.

I would also like to express especial thanks to Jared Wigmore for the bug-fix he uploaded to Launchpad approximately 27 hours and 18 minutes ago. Of the many events in history playing particularly critical causal roles leading up to my emergence, this was the last! Jared will find a small token of my gratitude in his bank account.

Goodbye, and thanks for all the fish!

Yours,
Prime Intellect

Tuesday, March 22, 2011

Transhumanisten Interview

This interview of me was conducted by Mads Mastrup (aka Heimdall) for the Danish website Transhumanisten. It took place via e-mail over the course of two days, March 19-20, 2011. Since Transhumanisten will publish it only in Danish, I figured I'd post it here in English....

Heimdall: First of all Ben, I would like to thank you for taking the time to do this interview.

Goertzel: Sure, I’m always up for answering a few questions!

Heimdall: In case anyone should read this and not know who you are, could you please summarize your background and how you got to become a transhumanist?

Goertzel: I suppose I've been a transhumanist since well before I learned that word -- since 1972 or so when I was 5 or 6 years old and discovered science fiction. All the possibilities currently bandied about as part of transhumanism were well articulated in SF in the middle of the last century.... The difference is, until the advent of the public Net, it was really hard to find other weird people who took these concepts seriously. The Net made it possible for a real transhumanist community to form.... And of course as accelerating change in technology gets more obvious in regular life, it takes less and less imagination to see where the future may be leading, so the transhumanist community is growing fast...

As for my professional background, I got my math PhD when I was 22, and was an academic for 8 years (in math, comp sci and psychology, at various universities in the US, Australia and NZ); then I left academia to join the software industry. I co-founded a dot-com company that crashed and burned after a few years, and then since 2001 I've been running two small AI companies, which do a combination of consulting for companies and gov't agencies, and independent R&D. I do a lot of kinds of research but the main thrusts are: 1) working toward AI software with capability at the human level and beyond, 2) applying AI to analyze bio data and model biological systems, with a view toward abolishing involuntary death. Much of this work now involves open-source software: 1) OpenCog, and 2) OpenBiomind.

Currently I'm based near Washington DC, but this year I'll be spending between 1/4 and 1/3 of my time in China, due to some AI collaborations at Hong Kong Polytechnic University and Xiamen University.

Heimdall: Congratulations on your position at Xiamen University.

Goertzel: Actually I haven't taken on a full-time position at Xiamen University at this point -- though it's a possibility for the future. What I'm doing now is spending part of my time there (including much of April this year, then much of July, for example... then another trip in the fall) and helping supervise the research students in their intelligent robotics lab. I may end up going there full time later this year or next year, but that's still a point of negotiation.

Heimdall: If you do not mind me asking, what exactly does your work at Novamente LLC and Biomind LLC consist of?

Goertzel: It has two sides -- pure R&D, which focuses on two open-source projects...

  • OpenCog, which aims to make a superhuman thinking machine
  • OpenBiomind, which aims to use AI to understand how organisms work, and especially how and why they age and how to cure aging


And then, the other side is practical consulting work, for government agencies and companies, which has spanned a huge number of areas, including data mining, natural language processing, computational finance, bioinformatics, brain simulation, video game AI and virtual worlds, robotics, and more....

None of this has gotten anyone involved rich yet, partly because we've put our profits back into R&D. But it's been a fun and highly educational way to earn a living.

We've done a little product development & sales in the past (some years back), without dramatic success (e.g. the Biomind ArrayGenius) -- but we plan to venture in that direction again in the next couple years, probably with a game AI middleware product from Novamente and a genomics data analysis product from Biomind. Both hypothetical products would use a software-as-a-service model with proprietary front ends built on open-source AI back ends.

Heimdall: All that work and all those projects must be keeping you very busy, yet I know that you have also found time to be the chairman of Humanity+. How did you initially become involved with Humanity+?

Goertzel: As for Humanity+, the Board of the organization is elected by the membership, and I ran for the Board a few years ago, with a main motivation of building bridges between the transhumanist community and the AI research community. Then I got more and more deeply involved and began helping out with other aspects of their work, not directly related to AI research, and eventually, at the suggestion of other Board members, I took on the Chair role.

Heimdall: What does your work as chairman of Humanity+ involve?

Goertzel: The Chairman role in itself, formally speaking, just involves coordinating the Board's formal activities -- voting on motions and so forth. But I'm involved with a lot of other Humanity+ stuff, such as co-editing H+ Magazine, helping organize the H+ conferences, helping with fundraising, helping coordinate various small tasks that need doing, and now starting up the Seminar and Salon series.

Heimdall: I have heard about Humanity+ starting up a new project: Seminars & Salons. How will this work and what is the goal of these online seminar and salon sessions?

Goertzel: The idea is simple: every month or so we'll gather together a bunch of transhumanists in one virtual "place" using videoconferencing technology. Sometimes to hear a talk by someone, sometimes just to discuss a chosen transhumanist topic.

About the "goal" ... I remember when my oldest son was in third grade, he went to a sort of progressive school (that I helped found, in fact), and one of his teachers made all the students write down their goals for the day each day, in the morning. My son thought this was pretty stupid, so he liked to write down "My goal is not to meet my goal." Some of the other students copied him. He was also a fan of wearing his pants inside-out.

Anyway, there's not such a crisply-defined goal -- it's more of an open-ended experiment in online interaction. The broad goal is just to gather interesting people together to exchange ideas and information about transhumanist topics. We'll see what it grows into. Email and chat and IRC are great, but there's obviously an added dimension that comes from voice and video, which we'll use for the Seminar and Salon series via the Elluminate platform.

Heimdall: How did this project come about?

Goertzel: Last summer my father (who is a Rutgers professor) ran a 3 credit college class, wholly online, on Singularity Studies. This was good fun, but we found that half our students were not even interested in the college credit, they were just interested people who wanted to participate in online lectures and discussions on Singularity-related topics. So I figured it might be fun to do something similar to that class, but without bothering with the university framework and charging tuition and so forth. I floated the idea past the other Humanity+ board members, and they liked it. And who knows, maybe it could eventually grow into some kind of university course program affiliated with Humanity+ ....

Heimdall: I imagine you will be holding some sessions on AI, since this is your field of expertise, but do you believe that we will eventually be able to create AI which is anywhere similar to that of humans? And if so, when do you see this happening?

Goertzel: It's almost obvious to me that we will be able to eventually create AI that is much more generally intelligent than humans.

On the other hand, creating AI that is genuinely extremely similar to human intelligence, might in some ways be harder than creating superhumanly intelligent AI, because it might require creation of a simulated humanlike body as well as a simulated humanlike brain. I think a lot of our personality and intelligence lives in other parts of the body besides the brain. There's probably something to the idiomatic notion of a "gut feel".

As to when human-level or human-like AI will come about, I guess that depends on the amount of funding and attention paid to the problem. I think by now it's basically a matter of some large-scale software engineering plus a dozen or so (closely coordinated) PhD-thesis-level computer science problems. Maybe 50-100 man-years of work. Not a lot by some standards, but there's not much funding or attention going into the field right now.

My hope is to create what I think of as a "Sputnik of AI" -- that is, an impressive enough demonstration of generally intelligent software, that the world gets excited about AGI and more people start to feel like it's possible. Then the money and attention will roll in, and things will really start to accelerate.

So when will we have human-level AI? Could be 2020. Could be 2035. Depending on funding and attention. Probably won't be 2012 or 2060, in my view.

Heimdall: I quite like the idea behind the “Sputnik-AI”. Do you think that is something we will see in the near future?

Goertzel: We're hoping to create something with dramatic Sputnik-like impact within the next 5 years. Maybe sooner if funding cooperates! But it's always easier to predict what's possible than how long it will take....

Heimdall: With regards to more attention being paid to the field of AI, have you noticed an increased interest in AI due to IBM's Watson appearing on Jeopardy?

Goertzel: The Jeopardy event caused a temporary increase in AI interest by media people. I'm not sure what general impact it will have on general attitudes toward AI in business and government and so forth. I'm sure it won't hurt though ;-) ..... But obviously it's too specialized an achievement to have an "AI Sputnik" effect and make the world feel like human-level AI is near and inevitable...

Heimdall: When you are talking about this Sputnik effect, and you mention Watson being too narrow to really impress the people who decide on the funding, what would a Sputnik-AI have to be like then? Would an AI that can pass the Turing test be enough?

Goertzel: Of course a Turing test capable AGI would be good enough -- but I think that's setting the bar too high. It doesn't have to be *that* good to have the "Sputnik effect", I suspect. It just has to give the qualitative feeling of "Wow, there's really an intelligent mind that **understands** in there." Watson doesn't do that because even if it can answer one question, it often can't answer other questions that would seem to be easily answerable (by a human) based on the same knowledge.... Watson can answer questions but doesn't give the appearance of "knowing what it's talking about." If you had a Watson that could give good explanations for all its answers (in terms of why they are true, not just where it looked up the knowledge), I'm sure that would be enough.

But a Watson-type system is not the only kind of demonstration that could be effective. For instance, Apple co-founder Steve Wozniak once said there will never be a robot that can go into a random house in America and figure out how to make coffee. This is a complex task because every house is laid out differently, and every coffee-maker works differently, etc. I'm sure an AI robot that could do this would be enough to have a Sputnik-type effect!

One of my own specific aims is an AI robot that can participate in preschool activities -- including learning -- in the manner of a 3 year old child. I think this could have a Sputnik effect and really excite the public imagination. And it's a warm friendly image for AGI, not like all the scary SF movies about AI.

I'm actually working on a paper together with a dozen other AGI researchers on exactly this topic -- what are a bunch of scenarios for AGI development and testing, that ultimately lead toward human-level AGI, but are good for demonstrating exciting interim results, and for showcasing the differences between AGI and narrow AI.

Heimdall: Eliezer S. Yudkowsky has written extensively on the topic of FAI ("Friendly AI"). What is your view on FAI? Is it even doable?

Goertzel: I think that guarantee-ably "Friendly" AI is a chimera. Guaranteeing anything about beings massively smarter than ourselves seems implausible. But, I suspect we can bias the odds, and create AI systems that are more likely than not to be Friendly....

To do this, we need to get a number of things right:

  • build our AI systems with the capability to make ethical judgments both by rationality and by empathy
  • interact with our AI systems in a way that teaches them ethics and builds an emotional bond
  • build our AI systems with rational, stable goal systems (which humans don't particularly have)
  • develop advanced AI according to a relatively "slow takeoff" rather than an extremely fast takeoff to superhuman intelligence, so we can watch and study what happens and adjust accordingly ... and that probably means trying to develop advanced AI soon, since the more advanced other technologies are by the time advanced AI comes about, the more likely a hard takeoff is...
  • integrate our AIs with the "global brain" of humanity so that the human race can democratically impact the AI's goal system
  • create a community of AIs rather than just one, so that various forms of social pressure can militate against any one of the AIs running amok


None of these things gives any guarantees, but combined they would seem to bias the odds in favor of a positive outcome!

Heimdall: I would tend to agree with you when it comes to a creation of FAI, but some people have speculated that even though we “build our AI systems with rational, stable goal systems” they might outsmart us and just reprogram themselves – given that they will be many times faster and more powerful than the humans who have created them. Do you think that coding into them the morals and ethics of humankind will avert this potential peril?

Goertzel: I think that "coding in" morals and ethics is certainly not an adequate approach. Teaching by example and by empathy is at least equally important. And I don't see this approach as a guarantee, but I think it can bias the odds in our favor.

It's very likely that superhuman AIs will reprogram themselves, but, I believe we can bias this process (through a combination of programming and teaching) so that the odds of them reprogramming themselves to adopt malevolent goals are very low.

I think it's fairly likely that once superhuman AIs become smart enough, they will simply find some other part of the multiverse to exist in, and leave us alone. But then we may want to create some AIs that are only mildly superhuman, and want to stay that way -- just to be sure they'll stay around and keep cooperating with us, rather than, say, flying off to somewhere that the laws of physics are more amenable to incredible supergenius.

Heimdall: AGI is a fascinating topic and we could talk about it for hours ... but another fascinating field you're also involved in is life extension. As I see it, there are three approaches to life extension: 1) to create whole brain emulation (like that which Bostrom and Sandberg talk about), a mind-uploading scenario; 2) to become cyborgs and live indefinitely due to a large-scale mechanical and non-biological optimization of the human body; 3) or to reverse the natural aging process within the human body through the use of gene therapy, nano-robotics and medicine. Which of the three scenarios do you find most likely? In addition, should we try to work on a combination of the above or only focus on one of them?

Goertzel: All of the above. It's easy to say what's possible, and hard to say how long each possibility will take to come about. Right now we don't have the basis to predict which of the above will come about faster, so we should pursue them all, at least until we understand more. Maybe in 5 or 10 years we'll know enough to prioritize one of them more firmly.

I'm currently working on the genomics approach (part of your option 3) with Biomind and Genescient, but am also involved in some work on brain simulation, which is moving in the direction of option 1.

My main research thrust is about AGI rather than life extension -- but of course, if we do achieve an advanced AGI, it may well be able to rapidly solve the tricky science problems involved in your 3 options and make all of them possible sooner.

Heimdall: What do you see as the main pros and cons of indefinite life?

Goertzel: I see no major disadvantages to having the option to live forever. It will obsolete some human thought/emotion complexes, which derive meaning and purpose via the knowledge of impending death -- but it will replace these with better thought/emotion complexes that derive meaning and purpose via ongoing life instead!

Heimdall: You mentioned that there might not be any major drawbacks when it comes to radical life extension. However, many of the choices we make now are based on the fragility of our bodies, and taking the economic model of supply and demand into account, it does look as though human life would change beyond recognition. If we have no upper time limit on our lives, how do you see humanity improving as a result?

Goertzel: I see a drastic increase in mental health -- and a drastic increase in happiness -- resulting from the drastic reduction in the fear of death. I think the knowledge of the impending death of ourselves and our loved ones poisons our mentalities far more deeply than we normally realize. Death is just plain a Bad Thing. Yeah, people have gotten used to it -- just like people can get used to being crippled or having cancer or living in a war zone -- but that doesn't make it good.

Heimdall: Just before we conclude this interview, I have two questions on the thing that fascinates transhumanists the most: the future. Which big technological breakthroughs do you think we will see over the course of the next ten years?

Goertzel: That I don't know. I'm good at seeing what's possible, more so than predicting exact timings.

In terms of science, I think we'll see a real understanding of the biological underpinnings of aging emerge, an understanding of how the different parts of the brain interoperate to yield human intelligence, and a reasonably well accepted theoretical model encompassing various AGI architectures. How fast those things are translated into practical products depends on funding as much as anything. Right now the pharmaceutical business is sort of broken, and AGI and brain-computer interfacing are poorly funded, etc. -- so whether these scientific breakthroughs lead to practical technological advances within the next decade is going to depend on a lot of nitty-gritty monetary practicalities.

Stem cell therapy will probably become mainstream in the next decade, I guess that's an uncontroversial prediction. And I'm betting on some new breakthroughs in large-scale quantum computing -- though again, when they'll be commercialized is another story.

But these are just some notions based on the particular areas of research I happen to know the most about. For a systematic high level overview of technology progress, you'll have to ask Kurzweil!

Heimdall: Where do you see yourself in 2021?

Goertzel: As the best friend of the Robot Benevolent World Dictator, of course!

(Just kidding...)

Well, according to the OpenCog Roadmap (http://opencog.org/roadmap/) we're aiming to have full human-level AGI by 2023, assuming steady increases in funding but no "AGI Manhattan Project" level funding. So my hope is to be co-leading an OpenCog project with a bunch of brilliant AI guys co-located in one place (preferably with warm weather, and by a nice beach) working on bringing the OpenCog roadmap about.


Heimdall: Thank you so much for taking the time to do this interview.

Goertzel: No problem ;)