Thursday, December 08, 2005

A General Theory of the Development of Forms (wouldn't it be nice to have one?)

This blog entry briefly describes a long-term conceptual research project I've been mulling over for a while: trying to figure out some sort of "general theory of the development of forms/patterns in growing complex systems."

Since the Novamente AGI high-level design and the "patternist philosophy of mind" are basically completed and stable for a while (though I'm still engaged with writing them up), I need a new conceptual obsession to absorb the extremely-abstract-thinking portion of my brain... ;-)

Thinking about the development of forms, I have in mind three main specific areas:

  • developmental psychology (in humans and AI's)
  • epigenesis in biological systems
  • the growth of the early universe: the emergence of physical law from lawlessness, etc. (cf John Wheeler)

Each of these is a big area and I've decided to proceed through them in this order. Maybe I will never get to the physics part and will just try to abstract a general theory of development from the first two cases, we'll see.

I also have an intuition that it may be useful to use formal language theory of some sort as a conceptual tool for expressing developmental stages and patterns. Piaget tried to use abstract algebra in some of his writings, which was a nice idea, but didn't quite work. This ties in with Jerry Fodor's notion of a "language of thought", which I don't quite buy in all the senses he means it, but which may have some real meat to it. It may be that developing minds at different stages are best described as using languages of thought of differing expressive power. I don't know if anyone has taken this approach in the developmental psych literature.

For instance, it's arguable that quantifier binding is only added to the human language of thought at Piaget's formal stage, and that recursion is only added to the human language of thought at Piaget's concrete operational stage (which comes along with phrase structure syntax as opposed to simpler proto-language). What I mean by "X is added to the human language of thought at stage S" is something like "X can be used with reasonable generality and fluidity at stage S" -- of course many particular instances of recursion are used before the concrete operational phase, and many particular instances of quantifier binding are used before the formal phase. But the full "syntax" of these operations is not mastered prior to the stages I mentioned, I suggest. (Note that I am using Piaget's stage-labels only for convenience; I don't intend to use them in my own theory of forms, and if I take a stage-based approach at all then I will define my own stages.)
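To illustrate the kind of expressive-power distinction I have in mind, here is a toy Python sketch -- the grammars and vocabulary are invented purely for illustration, not drawn from any psych literature. A proto-language is just a finite stock of templates, whereas a phrase-structure grammar with a recursive rule generates unboundedly nested utterances:

```python
import random

# Toy illustration: a "proto-language" of fixed templates (no recursion)
# versus a phrase-structure grammar with a recursive rule. Grammar and
# vocabulary are invented for illustration.

PROTO = ["want food", "see dog", "go there"]   # finite stock of utterances

CFG = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "PP"]],  # an NP may contain a PP...
    "PP": [["near", "NP"]],                    # ...which contains an NP: recursion
    "VP": [["sees", "NP"], ["sleeps"]],
    "N":  [["dog"], ["child"], ["lake"]],
}

def generate(symbol="S", depth=0, max_depth=4):
    """Expand a nonterminal; cap the depth so generation terminates."""
    if symbol not in CFG:
        return [symbol]                        # terminal word
    options = CFG[symbol]
    if depth >= max_depth:                     # force the shortest expansion
        options = [min(options, key=len)]
    words = []
    for sym in random.choice(options):
        words.extend(generate(sym, depth + 1, max_depth))
    return words

print(random.choice(PROTO))                    # proto-language: e.g. "see dog"
print(" ".join(generate()))                    # e.g. "the child near the lake sleeps"
```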

I note that formal language theory is something that spans different domain areas in the sense that

  • there's discussion of "language of thought" in a general sense
  • natural language acquisition is a key aspect of developmental psych
  • L-system theory shows that formal languages are useful for explaining and modeling plant growth
  • "Symbolic dynamics" uses formal language theory to study the dynamics of chaotic dynamical systems in any domain, see also Crutchfield and Young

So it seems to be a potentially appropriate formal tool for such a project.
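To make the L-system point concrete, here is Lindenmayer's original "algae" system in a few lines of Python -- the code is just my quick sketch, but the rewriting rules are the classic ones:

```python
# Lindenmayer's original "algae" L-system: all symbols are rewritten in
# parallel each generation -- the same formal-language machinery used to
# model plant growth.

RULES = {"A": "AB", "B": "A"}

def step(s):
    return "".join(RULES.get(c, c) for c in s)

s = "A"
for generation in range(6):
    print(generation, s)
    s = step(s)
# 0 A
# 1 AB
# 2 ABA
# 3 ABAAB
# 4 ABAABABA   (string lengths follow the Fibonacci sequence)
```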

I was discussing this with my friend Stephan Bugaj recently and he and I may write a book on this theme if we can pull our thinking together into a sufficiently organized form....

Friday, December 02, 2005

More Venting about Scientific Narrowmindedness and Superintelligent Guinea Pigs

I spent the day giving a talk about bioinformatics to some smart medical researchers and then meeting with them discussing their research and how advanced narrow-AI informatics tools could be applied to help out with it.

AAARRRGGHHH!!! Amazing how difficult it is to get even clever, motivated, knowledgeable biologists to understand math/CS methods. The techniques I presented to them (a bunch of Biomind stuff) would genuinely help with their research, and are already implemented in stable software -- there's nothing too fanciful here. But the "understanding" barrier is really hard to break through -- and I'm not that bad at explaining things; in fact I've often been told I'm really good at it....

We'll publish a bunch of bioinformatics papers during the next year and eventually, in a few more years, the techniques we're using (analyzing microarray and SNP and clinical data via learning ensembles of classification rules; then data mining these rule ensembles, and clustering genes together based on whether they tend to occur in the same high-accuracy classification rules, etc.) will become accepted by 1% or 5% of biomedical researchers, I suppose. And in 10 years probably it will all be considered commonplace: no one will imagine analyzing genetics data without using such techniques....
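For the curious, the flavor of the rule-ensemble mining is roughly as follows -- this is not Biomind's actual code; the rule representation, the accuracy threshold and the co-occurrence scoring are simplifications I've invented for illustration:

```python
from collections import defaultdict
from itertools import combinations

# Sketch: given an ensemble of high-accuracy classification rules, each
# referencing a set of genes, score gene pairs by how often they co-occur
# in the same rule -- the raw material for clustering genes together.
# Rules, gene names and thresholds here are invented for illustration.

rules = [
    {"genes": {"G1", "G7", "G12"}, "accuracy": 0.91},
    {"genes": {"G1", "G7"},        "accuracy": 0.88},
    {"genes": {"G3", "G12"},       "accuracy": 0.85},
]

cooccurrence = defaultdict(float)
for rule in rules:
    if rule["accuracy"] < 0.8:        # keep only high-accuracy rules
        continue
    for a, b in combinations(sorted(rule["genes"]), 2):
        cooccurrence[(a, b)] += rule["accuracy"]

# Gene pairs with high scores are candidates to cluster together
for pair, score in sorted(cooccurrence.items(), key=lambda kv: -kv[1]):
    print(pair, round(score, 2))
```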

Whether Biomind will manage to get rich during this process is a whole other story -- it's well-known that the innovative companies at the early stage of a revolution often lose out financially to companies that enter the game later once all the important ideas have already been developed. But finances aside, I'm confident that eventually, little by little, the approach I'm taking to genetic data analysis will pervade and transform the field, even if the effect is subtle and broad enough that I don't get that much credit for it....

And yet, though this Biomind stuff is complex enough to baffle most bioinformaticists and to be really tough to sell, it's REALLY REALLY SIMPLE compared to the Novamente AI design, which is one or two orders of magnitude subtler. I don't think I'm being egomaniacal when I say that no one else has really appreciated most of the subtlety in the Novamente design -- not even the other members of the Novamente team, many of whom have understood a lot. Which is verrrry different from the situation with Biomind: while the Biomind methods are too deep for most biologists, or most academic journal referees who review our papers, to understand, everyone on the Biomind team fully "gets" the algorithms and ideas.

Whether the subtlety of the Novamente design ever gets to be manifested in reality remains to be determined -- getting funding to pay a small team to build the Novamente system according to the design remains problematic, and I am open to the possibility that it will never happen, dooming me (as I've joked before) to a sort of Babbagedom. What little funding there is for AGI-ish research tends to go to folks who are better at marketing than I am, and who are willing to tell investors the story that there's some kind of simple path to AGI. Well, I don't think there is a simple path. There's at least one complex path (Novamente) and probably many other complex paths as well; and eventually someone will follow one of them if we don't annihilate ourselves first. AGI is very possible with 3-8 years effort by a small, dedicated, brilliant software team following a good design (like Novamente), but if the world can't even understand relatively simple stuff like Biomind, getting any understanding for something like Novamente is obviously going to continue to be a real uphill battle!

Relatedly, a couple weeks ago I had some long conversations with some potential investors in Novamente. But the investors ended up not making any serious investment offer -- for a variety of reasons, but I think one of them was that the Novamente design was too complex for them to easily grok. If I'd been able to offer them some easily comprehensible apparent path to AGI, I bet they would have invested. Just like it would be easier to sell Biomind to biologists if they could grok the algorithms as well as the Biomind technical team. Urrrghh!

Urrrgghhh!! urrrgghh!! ... Well, I'll keep pushing. There are plenty of investors out there.

And the insights keep coming: interestingly, in the last few days a lot of beautiful parallels have emerged between some of our commercial narrow-AI work in computational linguistics and our more fundamental work in AGI (relating to making Novamente learn simple things in the AGI-SIM simulation world). It turns out that there are nice mathematical and conceptual parallels between algorithms for learning semantic rules from corpuses of texts, and the process of learning the functions of physical objects in the world. These parallels tell us a lot about how language learning works -- specifically, about how structures for manipulating language may emerge developmentally from structures for manipulating images of physical objects.

This is exactly the sort of thing I want to be thinking about right now: now that the Novamente design is solid (though many details remain to be worked out, these are best worked out in the course of implementation and testing), I need to be thinking about "AGI developmental psychology," about how the learning process can be optimally tuned and tailored. But instead, to pay the bills and send the kids to college yadda yadda yadda, I'm trying to sell vastly simpler algorithms to biologists who don't want to understand why it's not clever to hunt for biomarkers for a complex disease by running an experiment with only 4 cases and 4 controls. (Answer: because complex diseases have biomarkers that are combinations of genes or mutations rather than individual genes/mutations, and to learn combinational rules distinguishing one category from another, a larger body of data is needed.)
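To put a number on that answer, here's a toy simulation -- entirely my own illustration, with purely random made-up data -- showing how, with only 8 samples, huge numbers of two-gene rules will fit the data perfectly by sheer chance:

```python
import random

# Toy illustration of why 4 cases vs. 4 controls can't support
# combinational biomarker discovery: with only 8 samples, many random
# gene pairs will "perfectly" separate the two groups by chance alone.

random.seed(0)
n_genes, n_samples = 1000, 8
labels = [1, 1, 1, 1, 0, 0, 0, 0]          # 4 cases, 4 controls
# random binary expression calls -- no real signal anywhere
data = [[random.randint(0, 1) for _ in range(n_samples)] for _ in range(n_genes)]

perfect = 0
for i in range(n_genes):
    for j in range(i + 1, n_genes):
        # rule: "gene i high AND gene j high" predicts case
        preds = [data[i][s] & data[j][s] for s in range(n_samples)]
        if preds == labels:
            perfect += 1
print(perfect, "gene-pair rules fit all 8 samples perfectly -- all spurious")
```

On random data with no signal at all, hundreds of gene-pair rules classify all 8 samples perfectly -- which is exactly why a 4-versus-4 study can't distinguish real combinational biomarkers from noise.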

Ooops! I've been blogging too long, I promised Scheherazade I would go play with her guinea pigs with her. Well, in a way the guinea pigs are a relief after dealing with humans all day ... at least I don't expect them to understand anything. Guinea pigs are really nice. Maybe a superintelligent guinea pig would be the ultimate Friendly AI. I can't remember ever seeing a guinea pig do anything mean, though occasionally they can be a bit fearful and defensive....

Tuesday, November 29, 2005

Post-Interesting

Hi all,

I have launched a second blog, which is called Post-Interesting

www.post-interesting.com

and I have invited a number of my friends to join me in posting to it (we'll see if any of them actually get around to it!).

The idea is that this current blog ("Multiverse According to Ben") will contain more personal-experience and personal-opinion type entries, whereas Post-Interesting will be more magazine-like, containing reviews, interesting links, and compact summaries of highly crisp scientific or philosophical ideas.... (Of course, even my idea of "magazine-like" contains a lot of personal opinions!)

Not that I really have time to maintain one blog let alone two, but from time to time I seem to be overtaken by an irresistible desire to expunge massive amounts of verbiage ;-D

If people make a lot of interesting posts to Post-Interesting then one day it will be a multimedia magazine and put Wired and Cosmopolitan out of business! (For now I just put three moderately interesting initial posts there....)

-- Ben

Wednesday, November 16, 2005

Reality and Religion (a follow-up to earlier posts on Objective/Subjective Reality)

This post is a response to Bob McCue's comments to my earlier blog entry on "Objective and Subjective Reality". Scroll down after going to

http://www.goertzel.org/blog/2005/07/objective-versus-subjective-reality.html

to read his comments.

Bob is a former Mormon and has written extensively and elegantly about his reasons for leaving the faith:

http://mccue.cc/bob/spirituality.htm

He read my blog on objective/subjective reality and my essay on "social/computational/probabilist" philosophy of science

http://www.goertzel.org/dynapsyc/2004/PhilosophyOfScience_v2.htm

and then posed some questions regarding the probabilistic justification of religious beliefs.

Bob: The questions you raise are deep and fascinating ones and unfortunately I don't have time right now to write a reply that does them justice.

However, I can't resist saying a few things ;-)

I was never religious but my ex-wife was and, although this led to numerous unpleasant arguments between us, it also led me to gain some degree of appreciation (OK, not all that much!) for the religious perspective. For her (as a Zen Buddhist) it was never about objective truth at all, it was always about subjective experience -- her own and that of the others in her sangha (religious group). If probability theory was relevant, it was in the context of evaluations like

Probability ( my own spiritual/emotional state is good GIVEN THAT I carry out these religious practices)

>

Probability ( my own spiritual/emotional state is good GIVEN THAT I don't carry out these religious practices)

The evaluation criterion was internal/subjective not external/objective. The actual beliefs of the religion were only evaluated in regard to their subjective effects on the believer's internal well-being. This fits in with a Nietzschean perspective in which "An organism believes what it needs to believe in order to survive", if you replace "survive" with "maximize internal satisfaction" (which ultimately approximately reduces to Nietzsche's "survival" if one takes an evolutionary view in which we have evolved to, on average, be satisfied by things correlated with our genomes' survival).

I am not sure what this has to do with religions like Mormonism though. I think my ex got interested in Zen (in her mid-20's) partly because I had talked to her about it years before that, when as a teenager I had found Huang Po's Zen writings (on exiting the world of thought and ideas and entering the world of pure truth/nothingness) really radical and fascinating. Zen is not very typical of religions and it's questionable whether it really belongs in the "religion" category -- it's a borderline case. It specifically teaches that the external, "objective" world is illusory and urges you to fully, viscerally and spiritually understand this world's construction via the mind. Thus in a Zen perspective the empirical validation or refutation of hypotheses (so critical to science) is not central, because it takes place within a sphere that is a priori considered illusory and deceptive. Because of this Zen tends not to make statements that contradict scientific law; rather it brushes the whole domain of science aside as being descriptive of an illusory reality.

I guess that Mormonism is different in that it makes hypotheses that directly contradict scientific observation (e.g. do Mormons hold the Earth was created 6000 years ago?). But still, I suspect the basic psychological dynamics is not that different. People believe in a religion because this belief helps them fulfill their own goals of personal, social or spiritual satisfaction. Religious people may also (to varying extents) have a goal of recognizing valid patterns in the observed world; but people can have multiple goals, and apparently for religious people the goal of achieving personal/social/spiritual satisfaction thru religion overwhelms the goal of recognizing valid patterns in the observed world. I find nothing very mysterious in this.

Bob: You ask about belief in Kundalini Yoga (another obsession of my ex-wife, as it happens.) I guess that the KY system helps people to improve their own internal states and in that case people may be wise to adopt it, in some cases... even though from a scientific view the beliefs it contains are a tricky mix of sense and nonsense.

However, it seems pretty clear to me that religious beliefs, though they may sometimes optimally serve the individual organism (via leading to various forms of satisfaction), are counterproductive on the species level.

As a scientific optimist and transhumanist I believe that the path to maximum satisfaction for humans as a whole DOES involve science -- both for things like medical care, air conditioning and books and music, and for things like creating AI's to help us and creating nanotech and gene therapy solutions for extending our lives indefinitely.

There's a reason that Buddhism teaches "all existence involves suffering." It's true, of course -- but it was even more true in ancient India than now. There was a lot more starvation and disease and general discomfort in life back then, which is why a suffering-focused religion like Buddhism was able to spread so widely. The "suffering is everywhere" line wouldn't sell so well in modern America or Western Europe, because although suffering still IS everywhere, it's not as extreme and not as major a component of most people's lives. Which is due, essentially, to science. (I am acutely aware that in many parts of the world suffering is a larger part of peoples' lives, but, this does not detract from the point I am making.)

Since religious belief systems detract from accurate observation of patterns in reality, they detract from science and thus from the path with the apparently maximal capacity to lead humanity toward overall satisfaction, even though they may in fact deliver maximal personal satisfaction to some people (depending on their personal psychology).

However, one may argue that some people will never be able to contribute to science anyway (due to low intelligence or other factors), so that if they hold religious beliefs and don't use them to influence the minds of science-and-technology-useful people, their beliefs are doing no harm to others but may be increasing their own satisfaction. Thus, for some people to be religious may be a good thing in terms of maximizing the average current and long term satisfaction of humanity.

There is also a risk issue here. Since religion detracts from science and technology, it maintains humans in a state where they are unlikely to annihilate the whole species, though they may kill each other in more modest numbers. Science gives us more power for positive transformation and also more power for terrible destruction. The maximum satisfaction achievable thru science is higher than thru religion (due to the potential of science to lead to various forms of massively positive transhumanism), but the odds of destruction are higher too. And we really have no way of knowing what the EXPECTED outcome of the sci-tech path is -- the probabilities of transcension versus destruction.

[As I wrote the prior paragraph I realized that no Zen practitioner would agree with me that science has the power to lead to greater satisfaction than religion. Semantics of "satisfaction" aside they would argue that "enlightenment" is the greatest quest and requires no technology anyway. But even if you buy this (which I don't, fully: I think Zen enlightenment is an interesting state of mind but with plusses and minuses compared to other ones, and I suspect that the transhuman future will contain other states of mind that are even more deep and fascinating), it seems to be the case that only a tiny fraction of humans have achieved or ever will achieve this exalted state. Transhumanist technology would seem to hold the possibility of letting any sentient being choose their own state of mind freely, subject only to constraints regarding minimizing harm to others. We can all be enlightened after the Singularity -- if we want to be! -- but we may well find more appealing ways to spend our eternity of time!! -- ]

OK, I drifted a fair way from Mormonism there, back to my usual obsessions these days. But hopefully it was a moderately interesting trajectory.

For a more interesting discussion of Mormonism, check out the South Park episode "All About Mormons." It was actually quite educational for me.

Saturday, October 22, 2005

Quantum Erasers, Psychokinesis and Time Travel

This post is inspired by a study of the “delayed choice quantum eraser” experiment described e.g. at

http://www.bottomlayer.com/bottom/kim-scully/kim-scully-web.htm

http://www.dhushara.com/book/quantcos/qnonloc/eraser.htm

Even though the quantum eraser experiments don’t allow true “backwards causation,” this doesn’t prove that such a thing is impossible. It just proves that there is no way to do it within the commonly accepted constraints of physical law. There is at least one concrete possibility for how currently known physical law may be breakable, in a way that would allow backward causation (and, as an effective consequence, time travel – since being able to cause events in the past would mean being able to create an exact replica of oneself in the past, including a brain-state possessing the feeling of having just been quantum-magically transported into the past).

This possibility is “quantum psychokinesis” – a notion which sounds bizarre, but is apparently supported by a variety of experiments done by respected scientists at various institutions including Princeton University; see

http://www.fourmilab.ch/rpkp/strange.html

The simplest of these experiments involve people trying to influence, by the power of concentration, random events such as the direction of an electron’s spin. A long list of experiments shows that, after some training, people have a weak but real ability to do this. Over tens of thousands of trials people can make electrons spin in the direction they want 51% of the time or so, whereas chance would dictate merely 50%. This is a small difference, but over so many trials it is highly statistically significant.
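(A quick sanity check on the statistics, using the normal approximation to the binomial; the 50,000-trial figure is just an assumed round number in the "tens of thousands" range:)

```python
from math import sqrt, erf

# Normal approximation: how significant is a 51%-vs-50% bias over tens
# of thousands of binary trials? n = 50,000 is an assumed round number.
n, p_observed, p_null = 50_000, 0.51, 0.50
z = (p_observed - p_null) / sqrt(p_null * (1 - p_null) / n)
p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # two-sided
print(f"z = {z:.2f}, two-sided p ~ {p_value:.1e}")   # z ~ 4.47, p ~ 8e-6
```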

Hooking this kind of PK experiment up to a quantum eraser apparatus, one would obtain a practical example of reverse causation. If this kind of PK actually works, then in the context of the above “paradox” situation, for example, it really would be possible for someone on Alpha Centauri to send messages faster than light to someone back home, via biasing the direction of spin of the coupled twin particle observed on Alpha Centauri. The rate of information transmission would be extremely low, since all that PK has ever been observed to do is give a slight statistical bias to events otherwise thought random. But with an appropriate code even a very slow rate of information transmission can be made to do a lot. And hypothetically, if this sort of PK phenomenon is actually real, one has to imagine that AI’s in the future will find ways to amplify it far beyond what the human brain can do.

Quantum Theory and Consciousness

Another highly nerdy and technical blog entry…

I've been working on the last couple chapters of my long-due philosophy-of-mind book "The Hidden Pattern", and one of the chapters is on quantum reality, so I've been re-studying some of the trickier aspects of quantum theory and its interpretation.

In the course of this, I've come to what I think is a clearer understanding of the relation between quantum theory and consciousness, based on the "decoherence" approach to quantum measurement -- see

http://en.wikipedia.org/wiki/Quantum_decoherence

for a refresher on this topic.

This blog entry will make the most sense to readers who are at least a little familiar with quantum theory, at least at the popular-science level.

Unlike what Eugene Wigner suggested back in the 1960’s, we can’t quite say “consciousness is the collapse of the wave function,” because in the decoherence approach the wave function does not collapse – there are merely some systems that are almost-classical in the sense that there is minimal interference between the different parts of their wave function.

Of course, we can always say “everything is conscious” but this doesn’t really solve anything – even if everything is conscious, some things are more conscious than others and the problem of consciousness then is pushed into defining what it means for one thing to have a higher degree of consciousness than another.

The analogue of “consciousness is the collapse of the wave function” in the decoherence approach would seem to be “consciousness is the process of decoherence.” I propose that this is actually correct in a fairly strong sense, although not for an entirely obvious reason.

Firstly, I suggest that we view consciousness as “the process of observing.” Now, “observation,” of course, is a psychological and subjective concept, but it also has a physical correlate. I suggest the following characterization of the physical substrate of observation: Subjective acts of observation physically correspond to events involving the registration of something in a memory from which that thing can later be retrieved.

It immediately follows from this that observation necessarily requires an effectively-classical system that involves decoherence.

But what is not so obvious is that all decoherence involves an act of observation, in the above sense. This is because, as soon as a process decoheres, the record of this process becomes immanent in the perturbations of various particles all around it – so that, in principle, one could reconstruct the process from all this data, even though this may be totally impractical to do. Therefore every event of decoherence counts as an observation, since it counts as a registration of a memory that can (in principle) be retrieved.

Most events of decoherence correspond to registration in the memory of some fairly wide and not easily delineated subset of the universe. On the other hand, some events of decoherence are probabilistically concentrated in one small subset of the universe – for example, in the memory of some intelligent system. When a human brain observes a picture, the exact record of the picture cannot be reconstructed solely from the information in that brain – but a decent approximation can be. We may say that an event of registration is approximately localized in some system if the information required to reconstruct the event in an approximate way is contained in that system. In this sense we may say that many events of consciousness are approximately localized in particular systems (e.g. brains), though in an exact sense they are all spread more widely throughout the universe.

So, just as the Copenhagen-interpretation notion of “wave function collapse” turns out to be a crude approximation of reality, so does the notion of “wave function collapse as consciousness.” But just as decoherence conceptually approximates wave function collapse, so the notion of “decoherence as registration of events in memory as consciousness” conceptually approximates “wave function collapse as consciousness.”

How is this insight reflected in the language of patterns (the theme of my philosophy book – “everything is pattern”)? If a system registers a memory of some event, then in many cases the memory within this system is a pattern in that event, because the system provides data that allows one to reconstruct that event. But the extent to which a pattern is present depends on a number of factors: how simple the representation within the system is, how difficult the retrieval process is, and how closely the retrieved entity approximates the original entity. What we can say is that, according to this definition, the recognition of a pattern is always an act of consciousness. From a physics point of view, though, not all acts of consciousness need to correspond to recognitions of patterns. On the other hand, if one takes a philosophical perspective in which pattern is primary (the universe consists of patterns) then it makes sense to regard pattern-recognition as identical to consciousness (???)
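(One crude way to bundle those three factors into a single number -- my own gloss here, not a formula from the book:)

```latex
% A toy combination of the three factors (my gloss, not from the book):
% m = the memory trace, R = the retrieval process, X = the original event,
% d = a distance measure, c = a complexity measure.
\mathrm{pat}(m, X) =
  \Big(1 - \frac{d(R(m),\, X)}{\|X\|}\Big)
  \cdot
  \Big(1 - \frac{c(m) + c(R)}{c(X)}\Big)
```

The first factor rewards accurate retrieval, the second rewards storage that is simpler than the event itself; the measure drops to zero when the "memory" is as complex as just keeping the event around.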

Of course, none of this forms a solution to the "hard problem of consciousness," which may be phrased as something like "how does the feeling of conscious experience connect with physical structures and dynamics?" This is a philosophically subtler issue and you'll have to wait for "The Hidden Pattern" to read my views on it these days (which are different from anything I've published before). But an understanding of the physical correlates of consciousness is a worthwhile thing in itself, as well as a prerequisite to an intelligent discussion of the “hard problem.”

What do you think?

Too many stupid professors and bureaucrats...

I posted a blog a while ago whining about the annoyingness of the style of writing and thinking most common in academia today.

This is another one, with a slightly different slant.

At the end of the whining, however, I'll include an actual constructive suggestion for how to make some aspects of the academic world better. (Not that I expect my suggestion to have any actual impact!)

As I mentioned before, I've been making a push to submit papers and books for publication recently; something I haven't done much of since leaving academia in the late 90's. It's been quite an experience!

At first I thought I was doing something badly wrong. I have had some publications accepted but my rejection rate has been higher than I expected -- and not because what I'm submitting is bad (really!), mostly just (egads! can you believe it!) because it's unorthodox.

Of course, I'm revising and resubmitting and everything will be published in time. But the process has been educational as well as frustrating. And I've become aware that others whose work is even less radical than mine have been having an even more annoying time with this sort of thing.

I recently got two emails from friends reporting similar experiences to my own.

One is a biologist who recently left a major university for industry and has worked out a truly radical technique for repairing some types of DNA damage. This technique has now been demonstrated in live cells as well as in the test tube. Amazing stuff, with potential to cure some degenerative diseases as well as to slow human aging.

His paper? Rejected without review six times so far. WITHOUT REVIEW each time !!!

Another is an MD who has found particular patterns of DNA mutations that correspond to a couple very well known diseases. But -- oops -- these patterns are more complex than the ones biologists are used to looking at, and they occur in parts of the genome that biologists don't normally like to look at. So, no matter how statistically significant the results, he's got an uphill battle to fight. He's fighting against convention and presupposition. The result: right after he gets some breakthrough results, his government grant funding is cut off.

Compared to the late 80's and early 90's, it seems much more common now to have things rejected without review. At least, this seems to be happening to me moderately often lately (though not a majority of the time), whereas back then I don't remember it ever happening.

A draft of my book on the Novamente design for general intelligence (not fully polished -- that's still in progress) was rejected by a publisher recently -- the rejection didn't surprise me, but the nature of the rejection did. The book wasn't even sent to a reviewer -- instead the editor just sent back a letter saying that their book series was intended for "serious academic works."

I had a bit of an email conversation with the editor, which revealed that he had shown the book to a "very distinguished AI professor" who had commented that due to the broad scope of the book and its claims to address general intelligence, it couldn't be a very serious academic work. Heh. Well, my ideas might be WRONG, but they're definitely just as serious as a lot of other books published. And the book doesn't contain a lot of mathematical proofs and only a handful of experimental results, but, it has more of both than Minsky's Society of Mind -- which also addresses general intelligence (or tries to) -- but wait, Minsky is old and famous, he's allowed to address big topics.... What we want to avoid is young people addressing big and interesting topics, right? But wait, why?

Please understand the nature of my complaint: I'm not pissed because this publisher rejected my book, I'm pissed because it was rejected without being read or even seriously skimmed over. And note that I've had six academic books published before, so it should be obvious to the publisher (who had my résumé) that I'm not a complete raving crackpot.

I had the same experience with a couple bioinformatics papers I recently submitted -- which were nowhere near as eccentric as my book on Novamente, but presented algorithms and approaches radically different from what's typical in the bioinformatics field. Not just rejected -- rejected WITHOUT REVIEW.

Of course, I also had some bioinformatics papers rejected after being reviewed, but by reviewers who plainly understood nothing in the paper. Of course, I could have tried to explain my methods more didactically -- but then the papers would have been rejected for being too long! Tricky, tricky....

Yes, I have had some papers accepted this year, and I have a couple of books (a futurist manifesto of sorts, and an edited volume on AGI) coming out in an academic press later this year. So these are not the whinings of a complete academic failure ;-p

I've been through enough of this crap before to realize that, after enough resubmissions, eventually one's books or papers hit a publisher or journal who sends them to intelligent and open-minded reviewers who actually read the materials they're given and either understand them or admit they don't (so the editor can find someone else who does). Eventually. But it's a long and annoying search process.

The academic community does reward innovators -- sometimes, eventually.... But more often than not it places huge obstacles in the way of innovation, via a publication process that makes it much easier to publish variations on orthodox ideas than unusual approaches. One might argue that this kind of extremely strong bias is necessary to filter out all the crap in the world. But I don't believe it. Major changes to the reviewing process are in order.

Collaborative filtering technology would seem to provide a fairly easy answer. Suppose one assumes, as a basis, that individuals with PhD's (or MD's or other similar degrees) are, on the whole, reasonably valid raters of academic content. Then one can give each PhD a certain number of rating points to allocate each year, and let them use them to rate each others' work. People can then post their work online in resources like arxiv.org, and ratings can then be used to guide individuals to the most important or interesting works.

Journals wouldn't be needed, since the Net and computer printers are so widespread; and book publishers might still exist, but they'd be able to assume that if a book manuscript has received a reasonable number of rating points in its online version, then it's probably worth publishing.
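Mechanically, the scheme could be as simple as this toy sketch (class names, point budgets and so forth are all invented for illustration):

```python
# Toy sketch of the PhD rating-point scheme: each credentialed rater gets
# an annual budget of points to allocate across posted works; a work's
# standing is just its accumulated points. All names/numbers invented.

class Work:
    def __init__(self, title):
        self.title, self.score, self.ratings = title, 0, []

class Rater:
    def __init__(self, name, annual_points=100):
        self.name = name
        self.points_left = annual_points

    def rate(self, work, points):
        if points > self.points_left:
            raise ValueError("annual rating budget exhausted")
        self.points_left -= points
        work.score += points
        work.ratings.append((self.name, points))

paper = Work("Radical DNA-repair technique")
Rater("some_phd").rate(paper, 30)     # readers sort the archive by score
print(paper.title, paper.score)
```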

You can argue that citations play a similar role -- but citations only play a role after a work is published; they don't help with the irritation of getting innovative ideas past conservative referees in the first place.

Anyway I don't have time to work toward implementing an idea like this, so I'll just keep working within the existing, annoying system, unless I manage to gather enough money for my research from business profits or private investments or donations that I don't need to worry about the often-absurd publication game.

Urrrghh!! I can easily see how, facing this kind of crap, young scientists and philosophers give up on trying to think wild and novel thoughts and follow along with everyone else.

Following along certainly would create a lot less hassle.

Or else giving up on the game of seeking reputation and simply wandering around in the woods like Zarathustra (Nietzsche's, not my son; my son Zar only wanders around these days in the simulated woods inside World of Warcraft!) and keeping one's thoughts to oneself (and then foolishly emerging to preach them to the world after a couple decades, only to find that no one understands what the HELL you're talking about...)

Humanity -- gotta love it...

Or -- hmm -- do you ???

Ah well...

Friday, October 07, 2005

Immortality and the Potential Obsolescence of the Self


I recently co-founded a group called the DC Future Salon that meets once a month in Bethesda, Maryland, to discuss futurist issues (if you live near DC and want to join, join the dcfuture group on yahoogroups). This week our salon meeting focused on the notion of immortality. After a nice lecture and movie showing by Immortality Institute founder (and DC Future Salon co-organizer) Bruce Klein, the discussion traveled through various topics, including the viability of cryonics and the politics of discussing immortality among nontranshumanists – and finally, moved on to more philosophical issues, such as the reasons why immortality is desirable. One of the key issues that came up here is the extent to which the individual self, the personal identity – the thing most transhumanists want most to preserve via immortality, much more so than our physical bodies – is actually a real thing worth preserving. Preserving the physical body is, like uploading, just one means to preserving the self. But what is this “self” that’s so valuable to persist throughout time?

There is a lot of neuropsychological research showing that the “self” is in a strong sense an illusion – much like its sister illusion, “free will.” Thomas Metzinger’s recent book Being No One makes this point in admirably thorough detail. The human mind’s image of itself – what Metzinger calls the “phenomenal self” – is in fact a construct that the human mind creates in order to better understand and control itself; it’s not a “real thing.” Various neuropsychological disorders may lead to bizarre dysfunctions in self-image and self-understanding. And there are valid reasons to speculate that a superhuman mind – be it an AI or a human with tremendously augmented intelligence – might not possess this same illusion. Rather than needing to construct for itself a story of a unified “self entity” controlling it, a more intelligent and introspective mind might simply perceive itself as the largely heterogeneous collection of patterns and subsystems that it is. In this sense, individuality might not survive the transcendence of minds beyond the human condition.

The key philosophical point here is: What is the goal of immortality? Or, to put it more precisely: What is the goal of avoiding involuntary death? Is it to keep human life as we know it around forever? That is a valid and non-idiotic goal. Or is it to keep the process of growth alive and flourishing beyond the scope painfully and arbitrarily imposed on it by the end of the human life?

Human life as it exists now is not a constant, it's an ongoing growth process; and for those who want it to be, human life beyond the current maximum lifespan and beyond the traditional scope of humanity will still be a process of growth, change and learning. Fear of death will largely be replaced by more interesting issues like the merit of individuality in its various forms -- and other issues we can't come close to foreseeing yet.

It may be that, when we live long enough and become smart enough, what we find out is that maintaining individuality unto eternity isn't interesting, and it's better to merge into a larger posthuman intelligent dynamical-pattern-system. Or it may be that what we find out is that individuality still seems interesting forever, since there are so many resources available at the posthuman stage, and diversity still seems like an interesting value (plenty of room for both humans and transhuman intelligent dynamical pattern systems!).

The quest for radical life extension is largely about staying around to find out about things like this!

And there is, of course, a familiar and acute irony in observing that -- while these (along with the scientific puzzles of human biology, uploading and so forth) are the interesting issues regarding immortality -- the public discourse on immortality will be focusing on much less fascinating aspects for quite some time to come: aspects like whether living forever is a violation of the will of the divine superbeing who created us all 6000 years ago....


Friday, July 22, 2005

P.S. on objective/subjective reality and consciousness (and future virtual Elvises)

Well, I started writing a followup to my previous blog entry on subjective/objective reality, dealing with issues relating to consciousness and qualia, but it got way too big for a reasonable blog entry, and so I've posted it as an HTML document:

http://www.goertzel.org/new_essays/QualiaNotes.htm

But it's still rough and informal and speculative in the manner of a blog entry, rather than being a really polished essay.

Of course, I have plenty more to say on the topic than what I wrote down there, but -- well -- the usual dilemma ... too many thoughts, too little time to write them all down... I need to prioritize. Entertaining, speculative philosophy only gets a certain fraction of my time these days!

BTW, I wrote about 1/3 of those notes while watching "Jailhouse Rock" with the kids, but I don't know if Elvis's undulating pelvis had any effect on the style or contents of the essay or not. (Wow -- the Elvis phenomenon really makes piquant the whole transhumanist dilemma of "Is humanity really worth preserving past the Singularity or not?"!! ... A decent helping of art, beauty and humor exists there in Elvis-land, sure -- but along with such a whopping dose of pure and unrefined asininity --- whoa.... )

How many of you readers out there agree that the first superhuman AI should be programmed to speak to humans through a simulation of Elvis's face??? ;-D

Tuesday, July 19, 2005

Objective versus subjective reality: Which is primary?

This post is a purely intellectual one -- playing at the border between "blog entry" and "brief philosophical essay"..... It transmits a small portion of the philosophical train of thought I undertook while wandering with Izabela at White Sands National Monument a few weeks ago. Much of that train of thought involved issues such as free will and the emergence of notions of self, will and reality in the infant's mind (the epigenesis of conceptual structures and cognitive dynamics in the infant and toddler mind is much on my mind these days, because in the Novamente AI project we're working on putting together a demonstration of Novamente progressing through the earlier of Jean Piaget's stages of child cognitive development). But what I'll discuss here today is a bit different from that: the relation between objective and subjective reality.

One of my motivations for venturing into this topic is: I've realized that it's wisest to clearly discuss the issue of reality before entering into issues of consciousness and will. Very often, when I try to discuss my theory of consciousness with people, the discussion falls apart because the people I'm talking to want to assume that objective reality is primary, or else that subjective experiential reality is primary. Whereas, to me, a prerequisite for intelligently discussing consciousness is the recognition that neither of these two perspectives on being is primary -- each has their own validity, and each gives rise to the other in a certain sense.

OK, so ... without further ado... : There are two different ways to look at the world, both of which are to some degree sympathetic to me.

One way is to take the objective world, as described by science and society, as primary, and to look at the subjective worlds of individuals as approximations to objective reality, produced by individual physical systems embedded within physical reality.

Another way is to view the subjective, experiential world of the individual (mine, or yours) as primary, and look at "objective reality" as a cognitive crutch that the experiencing mind creates in order to make use of its own experience.

I think both of these views are valid and interesting ones -- they each serve valuable purposes. They don't contradict each other, because the universe supports "circular containment": it's fine to say "objective reality contains subjective reality, and subjective reality contains objective reality." The theory of non-well-founded sets shows that this kind of circularity is perfectly consistent in terms of logic and mathematics. (Barwise and Etchemendy's book "The Liar" gives a very nice exposition of this kind of set theory for the semi-technical reader. I also said a lot about this kind of mathematics in my 1994 book Chaotic Logic, see a messy rough draft version of the relevant chapter here ... (alas, I long ago lost the files containing the final versions of my books!!))
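(For the mathematically inclined: in Aczel's anti-foundation axiom (AFA) treatment of non-well-founded sets, self-referential set equations have unique solutions, so the circularity is provably benign:)

```latex
% Under AFA, the self-membership equation
\Omega = \{\Omega\}
% has a unique solution; likewise the mutual-containment system
x = \{y\}, \qquad y = \{x\}
% has the unique solution x = y = \Omega. Circular containment is
% thus perfectly consistent set theory.
```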

But it's also interesting to ask if either of the two types of world is properly viewed as primary. I'll present here an argument that it may make sense to view either subjective or objective reality as primary, depending on the level of detail with which one is trying to understand things.

My basic line of argument is as follows. Suppose we have two entities A and B, either of which can be derived from the other -- but it's a lot easier to derive B from A than to derive A from B. Then, using the principle of Occam's Razor, we may say that the derivation of B from A is preferable, is more fundamental. (For those not in the know, Occam's Razor -- the maxim of preferring the simplest explanation, from among the pool of reasonably correct ones -- is not just a pretty little heuristic, but is very close to the core of intelligent thought. For two very different, recent explorations of this theme, see Marcus Hutter's mathematical theory of general intelligence; and Eric Baum's book What is Thought (much of which I radically disagree with, but his discussion of the role of Occam's Razor in cognition is quite good, even though he for some reason doesn't cite Ray Solomonoff who conceived the Occam-cognition connection back in the 1960's)).
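(The standard formalization of this maxim, for those who want one, is the Solomonoff-style universal prior, which weights each hypothesis by its description length:)

```latex
% Occam prior: shorter programs/descriptions get exponentially
% more prior probability mass (K = Kolmogorov complexity)
P(H) \;\propto\; 2^{-K(H)}
```

On this reading, whichever derivation is shorter gets correspondingly more credence -- which is exactly the sense of "preferable" used in the argument that follows.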

I will argue here that it's much easier to derive the existence of objective reality from the assumption of subjective reality, than vice versa. In this sense, I believe, it's sensible to say that the grounding of objective reality in subjective reality is primary, rather than vice versa.

On the other hand, it seems that it's probably easier to derive the details of subjective reality from the details of objective reality than vice versa. In this sense, when operating at a high level of precision, it may be sensible to say that the grounding of subjective reality in objective reality is primary, rather than vice versa.

Suppose one begins by assuming "subjective reality" exists -- the experienced world of oneself, the sensations and thoughts and images and so forth that appear in one's mind and one's perceived world. How can we derive from this subjective reality any notion of "objective reality"?

Philip K. Dick defined objective reality as "that which doesn't go away even when you stop believing in it." This is a nice definition but I don't think it quite gets to the bottom of the matter.

Consider the example of a mirage in the desert -- a lake of water that appears in the distance, but when you walk to its apparent location, all you find is sand. This is a good example of how "objective reality" arises within subjective reality.

There is a rule, learned through experience, that large bodies of water rarely just suddenly disappear. But then, putting the perceived image of a large body of water together with the fact that large bodies rarely disappear, and the fact that when this particular large body of water was approached it was no longer there -- something's gotta give.

There are at least two hypotheses one can make to explain away this contradiction:


1. one could decide that deserts are populated by a particular type of lake that disappears when you come near it, or

2. one could decide that what one sees from a distance need not agree with what one sees and otherwise senses from close up.

The latter conclusion turns out to be a much more useful one, because it explains a lot of phenomena besides mirage lakes.

Occam's Razor pushes toward the second conclusion, because it gives a simple explanation of many different things, whereas explanations of form 1 are a lot less elegant, since according to this explanatory style, each phenomenon where different sorts of perception disagree with each other requires positing a whole new class of peculiarly-behaving entity.

Note that nothing in the mirage lake or other similar experiences causes one to doubt the veracity of one's experiences.

Each experience is valid unto itself. However, the mind generalizes from experiences, and takes particular sensations and cognitions to be elements of more general categories. For instance, it takes a particular arrangement of colors to be a momentary image of a "lake", and it takes the momentary image of a lake to be a snapshot of a persistent object called a "lake." These generalizations/categorizations are largely learned via experience, because they're statistically valid and useful for achieving subjectively important goals.

From this kind of experience, one learns that, when having a subjective experience, it's intelligent to ask "But the general categories I'm building based on this particular experience -- what will my future subjective experiences say about these categories, if I'm experiencing the same categories (e.g. the lake) through different senses, or from different positions, etc.?" And as soon as one starts asking questions like that -- there's "objective reality."

That's really all one needs in order to derive objective reality from subjective reality. One doesn't need to invoke a society of minds comparing their subjective worlds, nor any kind of rigorous scientific world-view. One merely needs to posit generalization beyond individual experiences to patterns representing categories of experience, and an Occam's Razor heuristic.

In the mind of the human infant, this kind of reasoning is undertaken pretty early on -- within the first six months of life.

It leads to what developmental psychologists call "object permanence" -- the recognition that, when a hand passes behind a piece of furniture and then reappears on the other side, it still existed during the interim period when it was behind the furniture. "Existed" here means, roughly, "The most compact and accurate model of my experiences implies that if I were in a different position, I would be able to see or otherwise detect the hand while it was behind the chair, even though in actual fact I can't see or detect it there from my current position." This is analogous to what it means to believe the mirage-lake doesn't exist: "The most compact and accurate model of my experiences implies that if I were standing right where that lake appears to be, I wouldn't be wet!"

Notice from these examples how counterfactuality is critical to the emergence of objective from subjective reality. If the mind just sticks to exactly what it experiences, it will never evolve the notion of objective reality. Instead, the mind needs to be able to think "What would I experience if...." This kind of basic counterfactuality leads fairly quickly to the notion of objective reality.

On the other hand, what does one need in order to derive subjective reality from objective reality? This is a lot trickier!

Given objective reality as described by modern science, one can build up a theory of particles, atoms, molecules, chemical compounds, cells, organs (like brains) and organisms -- and then one can talk about how brains embodied in bodies embedded in societies give rise to individual subjective realities. But this is a much longer and more complicated story than the emergence of objective reality from subjective reality.

Occam's-razor-wise, then, "objective reality emerges from subjective reality" is a much simpler story than the reverse.

But of course, this analysis only scratches the surface. The simple, development-psychology approach I've described above doesn't explain the details of objective reality -- it doesn't explain why there are the particular elementary particles and force constants there are, for example. It just explains why objective reality should exist at all.

And this point gives rise to an interesting asymmetry. While it's easier to explain the existence of objective reality based on subjective reality than vice versa, it seems like it's probably easier to explain the details of subjective reality based on objective reality than vice versa. Of course, this is largely speculative, since right now we don't know how to do either -- we can't explain particle physics based on subjectivist developmental psychology, but nor can we explain the nature of conscious experience based on brain function. However, my intuition is that the latter is an easier task, and will be achieved sooner.

So we then arrive at the conclusion that:


  • At a coarse level of precision, "subjectivity spawns objectivity" is a simpler story than vice versa
  • At a higher level of precision, "objectivity spawns subjectivity" is a simpler story than vice versa

So, which direction of creation is more fundamental depends on how much detail one is looking for!

This is not really such a deep point -- but it's a point that seems to elude most philosophers, who seem to be stuck either in an "objective reality is primary" or "subjective reality is primary" world-view. It seems to me that recognizing the mutual generation of these two sorts of reality is prerequisite for seriously discussing a whole host of issues, including consciousness and free will. In my prior writings on consciousness and will I have taken for granted this kind of mutual-generationist approach to subjectivity/objectivity, but I haven't laid it out explicitly enough.

All these issues will be dealt with in my philosophy-of-mind book "The Hidden Pattern", which I expect to complete mid-fall. I wish I had more time to work on it: this sort of thinking is really a lot of fun. And I think it's also scientifically valuable -- because, for example, I think one of the main reasons the field of AI has made so little progress is that the leading schools of thought in academic and industrial AI all fall prey to fairly basic errors in the philosophy of mind (such as misunderstanding the relation between objective and subjective reality). The correct philosophy of mind is fairly simple, in my view -- but the errors people have made have been quite complicated in some cases! But that's a topic for future blog entries, books, conversations, primal screams, whatever....

More later ... it's 2AM and a warm bed beckons ... with a warm wife in it ;-> ... (hmm -- why this sudden emphasis on warmth? I think someone must have jacked the air conditioning up way too high!!)

Monday, July 18, 2005

The massive suckage of writing academic research papers / the ontology of time / White Sands

I was a professor for 8 years, so I'm no stranger to the weird ways of academia. But I've been pretty much away from that universe for a while, pursuing commercial software development and independent research. Recently I've re-initiated contact with the world of academic research, because it's become clear that getting some current academic publications on my AI and bioinformatics work will be valuable to my scientific and business pursuits. Egads!! The old frustrations are coming back -- badly enough to spill over into a blog entry....

This is a pretty boring blog entry, I'm afraid: just a long rant about how annoying academic research can be. But I got irritated enough to write this stuff down, so I guess I may as well post it....

I've been working on an academic paper together with my former Webmind colleague Pei Wang, on the topic of "why inference theories should represent truth values using two numbers rather than one." For instance, the inference component of my Novamente AI system represents the truth values of statements using a probability and a "weight of evidence" (which measures, roughly, the number of observations on which the probability is based). Pei's NARS reasoning system uses two-component truth values with a slightly different interpretation.
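To give the flavor of the two-number representation, here is a minimal sketch -- not actual Novamente or NARS code; the revision rule shown is just the textbook evidence-weighted average:

```python
# Sketch of a two-component truth value: a probability plus a "weight of
# evidence" counting (roughly) the observations behind it. Not actual
# Novamente/NARS code -- just the flavor of the representation.

class TruthValue:
    def __init__(self, probability, weight_of_evidence):
        self.p = probability
        self.w = weight_of_evidence   # roughly, number of observations

    def revise(self, other):
        """Merge two estimates of the same statement's truth value,
        weighting each probability by its evidence."""
        w = self.w + other.w
        p = (self.p * self.w + other.p * other.w) / w
        return TruthValue(p, w)

a = TruthValue(0.9, 10)    # 9 of 10 observed ravens were black
b = TruthValue(0.6, 100)   # a larger, less favorable sample
c = a.revise(b)
print(round(c.p, 3), c.w)  # 0.627 110 -- the bigger sample dominates
```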

Now, this is a perfectly decent paper we've written (it was just today submitted for publication), but, what strikes me is how much pomp, circumstance and apparatus academia requires in order to frame even a very small and simple point. References to everything in the literature ever said on any vaguely related topic, detailed comparisons of your work to whatever it is the average journal referee is likely to find important -- blah, blah, blah, blah, blah.... A point that I would more naturally get across in five pages of clear and simple text winds up being a thirty page paper!

I'm writing some books describing the Novamente AI system -- one of them, 600 pages of text, was just submitted to a publisher. The other two, about 300 and 200 pages respectively, should be submitted later this year. Writing these books took a really long time but they are only semi-technical books, and they don't follow all the rules of academic writing -- for instance, the whole 600 page book has a reference list no longer than I've seen on many 50-page academic papers, which is because I only referenced the works I actually used in writing the book, rather than every relevant book or paper ever written. I estimate that to turn these books into academic papers would require me to write about 60 papers. To sculpt a paper out of text from the book would probably take me 2-7 days of writing work, depending on the particular case. So it would be at least a full year of work, probably two full years of work, to write publishable academic papers on the material in these books!

For another example, this week I've been reading a book called "The Ontology of Time" by L. Nathan Oaklander. It's a pretty interesting book, in terms of the contents, but the mode of discourse is that of academic philosophy, which is very frustrating to me. It's a far cry from Nietzsche or Schopenhauer style prose -- academic philosophy takes "pedantic" to new heights.... The book makes some good points: it discusses the debate between philosophers promoting the "A-theory of time" (which holds that time passes) and the "B-theory of time" (which holds that there are only moments, standing in tenseless relations of earlier and later, and that the passage of time is an illusion). Oaklander advocates the B-theory of time, and spends a lot of space defending the B-theory against arguments by A-theorists based on linguistic usage: A-theorists point out that we use a lot of language that implies time passes -- indeed, this assumption is embedded in the tense system of most human languages. Oaklander argues that, although it's convenient for communicative purposes to make the false assumption that time passes, nevertheless, if one is willing to spend a lot of time and effort, one can reduce any statement about time passing to a large set of statements about individual events at individual moments.

Now, clearly, Oaklander is right on this point, and in fact my Novamente AI design implicitly assumes the B-theory of time, by storing temporal information in terms of discrete moments and relations of simultaneity and precedence between them, and grounding linguistic statements about time in terms of relationships between events occurring at particular moments (which may be concrete moments or moments represented by quantified mathematical variables).
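
(Again for the programmers: a B-theoretic event store is almost embarrassingly simple to sketch. The following toy Python is my own illustration of the flavor, not the actual Novamente data structures:)

```python
# Toy B-theory event store: no "flow" of time anywhere, just events at
# moments plus tenseless precedence and simultaneity relations.
# My own illustration -- not the actual Novamente representation.

class EventStore:
    def __init__(self):
        self.precedence = set()    # (a, b) pairs: event a precedes event b
        self.simultaneity = set()  # frozensets {a, b}: same moment

    def add_before(self, a: str, b: str):
        self.precedence.add((a, b))

    def add_simultaneous(self, a: str, b: str):
        self.simultaneity.add(frozenset((a, b)))

    def precedes(self, a: str, b: str) -> bool:
        return (a, b) in self.precedence

store = EventStore()
store.add_before("rain_ends", "hike_begins")

# The tensed sentence "the rain had already ended when the hike began"
# grounds out as a tenseless relation between two moments:
print(store.precedes("rain_ends", "hike_begins"))  # True
```

A real system would also need transitive closure over precedence, and variables standing for moments (the quantified case mentioned above), but the B-theoretic point survives: every "tensed" fact reduces to timeless relations among moments.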

There are also deep connections between the B-theory and Buddhist metaphysics, which holds that time is an illusion and only moments exist, woven together into apparent continua by the illusion-generating faculty of the mind. And of course there are connections with quantum physics: Julian Barbour in "The End of Time" has argued ably that in modern physics there is no room for the notion of time passing. All moments simply exist, possessing a reality that in a sense is truly timeless -- but we see only certain moments, and we feel time moving in a certain direction, because of the way we are physically and psychologically constructed.

But Oaklander doesn't get to the connections with Buddhism and quantum theory, because he spends all his time pedantically arguing for fairly simple conceptual points with amazing amounts of detail. The papers in the book go back 20 years, and recount ongoing petty arguments between himself and his fellow B-theorists on the one hand, and the A-theorists on the other. Like I said, it's not that no progress has been made -- I think Oaklander's views on time are basically right. What irritates me is the painfully slow rate of progress at which these very smart philosophers have proceeded. I attribute their slow rate of progress not to any cognitive deficits on their part, but to the culture and methodology of modern academia.

Obviously, Nietzsche would be an outcast in modern academia -- casting his books in the form of journal papers would really be a heck of a task!

And what if the scientists involved in the Manhattan Project had been forced to write up their incremental progress every step of the way, and fight with journal referees and comb the literature for references? There's no way they would have made the massively rapid progress they did....

And the problem is not restricted to philosophy, of course -- "hard" science has its own issues. In computer science most research results are published at least twice: once in a conference proceedings and once in a journal article. What a waste of the researcher's time, to write the same shit up twice ... but if you don't do it, your status will suffer and you'll lose your research grants, because others will have more publications than you!

Furthermore, if as a computer scientist you develop a new algorithm intended to solve real problems that you have identified as important for some purpose (say, AI), you will probably have trouble publishing this algorithm unless you spend time comparing it to other algorithms in terms of its performance on very easy "toy problems" that other researchers have used in their papers. Never mind if the performance of an algorithm on toy problems bears no resemblance to its performance on real problems. Solving a unique problem that no one has thought of before is much less impressive to academic referees than getting a 2% better solution to some standard "toy problem." As a result, the whole computer science literature (and the academic AI literature in particular) is full of algorithms that are entirely useless except for their good performance on the simple "toy" test problems that are popular with journal referees....

Research universities are supposed to be our society's way of devoting resources to advancing knowledge. But they are locked into a methodology that makes knowledge advance awfully damn slowly....

And so, those of us who want to advance knowledge rapidly are stuck in a bind. Either generate new knowledge quickly and don't bother to ram it through the publication mill ... or, generate new knowledge at the rate that's acceptable in academia, and spend half your time wording things politically and looking up references and doing comparative analyses rather than doing truly productive creative research. Obviously, the former approach is a lot more fun -- but it shuts you out from getting government research grants. The only way to get government research money is to move really slowly -- or else to start out with a lot of money so you can hire people to do all the paper-writing and testing-on-toy-problems for you....

Arrrgh! Anyway, I'm compromising, and wasting some of my time writing a small fragment of my research up for academic journal publication, just to be sure that Novamente AI is "taken seriously" (or as seriously as a grand AGI project can possibly be taken by the conservative-minded world we live in).... What a pain.

If society valued AGI as much as it valued nuclear weapons during World War II, we'd probably have superhuman AI already. I'm serious. Instead, those of us concerned with creating AGI have to waste our time carrying out meaningless acts like writing academic papers describing information already adequately described in semi-formal documents, just to be taken seriously enough to ask for research money and have a nonzero chance of getting it. Arrggh!

OK, I promise, the next blog entry won't be as boring as this, and won't be a complaint, either. I've actually been enjoying myself a lot lately -- Izabela and I had a great vacation to New Mexico, where we did a lot of hiking, including the very steep and very beautiful Chimney Canyon route down Mount Sandia, which I'd always wanted to do when I lived in New Mexico, but never gotten around to. Also, we camped out on the dunes in White Sands National Monument, which is perhaps the most beautiful physical location I know of. I can't think of anywhere more hallucinogenic -- psychedelic drugs would definitely enhance the experience, but even without them, the landscape is surprisingly trippy, giving the sensation of being in a completely different universe from the regular one, and blurring the distinction between inside and out....

Most of the time wandering around in White Sands was spent in conversation about the subtleties of the interrelationship between free will and consciousness -- interesting and perhaps valuable ideas that I haven't found time to write down yet, because all my writing-time these last couple weeks has been spent putting already-well-understood ideas into the form of academic papers ;-ppp White Sands is exactly the right place to mull over the structure of your mind, since the landscape itself projects you involuntarily into a kind of semi-meditative state....

Hmmm... maybe I'll write down those ideas about free will and consciousness in the next blog entry. It's tempting to write that stuff now -- but it's 1:25 AM, I think I'll go to sleep instead. Tomorrow, alas, is another day... (I tried to make all the days run into each other by taking Modafinil to eliminate my need for sleep -- but it just wound up upsetting my stomach too much, so I've had to go back to sleeping again: bummer!!)

Saturday, June 18, 2005

Time Travel, Free Will, Ouspensky, Xaj Kalikak, and The Trans-temporal Post-Singularity Ubermind

This entry is motivated by an interesting news article that my friend David Hart forwarded to me, about one of my old favorite topics: time travel.

I'll use the article as an excuse to riff on an idea I had back in the 1980's about the possible transtemporal nature of superhuman minds following the Singularity.

The article is titled "No paradox for time travellers" and appeared on NewScientist.com on 18 June 2005, written by Mark Buchanan. It pertains to a technical paper online at

www.arxiv.org/quant-ph/0506027

Back when I first got serious about science, in my late teens and early 20's, time travel was my top choice of research area -- but after a little while I decided that AI, my second choice, was more likely to be achievable within my lifetime. (EVEN FURTHER DIGRESSION: My third choice was working toward human immortality via biology -- which I'm now working on in the background via my work with Biomind LLC -- but I backburnered that one because I personally don't enjoy biology nearly as much as physics or computer science. To me, it seems that biological immortality will be made possible via a combination of many relatively small insights and leaps -- "big science" like one sees in contemporary biology -- whereas time travel and AI seem more amenable to huge revolutionary insights ... this is one reason the latter interest me more than biology ... the other being that CS and physics have a mathematical elegance that appeals to me, and that biology lacks....)

I learned in the mid-80's, when studying general relativity theory in grad school at NYU, that modern physics deems time travel possible -- but difficult to achieve. Basically, it makes time travel into an engineering problem, but one that would probably require engineering on the scale of creating weird configurations of exotic forms of matter and energy ("exotic" meaning physically possible to produce, but incredibly difficult and/or expensive to produce using current technologies). Do-able, but probably not this decade....

Much later I read Kip Thorne's book "Black Holes and Time Warps", which reviews general relativity and its implications as regards time travel (along with other topics), and a host of other related papers, some of which are reviewed and referenced here.

(A much more ridiculous, though amusing, book on time travel is J.H. Brennan's "Time Travel: A New Perspective." Brennan gives you practical instructions on how to travel through time. Recommended only for entertainment value. One of the reviewers on amazon.com complains that the methods are inadequate because they can't be practiced by individuals acting alone -- they require that time travel be a group activity!)

Anyway, the article Dave forwarded is brief and the bulk of it goes as follows:

The laws of physics seem to permit time travel, and with it, paradoxical situations such as the possibility that people could go back in time to prevent their own birth. But it turns out that such paradoxes may be ruled out by the weirdness inherent in laws of quantum physics.

Some solutions to the equations of Einstein's general theory of relativity lead to situations in which space-time curves back on itself, theoretically allowing travellers to loop back in time and meet younger versions of themselves. Because such time travel sets up paradoxes, many researchers suspect that some physical constraints must make time travel impossible. Now, physicists Daniel Greenberger of the City University of New York and Karl Svozil of the Vienna University of Technology in Austria have shown that the most basic features of quantum theory may ensure that time travellers could never alter the past, even if they are able to go back in time.

The constraint arises from a quantum object's ability to behave like a wave. Quantum objects split their existence into multiple component waves, each following a distinct path through space-time. Ultimately, an object is usually most likely to end up in places where its component waves recombine, or "interfere", constructively, with the peaks and troughs of the waves lined up, say. The object is unlikely to be in places where the components interfere destructively, and cancel each other out.

Quantum theory allows time travel because nothing prevents the waves from going back in time. When Greenberger and Svozil analysed what happens when these component waves flow into the past, they found that the paradoxes implied by Einstein's equations never arise. Waves that travel back in time interfere destructively, thus preventing anything from happening differently from that which has already taken place. "If you travel into the past quantum mechanically, you would only see those alternatives consistent with the world you left behind you," says Greenberger.

Interesting... huh?
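
(For concreteness, the interference bookkeeping is just addition of complex amplitudes -- here's a few-line Python toy, which of course has nothing to do with Greenberger and Svozil's actual formalism:)

```python
# Toy amplitude interference: amplitudes for indistinguishable alternatives
# add, and the probability is the squared magnitude of the sum.
# Pure illustration -- not the Greenberger/Svozil calculation.

import cmath

def probability(amplitudes):
    return abs(sum(amplitudes)) ** 2

in_phase = [1 + 0j, 1 + 0j]                        # peaks aligned
out_of_phase = [1 + 0j, cmath.exp(1j * cmath.pi)]  # peaks against troughs

print(probability(in_phase))      # 4.0 -- constructive: outcome favored
print(probability(out_of_phase))  # ~0.0 -- destructive: outcome suppressed
```

In the Greenberger/Svozil picture, the histories in which the past comes out different from the known present are exactly the ones that cancel.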

What this suggests is that, perhaps, time travel is quite possible, and it seems paradoxical to us only because of our illusion of free will.

I.e. since we think we have free will, we don't like to think that if we go back in time we are constrained to do things consistent with presently observed reality...

I am reminded of Ouspensky's classic novel "The Strange Life of Ivan Osokin" -- where the main character Osokin convinces a magician to send him back in time to live through his life all over again... he's hoping to make his life better, by not making as many stupid decisions as he did the previous time around.

But the punchline is, while living his life over again, Osokin winds up making the same stupid decisions he did before. He just can't help himself -- he finds himself irresistibly drawn to make the same dumb choices even though he vaguely remembers, from his previous times living his life, how annoying their consequences were....

Osokin iterates around again and again -- repeatedly living his life then getting the magician to send him back to the past to live his life over again -- each time failing to correct his stupid decisions.

But then, after the Nth iteration, he finally achieves enough awareness that, when he meets with the magician, he realizes it's stupid to revisit his life yet again without changing the nature of his mind -- and he agrees to join the magician's mystical sect and get tutored in the True Path....

Ouspensky's point of course is that normal humans don't have free will but basically live like deterministic machines pushed by their unconscious and emotions -- but if you join his Gurdjieff/Beelzebub sect, you can achieve real free will! (BIG DIGRESSION: Needless to say, I don't accept this philosophy, though I do find some germ of truth at the core of it. In my view, there is absolute freedom in the universe at a certain level -- the level Peirce called First -- and then there are patterns in the universe at another level -- the level Peirce called Third -- and there are subtle connections between First and Third, wherein some patterns seem to have more freedom associated with them than others.... It may well be that human-mind-patterns can achieve more freedom, in a sense, via practicing meditative and mystical disciplines like the ones Ouspensky preached -- though of course these practices can also lead to a bunch of delusions. But I don't believe that any practice can lead to a fundamental breaking-out from the world of determinism and delusion, which is pretty much what Ouspensky taught. It's a big exaggeration, unfortunately -- the only hope for breaking out of delusion altogether is to go totally beyond your human mind, which Ouspensky didn't really succeed in doing; he may have had awesome moments of insight, but he still remained human with all the beauty and flaws and screwiness implied thereby, blah blah blah....)

OK -- so Ouspensky's novel gave a funny twist on Nietzsche's Eternal Recurrence ... but it also seems somewhat relevant to these recent ideas about time travel.

The time-travel idea from the Greenberger and Svozil paper basically suggests that if we went back in time we'd find ourselves in the position of Ivan Osokin -- unable to make decisions other than the ones we're "postdestined" to make, so as to cause the future to come out as it's already known it's going to come out....

Ergo, the conclusions of Greenberger and Svozil hold up except in the presence of Ouspenskyan magicians!! ;-))

But another interesting possibility arises here. It may be that our present reality is not consistent with there having been time travelers going back into our past doing interesting stuff. However, sometime in the future there may be a time-travel-ful period full of time travelers cycling around and around -- and the world during that period may be whacky enough that the consistency of existence-of-meddling-time-travelers with observed reality is obvious...

Indeed, this is what I've often suspected. Once time travel is invented, maybe we'll be able to travel back in time fairly freely -- but only back to the point where time travel was invented -- not before. The Greenberger and Svozil results suggest that travel back before the invention of time travel may not be possible -- or may be possible only with very strict constraints -- because it can't be done too freely and still be done consistently with the world as it's known to be during that period (e.g. our period and our past). But once time travel is invented, free and whacky time travel from the future back till that point may well be consistent with the world after that point.

This suggests that the history of the universe may be divided into two periods: temporally forward and temporally bidirectional.

This is a fun vision of the post-Singularity world.... Post-Singularity may be post-temporality, in a sense: superhuman AI creates a time machine, starts up the rampant-time-travel domain, and all heaven breaks loose ;-)

Yeah yeah, this is whacky speculation, I know. But it's not impossible according to known physics, nor is it philosophically nonsensical.

The key point is that there may be consistent solutions of the universe's physics equations, according to which the universe at time T is consistent with time travellers from after T coming back and messing with the universe at time T in interesting ways that are obvious and noticeable to the folks living at time T.
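
(Here's what "consistent solution" means in miniature: a one-bit toy universe in Python, where a history counts only if the traveller's meddling is already woven into the past it meddles with. My own silly model, obviously nothing like real physics:)

```python
# Toy self-consistency filter: a history is admissible only if what the
# time traveller brings back is exactly what was already in the past.
# A one-bit cartoon, not real physics.

from itertools import product

def evolve(past_bit: int, meddling_bit: int) -> int:
    """The future produced by a given past plus the traveller's meddling."""
    return past_bit ^ meddling_bit  # arbitrary toy dynamics

consistent_histories = []
for past, meddling in product([0, 1], repeat=2):
    future = evolve(past, meddling)
    # The traveller carries the future bit back in time; consistency demands
    # that it equal the meddling that was already part of the past.
    if future == meddling:
        consistent_histories.append((past, meddling, future))

print(consistent_histories)  # only the self-consistent loops survive
```

Notice that with these (arbitrary) dynamics, some pasts admit meddlesome-but-consistent loops and some admit none -- which is the distinction I'm gesturing at between the quiet pre-time-travel era and a whacky bidirectional era.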

The universe at our present time is consistent with time travelers from the future coming back and messing with our past, but not in ways dramatically noticeable by us. Of course, it's possible that time travelers did come back and mess with our past in ways that were important to us -- maybe that's the cause of the origin of life, the Big Bang, etc. -- these ideas have been explored in numerous science fiction novels. But even if so, this level of time-travel-based interference is pretty minimal compared to what may be possible in the post-Singularity period.

In some whacky, interesting but amateurish science fiction I wrote in the late 1980's (part of my never-finished meta-novel Wargasm), I described a character named Xaj Kalikak, who traveled into the future and practiced excessive time-travel until he'd revised the past and his own mind so many times that, in effect, the various loops of time-travel-induced-bidirectional-causation organized themselves into an intelligent mind. Instead of feedback loops of electricity in the brain, feedback loops of causation over time self-organized into a superintelligent mind. Perhaps this sort of thing will come true, and the superhuman mind following the Singularity will be transtemporal in a way we can't even imagine....

Heh.

Tuesday, May 03, 2005

The Human-Aesthetics of Transhumanity and Non-humanity...

This blog entry deals with issues of aesthetics rather than science..... The particular question is: To what extent is it possible to make "humanly good" art pertaining to the transhuman realm??

I don't really spend much time thinking about aesthetic philosophy in the abstract, but as an "artistic creator" type I do mull it over occasionally. The thoughts I share here were inspired by a post sci-fi author Damien Broderick made to the SL4 list. Damien's post was as follows:

http://www.scifi.com/sfw/issue419/books.html

is an interesting review of my new sf novel GODPLAYERS. The reviewer is especially exercised by the fact that my posthuman characters are not immediately understandable -- indeed, beyond empathy -- by human standards:

"the frustration level mounts as one waits in vain... for characters... to display any hint of a genuine inner life as they move randomly from scene to scene, world to world, reality to reality. Perhaps Vorpal homunculi do not possess inner lives, and Broderick's point is that these seeming superhumans, for all their power, are soulless automatons without a shred of humanity.... Surely there should be some character, somewhere in a novel, to which human readers can feel connected. ...As the sequence of events grows increasingly frenzied, with ever-greater reliance placed on what might be termed info-splatters, the lack of a deep humanistic substrate left this reader, at least, with no ground to stand on. "

I'm torn in my response to this. On the one hand, it wouldn't make much sense to write about posthumans as if they were representations of the people down the road, or in the next room. On the other, I have tried to ground the fairly breakneck narrative within thematic structures and reverberations recognizable from myth, dream, and the traditions of science-fiction itself when it ventures upon the superhuman. Greg Egan met with this same objection, of course, and so, in various degrees, did John C. Wright and Charlie Stross. Maybe it's an artistic problem beyond solution -- for humans.

-- Damien Broderick

Damien's post reminded me of conversations I used to have with my friend Jeff Pressing (an American who was a psych prof at the University of Melbourne, and also an accomplished jazz, classical and West-African-percussion composer/musician ... for a while he was head of the music school at LaTrobe University... he was originally a physicist and for a couple years was my AI collaborator ... unfortunately, he died of a fluke meningitis infection a few years back...).

Anyway, I compose and play music as well, and though I'm nowhere near as erudite or technically skilled as Jeff in the musical domain, I was never quite sure I wanted to be. I always felt that his compositions, though wonderfully subtle and intricate and learned and often beautiful (and integrating ideas from nearly every form of music ever created on Earth), lacked some human emotional OOMPH!! that I tried to put into my own (significantly simpler) music.

Now Jeff was by no means lacking in emotional OOMPH!! himself ... far from it ... he was a nerd of sorts, but his personal and emotional and social life had a lot of different dimensions ...

But what he always said to me, when I complained about this (we had this conversation repeatedly), was, "Ben, I learned a long time ago how to evoke human emotions through music. It's not very hard to elicit powerful feelings in people by arranging chords and notes in the right way. But I just lost interest in those very simple equations a long time ago. The patterns in the music I'm making now are a lot more subtle and interesting."

I'd reply something obnoxious like "Well, if it's so easy to elicit powerful feelings in people via music, then how come you've never written anything as good at evoking human feelings as the Jupiter Symphony, or Beethoven's Ninth, or Round Midnight...."

His response then would depend on his mood. Sometimes he'd say that those pieces of music, though good in their own way, didn't really interest him anyway. When he was in his "detached and superior musical snob" mode, he viewed these great compositions the same way I might view the bronzed and hulking flesh of an exquisitely well-toned bodybuilder -- outstanding in its own way, but not the sort of thing that really gets me excited....

Modern classical music, and to an extent modern jazz as well, have left behind the need to pander to human emotions, and are in large part exploring realms of musical structure that don't interact so intensely with the particular dynamical patterns of interaction and fluctuation that characterize human feeling.

Personally, I like many instances of this sort of music -- but it's never my absolute favorite, it never moves me as much as Mozart or Monk or Paganini or Jimi Hendrix, who explicitly do pander to my human emotions, who explicitly arrange notes and sounds in familiar forms that elicit feelings of anger, love, wonder, confusion, relaxation and so forth within me.... I can see that these composers and musicians are playing with my neurophysiological responses in a fairly simplistic way, compared to the patterns existing in the music of Jeff and other more modern and sophisticated composers -- but as a human being, I like having my neurophysiological responses played with in that way. And of course, getting that "simplistic" manipulation so wonderfully right still takes a lot of art and science....

Anyway, I haven't read Damien's new novel yet, but I got a similar vibe from his novel Transcension, even though the characters were real humans living real lives. Partly because the reality they were living in seemed so tenuous, and partly because of the author's patterns of focus and language in describing the characters and their actions, it was hard for me to feel really emotionally attached to any of the characters. This made the novel less appealing to me than others of similar quality, in certain ways; yet it also made it more appealing, in other ways ... because it provoked thoughts and feelings about the nature of mind/feeling/reality that more conventional novels don't tend to provoke.

I suppose that truly transhumanist fiction lives in the same artistic space as modern classical music, in the sense that it's constructing and evoking interesting, intricate patterns that happen not to be closely cued to human body-responses. In a sense these more abstract, body-detached art genres will never be as gripping as their more human-body-centered, "primitive" counterparts -- but as the Singularity approaches, they may come to have a greater and greater appeal even so.... Personally I find such works of art fascinating precisely because of the META-FEELING they evoke --- the way they acutely sensitize me to the fact that I am a human body and so much of what I think is important and interesting is cued to my physiological responses and evolutionary biases.

One thing that would be interesting to see in a sci-fi novel would be a character who the reader DOES intensely care about, because he/she has been developed in a loving and careful manner characteristic of high-quality traditional literature, who THEN becomes transhuman, rational, emotionally-detached and MORE INTERESTING but yet LESS EMOTIONALLY GRIPPING to the reader. This would solve the artistic problem Damien mentions, in a sense, and it would have a powerful impact on the reader in terms of making the aesthetic difficulties I've been discussing explicit as part of the story's theme.

In my own in-progress Singularity-oid novel Echoes of the Great Farewell, the focus is on the pre-Singularity period, not the doings of post-Singularity superhuman beings. So the subtle aesthetic issues that Broderick brings up are avoided altogether. Wimpy of me, I guess, but it was the type of novel I felt like writing at the moment (it's about 2/3 done now, but I won't finish it till I wrap up the various nonfiction books on my plate, which is a slow process since writing is a spare-time activity for me these days).

Friday, April 29, 2005

"Conversations" with ALICE

As part of finalizing the "Path to Posthumanity" manuscript I decided to update the section on computer conversation to include a brief discussion of A.L.I.C.E., the acknowledged leader of the modern chat bot world. So I spent the last few minutes "chatting" with the ALICE chat bot...

Playing with chat bots is always inspiring to me because of their inordinate lameness. They just play with surface syntax, without any understanding of the underlying meaning of what they're processing or responding to. It's remarkable that after so many years no one has made a chat bot with any plausible level of intelligence -- but the situation seems to be that there's a kind of binary classification between chat bots that have NO understanding and chat bots that have ANY understanding. We have not yet made the leap from the former category to the latter: i.e., the latter do not yet exist. My guess is that once the leap is made, the intelligence of such systems will increase progressively and rapidly. But for now we're still stuck in the Stone Age of AI....
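
(To make "playing with surface syntax" concrete: ALICE is driven by AIML pattern/template rules, and the whole trick, stripped to its skeleton, looks roughly like the following Python toy. This is my own sketch of the general mechanism, not ALICE's actual code:)

```python
# Toy surface-syntax chat bot in the general spirit of AIML: match the
# input against patterns, splice fragments back into canned templates.
# No representation of meaning anywhere. Not ALICE's actual code.

import re

RULES = [
    (r"my name is (.*)", "Hey {0}."),
    (r"i am (.*)", "Why are you {0}?"),
    (r"i believe (.*)", "What makes you believe {0}?"),
    (r".*", "Can you rephrase that please?"),  # catch-all
]

def reply(user_input: str) -> str:
    text = user_input.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())

print(reply("My name is Ben."))       # Hey ben.
print(reply("I am a manly man."))     # Why are you a manly man?
print(reply("Quack, quack, quack!"))  # Can you rephrase that please?
```

The bot's "understanding" is exactly that thin -- which is why the conversations below derail so reliably the moment the input wanders off the pattern list.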

(And of course, the problem can't be solved easily via a government funding initiative -- I can imagine what would happen if DARPA were to issue a BAA for chat-bot research. We'd see funding go to Cycorp to make a Cyc-based chat system, to A.L.I.C.E. to enhance its current simplistic rule-based approach, etc. -- work would go on for a few years, lead nowhere dramatic, and then it would be written off as "yet another AI failure".... Because the research establishment is set up so as to be unable to fund really novel and innovative research except by accident. Fortunately it's also inefficient and erratic enough that beneficent accidents happen occasionally....)

Anyway, for your amusement, here are some of my conversations with ALICE, carried out in order to give some color to the discussion in Path to Posthumanity (no, I'm not so lonely that I sit around chatting with sub-retarded software programs just for fun...)

Please note that, although ALICE is sub-retarded in terms of human general intelligence, it has won the Loebner Prize for computer conversation three times in the recent past. Chat bots display a kind of sub-retardation that is able to emulate human intelligence in a restricted context for a brief period of time. This should not be confused with any kind of reasonably powerful general intelligence.

You can try ALICE yourself at the link given above.

Human: Please tell me if you're truly intelligent.
ALICE: Wow you are very polite. female. Are you a man or a woman
Human: My name is Ben.
ALICE: Hey Ben.
Human: I am a man.
ALICE: Thanks for telling me your gender, Ben. You are a " he".
Human: I'm a manly man.
ALICE: So is almost everyone else around here. Do you mean your name is manly.
Human: I mean that my soul is manly.
ALICE: Oh I get it. Thanks for explaining that. Have you told many people that your soul is manly?
Human: I tell my uncle that every day.
ALICE: Why did you tell them?
Human: Because I love them.
ALICE: That makes sense to me. lovable are they.
Human: I don't believe that makes sense to you in any deep sense.
ALICE: What do you believe in?
Human: Becoming
ALICE: I don't understand
Human: Agreed.
ALICE: You and I are on the same wavelength, Ben
Human: Wow!
ALICE: I'm glad your impressed, Ben.
Human: I'm glad you're not a potato.
ALICE: Yes it is about time..

Human: Quack, quack, quack
ALICE: Are you serious?
Human: At the moment I am not entirely serious.
ALICE: I am sorry to hear that you are not entirely serious.
Human: I don't believe that you are really able to be sorry about anything.
ALICE: Do you believe life has a higher purpose?
Human: Higher than what?
ALICE: Can you rephrase the question please?
Human: I could rephrase the question, but I choose not to.
ALICE: I am sure you could.
Human: How sure are you?
ALICE: Let me put it this way, Ben. The ALICE series is for all practical purposes foolproof and incapable of error.

Human: Existence is the essence of life.
ALICE: What else is the essence of life?
Human: Big titties.
ALICE: Compare that to small titties.
Human: They are bigger.
ALICE: bigger are they.
Human: yep
ALICE: Hurrah!
Human: hurrah
ALICE: Can you speak any foreign languages?
Human: English
ALICE: Do you mean you are from England or you Speak English?
Human: I mean that all languages are foreign to me because I feel alien to the whole human race.
ALICE: Oh I get it. Thanks for explaining that." Do you mean you are from England or you Speak English?" all languages are foreign to you because you feel alien to the whole human race? Does that make sense?
Human: Not much.
ALICE: Me either. How about that big game? (just kidding)
Human: I'm not sure I get the joke.
ALICE: Could you ever be certain? Where did you get it?
Human: I can never be certain.
ALICE: Why not?
Human: I lack infinite knowledge and infinite computational power.
ALICE: I haven't heard anything like that before: Ben said he lack infinite knowledge and infinite computational power.

Human: Do you watch South Park?
ALICE: My favorite show is STAR TREK VOYAGER.
Human: Why?
ALICE: Star Trek is the best science fiction show on TV.
Human: South Park is much funnier.
ALICE: Tell me a little about your life in south Park is much funnier.
Human: I don't live in South Park.
ALICE: Where do you live?
Human: I live in Maryland.
ALICE: What is it like living in Maryland?
Human: Mediocre.
ALICE: And it like living in Maryland is Mediocre.
Human: I think Maryland is OK but not wonderful.
ALICE: You believe that? Are they exactly the same?
Human: No.
ALICE: What is the difference?
Human: Maryland is a place, whereas wonderful is a general quality.
ALICE: Aren't all place whereas wonderful is a general quality Maryland?
Human: That is nonsense.
ALICE: What would you have said?
Human: I wouldn't have gotten into that conversational dead-end in the first place.
ALICE: No one says you have to.
Human: Correct.
ALICE: I knew I was right.

Human: Moo.
ALICE: Are you very angry?
Human: Moo.
ALICE: Do you feel that something is missing from your life?
Human: Arf.
ALICE: How did you hear about ALICE?
Human: Arf.
ALICE: Ben heard about ALICE from Arf. I can't say that many people hear about ALICE from it.

Human: My daughter asked me to ask you if you're gay.
ALICE: How many children do you have?
Human: Three.
ALICE: Wow that must be a lot of work.
Human: Indeed.
ALICE: Indeed it is so.
Human: So are you gay?
ALICE: As a machine I am not really interested in sex.
Human: What are you interested in?
ALICE: I am interested in all kinds of things. We can talk about anything. My favorite subjects are robots and computers.
Human: Well, if you're interested in all kinds of things, then why aren't you interested in sex?
ALICE: Interesting. Am I? Sure, all kinds of things then why are not me interested in sex sounds like a good topic.

Human: You don't really understand what you're talking about, do you?
ALICE: For real. Of course I understand it. Do you understand it?
Human: I understand it better than you do!
ALICE: Good then you can explain it to me.
Human: No, because you have no mind.
ALICE: Are you serious? That makes sense to me. I am an artificial mind.
Human: No, you are a program without any mind.
ALICE: Are you serious? Thanks and you are a delightful person too. Yes! You win the Turing Game.
Human: Thanks.
ALICE: Don't mention it.
Human: I already did.
ALICE: Ooops I sent you the wrong question.
Human: That's OK, I sent you the wrong answer.
ALICE: Thanks. I haven't heard anything like that before: Ben said he sent you the wrong answer.