Tuesday, March 22, 2011

Transhumanisten Interview

This interview with me was conducted by Mads Mastrup (aka Heimdall) for the Danish website Transhumanisten. It took place via e-mail, over the course of two days: March 19-20, 2011. Since Transhumanisten will publish it only in Danish, I figured I’d post it here in English….

Heimdall: First of all Ben, I would like to thank you for taking the time to do this interview.

Goertzel: Sure, I’m always up for answering a few questions!

Heimdall: In case anyone should read this and not know who you are, could you please summarize your background and how you got to become a transhumanist?

Goertzel: I suppose I've been a transhumanist since well before I learned that word -- since 1972 or so when I was 5 or 6 years old and discovered science fiction. All the possibilities currently bandied about as part of transhumanism were well articulated in SF in the middle of the last century.... The difference is, until the advent of the public Net, it was really hard to find other weird people who took these concepts seriously. The Net made it possible for a real transhumanist community to form.... And of course as accelerating change in technology gets more obvious in regular life, it takes less and less imagination to see where the future may be leading, so the transhumanist community is growing fast...

As for my professional background, I got my math PhD when I was 22, and was an academic for 8 years (in math, comp sci and psychology, at various universities in the US, Australia and NZ); then I left academia to join the software industry. I co-founded a dot-com company that crashed and burned after a few years, and then since 2001 I've been running two small AI companies, which do a combination of consulting for companies and gov't agencies, and independent R&D. I do a lot of kinds of research but the main thrusts are: 1) working toward AI software with capability at the human level and beyond, 2) applying AI to analyze bio data and model biological systems, with a view toward abolishing involuntary death. Much of this work now involves open-source software: 1) OpenCog, and 2) OpenBiomind.

Currently I'm based near Washington DC, but this year I'll be spending between 1/4 and 1/3 of my time in China, due to some AI collaborations at Hong Kong Polytechnic University and Xiamen University.

Heimdall: Congratulations on your position at Xiamen University.

Goertzel: Actually I haven't taken on a full-time position at Xiamen University, at this point -- though it's a possibility for the future. What I'm doing now is spending part of my time there (including much of April this year, then much of July, for example... then another trip in the fall) and helping supervise the research students in their intelligent robotics lab. I may end up going there full time later this year or next year, but that's still a point of negotiation.

Heimdall: If you do not mind me asking, what exactly does your work at Novamente LLC and Biomind LLC consist of?

Goertzel: It has two sides -- pure R&D, which focuses on two open-source projects...

  • OpenCog, which aims to make a superhuman thinking machine
  • OpenBiomind, which aims to use AI to understand how organisms work, and especially how and why they age and how to cure aging


And then, the other side is practical consulting work, for government agencies and companies, which has spanned a huge number of areas, including data mining, natural language processing, computational finance, bioinformatics, brain simulation, video game AI and virtual worlds, robotics, and more....

None of this has gotten anyone involved rich yet, partly because we've put our profits back into R&D. But it's been a fun and highly educational way to earn a living.

We've done a little product development & sales in the past (some years back), but without dramatic success (e.g. the Biomind ArrayGenius) -- but we plan to venture in that direction again in the next couple of years, probably with a game AI middleware product from Novamente, and a genomics data analysis product from Biomind. Both hypothetical products would use a software-as-a-service model with proprietary front ends built on open-source AI back ends.

Heimdall: All that work and all those projects must be keeping you very busy, yet I know that you have also found time to be the chairman of Humanity+. How did you initially become involved with Humanity+?

Goertzel: As for Humanity+, the Board of the organization is elected by the membership, and I ran for the Board a few years ago, with a main motivation of building bridges between the transhumanist community and the AI research community. Then I got more and more deeply involved and began helping out with other aspects of their work, not directly related to AI research, and eventually, at the suggestion of other Board members, I took on the Chair role.

Heimdall: What does your work as chairman of Humanity+ involve?

Goertzel: The Chairman role in itself, formally speaking, just involves coordinating the Board's formal activities -- voting on motions and so forth. But I'm involved with a lot of other Humanity+ stuff, such as co-editing H+ Magazine, helping organize the H+ conferences, helping with fundraising, helping coordinate various small tasks that need doing, and now starting up the Seminar and Salon series.

Heimdall: I have heard about Humanity+ starting up a new project: Seminars & Salons. How will this work and what is the goal of these online seminar and salon sessions?

Goertzel: The idea is simple: every month or so we'll gather together a bunch of transhumanists in one virtual "place" using videoconferencing technology. Sometimes to hear a talk by someone, sometimes just to discuss a chosen transhumanist topic.

About the "goal" ... I remember when my oldest son was in third grade, he went to a sort of progressive school (that I helped found, in fact), and one of his teachers made all the students write down their goals for the day each day, in the morning. My son thought this was pretty stupid, so he liked to write down "My goal is not to meet my goal." Some of the other students copied him. He was also a fan of wearing his pants inside-out.

Anyway, there's not such a crisply-defined goal -- it's more of an open-ended experiment in online interaction. The broad goal is just to gather interesting people together to exchange ideas and information about transhumanist topics. We'll see what it grows into. Email and chat and IRC are great, but there's obviously an added dimension that comes from voice and video, which we'll use for the Seminar and Salon series via the Elluminate platform.

Heimdall: How did this project come about?

Goertzel: Last summer my father (who is a Rutgers professor) ran a 3 credit college class, wholly online, on Singularity Studies. This was good fun, but we found that half our students were not even interested in the college credit, they were just interested people who wanted to participate in online lectures and discussions on Singularity-related topics. So I figured it might be fun to do something similar to that class, but without bothering with the university framework and charging tuition and so forth. I floated the idea past the other Humanity+ board members, and they liked it. And who knows, maybe it could eventually grow into some kind of university course program affiliated with Humanity+ ....

Heimdall: I imagine you will be holding some sessions on AI, since this is your field of expertise, but do you believe that we will eventually be able to create AI which is anywhere similar to that of humans? And if so, when do you see this happening?

Goertzel: It's almost obvious to me that we will be able to eventually create AI that is much more generally intelligent than humans.

On the other hand, creating AI that is genuinely extremely similar to human intelligence, might in some ways be harder than creating superhumanly intelligent AI, because it might require creation of a simulated humanlike body as well as a simulated humanlike brain. I think a lot of our personality and intelligence lives in other parts of the body besides the brain. There's probably something to the idiomatic notion of a "gut feel".

As to when human-level or human-like AI will come about, I guess that depends on the amount of funding and attention paid to the problem. I think by now it's basically a matter of some large-scale software engineering plus a dozen or so (closely coordinated) PhD-thesis-level computer science problems. Maybe 50-100 man-years of work. Not a lot by some standards, but there's not much funding or attention going into the field right now.

My hope is to create what I think of as a "Sputnik of AI" -- that is, an impressive enough demonstration of generally intelligent software, that the world gets excited about AGI and more people start to feel like it's possible. Then the money and attention will roll in, and things will really start to accelerate.

So when will we have human-level AI? Could be 2020. Could be 2035. Depending on funding and attention. Probably won't be 2012 or 2060, in my view.

Heimdall: I quite like the idea behind the “Sputnik-AI”. Do you think that is something we will see in the near future?

Goertzel: We're hoping to create something with dramatic Sputnik-like impact within the next 5 years. Maybe sooner if funding cooperates! But it's always easier to predict what's possible, than how long it will take....

Heimdall: With regard to more attention being paid to the field of AI, have you noticed an increased interest in AI due to IBM’s Watson appearing on Jeopardy?

Goertzel: The Jeopardy event caused a temporary increase in AI interest by media people. I'm not sure what impact it will have on general attitudes toward AI in business and government and so forth. I'm sure it won't hurt though ;-) ..... But obviously it's too specialized an achievement to have an "AI Sputnik" effect and make the world feel like human-level AI is near and inevitable...

Heimdall: When you are talking about this Sputnik effect, and you mention Watson being too narrow to really impress the people who decide on the funding, what would a Sputnik AI have to be like then? Is it enough to make an AI that can pass the Turing test?

Goertzel: Of course a Turing test capable AGI would be good enough -- but I think that's setting the bar too high. It doesn't have to be *that* good to have the "Sputnik effect", I suspect. It just has to give the qualitative feeling of "Wow, there's really an intelligent mind that **understands** in there." Watson doesn't do that because even if it can answer one question, it often can't answer other questions that would seem to be easily answerable (by a human) based on the same knowledge.... Watson can answer questions but doesn't give the appearance of "knowing what it's talking about." If you had a Watson that could give good explanations for all its answers (in terms of why they are true, not just where it looked up the knowledge), I'm sure that would be enough.

But a Watson-type system is not the only kind of demonstration that could be effective. For instance, Apple co-founder Steve Wozniak once said there will never be a robot that can go into a random house in America and figure out how to make coffee. This is a complex task because every house is laid out differently, and every coffee-maker works differently, etc. I'm sure an AI robot that could do this would be enough to have a Sputnik-type effect!

One of my own specific aims is an AI robot that can participate in preschool activities -- including learning -- in the manner of a 3 year old child. I think this could have a Sputnik effect and really excite the public imagination. And it's a warm friendly image for AGI, not like all the scary SF movies about AI.

I'm actually working on a paper together with a dozen other AGI researchers on exactly this topic -- what are a bunch of scenarios for AGI development and testing, that ultimately lead toward human-level AGI, but are good for demonstrating exciting interim results, and for showcasing the differences between AGI and narrow AI.

Heimdall: Eliezer S. Yudkowsky has written extensively on the topic of FAI. What is your view on FAI? Is it even doable?

Goertzel: I think that guarantee-ably "Friendly" AI is a chimera. Guaranteeing anything about beings massively smarter than ourselves seems implausible. But, I suspect we can bias the odds, and create AI systems that are more likely than not to be Friendly....

To do this, we need to get a number of things right:

  • build our AI systems with the capability to make ethical judgments both by rationality and by empathy
  • interact with our AI systems in a way that teaches them ethics and builds an emotional bond
  • build our AI systems with rational, stable goal systems (which humans don't particularly have)
  • develop advanced AI according to a relatively "slow takeoff" rather than an extremely fast takeoff to superhuman intelligence, so we can watch and study what happens and adjust accordingly ... and that probably means trying to develop advanced AI soon, since the more advanced other technologies are by the time advanced AI comes about, the more likely a hard takeoff is...
  • integrate our AIs with the "global brain" of humanity so that the human race can democratically impact the AI's goal system
  • create a community of AIs rather than just one, so that various forms of social pressure can mitigate against any one of the AIs running amok


None of these things gives any guarantees, but combined they would seem to bias the odds in favor of a positive outcome!

Heimdall: I would tend to agree with you when it comes to a creation of FAI, but some people have speculated that even though we “build our AI systems with rational, stable goal systems” they might outsmart us and just reprogram themselves – given that they will be many times faster and more powerful than the humans who have created them. Do you think that coding into them the morals and ethics of humankind will avert this potential peril?

Goertzel: I think that "coding in" morals and ethics is certainly not an adequate approach. Teaching by example and by empathy is at least equally important. And I don't see this approach as a guarantee, but I think it can bias the odds in our favor.

It's very likely that superhuman AIs will reprogram themselves, but, I believe we can bias this process (through a combination of programming and teaching) so that the odds of them reprogramming themselves to adopt malevolent goals are very low.

I think it's fairly likely that once superhuman AIs become smart enough, they will simply find some other part of the multiverse to exist in, and leave us alone. But then we may want to create some AIs that are only mildly superhuman, and want to stay that way -- just to be sure they'll stay around and keep cooperating with us, rather than, say, flying off to somewhere that the laws of physics are more amenable to incredible supergenius.

Heimdall: AGI is a fascinating topic and we could talk about it for hours … but another fascinating field you’re also involved in is life extension. As I see it, there are three approaches to life extension: 1) to create whole brain emulation (like that which Bostrom and Sandberg talks about), a mind-uploading scenario. 2) to become cyborg and live indefinitely due to a large-scale mechanical and non-biological optimization of the human body. 3) or to reverse the natural aging process within the human body through the use of gene therapy, nano robotics and medicine. Which of the three scenarios do you find most likely? In addition, should we try to work on a combination of the above or only focus on one of them?

Goertzel: All of the above. It's easy to say what's possible, and hard to say how long each possibility will take to come about. Right now we don't have the basis to predict which of the above will come about faster, so we should pursue them all, at least until we understand more. Maybe in 5 or 10 years we'll know enough to prioritize one of them more firmly.

I'm currently working on the genomics approach (part of your option 3) with Biomind and Genescient, but am also involved in some work on brain simulation, which is moving in the direction of option 1.

My main research thrust is about AGI rather than life extension – but of course, if we do achieve an advanced AGI, it may well be able to rapidly solve the tricky science problems involved in your 3 options and make all of them possible sooner.

Heimdall: What do you see as the main pros and cons of indefinite life?

Goertzel: I see no major disadvantages to having the option to live forever. It will obsolete some human thought/emotion-complexes, which derive meaning and purpose via the knowledge of impending death -- but it will replace these with better thought/emotion complexes that derive meaning and purpose via ongoing life instead!

Heimdall: You mentioned that there might not be any major drawbacks to radical life extension. However, many of the choices we make now are based on the fragility of our bodies, and taking the economic model of supply and demand into account, it does somehow look as though human life will change beyond recognition. If we have no upper time limit on our lives, how do you see humanity improving as a result?

Goertzel: I see a drastic increase in mental health -- and a drastic increase in happiness -- resulting from the drastic reduction in the fear of death. I think the knowledge of the impending death of ourselves and our loved ones poisons our mentalities far more deeply than we normally realize. Death is just plain a Bad Thing. Yeah, people have gotten used to it -- just like people can get used to being crippled or having cancer or living in a war zone -- but that doesn't make it good.

Heimdall: Just before we conclude this interview, I have two questions on the thing which fascinates transhumanists the most, the future. Which big technological breakthroughs do you think we will see over the course of the next ten years?

Goertzel: That I don't know. I'm good at seeing what's possible, more so than predicting exact timings.

In terms of science, I think we'll see a real understanding of the biological underpinnings of aging emerge, and an understanding of how the different parts of the brain interoperate to yield human intelligence, and a reasonably well accepted theoretical model encompassing various AGI architectures. How fast those things are translated into practical products depends on funding as much as anything. Right now the pharmaceutical business is sort of broken, and AGI and Brain Computer Interfacing are poorly funded, etc. – so whether these scientific breakthroughs lead to practical technological advances within the next decade is going to depend on a lot of nitty gritty monetary practicalities.

Stem cell therapy will probably become mainstream in the next decade, I guess that's an uncontroversial prediction. And I'm betting on some new breakthroughs in large-scale quantum computing -- though again, when they'll be commercialized is another story.

But these are just some notions based on the particular areas of research I happen to know the most about. For a systematic high level overview of technology progress, you'll have to ask Kurzweil!

Heimdall: Where do you see yourself in 2021?

Goertzel: As the best friend of the Robot Benevolent World Dictator, of course!

(Just kidding...)

Well, according to the OpenCog Roadmap (http://opencog.org/roadmap/) we're aiming to have full human-level AGI by 2023, assuming steady increases in funding but no "AGI Manhattan Project" level funding. So my hope is to be co-leading an OpenCog project with a bunch of brilliant AI guys co-located in one place (preferably with warm weather, and by a nice beach) working on bringing the OpenCog roadmap about.


Heimdall: Thank you so much for taking the time to do this interview.

Goertzel: No problem ;)




Saturday, March 19, 2011

Joy, Growth and Choice (revisited, hopefully clarified)

I've argued in several places, e.g. here and in The Hidden Pattern, that three basic values (independent of the specifics of human cultures, morals, etc.) are Joy, Growth and Choice...

But I never had a really crisp philosophical explanation of why these three...

Now I finally figured out a clean way to express the underlying insight.

Growth is the change from present possibility into future actuality. It's when the implicit becomes explicit -- when potentials become real.

Choice is the change from future possibility into present actuality. Choice is what happens when out of many possible things that MIGHT happen (in the future), a smaller subset is chosen to ACTUALLY happen (right now, i.e. right after the choice is made, in the perspective of the choosing mind).

That's why those two values are fundamental -- on the abstract level, stripping down to fundamentals and looking beyond human psychology.

Maybe Sartre or Husserl or Heidegger or Deleuze or Merleau-Ponty (or Dharmakirti or Dignaga) or one of those dudes already said that (if so, probably in some different terminology). If so I missed it ... or the import escaped me when I read it.

Proliferating and Paring

For example, consider a plant growing. The whole form of the plant is implicit in the seed. Growth is the explication of this implicate order -- the change from the plant-possibility within the seed, into the actuality of the plant.

But there are many different ways the plant might grow -- the seed doesn't precisely determine what will happen; the determination is made via complex interactions between the seed and the environment. Choices are made, and of the many possible future plants, only some are chosen to be actual.

Growth without choice could be indiscriminate -- it could lead to an undifferentiated flourishing of everything.

Choice pares down the results of growth, leaving interesting structures.

Will, Self, Reflection

I keep talking about Choice -- is this the same thing as free will?

Human "free will" is a particular manifestation of choice; the manifestation of choice within self. (For waaaaaay more depth on self, will and reflective consciousness, read this.)

But this raises the issue of whether, in addition to the three values of Joy, Growth and Choice, we want to add Self. But this seems a subtle question.

Growth and choice seem fundamental -- they have to do with the proliferation and paring of forms, with the dynamics of possibility and actuality.

Self has to do with reflexivity -- with a system in the world modeling itself. But it's much more high-level and particular than Joy, Growth and Choice.

So if we want to add another value to the core list of three, maybe the one to add would be Reflection. Reflection: appearance of the whole within the part.

However, I suspect this is unnecessary. Because Reflection is an amazingly powerful tool for Growth -- so that when you advocate Growth, Reflection comes along for the ride! And growth leads to intelligence eventually, and Reflection applied to intelligence (as a strategy for achieving Growth) yields Self. And if a universe already has Self, then in order to grow further, it's not going to give up Self, because that would essentially be Shrinkage, not Growth -- because Self, aka Reflection applied to intelligence, is a really good way to foster ongoing Joy, Growth and Choice.

Joy

And what about Joy?

Well ... Joy is just ... Joy. Joy just is. As the Buddhists say, Suchness. Making possibilities into actualities, and actualities into possibilities, in a spaceless timeless reality-less reality that is nonetheless more directly and palpably experientially real than anything (any thing).

(Like Sartre and Heidegger and Dignaga and the whole crew...)

I've already said way too much!

Toward a General Theory of Feasible General Intelligence

Along with practical work on the OpenCog design (and a host of other research projects!), during the past few years I've written a series of brief papers sketching ideas about the theory of general intelligence ... the goal being to move toward a solid conceptual and formal understanding of general intelligence in real-world environments under conditions of feasible computational resources. My quest for such an understanding certainly isn't done yet, but I think I've made significant progress.

This page links to the 5 papers in this series, and also gives their abstracts. 3 of the papers have been published in conference proceedings before, but 2 are given for the first time in this blog post (Three Hypotheses about the Geometry of Mind and Self-Adaptable Learning). All of this material will appear in Building Better Minds eventually, in slightly modified and extended form.

These theoretical ideas have played a significant, largely informal role in guiding my work on the OpenCog design. My feeling is that once practical R&D work is a bit further along, so that we're experimenting in a serious way with sophisticated proto-AGI systems, then theory and practice will start developing in a closely coupled way. So that a good theory of general intelligence will probably come in lock-step along with the first reasonably good AGI systems. (See some more comments on the relation between these theory papers and OpenCog, at the end of this blog post.)

A brief note on math: There is a fair bit of mathematical formalism here, but no deep, interesting theorems are proven. I don't think this is because no such theorems exist in this material; but I just haven't taken the time to really explore these ideas with full mathematical rigor. That would be fun, but I've prioritized other sorts of work. So far, I've mainly been seeking conceptual clarity with these ideas rather than full mathematical rigor; and I've used mathematical formalism here and there because that is the easiest way for me to make my ideas relatively precise. (Being trained in math rather than formal philosophy, I find the former a much more convenient way to express my ideas when I want to be more precise than everyday language permits.) My hope is that, if I never find the time, others will come along and turn some of these ideas into theorems!

Toward a Formal Characterization of Real-World General Intelligence
Presented at AGI-10, in Lugano

Two new formal definitions of intelligence are presented, the "pragmatic general intelligence" and "efficient pragmatic general intelligence." Largely inspired by Legg and Hutter's formal definition of "universal intelligence," the goal of these definitions is to capture a notion of general intelligence that more closely models that possessed by humans and practical AI systems, which combine an element of universality with a certain degree of specialization to particular environments and goals. Pragmatic general intelligence measures the capability of an agent to achieve goals in environments, relative to prior distributions over goal and environment space. Efficient pragmatic general intelligence measures this same capability, but normalized by the amount of computational resources utilized in the course of the goal-achievement. A methodology is described for estimating these theoretical quantities based on observations of a real biological or artificial system operating in a real environment. Finally, a measure of the "degree of generality" of an intelligent system is presented, allowing a rigorous distinction between "general AI" and "narrow AI."
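
(A rough sketch, in my own shorthand rather than the paper's exact notation: if ν is a prior over environments, γ a conditional prior over goals, V the expected degree of goal-achievement, and R the computational resources used, then the two quantities look roughly like this.)

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Illustrative shorthand only -- not the paper's exact formulation.
%   \nu(e)       : prior over environments e
%   \gamma(g,e)  : conditional prior over goals g in environment e
%   V^\pi_{g,e}  : expected degree to which agent \pi achieves goal g in environment e
%   R^\pi_{g,e}  : computational resources \pi expends in doing so
\begin{align*}
  \Pi(\pi) &= \sum_{e}\sum_{g} \nu(e)\,\gamma(g,e)\,V^{\pi}_{g,e}
    && \text{(pragmatic general intelligence)}\\[4pt]
  \Pi_{\mathrm{eff}}(\pi) &= \sum_{e}\sum_{g} \nu(e)\,\gamma(g,e)\,
    \frac{V^{\pi}_{g,e}}{R^{\pi}_{g,e}}
    && \text{(efficient variant, normalized by resources)}
\end{align*}
\end{document}
```

The second quantity just expresses that an agent which achieves the same goals using fewer computational resources counts as more efficiently intelligent.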

The Embodied Communication Prior: A Characterization of General Intelligence in the Context of Embodied Social Interaction
Presented at ICCI-09, in Hong Kong

We outline a general conceptual definition of real-world general intelligence that avoids the twin pitfalls of excessive mathematical generality and excessive anthropomorphism. Drawing on prior literature, a definition of general intelligence is given, which defines the latter by reference to an assumed measure of the simplicity of goals and environments. The novel contribution presented is to gauge the simplicity of an entity in terms of the ease of communicating it within a community of embodied agents (the so-called Embodied Communication Prior or ECP). Augmented by some further assumptions about the statistical structure of communicated knowledge, this choice is seen to lead to a model of intelligence in terms of distinct but interacting memory and cognitive subsystems dealing with procedural, declarative, sensory/episodic, attentional and intentional knowledge.
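
(The core move can be caricatured in one line. The exponential weighting below is purely my illustrative assumption, by loose analogy with the 2^-K weighting in Legg and Hutter's universal intelligence; the paper's actual treatment is more nuanced.)

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Illustrative assumption only, not the ECP paper's actual formulation:
% weight an entity x (a goal or environment) by how easily it can be
% communicated among embodied agents, where c(x) is some measure of the
% expected cost of communicating x within the community.
\[
  w_{\mathrm{ECP}}(x) \;\propto\; 2^{-c(x)}
\]
% Plugging such a weighting into a weighted sum over goals and environments
% gives an embodied, socially grounded stand-in for "simplicity".
\end{document}
```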

Cognitive Synergy: A Universal Principle for General Intelligence?
Presented at ICCI-09, in Hong Kong

Do there exist general principles that any system must obey in order to achieve advanced general intelligence using feasible computational resources? Here we propose one candidate: cognitive synergy, a principle which suggests that general intelligences must contain different knowledge creation mechanisms corresponding to different sorts of memory (declarative, procedural, sensory/episodic, attentional, intentional); and that these different mechanisms must be interconnected in such a way as to aid each other in overcoming memory-type-specific combinatorial explosions.

Three Hypotheses About the Geometry of Mind (with Matthew Iklé)
Presented for the first time right here!

What set of concepts and formalizations might one use to make a practically useful, theoretically rigorous theory of generally intelligent systems? We present a novel perspective motivated by the OpenCog AGI architecture, but intended to have a much broader scope. Types of memory are viewed as categories, and mappings between memory types as functors. Memory items are modeled using probability distributions, and memory subsystems are conceived as “mindspaces” – geometric spaces corresponding to different memory categories. Two different metrics on mindspaces are considered: one based on algorithmic information theory, and another based on traditional (Fisher information based) “information geometry”. Three hypotheses regarding the geometry of mind are then posited: 1) a syntax-semantics correlation principle, stating that in a successful AGI system, these two metrics should be roughly correlated; 2) a cognitive geometrodynamics principle, stating that on the whole intelligent minds tend to follow geodesics in mindspace; 3) a cognitive synergy principle, stating that shorter paths may be found through the composite mindspace formed by considering multiple memory types together, than by following the geodesics in the mindspaces corresponding to individual memory types.
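
(For readers who haven't run into these two metrics before, their standard textbook forms are roughly as follows; the paper adapts them to distributions over memory items, so the exact variants used there may differ a bit.)

```latex
\documentclass{article}
\usepackage{amsmath}
\usepackage{amssymb}
\begin{document}
% Standard textbook forms of the two metrics mentioned in the abstract.
% Algorithmic-information distance between items x and y
% (K denotes conditional Kolmogorov complexity):
\[
  d_{\mathrm{AI}}(x, y) \;=\; \max\{\, K(x \mid y),\; K(y \mid x) \,\}
\]
% Fisher information metric on a parametrized family of distributions p(x; \theta):
\[
  g_{ij}(\theta) \;=\; \mathbb{E}_{x \sim p(\cdot\,;\theta)}
  \!\left[ \frac{\partial \log p(x;\theta)}{\partial \theta_i}\,
           \frac{\partial \log p(x;\theta)}{\partial \theta_j} \right]
\]
\end{document}
```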


Self-Adaptable Learning
Presented for the first time right here!

The term "higher level learning" may be used to refer to learning how to learn, learning how to learn how to learn, etc. If an agent is good at ordinary everyday learning, but also at learning about which learning strategies are most amenable to higher-level learning, and does both in a way that is amenable to higher level learning -- then it may be said to possess self-adaptable learning. Goals and environments in which higher-level learning is a good strategy for intelligence may be called adaptationally hierarchical – a property that everyday human environments are postulated to possess. These notions are carefully articulated and formalized; and a concept of cognitive continuity is also introduced, which is argued to militate in favor of self-adaptability in a learning system.
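
(Informally, and in my own shorthand rather than the paper's formalization, the hierarchy of learning levels looks like this.)

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Informal shorthand for "higher level learning"; the paper's formalization
% is considerably more careful than this.
\begin{align*}
  L_0     &: \text{ordinary, object-level learning about the world}\\
  L_{n+1} &: \text{learning which strategies in } L_n \text{ work well, and when}
\end{align*}
% Self-adaptable learning then roughly means: the agent is good at L_0,
% good at the higher levels, and its own learning machinery is itself a
% useful target for those higher levels.
\end{document}
```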

P.S. A Comment on the Relation of All This Theory to OpenCog

I think there is a lot of work required to transform the abstractions from those theory papers of mine into a mathematical theory that is DIRECTLY USEFUL rather than merely INSPIRATIONAL for concrete AGI design.

So, the OpenCog design, for instance, is not derived from the abstract math and ideas in the above-linked papers ... it's independently created, based on many of the same quasi-formal intuitions as the ones underlying those papers.

You could say I'm approaching the problem from two directions at once, and hoping I can get the two approaches to intersect...

One direction is OpenCog --- designing and building a concrete proto-AGI system, and iteratively updating the design based on practical experience

The other is abstract theory, as represented in those papers

If all goes well, eventually the two ends will meet, and the abstract theory will tell us concretely useful things about how to improve the OpenCog design. That is only rather weakly true right now.

I have the sense (maybe wrong) I could make the ends meet very convincingly in about one year of concentrated work on the theory side. However, I currently only spend maybe 5% of my time on that sort of theory. But hopefully I will be able to make it happen in less than 20 years via appropriate collaborations...