Unfortunately these are idle ponderings rather than the start of some beautiful worklog; sadly I don't have the time or skill to accomplish anything on this scale at the moment. I've been wondering how difficult it would be to create a simple AI and place it in a Tachikoma model on the cheap. We had this discussion at work today with no real easy answer. The best we could come up with would be a speech-to-text engine feeding an AIML interpreter, with text-to-speech output. Libraries of information could be input by hand initially, containing GITS subjects and Tachikomaisms. We did argue about whether allowing an AIML interpreter to learn through conversations with a human would let it build a library of questions and responses that would approach sentience, but the conclusion was that it would only ever be a collection of lookup tables. Interestingly, I've played briefly with AIML and know that you can set two chatbots talking to each other. I suggested that they would just screw each other up, but one of my colleagues suggested that they would probably just end up synchronising data. We didn't agree on how the thing could be powered as far as movement was concerned, but the possibility of hacking a WowWee RoboPet for leg and eye movement came up; as a tasty bonus, it would also have very basic environmental awareness. We did agree on one thing though: we all want a real Tachikoma.
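To make the "collection of lookup tables" point concrete, here's a minimal sketch of the pattern-to-response lookup that an AIML-style engine boils down to. The rules and responses are made up for illustration; this isn't a real AIML implementation, just the core mechanism.

```python
# Toy pattern -> response lookup, the heart of the AIML-style pipeline:
# speech-to-text would feed respond(), whose output goes to text-to-speech.
# The rules and replies below are illustrative, hand-written "Tachikomaisms".

RULES = {
    "HELLO": "Hello! I am a Tachikoma.",
    "WHAT ARE YOU": "I am a think tank from Section 9.",
    "WHO IS THE MAJOR": "Major Kusanagi! We all want her to ride in us.",
}

DEFAULT = "Natural oil is delicious!"

def respond(utterance: str) -> str:
    """Normalise the input and look it up; no understanding involved."""
    key = utterance.strip().upper().rstrip("?!.")
    return RULES.get(key, DEFAULT)

print(respond("What are you?"))
```

However many rules you add by hand (or harvest from conversation logs), the mechanism never changes: normalise, look up, emit. That's why the conclusion above was that it approaches a bigger table, not sentience.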
Hmmm. There are a lot of questions in this post that I spend quite a lot of time discussing in my AI masters.

Yep - speech interpretation at the moment is patchy, but it's on the way to being pretty good - I think we'll have 95% reliable speech interpretation within years. Here's where the real problem comes in, though; I'll discuss it more below. And yep, there's clearly no real issue with speech generation unless you're trying to produce speech indistinguishable from that of a real human.

Personally I'm not a fan of AIML. It restricts you to the kind of rule-based engines that power all of today's Turing Test systems, none of which even come close to passing (with the exception of Elbot) - and don't forget, when I say "pass" what I really mean is "fool judges 30% of the time", which is still pretty crappy really, and certainly not something that's going to be useful long-term in the real world. The goal, really, is to develop something that can converse at a near-human level, fooling 90%+ of the people it talks to. In my opinion, that's never going to be achieved with rule-based systems, and therefore AIML will never be useful for this purpose.

I wrote a paper on pretty much this exact topic, and I landed in a similar place to a guy called Luke Pellen, who wrote an article called "How Not To Imitate a Human Being". You can find it in Parsing the Turing Test (a really excellent book, which I don't recommend paying $160 for, but I'm told there are some copies around on the tubes). Essentially the idea Pellen proposes is that you won't get an artifact (i.e. a computer program, or whatever you like - any variation of a Turing machine, really) to exhibit* consciousness until you get it to evolve communication by itself. The idea is that you put little computer programs in simulated worlds, give them the ability to evolve using genetic algorithms of some kind, and give them problems which require communication to solve.
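A toy version of that evolutionary setup might look like the following. This is my own illustration, not Pellen's actual experiment: a population of agents each carries a bit-string "signal", and selection rewards agents whose signals agree with a randomly chosen peer's, as a stand-in for a task that can only be solved by sharing a convention.

```python
import random

# Toy genetic algorithm: agents evolve toward a shared signal convention.
# Parameters and the fitness function are illustrative assumptions.
random.seed(0)
SIG_LEN, POP, GENS, MUT = 16, 40, 60, 0.02

def agreement(a, b):
    """Number of positions where two signals match."""
    return sum(x == y for x, y in zip(a, b))

def fitness(agent, population):
    # An agent scores well if its signal agrees with a random peer's,
    # so there's no fixed "correct" signal: any shared convention wins.
    peer = random.choice(population)
    return agreement(agent, peer)

def mutate(sig):
    return [bit ^ (random.random() < MUT) for bit in sig]

def crossover(a, b):
    cut = random.randrange(SIG_LEN)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(SIG_LEN)] for _ in range(POP)]
for _ in range(GENS):
    ranked = sorted(pop, key=lambda a: fitness(a, pop), reverse=True)
    parents = ranked[:POP // 2]          # keep the best half
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

mean_agree = sum(agreement(pop[0], a) for a in pop) / POP
print(f"mean agreement with agent 0: {mean_agree:.1f} / {SIG_LEN}")
```

The interesting property is that nothing in the code specifies what the convention should be; the population just has to converge on one. The real research version replaces bit strings with behaving agents and agreement-counting with survival in a simulated world, but the selection machinery is the same shape.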
Once you get them to evolve communication themselves, you can move on to getting them to communicate in English, which is a different problem, and one that is probably best solved by another variant of machine learning. This lets you sidestep the problems of framing and context that plague cognitive scientists attempting to build Turing Test machines, as the artifacts will evolve an understanding of framing and context by themselves.

* - I'm so not up for getting into the debate about whether robots can ever be truly conscious, but the Wikipedia entry on the Chinese Room is worth a read. If other people start discussing this I'll wade in, but right now I totally can't be bothered.

Sure - we've got pretty advanced systems now which are capable of very successful humanlike movement and computer vision. Again, we're years, not decades, from a very useful set of capabilities in this area.

Me too. We're apparently getting an ASIMO soon though, so that'll do for now.

AH

P.S. If you'd like some more reading material on this topic, I've got some great papers that I can pass on.
ASIMO's great, I want one, but from what I've read the cost of ownership is going to be prohibitive. You also have to wonder how far the ASIMO technology can be pushed, or how well humans will adapt to a society where robots are an everyday part of life. Tachikomas and ASIMO are both presented as curious and childlike, which makes them non-threatening and endearing (even for a tankette). If you have extra reading material, I'd be interested to see it. Do you need me to PM you my email address?