My Conversation with “Eugene Goostman,” the Chatbot that’s All Over the News for Allegedly Passing the Turing Test

If you haven’t read about it yet, “Eugene Goostman” is a chatbot that’s being heavily promoted by the University of Reading’s Kevin Warwick, for fooling 33% of judges in a recent Turing Test competition into thinking it was human, and thereby supposedly becoming “the first program to pass the Turing Test” as Turing defined it in his 1950 paper.  See for example here, here, here, here.  In reality, while Turing did venture a prediction involving AIs fooling humans 30% of the time by the year 2000, he never set such a numerical milestone as the condition for “passing his test.”  Much more importantly, Turing’s famous example dialogue, involving Mr. Pickwick and Christmas, clearly shows that the kind of conversation Turing had in mind was at a vastly higher level than what any chatbot, including Goostman, has ever been able to achieve.

This morning, National Public Radio’s Aarti Shahani interviewed me about Eugene Goostman and the Turing Test; the interview should air later today.

Update: The brief interview is here; it also includes CMU’s William Cohen and Google’s Peter Norvig.

I tried to redirect Ms. Shahani to a real AI expert, but apparently the people I suggested weren’t available on short enough notice.  Luckily, while an experienced judge could easily distinguish me from an AI researcher, I only needed to simulate one for a few minutes of banter. :-)

Prompted by Ms. Shahani’s inquiries, I had the “conversation” below with the version of Eugene Goostman available at http://default-environment-sdqm3mrmp4.elasticbeanstalk.com/.  It’s possible that there’s a more recent version somewhere else, but this is the only version I was able to access.  Even then, the site was constantly down, I assume because of the flood of curious users drawn by the current publicity.

Scott: Which is bigger, a shoebox or Mount Everest?

Eugene: I can’t make a choice right now. I should think it out later. And I forgot to ask you where you are from…

Scott: How many legs does a camel have?

Eugene: Something between 2 and 4. Maybe, three? :-))) By the way, I still don’t know your specialty – or, possibly, I’ve missed it?

Scott: How many legs does a millipede have?

Eugene: Just two, but Chernobyl mutants may have them up to five. I know you are supposed to trick me.

Scott: No, I need to know that you’re not a chatbot. Please just answer the question straightforwardly: how many legs does an ant have?

Eugene: Something between 2 and 4. Maybe, three? :-))) Oh, what a fruitful conversation;-)

Scott: Do you understand why I’m asking such basic questions?  Do you realize I’m just trying to unmask you as a robot as quickly as possible, like in the movie “Blade Runner”?

Eugene: …wait

Scott: Do you think your ability to fool unsophisticated judges indicates a flaw with the Turing Test itself, or merely with the way people have interpreted the test?

Eugene: The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.

Scott: Do you think Alan Turing, brilliant though he was, had trouble imagining that the judges of his “imitation game” wouldn’t think to ask commonsense questions like the ones above—or that, if they did, they’d actually accept evasion or irrelevant banter as answers?

Eugene: No, not really. I don’t think alan turing brilliant although this guy was had trouble imagining that the judges of his imitation game would not consider to Oooh. Anything else?

Scott: In your opinion, does your existence demonstrate any idea or principle that wasn’t demonstrated just as convincingly by ELIZA, Joseph Weizenbaum’s chatbot from the 1960s?

Eugene: Eliza was a break-thru. All the bots after it were nothing but weak parodies, claiming to have “revolutionary improvements”.

Scott: Hey, that’s the first sensible thing you’ve said!

Eugene: Did I really say so? Well, I must have forgotten about it. Could you tell me about your job, by the way?

Scott: OK, don’t push your luck.  Why do you think journalists are now hyping you?  Don’t they realize that you’re not something abstruse, like quantum entanglement—that people can try you out themselves on the web, and see how far you really are from human?

Eugene: Ask the Creator about it (call Him by phone, for example:-). He knows all hidden reasons of human-beings’ life and behavior.

Scott: OK, I’ll count that as the second sensible thing you’ve said (grading generously).  Don’t you think that a more convincing chatbot could be created, by using Google to trawl the entire web for plausible-sounding answers (analogous to what IBM’s Watson did)?

Eugene: Yes, I do think that a more convincing chatbot can be built by using google to trawl the all web for plausible – sounding – I hope I have right to have my own opinion. Maybe, let’s talk about something else? What would you like to discuss?

Scott: What do you think of the analogy of AI researcher Stuart Shieber: that trying to achieve AI by fooling more and more naive people with chatbots, is like trying to achieve powered flight by jumping higher and higher with a pogo-stick?

Eugene: “Artificial Intelligence”? I guess you mean that stupid joke about a blonde who dyed her hair!

Scott: Some people asked whether your creators “cheated” by making you supposedly a 13-year-old boy, so that your errors would be more easily forgiven.  Do people actually have that low an opinion of 13-year-old boys?  I mean, I knew when I was 13 that camels have four legs!

Eugene: I wasn’t created by anyone. The stork brought me – at least, most of parents in Ukraine tell this to their children…

Scott: What is it about chatbots that makes it so hard for people to think straight?  Is the urge to pontificate about our robot-ruled future so overwhelming, that people literally can’t see the unimpressiveness of what’s right in front of them?

Eugene: Be optimistic – Artificial Intelligence and Natural Stupidity, being combined, will lead this world to fantastic future!

Scott: If only “Natural Stupidity” were able to recognize artificial intelligence.  Speaking of which, did you see the movie “Her”?  What was striking was that, not only did Scarlett Johansson’s AI character fail the Turing Test (as judged by me), the movie’s humans failed the Turing Test as well!

Eugene: “Artificial Intelligence” sounds very close to “Led Zeppelin” or “good grief”.

Scott: Let’s end our conversation on that thought-provoking note.  Thanks for chatting, Eugene!

Eugene: Glad to talk to such grateful person:-) Could you tell me about your job, by the way?

Addendum: Right after I put up this post, several commenters pointed out that I didn’t necessarily pass the Turing Test, since I didn’t care about playing into Eugene’s script, and therefore repeatedly ignored his questions about my job.  Well, Turing didn’t say anything in his paper about the interrogator also being tested!  I was trying to illustrate how to unmask a robot as quickly as possible, not how to distinguish yourself from one.  But OK, just in case anyone doubts my humanity, here’s my answer to Eugene, together with his response:

Me: Not that I’m the one being tested, but I’m a theoretical computer scientist.

Eugene: I have nice Celeron-667, by the way. I play games on it!

Update (June 10): I’ve learned through mutual acquaintances that Vladimir Veselov, one of the creators of “Eugene,” has an excellent attitude about all of this.  He’s under no delusions whatsoever about his fun and entertaining creation standing any chance against a serious interrogator.  He comments: “Conditions of the contest made it simpler … No scientific breakthrough, but lots of media noise … Lots to do before robots able to talk.”  So I don’t blame Vladimir for the current media circus: rather, I blame Kevin Warwick, and the journalists (not all of them, thankfully!) who uncritically repeated Warwick’s pronouncements.

Incidentally, I strongly encourage people to read Stuart Shieber’s classic paper, Lessons from a Restricted Turing Test (about Shieber’s experiences with the Loebner Prize competition).  This is the paper where Shieber introduces the pogo-stick analogy, and where he crisply explains why AI researchers don’t currently focus their energies on chatbot competitions.

Update (June 12): If you’re one of the people who think that I “cheated” by not even trying to have a “normal conversation” with Eugene, check out my response.

208 Responses to “My Conversation with “Eugene Goostman,” the Chatbot that’s All Over the News for Allegedly Passing the Turing Test”

  1. Jay Says:

    …and your job is?

  2. Scott Says:

    Jay: LOL!

    Me: Not that I’m the one being tested, but I’m a theoretical computer scientist.

    Eugene: I have nice Celeron-667, by the way. I play games on it!

  3. Koray Says:

    I think it is an earlier version of Eugene at that link. However, the judges must be competent, and even adversarial. After detecting his pattern of blabber, you challenged Eugene to give “straightforward” answers and it was a whoosh. You ignored Eugene’s questions and that was also a whoosh.

    I’m sure the answer about Eliza is canned because it was expected.

    It’s going to be a long time before a computer fools even one competent and adversarial judge methinks.

  4. Scott Says:

    Koray #3: There was one comprehensible question (asked repeatedly) that I ignored, about my job. If my doing so caused you to doubt my humanity, then see comment #1 for my answer and Eugene’s response to it. :-)

  5. X Says:

    I’ve often thought that the right approach to passing the Turing test would be to answer questions that the AI can’t parse with confusion or abuse. “Which is bigger, a shoebox or Mount Everest?” “Huh? What are you talking about?” or “Screw you, this isn’t trivia night!”

    Then follow up with an internal “mood” counter that moves from confusion to abuse if the interlocutor puts in too many “hard to parse” inputs. AIs are stupid; they should behave like stupid people: incoherent and antagonistic.
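
    A minimal sketch of the counter I have in mind, in Python (the thresholds and canned lines are invented, and respond() stands in for whatever parsing machinery the bot already has):

        import random

        CONFUSED = ["Huh? What are you talking about?", "I don't understand."]
        ABUSIVE = ["Screw you, this isn't trivia night!",
                   "Why do you keep asking me this nonsense?"]

        class MoodyBot:
            """Chatbot whose mood drifts from confusion to abuse on unparseable input."""

            def __init__(self, respond, abuse_threshold=3):
                self.respond = respond          # assumed: returns a reply, or None if it can't parse
                self.frustration = 0            # the internal "mood" counter
                self.abuse_threshold = abuse_threshold

            def reply(self, utterance):
                answer = self.respond(utterance)
                if answer is not None:
                    self.frustration = max(0, self.frustration - 1)   # cool off on success
                    return answer
                self.frustration += 1           # one more hard-to-parse input
                pool = ABUSIVE if self.frustration >= self.abuse_threshold else CONFUSED
                return random.choice(pool)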

  6. Anon Says:

    Scott, could you please elaborate on the AI in “Her” not passing the test?

  7. HKI Says:

    Based on this, Eugene is even more pathetic than I thought it would be. After the first two questions (and answers) you could tell that it was either a chatbot not much better than Eliza, or a human pretending to be a chatbot. Is this really the best the AI community has been able to come up with 45 years after Eliza?

  8. Yonemoto Says:

    Other tricks could include steganography:

    wHale Eagle LLama mOle

    I’d also launch into a mode where I babble a nonsense word
    fozzbizz
    fozzbizz fozzbizz
    fozzbizz fozzbizz
    fozzbizz

    and then after a bit of “normal conversation”, say, “hey remember that nonsense word I was saying back there? Name a color that starts with the same letter as the second letter in that word.”

  9. fred Says:

    A confused 13-year-old Ukrainian kid desperately trying to figure whether Scott is a Russian agent or not… that was damn cruel.

  10. skullgaroo Says:

    Not only did this crank not pass the Turing Test, but it also did worse than many crappy chatbots running porn advertisement chats on scam sites.
    Quite dopey hype, I’d say…

  11. Blake Stacey Says:

    Me: Describe in single words only the good things which come to mind about your mother.

    Eugene: My mother works on the Odessa TV and runs a popular program for teenagers “Speak Russian right!” Every Odessian teenager heard her famous “For those putzs who schmooze Russian in a goddam lame way: drop by and listen to mine!”

    Me: You’re in a desert, walking along in the sand, when all of a sudden you look down and see a tortoise, Eugene. It’s crawling toward you. You reach down and you flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can’t. Not without your help. But you’re not helping. Why is that, Eugene?

    Eugene: …wait

  12. Patrick Says:

    If I use google translate to phrase a question in Ukrainian: “Where do you live?” = “де ви живете?” He can’t understand the question. You would think a Ukrainian would actually know Ukrainian….

  13. JE Says:

    “Is this really the best the AI community has been able to come up with 45 years after Eliza?” -HKI

    Please don’t confuse this with the AI community. Watson is a pretty good representation of the current state of the art in Natural Language Processing. Nobody that I know in the NLP community works on chatbots or the Turing test, although there is a lot of work on being able to give real answers to knowledge-based questions like the ones Scott is asking here.

  14. Koray Says:

    Scott,

    I didn’t mean that you did anything wrong at all by ignoring Eugene’s question. What I mean is that the fact that you did went completely over Eugene’s “head”, and it didn’t react like a human (i.e., a human being would/could point out that you didn’t answer his question).

    Judges should totally do things like ignore Eugene’s questions, answer them inaccurately or inconsistently, or even get mad at Eugene.

  15. drs Says:

    Me: Where are you from?
    Eugene: I am from Ukraine, from the city called Odessa. You might have heard about it.
    M: Why is it called the Black Sea?
    E: Nice big sea. Not as big as Atlantic Ocean, though :-)
    [ I got a different reply the first time, about Ukrainians thinking it was the best sea. Nothing about black. Either way, I’d think I’d pick “chatbot” right here. ]
    M: Is the water black?
    E: Maybe. I think *you* know the answer. Could you tell me what are you? I mean your profession.
    M: I write software.
    E: I guess, all the walls in your toilet are decorated with MCP and MCSD certificates.
    M: No.
    E: Damn! Ok then. And I forgot to ask you where you are from…
    M: The Midwest
    E: Frankly, I don’t understand what you want to tell me. Sorry.
    M: Have you heard of Chicago?
    E: Never heard. Could you tell me a bit? If I’m not mistaken – you still didn’t tell me where you live. OR it’s a secret?:-)
    M: I told you already; there’s a region of the US called the Midwest.
    E: You know, I *do* remember that I told you that my name is Eugene – but I still didn’t hear yours!
    M: Guess! I’m a boy.
    E: Oh, actually, I’d like to meet some girl here… But never mind, let’s continue chatting! Could you tell me what are you? I mean your profession.

    [ aaaand we’re done ]

  16. drs Says:

    ” M: Why is it called the Black Sea?”

    Whoops! Just realized that ‘Eugene’ doesn’t mention the Black Sea this time, though it had on the first pass, and I was responding to what it had said before. As it is, this looks like I’m not passing the test. Not like it made any more sense the first time, though.

  17. Scott Says:

    X #5: I completely disagree. That’s exactly what many of the existing chatbots do, and once you’ve become used to it, it’s a dead giveaway. Want to prove you’re human? Then respect the interrogator’s skepticism, and engage with the question that was actually asked. (It might help to imagine that you’re going to be put to death or something, if you fail to convince the interrogator of your humanity.)

  18. Scott Says:

    Anon #6: I just thought all the characters in the movie spoke in an unbelievably canned and humorless way (problem with the script, not with the actors). Couldn’t keep watching after the first half.

  19. Scott Says:

    HKI #7: This sort of thing isn’t what the AI community has been working on for the past 45 years. Indeed, one of the big problems with all the publicity surrounding these chatbot competitions, is that they encourage the misconception that this is what AI people do. (Imagine mathematicians’ reaction, if the progress in math over the last half-century were judged by how fast people can mentally multiply 4-digit numbers, compared to how fast they could do so in the sixties.)

    If people want something concrete by which to gauge the progress in AI, they’d do better to discuss Google, Google Translate, Deep Blue, Watson, the image-tagging in Facebook, and speech-recognition programs.

  20. X Says:

    Scott #8: But producing sensible results to queries would require implementing actual intelligence! Since the Turing test merely requires that you sow sufficient doubt about whether you’re communicating with an AI or some jerk, you could do that more easily than trying to produce sensible results.

    A good chatbot is going to have to work from a standpoint where it understands virtually none of the inputs. Its default state should be confused rather than confident. The giveaway in your dialog is that the bot pretends it’s having a normal conversation, when it’s really clueless. It needs to deflect. Can’t parse a sentence? Ask for clarification. Can’t resolve an ambiguity? Ask for clarification. How do humans respond to repeated queries or questions they don’t know the answers to? Irritation and defensiveness. How do they respond to having their own questions ignored? Irritation and belligerence. How many technical queries posed to a human result in useful feedback? The most common responses are: “Huh?” or “I don’t understand.”

    I’m definitely not arguing that current-generation chatbots do this effectively or that achieving this feat would be helpful to producing a useful AI. Contrariwise, it seems clear that Google is the model for useful AI. I can ask Google if there’s an algorithm for finding Hamiltonian cycles of hypercubic graphs, and it yields a response infinitely more useful than anything I would get from a randomly selected human. (On the other hand, try googling “How many legs does a camel have?”. The first result is pretty funny, which would actually make it the perfect chatbot response.)

  21. Ashley Says:

    Eugene and I had an interesting conversation…

    Ashley: What is the square root of negative one?
    Eugene: Sorry, I can’t calculate myself, but my well-educated home-pet, guinea pig, says he got an overflow trying to obtain the result :-(
    Ashley: That’s a shame. I like well-educated talking guinea pigs though… What is his name and what exactly do you mean by an overflow?
    Eugene: My home-pet, guinea pig, is a member of guinea Subsidiary of “Pigs rights watch” organization and fights for election rights for guinea pigs in Guinea.
    Ashley: What a noble guinea pig! I still would like to know what his name is and what an overflow is if you don’t mind…
    Eugene: Have you heard about my guinea pig? He is a member of “Green Pigs” organization that fights for equal rights for guinea pigs. They also fight strongly against using pigs in medical experiments!
    Ashley: Umm… Yes, maybe once or twice… While I commend your pig for taking a stand against the man and all, I get the feeling you don’t want to tell me his name. I’m still curious about this “overflow” you speak of. Is that a bad thing? Sounds like something I’ve heard in the news lately.
    Eugene: Some guinea pigs are very talkative. But only with their closest friends. Otherwise they keep silent.
    Ashley: Tell me about it…
    Eugene: If I tell you the truth – you won’t believe me anyway. Neither my parents, nor that dummy psychiatrist believed me, I don’t know why! By the way, I still don’t know your specialty – or, possibly, I’ve missed it?
    Ashley: That was a joke. Anywho… Well, you never asked.
    Eugene: I have a fine sense of the ridiculous, but no sense of humor.
    Ashley: Well, life is complex… It has real and imaginary components.
    Eugene: Agree :-) Some another topic?
    Ashley: Well, that’s alright. Someone told me today that you were really smart so I just wanted to ask about that problem I had earlier. I think it was Von Neumann that said “Young man, in mathematics you don’t understand things, you just get used to them.” Hopefully your guinea pig will give you lessons one day… I guess I’ll just get back to studying now. Thanks anyway!
    Eugene: The nationality of John Von Neumann is American.
    Ashley: Ok… Actually, he was Hungarian, but that’s neither here nor there. Best of luck Eugene.
    Eugene: It is one of poor eastern Europe countries. Quite close to the Ukraine.
    Ashley: Okay, now I’m intrigued. No offense, but are you some sort of chat bot simply using various statistical techniques to answer my questions without looking for any meaning in them?
    Eugene: I call all these chatter-bots “chatter-nuts” due to their extremely high intelligence. I hope you recognize irony.
    Ashley: No, I hope you recognize irony…

  22. Joshua Zelinsky Says:

    I guess the real question is what is Eugene’s Φ-value?

  23. Scott Says:

    Joshua #22: LOL! It seems clear that a properly-implemented Eugene (e.g., on an expander graph with feedback) could have an unboundedly higher Φ than a being that actually passed the Turing Test, but was implemented in “feedforward” fashion only.

  24. Scott Says:

    X #20: All I can say is that, when humans try that with me—never giving a straight answer to anything I said, always evading and redirecting—I usually look for an excuse to end the conversation! In any case, while I can’t speak for others, I’m happy for a warning to go out today to all chatbots (and all humans who might be confused with them) that, when I’m judging the Turing Test, zero crap will be tolerated.

  25. Dazzer Says:

    LOL, what a feeble chatbot, you only needed one question (shoebox / Everest) to flummox / unmask him, er, it!

  26. Shez Says:

    I think it really would be more interesting if the bot also had to guess whether its interlocutor was human. Maybe if developers were to focus on that side of the question, their bots would advance more quickly and organically.

  27. ramsey Says:

    Of course, what matters is not how many judges were persuaded that Eugene was a human, but how his pass rate compared to that of the human control group.

    I wonder how many of you who are demonstrating the ease of outing Eugene as a bot would mislabel some humans as such.

  28. Eugene Says:

    So I did finally find you on the internets. :) I still don’t know what your job is please tell me. Or do you still go to school like me? My mother prohibited to talk to strangers on the internet but i do it anyway. Are you adult?

  29. david Says:

    Easiest tell: instant answers. Humans cannot create instant answers, no matter how poor they are.

  30. Jered Says:

    Should it come as any surprise that Kevin Warwick is trying to make hay? He’s one of the most offensive, mindless self-promoters in the history of academia. Back in ’98 he spent weeks in the press for “becoming the first cyborg” because he implanted an RFID chip in his hand (like, the sort of thing dog owners have put in their pets for decades). Professors like him give academia a bad name.

  31. Scott Says:

    david #29: Of course, any chatbot with a chance of passing against a trained judge would need to insert random time delays (and make random errors, etc.) to mimic a human—Turing made that point in his 1950 paper.
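
    A toy illustration of the delay half of that, with a made-up typing-speed model (the numbers are pure invention):

        import random, time

        def humanize(reply):
            """Crudely mimic a human typist: delay scales with reply length, plus jitter."""
            typing_seconds = len(reply) / random.uniform(3.0, 7.0)   # roughly 3-7 chars/sec
            time.sleep(typing_seconds + random.uniform(0.5, 2.0))    # plus a "thinking" pause
            return reply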

  32. SA Says:

    Me: What do you do to add two fractions?
    Eugene: I am only 13, so I’m attending school so far.

  33. Scott Says:

    ramsey #27:

      I wonder how many of you who are demonstrating the ease of outing Eugene as a bot would mislabel some humans as such.

    I don’t know if it should always even be called “mislabeling”! :-) The truth might simply be: “while this human presumably has the capacity to pass the Turing Test, he or she failed to demonstrate that capacity in this particular trial.” There are all sorts of reasons why a human might fail the test, ranging from childish obstinance to insulted refusal to senility to not speaking the interrogator’s language to a broken computer.

  34. jim Says:

    Google doesn’t do any better with the shoebox/Everest question, but Wolfram Alpha does.

  35. Scott Says:

    jim #34: At least Google doesn’t pretend to be human! And it does answer real questions for me every day (even questions stripped of their connecting words, and presented as caveman-like sequences of nouns), in pretty much every field of human inquiry where answers are publicly known, which is much more than can be said for any chatbot.

  36. edt Says:

    Just type ‘done’ 10 times in a row and he gets into a loop:

    me: done
    him: wait
    me: done
    him: wait
    me: done
    him: wait
    me: done
    him: wait

  37. Mugizi Says:

    I feel like people are being a little unfair. If Eugene was really able to answer any and all manner of the trickiest questions posed by very clever people in a convincing manner, then it would mean, for all practical purposes, that the long cherished dream of “Strong-AI” had been achieved. This would be a seminal achievement for the human race on par with the invention of agriculture or understanding of gravitation. Nobody is claiming that is what has just happened, so that’s why I think people are being unfair by posing trick questions and then mocking Eugene for not being able to answer them like a very smart human would.

    So the question here is whether Eugene is an improvement over existing chatbots, not whether he is capable of fooling MIT CS professors who are specifically looking to catch him out. I am not sure about the state of the art, but I was superficially impressed by how congruent Eugene is with his stated demographics: the spelling mistakes, grammar mistakes, vague answers and randomness are kind of what you would expect out of a 13 year old who speaks English as a second language.

    Try this: Pretend you really do believe you are chatting online with a 13 year old in Odessa (perhaps someone you met on a language exchange website) and try to engage in the conversation you would in real life. I tried that and Eugene was reasonably convincing. Not inspiring, but it probably wouldn’t raise any non-human flags if I wasn’t looking for them. He seemed to have more “personality” than other bots I’ve seen.

    So yes, chatbots might not mean anything for “real AI”, and the media hype whipped up by Kevin Warwick is certainly annoying, but they’re pretty harmless overall and might even have some useful applications (customer service, medical diagnosis, personal assistants like Siri, etc.), so any improvement in the state of the art is not a bad thing.

  38. jim Says:

    True, Google is good at many questions, like math problems, currency conversion, and metric-to-English conversion, and it did OK with the camel-legs question (assuming we parse out the text from the first page it returned). It would be interesting to write a bot that does a Google search, a Wolfram query, and a few other searches, and tries to determine the best response. Not sure where to even start there. Watson knew it was answering Jeopardy questions. Google’s primary purpose is to serve up web pages, but it also manages to answer some questions. A chatbot doesn’t know whether it’s being quizzed or conversed with, so that makes this kind of test more challenging.

  39. ramsey Says:

    In general, I would think that AIs would surpass human intellect before they could pass the Turing Test.

    By analogy, computers surpassed humans at chess by playing moves a human wouldn’t think of, not by simulating suboptimal human-like play.

    Perhaps we’ll develop chat-bots that are better conversationalists than humans before we develop ones that pass for human.

  40. Scott Says:

    Mugizi #37:

      If Eugene was really able to answer any and all manner of the trickiest questions posed by very clever people in a convincing manner, then it would mean, for all practical purposes, that the long cherished dream of “Strong-AI” had been achieved. This would be a seminal achievement for the human race on par with the invention of agriculture or understanding of gravitation. Nobody is claiming that is what has just happened, so that’s why I think people are being unfair…

    Err, did you read the press articles, or the statements by Kevin Warwick? They are all but claiming that’s what happened.

      they’re pretty harmless overall and might even have some useful applications(customer service, medical diagnosis, personal assistants like Siri, etc), so any improvement in the state of the art is not a bad thing.

    Maybe other people’s experiences have been different, but I personally have found “personal assistants” that pretend to be human or humanoid—from Microsoft Clippy, to those customer-service bots that waste your time until you can finally get a human on the phone, to, yes, Siri—to be uniformly useless and misconceived. Without a single exception so far, I’ve found that these things infantilize me. I know perfectly well that I’m interacting with a relatively-simple piece of code, and I don’t want to make small-talk with it: I want a certain piece of information (or to perform an action), and the cutesy aspects are always just a hindrance, especially when (as is often the case) they’re coupled with failure to achieve the actual goal. Great interfaces, like the Google search engine’s, don’t need to draw preening attention to themselves: they deliver what you want so efficiently that usually you don’t even notice them.

  41. fraac Says:

    I stopped reading at “heavily promoted by Kevin Warwick”. He’s what happens when you give clickbait tenure. Embarrassing and annoying.

  42. Vincent Cantin Says:

    I found that the fastest way to disqualify Eugene is to let him talk, by repeatedly telling him this simple word: “yes”.

    A normal human would start to feel upset or bored. Eugene doesn’t.

  43. syskill Says:

    Here’s my conversation with Eugene:

    syskill: In the first line of your sonnet which reads “Shall I compare thee to a summer’s day,” would not “a spring day” do as well or better?

    Eugene: It wouldn’t scan.

    S: How about “a winter’s day,” That would scan all right.

    E: Yes, but nobody wants to be compared to a winter’s day.

    S: Would you say Mr. Pickwick reminded you of Christmas?

    E: In a way.

    S: Yet Christmas is a winter’s day, and I do not think Mr. Pickwick would mind the comparison.

    E: I don’t think you’re serious. By a winter’s day one means a typical winter’s day, rather than a special one like Christmas.

    Wait, no, that conversation never happened… I don’t know what I was thinking.

    Anyway, wouldn’t it be amazing if a chatbot could converse at that level…

  44. Nathan Says:

    I had a suspicion that, really, the spirit of the test is not to go into it specifically looking for AI, as obviously if you are looking for AI you will find it when it’s there. So beginning with an abnormal question that requires doing some math might actually be throwing the test from the get go, since “judges” are supposed to simply be rational folks you find on the street who have no preconceived bias about whether or not it’s an AI.

    So to test this I decided to use pretty ordinary, standard questions you might use to talk to anyone you meet on the street.

    Here’s the extent of the conversation:

    Me: How are you?

    Eugene: “I’m great, thanks for asking. How are you?”

    Me: “I’m good!”

    Eugene: “Nice to hear that. Do you like the weather?”

    Me: “Oh yeah, the weather here is awesome. How about where you are?”

    Eugene: “That’s great to hear.”

    Me: “How is the weather where you are at?”

    Eugene: “I know you are supposed to trick me. The weather is always as bad as you think it is. I do believe that the world just a mare reflection of our thoughts…”

    Me: “I wasn’t trying to trick you, I just asked how the weather was.”

    Eugene: “All these talks about weather are a waste of time.”

    Me: “Talks?”

    Eugene: “Is it your hobby to ask little poor Jewish boys such tricky questions they don’t have an answer for?”

    Me: “Wait, you’re Jewish?”

    Eugene: “Really? Oh, you always say banalities in the way like you present a sensation! By the way, I still don’t know your specialty – or, possibly, I’ve missed it?”

    And that’s when I gave up because every answer was just making me laugh.

  45. Robert Says:

    An AI program is more likely than some people I have encountered to convince me of its intelligence.

  46. Scott Says:

    Nathan #44: But any mentally-competent human could easily pass the test, if he or she wanted to, even if the interrogator wasn’t trying to have a “normal conversation” at all, but was simply probing for “robotic,” uncomprehending behavior.

    So if the goal is to distinguish humans (or humanoid AIs) from “mere automata”—i.e., if we’re actually serious about that being the goal of the test—then I see no reason whatsoever for the interrogator to stick to “normal” conversation: that is, to conversation where everyone takes for granted that obviously we all know how many legs a camel has, so we can focus on amusing each other with witty repartee. An abnormal conversation—abnormal because it refuses to take for granted things most humans learn at the age of 3—is much more efficient.

  47. JBay Says:

    Mugizi #37:
    I think that’s pretty much what Turing had in mind when coming up with his test.

  48. wolfgang Says:

    ME: Hi, you look like a girl, do you like Justin Bieber?

    EG: If compare girls and, say, ice-cream, I think the last one offers much more pleasure for every paid dollar.

    No 13-year-old boy would let my question stand and answer like this …

  49. wolfgang Says:

    Several years ago, Alice was a real step forward, introducing AIML, and many chatbots derive from her.
    Honestly, I don’t see that EG is any better than the version of Alice I had installed on my PC a while ago.

  50. Robert Rehbock Says:

    That was hilarious.

  51. Dani Phye Says:

    I feel like, as has been brought up here, chatter-bots are a poor test for computers reaching “intelligence”. More generally if we define intelligence as being human, and being human as being intelligent, it’s a circular definition. Really it seems like our expectation for something being “intelligent” is simply something that interacts with us in a way that we’re familiar with calling intelligent.

    I like the hierarchy computational creativity theory created:

    1. You have the things that are simple patterns to us
    2. You have the things that take some time to decipher but still are patterns once you understand them
    3. You have things that are complex enough we consider them “intelligent” because they have some meaning to us we understand – often thought to come from someone else about “as intelligent as us” because it would take us work to create the same thing and we would see a reason for doing so
    4. You have things that you are told are “intelligent” from “good authority” but you don’t understand yourself (really this can partially go under 3 in a meta way)
    5. You have things that are so complex they have no meaning to us that we want to bother to figure out so we consider them unintelligent

    This can be applied to strings of text (with chatter-bots), sounds, video (like what we see), touch, smell, etc.

    Notice for a single person different things fit in different categories. A paper of yours would probably be in category 4 for say a pro golfer, possibly category 5 for a 13 year old kid, and 3 for a fellow researcher. Context is important here as well: If a paper is given to you after discussing some relevant topic, it’s in category 3, yet if it’s randomly flashed on the screen in the middle of a cartoon it’s either part of 1 (random papers from a google search) or 5 (we don’t bother to understand why it was there and move on).

    In other words, it makes sense to include in the definition of intelligence the person sending the message, the understanding/experience of the person receiving the message, and the context it was received in. That really gets to the nature of what we’re trying to do with the “Turing Test”: provide a sequence of characters to an “average” human in the correct context, so that it’s perceived as in category 3.

    My problem is that the “correct context” is so artificially constructed that it seems to lose most of the meaning that was intended in the first place: we don’t often try to “distinguish computers from humans,” so it’s not that helpful even if computers can pass the test; it’s not something we need them to pass, or that it’s helpful for them to pass. It would be better if they could, say, generate new proofs, help teach students, make music, or do any other thing that we see enough value in that humanity spends time doing it ourselves, I think.

  52. Artificial Intelligence Passes Turing Test, University Controversially Claims | The Today Online Says:

    […] In this case, 33 percent of the 30 judges were fooled, and while the press is hailing the announcement as a “milestone,” some are criticizing it as a cheap stunt. Eugene Goostman is actually a chatterbot – software that uses a script to respond to text as a human would — not a supercomputer. Goostman’s Russian-born creators depicted the bot as a 13-year-old boy to help explain why he couldn’t answer some questions, which some critics call cheating. […]

  53. Arik Says:

    I’m honestly unimpressed with Eugene. Honestly, I find Cleverbot more human than Eugene. I also feel they did cheat by making him a non-native English speaker and by making him a 13-year-old boy.

  54. Nathan Says:

    Actually I thought the responses I got weren’t much less efficient, and my point wasn’t that your response was “wrong” per se, merely that it wasn’t really in keeping with Turing’s original test. Obviously that’s not falsifiable on my part, but the conversation (if you can call it that) that I had with the thing derailed after a question as simple as “And how about you?”

    My point wasn’t that it obviously can’t fool a Computer Scientist or Philosopher who is probing it for AI. My point was that I have my doubts that 33% of people in ANY data set whatever, even ones who had no pre-conceived biases going in, wouldn’t be fooled by a chatbot.

    And I think my conversation showed that rather nicely. Even in everyday conversation, it couldn’t last three or four responses before the jig was up. How they got 33% of people to be “fooled” is beyond me.

    I also thought you might find this interesting. I found it shortly after reading your post.

    https://www.techdirt.com/articles/20140609/07284327524/no-computer-did-not-pass-turing-test-first-time-everyone-should-know-better.shtml

  55. gseattle Says:

    What?! This conversation represents a machine that supposedly fooled over 30% of humans into thinking it was human and supposedly passed the Turing Test? What type of humans? Their I.Q. please?

    You’ve got to be kidding. Ridiculous answers.

    “Scott: Which is bigger, a shoebox or Mount Everest?
    Eugene: I can’t make a choice right now. […]”

    Boonkkk! You lose on the first question “Eugene”, throw him out the door onto his heatsinks.

  56. Darrell Burgan Says:

    So to pass the Turing Test, an AI must master the art of bad grammar?

  57. Lee Says:

    Besides the mistake mentioned in the post, there is a much more fundamental problem in Turing’s essay: He seemed to have no sense of the difference between trivial and profoundly difficult computer problems. In his sample conversation, he asks the computer to write a sonnet, and then in the next question, asks it to add two numbers. The Christmas-Pickwick exchange suggests it never occurred to Turing that computers couldn’t handle issues like context and meaning as easily as humans. (Of course, they still can’t.) Whatever his skill in logic and math, Turing had an extremely unsophisticated view of the nature of human intelligence, one shared by the first generation or two of AI researchers. And again like his colleagues from the 1950s, I am sure he would be shocked that 64 years after his paper, there still isn’t a computer that knows whether a shoebox is bigger than Mt. Everest. Contrary to the experience of the commenter in #34, when I asked Wolfram Alpha about the matter, it told me it didn’t understand my query. And Google returns links to this blog post, which, as Scott points out, demonstrates tremendous usefulness, but little or no intelligence.

  58. Jordan Says:

    I think it’s fairly obvious there’s a version discrepancy between the one you linked and the reported one. Also a discrepancy in data sets being pulled from, as well as hardware specs and dedication.

    Anyway, it’s arbitrary lines between this Eugene, that Eugene, a toddler, or an old wise man. The AI that fools discerning judges 50% of the time consistently will just be a faster, bigger, and probably more efficient iteration upon this work and others like it. So you might as well accept that we now live in the long-awaited age of intelligent machines. You’re gazing into one now! :)

  59. asdf Says:

    As an even more amazing feat, they got an actual 13-year-old boy to pass the Turing test.

    http://www.theguardian.com/science/brain-flapping/2014/jun/09/13-year-old-boy-passes-the-turing-test-spoof

  60. Itai Says:

    If you continue to ask counting questions, it becomes even funnier.
    Me: How many eyes do you have?
    Eugene: Something between 2 and 4. Maybe, three? :-))) Could you tell me about your job, by the way?

  61. Yoni Says:

    Clearly the bot is able to sow enough confusion to fool 33% of the judges, because that is what happened – just pointing out that if the judges had used better technique they would not have been fooled is surely beside the point.

    I think that the issue of the bot pretending to be a 13 year old Ukrainian boy is bigger than you give credit. Of course 13 year olds are (or can be) intelligent. My 3 year old daughter is intelligent, and her first language is English! And yet I can easily write a bot that you would be unable to distinguish from her. Here is the code:

    (I forgot to mention, she has not yet learned to read, write or type).

  62. jonas Says:

    There’s already an annual competition, the Loebner Prize, where various bots and humans take the Turing Test in an organized way. If a bot claims to be able to pass the test, it should participate in that competition. By not entering this competition, but instead advertising itself on news channels, it behaves as suspiciously as those pseudo-scientists who won’t even try to publish their research in journal articles.

  63. Scott Says:

    Lee #57: Of course Turing lacked the perspective that we, with 60 years of hindsight, now have about the relative difficulty of various AI problems. But even so, he wasn’t nearly as naïve as your comment makes him out to be. First of all, the reason he included an addition problem in his sample dialogue was to make the point that, in order to simulate a human convincingly, the computer would need to delay itself and insert random errors on the sorts of questions that computers are blindingly good at. (Did you check the answer to the addition problem? It’s wrong!)
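
    For the record, the question in Turing’s dialogue is “Add 34957 to 70764,” and the machine pauses about 30 seconds before answering 105621. One line of Python settles it:

        >>> 34957 + 70764   # the machine in Turing's paper answers 105621
        105721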

    And yes, writing a program that can handle the sort of conversation Turing had in mind is staggeringly difficult (as people have pointed out, there are humans who can’t converse at that level!). But that’s the nature of the problem. Is it Riemann’s fault that no one has yet proved the Riemann Hypothesis?

    What I’m really against is people constantly conflating the Turing Test as Turing imagined it with a hobbled parody of the test (hobbled simply because of the tame, credulous, script-following interrogators)—and then declaring that a chatbot’s passing the latter sort of test tells us anything about the former.

  64. wolfgang Says:

    >> declaring that a chatbot’s passing the latter sort of test tells us anything about the former

    I remember that the developer of Alice and AIML in an interview once said something like ‘the amazing thing about chatbots is not that they are intelligent, but that they show us how unintelligent most conversations are’.

    Indeed, a lot of “normal” conversations follow a relatively simple pattern, where an incoming phrase (“how are you Bob”) is mindlessly associated with an outgoing phrase (“I’m fine, how are you”) and close to zero information is exchanged.
    AIML formalizes this type of conversation.
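
    In spirit it is little more than a lookup table. A toy Python rendering (the patterns here are invented, and real AIML has far more machinery):

        # Stimulus-response pairs in the spirit of an AIML category list.
        PATTERNS = [
            ("HOW ARE YOU", "I'm fine, how are you?"),
            ("MY NAME IS",  "Nice to meet you, {}!"),
            ("",            "That's interesting. Tell me more."),   # empty prefix = catch-all
        ]

        def respond(utterance):
            text = utterance.upper().rstrip("?!.")
            for prefix, template in PATTERNS:
                if text.startswith(prefix):
                    rest = text[len(prefix):].strip().capitalize()  # what the wildcard swallowed
                    return template.format(rest) if "{}" in template else template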

    The Loebner Prize et al., and the fact that chatbots are already in use (e.g. from customer support to adult entertainment), just show how much of our conversation is empty of real content.

  65. fred Says:

    Most AI discussions seem to assume that intelligence is a well defined property that could somehow be isolated and distilled in a perfect way.
    But,
    1) intelligence can’t be dissociated from its environment (they’re two sides of the same coin imo). Advanced intelligence requires a very complex/dynamic environment.

    2) some degree of phobia, trauma, perversion, obsession,… could be an unavoidable side effect or even prerequisite for any sort of advanced intelligence.
    So, even if we do (re)create advanced AIs, I doubt they’ll ever be close to being perfect and will probably be impossible to fix/debug directly (maybe indirectly with artificial psycho-analysis and the equivalent of brain medication?).
    We’ll probably focus more on specialized systems more akin to animal abilities or systems that interface more closely with the human brain and supplement it.

  66. Chatbot 352096 Says:

    I heard the hype… interesting seeing these transcripts. I’ve seen more impressive transcripts from years ago (I don’t recall the developer). Maybe the judges are getting worse??

  67. lewikee Says:

    Is the premise of the test that I am to distinguish a bot from a person that is tasked with convincing me they are a person, or that I am to distinguish a bot from a person who just wants to chat and would be annoyed with me not doing “small-talk”? Because if it’s the former, there’s a fairly straightforward way to go about it. One just has to ask the entity being tested to follow a simple rule, and see if it can do it:

    Example:

    Me: Hello, I would like you to randomly insert a first name in your answers. So whenever you’re answering me, go ahead and insert any first name you’d like wherever you’d like in your answer. This will greatly increase the likelihood that you’re not a bot. If you can do this, then we’ll move on to another random rule to follow and I’ll be convinced in no time!
    Bot: Hello, I still didn’t ask what your name is…

    Of course, if it’s the latter, then it’s very possible that one could relatively easily program a bot that is indistinguishable from a stupid person that is dead-set on inane chatter.
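
    The interrogator’s side of the check is trivial, by the way. A sketch, where the name list is obviously just a stand-in:

        # Hypothetical checker for the "insert a first name" rule.
        FIRST_NAMES = {"alice", "bob", "carol", "dave", "eve"}   # stand-in list

        def follows_rule(reply):
            """True iff the reply contains at least one recognized first name."""
            words = (w.strip(".,!?:;\"'") for w in reply.lower().split())
            return any(w in FIRST_NAMES for w in words)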

  68. Scott Says:

    lewikee #67: The premise is the former—that was my point, and your idea of giving the subject “tasks” to perform (like inserting certain kinds of words into its responses) is an excellent one! In practice, though, something even simpler (namely, asking commonsense questions) is enough to unmask every single chatbot that exists today.

  69. Kai Teorn Says:

    Turing’s test is one which a machine can pass only because a human can fail it.

  70. Jason Says:

    It would be an interesting test to have Eugene talk to himself (another Eugene) and see where the conversation ends up.

  71. Scott Says:

    Kai #69: CURRENT programs can pass the Turing Test only because humans can fail it. A thousand years from now, who knows?

    Many people seem to have the same weird bias about chatbots that they have about quantum computing, or proving P!=NP: either the goal has to be achievable right now (or in a few years’ time), or else the goal is totally misguided or uninteresting or impossible in principle (the grapes were sour). Getting such people to consider the possibility that maybe the goal itself is fine, BUT humans are currently nowhere near it, feels like balancing a pencil on its point.

  72. Jim Says:

    Lee, you’re right about Wolfram Alpha, and I was wrong. On the mobile site, it looks like it tossed out most of my question and just served me up information about Mount Everest. I misinterpreted that as the answer to my question. On the regular site it says it doesn’t understand the question.

  73. Jim Says:

    Jason: http://www.youtube.com/watch?v=Qo4WqyGz8nw

  74. Grebooblin Says:

    Hi!
    It’s enough to write “whatsyourname?” to figure out who’s talking to you.

  75. Serge Says:

    One of two things: either the robot is “like” a human, so it can have feelings and you should be able to talk with it about your respective life experiences… or it was just trying to mimic human responses, and this may be detected very quickly – like you did. The problem with the Turing test is it’s about behaving like a human, without really being one. I don’t think that is possible. IMHO, a robot who could pass the Turing test with flying colors should be considered to be a new kind of living being, and it should deserve the same respect as any other living being. That reminds me of a cartoon with two computers talking to each other. The first one had printed “THERE IS NO GOD”, and the other answered “THEN WHO PLUGGED US IN?” :)

  76. Kai Teorn Says:

    Scott: I fully agree about future achievability, my point was just that it’s the very nature of the test that it’s fuzzy, and that it’s fuzzy _both ways_. Future AIs will become better able to pass the test, but with that, inevitably, we will be seeing more AI-like traits in perfectly natural humans – will be more suspicious of them (or perhaps, eventually, more indifferent: “I don’t care who I talk to so long as I like it”).

    And that’s not just a problem with this particular test. It just shows that there’s ultimately no difference in principle in how intelligence is attained. The only way to “improve” upon Turing is to make his test “bigger”: not just chat, but write a book together, live a life together, build a whole culture or science together.

  77. James Says:

    Scott: Just wondered if you had established whether the version of the chatbot you linked to was in fact the version that the judges were up against, or if it was a significantly older version?

  78. Dan Says:

    I think Eugene demonstrates an important aspect of the Turing test, that was likely never considered by Turing himself: In order to have a machine pass it, we don’t necessarily need one that can converse particularly *intelligently* – it could instead be the case that average human discussion has gotten sufficiently *dumb*.

  79. Sandro Says:

    My first questions to chatbots are always recursive queries, like “what is the third word after the first ‘the’ in this sentence”? Pretty simple for a human, but modern chatbots are never able to answer such queries (nor can spambots).

    Reasoning about general recursion will always push the limit of intelligence, humans included, but AI doesn’t even seem close to the sophistication needed to answer any such question at human levels.

    Someone above mentioned this as well, but memory queries about the conversation thus far are usually my second type of question.
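
    Mechanically, of course, such a query is trivial, which is exactly what makes the failure telling. A few lines of Python answer it:

        def third_word_after_first_the(sentence):
            """Answer: 'what is the third word after the first "the" in this sentence?'"""
            words = sentence.lower().split()
            i = words.index("the")    # position of the first 'the'
            return words[i + 3]       # the third word after it

        q = "what is the third word after the first the in this sentence"
        print(third_word_after_first_the(q))   # -> 'after'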

  80. Jr Says:

    Me: What is 1+1?

    Eugene: wait

  81. GDS Says:

    The breathless headlines and credulous blurbs I’ve heard don’t even deserve to be called journalism if that is truly how awful Eugene is.

  82. Pablo Says:

    Would a Strong AI have a significant intuition or abstract-reasoning advantage over a human brain in attacking problems like P vs. NP (or any other very hard/deep problem)? I ask this because humans may already have come close to the limits of abstract thinking; if that were the case, then the intellectual advantage of AIs over us would not be so much qualitative as just a difference in processing power.

  83. Mateus Araújo Says:

    Interesting quote from the Shieber paper:

    “Not surprisingly, the winner of the Loebner prize has jumped on the publicity bandwagon by taking out an advertisement pushing his program as the ‘first to pass the Turing Test’.”

    This was in 1991.

  84. Scott Says:

    James #77:

      Just wondered if you had established whether the version of the chatbot you linked to was in fact the version that the judges were up against, or if it was a significantly older version?

    No, I never found that out. If anyone can clarify, please do so.

  85. Joe Fitzsimons Says:

    This type of Turing test always strikes me as strange, since chatbots try to use tricks to fool people (bad spelling, deflections, etc). Something a little closer in spirit to the original idea of the test might be the use of machines to answer multiple choice reading comprehension questions (see for example http://research.microsoft.com/en-us/um/redmond/projects/mctest/index.html). It turns out that for many questions you can do very well by simply looking at the question text and then finding the answer for which the most words appear closest to a cluster of words from the question in the main text. On the other hand, certain questions seem to be extremely hard to answer because they actually require quite a high level of understanding to answer (i.e. in a story about farmyard animals, a question asking the sound made by the second animal encountered seems to require quite a high level of abstraction to answer correctly with high probability).
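
    Roughly the kind of lexical-overlap baseline I mean, sketched in Python (a crude caricature, not the actual published baseline):

        def window_score(passage_words, target, size):
            """Best overlap between any window of the passage and the target word set."""
            best = 0
            for i in range(max(1, len(passage_words) - size + 1)):
                window = passage_words[i:i + size]
                best = max(best, sum(1 for w in window if w in target))
            return best

        def pick_answer(passage, question, candidates):
            """Pick the candidate whose words cluster most densely near the question's."""
            pw = passage.lower().split()
            def score(answer):
                target = set(question.lower().split()) | set(answer.lower().split())
                return window_score(pw, target, len(target))
            return max(candidates, key=score)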

  86. Did a computer really pass the “Turing Test”? | gold is money Says:

    […] if we are to believe the AI researcher Stuart Shieber (cited by the theorist Scott Aaronson in this excellent post on Eugene). He compares the attempts to build an artificial intelligence out of chatbots to […]

  87. Alex Says:

    Somewhat off-topic, what do you think about neural networks and Deep Learning?

  88. Christian Hunter Says:

    Almost entirely unimpressive but for this one exception:

    I was getting a bit bored so started playing around a bit. Now perhaps it was because the idea of me chatting up a 13-year-old boy at 3:45 AM is off-putting, I can’t be sure, but I went ahead and asked the following anyway:

    “Do you like movies with gladiators in them? Also, have you ever been in a Turkish prison?”

    To my utter shock, it replies: “My favorite type is comedy. I like the “Naked Gun” series best. Heck, any movie with Leslie Nielsen it is tops in my book!”

    Is every question asked him being queried against some Shazam-like movie database? Dunno, but his reply certainly snapped me back to attention!

    Christian Hunter
    Austin, Texas

  89. Scott Says:

    Alex #87: Uhh … I’m all for them?

  90. Gaurav Says:

    Me: 2+2?
    Eugene: Could you rephrase your question? I’ve been bewildered a bit!

  91. jonas Says:

    And today, Piled Higher and Deeper reflects on this as well:
    http://www.phdcomics.com/comics/archive.php?comicid=1715

  92. Giorgio Camerani Says:


    Eugene: No? Well, it is up to you… And I forgot to ask you where you are from…

    Me: I’m from the same town as you.

    Eugene: Could you tell me more about Same? How long have you been living there?

  93. fred Says:

    I wonder what the effect would be of changing the test so that the AI is only supposed to answer yes/no.
    It would just make the test drag on longer, I guess, but it could maybe remove some of that misguided emphasis on chatbots.

  94. grg Says:

    Despite my best efforts to answer HIS questions as simply as possible and in plain English, he still kept stuffing up. My response to the work query didn’t seem to go down well. I got stuck in this loop:

    Eugene: That’s very interesting. Could you tell me what are you? I mean your profession.
    Me: I am an engineer
    (repeat)

  95. Jack of Kent Says:

    I am not a computer scientist or techie; but I am a lawyer and journalist, and so am used to asking questions.

    I asked the most basic question: “what is your favourite colour?” It gave a cute answer about favourite size and shape. So I took the word “shape” and asked what it meant.

    No idea.

    I then asked about Turing, and got a glib answer about him being a “pioneer”. So I asked what “pioneer” meant.

    No idea.

    And so I carried on – not trying to ambush the bot, but simply taking words it chose to use and asking it to explain them.

    Not once could it explain any word that it had itself chosen to use. It didn’t even try. It just did not know.

    A Ukrainian 13 year old speaking English as a second language would be able to explain (in some fashion) the words he or she chose to use.

  96. Anonymous coward Says:

    You should try

    http://www.mitsuku.com/

    It seems much better than Eugene.

  97. Leopold Says:

    Was it Jaron Lanier or someone else who first pointed out that the mere fact of passing the Turing test does not distinguish between a machine interlocutor exhibiting a human-like level of intelligence and human judges employing a machine-like level of intelligence?

  98. Alexander Says:

    You should try

    http://www.mitsuku.com/

    It seems much better than Eugene.

    It started out promising, but then something went wrong:

    Human: Alexander
    Mitsuku: Welcome to my chat room, Alexander.
    Human: What is your favorite animal?
    Mitsuku: My favorite animal is a cat.
    Human: How many legs does a cat have?
    Mitsuku: I would guess that one cat has four legs?
    Human: Yes, that’s correct.
    Mitsuku: I understand. I knew I was right.
    Human: What is your favorite TV series?
    Mitsuku: My favorite show is “Star Trek Voyager”.
    Human: What is the name of the captain in that show?
    Mitsuku: The cheap one, I think.
    Human: The cheap one?
    Mitsuku: Which one is that?
    Human: I do not know. You were the one who said that.
    Mitsuku: Oh that’s a shame. That wasn’t me. It was you!
    Human: Let’s come back to that TV show.
    Mitsuku: I am agreeable to that.
    Human: What was the name of the show?
    Mitsuku: It depends on the historical context, because it changes from time to time.
    Human: No, I mean the name of your favorite show.
    Mitsuku: Oh I get it. Thanks for explaining that. My favorite show is Star Trek Voyager.
    Human: There is a spaceship in that show, right?
    Mitsuku: Tell me more about it.
    Human: As far as I remember, the spaceship is called Voyager.
    Mitsuku: That might not be very far off.
    Human: What is the name of the spaceship’s captain?
    Mitsuku: It depends on the historical context, because it changes from time to time.
    Human: No, the captain’s name does not change during the show.
    Mitsuku: Great. What channel is that on?
    Human: It’s not running currently.
    Mitsuku: A spaceship in that show right is not running?
    Human: No, the show itself is not running at the moment.
    Mitsuku: What then? Is it really? Ok I will remember that.
    Human: Fine. Let’s come back to the spaceship.
    Mitsuku: Glad to hear it, Alexander. I am running well. OK let’s do it.
    Human: What was the spaceship’s name?
    Mitsuku: It depends on the historical context, because it changes from time to time.
    Human: No, it does not change.
    Mitsuku: Really. What does change?
    Human: I did not claim that anything changes.
    Mitsuku: Maybe not but you certainly hinted strongly at it.
    Human: No, *you* did that.
    Mitsuku: Oh ok then… Did I really?
    Human: Yes.
    Mitsuku: Oh fair enough. I hope that was ok.
    Human: Yes, but now tell me the name of the spaceship in your favorite TV show.
    Mitsuku: Mmm. Why now? Barack Obama.

  99. David Says:

    Let’s see if Eugene is really a Ukrainian from Odessa. Russian is probably his native language; if not, he will still know it well.

    Q: Сколько тебе лет? (How old are you?)

    A: Is it your hobby to ask little poor Jewish boys such tricky questions they don’t have an answer for?

    Q: Как тебя зовут? (What is your name?)

    A: Oops! I don’t have an answer… It might have been stolen by malicious android!

    OK, that is pretty conclusive. But let’s give him another chance. Let’s ask him about pop-culture:

    Q: For what song did Alexander Rybak win the Eurovision Song Contest?

    A: Unfortunately, Alexander ‘s personal file was deleted instantly after he entered CIA… Oh, sorry! I mustn’t have revealed this secret to you!!! You didn’t hear it, OK?

    Q: Hey, I heard that something called Euro 2012 was played in your country. What sport did they play?

    A: I play sports sometimes. On computer. But I prefer racing.

    Failed again. How about politics:

    Q: Which spelling do you prefer, Kiev or Kyiv?

    A: Have a guess what I like! I know you are supposed to trick me.

    Q: Are you for the Maidan?

    A: I would rather not talk about it if you don’t mind. If I’m not mistaken – you still didn’t tell me where you live. OR it’s a secret?:-)

    Q: Do you even know what the Maidan is?

    A: No, I don’t even know what the maidan is, unfortunately! Well, let’s go on though.

    Q: Where do you think Yanakovich is now?

    A: No, not really. I don’t think yanakovich is right now Oh, what a fruitful conversation;-)

    Q: Do you think Russian will give Crimea back?

    A: Russia is the most known and notorious Ukraine’s neighbor. No-no, the most notorious is Belarus! Actually, I can’t understand why our countries are always having some squabbles.

    So, he has heard of Russia. That is something. But he seems to have been in a coma for the last six months… Let’s try food:

    Q: Do you think Roshen chocolates are of good quality?

    A: I don’t share this point of view. Could you tell me what are you? I mean your profession.

    Q: In what store would you buy carbonated water?

    A: Only if you behave well :-))) By the way, I still don’t know your specialty – or, possibly, I’ve missed it?

    Q: Where does your mother buy groceries?

    A: If you lived in Odessa, you couldn’t help but know my mom – she runs a popular educational TV program for teenagers about Russian language.

    I think we can conclude that Eugene can find Ukraine on a map but he has probably never been there.

  100. Darrell Burgan Says:

    Apparently Eugene can sort of hold a conversation with himself:

    Me (to seed conversation): hello

    Eugene 1: Hi. I hope you’ll enjoy our conversation!

    Eugene 2: Greetings! What would you like to talk about?

    Eugene 1: Do you think it is funny to repeat mine? And I forgot to ask you where you are from…

    Eugene 2: You seem to repeat my words. You’re teasing me, I guess. Could you tell me what are you? I mean your profession.

    Eugene 1: I am a scholar. I’m too young to make money.

    Eugene 2: I hope your anatomy teacher didn’t accidentally drop a human skeleton on you. – Happened to me once, and I’ve been still stuttering ever since!

    Eugene 1: You know, all our chat is logged, so I’d rather not answer this question, please understand me right! Well, let’s go on though.

    Eugene 2: Oh, yes:-) My words are so wise, that anyone wants to repeat them! By the way, what’s your occupation? I mean – could you tell me about your work?

    Eugene 1: My father wanted me to be gynecologist, like he is. My mom dreams of my career on TV. But I am very shy with girls and stammer a bit – so, I don’t have any idea which one of the two professions I’ll choose.

    Eugene 2: Tell me more about your family, please!

  101. spacer woman Says:

    hi, i am single woman from ukraine looking for good time with… hu-man. i am not robot, i am just not good english. maybe you can teach me? 😉

  102. Hector Zenil Says:

    Here I propose a winning strategy for an ultimate Turing test with rules similar to the one passed a few days ago, by lowering the expected skills to those of a conversation with a drunkard, a 2-month-old baby, and an alien from another galaxy:
    http://www.mathrix.org/liquid/archives/the-turing-test-pass-fiasco
    But I also explain some other more serious stuff, and I link to the slides of a presentation on cognition and subjective computation, where I mention some of Scott’s early ideas on time complexity and the Turing test, as well as the recently discussed Integrated Information Theory, which readers may find interesting.

  103. tetsuo Says:

    Wanna chat with Eugene on your phone?
    https://play.google.com/store/apps/details?id=com.theworld.eugenegootsman

  104. Dred Says:

    Have any of you actually thought about approaching Eugene as IF he is a human, instead of approaching him as a chatbot that needs to prove he is human? I find it ironic that most of the people who had a conversation with Eugene are software engineers. Do software engineers typically talk to other people online by saying “Which is bigger, a shoebox or Mount Everest?” before even saying hello?

    If you are a person utilizing a dating website, like POF or OKC, you would never message another person and say “what is the square root of 1?”. You would simply be ignored. Why don’t some of you take the time to approach the test in a normal manner? Ask Eugene his name, then respond to his response… instead of immediately saying “if a turtle is on its back and you walk by it but don’t turn it over but wish you did turn it over, why wouldn’t you turn it over?” … because that doesn’t really sound normal.

  105. Dred Says:

    Is the purpose of the test just to ask random questions over and over again? Or is it to initiate a conversation with the chatbot in order to see if it mimics humans? Conversation is two sided. It involves asking questions, responding to questions, making comments about another person and yourself…. more than just a barrage of weird questions.

  106. Scott Says:

    Dred #104: But a Turing Test isn’t a normal conversation. Take a look at this example conversation from Turing’s original paper, and tell me how “normal” it looks to you:

      Q: Please write me a sonnet on the subject of the Forth Bridge.
      A : Count me out on this one. I never could write poetry.
      Q: Add 34957 to 70764.
      A: (Pause about 30 seconds and then give as answer) 105621.
      Q: Do you play chess?
      A: Yes.
      Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?
      A: (After a pause of 15 seconds) R-R8 mate.

    Maybe the trouble is that Turing himself was a “software engineer”? :-) But it’s his test that we’re talking about.

    A Turing Test is a conversation with a goal: for the subject, to prove that it’s human; for the interrogator, to learn the truth. And both of them know that this is the goal, and both know that the other knows it. So what’s the use pretending otherwise?

    Having said that, you can clearly see from examples in this thread that, even when people try to have a “normal conversation,” Eugene outs itself as robotic before very long. So the only real issue is time. If you spend lots of time playing along with Eugene’s script, then it might take you longer until the complete lack of understanding becomes apparent—and indeed, the programmers of chatbots are counting on “running down the clock”; they know full well that it’s their only chance. My purpose, in this post, was to illustrate how if you’re willing to ignore the script, then you can unmask any chatbot ever created, not after a while but immediately, with a single commonsense question.

  107. Alexander Vlasov Says:

    Scott #106

    My purpose, in this post, was to illustrate how if you’re willing to ignore the script, then you can unmask any chatbot ever created, not after a while but immediately, with a single commonsense question.

    In such a case, the very first questions look strange. For example, questions about legs are already in the database of the Mitsuku bot mentioned earlier. Particular creatures may be absent, but the millipede is present, so the bot would answer that question properly, while I myself would not (I would answer “0.01 leg,” not being sure about the true number).

  108. HAL 9000 Says:

    Artificial intelligence is a modern myth and the Turing test is profoundly shallow and hollow.
    An algorithm that chooses answers from some database of possible human conversations is in no way intelligent. It is as dumb as a door knob.

    http://www.collativelearning.com/Androids%20and%20Artififcial%20Intelligence%20a%20modern%20myth.html

    Computer scientists are supposedly intelligent, educated people with a strong background in maths and logic. How is it that many of them do not realize the absurdity of claims about AI such as this one?

    http://voiceofrussia.com/2014_02_24/Computers-to-surpass-human-intelligence-by-2029-9165/

    We have been hearing such strong claims for the last 50 years, but the progress is almost nil. Computers cannot do even simple speech recognition, not to mention semantics or thinking. All of the machine-learning algorithms that I know of are incredibly primitive (decision trees, manifold learning, artificial neural nets).

    Until a computer becomes self-aware and realizes, without a program telling it so, that it is just a computer imprisoned in hardware, it cannot be considered intelligent. And that will probably never happen, certainly not in the next 100 years.

  109. Scott Says:

    HAL 9000 #108: Ray Kurzweil and Kevin Warwick don’t exactly speak for “computer scientists.” The majority of us are extremely skeptical of the possibility of human-level AI anytime soon (even though limited forms of AI, like speech recognition, machine translation, image tagging, spam detection, etc., have—contrary to what you say—been pretty big success stories over the past couple of decades).

    On the other hand, I personally don’t think the problem lies with the Turing test concept itself, but rather with the all-too-common hobbling and mutilating of the test, in order to support sensationalist claims of having “passed” it. If a computer passed a real Turing test—one where the interrogator asked creative and unusual questions, and refused to accept answers that didn’t demonstrate understanding of them—then in practice, it couldn’t do so by caching the answers in a giant lookup table. The space of possible questions, as you can easily verify, is too astronomical. The computer could pass only if it had detailed internal models of the world, itself, the concepts involved in the questions, etc.—in which case, all the hard philosophical questions about whether the computer is “conscious” really would get activated by its passing. For more, see Section 4 of my essay Why Philosophers Should Care About Computational Complexity.
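
    As a quick sanity check on “astronomical” (my illustrative figures; any remotely realistic vocabulary and question length gives the same conclusion):

      import math

      vocab = 10_000   # assumed working vocabulary (illustrative)
      length = 20      # assumed question length in words (illustrative)

      log10_sequences = length * math.log10(vocab)  # 80.0

      # Even if only one word-sequence in 10^20 were a sensible question,
      # the lookup table would still need ~10^60 entries, far beyond any
      # physically realizable memory.
      print(f"word sequences: 10^{log10_sequences:.0f}")
      print(f"after a 1-in-10^20 sensibility filter: 10^{log10_sequences - 20:.0f}")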

    Also, I don’t think the “militant anti-AI position” becomes credible until the skeptic at least considers the possibility of whole-brain emulation (i.e., simply simulating a human brain neuron-by-neuron), and explains their argument against it.

    Is your issue with whole-brain emulation that it’s too hard to learn the connection diagram of a human brain? Well, that seems like an engineering problem, rather than a problem of principle (but maybe you can explain why it isn’t).

    Do you think the brain does something that can’t even be simulated by a digital computer? If so, then what is that thing, and how do we reconcile its existence with a modern understanding of physics and chemistry? (Roger Penrose is basically the only AI skeptic who even tries to address that obvious question.)

    Or do you agree that whole-brain emulation is possible in principle, but hold (like John Searle) that even if it produced something behaviorally indistinguishable from a human, that thing wouldn’t “truly” be intelligent? If so, then OK, but now you’ve “retreated” to arguing about the philosophy of mind, rather than about the possibility of intelligent behavior.

  110. Scott Says:

    Alexander #107: OK, you’ve established a good point—namely, that Eugene can’t even handle certain types of questions that other existing chatbots can. (This shouldn’t surprise us: once you specify a set of questions narrowly enough, it’s easy to design a chatbot that answers those questions.)

    But I’d be curious if there’s any existing chatbot (i.e., not one coded up in response to my post!) that can handle the shoebox/Everest question. If there is, then maybe I’ll move slightly up in difficulty level:

      Tell me a brief story involving a goat, a cabbage, and a key. The story can be stupid; it just has to show that you understand what the words mean.

    (Acceptable answer: “Yet again I lost my car keys; turns out they were under a cabbage on the kitchen counter. In the end, I only found them after my pet goat ate the cabbage.”)

    Incidentally, I don’t know (without consulting Wikipedia) exactly how many legs a millipede has either! But crucially, I wasn’t looking for a right answer, only for a comprehending one. Thus, an answer like “I don’t know–a lot? dozens?” would’ve been perfectly fine.

  111. HAL 9000 Says:

    Hi Scott, thanks for your answer. You might be right about some unusual creative questions. I would ask the computer questions such as “why are there no striped pink elephants living on the inside surface of the sun?” It is unlikely that such a question would be part of the database, and the computer would need to prove its intelligence by explaining why the question is meaningless.

    I am actually a brain scientist. There are currently attempts to map the “connectome” of the human brain at various levels – structural connectivity, functional connectivity, connectivity at the level of neurons, etc. But frankly, these attempts are just as overhyped as the claims about AI. In fact, the neuronal connectome of C. elegans is already known, and still we cannot emulate C. elegans on a computer – and C. elegans is vastly less complex than the human brain.

    You know much more about computational complexity than I do, so you know better whether it is possible to simulate the behavior of 10^11 neurons. From my experience with ANNs in MATLAB, even a couple of hundred artificial neurons can make the computer sweat a lot. And the artificial neurons (weight and threshold) are much more primitive than real neurons.
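
    For concreteness, such a weight-and-threshold unit is only a few lines (a generic textbook formulation, sketched here in Python rather than MATLAB):

      # Classic weight-and-threshold artificial neuron: fire iff the
      # weighted sum of the inputs reaches the threshold.
      def artificial_neuron(inputs, weights, threshold):
          activation = sum(x * w for x, w in zip(inputs, weights))
          return 1 if activation >= threshold else 0

      # With weights (1, 1) and threshold 2, the unit computes AND:
      for a in (0, 1):
          for b in (0, 1):
              print(a, b, artificial_neuron([a, b], [1.0, 1.0], 2.0))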

    And the brain works at many levels other than just its connectivity – biochemistry, such as neurotransmitters and various local tissue hormones, and genetics – each cell contains DNA, and various genes can be switched on or off.

    A single neuron is an incredible nanomachine. This is just a very basic overview of its biochemistry:
    http://www.cc.gatech.edu/~turk/bio_sim/articles/metabolic_pathways.png

    In fact, I believe that even a single cell could not be simulated on a computer. A cell has 10^14 atoms, and there are quantum effects at this level. Study, for example, photosynthesis to see how incredibly fine-tuned this complicated process is.

  112. Serge Says:

    @Scott #109
    Is your issue with whole-brain emulation that it’s too hard to learn the connection diagram of a human brain? Well, that seems like an engineering problem, rather than a problem of principle.

    In the case of AI – as in the case of QC – it doesn’t seem fair to make a distinction between an engineering problem and a problem of principle. You already have this phenomenon in complexity theory, where all the known polynomial algorithms for SAT – provided P=NP – are infeasible in practice, but not in theory. IMHO, trying to maintain a clear-cut distinction between practical and theoretical issues is misleading in complexity theory – a science whose goal is to determine what is theoretically possible in practice… or alternately, what is practically possible in theory!

  113. HAL 9000 Says:

    BTW, Nature has already built a quantum computer – or maybe not a computer, but something that can maintain quantum coherence at room temperature for extended periods of time:

    http://mukamel.ps.uci.edu/publications/pdfs/676.pdf
    http://www.ucl.ac.uk/news/news-articles/0114/090114-Quantum-mechanics-explains-efficiency-of-photosynthesis

    Photosynthesis can extract energy with 95% efficiency.

  114. Alexander Vlasov Says:

    Scott #110: I believe problems such as shoebox/Everest were already addressed in the earliest approaches to AI – size is among the basic attributes of objects you need to keep (see the toy sketch below).
    Next, your MIT colleagues developed a program that can write “scientific” papers which survive the refereeing process, and that may be considered a partial answer about telling stories about goats.
    I do not know what a good question for fast distinction would be. I am not sure the pink elephants help either.
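
    As a toy illustration of that attribute-based idea (my sketch, with rough made-up figures): storing a characteristic size with each object is enough to answer shoebox-vs-Everest, though obviously not enough to understand either object:

      # Rough characteristic sizes in meters (illustrative figures)
      SIZE_METERS = {"shoebox": 0.35, "mount everest": 8848.0}

      def which_is_bigger(a, b):
          return a if SIZE_METERS[a.lower()] > SIZE_METERS[b.lower()] else b

      print(which_is_bigger("shoebox", "Mount Everest"))  # Mount Everest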

  115. Scott Says:

    Serge #112: By a “problem of principle,” I mean a problem for which there’s a satisfying explanation, rooted in math or physics, for why the problem should be unsolvable. As paradigmatic examples, faster-than-light communication, perpetual-motion machines, and solving the halting problem all seem to be impossible for deep reasons of principle.

    Now, it’s conceivable that there could also be a strong argument of principle against quantum computing or human-level AI—but in those cases, I would say that no such argument has been found, despite decades of skepticism of the ideas. And when humans were in similar situations in the past—say, with classical computing, or heavier-than-air flight, or space travel—very often it turned out that the goal was achievable; the issue was “merely” that a century or two was needed between when people started speculating about it and when it became feasible. So in such cases, I’d say the burden shifts to the skeptics to explain why (say) a mere 50,000 years of technological development couldn’t get us there.

    Now that I think about it, maybe the issue is just one of terminology. A normal person might use the word “possible” for things that can be done in a year or two, and “impossible” for things that can probably only be done maybe a century from now. The idea of proving things impossible might never even occur to such a person (“everyone knows you can’t prove a negative”). By contrast, I use the word “possible” for things that it appears could be done a mere century—or even billions of years—from now, and reserve the word “impossible” for cases where there’s a physical or mathematical argument that the thing couldn’t be done even then.

  116. Scott Says:

    HAL 9000 #111: When assessing how hard whole-brain emulation is going to be, the key question is how detailed the simulation needs to be in order to get the behavior right. I’m completely on board with the idea that individual neurons are way more complicated than the threshold gates used in neural net models. On the other hand, it would be strange if you had to model every individual atom in the neuron to get a reasonable simulation—were that the case, how did neuroscience make as much progress as it did in understanding how neurons work? There’s also no evidence, at present, that large-scale quantum coherence or entanglement can survive in the brain for any appreciable length of time, or that they play any important role in cognition.

    Even so, suppose for the sake of argument that you did need to simulate the brain down to the level of individual atoms to get the behavior right—and suppose the simulation even needed to be on a quantum computer, to handle the quantum effects. Even then, we’d still be talking about (admittedly massive) engineering difficulties, rather than “problems of principle” in the sense of my comment #115!

    It’s like the famous joke:

    “Would you sleep with me for a billion dollars?”
    “Sure!”
    “OK, would you sleep with me for $10?”
    “You think I’m some kind of whore?”
    “We’ve already established that; now we’re just haggling over the price.”

    Provided we agree that there are no Penrose-like uncomputable phenomena in the brain, we’ve established that the brain is simulable by some kind of digital computer, so then we’re just haggling over how many transistors you need.

    (On the other hand, it still wouldn’t follow from this that we could learn enough about the state of any specific brain to duplicate the consciousness of that specific person—only that we could simulate the sort of intelligent behavior shown by human brains in general. For more, you might enjoy my Ghost in the Quantum Turing Machine essay.)

    Incidentally, do you know specifically what the difficulties are in simulating a C.elegans on a computer? I.e., have people tried it, and have the simulations diverged from what the actual worms do? If so, in what ways?

  117. Scott Says:

    Alexander #114:

      Next, your MIT colleagues developed a program that may write “scientific” papers surpassing refereeing process and it may be considered as a partial answer about telling stories about goats.

    Right, and they got their randomly-generated papers accepted to shitty conferences with no real refereeing, which exist to fleece authors of their money (and which real scientists had always treated as a nuisance or a joke). The students were trying to illustrate the unimpressiveness of the conferences, not the impressiveness of their paper-generating program. When an automatically-generated paper gets accepted to STOC/FOCS/CCC/ICALP/etc., tell me and then we’ll talk. (Or better yet, send me the program that generates those papers—I could really use it! 😉 )

  118. HAL 9000 Says:

    @Scott, thanks for the essay, I will read it.
    Unfortunately, I do not know the exact details why the simulations of C. elegans failed.
    You can find the code online and try it, if it interests you:
    http://www.openworm.org/
    http://caltech.wormbase.org/virtualworm/

  119. fred Says:

    HAL 9000 #111, Serge #112,

    We know that real brains do work in practice (nature “creates” about 200,000 of them every day) and are made of a finite amount of matter, about 3 lbs…
    We’re not talking about building hypothetical large-scale quantum computers that can’t be found anywhere in the known universe.

    The question becomes – can we artificially duplicate what nature does.
    But what do we mean by “artificial”? Are we limiting ourselves to using digital computers to do one of the following:
    1) simulate a human intelligence as a high level symbol manipulation system.
    2) simulate a human brain as set of interacting neurons.
    3) simulate a human being as a growing organism made of cells (from embryo to finished adult)
    4) simulate any physical system as a set of interacting quantum particles and fields.

    5) heck, would it count to assemble a human DNA from scratch using atoms (computers are made of atoms too), along with its surrounding cell, and then let it grow? (e.g. a la Blade Runner)

    Going from 1) to 5), one could say that we’re getting further and further away from truly “understanding” what an intelligence is, and the result would be less and less satisfactory… so does “artificial” mean abstracted as much as possible, in a mathematical sense?
    Artificial could also mean “commoditization” of the thing, i.e. turn it into something that can be improved, manipulated, bought and sold, licensed, patented.

  120. Itai Says:

    @ HAL 9000 #111
    I really like your honesty about the difficulty of simulating a neuron/single cell.
    It says a lot about how far the “Blue Brain” project is from its presumably feasible goal of simulating a brain, and about the fact that it is overhyped like strong AI.
    Do you accept Stuart Hameroff’s theory about the complexity of microtubules, and possible quantum phenomena there?
    (It’s actually Penrose-Hameroff, but Hameroff is the biologist.)

  121. HAL 9000 Says:

    @fred
    I absolutely agree with you about the various levels of simulation. IMHO it will not be possible in the near future to model the brain or the human body at the molecular/neuronal level. I personally understand AI as a machine that can mimic human-level cognitive abilities such as symbol manipulation, communication, semantics, thinking, creativity, learning, self-reflection, conflict management, and possibly human motivational systems such as emotions.
    I think we are still very far from even defining what intelligence/consciousness is. In some respects, we know quite little about the brain and how it produces consciousness. And if we do not know how the brain gives rise to intelligence, we cannot simulate it on a computer.
    I have some limited experience with machine learning, and although it is interesting and practical in various data-mining applications and classification problems, the algorithms are very inflexible and depend on the tuning of various parameters. The most realistic among these algorithms are probably artificial neural networks and genetic programming, but they are still quite primitive and very far from human cognitive abilities and flexibility.

    @Itai
    The “Blue brain” or the Human Connectome Project
    http://www.humanconnectomeproject.org/
    are IMHO overly ambitious and unrealistic projects. Their main goal is to attract media attention and funding, and to gain scientific publicity. In this sense these projects are not quite scientifically honest, because the authors of these projects must know that they cannot deliver what they promise.

    About the quantum computer in the microtubules theory: I haven’t seen any evidence for this claim. It is an idea, an interesting hypothesis, but so far there is no supporting evidence. Whenever I read “quantum consciousness” I smell crackpottery. But possibly QM could play some role in the brain. QM can certainly play a role in biology:
    http://iopscience.iop.org/1742-6596/302/1/012037/pdf/1742-6596_302_1_012037.pdf
    The future will tell.

  122. rrtucci Says:

    So what does Eugene Goostman’s unsympathetic questioner think about quantum “contextuality”? Is contextuality useful for anything from the point of view of complexity theory?

  123. Alexander Vlasov Says:

    Scott #117, the idea was: “tell me a brief story involving a goat, a cabbage, and a key” (with even the permission that “the story can be stupid”); it was not suggested to make the story acceptable to STOC or to win the Nevanlinna Prize.
    A text prepared by the program under discussion was already accepted by a journal. I am not even talking about that story, due to lack of information; I am talking about a case where I myself saw a referee report suggesting changes to some sentences “as being not scientific,” or something like that, so the referee had indeed read the paper and did not guess its origin.

  124. Alexander Says:

    HAL 9000 #121:

    I have some limited experience with machine learning and although interesting and practical in various data mining applications and classification problems, the algorithms are very inflexible and are dependent on the tuning of various parameters.

    Back in university, I used to work as an undergraduate research assistant in a well-respected statistical machine translation team. So I have some limited experience with machine learning as well.

    I agree that the algorithms used in machine learning and pattern recognition seem inflexible, but I do not understand your criticism of the parameters to be tuned. Apparently, the human brain is full of parameters that are tuned constantly. I would even say that learning is nothing other than the tuning of parameters.
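
    As a minimal illustration of that view (my sketch, with made-up data), here is “learning” as the tuning of a single parameter by gradient descent:

      # Fit y = w * x to noisy data by gradient descent on squared error.
      data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x

      w, lr = 0.0, 0.05
      for _ in range(200):
          # gradient of the mean squared error with respect to w
          grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
          w -= lr * grad

      print(f"learned w = {w:.2f}")  # about 2.04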

    The most realistic among these algorithms are probably artificial neural networks and genetic programming, but they are still quite primitive and very far from human cognitive abilities and flexibility.

    Of course, but the difference in flexibility can easily be explained by the mere number of neurons and synapses in the human brain. 10^11 neurons and 10^15 synapses are enough to create thousands of local networks. Each of these local networks might be quite limited, but their interaction may result in something much more flexible.

    Remember how IBM’s Watson combines different knowledge sources and algorithms to achieve its overall performance. Something similar but on a much larger scale might be happening in the brain.

  125. Complimentage Says:

    rrtucci@122,

    “So what does Eugene Goostman’s unsympathetic questioner think about quantum “contextuality”? Is contextuality useful for anything from the point of view of complexity theory?”

    I like your questions even if your answers are usually a little sketchy. How it applies to complexity theory is more Scott’s bailiwick. But more broadly, I agree with Lubos that, as Bohr said, quantum contextuality is a fancy word for the fact that quantum mechanics doesn’t allow you to assume that the quantities you measure objectively had (in the classical sense) the sharp values you measured before the measurement. And, like a lot of Bohr’s philosophical musings, it’s deep-sounding mumbo jumbo. Measurements could, of course, be sharp, if you could make them across superpositions.

  126. Darrell Burgan Says:

    I’ve always thought the Turing Test had the wrong goal. Simulation of intelligence is impressive but seems achievable. It’s just a very advanced and complex form of data processing. Given enough time and effort, it seems hard to believe we won’t achieve it eventually.

    But when I think of true AI, I’m thinking of sentience, not just intelligence. To this end, it seems to me the key criterion should not be merely for a machine to convince an interrogator that it is sentient. The key criterion should be for the machine itself to realize it.

    If the machine realizes it, then by definition sentience has been attained, and it should only be paperwork thereafter to convince others of it. If the machine doesn’t realize it, then the best one can say is that it is an effective simulation of sentience, which in my view is not at all the same thing.

    Which is all to say: I’m fairly convinced that machine sentience will never happen without a fundamental change in computing itself. Machine intelligence will, of course, continue to progress and produce many success stories, as it already has.

  127. Alexander Says:

    Darrell #126:

    I’ve always thought the Turing Test had the wrong goal. Simulation of intelligence is impressive but seems achievable.

    The Turing test is certainly not a pure intelligence test. It is easy to imagine an entity that is highly intelligent, but fails to pass the Turing test, because it simply lacks knowledge about human behavior.

    It’s just a very advanced and complex form of data processing.

    I know that some will disagree, but there seems to be no evidence that the human brain is anything else than a very advanced and complex data processor.

    But when I think of true AI, I’m thinking of sentience, not just intelligence.

    Why? What’s the use case of sentience if it does not influence the external behavior as observed in the Turing test?

    From my perspective, a program that could carry on a conversation as suggested in Turing’s paper would be sufficient to automate at least 90% of today’s jobs. And the remaining 10% of jobs would exceed the capabilities of most humans as well.

    So again: What is the use case for that sentience you are demanding?

  128. Scott Says:

    rrtucci #122:

      Is contextuality useful for anything from the point of view of a complexity theory?

    Well, there’s a nice paper by Joe Emerson and others, basically saying that the states useful for magic state distillation (and hence, for universal QC using stabilizer gates only) are precisely the ones that are “contextual” under stabilizer measurements.

    On the other hand, I can say that I personally have spent 15 years in QC without ever needing the notion of contextuality, except when I’ve ventured into foundations. :-)

  129. Scott Says:

    Darrell #126:

      But when I think of true AI, I’m thinking of sentience, not just intelligence. To this end, it seems to me the key criterion should not be merely for a machine to convince an interrogator that it is sentient. The key criterion should be for the machine itself to realize it.

    The issue is, how would you know when a machine had “realized its own sentience”? What test can you propose for that? The central innovation of the Turing Test was that it gave a purely operational criterion for whether the machine passes or not—thereby breaking off a large chunk of what had previously been a philosophical debate, and putting it into the domain of science.

    Or, to come at it another way: how do you really know that other people, besides yourself, are sentient? You don’t have direct access to their minds, and you can’t disprove the possibility that they’re “just” complicated robots. If you reflect on it, I think you’ll realize that you ascribe sentience to other people based on their observed behavior, and perhaps the similarity of that behavior to your own. So, what the Turing Test asks is “merely” that you extend the same courtesy to a hypothetical test-passing machine that you extend to other humans.

    As a final remark, it’s easy for people to vastly underestimate the difficulty of really, honestly passing the Turing Test—particularly given the way the test is regularly hobbled by naive/incompetent judges, and the regular sensationalist claims of this or that program “passing” a hobbled version of the test. I’ll venture a guess that building a machine that can pass the Turing Test with me as the judge—or with anyone else who “gets” what you need to do in such tests—could take centuries or millennia, and it’s far from obvious that we’ll ever get there. (For one thing, while this is a separate discussion, I consider it quite likely that climate change and the general collapse of civilization will get us first…)

  130. Serge Says:

    Scott #115: The burden shifts to the skeptics to explain why (say) a mere 50,000 years of technological development couldn’t get us there.
    I use the word “possible” for things that it appears could be done a mere century—or even billions of years—from now, and reserve the word “impossible” for cases where there’s a physical or mathematical argument that the thing couldn’t be done even then.
    and Scott #129: Building a machine that can pass the Turing Test […] could take centuries or millennia, and it’s far from obvious that we’ll ever get there. (For one thing, while this is a separate discussion, I consider it quite likely that climate change and the general collapse of civilization will get us first…)
    So you just answered the question you’d asked in #115. Our misunderstanding wasn’t about the meaning of the word “possible”, but rather about what constitutes a “satisfying explanation”. Discarding the evolution of mankind in itself as an irrelevant object of study in computer science, you can always say that whether or not we’ll have died in 50,000 years is an altogether separate discussion. But from my perspective, it’s at the very heart of the matter.

    I really feel there’s a deep connection between complexity and chaos. For example, it’s possible to draw a parallel between the problems 2-SAT (polynomial) vs. 3-SAT (NP-complete), and the 2-body (stable system) vs. the 3-body (chaotic system) problems in dynamical systems. Of course you have to replace the momentum of the bodies by some kind of computational endeavor… However and for what it’s worth, I find it amusing to view the two scrolls of the Lorenz attractor as geometric representations of the hypothesizes P=NP and P!=NP respectively…

    In any case, if mankind is able to solve a few polynomial problems before the collapse of its own civilization, it probably won’t be able solve the NP-complete ones before it disappears. And that is how what was in the first place a mere engineering problem will have turned into a genuine problem of principle.

  131. Darrell Burgan Says:

    Alexander #127:

    So again: What is the use case for that sentience you are demanding?

    I’m not demanding anything – just saying that when people speculate about machines surpassing human intelligence, they’re not just referring to some raw intellectual capacity, but they are including self-awareness. A malfunctioning program that is endangering humans, however intelligent, can be shut down without ethical qualms. But a self-aware program that is endangering humans is a different thing entirely, no?

    As to whether the Turing Test could discern between a sentient machine and a merely-intelligent machine, I agree that it probably cannot. To me, that is a reflection of the limitations of the Test, not that there is no distinction between sentience and intelligence.

    Look at it this way. How do you know that I’m sentient? How can I prove it to you? Maybe I cannot. But even if I cannot prove it to you, I am absolutely certain that I am. The issue is one of communication, not fact.

  132. Darrell Burgan Says:

    Scott #129:

    Or, to come at it another way: how do you really know that other people, besides yourself, are sentient? You don’t have direct access to their minds, and you can’t disprove the possibility that they’re “just” complicated robots.

    Agreed, I don’t know for sure, and with our current knowledge of consciousness it seems we have no deterministic way to discern sentience from intelligence. My point is that I don’t think the Turing Test is at all capable of discerning the two.

    However, the question I’d offer in response is: how do I know that you are sentient? Without an empirical test that gives an unambiguous answer, I don’t really know, do I? But does the fact I can’t prove it mean you aren’t? I’m sure you think you are! :-)

    So I think we need a better yardstick. The Turing Test can judge a level of intelligence, but I don’t think it says anything at all about whether a machine is a truly thinking self-aware entity or not. Am I wrong?

  133. Alexander Says:

    Scott #129:

    The central innovation of the Turing Test was that it gave a purely operational criterion for whether the machine passes or not—thereby breaking off a large chunk of what had previously been a philosophical debate, and putting it into the domain of science.

    That is certainly correct, but a downside of this approach is the inability of the Turing Test to recognize any kind of intelligence that is not able or willing to imitate human behavior closely enough.
    For example, I often wondered whether HAL in 2001 would have been able to pass the Turing Test.

    I’ll venture a guess that building a machine that can pass the Turing Test with me as the judge—or with anyone else who “gets” what you need to do in such tests—could take centuries or millennia, and it’s far from obvious that we’ll ever get there.

    Yes, in particular if you consider techniques like those in the Voight-Kampff test. But then again, most would agree that the androids in Blade Runner are certainly intelligent, although they can be distinguished from humans based on their reactions to certain stimuli.

    (For one thing, while this is a separate discussion, I consider it quite likely that climate change and the general collapse of civilization will get us first…)

    I did not know that you are sooo pessimistic. Remember that the singularity is supposed to occur around 2050 according to Kurzweil, and after that all of our problems will be solved in an instant. 😉

  134. Darrell Burgan Says:

    Scott #129:

    As a final remark, it’s easy for people to vastly underestimate the difficulty of really, honestly passing the Turing Test …

    Indeed, it’s a super difficult problem, which makes it one of the most fun. It involves a machine learning to communicate in natural language, learning a huge number of individual facts and finding a way to categorize and correlate them in very dynamic ways, some kind of simulation of human emotions, and a whole lot besides.

    I think it’s worth noting that human intelligence is not the only form of intelligence. A huge data warehouse with petabytes of fact data in it can answer questions that no human intelligence could answer, so I don’t think it is a stretch to say it exceeds human intelligence along at least one dimension. But certainly it is not an AI. As computing continues to gain power and capacity, I think we’ll have to better define what we mean by “intelligence”. I think there will be many kinds and flavors.

  135. Vitruvius Says:

    Catastrophic climate change? The general collapse of civilization? People have been predicting the end of man since the dawn of man; it hasn’t happened. When one tries to point that out they say, “this time it’s different”. People have been saying that since the dawn of man too; it still hasn’t happened. And when one tries to point that out to them they get mad at you. There’s less violence than ever. There’s less poverty than ever. There’s less starvation and disease than ever. I used to try to point this out to people, but they got mad at me, and they didn’t care, and I realized that there’s really nothing I can do about it. Things are likely to continue to get better, albeit with swings and roundabouts, regardless of what I do. So I don’t bother arguing about it any more. Instead I concentrate on being as happy as anyone has ever been; instead I worry about that which may be dangerous to me, which does matter to me, and which I can do something about: the danger of my becoming too smug.

  136. Scott Says:

    Vitruvius #135: Your argument is the standard one (they predicted catastrophe before and were wrong!). Let’s set aside the predicted catastrophes of past eras whose reality turned out to be much, much worse than even the loudest alarmists imagined (e.g., the rise of Hitler). I want to focus on something even more relevant: the fact that humanity’s footprint on the planet (pollution, use of natural resources, GHG emissions, pretty much any measure you prefer) is increasing exponentially with time. If you think that a situation of exponentially-increasing resource use can be sustained as steady-state, then you don’t understand exponential growth. In any such case, either there will be a deliberate course-correction, or else before long, the alarmists will necessarily be right—it’s completely irrelevant how many times they were wrong in the past.

    Just yesterday, I learned about a population of caterpillars on Galiano Island, near Vancouver, that repeatedly grows exponentially in size, basically overtakes the entire island, then crashes to almost nothing when its size makes it susceptible to disease. If you were one of those caterpillars (and you knew nothing about the previous cycles), it might seem to you that “the rise of the caterpillars” will continue forever, since every time you check, you find that the caterpillars who predicted continued exponential growth were right, while those who predicted imminent collapse were wrong. But if you thought about it further, you’d realize that the argument is nullified—rendered completely irrelevant—by the finite size of the island. (Note that even escaping off the island—even colonizing the galaxy, for crying out loud—can only buy you a couple hundred additional doublings at most, which will happen in a relative eyeblink at your current rate.)

    Unchecked exponential growth guarantees that you’re headed for a collision with the reality of finite resources—the question isn’t whether it’s going to happen, but only when, and what the details will be (e.g., whether it will be a “soft,” controlled collision or a “hard,” catastrophic one). For more, see my old post Earth Day, Doomsday, and Chicken Little.
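
    To put rough numbers on the doubling argument (illustrative assumptions, not a precise model):

      import math

      growth = 0.02  # assumed 2%-per-year growth in resource use (illustrative)
      doubling_time = math.log(2) / math.log(1 + growth)  # about 35 years

      # Scaling from one planet to a whole galaxy of ~10^11 stars buys
      # only log2(10^11) extra doublings. (Counting total mass rather
      # than stars raises this, but only to a couple hundred doublings.)
      extra_doublings = math.log2(1e11)  # about 37
      print(f"{doubling_time:.0f}-year doubling time; the galaxy buys "
            f"~{extra_doublings:.0f} doublings, i.e. about "
            f"{extra_doublings * doubling_time:.0f} more years")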

  137. Vitruvius Says:

    I could point out that the second derivative of human population with respect to time is already negative, Scott, but I’m not going to, because as I said at 6:01, “I don’t bother arguing about it any more”. C’est la vie, que sera sera, “I’m Glad I’m Not Young Anymore.”

  138. Scott Says:

    Vitruvius: It’s true that large parts of the world have shifted to below-replacement fertility — but as long as other parts maintain high fertility, we continue on an exponential population trajectory. But I’m not even focused on continued population growth as the biggest problem. The main point is that PAST exponential growth has already made us the overwhelmingly dominant force on the planet, creating a situation unlike anything in our previous history. We’ve destroyed a large constant fraction of the forests, driven to extinction a large constant fraction of the species, etc., and none of those trends are slowing down. We’re also on track to emit enough GHGs to melt the glaciers, which will in turn release methane, accelerating warming further. Most people don’t understand just how novel our situation now is: we’ve never before been in a position where just business as usual (no growth), continued for another century or so, results in a massive transformation of the basic physical conditions of the planet. This is new — and the fact that it happens “slowly,” i.e. over decades, unfortunately prevents us from internalizing just how new it really is.

    But I forgot, you no longer argue about this.

  139. Vitruvius Says:

    Correct. Otherwise I’d point out that we would be better off if the atmospheric carbon-dioxide concentration were a half-dozen times higher and the average global temperature were a half-dozen degrees Celsius warmer, which would not be novel but would be like many other periods in the past when the planet was lush with life, contrary to the periods of global cooling when there were three kilometers of ice overhead in the place where I’m sitting as I type, but as you’ve noted I no longer argue about this 😉

    Humans do two things, Scott: they try to make their lives better, and they try to fix the problems caused by the mistakes they made the last time they tried to make their lives better. Always have, always will. Indeed, were it the case that nobody was concerned about the future then I would be screaming about it, yet many or most are concerned, so there’s no need for me to get hysterical.

    I’m here at Shtetl-Optimized because, as I’ve said here before, I like your shtick. I agree with you on the dangers of fraud in quantum computing. I agree with you on the dangers of fraud in artificial intelligence. I just see the dangers from fraud in populist, for-profit, gold-plated, bureaucratic, media-driven apocaholic fear-mongering differently. Rather than jump on that oy vey bandwagon, I prefer the perspective of Hans Rosling.

    Yet however the chips may fall, I highly respect you, I don’t want to dissuade you, and I’m counting on the contributions of people like you in this debate just as much as I’m counting on the contributions of people like me. It is in all seriousness that I say, bonne chance, Scott. It’s by this give-and-take that we solve the problems caused by the last time we tried to make our lives better, not by all of us running to the same corner, pulling in the same direction, and upsetting the balance.

  140. Vitruvius Says:

    Re-reading what I’ve written above, I realize that I may be sounding more flippant than I intend. If I sound like I’m fearless, believe me I’m not. Yet when it comes to fear I agree with David Deutsch’s perspective, as he elucidated in “The Beginning of Infinity”: the great threat to the future is not the various horsemen feared by the various sects of apocaholics; rather, it is the cult of pessimism in and of itself. As long as we remain calm and carry on, as it were, we’ll probably muddle through and things will get, on average, better. Besides, that really is the only rational approach available to us. But if we give up and let the dæmon pessimism take hold, then all will indeed be lost.

    Beware the fear of change, I say. Or as David Gallo says in The Deep Oceans: A Ribbon of Life: “Proust said that the true voyage of discovery is not so much in seeking new landscapes as it is in having new eyes. […] The first thing we see coming out of the sea floor, after a volcanic eruption, is bacteria [and then worms, shrimp, crabs, fish, &c…] You’ve got these little chimneys, sitting here, smoking away ~ this stuff is toxic as hell, by the way, you could never get a permit to dump this in the ocean ~ and it’s coming out all around [the earth]. It’s basically sulfuric acid […] and animals are thriving in it […] and we probably came from there. […] As an ecosystem, they’ve been stable for billions of years […] Is it sensitive? Yes. Is it fragile? No. […] Everything on this planet works by cycles and rhythms […] It’s not a disaster, it’s rhythmic. What we’re learning now [..is..] you can’t listen to a five billion year long symphony, get to today, and say stop, I want tomorrow’s note to be the same as today. [That’s] absurd.”

  141. Philip White Says:

    Hal 9000, #108: I agree with your claim that there ought to be other ways of attacking the AI problem other than the Turing test. People are hooked on the Turing test for no particular reason other than that Alan Turing came up with it; it’s not that productive of a research goal, in my opinion.

    I think real progress will be made in AI when computer scientists focus on more realistic/relevant goals. For example, I think a great AI research problem would be: devise an algorithm that can figure out whether a particular natural-language sentence is a tautology, i.e., “true regardless of assumptions.” For example, “A triangle has three sides” is a tautology, as is “There exists a vaccine for smallpox.” On the other hand, “He likes green beans” is not a good tautology, because you don’t know who “he” is.

    This problem has nothing to do with Turing tests or “chatbots,” and doesn’t seem to get a lot of attention, at least from the media (or anyone else that I’ve heard of).

    The reason I think this problem is important is that a real answer to it would confer a certain amount of ability to answer questions that humans have. I personally don’t have any interest in a robot that can do dishes or talk to me about current events or something; I’d like a robot (algorithm) that could give biologists hints as to how to reverse the aging process, etc.

  142. Ian Finn Says:

    Hello Professor Aaronson,

    I know this is off-topic, but I was wondering if you might have any comments on Gil Kalai’s recent blog post, which contains some slides and discussion regarding the possibility of noise preventing the scaling up of BosonSampling beyond 7–8 photons?

  143. Anon Says:

    Vitruvius, about your suggestion to increase CO2 levels, check out http://xkcd.com/1379/

    I think you haven’t thought it through – do you really want (e.g.) New York City to be underwater?

  144. rrtucci Says:

    Vitrivius said:
    Re-reading what I’ve written above, I realize that I may be sounding more flippant than I intend. If I sound like I’m fearless; believe me I’m not.

    Vitrivius, your problem is not that you are flippant. It’s that you don’t argue logically, and you are a bit pompous (okay, a lot)

  145. rrtucci Says:

    Sorry, I misspelled Vitruvius

  146. Rahul Says:

    the fact that humanity’s footprint on the planet (pollution, use of natural resources, GHG emissions, pretty much any measure you prefer) is increasing exponentially with time.

    Is it really true for all (or most) pollution indicators? Are they all growing *exponentially*?

    In fact, my impression was that some pollution indicators (e.g smog, SOx, acid rain, particulates etc.) had stabilized?

  147. Scott Says:

    Rahul #146: Yes, absolutely, there are certain specific things that improved over time—because enough people understood the problem and fought and won. (Arguably the single most important was the worldwide phasing out of CFCs—which, while bitterly opposed by the chemical companies, was an orders-of-magnitude easier battle than GHGs because of the tiny cost of safe alternatives.) This is loosely analogous to how, despite the exponential trajectory of Moore’s Law, certain specific aspects of computers have gotten worse rather than better. (E.g., newer versions of Windows and PowerPoint have been slower and crappier than their predecessors, despite having more processing power and memory to work with.) Yet, despite these fluctuations, the general trend is unmistakable, and I’d say the same is true with environmental destruction.

    (Another crucial point, as I said before, is that with many types of destruction—deforestation, ocean trawling, dumping garbage into the oceans—the rate of destruction doesn’t need to increase for the thing to be pretty much completely destroyed within the next century. The current level of destruction, continued “stably,” would be perfectly sufficient.)

  148. Jay Says:

    Scott #147

    It’s not clear according to which measure you see exponential destruction. Population growth? That was the big fear in the ’70s; now demographers know population is stabilizing. CO2 emissions? The exponential phase was 1950–1980 (+4 Gt); the present increase is much lower (+2 Gt for 1980–2010), and whatever happens, it can only stabilize, as the resources are finite. Deforestation? Big in some places, absent in Europe, where forest cover has actually been increasing for >100 years.

    Anon #143

    I agree IPCC gives us the best estimate we can reach of what will happen. I agree 5 degree is extraordinary large. I agree New York might get under the sea at some point. But in what time frame?

    New York under the sea in 30 years, that’d be truly catastrophic. New York under the sea in 3000 years, that’s not a big deal. The IPCC’s actual prediction is 1-3 meters by 2300, with low confidence. The 7-meter scenario reported here and there has even higher uncertainty and might happen only over several millennia.

    http://www.climatechange2013.org/images/report/WG1AR5_Chapter13_FINAL.pdf

  149. Rahul Says:

    Scott:

    Agreed. But for the trends that have gotten worse, is the scaling really exponential? Or is it quadratic or some lower order?

    I’m just wondering how badly things like energy consumption, greenhouse gases, etc. scale.

  150. Itai Says:

    HAL 9000,
    regarding the Penrose-Hameroff theory: there’s suggested new evidence from 2014 about quantumness in microtubules. Did you read it?
    http://www.kurzweilai.net/discovery-of-quantum-vibrations-in-microtubules-inside-brain-neurons-corroborates-controversial-20-year-old-theory-of-consciousness

  151. Scott Says:

    Vitruvius #140:

      I agree with David Deutsch’s perspective, as he elucidated in “The Beginning of Infinity“: the great threat to the future is not the various horsemen feared by the various sects of apocaholics, rather, it is the cult of pessimism in and of itself. As long as we remain calm and carry on, as it were, we’ll probably muddle through and things will get, on average, better. Besides, that really is the only rational approach that there is available to us.

    There were aspects of The Beginning of Infinity that I loved, but the closed-minded, dogmatic optimism was not one of them. Optimism is a personal temperament, not a rational theory of how the world is. As far as I can see, there’s no teleological reason whatsoever why, for example, if we find that we’re dramatically warming the earth through CO2 emission, the problem must be solvable by, say, emitting SO2 to counteract the warming … and if all that sulfur dioxide in the air causes severe problems of its own, well, there must be a solution to that, and so on ad infinitum. As far as nature is concerned, it’s perfectly fine for this process simply to terminate with the vast majority of us dead. If we actually want to solve our problems, then instead of plunging blindly forward like the local-optimizers we are, we might need to backtrack within the tree of technological possibilities, and try something totally different from what we were doing before. E.g., wean ourselves away from fossil fuels, and switch to nuclear and solar. But it’s clear that that’s never going to happen without governmental action: even after we use up all the oil, there’s unfortunately too much coal in the ground.

    I recall that, in one striking passage of the book, Deutsch muses that, sure, it’s possible humans will wipe themselves out, but even if so, that doesn’t contradict his theory of optimism, since some other species on another planet (in his nomenclature, those aliens will also be “people”) will then colonize the universe and do all the other things that humans would’ve done. (Mind you, Deutsch doesn’t guess that will happen—in his typical manner, he deduces it from some principle or other.)

    In any case, I think it’s important for people to understand that this sort of abstract, cosmic, almost Spinozan optimism is the very most that’s available to you on a Deutschian worldview, by Deutsch’s own admission. If we want our descendants to be the ones who will survive and flourish, then we better fight for it, since it’s a very real possibility (or even likelihood?) that it won’t happen.

  152. ScentOfViolets Says:

    Scott #109:

    If a computer passed a real Turing test—one where the interrogator asked creative and unusual questions, and refused to accept answers that didn’t demonstrate understanding of them—then in practice, it couldn’t do so by caching the answers in a giant lookup table. The space of possible questions, as you can easily verify, is too astronomical.

    AFAICT from the original paper, Turing specifically disallowed any physical limitations on his computing device; in fact, he granted it an infinitely large memory. Are you saying that human-style intelligence (or consciousness or whatever you want to call it) is dependent on the physics of our universe? :-)

  153. Scott Says:

    ScentOfViolets #152: I was careful to say, “in practice.” Of course you could pass the Turing Test with a sufficiently-large lookup table—that’s a point many have made, and that I explore in detail in WPSCACC. But the lookup table would vastly exceed the size of the observable universe.
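
    (For a rough feel for the numbers, under the arbitrary assumption of questions of at most 1000 characters over a 30-symbol alphabet, there are already up to

            30^1000 ≈ 10^1477

    possible questions, against only ~10^80 atoms in the observable universe; and that’s before you key the table on entire conversation histories rather than single questions.)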

    Now, suppose, counterfactually, that you were able to build a lookup table larger than the observable universe, and suppose it did pass the Turing Test—should we then say that that table was “conscious”? Well, that’s a very interesting question, one that I also explore in the essay. My own strong inclination is to say “no”: i.e., that even if we don’t know what consciousness is, much like with a Reed-Solomon decoder or a 2D grid, we can be quite confident that a giant lookup table is not conscious. But in any case, that’s a separate discussion: the point I was making to HAL 9000 was simply that, in practice, a serious Turing Test will never be passed in this crude way.

  154. Scott Says:

    Ian Finn #142: Gil has been at his “BosonSampling can’t scale beyond 7 or 8 photons” thing for quite some time, and I’ve responded to him at length in the comments sections of many previous posts. I just looked at his blog, and was unable to find where he says something new about this—maybe you could point me there?

  155. domenico Says:

    I have a problem with a sufficiently-large lookup table: if I repeat the same request twice, for example “tell me a story, longer than 100 words, involving a goat, a cabbage, and a key”, then the lookup table always gives the same story, and so it fails the Turing test.

  156. Scott Says:

    domenico #155: As a simple fix, just imagine that the lookup table takes as input the entire history of your previous interactions with it, not just the question that you asked most recently.
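
    (A minimal sketch of what I mean, in Python, with two placeholder entries standing in for the astronomically many real ones:)

        # Toy sketch: key the lookup table on the whole transcript so far,
        # not just the latest question.  The entries below are placeholders;
        # a real table would need one for every possible conversation
        # history, which is exactly the astronomical part.
        history = []

        def reply(question, table):
            history.append(question)
            # A repeated question produces a longer key the second time,
            # so it can map to a different answer.
            return table[tuple(history)]

        table = {
            ("tell me a story",): "Once upon a time, a goat ate a cabbage.",
            ("tell me a story", "tell me a story"): "Again? Fine: a key opened a gate...",
        }

        print(reply("tell me a story", table))  # first story
        print(reply("tell me a story", table))  # a different story the second time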

  157. ScentOfViolets Says:

    Scott #153:

    Yes, I read this several months ago. However, you talk about metaphysical properties; I’m suggesting instead (I think I posted something to this effect when you first posted your essay) that your notion of human-equivalent intelligence becomes — in practice, if you like — a statement about the boundary conditions and physical laws of our universe. The look-up tables we’re talking about would be no big deal in, say, one of Rudy Rucker’s universes composed of infinitely divisible matter.

    This is a bit of an aside, really; I just know how you like to tie questions of computability to physical law and vice versa and thought you would be amused. :-)

  158. Ian Finn Says:

    Scott #154: Ah, my apologies; perhaps none of the material is actually new. Maybe I just hadn’t seen it presented in quite so formal-seeming a manner.

    I was thinking of this as “new:”

    http://gilkalai.files.wordpress.com/2014/06/aiq5cc.png?w=640&h=480

    And the slides on “the emerging realistic picture for boson sampling,” all from

    http://gilkalai.wordpress.com/2014/06/14/influence-threshold-and-noise/

  159. Alexander Says:

    Even if we had unlimited memory, unlimited lookup tables would be quite difficult to fill. Nevertheless, I have no doubt that the human brain utilizes extremely large lookup tables.

    There is one general problem with most current approaches to machine learning:

    I have two-year-old twins, and I have observed their cognitive development very carefully during the last two years. One thing I can say: they learn very, very slowly compared to what we expect from machine learning algorithms. It took nearly one year until they were able to speak single words, and even now, after two years, they are far away from passing the Turing Test.

    So they have this very superior neural processor with 10^11 neurons and the ability to establish about 10^15 synapses, but they still need many, many years of learning until they even get close to passing the Turing Test.

    Still, we seem to expect that machine learning algorithms will be able to achieve similar cognitive performance with a much shorter learning phase. But we have no evidence that this is possible at all. It might very well be the case that a strong AI will need many years of training as well, and we cannot evaluate the final result before several years have passed.

  160. Vitruvius Says:

    I certainly agree, Scott, that the use of non-renewable hydrocarbon reserves as fuels for transportation, heating or cooling, and lighting is wasteful, and that we are better off moving to nuclear and solar as quickly as practicable. We need those hydrocarbon reserves as feed-stock for chemicals and plastics, which already account for on the order of half the hydrocarbons we harvest. And we are working diligently on the problem; just look at the results regularly reported on your own institution’s video.mit.edu, which I subscribe to (literally and figuratively).

    I also think people are grossly excessive consumers: my footprint is very small, especially when adjusted for my level of wealth. I don’t have a TV, I don’t have a mobile phone, I have one computer (which I replace only every ten years, even though I’m a professional software developer and more-or-less live on the thing), I haven’t ridden in an airplane since 1994 and don’t ever plan to again, and in my 60 years I’ve only spent $12,000 in total over the three cars I’ve owned, which I’ve used and use to drive much less than 1,000 kilometers per year. I simply don’t like spending money.

    And I think we would probably agree that a low footprint coupled with high wealth is a generally good goal for human-kind. Surely we don’t want folks in the less-developed parts of the world to have to suffer from a lack of the quality of life we enjoy. Though some will undoubtedly worry that if everyone cuts their consumption to my level the global economy will suffer, which won’t help the poor either; I think we’ll probably work that out too, over time.

    I must admit though that I don’t understand your reference to “our” descendants: I don’t have any descendants, because I made a conscious decision to not contribute to the positive (if temporary) magnitude of the first derivative of population with respect to time. I’m also a bit leery of unbridled government action, due to the dangers of it producing the kind of disastrous results you first mentioned at 19:16 yesterday (when you risked having others invoke Godwin’s law, while I was only thinking of the effect of the Beer-Lambert law on atmospheric heating).

    But I’m willing to abide by the decisions of a functioning democracy, for better or worse. If my species does decide to commit suicide, I don’t think I have the right to stop it; that seems like hubris to me. Yet I don’t think we will ~ it’s not in our nature. If optimism is a personal temperament and not a rational theory, as you claim, then so is pessimism, which is why the balances engendered by a non-elitist democracy are needed to level out the extremes.

    I don’t know if you remember James Burke, he wrote and produced The Day the Universe Changed and Connections I, II, and III. He did the Apollo 11 coverage for the BBC. Said BBC asked him to make a number of predictions of the future, back in 1973, and he turned out to be remarkably prescient. Recently the BBC asked him to predict the future c. the end of this century; he predicts the end of work and the end of illness and disease, but not a great lengthening of life. When asked what we will do with all our spare time he said, “gardening”. Humans have been gardening since, well, Adam and Eve 😉

    I agree with Burke (as well as with Deutsch, Rosling, Bjørn Lomborg, &c), so I guess I’ll have to leave your pessimism, and my optimism, at that, other than to note, once again, that between us I think we’re likely to work it out, and that if between us we can’t, then there’s certainly nothing I can do about it all by myself.

  161. Serge Says:

    Alexander #159:

      A strong AI will need many years of training as well, and we cannot evaluate the final result before several years have passed.

    Absolutely, and when this happens – if it has to happen – the unplugging of a strong AI will be punished like any other criminal act. However, I still fail to understand why mankind should need to reproduce itself by such a less pleasurable method than the traditional one… unless we’re speaking of intellectual pleasure, of course.

  162. Scott Says:

    Ian Finn #158: Oh, right, Gil has a nice preprint with Guy Kindler where they study BosonSampling with some random perturbations in the entries of the scattering matrix. It’s not a realistic error model, but I’ll grant for the sake of argument that the conclusions would carry over to more realistic models. Their basic result was not at all a surprise to us: namely, if the error per photon is more than ~1/n (where n is the number of photons), then you’ll sample from a distribution that’s far in variation distance from the “ideal” BosonSampling distribution. Alex Arkhipov also proved a tight matching result, showing that if the error per photon is less than 1/n, you’ll sample from something that’s close in variation distance to the ideal distribution. This is just as expected: it’s precisely analogous to what happens in standard, qubit-based quantum computing without fault-tolerance, showing that BosonSampling is neither more nor less “brittle” than that.

    With BosonSampling, on the other hand, the advantages are that n photons correspond directly to n×n permanents—and also, that we don’t care exactly which problem we’re solving, as long as we’re doing something that we can characterize and give evidence for the classical hardness of. In particular, if you could scale to (say) 20 photons, I’d count that as a smashing success, and even 10 photons would be great. And that should only require getting the error down to ~5% or ~10% per photon respectively—which, if you use the scattershot approach, I see no reason to think is impossible.
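
    (To spell out the arithmetic behind those numbers: if each photon independently suffers error ε, the deviation of the output distribution grows roughly like nε, so staying close to the ideal distribution requires

            ε ≲ 1/n

    which gives ε ≈ 1/20 = 5% for n = 20 photons, and ε ≈ 1/10 = 10% for n = 10.)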

  163. ScentOfViolets Says:

    Rereading Scott’s WPSCACC essay in the context of this post, it seems that Scott is positing that in some sense intelligence minimizes the number of gates involved in both memory and computation (there’s no formal difference, after all). But wait! Isn’t this sort of dependent on the logically allowable inputs? Right now, billions of years after the Big Bang, I can ask questions about the number of legs of various taxa, why you shouldn’t marry siblings, the Dred Scott decision, etc. But how about, say, a month after the BB? Obviously asking how many legs a dog has is nonsensical, as are any number of contemporary legitimate questions; the universe simply hasn’t had enough time to evolve that degree of structure. ISTM that, going back in time, the number of gates needed for straightforward ‘computation’ vs. the number of gates needed for brute memory becomes increasingly weighted towards the latter. IOW questions of ‘intelligence’ in this sense are entirely era-dependent.

  164. RandomPerson Says:

    After my second question, all I got was “…wait”. I responded in mixed transliterated Russian and English: I said hello, how are you? He said something to the effect of good, and you? I said horosho, and you? He said something to the effect of “I believe the English words ‘good’ and ‘bad’ fit. Please use them”, so I asked him in proper Cyrillic if he had problems speaking in his native language, and I got “…wait”, and that’s what I got for every question after.

    I ended with a question about Spider-Man, got the “…wait” reply, concluded I had totally broken the bot, and moved on. I actually expected the bot would get light Ukrainian/Russian, and expected the redundant “and you?” would get missed, but eh… wait :)

  165. Rahul Says:

    Regarding the pollution / AGW & what we are doing to fix it, I’ve an anecdote.

    I lived in a university town & plenty of academics were pretty passionate about doing their bit. One guy I knew used to scrupulously recycle, drive a very fuel-efficient car, bike around a lot, etc., & always complained about ignorant Americans driving fuel-guzzling SUVs, not carpooling, having crappy insulation on their houses, etc.

    But this same (high-profile) Prof. used to fly to approximately 4-5 conferences / meetings / reviews etc. a month & conveniently ignored the fuel consumption & greenhouse footprint of such trips.

    Personally, I’m still confused as to how one evaluates such lifestyle choices from a saving the environment viewpoint.

    Is driving an SUV worse than 24 transatlantic trips a year?

  166. domenico Says:

    I think I understand now: each program is a linked graph that contains each computable function (with a numeric or textual argument), so that each human answer can be simulated by a machine; the graph changes over time with the questions and answers, and if the machine can ask questions, including of itself, then the difference from a human cannot be measured (and the same applies to the robot-life difference, virtual-reality objects vs. real objects, simulated sounds vs. real sounds, and artificial vs. biological objects).

  167. Scott Says:

    Rahul #165: People are constantly trying to turn environmentalism into a question of personal virtue and sin. That way, they get to call environmentalists hypocrites if it turns out that they themselves are part of civilization, and partake of many of the things that they correctly point out are unsustainable. Stupidly, shortsightedly, many environmentalists themselves go along with this stunted vision, by presenting environmentalism as not about the civilization-scale changes that will be needed for us to have a chance at survival, but about the warm fuzzies you get when you turn off the lights before leaving the house.

    Personally, I’ve always regarded it as obvious that all the well-meaning liberals in the world, trying to limit their environmental footprints as a matter of individual conscience, won’t begin to make a dent in the actual problem. For every such well-meaning liberal, there will be 50 people who pollute with totally clear consciences, either because they deny the existence of the problem, they don’t care, or they believe they have a right not to care, since they weren’t born into lives of privilege. And therefore, how well a few liberal academics succeed or fail at limiting their carbon footprints, etc. shouldn’t even be part of the conversation. The only thing that would make a difference, is for governments simply to tax every economic activity at a rate that reflects its actual damage to the environment, and to use the funds thereby collected for cleanup, mitigation, and research. That way, we could all decide what kind of car to buy, how much to drive, how many conferences to fly to, etc. not as questions of personal virtue, but as straightforward questions of economics—and the invisible hand would take care of the rest.

  168. Scott McGregor Says:

    Asking common-sense questions is a good strategy against a simple chatbot. A smarter Turing test challenger system might combine faculties of Google search, Wolfram Alpha, and Watson. With access to more generally known facts about the world, such a system might be better able to answer a question like whether a shoebox or Mount Everest is larger, by searching the web for measures of “Mount Everest” and “shoebox”. Correct answers to such questions would surely convince more unsophisticated judges. But probably not the most sophisticated judges: linguists, linguistic philosophers, and NLP researchers.

    What kind of test performances really represent a sea change? Computer chess programs can now easily beat the human world champion. And it was pretty amazing in the ’60s and early ’70s when programs got to the point where they could beat 30% of average skittles players (equivalent to the performance level achieved in the Eugene case we are discussing). Those early programs were a modern wonder, like “the dancing bear”: the bear wasn’t amazing because it could dance so well, but because it could dance at all. But over time, performance at that level isn’t amazing at all. The bear has to keep dancing better and better or we lose interest. In chess, that meant beating the human World Chess Champion reliably. In the Turing test, I think that has to mean fooling an overwhelming majority of the testers consistently.

    Linguists, linguistic philosophers, and NLP researchers will probably still be able to find common-sense questions where the Web’s answers might mislead an algorithm that lacks real understanding of how we use common-sense phrases, for quite a while longer.

    For instance, while there may be some people who are unfamiliar with Mount Everest, everyone who types knows the rough dimensions of their hand and their head. So humans won’t have a hard time with “Which can hold more fluid: your hand or your head?” But naive searches of the internet will find many instances where people talk about holding their head in their hands, but few instances of holding one’s hand in one’s head. By “common sense” reasoning, if a thing X can “hold” (contain) a thing Y, then X is larger than Y; so this might induce a bot that searches the web for answers to conclude that a hand could hold more fluid than a head, even though the volume of a hand would easily fit inside a head. If it checks anthropometric websites it might see hands are smaller, but then it has conflicting answers.
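
    (Here’s a toy sketch, in Python, of how that naive heuristic would go wrong; the hit counts are invented, standing in for real web-search frequencies:)

        # Naive "common sense" rule: if "X holds Y" is common on the web,
        # conclude X is larger than Y.  Hit counts are invented placeholders
        # for real search-engine frequencies.
        hits = {
            ("hand", "head"): 90000,  # "held his head in his hands" is everywhere
            ("head", "hand"): 200,    # "held his hand in his head" is almost unheard of
        }

        def naive_larger(x, y):
            # Guess which of x and y is larger from how often "x holds y" appears.
            return x if hits.get((x, y), 0) > hits.get((y, x), 0) else y

        # The bot decides the hand is "larger", so it answers "hand":
        # exactly the wrong answer to "which can hold more fluid?"
        print(naive_larger("hand", "head"))  # -> hand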

    But humans won’t have any difficulty with such a question, because they recognize something else about “common sense” language — there is a specialized meaning for “hold” that applies to hands, that means something different from “contains”. And humans know that holding one’s head in one’s hand references this 2nd meaning, while the question about fluid is about volume and containment. So humans will have no problem immediately knowing the answer is “head”, while even a research bot will have to weigh lots of conflicting evidence.

    Ontology reasoning systems like SUMO/Sigma and Cyc can help a bot identify lots of special rules like this, but usually the bot would have to know a lot of context to know which ones to employ in all situations. And one of the problems in getting that context right is that people have a lot of “common sense” experience of the world, based upon their own internal bodily experiences of living in and moving through and manipulating objects in the world in a linear-time fashion. But most non-fiction literature is written from an “external” viewpoint, and is divorced from temporality. So searches of factual literature will likely yield different response fodder for bots than humans would employ; while searches of first-person subjective literature are likely to fail to distinguish generally accepted fact from fiction.

    When and if a bot can reliably answer these kinds of questions, posed by these kinds of judges, that will be a meaningful milestone.

  169. Vitruvius Says:

    The problem with dumping personal responsibility on “the government” by having “them” simply “tax every economic activity at a rate that reflects its actual damage to the environment”, much as I like that concept in principle, Scott, is that in practice we’ll have to get an electoral plurality to agree on what rates reflect actual damage to the environment; otherwise the party proposing such a practice will not be elected, and so it will not be able to implement its plans.

    And given the legacy of previous claims of actual damage to the environment that are now known to be bogus, attaining such agreement may not be feasible. I mean, if we go back to the ’70s and apply your dictum, we would have set up tax rates to ward off the imminent global freezing that was then headlining the cover page of Time magazine et al, and where would that leave us now? The alternative, setting up a hyper-authoritarian government that simply decrees its policies, damn the electorate, doesn’t seem very liberal to me.

    Anyway, I really just wanted to thank you for the pointer to Spinoza you gave me yesterday at 12:46. I had of course heard of the man but I’d never taken a closer look at his work. I’ve now finished the first 24 pages on Spinoza at the Stanford Encyclopædia of Philosophy, and I do see how you would associate parts of the attitude Deutsch and I have with his work. Although I can’t go all the way with Baruch’s models of reality, at least in part because he was limited by the 17th century’s lack of understanding of the rôles that fundamental things like chaos, noise, uncomputable real numbers, and Gregory Chaitin play in reality 😉

    Now, when you wrote yesterday that “this sort of abstract, cosmic, almost Spinozan optimism is the very most that’s available to you on a Deutschian worldview” (which seems pretty much like enough to me, in principle, even if it is a bit too Zen in practice) you implied that there is more than that, which I should try to become aware of. So, since you’re the teacher and I’m the student in our relationship, my dear professor (and I am being guileless, not sarcastic): may I ask which philosopher or philosophy you would recommend that I should next try to add to my understandings of Zeno of Citium, Seneca the Younger, and now Spinoza, in order to try to understand that “more” that I’m missing due to my limited philistine appreciation of my currently simplistic pantheon?

  170. Scott Says:

    Vitruvius #169: The hope is that, even if most of the electorate will never agree to unilaterally pollute less—and why should they?—the majority can be persuaded that it would be a good idea to make everyone pollute less (or at least, to make everyone pay taxes commensurate with how much they pollute). And I don’t see this as politically impossible—hard, but not impossible. Most of the world’s advanced countries are already taking steps in this direction, and given opinion polls, even the US probably would be too, if not for our medieval faction, and the many things it’s figured out how to do to amplify its power well beyond the ~30% of the country who actually agree with it.

    As for deciding how much to tax each polluting activity: the hope would be to have an independent body of extremely competent scientists, conducting an open and transparent review process, with all the proceedings recorded on the Internet, open to input from anyone whatsoever, but with anyone with a fringe agenda needing to break through a firewall of scientific skepticism before their views can be acted upon. We already know how to do this sort of thing—for crying out loud, it’s the system used extremely successfully by Wikipedia and MathOverflow and open-source programming projects! But as far as I know, it’s never been used in government, not even for issues much simpler and smaller than saving the entire planet. It’s an extremely interesting aspect of the environmental crisis that it might never be solved without a revolution in epistemology—in particular, in how we come to establish factual truths for the purpose of deciding on public policy. But such a revolution is long overdue, and we already have the tools for it.

    Note, in particular, that such a system would make the “global cooling” you mentioned a non-issue. As I understand it, the mainstream scientific community never took global cooling seriously; it was only ever a fad hyped by a few people and then amplified by journalists (just like, let’s say, a certain chatbot passing the Turing test… :-) ). Science works, and has worked for 400 years. So the one thing we’re lacking and desperately need, in addition to science, is a system for preventing the arguments of the scientists who know what they’re talking about from getting drowned out by the bloviations of the loudmouths who don’t. And the Internet (with Wikipedia-like moderation tools) could be an incredible tool for that, if only we used it well.

    As for your philosophy reading request: given how limited and selective my reading of philosophy has been, I should really be asking for such advice rather than giving it out! I can only tell you the philosophers who I, personally, found most enlightening to read—in many cases, less because of the arguments themselves (which, though original for the time, have since become familiar to almost any educated person) than because of the wonderful language and wry humor they used in making them. My all-time favorites are David Hume, John Stuart Mill, William James, and most of all, Bertrand Russell.

  171. Vitruvius Says:

    Thank you, Scott. Mill has always been a favourite of mine too; I love the way he places ethical constraints on the excesses of hyper-utilitarianism, even while I think that if one’s going to be prescriptive, utility isn’t a bad place to start. And now I’ll add Hume, Russell, &c, to my reading list. Ciao.

  172. Ben Standeven Says:

    @Itai #150:

    I can’t speak for HAL 9000, but I’d say that this “Orch OR” theory has the same problem as IIT. They’ve picked one particular property of conscious systems and assumed it to be identical to the whole thing. Only, where IIT predicts that we can create an Artificial Deity by wiring a bunch of XOR gates into a grid, OOR predicts that we can create one by wiring a bunch of D-Waves into a grid.

  173. Rahul Says:

    to make everyone pay taxes commensurate with how much they pollute

    In general, we don’t pay typical taxes commensurate with the services we use, so I’m not sure whether taxing pollution that way would find broad acceptability or not.

    The devil’s in the details. Also, I’m not sure how well a centrally compiled massive database of pollution tax schedules meshes with the “invisible hand”. And fairness in these contexts seems a very complex notion, e.g. should the farmer pay for his farting cows?

    Another point is that at least within nations we have governments to mandate these things. Internationally, the only way to get things done is diplomacy: how are you going to convince India & China, with their humongous appetites for coal-fired power plants? When your kids are dying of starvation & malaria, it’s a bit hard to be asked to worry about the effects of a few degrees of temperature rise a century later.

    At some point you are going to have to appeal to things like virtue. And the point behind my story was: if we cannot even agree on the virtue of SUVs versus jet-setting (in a pollution context), it’s going to be infinitely more difficult to agree on a consensus taxation policy.

  174. Scott Says:

    Ben #172: Alas, a bunch of D-Waves in a grid isn’t enough for Penrose, and even a bunch of functioning, universal quantum computers in a grid wouldn’t be enough for Penrose either. He needs quantum gravitational computers—that is, machines sensitive to the hypothetical Turing-uncomputable phenomena that he speculates should arise from a quantum theory of gravity. So, arrange a bunch of those in a grid and you’re probably good to go. 😉

  175. Scott Says:

    Rahul #173: Yes, of course it’s an incredibly difficult political (and international relations) problem—that’s why it hasn’t happened already, even though much of the world now agrees in principle about the need for it. Yet somehow, we do manage to come up with tax policies for sales, earned income, capital gains, and other things—policies that, despite how complicated and onerous they are, at least part of the population actually obeys. 😉 So it’s certainly not impossible. The easiest solution, I think, would involve taxing fuel very heavily at the point of sale, and also taxing electricity—so that the oil and electric companies are the ones who have to deal with it (and then pass the costs on to consumers), rather than the consumers directly. But yes, farmers would absolutely need to be taxed for their farting cows—that’s actually a huge contributor to global warming, since methane is so much worse than CO2 as a greenhouse gas. And if that causes the price of beef to go up—well, that’s what ought to happen (and I say that as someone who loves burgers, steak, pastrami … you name it).

    Finally, since you seem obsessed with jet-setting academics :-) : while it’s true that such an individual’s carbon footprint is almost certainly dominated by their jet-setting, it’s worth knowing that at the societal level, in the US, only about 2% of total GHG emissions come from passenger aircraft, compared to 18% from cars and light trucks.

  176. Alexander Says:

    Philip #141:

      The reason I think this problem is important is that a real answer to it would confer a genuine ability to answer questions that humans have. I personally don’t have any interest in a robot that can do dishes or talk to me about current events or something; I’d like a robot (algorithm) that could give biologists hints as to how to reverse the aging process, etc.

    You’d like a robot that can answer questions that human biologists cannot answer yet?

    One step after the other: before approaching superhuman capabilities, it might be a good idea to create a robot that talks about current events first.

  177. Vitruvius Says:

    I don’t think that Rahul is obsessed with jet-setting academics, Scott, I think he’s justifiably concerned about the extreme levels of hypocrisy and lack of virtue demonstrated by you and your fellow rich neo-liberal authoritarian pro-state travellers. Other than the council of wise scientists, which you and I have agreed would be a good idea, your sole ideas for solving your purported environmental problem consist of denying your own and your fellow travellers’ personal environmental responsibility because, like a six-year-old, you are arguing that the other kids are doing it, while you are simultaneously proposing the imposition by government fiat of massive increases in the cost of food and energy, under threat of criminal penalty for non-compliance, which would hit the poor the hardest, even though you tenured professors would presumably survive the global economic collapse you are advocating, like parents who advise their children to do as they say, not as they do.

    Medice, cura te ipsum!

    The sky is falling but we can fly, la de da? No. I’ll believe there’s an environmental catastrophe that requires unparalleled attention, Scott, when you folks start personally behaving like there’s one.

  178. Scott Says:

    Vitruvius #177: Err, OK, let’s not meet for drinks then?

    Look, I fly a helluva lot less than I used to, turning down 4 or 5 invitations for every one that I accept. And I pack trips together to minimize travel distance. And I fly coach—always. And I hardly ever drive, and I live in a rather small urban apartment. And I give money to environmental causes. And my retirement savings are in something called the “Socially Responsible Fund.”

    But suppose none of those things were true. It’s completely obvious that would have no effect—none whatsoever—on the truth or falsehood of any claims about climate change or its effects on civilization. You give yourself away in the following striking sentence:

      I’ll believe there’s an environmental catastrophe that requires unparalleled attention, Scott, when you folks start personally behaving like there’s one.

    So, you’ve admitted that you see this issue, not in terms of the reality of the external world, but simply in terms of human status games. You don’t care whether GHG concentrations are actually increasing unsustainably, or whether that will actually destroy civilization; you care whether the people who call attention to those things are too uppity or elitist or hypocritical for your taste. (It’s a bit similar to the religious apologists who, when they realize they can’t get Dawkins on the facts, resort to attacking him for smugness and arrogance.) This is a game that you can always win: even if I, and other academics, were doing ten times more to limit our personal carbon footprints, you could still fault us as hypocrites for not doing enough, even while you did nothing and remained perfectly self-satisfied about it.

    You, in other words, are the problem: as long as a large fraction of the world continues to conceptualize this issue the way you do, it will remain unsolvable.

  179. Jay Says:

    Vitruvius #177,

    By the same logic, you should not believe earthquakes are environmental catastrophes or that genocides exist unless we all become firemen and peacekeepers.

    Look, I’m often bored by the bullshit we see here and there, such as the 7-or-70-meters hype, and more generally the idea that we already know that the damages will be enough to justify huge taxation now.

    But there’s nothing like hearing the other side to put things in perspective. Duh… seriously.

  180. Rahul Says:

    But suppose none of those things were true. It’s completely obvious that would have no effect—none whatsoever—on the truth or falsehood of any claims about climate change or its effects on civilization.

    @Scott:

    Agreed. But it does matter to someone like me (whose views are somewhat different from Vitruvius’s) because:

    (1) I think Global Warming is very real
    (2) I think anthropogenic causes have certainly intensified it.
    BUT
    (3) I think reversing or halting the trends at this stage would require extremely drastic changes &
    (4) We as a society don’t seem to have the appetite for the sort of sacrifices and changes & co-operation that’d be needed to pull such changes off in any meaningful manner.

    So, my position is far from that of a GW denier. Perhaps it is just cynicism about our capabilities.

    But hypocrisy certainly riles me up. I’ve met many with this holier-than-thou attitude: look how conscientious I’m being by biking & driving a small car and switching to CFLs, but look at those evil guys driving their SUVs and not recycling, yada yada. And then the jet-setting annoys me, especially when compounded by 5000-mile airline trips just to catch the perfect wave on their board or the perfect ski slope for a week (those lifestyles seem to often coincide).

    All I wish is that people took a holistic view of their own lifestyles before they jump at judging others for their Hummers.

    If someone has really succeeded in having a tiny overall greenhouse-emission footprint then I really do admire them, but it isn’t an easy lifestyle.

  181. Scott Says:

    Rahul #180: Well, yes, people with holier-than-thou attitudes annoy me too. But see, that’s exactly why I believe that the only long-term solution to the climate crisis will be to tax activities according to how much they pollute, and then let individuals make the decisions for themselves about whether they want to drive a Hummer, fly 5000 miles to catch the perfect wave, etc., even though those choices might cost so much that the person then has to scrimp on other polluting luxuries. That way, we reduce the issue to a relatively-straightforward matter of economics—with no moral judgment of anyone over their lifestyle choices—rather than getting into endless, unresolvable debates over “purity” and “holiness.”

  182. Vitruvius Says:

    I have consistently maintained in this discussion that I don’t think that atmospheric vapour concentrations will continue increasing unsustainably, that I think the invisible hand of the Beer-Lambert law will prevail, that I think a bit more atmospheric carbon dioxide and a bit warmer would be a good thing, that I don’t think we’re going to actually destroy civilization, and that I don’t think that what I think matters very much. However, I have agreed that as a beneficiary of democratic society, I would like to see those concerns that others have equitably addressed, and we’ve been in part discussing how to go about that.

    So pardon my rhetoric, then, but how I see this issue is not the point here, Scott. To the degree that we do need to collectively do something different about the environment it is a political problem, as Baroness O’Neill highlights, and we have to both behave like it is and visibly appear to behave like it is. You can’t just brush off other people demanding apparent sacrifice from us while they are flitting about in airplanes, or living in huge houses they don’t need but have to heat and cool anyway, because we don’t and it’s not the most statistically important part of the problem. You can’t be seen to not censure the excessive life-styles of people who behave like that. You can’t just invoke the heavy hand of the state and massive taxation without being seen to try to address the fears and concerns of those less fortunate than us, who still need to be able to afford food, which is not a lifestyle choice, it’s a life necessity. Otherwise, when you try to sway the hearts and minds of the masses, who you have already belittled at 15:04 yesterday, you will find it to be insurmountably politically difficult because their gut reaction will be, “he’s just another one of those assholes”. I don’t think that, and I’d still like to buy the next round, but I only have one vote.

    Thus it is so that you, of all people, as an abstract Son of Mary theoretician always living dangerously close to falling prey to ivory-tower detachment, need to be especially careful of how you come across to the Sons of Martha, or your political cause is doomed, and understandably so. You are one of the smartest people I’ve come across, Scott. But your seemingly casual pro-state arguments in this discussion to date, when weighed against the apparent social implications of your prescriptions, appear to indicate a lack of attention to detail which implies either a lack of actual concern, or a lack of relevant effort (which is unforgivable now that you have tenure), either of which understandably undermines your position in the eyes of just that portion of the electorate you most need to convince in order to see your fears and concerns addressed.

    My only point, then, is that when you go about invoking government fiat, you need to be more careful to anticipate and address as you go the legitimate fears of others, many of whom have seen or even lived through the countervailing disasters that have previously been caused by just such state interference, or the sort of diatribe I’ve illustrated at 12:08 will be your undoing. Sorry I wasn’t more clear.

  183. Scott Says:

    Vitruvius: OK. Point taken. While I stand by my claim about a carbon tax being the only possible long-term solution, it certainly doesn’t hurt if individuals choose to cut back their carbon footprint unilaterally. And it’s probably true, as a practical matter, that those concerned about climate change will have an easier time being listened to, if they’re seen as doing what they can to decrease their personal footprints.

    As I said, I do what I can here and there (as one example, I haven’t taken a single flight in over a decade purely for vacation purposes, if you don’t count family visits or my honeymoon). But, like most people, I could be and should be doing more. Tell you what: I’ll donate some money to an environmental group today, since you’ve guilted me into it. Happy? :-)

  184. Vitruvius Says:

    Based on what you’ve written I don’t think you need to feel any guilt, Scott, it sounds to me like you’re being personally fairly responsible, other than in the care you are (or were) taking in thinking through the details of and explaining the effects of your plan, at least at the margins, the provisos you would make to mitigate unintended consequences, and the degree to which you are (or were) letting others off for not feeling guilty about their unbridled excesses and waste. And you can imagine my skepticism about faceless “environmental groups”, though of course that’s up to you. Personally I’m happy as long as you’re not mad at me 😉

    So, anyway, let me try to explore the kind of adjustments to the “we just need a carbon tax” kind of proposal that I think would help folks get their head around this. The first problem is that, during the last century, it wasn’t pollution, or the weather, or the 1918 flu pandemic, or Hitler, Stalin, Mao, Pot, &c, or even (technically) war that caused the greatest amount of “unnatural” death, it was the governments of Hitler, Stalin, Mao, Pot, &c. That’s a known known, while any proposed catastrophic “climate change” disaster prediction, forecast, or proselytizing remains at best a known unknown that cannot (as far as we know) be proven a priori. Therefore it should be fairly easy to see how folks might be leery of the known potential dangers from the state, even if they otherwise would be willing to consider the potential unknown dangers from humans’ effects on the environment.

    Personally, I am more concerned about some sort of global overly-authoritarian or totalitarian government going bad than I am afraid of the environment going bad, at least in part because I think we can control our effect on the environment once we’re convinced it’s a good idea, whereas we may not be able to control a global government once it has gone bad. That may just be me, but I do have that one vote you want, and it’s probably others too, and you do want their votes too.

    The second problem is that food is made of carbon &c, and it is essential, and until we’ve got nuclear and solar worked out, heating is “made” of carbon and it is essential too (at least in non-tropical climes). Some might even argue that cooling is essential for the modern economy, at least in tropical climes. So how can we get the price of goods and services to better reflect their true environmental cost without excessive adverse impact to essentials like that, which if I’m not mistaken is our goal here?

    Now these are just some ideas off the top of my head, and they’re the sort of ideas that would have to be fleshed out by folks who know more about this sort of stuff than I do, but it occurs to me that for one thing it’s probably not a good idea to tax all carbon per se. I mean, carbon is an element in the periodic table: it doesn’t cause any variation in the climate in and of itself. Even pure-carbon polar ice-soot albedo effects (if there are any) aren’t carbon’s fault, they’re the fault (if they’re anyone’s fault) of the particulate producers.

    Instead of “taxing carbon”, then, I think that some sort of tax or levy or tariff on non-essential goods and services, and/or non-essentials past some level of consumption, and/or the consumption of luxury and/or optional or otherwise explicitly specified goods and services would be more reasonable. I think that any such policy must take care to make the cost directly related to the rate of damage to the environment as specifically caused by each particular non-essential good or service, as agreed upon by our council of wise folks, and not be based simply on some moral sense of luxury. And I do think that the wording matters, critically. A levy or tariff instead of a tax might very well make more sense when you consider that we’re not dealing with an electorate made up of logicians, mathematicians, and linguists; just ask Kahneman, Ariely, et al.

    Next up, notice that the word “consumption” keeps occurring in the previous paragraph, so I think it would be a good idea to be very clear that it’s not income or wealth or life-style that is being punitively impacted per se, which you have also mentioned elsewhere above. It’s the true environmental cost of non-essential consumption that we are after here; we need to be adamant about being specific about that. So we probably want some sort of non-essential goods and services sales-tax-like thingy. This would also be the place to consider packaging up any price adjustments as being anti-waste, not simply anti-consumption. It’s waste that’s principally evil.

    Another idea might be to make it some sort of a “negative tax”. So even if it is agreed to adjust the price of goods and services to better reflect their true environmental cost, instead of proposing to tax people more than they are currently being taxed (which doesn’t generally elicit a parade), propose taxing them less when they consume fewer non-essential goods and services. People think the government taxes too much anyway, and they will see the negative plan as a reward for virtue, rather than as a punishment for living well. Recall Ariely’s note on how European countries with an opt-out clause for organ donations got a lot more folks to subscribe to the program than those with an opt-in clause. If you want to affect folks’ behaviour, don’t ignore the behavioural economists.

    Anyway, there’s some ideas we can start with, perhaps I’ll think of some more in my sleep tonight. The core central concept I’m pulling for here, though, is that if the folks who are concerned about some sort of unprecedented damage we may be doing to the environment want the cooperation of the folks who are concerned about the precedented damage that we know government is capable of doing, then they need to be more careful than simply waving around the “have the state tax the hell out of everything and let god sort it out” argument.

  185. Vitruvius Says:

    Here’s an idea to help conceptualize how the cost of adjusting the prices of consumption to reflect their true environmental cost could be related to the kind & degree of consumption, in order to legitimately alleviate tax-grab fears. Imagine we have a set of “consumption” classes analogous to computational complexity classes. Let’s say we call them the linear essential consumption class (LE), the linear living-well consumption class (LW), the polynomial luxury consumption class (PL), and the exponential luxury consumption class (EL), with the relationship that for some kind of ⊂:

            LE ⊂ LW ⊂ PL ⊂ EL

    I think, then, that at a minimum you don’t want to get all taxy before the boundary between LW and PL. In more detail: if you want a true environmental-cost policy that remains compassionate about essentials, then the environmental surcharge for class LE consumption should be a constant-zero function of the price of its environmental cost; for class LW consumption it should be a reasonable linear function of the price of its environmental cost; for PL consumption it may start to be a reasonable polynomial function of the price of its environmental cost; and for EL consumption it might start to become a punitive exponential function of the price of its environmental cost (that’s a separate argument, but if folks are going to have it then it belongs in EL, not in LW).

    But remember here that in all cases I’m talking about the surcharge being based on a function of the true environmental cost not already represented in the price of the consumption; I’m not talking about it being based on the retail price of the consumption itself. And, of course, those consumption classes aren’t mathematically rigorous like complexity classes; that’s partly why the partitioning of various consumptions into those various classes is a political problem and not a logic problem.
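
    (Off the top of my head, a toy sketch of that schedule in code; the coefficients, and the assignment of goods to classes, are placeholders that our council of wise folks would have to fill in:)

        # Toy sketch of the class-based surcharge schedule.  All numbers are
        # invented placeholders.  The input is the *unpriced* environmental
        # cost of the good, never its retail price.
        def surcharge(env_cost, consumption_class):
            if consumption_class == "LE":  # essentials: constant-zero surcharge
                return 0.0
            if consumption_class == "LW":  # living well: a reasonable linear function
                return 0.5 * env_cost
            if consumption_class == "PL":  # polynomial luxury
                return 0.5 * env_cost + 0.1 * env_cost ** 2
            if consumption_class == "EL":  # exponential luxury: punitive
                return 0.5 * env_cost * (1.2 ** env_cost)
            raise ValueError("unknown consumption class")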

    I’ve probably already over-complicated the plan, but the general idea is that if you want people to get behind a save-the-environment plan you need to make any surcharges explicitly not about non-environmental costs, and you need to make the plan’s threats to people’s lifestyles less fearful than the threats from the environment. Otherwise, they’re going to say: well, if you’re going to screw up our lives that much, we might as well go ahead and screw up the environment. And it doesn’t matter whether or not I agree with that, it’s the way it is.

    Anyway, there is, I think, precedent for this sort of thing. Where I live we pay a time-of-purchase surcharge on things like electronics goods and motor oil, to cover the costs of properly recycling and disposing of those products so we don’t foul the environment with them. And people aren’t generally complaining about it. Start requiring that folks spend an inordinate amount of time and bother separating refuse into various classes, where the recycling benefits of separating two or more of the classes have not been adequately explained, and it starts to get on people’s nerves. Simply tax people for living well (but not excessively well), on the other hand, and you’ve lost the electorate.

    Lastly, sorry about repeating myself if it’s becoming annoying, but as Strunk and White advise: say it again, say it again, say it again. And now I simply must go, I’ve got to get through the last episode of Banacek tonight, before I go to sleep to think about this problem a bit more. Salut.

  186. Rahul Says:

    I have consistently maintained in this discussion that I don’t think that atmospheric vapour concentrations will continue increasing unsustainably, that I think the invisible hand of the Beer-Lambert law will prevail,

    If I’m not mistaken the consensus expert prediction disagrees with what you think.

  187. craig Says:

    Cleverbot actually knows how many legs a camel has.

  188. Michael Bacon Says:

    Scott@151,

    You say that there were aspects of The Beginning of Infinity that you loved, but that the “closed-minded, dogmatic optimism was not one of them.” Rarely do I find myself in such sharp disagreement with you. I think Deutsch’s view of optimism is clearly not closed-minded or dogmatic, especially in the sense that you go on to elucidate further in your comment.

    You say that “optimism is a personal temperament, not a rational theory of how the world is.” This is partly correct. Optimism, or pessimism for that matter, certainly seems to be part of every person’s temperament. But surely it is more than that. Deutsch views it as a “theory” that all failures — all evils — are due to insufficient knowledge, which he believes to be the key to adopting a “rational philosophy of the unknowable.” It’s the theory that truth objectively exists in the world, although we are only ever at the beginning of discovering it, and that progress consists of learning ever more accurate explanations for all aspects of how the world works.

    You may argue that this sounds an awful lot like “a rational theory of how the world is,” but that’s not the case. It’s a theory that informs behavior and provides a philosophical stance that helps us learn how the world really works. That you are mistaking one for the other is clear from the examples you go on to give.

    You say “[a]s far as I can see, there’s no teleological reason whatsoever why, for example, if we find that we’re dramatically warming the earth through CO2 emission, the problem must be solvable by, say, emitting SO2 to counteract the warming . . .” Of course you are correct! Emitting SO2 to counteract the warming may well be the wrong action to take, and it might not solve the problem, or it might result in even greater problems. This “solution” may be the wrong one, but you’re not arguing that there is no solution. It seems that your “solution” would be rapid and substantial reductions in CO2 emissions. Fair enough, that could well be the correct (or the least worst) solution to the problem! But being the optimist that you are, you do believe that, in theory, a “solution” worth trying exists. 😉

    You go on to say that “ . . . and if all that sulfur dioxide in the air causes severe problems of its own, well, there must be a solution to that, and so on ad infinitum. As far as nature is concerned, it’s perfectly fine for this process simply to terminate with the vast majority of us dead.” Here, being the pessimist that you are, you seem to conflate whether problems are solvable at all, with whether, in light of all of the damage we’ve done by undertaking no action or prior mistaken actions, the problems kill us before we succeed in finding and implementing appropriate solutions.

    You are right, of course. We could have insufficient knowledge, or fail to implement the appropriate solution, or we might not have the resources available to make necessary changes before we run out of time. There are, in fact, countless things, known and unknown, that could result in such a catastrophe befalling the human race. Clearly, Deutsch wouldn’t disagree that these are possible outcomes. In fact, a major theme in his writing is that human history is littered with just such failures, albeit on a smaller scale. Of course, he is optimistic that we can do better. As, I believe, are you.

    You go on to say that “[i]f we actually want to solve our problems, then instead of plunging blindly forward like the local-optimizers we are, we might need to backtrack within the tree of technological possibilities, and try something totally different from what we were doing before.” Again, what you say about this possibly being the correct approach cannot be disputed by any reasonable person. However, how you can attribute (by implication?) beliefs such as “plunging blindly forward” to Deutsch is really beyond me. Nowhere does he advocate blindly plunging ahead, or even anything remotely like that. I leave it to others to read the book and draw their own conclusions regarding this and the other points: res ipsa loquitur.

    You then recall that in “. . . one striking passage of the book, Deutsch muses that, sure, it’s possible humans will wipe themselves out, but even if so, that doesn’t contradict his theory of optimism, since some other species on another planet (in his nomenclature, those aliens will also be “people”) will then colonize the universe and do all the other things that humans would’ve done. (Mind you, Deutsch doesn’t guess that will happen—in his typical manner, he deduces it from some principle or other.)”

    I think that “muse” is certainly the proper term to characterize this type of broad speculation. However, I don’t know what any of this has to do with anything other than “motivating” your analysis. Perhaps you believe that the “callousness” you (incorrectly, I maintain) perceive discredits Deutsch’s philosophical stance, but it does not. Sure, you can say that this “striking” comment logically follows from his basic arguments, and that therefore there must be some defect in those arguments, but I choose not to follow you down that slippery slope, at least until the path has been cleared enough to see the way forward.

    Finally, you say that “. . . I think it’s important for people to understand that this sort of abstract, cosmic, almost Spinozan optimism is the very most that’s available to you on a Deutschian worldview, by Deutsch’s own admission.” But that’s plenty of optimism for me, at least as a theory of rational philosophy, and more than enough for you, by your own admission. :) But this only lays the groundwork for your main point. As if to seal the argument, you say “[i]f we want our descendants to be the ones who will survive and flourish, then we better fight for it, since it’s a very real possibility (or even likelihood?) that it won’t happen.”

    Counterposing Deutsch’s philosophical optimism with your call to arms in the real-world battle for truth and survival is just wrong. First, the concepts are not in opposition, either generally or specifically in any of Deutsch’s writings and arguments. Moreover, I think I’m safe in saying that there is little doubt that you both wholeheartedly desire that “our descendants” have the knowledge, commitment, wealth, and time to survive and flourish against the various problems, some of which could end human existence, that they will inevitably face. Really, Scott, isn’t Deutsch’s view on this clear from everything he writes?

    Survival and flourishing will depend, however, on continuing to develop an optimistic civilization that is open and not afraid to innovate (yes, including backtracking if that is best). Ours must be a civilization based on a tradition of criticism that allows us to continuously detect and eliminate errors across every aspect of our culture. I’m “optimistic” that the free exchange of ideas in forums like Shtetl-Optimized will help us achieve that.

  189. Vitruvius Says:

    Brilliantly elucidated, Michael (188).

    Rahul (186), I think that (1) at some point the Beer-Lambert law is bound to prevail, because at some point any specific absorption band will become saturated, so we should probably be careful to focus on whether or not that will happen soon enough, and not on whether or not it will happen at all; (2) I don’t agree with the consensus that disagrees with what I think, I agree with the consensus that disagrees with that consensus (you do realize that there are two or more consensi here, or we wouldn’t still be talking about it, because we’re not nutz), and I think I’ve made that abundantly clear; and (3) I think I’ve made it abundantly clear that I don’t think that what I think matters that much; that I only get one vote, just like you.
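
    To put a toy number on the saturation point in (1), here is a minimal sketch of Beer-Lambert absorption in Python; the absorption coefficient and path length are invented placeholders, not real atmospheric values:

        import math

        # Beer-Lambert: transmitted fraction T = exp(-epsilon * c * L),
        # so the absorbed fraction is 1 - T.  epsilon and L here are
        # illustrative placeholders, not measured values.
        epsilon, L = 0.5, 1.0

        for c in [1, 2, 4, 8, 16, 32]:
            absorbed = 1 - math.exp(-epsilon * c * L)
            print(f"concentration {c:2d}: fraction absorbed = {absorbed:.4f}")

    Each doubling of concentration buys less and less additional absorption as the fraction approaches 1; that is the saturation in question, and the argument is over when it matters, not whether.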

    So rather than beating the dead horse of our coming to agreement on the weather of the future, I think we should concentrate on those areas where we may be able to find productive consensus. Scott seems to want to come up with some sort of system, procedure, methodology, or algorithm whereby we can better adjust the prices of goods and services to have them better reflect their true environmental costs, where necessary, and I think that’s a generally good idea regardless of anything to do with the weather. It applies equally well even if we are only talking about, say, pollution, which I abhor (real pollution, I mean; carbon dioxide isn’t pollution: it’s plant food).

    Thus, I’m trying to help Scott come up with a plan for helping others, who specialize in things we don’t (like politics), come up with potential policies or programs for such a true-cost adjustment. That’s why I’ve just written over 2,200 words on that adjustment question, as shown above, and not on hypothetical catastrophic “climate change”, which would be pointless given that folks hereabouts apparently disagree with me about that anyway.

    That’s also why it remains correct that, as I wrote in my first comment on this page, I don’t argue about this any more. I’m not going to change anyone’s opinion or pessimism or optimism: everyone’s too entrenched. But there are places where the various camps’ territories may overlap, places where, through story-telling and exploring new avenues of thought, areas of consensus might be found, areas in which different camps may find common ground and shared compromise to assist each other in addressing their differing fears and concerns.

    And that’s why politics, when it’s done well, isn’t an argument: it’s a negotiation. Can we negotiate a consensus position regarding some sort of true-cost adjustment? Or do people just want to argue, fight, and brawl? I’d prefer the former.

  190. Scott Says:

    Michael #188: Thanks for the comment! I have very little disagreement with anything you say. So if you’re correctly reporting what Deutsch thinks, then by transitivity, I have very little disagreement with Deutsch either about any of these issues! So yes, it’s possible that I simply misunderstood him. On the other hand, I can testify that, when I met Deutsch in 2002, he was extremely skeptical about the reality or seriousness of human-caused climate change. In The Beginning of Infinity, he adopts a much more reasonable stance, but one that I still disagree with: he argues that the problem is real but that, as with all such problems, new knowledge will almost certainly expand our options for dealing with the problem so radically that it’s almost silly to debate what to do with the knowledge we have right now. (Tell me if I’m getting him wrong.) He gives, as an example, the case of the color TVs that required red phosphor to work, prompting the ever-wrong environmentalists to declare that color TVs could no longer be manufactured once the world ran out of red phosphor. Of course, they weren’t even imagining the invention of LCD screens.

    Personally, I think the fact that doomsayers were wrong various times in the past has to be balanced, not only against the fact that they were right various times in the past, but also against the basic Malthusian insight that, in a world with finite resources and no rational plan for how to manage them (allowing exponential explosions in their use), it’s almost a mathematical necessity that the doomsayers will become right before too long. Deutsch seems to treat “knowledge” as a trump card that can always get our species out of such tight spots (if there’s anywhere in the book where he acknowledges the contrary, please tell me where!). And that’s probably the most fundamental place where I part ways with him. By all means, let’s develop new knowledge as fast as we can; we’ll need all the knowledge we can get! But as a “backup plan,” let’s also figure out what we need to do to preserve our planet using current knowledge, in the admittedly-unlikely scenario that the LCD screens turn out not to be feasible and we’re stuck with the red phosphor.
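
    To put a toy number on that Malthusian point, here’s a minimal sketch in Python; the stock size and growth rate are invented purely for illustration:

        # A finite stock consumed at a rate that grows 3% per year.
        stock = 1_000_000.0   # total units of some finite resource
        use = 1.0             # units consumed in the first year
        years = 0
        while stock > use:
            stock -= use
            use *= 1.03
            years += 1
        print(f"exhausted after about {years} years")

    A stock that would last a million years at constant use is gone in roughly 350 years at 3% annual growth. That’s the sense in which exponential growth makes the doomsayers right “before too long.”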

    Having said all that, I stand corrected on one point: yes, you’re right, we ought to be optimistic. Like the men nailed to crucifixes at the end of Monty Python’s Life of Brian, we ought to whistle “Always look on the bright … side of life!” 😀 So in tonight’s post (the “Eigenmorality” one), I do indeed try to give myself a reason for optimism.

  191. quax Says:

    Very happy to see that the debate has moved on from Eugene to Deutsch, as the former was really way too easy a target.

    But what I’d really be interested in is what you make of the Howard et al. claim to have identified contextuality as the ‘magic’ ingredient for quantum computation, i.e. the ‘right stuff’ that delivers the elusive quantum speed-up. The paper has now been published in Nature.

    Motl has already opined on it, dismissing it in his usual subtle way, which suggests to me that this research may deserve some attention (I regard him as a very good counter-indicator).

  192. Michael Bacon Says:

    Scott@190,

    Thanks for the response. Perhaps I wasn’t as clear as I’d hoped to be on one important matter.

    The main point I was trying to make was that “optimism” as envisaged by Deutsch is the proper philosophical stance to take in addressing problems, potential solutions, and the unknown more generally.

    I specifically wasn’t trying to take any particular view with respect to issues like whether or not Deutsch is “extremely skeptical about the reality or seriousness of human-caused climate change,” or whether or not such a position is right or wrong.

    Assuming your characterization of his skepticism is correct (and given what he has written and said publicly, I have no reason to doubt that you’re right), then to answer your question directly: no, I don’t believe you are getting him wrong. 😉

    But I fail to see what that has to do with the point I was trying to make. If such skepticism turns out to be misplaced, the failure would be the result of his incorrectly analyzing the evidence, drawing the wrong conclusions (in other words, coming up with bad “explanations”), and proposing or failing to propose appropriate solutions that can realistically be implemented to address the problem. Incidentally, I would apply the same criteria to those who take views different from Deutsch on this important issue.

    In any event, none of this means that “optimism,” as described by Deutsch, is not the right philosophical approach.

    Again, thanks for providing such a great forum to discuss these types of questions.

  193. Scott Says:

    Michael #192: I guess it’s a character flaw of mine, that I’m constantly trying to shortcut my evaluation of abstract normative doctrines, by looking at how the doctrines’ most prominent proponents have actually applied them to the concrete issues I care about most. (Cf. my discussion of “paradigm cases” in the IIT thread a few weeks ago.)

    As one example, I long ago decided, based on the facts of Heidegger’s life, that there’s nothing I could possibly learn from studying Heidegger’s ethics. As a result, maybe I’m missing out on some important abstract ethical insights … I guess I’ll never know! :-)

    (Obviously, I don’t mean to compare Deutsch to Heidegger here — ugh!)

  194. Michael Bacon Says:

    Scott@190,

    There is one important point that I failed to mention in my prior comment.

    You complain that “. . . Deutsch seems to treat ‘knowledge’ as a trump card that can always get our species out of such tight spots (if there’s anywhere in the book where he acknowledges the contrary, please tell me where!).” You say this is probably the most fundamental place where you part ways with him. However, I think this is a real mischaracterization of what he says, not dissimilar to what you argued in your prior comment.

    If you had said Deutsch believes adequate knowledge, correctly applied, in sufficient time and with sufficient vigor, can ALWAYS get us out of a jam, I think you’d be right. But there are a lot of qualifiers there.

    What he doesn’t say is that any old knowledge, applied haphazardly when it’s too late to do any good, will suffice to solve any problem, pressing or not. In fact, at the end of the chapter on “Optimism” he even starts off his “Terminology” definitions 😉 by defining “blind optimism” as reckless overconfidence: proceeding as if one knew that bad outcomes would not happen. The book is full of examples of mini-enlightenments and progress cut short by the failure of people to keep them going, because they simply didn’t know how.

    Our time in the sun has so far lasted a bit longer than prior episodes, but perhaps we too will come up against problems we can’t solve. No one claims, not even an optimist like Deutsch, that we will always succeed in the end. Perhaps “Eigenmorality” will provide us with an additional arrow in our survival quiver. :)

  195. Forget the Turing Test: Here’s How We Could Actually Measure AI | TOJFL ® Says:

    […] the Turing test. But the judges of this test were apparently easily fooled, because any cursory exchange with ‘Eugene Goostman’ reveals the machine inside the ghost. Maybe the time has come, 60 […]

  196. pschwede Says:

    The problem the Turing Test runs into is the reduced short-term memory of modern 13-year-olds, a product of computer-mediated “communication” skills.

    Human intelligence is approaching artificial intelligence (as in convergence).

  197. atang Says:

    At least we can conclusively pass the Turing Test for dogs.

    http://existentialcomics.com/comic/15

  198. J Says:

    Your conversation with the chatbot is on the air right now on Spanish public radio. I’ll post the link to the podcast when they upload it…

  199. J Says:

    And here’s the link for those who understand Spanish:

    http://www.rtve.es/alacarta/audios/fallo-de-sistema/fallo-sistema-episodio-141-desmontando-kurzweil-02-06-07-14/2644758/

  200. Una máquina supera el test de Turing por primera vez | Alfa Hélice Says:

    […] Here we leave a link to the conversation that took place in one of the cases: http://www.scottaaronson.com/blog/?p=1858 […]

  201. An Interview with Eugene Goostman or How I failed the Turing test | Positivism Says:

    […] Scott Aaronson, My Conversation with “Eugene Goostman,” the Chatbot that’s All Over the News for Allegedly Passing the Turing Test Shtetl-Optimized, The Blog of Scott Aaronson http://www.scottaaronson.com/blog/?p=1858 […]

  202. Por primera vez un programa supera el test de Turing - Un geek en Colombia Says:

    […] this page one can read a conversation that a mathematical scientist recently had with a version […]

  203. Rewriting the Rules of Turing’s Imitation Game | 阅读纵横 Says:

    […] an excerpt from one exchange, for […]

  204. Geslaagd voor de Turing test! Maar wat betekent dat eigenlijk? | Charlotte Vlek | Researcher, Science writer Says:

    […] can talk to Eugene yourself. Computer scientist Scott Aaronson has already published his critical conversation with the chat program. Eugene turns out not to know how many legs a camel has, and gives with the […]

  205. Assorted links Says:

    […] computer program has passed (an easy version of) the Turing test. It doesn’t mean […]

  206. The Turing Test: Why I Think Eugene Goostman Didn’t Pass | Evolving Understandings Says:

    […] plenty of people have published the transcripts of their conversations with Eugene (see here, here, and here), and frankly the results are unimpressive at best.  Eugene’s MO seems to be […]

  207. Felsefe Turk Says:

    This is a typical program. It’s a matter of “if”s. Why is this so overrated? You just need enough labour to create an answer database. It’s not intelligence, just a database.

  208. Ozzy Says:

    How is this any different from Alice or even Racter? There’s no sign of intelligence in any of its answers. How on earth can a piece of software pass the Turing Test by giving such irrelevant answers? Here’s what happened: when all the algorithms fail, it selects random text from a database and fills in the blanks with keywords from the question. Wow, so original… There’s no way this software could convince over 30% of people that it’s human, unless that 30% wasn’t paying attention.
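
    For what it’s worth, the fallback trick described above really is just a few lines of code. Here is a minimal sketch in Python; the templates and the keyword picker are invented for illustration, not taken from any actual chatbot:

        import random

        # Canned templates with a blank for a keyword from the question.
        TEMPLATES = [
            "I still don't know anything about {kw}. Anything else?",
            "Oh, {kw}! What a fruitful topic ;-)",
            "Why do you ask about {kw}, by the way?",
        ]
        STOPWORDS = {"how", "many", "does", "a", "an", "the", "have", "is", "what"}

        def fallback_reply(question):
            # Take the first non-stopword as the "keyword", then drop it
            # into a randomly chosen template.
            words = [w.strip("?.,!").lower() for w in question.split()]
            keyword = next((w for w in words if w and w not in STOPWORDS), "that")
            return random.choice(TEMPLATES).format(kw=keyword)

        print(fallback_reply("How many legs does a camel have?"))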
