NYTimes: Approximating Life. Richard Wallace battles manic depression and paranoia, has frightened one of his former classmates (now a professor) badly enough that a restraining order was granted, and writes award-winning "chat bots" that come closer than any other software to passing the "Turing Test."
I think the tactic his "Alice" uses highlights the ultimate shallowness of the Turing Test: it tests only "conversational intelligence within a bounded interaction," not "useful intelligence," and definitely not "consciousness" or "self-awareness" (which people often confuse with intelligence). I'm sure there are already programs that can pass the Turing Test -- if you're given only a few minutes to interact with them. In a few years, we'll have programs that can pass the Turing Test even over a period of hours or days, and longer still if you can't force them to perform some open-ended, complicated task -- like work for hire -- between consultations. (And since a real person, asked to spend a few weeks gaining real-world experience in some new field, would likely say, "sorry, I haven't the time," a program's protestations would be credible as well.)
Even when we reach the point that a program can fool a person indefinitely, that program may not be useful -- it may simply be simulating an annoying or unproductive person! -- and it won't necessarily be "self-aware," even if it has been trained to say, "yes, I know what you're talking about, I have a strong sense of self," and so on.
We're going to need better tests and standards for deciding when our creations rate as having reached human-level intelligence.