On Friday, the 17th of November, we had a chance to hear Sherry Turkle's presentation "New Complicities for Companionship: A Nascent Robotics Culture". Turkle is a professor of the Social Studies of Science and Technology at MIT and the author of several well-known books, including "Life on the Screen: Identity in the Age of the Internet" (1995). The talk was part of Stanford's Seminar on Science, Technology, and Society.
In her presentation, Turkle emphasized that technologies are never just tools; they are evocative objects: things we think with (also the title of a book that MIT Press will publish this year). She discussed relational artifacts that present themselves as having minds of their own. She explained that there is a movement in computation from users projecting themselves onto the screen towards computational entities (agents and robots) becoming relational companions. A point that Turkle underlined was that the art and science of constructing relational artifacts rests on an understanding of human psychology and human vulnerability. Interactions with computational objects move more and more towards the psychology of engagement and object relations.
The presentation led to a lively discussion. Turkle rightly pointed out that there is a tendency to present AI systems as having human-like qualities and skills, which is, to a large degree, only an illusion. I asked her about the possibility of a future system that would be able to recognize real psychological patterns. For instance, we could one day have an agent that follows a conversation and suggests something like: "You seem to be unnecessarily harsh towards your spouse, which might stem from the fact that he resembles a parent who was unfair towards you in your childhood." Turkle answered briefly that this needs to be thought about. The shortness of her answer made me wonder whether she thought that I had missed the core point of her presentation. However, I think there is a chance to develop systems that could model psychological phenomena in enough detail to help people gain a better understanding of themselves.
Weizenbaum's Eliza and all its successors merely react to keywords in an illusory therapeutic conversation. An alternative would be to develop systems that build their "understanding" of psychological phenomena from a large number of interactions among people over an extended period of time. A modest attempt in this direction was our paper "Emotional Disorders in Autonomous Agents?", in which we outlined a minimalistic model of anxiety, depression, and mania (Hyvärinen and Honkela, Proceedings of ECAL'99, European Conference on Artificial Life, Springer, 1999, pp. 350-354). The paper itself is rather sketchy, but I think the core ideas are still valid.
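To make the idea a bit more concrete, here is a minimal sketch in Python of what it could mean to model an emotional disorder as a distortion of an agent's learning dynamics. This is not the model from our ECAL'99 paper; the class name BiasedAgent, the disorder parameters, and their effects are hypothetical choices, used only to illustrate the general approach.

```python
import random

# Hypothetical sketch, not the model from Hyvarinen & Honkela (1999):
# a two-armed bandit learner whose value updates are biased by simple
# "disorder" parameters. The parameter names and their effects are
# illustrative assumptions, chosen only to show how emotional disorders
# might be cast as distortions of an agent's learning dynamics.

class BiasedAgent:
    def __init__(self, neg_weight=1.0, pos_weight=1.0, explore=0.1):
        # neg_weight > 1 exaggerates losses (an "anxiety"-like bias);
        # pos_weight > 1 exaggerates gains (a "mania"-like bias);
        # low pos_weight with low explore dampens engagement
        # (a "depression"-like bias).
        self.q = [0.0, 0.0]          # value estimates for two actions
        self.neg_weight = neg_weight
        self.pos_weight = pos_weight
        self.explore = explore

    def act(self):
        # Epsilon-greedy choice between the two actions.
        if random.random() < self.explore:
            return random.randrange(2)
        return max(range(2), key=lambda a: self.q[a])

    def learn(self, action, reward, lr=0.1):
        # Bias the prediction error before the standard update.
        error = reward - self.q[action]
        error *= self.neg_weight if error < 0 else self.pos_weight
        self.q[action] += lr * error

def run(agent, steps=2000):
    # Action 0 pays off with probability 0.6, action 1 with 0.4.
    payoff = (0.6, 0.4)
    total = 0.0
    for _ in range(steps):
        a = agent.act()
        r = 1.0 if random.random() < payoff[a] else -1.0
        agent.learn(a, r)
        total += r
    return total

if __name__ == "__main__":
    random.seed(0)
    for label, agent in [
        ("balanced",   BiasedAgent()),
        ("anxious",    BiasedAgent(neg_weight=3.0)),
        ("manic",      BiasedAgent(pos_weight=3.0, explore=0.3)),
        ("depressive", BiasedAgent(pos_weight=0.3, explore=0.02)),
    ]:
        print(label, run(agent))
```

In such a toy setting, the "anxious" agent overreacts to losses and the "depressive" agent barely engages with its environment. The point is only that psychological patterns can, in principle, be expressed as measurable parameters of an agent's behavior, which is what a more serious model would need to do at scale.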
3 comments:
This sounds fascinating; I wish I could have been there. Does Turkle, to your knowledge, distinguish the uniqueness of "modern" technology in any way? Making meaning with objects is certainly an old phenomenon, right?
Jonathan, thank you for your comment and sorry for the huge delay in replying. You are right that making meaning with objects is an old thing. Turkle seems to warn us that we shouldn't too easily think that even our modern AI technologies could be conscious or have emotions. I tend to agree with her, but with the important addition that machines can perform behaviors that evoke significant responses in us - even if we know that it is just a machine. One good example is Sparx, a game that is designed to treat depression: http://www.bmj.com/press-releases/2012/04/19/effectiveness-sparx-computerised-self-help-intervention-adolescents-seekin
Thanks for your ideas and links, Timo:-)