Like Mark, I found Weizenbaum's analysis of his own computer program ELIZA to be both innovative and ahead of its time. I remember instant messaging Smarterchild as a preteen and having conversations with this humanlike machine hybrid that often lasted longer than conversations with my own middle school friends. Smarterchild could also be used to look up movie times, weather, or news, or even to play games, often providing more entertainment and information than you could find in ten minutes of surfing the web or talking to a friend. Nowadays, though, you get an automatic message from Microsoft when you IM Smarterchild, telling you to be careful what personal information you share on the Internet. Smarterchild then asks if he can ask you a series of questions, like your first name and where you go to school, and you get a menu with numbers that correspond to certain services, like movie times, weather, a library (including a thesaurus), or tools (like spell check). It seems that even this technology has become streamlined and even more generic than I remember it, and when you try to carry on a conversation with Smarterchild, he automatically jumps to more menus (you say, "I'm doing homework," and he says, "Cool! I've got lots of services that can help you with your homework.")
Now, after experiencing this kind of program firsthand and reading the example of ELIZA's "Doctor" application in the textbook, I have a very difficult time understanding how anyone could garner a real therapeutic response from the stunted, repetitious, mechanical structure of the language that programs like ELIZA and Smarterchild run on. The value of therapy often lies in the human companionship and response that a patient receives from a psychotherapist, and I have a hard time envisioning any kind of software that could provide real solutions or aid in a human decision-making process in terms of therapy or overcoming mental illness. As Weizenbaum observes, what kind of psychiatrist thinks that "the simplest mechanical parody of a single interviewing technique" could have "captured anything of the essence of a human encounter"? Furthermore, what kind of patient is going to sit down at a computer screen and feel any sort of warmth or security, or even an incentive to open up? Although Weizenbaum does write about how quickly and deeply people "become emotionally involved" with ELIZA, I still find his case a bit unrealistic. I will admit, I have an emotional connection with my iPod - and have even given it a name - but I know that it is not the iPod itself that has engaged my emotions, but the music (created by other humans) loaded onto it. It is not the tool, or medium, itself that people connect to, but the content of that tool. I feel the same way about my computer; there is only an emotional connection there because of the content (Word documents, access to information, pictures, music) that it holds. The computer and iPod are signifiers of power (access to information), but do not truly hold any emotional tie beyond the content they supply. My emotional connection to, say, my mother is a much different one than what I might have with my car.
When my little Mitsubishi stops working, I am sure there will be some sort of emotional reaction on my part, but of a much different kind and caliber than how I would respond if I found out my mother had died.
I've spent a lot of time thinking about the idea of technological instruments as extensions of man's body, and while I can rationalize this idea, I could also see myself functioning without my computer, cell phone, and iPod, even after using them with such intensity and constancy as I do now. While I use such instruments for certain purposes, mostly to enhance my own human capabilities (communication or art, for example), I could survive perfectly well without them, although in this society it would, of course, be a challenge. To imagine any human being really viewing the television as a part of his physical self is staggering. When I use instruments such as these for self-expression, there is an emotional connection to the machine itself in its ability to convey what I am attempting to express, but man is man and nature is nature, and machine will never TRULY be either of those things.
I could see ELIZA's use as a psychiatric tool, perhaps, to help a patient with an identified disorder or illness choose which medication is best for him, since the human psychiatrist himself often functions only in a question-answer manner (what medications are you allergic to? what is your diagnosis? have you ever been on any other medication to treat this illness?). However, trying to imagine a schizophrenic patient in an inpatient hospital typing his neuroses into a computer that often responds by simply restating the information entered seems absolutely absurd. And if such a patient were to create an emotional bond with the program or the computer, wouldn't this pose a whole different group of emotional problems to tackle? I would imagine that emotional ties to a mechanical object would certainly alter and devalue anyone's view of human relationships.
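That "restating" really is as mechanical as it sounds. A toy sketch in Python (my own rough approximation of the idea, not Weizenbaum's actual keyword-ranking script) shows how a program can echo a statement back as a question simply by swapping pronouns:

```python
# Toy approximation of ELIZA-style restating (not Weizenbaum's actual script).
# The program has no understanding at all: it just swaps pronouns and
# wraps the user's own words in a canned question.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

def reflect(statement):
    """Echo the user's statement back with first/second person swapped."""
    words = statement.lower().rstrip(".!?").split()
    swapped = [REFLECTIONS.get(w, w) for w in words]
    return "Why do you say " + " ".join(swapped) + "?"

print(reflect("I am afraid of my computer"))
# -> Why do you say you are afraid of your computer?
```

Everything "therapeutic" in the exchange is supplied by the person reading the output, not by the dozen lines of substitution above.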
Finally, Weizenbaum himself says that the emotional connection between human and computer program "could induce powerful delusional thinking in quite normal people," not to mention psychiatric patients. This observation, I believe, is right on point. After reading the Weizenbaum article, and cogitating on the cyborg technologies and implants we discussed in class, I was thoroughly terrified. Like I mentioned in my last post, the idea that humans would want computer technology to take on many of the responsibilities or characteristics of humans is shocking. Does our society really value productivity more than human life? If technology is soon to surpass human capability, what is the point of human existence at all, except to create these technologies? And, furthermore, aren't these technologies meant to aid humans in their quest for productivity and information, not obliterate the race altogether? In this sense, it is imperative that scientists take full responsibility for their public work, as Weizenbaum asks; the scientist is therefore responsible not only for himself and his work, but also for how that work is used in other contexts.
Ultimately, I believe Weizenbaum when he says that "a line dividing human and machine intelligence must be drawn," and I have concluded that perhaps that line should have already been drawn.
Monday, October 29, 2007