A Panel Discussion on the Ethical and Moral Implications of Affective Computing

Lee Jonathon Steen
Robert Morris University
ljsst8@mail.rmu.edu

Abstract

As human-computer interaction (HCI) becomes more seamless and natural, people will tend to treat computers like real people. Reeves and Nass (1996) identified this link in a theory dubbed “the media equation,” which holds that people treat media as they would another person. Many researchers have theorized that, for HCI to break new ground, computers will need to understand and respond to human emotions (Hawkins & Blakeslee, 2004; Minsky, 2006; Norman, 2004; Picard, 1997; Thagard, 2005). The problem with current HCI designs is that they attempt to prevent user frustration only through better design. Picard (1999), an advocate of affective computing, states that “[m]ost HCI research has aimed to prevent frustration … [h]owever, there is also a need to address frustration at run-time” (p. 829). Picard (1997) believes that “affective computing,” which she defines as “computing that relates to, arises from, or deliberately influences emotions” (p. 3), can address two major design issues in HCI: “[d]esign-time and run-time identification of frustrating situations, and [h]elping reduce user frustration during an interaction” (Picard, 1999, p. 830).

As computers acquire this capability to read and understand human emotions, details that have in the past been truly private to a person may be revealed. For example, D’Mello et al. (2008) and Kapoor et al. (2007) have devised auto-tutor systems that detect user frustration, boredom, confusion, and interruptions in flow. If the system identifies one of these emotions, it can interrupt the session or change its flow; if it detects confusion, for instance, it can interrupt the user and attempt to clarify the issue. In this way, the computer becomes more interactive, and the interaction effectively becomes computer-human interaction (CHI), with the computer assuming the active role. This is one scenario in which a system helps a person by identifying and responding to an emotion.
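To make this detect-and-respond loop concrete, the sketch below (in Python) shows one hypothetical way such a system might map detected affective states to run-time interventions. The state labels, confidence threshold, and intervention messages are illustrative assumptions for discussion, not the published AutoTutor implementation of D’Mello et al. (2008) or Kapoor et al. (2007).

```python
# Hypothetical sketch of an affect-aware tutoring loop.  The states,
# threshold, and interventions are assumptions for illustration only.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class AffectEstimate:
    state: str         # e.g. "confusion", "boredom", "frustration", "flow"
    confidence: float  # detector confidence in [0, 1]


# Detected state -> run-time response (cf. Picard, 1999, on addressing
# frustration at run-time rather than only at design time).
INTERVENTIONS: Dict[str, Callable[[], None]] = {
    "confusion":   lambda: print("Pausing to clarify the current topic."),
    "boredom":     lambda: print("Switching to a more engaging activity."),
    "frustration": lambda: print("Offering encouragement and an easier subgoal."),
}


def respond_to_affect(estimate: AffectEstimate, threshold: float = 0.8) -> None:
    """Interrupt only when the detector is confident; a learner in
    "flow" is deliberately left uninterrupted."""
    intervention = INTERVENTIONS.get(estimate.state)
    if intervention is not None and estimate.confidence >= threshold:
        intervention()


# Example: at run time, sensor fusion (posture, facial features,
# dialogue cues) would produce estimates like this one.
respond_to_affect(AffectEstimate(state="confusion", confidence=0.9))
```

The ethically salient point is that nothing in this loop is specific to tutoring: the same detect-and-respond pattern generalizes to the sales and workplace scenarios discussed next.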
However, future systems may not be as welcome. For example, if a car salesman used a mobile system that identified a customer’s feelings about a car or about the pressure to buy, the customer might consider it an invasion of privacy. Traditionally, a customer could feel safe knowing the salesman could not tell how she really felt; with such a system, that is no longer the case. Another example is an employee whose emotions are monitored by an employer, who could identify confusion or boredom and apply some treatment to “increase performance.” While many applications of affective computing may be advantageous to the people using them, there are still many questions about the moral and ethical implications of these systems. As these systems advance, they are likely to become more and more accurate, and thus more and more invasive.

The purpose of the proposed panel is to discuss the moral and ethical issues the audience foresees. There are two target audiences. The first is researchers in HCI/CHI, interface design, and affective computing. The second is people who would potentially use affective computing systems. While this second group may be large, that is intentional: to obtain a broad sample of feedback, the panel needs to be open to all people. Slots for panel discussion leads are intentionally left open and will be filled from interested participants. To encourage good debate and discussion, the panel leads should bring perspectives from different research fields to the table. A list of questions will be handed out to the audience to elicit feedback on specific scenarios that affective computing has led to or will lead to.

REFERENCES

D’Mello, S., Jackson, T., Craig, S., Morgan, B., Chipman, P., White, H., Person, N., Kort, B., el Kaliouby, R., Picard, R. W., & Graesser, A. (2008, June). AutoTutor detects and responds to learners’ affective and cognitive states. Workshop on Emotional and Cognitive Issues at the International Conference on Intelligent Tutoring Systems, Montreal, Canada.

Hawkins, J., & Blakeslee, S. (2004). On intelligence. New York: Times Books.

Kapoor, A., Burleson, W., & Picard, R. W. (2007). Automatic prediction of frustration. International Journal of Human-Computer Studies, 65, 724-736.

Minsky, M. (2006). The emotion machine: Commonsense thinking, artificial intelligence, and the future of the human mind. New York: Simon & Schuster.

Norman, D. A. (2004). Emotional design. New York: Basic Books.

Picard, R. (1997). Affective computing. Cambridge, MA: The MIT Press.

Picard, R. (1999). Affective computing for HCI. Proceedings of HCI International (the 8th International Conference on Human-Computer Interaction) on Human-Computer Interaction: Ergonomics and User Interfaces, 1(1), 829-833.

Reeves, B., & Nass, C. I. (1996). The media equation: How people treat computers, television, and new media like real people and places. New York: Cambridge University Press.

Thagard, P. (2005). Mind: Introduction to cognitive science (2nd ed.). Cambridge, MA: The MIT Press.