Abstract
The field of human-robot interaction has gained significant attention in recent years, and researchers have been investigating interaction behaviors such as politeness1. This study investigates the role of politeness in human-robot interactions by manipulating a robot’s behavior and studying humans’ perceptions of it. Based on Grice’s Maxims of Politeness2,3 and Lakoff’s Politeness Rules4, which provide us with etiquette rules, we designed two comic strips depicting a robot behaving politely and impolitely. We collected survey data from participants who were each shown one of the two visuals. The survey included questions about their impression of the robot and about how they might behave if they were in the scene. Our results indicate that the polite robot was perceived more favorably than the impolite robot, with participants rating the polite robot as more cooperative and more helpful. We hope to provide insights into the polite behavior of robots in human-robot interactions to guide the design of more effective and socially acceptable robots.
Introduction
We live in an increasingly robot-driven world. The field of robotics is progressing rapidly, and the role of these machines in various domains has become increasingly important. From manufacturing and healthcare to education and personal assistance, robots have the potential to enhance human productivity, safety, and well-being. However, as robots become more integrated into society, their behavior must be appropriate for the context.
Recent examples, such as Friedman et al.’s study5 – in which a robot was made to navigate through a crowd of people – and Cucco et al.’s study6 – in which robots initiated interactions with people – examined how people perceive politeness in robots, finding that people perceived verbal cues as more polite than physical cues. Another instance is the AI chatbot from Microsoft Bing7 – where the chatbot “became hostile [calling people] ugly, short, overweight, [and] unathletic” – which highlights the challenges of creating robots that behave appropriately in all situations. The chatbot’s behavior, which included insulting and discriminatory comments, was widely criticized and ultimately resulted in its removal from Bing. Unfortunately, this is not an isolated incident: robots and AIs have been known to exhibit bias, discrimination, or privacy violations, and sometimes fail to meet human expectations for social norms.
We believe that politeness is a crucial factor in shaping these interactions. To address these challenges, our motivation is to create design recommendations for socially acceptable mannerisms and behaviors for robots in human-robot interactions. Politeness can affect human perceptions of trust, empathy, cooperation, and social norms. Grice’s Maxims of Politeness and Lakoff’s Politeness Rules provide us with principles and theories about politeness. For example, the Maxim of Quantity tells us to be informative, and the Maxim of Manner tells us to be clear. Lakoff’s politeness rules tell us, among other things, not to impose and to make the receiver feel good. Multiple studies in the field of human-robot interaction have used these rules to study politeness1,8. In particular, we aim to investigate how Grice’s Maxims of Politeness and Lakoff’s Politeness Rules can be applied to design machine behavior in human-robot interactions.
The goal of our study is to advance the robotics field by contributing to the development of more effective and socially acceptable machines. By designing two comic strips (Figures 7 and 8) that depict a robot behaving politely and impolitely, we collected data from participants who were surveyed on their opinion and analysis of the robot. We hope to gain valuable insight into how politeness can impact human-robot interactions.
Results
Participants’ responses were analyzed to assess the impact of polite versus impolite robot behavior on human-robot interactions.
Quantitative Data
Independent-samples t-tests revealed significant differences between the mean scores for the polite and impolite robot conditions on cooperation, helpfulness, and the overall rating, suggesting that participants were sensitive to the impolite robot’s behavior and that the manipulation was successful.
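For readers who wish to reproduce this kind of analysis, an independent-samples t-test on two groups of Likert ratings can be sketched in Python as follows. The ratings, group sizes, and variable names below are illustrative assumptions, not the study’s data:

```python
from math import sqrt
from statistics import mean, variance

def independent_t(a, b):
    """Student's two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(pooled * (1 / na + 1 / nb))

# Hypothetical 1-10 Likert cooperation ratings (NOT the study's data):
polite = [9, 8, 10, 9, 7, 9]       # polite condition, n = 6
impolite = [3, 4, 2, 5, 3, 4, 2]   # impolite condition, n = 7

t = independent_t(polite, impolite)
# With df = 11, the two-tailed critical value at alpha = .05 is about 2.201,
# so |t| above that threshold indicates a significant group difference.
print(f"t({len(polite) + len(impolite) - 2}) = {t:.2f}")
```

In practice, a library routine such as `scipy.stats.ttest_ind` would also report the p-value directly; the hand-rolled version above only shows the mechanics.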
Condition 1: Polite Robot
We examined participants’ ratings of the polite robot’s behavior on a 10-point Likert scale ranging from 1 (performing extremely poorly) to 10 (performing extremely well) across cooperation, helpfulness, intrusiveness, and demand on the person in the scene, along with an overall rating. These ratings indicate that participants perceived the polite robot as behaving in a courteous and socially acceptable manner (see Figures 1–4). The t-test results showed the polite robot to be rated significantly more cooperative and more helpful than the impolite robot, suggesting that participants enjoyed the company and friendliness of the polite robot.
Condition 2: Impolite Robot
Again, we examined participants’ ratings of the impolite robot’s behavior on the same 10-point Likert scale, covering cooperation, helpfulness, intrusiveness, and demand on the person in the scene, along with an overall rating. These ratings indicate that participants perceived the impolite robot as behaving in a rude and disrespectful manner, as reinforced by the t-test results showing the impolite robot to be rated significantly less cooperative and less helpful. Interestingly, the t-tests for demand and intrusiveness did not reach significance, meaning the two robots were rated about the same on these measures in the polite and impolite scenarios.
Qualitative Data
Next, we analyzed participants’ responses to open-ended questions about their perception of the robot and the interaction. Qualitative analysis of the responses revealed that participants perceived the polite robot as friendly and “human-like” (P2, P6) as well as “nice” (P3) or “kind” (P2), while they perceived the impolite robot as “rude,” “annoying,” and “emotionless.” Some participants even reported feeling agitated by the impolite robot. See Figures 5 and 6 for a visualization of these data.
Finally, we examined participants’ willingness to interact with the robot again in the future. This is important to evaluate because in many situations in which humans and robots interact – such as checking out at a supermarket or convenience store, a robot teaching a student or classroom, or a robot driving humans to their destination – it is very likely that people would interact with that robot again. People could be less inclined to engage with an activity, class, or brand when certain robots are present, and it is important to keep this in mind when designing robots. Our results showed that every participant indicated they would be willing to interact with the polite robot again, whereas only 1 out of 7 participants was willing to interact with the impolite robot again.
At a Glance
Taken together, these results suggest that polite robot behavior positively influences human-robot interactions, leading to more positive perceptions of the robot and a greater willingness to interact with the robot in the future. Conversely, impolite robot behavior has negative consequences for human-robot interactions, leading to negative perceptions of the robot and lower willingness to interact with the robot in the future. There were no correlations between participants’ demographics and their responses. An anonymized table of the participants’ demographics can be found in Table 1.
Discussion
The results of our study suggest that politeness is an important factor in shaping human-robot interactions. We found that the polite robot was perceived as more cooperative and helpful than the impolite robot. These findings are in line with previous research suggesting that people prefer to interact with robots that display courteous behavior8,9 (studies evaluating participants’ reactions to different “levels” of politeness in a robot). In the article discussed in the introduction7, many people were “floored by the extreme hostility”; one said, “it was an extremely disturbing experience … I actually couldn’t sleep last night because I was thinking about this”. In comparison, our participants, while “agitated” and “annoyed” by the robot’s behavior, were not as offended by the experience, likely because the robot’s actions were not directed at the viewer. This might be something to examine in a follow-up study.
Implications for Design
Our study has several implications for the design of robots. First, it highlights the importance of considering social norms and expectations when designing the behavior of robots. Second, it suggests that incorporating politeness into the behavior of robots could improve their acceptance and effectiveness in human-robot interactions. Future research could explore how other aspects of social behavior, such as humor or sarcasm, could impact human-robot interactions.
Limitations and Future Work
There were some limitations to our study regarding stimulus and cultural considerations. One limitation is that our study was conducted using a visual stimulus, rather than a real-life interaction with a robot. This may limit the generalizability of our findings to real-life scenarios. While we attempted to ensure that both versions of the visual were unbiased, it is possible that there were subtle differences between the two versions that impacted our results. Future research could aim to address this limitation by using a more rigorous process for creating unbiased stimuli.
Additionally, we cannot rule out the possibility that our participants came from different cultures and might have different understandings of what politeness is and what robots are supposed to do. It is also possible that participants were biased in their responses because they knew they were participating in a study on politeness in human-robot interactions, making them more sensitive to politeness. Furthermore, since participants were recruited locally and were of roughly the same age, other age groups or regional cultures could have different opinions on the politeness factor in robots – as seen in Kumar et al.’s study8, where older age groups were less able to differentiate between politeness levels. Finally, due to our small sample size, there may be a significant margin for error. Future research could aim to replicate our study using real-life interactions with robots and a larger sample size to increase the validity of these findings.
Despite these limitations, our study provides valuable insights into the role of politeness in human-robot interactions. Future research could build on these findings by exploring how different aspects of politeness, such as indirectness or positive politeness (i.e., making the hearer feel good about themselves)10, impact human-robot interactions. Additionally, we plan to create a real-life enactment of the polite and impolite robots and compare the results with the data from this study. This could provide further insights into the effectiveness of polite versus impolite robots in real-life scenarios.
Methods
In order to investigate the impact of robot politeness in human-robot interactions, we conducted a between-subjects experiment that allowed us to compare the responses of two groups of participants. We chose to use two different groups of participants to minimize potential bias. If one participant saw two pieces of stimuli, the first piece could influence the participant’s opinion on the second piece.
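A between-subjects split of this kind can be sketched as follows; the participant IDs, the fixed seed, and the 15/15 split below are illustrative assumptions:

```python
import random

# Sketch of a between-subjects assignment (participant IDs are illustrative).
random.seed(7)  # fixed seed so the split is reproducible
participants = [f"P{i}" for i in range(1, 31)]  # 30 recruited participants
random.shuffle(participants)

# Each participant sees exactly one stimulus: the polite or the impolite comic.
conditions = {
    "polite": participants[:15],
    "impolite": participants[15:],
}
```

Random assignment ensures neither group systematically differs before seeing the stimulus, so differences in ratings can be attributed to the manipulation.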
Demographics
To recruit participants, we promoted the study on various social media platforms. A total of 30 participants were enrolled, with 15 assigned to the polite robot condition and 15 to the impolite robot condition. In total, we received 13 completed responses (6 from the polite condition and 7 from the impolite condition). All participants gave written consent to participate in this study.
Task Description
In the experiment, we created a comic strip (Figures 7 and 8) that depicts a human-robot interaction in which the robot’s behavior was either polite or impolite. Our scene is a line at the Department of Motor Vehicles (DMV), in which a human is waiting behind a robot. First, the human asks the robot a question and the robot responds. The human then needs to use the restroom and asks if the robot can hold their place in line. To ensure that our experiment is based on well-established principles of politeness, we used Grice’s Maxims of Politeness and Lakoff’s Politeness Rules to script the robot’s behavior. The polite robot behaves in a courteous manner, while the impolite robot intentionally breaks common courtesy rules.
The principles our robot followed or broke in our comic include the “Maxim of Quantity” and “Don’t impose.” In the second frame of the comic, the human asks the robot a question. In the impolite comic (Figure 8), our robot is stubborn and directs the human towards a sign, whereas in the polite comic (Figure 7), our robot is happy to answer the question and asks the human if he needs any more assistance. We also included the principle “Make the receiver feel good.” In the impolite comic, the human has been waiting in line and the robot refuses to hold their place, disappointing the human, while in the polite comic, the human is happy to see a cooperative robot.
Measures
To measure participants’ reactions to the two types of robot behavior, we used self-report surveys. The surveys asked participants to rate their perception and analysis of the robot’s behavior and to give their opinion on the interaction. The open-ended questions included “What is your impression of the robot in the scene?” and “What would you do, as the human, in the situation described in the scene?” We ended the survey with close-ended Likert-scale questions, including “Rate the robot from 1-10 in terms of intrusiveness” and “Rate the robot from 1-10 in terms of cooperation,” in which 1 was extremely negative and 10 was extremely positive. A full list of questions presented to the participants can be found in Table 2. The close-ended questions measured participants’ perception of the robot’s cooperation, helpfulness, intrusiveness, and demand on the subject in the scene. We chose these parameters because each describes a politeness principle from either Grice’s Maxims of Politeness2,3 or Lakoff’s Politeness Rules4. Our robot breaks the principles “Maxim of Quantity” and “Make the receiver feel good,” measured by cooperation and helpfulness, and the principle “Don’t impose,” measured by intrusiveness and demand on the human (the workload put onto the human in the scene). The close-ended questions also measured participants’ overall rating of the robot with the question, “Overall, rate the robot from 1-10.” We also surveyed the demographics of each participant (see Table 1) through open- and close-ended questions as appropriate, including age, gender, level of education, regional background, and ethnic background. These measures gave us an understanding of participants’ responses to the two types of robot behavior, helping us determine whether politeness plays a significant role in human-robot interaction.
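Aggregating the close-ended ratings into per-measure means for a condition, as a first step before group comparisons, can be sketched like this; the responses below are hypothetical, not the study’s data:

```python
from statistics import mean

# Hypothetical close-ended survey responses for one condition (NOT the
# study's data); each participant rates the robot 1-10 on every measure.
responses = [
    {"cooperation": 9, "helpfulness": 8, "intrusiveness": 3, "demand": 4, "overall": 9},
    {"cooperation": 8, "helpfulness": 9, "intrusiveness": 2, "demand": 3, "overall": 8},
    {"cooperation": 10, "helpfulness": 9, "intrusiveness": 4, "demand": 2, "overall": 9},
]

measures = ["cooperation", "helpfulness", "intrusiveness", "demand", "overall"]

# Mean rating per measure across all participants in this condition.
condition_means = {m: mean(r[m] for r in responses) for m in measures}
```

Computing these means separately for the polite and impolite groups yields the two samples that the between-group t-tests compare.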
Our findings showed no correlations between participants’ demographics and their responses.
Conclusion
In this study, we investigated the role of politeness in human-robot interactions by designing a between-subjects experiment that manipulated the language and mannerisms of a robot to exhibit either polite or impolite behavior. Our results suggest that politeness does indeed play a crucial role in shaping human-robot interactions, as participants rated the polite robot significantly better in terms of cooperation, helpfulness, and overall satisfaction compared to the impolite robot.
These findings provide designers with stepping stones for the design and development of robots that interact with humans, as they suggest that incorporating polite behavior into robots can lead to more positive user experiences. However, there are some limitations to this study, such as the small sample size and potential biases in the selection of participants and in the visual representation of the robot.
Future work can build on these findings by conducting larger-scale experiments with more varied scenarios, as well as exploring the impact of different cultural and contextual factors on the effectiveness of polite behavior in human-robot interactions. Additionally, creating a real-life enactment of the robot’s behavior and comparing the results with the data from this study can provide further insight into the effectiveness of polite behavior in human-robot interactions.
Overall, our study supports the hypothesis that politeness plays a crucial role in human-robot interactions, and our findings can contribute to the development of more effective and socially acceptable robots in the future.
- B. Reeves, C. Nass. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. 29-36 (1996).
- H. P. Grice. Logic and Conversation. Speech Acts. 3, 41-58 (1975).
- H. P. Grice. Studies in the Way of Words. (1989).
- R. Lakoff. Language and woman’s place. Language in Society. 2(1), 45-80 (1973).
- N. Friedman, D. Goedicke, V. Zhang, D. Rivkin, M. Jenkin, Z. Degutyte, A. Astell, X. Liu, G. Dudek. Out of My Way! Exploring Different Modalities for Robots to Ask People to Move Out of the Way. “Active Vision and Perception in Human(-Robot) Collaboration,” RO-MAN (2020).
- E. Cucco, M. Fisher, L. Dennis, C. Dixon, M. Webster, B. Broecker, R. Williams, J. Collenette, K. Atkinson, K. Tuyls. Towards Robots for Social Engagement. International Joint Conference on Artificial Intelligence (2017).
- B. Allyn. Microsoft’s new AI chatbot has been saying some ‘crazy and unhinged things’. https://www.npr.org/2023/03/02/1159895892/ai-microsoft-bing-chatbot (2023).
- S. Kumar, E. Itzhak, S. Olatunji, V. Sarne-Fleischmann, N. Tractinsky, G. Nimrod, Y. Edan. Exploratory evaluation of politeness in human-robot interaction. arXiv:2103.08441 (2021).
- S. Kumar, E. Itzhak, Y. Edan, G. Nimrod, V. Sarne-Fleischmann, N. Tractinsky. Politeness in Human–Robot Interaction: A Multi-Experiment Study with Non-Humanoid Robots. International Journal of Social Robotics. 14, 1805-1820 (2022).
- P. Brown, S. C. Levinson. Politeness: Some Universals in Language Usage. 4 (1987).