We are living in a world where we constantly interact with voice assistants, chatbots, robots and virtual agents. We have robotic vacuum cleaners at home and encounter humanlike robots in public spaces that deliver food, sanitize shopping malls and serve as museum guides. We ask Alexa to play music, report weather conditions and place orders. Groundbreaking technologies like ChatGPT and related generative AI tools such as Midjourney and DALL-E have spurred us to revisit the meaning of “human creativity” and “human intelligence.”
What are some of the promises and perils of co-living with these technologies?
On one hand, those AI technologies play multiple roles in society. They can be extensions of human beings. For example, autistic children use telepresence robots to attend classes. During the COVID-19 pandemic, some doctors relied on telepresence robots to meet patients.
AI technologies can also mimic human beings in behavior and emotions. Humanoid social robots, for instance, have been used to provide humanlike hugs, serve as playmates for children and act as companions for elderly people. Moreover, AI technologies can serve as role models to better meet human needs. One of our studies indicated that when social robots are properly designed to exhibit pro-social behavior and present pro-social behavioral outcomes, human beings are likely to learn from the robots’ behavior. This finding can be applied to many environmental protection scenarios in which people need to learn from specialists about, for example, medical or electronic waste recycling.
On the other hand, ethics becomes an important theme when we co-live with these AI technologies. We want the technologies to be social and humanlike for communication purposes, but how much human likeness do we want them to have? We also expect technologies like our personal AI assistants to be loyal and fully devoted to our individual needs, but these technologies and their backstage algorithms have the capacity to engage with multiple users at the same time, as depicted in the plot of the movie “Her.” What attitudes shall we adopt toward these AI technologies’ multi-faceted communication ability?
Humans value a smooth and positive experience in interactions with robots. But robots need to be equipped with a variety of AI systems (e.g., facial recognition, speech recognition, gesture recognition, location tracking, emotion analysis) to capture users’ physiological and psychological states. How transparent should these technologies be during our interactions with them? How can we best protect users’ privacy while allowing these technologies to create positive experiences for us? How should companies, developers and researchers explain the use, ownership and impact of these AI technologies to users?
I continue to work with graduate students to explore these questions, seeking to understand the benefits, applications and potential risks of AI development.