People may be reluctant to give their personal information to artificial intelligence (AI) systems, even though the systems need it to provide more accurate and personalized services. A new study reveals, however, that the manner in which a system asks users for information can make a difference.

In the study, Penn State researchers report that users responded differently depending on whether an AI offered to help the user or asked the user for help, and that this response influenced whether users trusted the AI with their personal information. They added that these introductions from the AI could be designed both to increase users' trust and to raise their awareness of the importance of personal information.

As AIs become increasingly ubiquitous, developers need to create systems that can better relate to humans, said S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory.

"There's a need for us to re-think how AI systems talk to human users," said Sundar. "This has come to the surface because there are rising concerns about how AI systems are starting to take over our lives and know more about us than we realize. So, given these concerns, it may be better if we start to switch from the traditional dialogue scripts into a more collaborative, cooperative communication that acknowledges the agency of the user."

Here to help?

In fact, power users may be put off by the way AIs typically communicate with them, which can seem patronizing, said Sundar, who is also an affiliate of Penn State's Institute for Computational and Data Sciences (ICDS). For example, the researchers cite Facebook's request for birthday information so that its AI can provide an age-appropriate experience to its users.

"AIs seem to have a paternalistic attitude in the way they talk to the user - they seem to tell users they are here...